The unifying trait of analytic philosophy is the notion that the key to philosophy lies in the analysis of language. Where analytic philosophers differ is in naming the best method for language analysis.
Consider two classes of questions.
- Is there a God? What is the meaning of life? Is there an objective standard of right and wrong? Does a tree falling in a forest devoid of people make a noise? Am I currently awake or in a dream?
- How many passengers will there be on my bus up to New York on Friday? Why does water boil at 100 degrees Celsius? Is Phlogiston Theory true? Is Quantum Theory true?
Questions from (1) are lofty and perennial, the stuff of true philosophy, but those from (2) seem more down to earth and answerable. The question is, have we ever, indeed, could we ever make inroads into the philosophical questions, those from (1)? Or is it their nature to stagnate? An extreme optimist would say they are answerable, but in order to back this claim up he must either answer them or show how one might go about doing that. An extreme pessimist would say philosophical questions are inherently unanswerable—but then he must give a method to distinguish such questions from non-philosophical, answerable ones. And then there is the philosopher of the middle position who might see some philosophical questions as answerable and others as unanswerable. He too must give a method for distinguishing them, and beyond this answer or point the way to answering those philosophical questions he holds out hope for.
Running down the list of well-known philosophers, we might put Descartes and Plato (as he represents Socrates in the Republic) as extreme optimists. Hume and Kant (as of the first Critique) would count as extreme pessimists.
As for the analytic philosophers, they were extreme pessimists in one sense, namely that they felt only questions like those of (2) are answerable. But rather than saying the philosophical questions of (1) are unanswerable, they employed linguistic analysis to show that those questions are not even real.
That is, analytic philosophy altered the view of what philosophers ought to be doing: unmasking type (1) questions as pseudo-questions. Where the British Empiricists sought an empiricist theory of knowledge, the analytic philosophers sought an empiricist theory of meaning.
One can argue that Logical Positivism was the most ambitious attempt ever to make an empiricist theory of anything. This fact, along with the fact of its manifest failure, makes its story one of the great dramas of human culture.
Enter Moore and Russell
GE Moore and Bertrand Russell, while never belonging to the Logical Positivist movement, contributed greatly to its development in their regard for the sense-datum and their recourse to linguistic analysis. Importantly, neither Moore nor Russell thought that all philosophical questions can be banished by analysis.
In his 1925 article “A Defence of Common Sense”, Moore cast a jaded eye at some of the traditional problems of philosophy, those of the “external world” (is what appears to be out there really out there?) and of “other minds”, or “solipsism” (assuming there is an external world and there are the human bodies I perceive, how can I know those bodies have an inner life as I do?).
At the heart of Moore’s response to these problems is his enumeration of theories of the sense-datum, theories that will echo through the works of the Logical Positivists. Say I take a look at my hand. What is it that I see?
- Direct, or Naive Realism: the sense datum is nothing other than the surface of my hand. One problem here is that if you press one eye while looking, two images will appear, and it is highly doubtful that you have grown a third hand by pressing an eye.
- Representative Realism (a position we get from Locke): the sense datum relates to the surface of my hand indirectly. This relation is unanalyzable: when I get “hand” sense-data, I judge that there is some x that these data relate to, though I could never know exactly how.
- Phenomenalism (from Mill): the hand is a set of hypothetical conditions of having this sense datum, or the ongoing possibility of having this sense datum. Important for phenomenalism is that every physical fact could get unpacked as one or more psychological facts.
Without giving a conclusion, Moore leans toward Phenomenalism. But this is hard to square with his adamant defense of commonsense notions of the outer world and other selves. The outer world, as an existent, would have some physical facts that would not be psychological facts. Further, if every physical fact requires a psychological fact, then the self is a collection of psychological facts. But where would they be housed, if not in a commonsense self?
Theory of Descriptions
Russell’s Theory of Descriptions was in part a response to the Problem of Intentionals. Franz Brentano had noted that intentionality is a feature of psychology: to “believe” always means to “believe that p”. A problem arises in cases where we believe in things that are not real, such as unicorns, or are even self-contradictory (“round squares”). In a sense they exist, namely in the mind, but in a sense they do not, as they are not “out there”. Does my imaginary belief create a mental, if impossible, object? Or must this object correspond to something in reality?
Then there was the Problem of Identity. Some identity is manifestly tautological: a=a. But sometimes identity is discovered: the morning star is the evening star, Samuel Clemens is Mark Twain, etc. But how is this so? If all identity is self-identity, how is it possible to get new information out of an identity statement?
Russell felt that both the Problem of Intentionals and the Problem of Identity are mere illusions of language, to be dispelled with linguistic analysis. His solution lay in drawing a distinction between surface grammar and depth grammar. If you do it right, philosophical analysis conjures the depth grammar from the surface grammar and reveals the true structure of the sentence.
Consider two superficially similar arguments where only one is valid.
- Jack stole my car and Jack is six feet tall. Therefore, Jack stole my car and is six feet tall: valid.
- Someone stole my car and someone is six feet tall. Therefore, someone stole my car and is six feet tall: invalid.
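The invalidity of the second argument can be exhibited by a countermodel: let one person do the stealing and a different person be six feet tall. Here is a sketch in Python, with a domain and predicates invented for illustration:

```python
# A brute-force countermodel for the second argument. The domain and the
# predicates (who stole, who is tall) are invented for illustration.

domain = ["alice", "bob"]
stole = {"alice"}  # alice stole the car
tall = {"bob"}     # bob is six feet tall

# Premise: someone stole my car, and someone is six feet tall.
premise = any(x in stole for x in domain) and any(x in tall for x in domain)
# Conclusion: someone both stole my car and is six feet tall.
conclusion = any(x in stole and x in tall for x in domain)

print(premise)     # True
print(conclusion)  # False: the inference fails
```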
The problem lies in confusing names and descriptions. On Russell’s Theory of Descriptions, x cannot be a real name if you can question whether or not x exists. The object of a name is always an individual. If you can further analyze the name, then you have a “relative individual”, as in “the author of Waverley”:
A name is a simple symbol whose meaning is something that can only occur as a subject… And a simple symbol is one which has no parts that are symbols. Thus “Scott” is a simple symbol, because, though it has parts (namely, separate letters), these parts are not symbols. On the other hand, “the author of Waverley” is not a simple symbol, because the separate words that compose the phrase are parts which are symbols.
Sometimes what seems like a name is really a description, as in the case of “Homer”:
And so, when we ask whether Homer existed, we are using the word “Homer” as an abbreviated description: we may replace it by (say) “the author of the Iliad and the Odyssey.”
Importantly, you cannot ask after the existence of a thing named but only of a thing described: to name something entails that that thing exists. So a name with a seemingly non-existent bearer, like “phlogiston”, is really a description in disguise.
Applying this to some simple statements, we can assess their meaningfulness.
- “the so-and-so exists” (where “so-and-so” is a definite description): meaningful phrase
- “a exists” (where “a” is a logically proper name): meaningless phrase
- “a does not exist” (where “a” is a logically proper name): meaningless phrase
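On this theory, “the so-and-so exists” unpacks as “there is exactly one so-and-so”, and that unpacking can be checked mechanically. A sketch in Python, over a toy domain of my own invention:

```python
# A sketch of Russell's unpacking of "the F exists" as "there is exactly
# one F", evaluated over a toy domain of my own invention.

def the_F_exists(domain, F):
    """There is some x with F(x), and every y with F(y) is that very x."""
    return any(F(x) and all(not F(y) or y == x for y in domain)
               for x in domain)

people = ["scott", "dickens", "austen"]
wrote_waverley = lambda x: x == "scott"  # exactly one author of Waverley
wrote_iliad = lambda x: False            # no author of the Iliad in this domain

print(the_F_exists(people, wrote_waverley))  # True: "the author of Waverley exists"
print(the_F_exists(people, wrote_iliad))     # False
```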
In "What There Is", Russell claims we make our world up of:
- simples, or atomic facts, such as individuals and properties
- the familiar logical connectives (if, and, or, ...)
- the Theory of Descriptions, by which we resolve everything into simples and connectives.
In this Russell shows his affinity for Mill's Phenomenalist account of the sense-datum (Moore's third option). But strictly speaking Russell allies himself with William James' "neutral monism", which attempts to avoid the whole mental/physical dichotomy altogether. For all of his passion in rejecting the Hegelian influence on the philosophy of his student days, it would appear that the apple did not fall far from the tree.
But have you ever found yourself wondering whether the sense-datum itself is a unit? a natural unit? a discrete unit? an artificial unit? a plurality? an infinity?
Enter Carnap
"The meaning of a sentence is its method of verification": if our aim were to boil the Vienna Circle down to a single soundbite, this would have to be it. Devotees of science, and one could just as easily say scientism, the Logical Positivists, too, noted the tendency of metaphysics toward stagnation. In addressing this, Rudolf Carnap used Russell's basic analytic approach in distinguishing between surface grammar (he called it "grammatical syntax") and depth grammar ("logical syntax"). To this he added the notion of the elementary sentence, roughly what Russell called the "atomic proposition", which best isolates a term for purposes of verification. The elementary sentence for "stone" is "x is a stone"; for "unicorn", "x is a unicorn". Once the elementary sentence has been teased out, one is in a position to verify whether x is such a thing as to be a matter of experience or logic.
In his essay "The Elimination of Metaphysics Through Logical Analysis of Language" (part of which I have translated just for kicks), Carnap takes a stab at answering why meaningless terms arise in the first place. Well, he says, over the course of history terms once meaningful get coopted toward meaningless ends. Consider the term "God". Once a reference to entities thought to dwell in places like Mt. Olympus, the term wandered off to refer to things which could never enter experience, the things of metaphysics. Theologians use the term, too, vacillating between its mythical and metaphysical usages as convenient. Similarly "principle" has digressed from its original reference to temporal beginnings, in metaphysics being deployed to refer to extra-temporal priority.
And finally, there is the strange case of Dr. Heidegger, who presents us with a true rarity, a term that is meaningless from its very first appearance: "Das Nichts nichtet" ("the Nothing nothings"). There's no such word as "to nothing"! How can "nothing" be a verb? Let's forget for a moment the political charge behind this asperity (Carnap was a socialist, and Heidegger was soon to be a loud proponent of fascism). Like music, such expressions might give voice to emotion, to one's Lebenseinstellung (attitude toward life), and that's all well and good (or maybe not!). But let's call a spade a spade: there is nothing meaningful in Heidegger's phrase.
Another question for the idle-minded: Carnap, and for that matter Hume and pretty much every other empiricist to put pen to paper, assumes that experience must be in every case of the mundane, unregenerate, "five senses" kind. Is this an inductive error? "I, Rudolf Carnap, to this point in my life have never experienced anything beyond what I could easily explain in terms of touch, taste, smell, sight, and hearing. Therefore, nobody ever in the history of the world has experienced differently, nor can they ever". Think bats and mystics.
synthetic a priori?
Not only did the Logical Positivists want to eliminate empty references and linguistic novelties, they wanted to get rid of anything at all that was not synthetic a posteriori or analytic a priori. So they had to figure out what to do with mathematics and logic. On some accounts math and logic are synthetic a priori. But this was anathema to the Vienna Circle. As Hans Hahn put it, "The idea that thinking is an instrument for learning more about the world than has been observed, for acquiring knowledge of something that has absolute validity always and everywhere in the world, an instrument for grasping general laws of all being, seems to us wholly mystical" (from "Logic, Mathematics and Knowledge of Nature").
Hahn's own solution was to regard the a priori as true, not by necessity, but by convention. They are simply products of how we use our language, telling us nothing of extralinguistic reality. "...logic does not by any means treat of the totality of things, it does not treat of objects at all but only of our way of speaking about objects; logic is first generated by language. The certainty and universal validity, or better, the irrefutability of a proposition of logic derives just from the fact that it says nothing about objects of any kind." Bold indeed.
How does this play out, say, in the principles of contradiction and the excluded middle? Well, I am taught that some things are red, that nothing can be both red and not red (at the same time, in the same way, etc.), and that everything else is not red. But these are prescriptions; they give a "method of speaking about things". Thus though we cannot say of a cocker spaniel we have never seen whether it is black or brown, we can say that it is either black or brown. What's at stake here? Omniscience. If we did indeed know everything, we wouldn't need these roundabout manners of expression: "Were I omniscient, I could do without this 'or' and could say immediately 'it is brown'". Logic and mathematics, then, are not universal laws inscribed on every being. They are instead methods for coping with limited knowledge.
Yeah, so maybe this handles the 'or', but what about the 'and', 'not', 'true', 'false', 'if', 'then', ...? Could we have language at all without use of these? Wouldn't the world have to be a certain way for this to be the case?
In his "Old and New Logic", Carnap takes a different tack on the matter of a priori logic. His approach was to show that a priori logic is not conventional but tautological. The basic idea was that all meaningful statements can be expressed as truth-functional statements, statements whose every atomic component is completely determined as true or false.
Let the connective x stand for "and", "after", or "because" in the sentence "John left x Mary arrived". Now "John left and Mary arrived" is truth functional, as is "John left after Mary arrived", so long as you supplement the sentence with some further sentences involving the times John and Mary were seen. But "John left because Mary arrived" would not be truth-functional, unless you had a preternatural ability to read minds.
What you ultimately want to do is rework your complex truth-functional sentence into a truth table whose output is true in every case, as in the law of the excluded middle:

p | ~p | p∨~p
T | F  | T
F | T  | T
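For the idle-minded with a computer handy, the tautology check is mechanical: run the formula on every row of its truth table and see whether it comes out true in all cases. A sketch in Python (the helper names are mine):

```python
# Checking tautologies mechanically: evaluate the formula under every
# assignment of truth values and see whether it is true in all cases.

from itertools import product

def is_tautology(formula, n_vars):
    """True iff formula(*vals) holds under every assignment of truth values."""
    return all(formula(*vals) for vals in product([True, False], repeat=n_vars))

excluded_middle = lambda p: p or not p           # p ∨ ~p
non_contradiction = lambda p: not (p and not p)  # ~(p ∧ ~p)
contingent = lambda p, q: p or q                 # p ∨ q: not a tautology

print(is_tautology(excluded_middle, 1))    # True
print(is_tautology(non_contradiction, 1))  # True
print(is_tautology(contingent, 2))         # False
```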
Maybe you can get this to work for John and Mary, maybe you can't. But if you do, you are left facing the problematic definition of numbers inherited from the Principia Mathematica. How do we define the number 2? For Russell and Whitehead, 2 is not a property but a set. Specifically, it is the set of all two-fold sets. A set is two-fold if it contains two objects unequal to each other, and any further object in the set is equal to one or the other. Or (if I've got my predicate calculus in order):
∃x∃y (x ≠ y ∧ f(x) ∧ f(y) ∧ ∀z (f(z) → (z = x ∨ z = y)))
The problem here is that in combining symbolic logic with set theory you need to treat sets as having a reality over and above their members; you have to take the null set seriously. The disgruntled positivist thus catches an ugly whiff of platonism.
Let A={1,3}
Let B={2,3}
Let C={1,2}
Let D = {A, B, C}
Then D is the set of all two-element sets in this toy universe. D is "2" on the Russell and Whitehead definition.
Now let E={1,2,3}
Let F={E}
F is a set containing a single set, which has three members.
Now if we pretend there is no such thing as a set and mush the ultimate contents of D together, doing the same for F, we get {1,2,3} in both cases. But if we uphold the ontological status of the set, no such mushing is possible: D = {{1,3},{2,3},{1,2}} ≠ {{1,2,3}} = F.
In other words, F=D or F≠D, depending on whether you allow sets their own ontological reality. And spooky entities such as sets were exactly the sort of thing that set the Logical Positivists on their eliminative quest in the first place.
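The D-and-F example can be run literally in Python, with frozensets standing in for sets-as-objects:

```python
# The D-and-F example, run literally with Python frozensets. Treat sets as
# objects in their own right and D differs from F; flatten them to their
# ultimate members and the difference disappears.

A = frozenset({1, 3})
B = frozenset({2, 3})
C = frozenset({1, 2})
D = frozenset({A, B, C})  # a set of two-element sets: "2" in miniature
E = frozenset({1, 2, 3})
F = frozenset({E})        # a set containing one three-element set

print(D == F)  # False: as sets-of-sets they are distinct

def flatten(s):
    """Mush the ultimate contents together, ignoring set structure."""
    return frozenset(x for member in s for x in member)

print(flatten(D) == flatten(F))  # True: both mush down to {1, 2, 3}
```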
More musings for the idle-minded: isn’t this need for set theory obviated by the approach used in digital computers: half-adders and the rest? They do all numerical computations using nothing but 'and', 'or', 'not', 'true', and 'false', right?
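For what it's worth, here is the half-adder point sketched in Python, with nothing but boolean connectives over True and False and no sets in sight (function names are mine):

```python
# Binary addition built out of boolean connectives alone.

def xor(a, b):
    """Exclusive or, itself built from 'or', 'and', and 'not'."""
    return (a or b) and not (a and b)

def half_adder(a, b):
    """Add two bits; return (sum bit, carry bit)."""
    return xor(a, b), a and b

def full_adder(a, b, carry_in):
    """Chain two half-adders to add three bits."""
    s1, c1 = half_adder(a, b)
    s2, c2 = half_adder(s1, carry_in)
    return s2, c1 or c2

print(half_adder(True, True))        # (False, True): 1 + 1 = binary 10
print(full_adder(True, True, True))  # (True, True): 1 + 1 + 1 = binary 11
```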
Also, couldn't you replace the set with truth-functional descriptions of actions? E.g.: "If you were to grab any items x and y and find that they both exhibit property p, and if you then were to (put them down and) grab any other item z and find that it, too, exhibited p, and that it was the same as x or y, you could conclude that this collection of items exhibited two-ness." In this case mathematics would be empirical and conventional, along the lines of Hahn.
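That proposal can itself be given a truth-functional sketch in Python (the predicate and item names are my own inventions):

```python
# The quoted procedure rendered truth-functionally: a collection exhibits
# two-ness with respect to p if two distinct p-items exhaust the p-items.

def exhibits_twoness(items, p):
    """True iff exactly two distinct members of items exhibit property p."""
    havers = [x for x in items if p(x)]
    return (len(havers) >= 2
            and havers[0] != havers[1]
            and all(z == havers[0] or z == havers[1] for z in havers))

is_red = lambda x: x.startswith("red")

print(exhibits_twoness(["red_ball", "red_cube", "blue_top"], is_red))  # True
print(exhibits_twoness(["red_ball", "blue_top"], is_red))              # False
```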
The Sense-Datum Variety of Logical Positivism and Behaviorism
When Moritz Schlick took up the question of elimination in "Positivism and Realism", he proposed this question as a criterion of meaningfulness: "what difference does the truth of this proposition make in the world?"
As for Russell and Carnap, so for Schlick: all sentences bottom out in something nonverbal, or ostensive, and the set of those ostensions constitutes the meaning of the sentence. Accordingly, in order to find the meaning of a proposition, we must transform it by successive definitions until finally only such words occur in it as can no longer be defined, but whose meanings can only be directly pointed out.
Schlick's specific aim in this essay is to eliminate the problem of realism vs. idealism. Verificationism had been getting a reputation as being crypto-idealist, "Berkeley without God", they said. Schlick's response: where a representative realist would say that the things out there indirectly cause our sensations, the idealist says that there is in fact nothing out there beyond our sensations. But the truth of either of these assertions makes no difference in how we get on with things (and yes, you can still be a good parent even if you regard your little one as nothing but an ongoing possibility of sense-data), and so both are meaningless.
Returning to phenomenalism, we recall it holds that insofar as we know physical objects, we must know them as logical constructions out of sense-data. Yet thinking of these objects is not meaningless. For Schlick, phenomenalism is a thesis about language, not reality. To avoid slippage into realism, translate your woes away. "There is a table next door" becomes "if you were to go over and look, you would get such and such sense-data". Whatever difference there may be between reality and illusion is, for us, nothing but a matter of regularity of sense-patterns.
Even pain is not to be regarded as a reality, but a concept standing for a describable class of experiences. And the whole "Problem of the Inverted Spectrum" disintegrates: in principle there is no way of verifying whether you see red when I see green, and it makes utterly no difference to the world anyway. You react to what you call red the same way I react to what I call red. Outer behavior is all that matters.
And as long as we make the move toward behaviorism, the Problem of Solipsism goes away, too. The problem arises when we move toward the prescriptive: since none of us can know about another's conscious states (which assumes they have such states), each of us has to rely on our own conscious states. This prescription asserts both the universality of consciousness and the unknowability of universal consciousness. As a result of this uncomfortable situation, some Logical Positivists dropped the idea that protocol sentences are about sense-data, instead favoring them being about sense-objects.
yeah, I have a long ways to go before this post is done...