Tuesday, January 28, 2014

Jerry-built atheism


David Bentley Hart’s recent book The Experience of God has been getting some attention.  The highly esteemed William Carroll has an article on it over at Public Discourse.  As I noted in a recent post, the highly self-esteemed Jerry Coyne has been commenting on Hart’s book too, and in the classic Coyne style: First trash the book, then promise someday actually to read it.  But it turns out that was the second post Coyne had written ridiculing Hart’s book; the first is here.  So, by my count that’s at least 5100 words so far criticizing a book Coyne admits he has not read.  Since it’s Jerry Coyne, you know another shoe is sure to drop.  And so it does, three paragraphs into the more recent post:

[I]t’s also fun (and marginally profitable) to read and refute the arguments of theologians, for it’s only there that one can truly see intelligence so blatantly coopted and corrupted to prove what one has decided is true beforehand. [Emphasis added]

Well, no, Jerry, not only there.
 
Now, criticizing what a book says when you haven’t actually read it is no mean feat.  After all, you’re lacking some of the basic resources commonly thought to be useful in doing the job, such as knowledge of what the book says.  How does Coyne pull it off?  MacGyver style.  He jerry-builds a critique out of the metaphysical equivalent of rubber bands and paper clips.   Unfortunately, Coyne is more of a MacGruber than a MacGyver, so the result is (as it were) an explosion which brings the house down upon Coyne and his combox sidekicks while leaving Hart unscathed.

Where most reviewers would prepare to attack an author’s arguments by consulting his book to find out what they are, Coyne’s procedure is to consult his own hunches about what might be in the book.  (All part of not “prov[ing] what one has decided is true beforehand,” you see.)  Coyne writes:

[A reviewer says that] Hart has presented the Best Case for God, and we’ve all ignored it… 

But what, exactly do we mean by “the opposition’s strongest case”?  I can think of three ways to construe that:

1. The case that provides the strongest evidence for God’s existence.  This is the way scientists would settle an argument about existence claims: by adducing data. This category’s best argument for God used to be the Argument from Design, since there was no plausible scientific alternative to God’s creation of the marvelous “designoid” features of plants and animals. But Darwin put paid to that one…

2. The philosophical argument that is most tricky, or hardest to refute: in other words, the argument for God that has the greatest degree of sophistry.  This used to include the Ontological Arguments, which briefly stymied even Bertrand Russell. But we soon realized that “existence is not a quality”, and that, in fact, existence claims can be settled only by observation or testing, not by logic.

3. The argument that is irrefutable because it’s untestable.  Given that arguments in the first two categories are now untenable, people like Hart have proposed conceptions of God that are so nebulous that we can’t figure out what they mean.  And because they are not only obscure but don’t say anything about the nature of God that can be compared to the way the universe is, they can’t be refuted…

And this, in fact, is what Hart has apparently done in his new book…

End quote.  Now, it’s interesting that Coyne’s first two possibilities roughly correspond to the contemporary philosophical naturalist’s standard assumption that if you’re not doing natural science, then the only thing left for you to be doing is mere “conceptual analysis,” which (so the standard objection goes) can only ever capture how we think about reality, but not reality itself.  Traditional metaphysics, which purports to be neither of these things, would thus be ruled out as groundless at best and (as the logical positivists claimed) strictly meaningless at worst -- not too different from Coyne’s third option.

The thing is, this commonly parroted contemporary naturalist assumption is just a modern riff on Hume’s Fork, viz. the thesis that “all the objects of human reason or enquiry may naturally be divided into two kinds, to wit, Relations of Ideas, and Matters of Fact” (Hume, Enquiry IV.1).  And Hume’s Fork is notoriously self-refuting, since it is not itself either a conceptual truth (a matter of the “relations of ideas”) or empirically testable (a “matter of fact”).  Now, the contemporary naturalist’s variation is in exactly the same boat.  The claim that the only respectable options are natural science and conceptual analysis is itself neither a claim that is supported by natural science, nor something revealed by conceptual analysis.  (The naturalist might try to bluff his way past this difficulty by asserting that neuroscience or cognitive science supports his case, but if so you should call his bluff.  For neuroscience and cognitive science, when they touch on matters of metaphysical import, are rife with tendentious and unexamined metaphysical assumptions.  And insofar as such assumptions are naturalist assumptions, the naturalist merely begs the question in appealing to them.)

So, the naturalist unavoidably takes a third cognitive stance distinct from natural science or conceptual analysis, in the very act of denying that it can be taken.  That is to say, he takes a distinctively metaphysical stance.  And so does Coyne.  Like his more philosophically sophisticated fellow contemporary naturalists, Coyne supposes that if a claim isn’t (1) a proposition of natural science or (2) what Coyne calls a proposition of “logic,” which his example (the ontological argument) indicates he takes to involve a mere analysis of concepts with no purchase on objective reality, then it must be (3) “untestable,” “nebulous,” “obscure,” etc.  But this supposition is itself neither a proposition of type (1) nor of type (2), in which case, by Coyne’s criterion, his own position must be regarded as (3) “untestable,” “nebulous,” “obscure,” etc.

In fact traditional metaphysics is not “untestable,” “nebulous,” “obscure,” etc., and neither are the traditional arguments of natural theology that are built upon it.  Take, for example, the Aristotelian-Scholastic theory of actuality and potentiality.  It is motivated completely independently of any theological application, and has been worked out over the centuries in systematic detail.  It argues that neither a static Parmenidean conception of the material universe nor a radically dynamic Heraclitean conception can in principle be correct; that natural science would not in principle be possible if either extreme position were correct; and that the only way in principle that both extremes can be avoided is by acknowledging that actuality and potentiality (or “act and potency,” to use the traditional jargon) are both irreducible aspects of mind-independent reality. 

Now precisely because the theory concerns what must be presupposed by any possible natural science, it is not the sort of thing that can be overthrown by any scientific discovery.  It goes deeper than any possible scientific discovery.  But that does not make it “untestable.”  To be sure, it is not going to be refuted by observation and experiment -- precisely since it concerns what any possible observation and experiment must presuppose -- but it can be challenged in other ways.  Are the arguments given for it valid?  Are the distinctions it makes carefully drawn?  Are there alternative ways of dealing with the facts it claims that it alone can account for?  And so forth.  Defenders of the theory take such challenges seriously and offer responses to them.  And they offer arguments, not appeals to intuition, or faith, or ecclesiastical authority.  (I’ve defended the theory of actuality and potentiality in several places, such as in Chapter 2 of Aquinas.  An even more detailed exposition and defense will be available in my forthcoming book Scholastic Metaphysics: A Contemporary Introduction.  The book won’t be out until May, but Coyne will no doubt have a 2500 word refutation up by tomorrow.) 

Now the core Scholastic arguments for the existence of God rest on the theory of actuality and potentiality.  (I defend these arguments too in several places, such as Chapter 3 of Aquinas.  For a popular presentation of one of them, see this public lecture.)  Because that theory is concerned with what any possible natural science must presuppose, the theistic arguments built upon it, like the theory itself, cannot in principle be overthrown by natural science.  But, like that theory, that does not make the arguments “untestable.”  As with the theory of actuality and potentiality, we can ask various critical questions of the arguments -- Are the arguments valid?  Are their premises true?  Are there alternative ways of dealing with the facts they claim that they alone can account for?  Etc. -- and we can see how well the arguments can be defended against them.  At no point do the arguments appeal to intuition, faith, authority, etc.

New Atheist types will insist that there can be no rationally acceptable and testable arguments that are not empirical scientific arguments, but this just begs the question.  The Scholastic claims to have given such arguments, and to show that he is wrong, it does not suffice merely to stomp one’s feet and insist dogmatically that it can’t be done.  The critic has to show precisely where such arguments are in error -- exactly which premise or premises are false, or exactly where there is a fallacy committed in the reasoning.  (In Aquinas and in the public lecture just linked to, I show why the usual objections have no force.)  Moreover, as we have seen, the New Atheist refutes himself in claiming that only the methods of natural science are legitimate, for this assertion itself has no non-question-begging scientific justification.  It is merely one piece of metaphysics among others.  The difference between the New Atheist metaphysician and the Scholastic metaphysician is that the Scholastic knows that he is doing metaphysics and presents arguments for his metaphysical positions which are open to rational evaluation.  The New Atheist, by contrast, has no non-question-begging arguments for his naturalist metaphysics, but only shrill and dogmatic assertion.   He thinks that to show that he is rational and that his opponent is not, all he needs to do is loudly to yell “I am rational and you are not!” 

Coyne is, of course, evidently unfamiliar with any of the ideas referred to, even though they are at the heart of the Western theological tradition he ridicules.  He will dismiss them preemptively as “bafflegab,” “nebulous,” etc., though he has absolutely no non-question-begging reason for doing so.  He is, as I have pointed out before, exactly like the populist anti-science bigot who dismisses quantum mechanics, relativity theory, and the like merely because the terminology of such theories sounds odd to him and the conclusions seem counterintuitive.  Coyne would deny that the analogy is any good, but of course this just begs the question yet again.  What he needs to do is actually carefully to study the arguments of those he disagrees with, and then to show specifically where the arguments go wrong -- rather than engage in the usual New Atheist hand-waving about how they’re not worth the time, or that someone somewhere has already refuted them anyway, or that they’re motivated by wishful thinking, etc.  But that is exactly what he refuses to do.

Then again, Coyne assures us that he has in fact “spent several years reading theology.”  Really?  Apparently it was all in badly transliterated Etruscan, viewed through gauze bandages on a Kindle with a cracked and flickering screen.  While drunk.  And asleep.  How else to explain the following?  Of the claim that:

God is what grounds the existence of every contingent thing, making it possible, sustaining it through time, unifying it, giving it actuality. God is the condition of the possibility of anything existing at all.

Coyne, wearing his vast theological learning lightly, casually asserts:

Aquinas, Luther, Augustine: none of those people saw God in such a way.

I can’t top Kenny Bania’s reaction when reading this passage from Coyne.  Unlike Kenny, though, Jer, we’re not laughing with you.

454 comments:

  1. "It illustrates that 'being a really smart scientist' does not amount to 'being really good at philosophy'."

    So again why is that relevant, and did he say anything about meaning not existing?

    Some philosophers have said things that other philosophers have said are nonsensical, so some of them must be wrong, too. Does that remove all philosophy from consideration, as well?

    "Your argument here was that 'scientists are materialists and they probably don't deny the obvious therefore materialism must totally be compatible with the existence of meaning and intentionality and...'"

    No, that wasn't my argument.

    I've said, repeatedly, that there may be errors in materialism. But I don't believe that they're such *trivial* errors that you'd have to be an idiot not to realise. And this would be such an error.

    In this particular statement I'm not saying that materialism *is* consistent with meaning, I'm saying that materialism must *claim* to be consistent with meaning. That's different.

    "First, no, materialists don't claim that - not all of them. In fact some clearly believe that meaning has to be sacrificed. Your explanation for that is 'well they're philosophers I bet!'"

    Well? *Were* they philosophers? ;-)

    My answer to them is that all I can hear is meaningless noises coming out of their mouths, so I don't have to pay any attention to their arguments. Whatever they are.

    I think this is the sort of thing that the layman would say "This is getting silly!" to.

    "You seem mystified at the very suggestion that an incompatibility could exist here."

    I'm mystified at the idea that, after *repeatedly* saying I accept that there might be an incompatibility, you somehow have managed to get the impression that I've been saying the exact opposite.

    "I'm sure someone hasn't grown tired of that yet."

    It's a trite argument. Suppose the question is "What are words?" Somebody provides a definition/description. But whatever they say, the answer comes back: "But your definition itself consists of words, so is circular and invalid." How do you explain "words" without using them? How do you explain "meaning" without invoking it? But does that imply that there can *be* no explanation? That words do not consist merely of chains of letters or sounds strung together, but must have some magic and ineffable ingredient? No, I don't think so. I think that's just a straightforward confusion of levels of reference, and has no bearing on the matter.

    If you're tired of it, please don't feel any obligation to argue. I'm not out to annoy people. I'm not trying to convince anybody, and I'm not really expecting any of you to agree. What I'm looking for is for philosophically-literate people who disagree with me to provide the strongest arguments they can against it, so that I can see if there's any convincing reason to change my mind. I think I understand that argument and I don't find it convincing. So are there any more?

    This post is basically complaining that certain materialists aren't up to date with the latest philosophical arguments able to hit their positions out of the park - so here I am, trying to learn what those arguments are. Is that an unreasonable thing to do?

  2. Now Coyne has attacked Craig. It's funny how these types always seem to attack people whom they know won't even know about the attack, much less bother responding to it.

  3. "Now Coyne has attacked Craig. It's funny how these types always seem to attack people whom they know won't even know about the attack, much less bother responding to it."

    I just went and had a look. As usual, Coyne misunderstands the position he's attacking even when he has the quote right there in front of him.

  4. @ Glenn, you really are missing the whole skeptiko genre. Holding Alex Tsakiris to any kind of professional academic standard is going to end in disappointment. IMO it's partly an entertainment based show :)

    Anyhow back to the regular programming.

  5. Coyne is unbelievably inane. He takes the time to type "unless I’m misinterpreting Craig," yet apparently without looking at the quote to check whether or not he was misinterpreting Craig.

    And someone sent Coyne that article. So that makes two who are so far gone that they can't read.

  6. @John Quin:

    "Holding Alex Tsakiris to any kind of professional academic standard is going to end in disappointment."

    Which is precisely what's wrong with the "skeptiko genre" and why it's a mistake to take Pat Churchland's behavior (however questionable it may be otherwise) as indicating any knowledge on her part of a weakness in her position.

    Make no mistake, I think her position is weak, and indeed fatally so; I just don't think her behavior in that particular context is any basis for drawing any serious conclusions about her motivations. The primary purpose of that show is indeed entertainment—not intellectual enlightenment.

  7. " . . . apparently without looking at the quote to check whether or not he was misinterpreting Craig."

    Yeah. It's just amazing to me that, with the article right there in front of him and having just quoted the relevant bit, Coyne can actually ask questions like, "If the petitioner chooses not to pray, and thereby affects God's actions, did God know that in advance, too?"

    Moreover, Craig is very well known as an advocate of Molinism, and this isn't Coyne's first rodeo with him. Coyne should already know Craig's view here even without referring to that article.

  8. "No, it requires intentionality to be able to explain how non-intentionality explains intentionality. The intentionality itself doesn't require it, but the *explanation* does."

    If the explanation requires intentionality, then how do we know whether or not intentionality itself requires it?

  9. "If you included the frog in the equations, the startled frog would be part of the solution, too. I'm not sure how that helps, though."

    That sounds like there is no objective equation or solution in a physical causal chain. Whether or not something counts as an equation or solution is subject dependent.

  10. Re: Coyne's predictably uncharitable attack on Craig

    There's also the distinction between God "caring about a game" and God "caring enough about a game to do something about the game." Strictly speaking, he cares about everything, including the pencil on my desk. I suspect Craig may have been tip-toeing around the issue to avoid saying, point-blank, "Yes, God cares about the outcome, but in a very 'neutered' sense. Therefore, though theoretically "appropriate" and innocuous, practically speaking, praying for the outcome of a football game isn't likely to do much of anything."

    I part ways with Craig on a number of issues, particularly on the nature of God, but he has always struck me as a tough-minded, serious Christian, who actively dislikes the intellectually bankrupt Christianity that saturates our culture. I just cannot imagine him down on one knee and praying over the outcome of a football game.

  11. "If the explanation requires intentionality, then how do we know whether or not intentionality itself requires it?"

    By understanding the explanation.

    State transition systems exist independently, irrespective of whether there are any other systems. The relationships between states and operations form patterns irrespective of any other systems. If the patterns are the same, as sometimes happens, then one system can be used to predict the behaviour of the other. The patterns being the same does not require intentionality to be true, but the implication, that the one system can 'represent' the other well enough to predict its properties and behaviour, constitutes a form of intentionality - and can be a building block in more complex intentional systems. The intentionality results from non-intentionality.
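    For concreteness, here is a minimal sketch of what I mean by two systems sharing a pattern (the particular systems, state names, and the checking function are all just illustrative, not part of any real library): a counter modulo 3 and a rotating triangle realise the same transition structure, and we can check mechanically that a proposed correspondence between them preserves that structure.

```python
# Two state-transition systems, each a map from (state, operation) to next state.
# System A: a counter modulo 3 under the operation "inc".
sys_a = {("a0", "inc"): "a1", ("a1", "inc"): "a2", ("a2", "inc"): "a0"}

# System B: rotations of a triangle under "turn" -- a physically different
# realisation with, as it happens, the same pattern of relationships.
sys_b = {("r0", "turn"): "r120", ("r120", "turn"): "r240", ("r240", "turn"): "r0"}

def is_pattern_match(sys1, sys2, state_map, op_map):
    """True if the maps carry sys1's transitions onto sys2's: whenever sys1
    takes s to t under op, sys2 takes state_map[s] to state_map[t] under
    op_map[op]."""
    return all(
        sys2.get((state_map[s], op_map[op])) == state_map[t]
        for (s, op), t in sys1.items()
    )

state_map = {"a0": "r0", "a1": "r120", "a2": "r240"}
op_map = {"inc": "turn"}
print(is_pattern_match(sys_a, sys_b, state_map, op_map))  # True: same pattern
```

    On my view it is the *existence* of such a mapping, not anyone's having constructed or checked it, that matters; the code only makes the notion of "same pattern" concrete.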

    "That sounds like there is no objective equation or solution in a physical causal chain."

    They're different questions. If there's a frog, put a frog in the equations. If there's no frog, there's no frog in the equations either. In my original scenario there was no frog. I don't know where the frog came from. But it's not a problem it being there.

  12. "I suspect Craig may have been tip-toeing around the issue . . . "

    So do I, and I suspect he was doing so because he didn't want to give offense to people who do pray over such things; moreover, he did it carefully so that he didn't tell them anything that he didn't really think was true.

    I disagree with him in his denial of divine simplicity, and I'm not entirely persuaded by Molinism either. But he's most certainly not somebody who thinks God roots for the Broncos if only Bronco fans pray hard enough.

  13. @ Scott

    "Which is precisely what's wrong with the "skeptiko genre" and why it's a mistake to take Pat Churchland's behavior (however questionable it may be otherwise) as indicating any knowledge on her part of a weakness in her position."

    Right so from my perspective the whole thing was NEVER about Churchland's position.

    "Make no mistake, I think her position is weak, and indeed fatally so;"

    I agree with you. I disagree with both of their positions.

    "I just don't think her behavior in that particular context is any basis for drawing any serious conclusions about her motivations."

    Well, I think we certainly can infer contempt. IMO she should have toughed it out more or just said the interview was over and hung up.

    "The primary purpose of that show is indeed entertainment—not intellectual enlightenment."

    Well the host doesn't think so LOL but yes it is just a colourful show IMO. The mystery is how he gets the guests that he does.

  14. @John Quin:

    "IMO she should have toughed it out more . . . "

    Which, to his credit, is what Coyne did in a similar situation, according to Glenn's research.

  15. "Right so from my perspective the whole thing was NEVER about Churchland's position."

    I agree and made a similar point earlier in the thread.

  16. @Scott

    "IMO she should have toughed it out more . . . "

    Which, to his credit, is what Coyne did in a similar situation, according to Glenn's research.
    ----

    Not just Coyne; almost every guest on that show has to endure the steamrolling. As a listener you just get used to it, which is why in this interview you sort of go "hey, what happened? Why did she go all postal? Alex is just being Alex."

    Oh well perhaps this episode will scare some of the higher profile guests off in future.


  17. NiV writes,


    It depends what you mean by "be mapped". What I'm saying is that the intentionality exists if such a mapping is possible - i.e. the same pattern of behaviour exists in two different systems. But for some real world entity to actually explicitly construct a representation of the mapping, to know about it, explain it, or describe it, it requires a separate model representing the first model, the system being modelled, and the relationship between the two.


    Well, it was you who was using the term mapping.

    I'm trying to understand how, in a non-question-begging way, you can give a materialist explanation of how these patterns of behaviour or relationships properly account for intentionality.

  18. Since it seems you have not grasped the point, let me try to explain.

    What is sought is a reduction of meaning and intentionality to materialism. What that entails is a mapping of sentences that involve meaning and intentionality to sentences only involving physical processes. The objection raised here to your attempt to do so is that your reduction uses terms from the vocabulary it is trying to reduce, e.g. 'stands for,' 'mapping,' etc. and thus is not a valid reduction.

  19. Don Jindra: Science is based on a metaphysical view. There's no question about that. You say yours is also compatible. But it has not been necessary to invoke it.

    Oh, I see what's happening here. You believed certain modern scientists who said they weren't using formal and final causes and didn't notice that they were practically drowning in a sea of formal/final causes every time they opened their mouths to assure you how unnecessary they were.

    There's a really good website that explains the story behind this, it's called: edwardfeser.blogspot.com. You oughtta check it out some time.

  20. "State transition systems exist independently, irrespective of whether there are any other systems..."

    "Predict" is a problem here, because in order for a mind or structure to predict something, it has to be about something (have intentionality). Why is "represent" in quotes?

    "They're different questions. If there's a frog, put a frog in the equations. If there's no frog, there's no frog in the equations either. In my original scenario there was no frog. I don't know where the frog came from. But it's not a problem it being there."

    This still sounds subjective. Questions are asked by subjects, depending on what they are focusing on. In our universe, such scenarios exist, but they are preceded by many causal chains/networks (all the way to the Big Bang, maybe even before that) and they are succeeded by many causal events (perhaps until the death of universe or infinity). Where along this long causal network are the objective, mind independent boundaries that denote "this is the scenario?"

  21. What that entails is a mapping of sentences that involve meaning and intentionality to sentences only involving physical processes.

    In the basic sense of intent implying a motor response, this should qualify as a physical process that maps to an intent.

  22. No, Jerry, we don't know. But, please, do enlighten us on the prudence involved in not commenting on what you haven't read.

    Sigh.

    That sigh is comedy perfection. Golf clap.

  23. Step2,

    In no sense does what you linked to reduce intentionality to a physical process in a non-question-begging way. In no sense does it explain how what is only, allegedly, physical is about, or directed to, something else. It is, rather, just a neuroscientific explanation of learning that assumes intentionality. It may be valid within its own sphere, but it has little use for explaining how intentionality could be reduced to a physical process.

  24. Jeremy,

    "Well, it was you who was using the term mapping."

    Quite so. But it wasn't your usage of the word "mapped" I was querying, but the word "be".

    What I said is that the intentionality exists if such a mapping is *possible*. That's different from such a mapping actually being constructed in some mind. So when you said "be mapped", I suspected that you had mistaken the *possibility* of a mapping - that a map 'exists' only in a theoretical/Platonic sense - for the *physical instantiation* of such a mapping, which is not required.

    Now it will often be the case that such a map *has* been constructed, but this is a second layer of intentionality. You have a reference system. You have a simulator system that can be mapped to the first system's states and properties. And you have a third system that simulates both of the first two, with the property that the simulations are synchronised. Its states would correspond to sensory data (say) received from the first, matched with the corresponding internal state of the second. In the third system, operations on either half induce the corresponding change in the other half.

    Again, the mapping between the composite of the first two systems and the third system only has to be potential, not actual, for the intentionality to apply. Indeed, however many layers you apply, there will always be a topmost layer that remains unmapped. There *is* a relationship between the two, but you can't ever *know* that there is - you can only hope.

    Jinzang,

    "What that entails is a mapping of sentences that involve meaning and intentionality to sentences only involving physical processes."

    Perhaps you can help me out here. Just to illustrate what you're asking me to do, can you tell me how you would go about explaining the concept of "words" in non-word terms, without using words to do so?

    Anon 2

    "Where along this long causal network are the objective, mind independent boundaries that denote "this is the scenario?" "

    Ah, I see. There aren't any.

    The equations are another model that can be used to simulate a separate system. The equations are a set of symbols that when manipulated according to the rules of algebra (note that it is the combination of symbols *and rules* that defines their meaning) transform to states that correspond to what water does.

    This model allows you to explore a variety of scenarios involving partial information. You can set it to match the water's state at an instant of time and see how it evolves subsequently. You can set it to match the state of the water at a particular location and see how it evolves elsewhere. You can set it to match a desired state, and project it backwards in time to see what needs to be done to bring that state about. There are endless possibilities.

    But the model is not the reality. Reality is subject to certain constraints (conservation of mass and momentum) which the equations replicate symbolically, but our model is incomplete because our knowledge of the universe is incomplete. We don't *know* what happened all the way back to the big bang.

    Out in the real world, there are no boundaries. The 'scenario' extends from the beginning to the end of the universe, with no sharp edges. Inside the models, the scenario boundaries are not objective. It depends how much you know, and what question you're asking. So the answer to your question - either way - is, as I said, that there aren't any.

  25. "Perhaps you can help me out here. Just to illustrate what you're asking me to do, can you tell me how you would go about explaining the concept of "words" in non-word terms, without using words to do so?"

    There is nothing question-begging in using words to explain the concept of "words", since words and the concept of "words" are obviously two different things. You are attempting to explain intentionality using intentional terms, which is blatantly question-begging. As has been pointed out to you, again and again.

  26. "There is nothing question-begging in using words to explain the concept of "words", since words and the concept of "words" are obviously two different things."

    I would have thought so. But it seems that the same distinction between intentionality and explanations of intentionality escapes people.

    As I've said several times - intentionality occurs when state-transition systems have similar patterns of relationships between their states and operations. It's not a concept that depends in any way on intentionality. The systems (and the patterns) simply are what they are.

    But I can't *describe* or *specify* to you which particular sort of systems I'm talking about without using words that 'refer' to features of systems, which are necessarily intentional, and you keep on picking up on the use of these words. The systems are as they are whether I describe them to you or not, but I can no more *tell* you about them without using reference than I can explain what words are without using words.

    They're a part of the explanation, but not part of the mechanism.
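
NiV's criterion, as stated here, can be put in code. The sketch below is a reconstruction under my own reading of it -- the function names, the brute-force search over state-renamings, and the toy systems are illustrative assumptions, not anything NiV specified:

```python
from itertools import permutations

def preserves(mapping, trans_a, trans_b):
    # A renaming preserves the pattern if every transition in system A,
    # once relabeled, is also a transition in system B.
    return all((mapping[s], mapping[t]) in trans_b for (s, t) in trans_a)

def similar(states_a, trans_a, states_b, trans_b):
    # Brute-force search for any bijective state-renaming that
    # preserves the transition pattern.
    if len(states_a) != len(states_b):
        return False
    return any(
        preserves(dict(zip(states_a, perm)), trans_a, trans_b)
        for perm in permutations(states_b)
    )

# Two toy two-state systems that just flip back and forth:
print(similar(["on", "off"], {("on", "off"), ("off", "on")},
              ["heads", "tails"], {("heads", "tails"), ("tails", "heads")}))  # True
# A system with a self-loop does not match one without:
print(similar(["a", "b"], {("a", "a")}, ["x", "y"], {("x", "y")}))  # False
```

On this reading, "similar patterns of relationships" amounts to the existence of some transition-preserving renaming between the two systems.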

    ReplyDelete
  27. @NiV:

    "[I]ntentionality occurs when state-transition systems have similar patterns of relationships between their states and operations."

    . . . in which case the two systems "intend" each other, as well as any other systems (past, present, or future) to which they also happen to be similar, whether any of them "know" about each other or not.

    If that's what you want to use the word "intentionality" to mean, I guess nobody can stop you. But it's hard to imagine anything much further from what the rest of us mean by it.

    ReplyDelete
  28. @NiV:

    "But I can't *describe* or *specify* to you which particular sort of systems I'm talking about without using words that 'refer' to features of systems, which are necessarily intentional, and you keep on picking up on the use of these words. . . . They're a part of the explanation, but not part of the mechanism."

    The only reason your account seems to be even a remotely plausible explanation of intentionality is precisely that you use the language of intentionality in describing it. That's why people keep pointing out that you're begging the question.

    ReplyDelete
  29. @NiV:

    "What I said is that the intentionality exists if such a mapping is *possible*. That's different from such a mapping actually being constructed in some mind. So when you said 'be mapped', I suspected that you had misunderstood the *possibility* of a mapping - that a map 'exists' only in a theoretical/Platonic sense - for the *physical instantiation* of such a mapping, which is not required."

    If all your account requires is a "theoretical/Platonic" mapping between two systems, then so much the worse for your account. It gets further and further from intentionality the more you explain it.

    Apparently any two systems are now to be regarded as "intending" each other if any sort of mapping between them, no matter how complex or arbitrary, "exists" in a mathematical sense. In other words, everything "intends" pretty much everything else—or, at the very least, even if we try to be a little restrictive, quite a lot of things "intend" quite a lot of other things.

    In fact, arguably one of the few things that turns out not to "intend" is the human mind, at least whenever it refers to or thinks about a system (that of the real numbers, for example) too complex for it to grasp and/or represent in its entirety.

    ReplyDelete
  30. (Sorry, "the few things that turn out.")

    ReplyDelete
  31. But even in that case, we may still "intend" lots of things we didn't mean to be thinking about!

    ReplyDelete
  32. NiV,

    1. Here are two systems.

    System 1:

    100 a$ = "Flimfl"
    110 FOR a = 1 TO 6
    120 PRINT MID$(a$, a, 1);
    130 NEXT a
    140 END

    System 2:

    200 b$ = "ammery"
    210 FOR b = 1 TO 6
    220 PRINT MID$(b$, b, 1);
    230 NEXT b
    240 END

    2. While it is true that the output of each system differs -- System 1 outputs Flimfl, and System 2 outputs ammery -- it is also true that the behavior of each system is the same, for each system:

    a) defines a string with six characters;

    b) executes one loop with six iterations; and,

    c) extracts and displays a different single character from its defined string six times, with the ordinal location of each extracted character corresponding to the nth iteration executed.

    So -- walla! -- a simple yet instructive example of the production of intentionalty through the mapping of systems with identical behaviors.

    3. Then again, uh, maybe not.

    ReplyDelete
  33. Even more simply, consider a machine that repeatedly flips a light switch on and off, and another machine that repeatedly turns a coin back and forth between heads and tails. The two state transition systems are obviously the same[*], so guess what—these two systems intend each other!

    Not only that, but each of them also intends the system consisting of alternation between night and day! And the system consisting of me repeatedly raising and lowering my hand! And . . .

    And all of those systems intend each other too—indeed, they already intended each other before we even built our machines! Why, there's intentionality just sprouting up everywhere!

    ----

    [*] A set S of two states, that is, S = {p, q}, and a binary relation → on S that takes each state to the other: that is, {(p, q), (q, p)}, or p → q and q → p.
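
The footnote's claim can also be checked mechanically. A minimal sketch, assuming hypothetical state names of my own choosing for the switch and the coin:

```python
def relabel(transitions, mapping):
    # Apply a state-renaming to every transition in the set.
    return {(mapping[s], mapping[t]) for (s, t) in transitions}

# The abstract system from the footnote: S = {p, q}, with p -> q and q -> p.
abstract = {("p", "q"), ("q", "p")}

# The two machines, under state names invented for illustration:
switch = {("on", "off"), ("off", "on")}
coin = {("heads", "tails"), ("tails", "heads")}

# Explicit renamings carry the abstract system onto each machine:
print(relabel(abstract, {"p": "on", "q": "off"}) == switch)     # True
print(relabel(abstract, {"p": "heads", "q": "tails"}) == coin)  # True
```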

    ReplyDelete
  34. Not only that, but each of them also intends the system consisting of alternation between night and day!... Why, there's intentionality just sprouting up everywhere!

    Indeed.

    ReplyDelete
  35. @Glenn:

    Heh, nice. So we'll add "the system consisting of Tom Cruise repeatedly raising and lowering his hand in Knight and Day"—which apparently I intended even though I didn't mean it.

    ReplyDelete
  36. Hey, and what's even cooler is that (this version of "intentionality" being bidirectional) Tom Cruise intended me in that scene even though I've never even met him!

    ReplyDelete
  37. ". . . in which case the two systems "intend" each other, as well as any other systems (past, present, or future) to which they also happen to be similar, whether any of them "know" about each other or not."

    Exactly so!

    "If that's what you want to use the word "intentionality" to mean, I guess nobody can stop you. But it's hard to imagine anything much further from what the rest of us mean by it."

    What the rest of you mean by it is, I think, any of a set of different multi-layered composite systems built up from the basic building block. Models of models of models... But the brain is so good at it, you don't notice.

    "The only reason your account seems to be even a remotely plausible explanation of intentionality is precisely that you use the language of intentionality in describing it. That's why people keep pointing out that you're begging the question."

    Thanks. That's useful to know. It tells me that I need to work more on the core explanation. You're not meant to be convinced by the intentional description/context around it.

    "Apparently any two systems are now to be regarded as "intending" each other if any sort of mapping between them, no matter how complex or arbitrary, "exists" in a mathematical sense."

    Yes, that's the idea!

    "In other words, everything "intends" pretty much everything else—or, at the very least, even if we try to be a little restrictive, quite a lot of things "intend" quite a lot of other things."

    How many things can humans use as "a symbol" of something else? Is there anything that can't be?

    "In fact, arguably one of the few things that turns out not to "intend" is the human mind, at least whenever it refers to or thinks about a system (that of the real numbers, for example) too complex for it to grasp and/or represent in its entirety."

    The Real numbers are only a mental model - they exist nowhere except in the heads of humans (and computer algebra systems), and so there's a sense in which we do grasp them in their entirety. The Real numbers are an algebraic model for the spacetime continuum, and represent it to the extent that they do follow similar rules.

    There's no need to grasp any external system in its entirety - all we need is a big enough subset of it to be useful.

    "So -- walla! -- a simple yet instructive example of the production of intentionalty through the mapping of systems with identical behaviors."

    Not quite. The first two and last two letters output are the same in the first example but different in the second, so the behavior is not quite the same. But suppose the first string was "CHURCH" and the second was "edited". Then 'C' is a symbol meaning 'e', 'H' means 'd', and so on. The initial state of the first program (a=1,out='C') "means" the initial state of the second machine which is (a=1,out='e'). The string of letters 'CHURCH' can "mean" the string of letters 'edited'. Simple substitution ciphers spring to mind.
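
The substitution-cipher reading can be made concrete. A sketch, with a helper of my own devising: it builds the letter-to-letter map from the paired strings, and reports failure when one source letter would need two different images:

```python
def substitution_map(src, dst):
    # Pair the strings letter by letter; return the letter-to-letter
    # map, or None if the pairing is inconsistent (one source letter
    # would have to mean two different target letters).
    mapping = {}
    for a, b in zip(src, dst):
        if mapping.get(a, b) != b:
            return None
        mapping[a] = b
    return mapping

# "CHURCH" -> "edited" is consistent: the repeated C and H always
# land on e and d respectively.
print(substitution_map("CHURCH", "edited"))
# {'C': 'e', 'H': 'd', 'U': 'i', 'R': 't'}

# Taking case literally, "Flimfl" -> "ammery" fails: 'l' would have
# to mean both 'm' and 'y'.
print(substitution_map("Flimfl", "ammery"))  # None
```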

    "Not only that, but each of them also intends the system consisting of alternation between night and day! And the system consisting of me repeatedly raising and lowering my hand!"

    Computers represent everything with bits. If the only aspect of the system you want to represent is that it is two-state, then yes, all of those are adequate symbolic representations.

    "Why, there's intentionality just sprouting up everywhere!"

    Yes, it does that. :-)

    OK, I think you've got the idea, now.

    ReplyDelete
  38. @NiV:

    "If the only aspect of the system you want to represent is that it is two-state, then yes, all of those are adequate symbolic representations."

    In other words, a mind can take any of them to be a representation of the others. Without such a mind, you've got nothing but similarity—and really not even that, as none of those examples (even the bits in the computer) are "two-state systems" in and of themselves, apart from an act of abstraction performed by an intellect interested in (as you put it) representing aspects of the system.

    ReplyDelete
  39. "In other words, a mind can take any of them to be a representation of the others."

    Any of them *is* a representation of any of the others. A mind can take them so or not, as it chooses (if so, it would be implementing a second-level model such as I described above). But each of them still represents the others even if there are no other minds around to see them.

    ReplyDelete
  40. In other words, a mind can take any of them to be a representation of the others. Without such a mind, you've got nothing but similarity—and really not even that

    I remember Edward Feser himself talking about how there's nothing in nature which allows us to (if I remember his words right) section off this or that part of reality for reference.

    ReplyDelete
  41. But each of them still represents the others even if there are no other minds around to see them.

    So you're saying that intentionality is built into physics itself? This or that arrangement of matter intrinsically represents something else?

    ReplyDelete
  42. @Scott:

    "Apparently any two systems are now to be regarded as "intending" each other if any sort of mapping between them, no matter how complex or arbitrary, "exists" in a mathematical sense."

    I think this is not quite correct. According to NiV's latest clarifying comments, we can glean that: Mathematical objects do not exist outside the mind. There are maps between mathematical objects. But map is itself a mathematical concept. So it does not exist outside the mind. So the correct paraphrase seems to be that any two systems intend each other, if a map can be established between the mental models (which can be partial or incomplete or even wrong -- at least for some values of wrong) we have of the systems.

    This is such an obvious and compelling explanation of intentionality that my reaction is a Duh.

    ReplyDelete
  43. @NiV:

    "But each of them still represents the others even if there are no other minds around to see them."

    Which yet again raises the question of what in the world this sort of "representation" (which is really only resemblance or similarity) has to do with intentionality. But I think that horse is dead.

    ReplyDelete
  44. @grodrigues:

    "According to NiV's latest clarifying comments, we can glean that: Mathematical objects do not exist outside the mind."

    And yet according to his previous clarifying comments, it was entirely sufficient "that a map 'exists' only in a theoretical/Platonic sense."

    ReplyDelete
  45. "So you're saying that intentionality is built into physics itself? This or that arrangement of matter intrinsically represents something else?"

    Yes. Although not just arrangement, but operations/rules of behaviour, too.

    "But map is itself a mathematical concept. So it does not exist outside the mind."

    Computers can implement maps between symbols. So if this statement is true (and I don't think it is) then computers are minds. QED! :-)

    Seriously, I think this is doing the "the map is not the territory" thing again. The mathematical concept of a map is the "map". The objective similarity of real-world systems is the "territory". Just because we use mathematics to model it, doesn't mean mathematics is the only place it happens.

    ReplyDelete
  46. Yes. Although not just arrangement, but operations/rules of behaviour, too.

    Intrinsic intentionality. Whatever that is, it's not materialism or naturalism.

    ReplyDelete
  47. "Intrinsic intentionality. Whatever that is, it's not materialism or naturalism."

    Can you say why not? (Without begging the question.)

    ReplyDelete
  48. NiV: How do you explain "words" without using them? [...] That words do not consist merely of chains of letters or sounds strung together, but must have some magic and ineffable ingredient?

    But of course we ultimately do have to explain words without using them. Infants don't learn to speak by having definitions recited at them. However, that doesn't mean we end up with some ineffable "magic". Perhaps that's part of the problem here: you are thinking in terms of those two categories, so when people try to explain a third option, you hear it as a funny way of talking about "words" or else about "magic"... and we end up talking past each other.

    so here I am, trying to learn what those arguments are. Is that an unreasonable thing to do?

    Well, I imagine an unreasonable person would be long gone. Sometimes people are just on different wavelengths, and it's really hard to hit on the right concept or description that will make sense to the other side. Unfortunately, I do not have any clever ideas how to explain this issue in a different way from anyone else.... OK, just one comment that may or may not help:

    >"Why, there's intentionality just sprouting up everywhere!"
    OK, I think you've got the idea, now.


    We've got the idea — or Ideas... but we call the Ideas "forms". I think you are double-checking your answers and being confident that they are correct — because they are correct answers... only to the wrong question. Everyone else is talking about final causes, not formal. And final causes are forms too, but in a special mode. (But again, I'm not sure what the productive way to describe that distinction is.)

    ReplyDelete
  49. "Intrinsic intentionality. Whatever that is, it's not materialism or naturalism."

    Either that, or it's not intentionality. I'm betting on the latter, because NiV (at least sometimes) writes as though all he has in mind is "objective similarity."

    ReplyDelete
  50. First, let me channel my inner Grammar Nazi. It's voila, not walla. The French spell words funny.

    Second, if you really want a physicalist account of intentionality, you should have a look at behaviourism, where mental states are interpreted as dispositions to behaviour.

    ReplyDelete
  51. Well, now . . . technically that's your inner Orthography Nazi, and it's "voilà," with a grave accent on the a.

    And yes, "anal-retentive" does take a hyphen. Why do you ask? ;-)

    ReplyDelete
  52. NiV,

    I would have replied earlier, but I was otherwise occupied at the ocean -- smelling salt air, watching sea gulls and, of course, observing waves repeatedly advance onto and retreat from the beach.

    "So – [voila]! -- a simple yet instructive example of the production of intentionalty through the mapping of systems with identical behaviors."

    Not quite.


    I agree. Of course. See 3. in my relevant comment above.

    The first two and last two letters output are the same in the first example

    Since when is upper case 'F' the same as lower case 'f'?

    but different in the second, so the behavior is not quite the same.

    Except that the behavior is the same, as each system outputs the 1st, 2nd, 3rd, 4th, 5th and 6th characters of its respective string on the 1st, 2nd, 3rd, 4th, 5th and 6th iterations of its respective loop.

    (Only a little earlier in your reply you had responded with, Yes, that's the idea! to Scott's "Apparently any two systems are now to be regarded as "intending" each other if any sort of mapping between them, no matter how complex or arbitrary, "exists" in a mathematical sense." Now you seem to be saying, "No, that's not the idea!" But perhaps by "any sort of mapping between them" you intend some meaning other than that implied by "any sort of mapping between them".)

    ReplyDelete
  53. Oh well. If I can correct it once, I can correct it twice; s/b "voilà". ;)

    ReplyDelete
  54. " . . . observing waves repeatedly advance onto and retreat from the beach."

    0 . . . 1 . . . 0 . . . 1 . . .

    ReplyDelete
  55. @Glenn:

    "Since when is upper case 'F' the same as lower case 'f'?"

    I overlooked this when NiV first replied to your post, but it's a very good illustration of (gender-neutral-)his tendency to import actual intentionality into his reductive account without realizing it. The two letters are of course the same to minds that assign them the same meaning.

    ReplyDelete
  56. And of course if your first program is run twice, even the two lowercase fs are "the same" only to a mind that interprets them as letters. Physically, they're just ink (or laser) marks that have approximately the same shape and size.

    ReplyDelete
  57. " . . . observing waves repeatedly advance onto and retreat from the beach."

    0 . . . 1 . . . 0 . . . 1 . . .


    Exactly. But then I got distracted by a sea gull swooping by, and the next thing I knew it was like this:

    1 . . . 0 . . . 1 . . . 0 . . .



    And of course if your first program is run twice, even the two lowercase fs are "the same" only to a mind that interprets them as letters. Physically, they're just ink (or laser) marks that have approximately the same shape and size.

    Good point.

    ReplyDelete
  58. Assume the program is run twice, and all output is to the same sheet of paper. How might one be 100% certain that the marks themselves aren't supposed to mean anything, and that it is, in fact, the relative location of the marks to some edge of the paper which is intended to convey meaning (of some kind)?

    ReplyDelete
  59. @Glenn:

    "But then I got distracted by a sea gull swooping by, and the next thing I knew it was like this:

    1 . . . 0 . . . 1 . . . 0 . . . "

    That must have been disorienting. How fortunate for you, then, that the meanings of 0 and 1 are arbitrary rather than built into nature. Imagine if the intentionality had been intrinsic!

    But wait . . . that means there are two different mappings of the {0, 1} system onto the {in, out} system, one in which 0 corresponds to "in" and one in which it corresponds to "out." How are we to decide which one is the true "representation"?
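
That underdetermination is easy to exhibit in code. A sketch under my own hypothetical encodings of the two systems, showing that both pairings preserve the alternating pattern equally well:

```python
def maps_onto(mapping, trans_a, trans_b):
    # Relabel every transition in A and compare with B's transition set.
    return {(mapping[s], mapping[t]) for (s, t) in trans_a} == trans_b

bits = {(0, 1), (1, 0)}                # 0 -> 1 -> 0 -> ...
tide = {("in", "out"), ("out", "in")}  # waves advance and retreat

# Both of the two possible pairings preserve the pattern, so nothing
# in the pattern itself selects "the" representation:
print(maps_onto({0: "in", 1: "out"}, bits, tide))  # True
print(maps_onto({0: "out", 1: "in"}, bits, tide))  # True
```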

    "How might one be 100% certain that the marks themselves aren't supposed to mean anything, and that it is, in fact, the relative location of the marks to some edge of the paper which is intended to convey meaning (of some kind)?"

    Gasp! It's even worse than I thought! Your example seems to suggest that some sort of intentionality is required/presumed in order for something in the physical world to count as a "state transition system" at all.

    ReplyDelete
  60. @Glenn:

    "[P]erhaps by 'any sort of mapping between them' you intend some meaning other than that implied by 'any sort of mapping between them'."

    One of the interesting features of NiV's account of intentionality is that reference is always underdetermined and never unique—in contrast to the paradigmatic case in which I'm thinking about something specific, say my cat. By his own account, NiV's statement "means" all sorts of things, including quite a lot of things that he didn't have in mind at all but which his statement/thought somehow "resembles."

    ReplyDelete
  61. Coyne reviewed a blurb, and said so. Feser's characterization is wrong from the start.

    ReplyDelete
  62. Hmm. Almost as if there is a kind of asynchronous cognitive 'saccade' subtly going on in the background.

    ReplyDelete
  63. @Bodybuilder:

    "Coyne reviewed a blurb, and said so."

    No, Coyne reviewed two reviews. And he got even those wrong; neither of them says that Hart's book presents irrefutable arguments for classical theism. (Which is good, because Hart very clearly denies that he's doing anything of the sort. His purpose is expository.)

    And regardless, that's not what Coyne himself says he's doing. He says he's refuting the arguments of theologians.

    ReplyDelete
  64. Bodybuilder,

    If you want to defend Coyne here, fine. But do try using something a little more sophisticated than, "Coyne was relying on hearsay." That just doesn't seem to go very far in solving the problem.

    ReplyDelete
  65. By the way, I'm a bodybuilder too. I just don't [pats tummy] focus exclusively on muscle.

    ReplyDelete
  66. "(Only a little earlier in your reply you had responded with, Yes, that's the idea! to Scott's "Apparently any two systems are now to be regarded as "intending" each other if any sort of mapping between them, no matter how complex or arbitrary, "exists" in a mathematical sense." Now you seem to be saying, "No, that's not the idea!" But perhaps by "any sort of mapping between them" you intend some meaning other than that implied by "any sort of mapping between them".)"

    The mapping has to preserve the relationships between states and operations.

    Fair point on 'F' versus 'f' - just take the second letter and the last, then.

    "One of the interesting features of NiV's account of intentionality is that reference is always underdetermined and never unique—in contrast to the paradigmatic case in which I'm thinking about something specific, say my cat."

    Representations acquire more specific meaning by being included in wider operational networks, with more extensive and complicated patterns. "My cat" includes the word "my", which maps to real world features like the cat that lives in the same house as the speaker, eating food bought by the speaker, etc. that nail down one specific cat (although not literally, of course).

    ReplyDelete
  67. NiV,

    I don't see how intentionality plays any role in mapping reality. It seems to me that the nature of my 'map' must be circumstantial. The transient association of particles that is me happens to exist in a certain place during a certain time and the 'map' in my brain is a function of the relationship between "myself" and all the other arrangements of atoms at each time and place "I" occupy. Are you trying to say that intentionality just is this? It seems like Scott and Glenn are talking about intentionality, but you are talking about "intentionality".

    Also, are you not getting hung up on technicalities with Glenn's example? Scott posed the problem more simply, and I think you have yet to address it directly.

    ReplyDelete
  68. I'm feeling much better about my contributions to this thread since bodybuilder's post. :)

    ReplyDelete
  69. NiV:

    Representations acquire more specific meaning by being included in wider operational networks, with more extensive and complicated patterns. "My cat" includes the word "my", which maps to real world features like the cat that lives in the same house as the speaker, eating food bought by the speaker, etc. that nail down one specific cat (although not literally, of course).

    So, just so I understand you. Any physical state simultaneously refers to or intends any other physical state, because there is always a possible transition from the former to the latter. That means that any physical state is radically indeterminate, because it could mean or refer to anything at all. To avoid this indeterminacy, you postulate “wider operational networks”, which themselves would have to be physical states that transition to other physical states, which themselves are radically indeterminate. So, it seems that, on your account, there is always a radical indeterminacy in meaning and intentionality when all you have are physical states and the transitions between physical states. And that actually was a key premise in Ross’ argument for the immateriality of thought.

    Furthermore, the commenters here have objected to your account, because it claims to be able to account for intentionality from non-intentional physical states and transitions, and yet constantly seems to introduce intentionality into the non-intentional physical states and transitions, which begs the question. Even above, you mention “the speaker” and “the word ‘my’”, which refers to the speaker, which is an intentional being, and that seems to beg the question, yet again. For you to prove your point, you would have to describe how non-intentional physical states and transitions that do not involve formal or final causality somehow can interact in such a way as to result in intentionality. Therefore, intentional terms or concepts can only present themselves at the end of the account, and not at the beginning or the middle.

    ReplyDelete
  70. NiV:

    "Intrinsic intentionality. Whatever that is, it's not materialism or naturalism."

    Can you say why not? (Without begging the question.)


    Because intentionality presupposes final causality, which is supposed to be prohibited from any account of materialism or naturalism. The general idea is that modern science from the Enlightenment onward banished formal and final causality from its causal explanations, and only used efficient and material causes, i.e. mechanical explanations. Any teleology or final causality was simply a projection of the human mind upon the mechanisms of nature that intrinsically lacked any such teleological properties at all. It seems that you reject such a view, and would endorse the more Aristotelian position that teleology is an intrinsic part of the natural world, and that since intentionality is a kind of teleology, then there is no deep mystery about how intentionality arises in the world at all. That would probably bring you close to the views of the commenters on this website, but would place you at a great distance from most materialists and naturalists.

    ReplyDelete
  71. Bob:

    People mis-perceive things all the time; upon first glance it could have been a cat, or merely the shape of a cat. It could have been a lot of things. Correct meaning or interpretation of a stimulus could require additional brain activity apart from any particular pattern of neuronal firing and probably does. At this point, we just do not understand and maybe we never will.

    You are correct that a physical state in itself is radically indeterminate with regards to what it may refer to, and the broader context is essential to delimit the possibilities. However, if the broader context is itself only other physical states, which are themselves radically indeterminate, then you are just compounding the problem.

    However, I think you are really jumping the gun when claiming to have reached any valid conclusion about the actual nature of meanings, interpretations, or intentionality in general.

    I don’t think so. I think it’s pretty well established that physical states cannot determine meaning, because they could mean anything and there is nothing about the physical state itself that delimits the possible meanings at all.

    That said, you might be correct in the end, in that nothing about the physical state itself fixes which interpretation is correct. The problem is that at this point, it is nothing more than conjecture based on a lack of actual knowledge, or so it seems to me.

    Well, if I’m right, and there is nothing but such radically indeterminate physical states, then there is no determinate meaning at all. For there to be determinate meaning, there must be something other than the physical states that determines the meanings. This “something” would have to be an intentional mind of some kind that attributes determinate meaning to physical states, and this intentional mind could not be a physical entity, because then it itself would also be radically indeterminate.

    ReplyDelete
  72. @dguller

    You are correct that a physical state in itself is radically indeterminate with regards to what it may refer to, and the broader context is essential to delimit the possibilities. However, if the broader context is itself only other physical states, which are themselves radically indeterminate, then you are just compounding the problem.

    Not sure that follows. Think of a painting. Each splotch of paint by itself is just a splotch of paint, whereas many splotches of paint together become a coherent visualization.

    I don’t think so. I think it’s pretty well established that physical states cannot determine meaning, because they could mean anything and there is nothing about the physical state itself that delimits the possible meanings at all.

    Again as above, seems like you are ignoring the forest, for the trees.

    Well, if I’m right, and there is nothing but such radically indeterminate physical states, then there is no determinate meaning at all. For there to be determinate meaning, there must be something other than the physical states that determines the meanings. This “something” would have to be an intentional mind of some kind that attributes determinate meaning to physical states, and this intentional mind could not be a physical entity, because then it itself would also be radically indeterminate.

    And again, I do not think that this follows for the same reason I posited earlier in this post.

    In the end, it seems like you are really trying to force the validity of your assumption by the use of a faulty analogy.

    One pixel on your monitor probably doesn't tell you much, whereas millions of pixels will, though each of these is, well, still just a pixel.

    ReplyDelete
  73. Bob:

    Not sure that follows. Think of a painting. Each splotch of paint by itself is just a splotch of paint, whereas many splotches of paint together become a coherent visualization.

    Are you saying that a splotch of paint by itself is not “a coherent visualization”? Is it incoherent somehow? Or, are you saying that when combined with other splotches, then a new unity is formed, i.e. the painting? In that case, you are talking about the whole (i.e. painting) having emergent properties that are not present in each of the parts (i.e. splotches of paint). So, in the context of our discussion, you are saying that even though a particular physical state is radically indeterminate, then when one radically indeterminate physical state is combined with other radically indeterminate physical states to form a totality of some kind, then the totality can be determinate somehow as an emergent property of some kind.

    But then you have a few problems.

    First, you have to account for where this determinacy comes from, other than by saying “magic”.

    Second, if everything that exists is a physical state of some kind, then the totality itself would also have to be a physical state of some kind. And if all physical states are radically indeterminate, because they could always mean something else, then the totality itself would also be radically indeterminate. After all, if the totality is the context within which the part derives its meaning, then the totality itself can always be placed in another higher and more all-encompassing totality, which would be required to determine its meaning, and this process can proceed ad infinitum.

    Third, each part is itself a composite entity that is complex, and yet its complexity does not determine its meaning. So, adding complexity does not necessarily fix meaning, either.

    ReplyDelete
  74. @dguller

    Yes, that is what I am saying. So to the problems.

    1. This is a function of what a brain does. Exactly how is not understood.

    2. You seem to be committing a logical fallacy by saying what is true of the parts is necessarily true of the whole. Secondly, there is no need for an ad infinitum, as I doubt that any empirical knowledge can be established with absolute certainty, only to various levels of granularity.

    3. Meaning does not exist in a vacuum. Meaning is relative to context.

    ReplyDelete
  75. Bob:

    1. This is a function of what a brain does. Exactly how is not understood.

    First, how is this any different than just saying “magic”? Would you accept a theist saying that “God did it” as an adequate explanation?

    Second, the idea is that it is in principle impossible for a physical state to determine meaning, because for any physical state, there are always alternative possibilities of meaning that are compatible with the same physical state. So, it is not understood in the same way that a square triangle is not understood.

    2. You seem to be committing a logical fallacy by saying what is true of the parts is necessarily true of the whole.

    Say that you are trying to understand what physical state P1 means, and find that it could mean any of the following: M1, M2, M3, or M4. To figure out which meaning is appropriate for P1, you then appeal to another physical state P2. But P2 could mean M5, M6, M7, or M8. So, before you can use P2 to figure out what P1 means, you would first have to figure out what P2 means. And to do that, you would have to appeal to P3, which could mean M9, M10, M11, or M12. And to use P2 to figure out what P1 means, you first have to figure out what P3 means. And on and on it goes.

    Now, you could appeal to the system within which P1, P2 and P3 exist, and say that the system is what fixes meaning. But then you have to explain how the system can fix the meanings of P1, P2 and P3. You cannot say that P1’s position within the system is what fixes the meaning, because it is precisely P1’s position in the system that allows it to have the possible meanings M1 to M4, and the same goes for P2 and P3. So, what else are you appealing to in order to fix the meanings of P1, P2 and P3, if not to their respective positions within the system? You cannot appeal to one part of the system to fix the meaning of another part of the system, because the parts are just radically indeterminate physical states, and so you could not appeal to something radically indeterminate to determine something else that is radically indeterminate. So, again, how exactly does the totality of the system, which itself contains the possible range of meanings, fix one particular meaning of a physical state, without appealing to anything else other than the system of physical states itself?

    3. Meaning does not exist in a vacuum. Meaning is relative to context.

    Absolutely. And the question is how a system of physical states and their interrelations (i.e. the context) can possibly fix and determine the meaning of a single physical state. I believe that this is impossible, because you will either appeal to other parts, which would themselves require determination, or to the totality, which itself already contains the possible range of meanings, and thus cannot be used to determine which one is appropriate.
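[A toy computing illustration of the indeterminacy claim under discussion, not part of the original exchange: one and the same byte string, a fixed "physical state", admits several incompatible readings, and nothing intrinsic to the bytes selects among them. All names here are hypothetical.]

```python
# One fixed "physical state" (a byte string) admits many readings;
# nothing intrinsic to the bytes picks one reading out.
import struct

state = b"\x64\x6f\x67\x00"  # the same four bytes throughout

readings = {
    "ASCII text": state.rstrip(b"\x00").decode("ascii"),      # reads as 'dog'
    "little-endian uint32": int.from_bytes(state, "little"),  # reads as an integer
    "big-endian uint32": int.from_bytes(state, "big"),        # a different integer
    "IEEE-754 float32": struct.unpack("<f", state)[0],        # a (tiny) float
}

for label, value in readings.items():
    print(label, "->", value)
```

The bytes never change; only the interpretive scheme brought to them does, which is the point at issue between the two sides here.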

    ReplyDelete
  76. @NiV:

    "'My cat' includes the word 'my'[.]"

    "My cat" does, but my cat doesn't. And I needn't think the words in order to think of her.

    ReplyDelete
  77. @dguller

    First, how is this any different than just saying “magic”? Would you accept a theist saying that “God did it” as an adequate explanation?

    How does "not understood" equate to either "magic" or "God did it"?

    I am obviously unconvinced that it is, in principle, impossible for a collection of physical states to determine meaning.

    One pixel versus many pixels.

    So, again, how exactly does the totality of the system, which itself contains the possible range of meanings, fix one particular meaning of a physical state, without appealing to anything else other than the system of physical states itself?

    If P1 is analogous to a pixel, then any particular meaning would be difficult to come by, unless it was, perhaps, a “.” at the end of a sentence.

    However, if P1 were a bunch of pixels arranged in the shape of the word “dog”, the particular meaning becomes much easier to ascertain when viewed against the surrounding context.

    I believe that this is impossible, because you will either appeal to other parts, which would themselves require determination, or to the totality, which itself already contains the possible range of meanings, and thus cannot be used to determine which one is appropriate.

    To the last, if each of those parts are a coherent whole (a group of pixels versus a single pixel), then I do not believe that your objection follows. The context of all groups of pixels, that is their relationships to each other, is what determines the meaning of the whole. Again, as you agreed, Meaning is relative to context.

    ReplyDelete
  78. Bob:

    I am obviously unconvinced that it is, in principle, impossible for a collection of physical states to determine meaning.

    One pixel versus many pixels.


    A single physical state is a collection of physical states, and yet, being a collection of physical states still wasn’t enough to fix meaning. So, why do you think that just adding complexity somehow magically determines meaning?

    If P1 is analogous to a pixel, then any particular meaning would be difficult to come by, unless it was, perhaps, a . at the end of a sentence.

    P1 is any physical state within a totality of physical states. Its meaning would have to be determined by its place within the totality of physical states. And yet, that very fact is insufficient to determine meaning, because its particular place within the totality of physical states is consistent with a number of possible meanings. Therefore, neither the totality of physical states nor a subset of physical states within the totality is sufficient to determine meaning.

    However, if P1 were a bunch of pixels arranged in the shape of the word “dog”, the particular meaning becomes much easier to ascertain when viewed against the surrounding context.

    And what would the “surrounding context” consist in? If it consists in intentional minds, then you are just begging the question, because it would be the intentionality of the minds that determines meaning, which is precisely what commenters here have been arguing. If it does not consist in the intentionality of minds, then you should be able to fix the dark lines and shapes, “dog”, to mean something in particular without appealing to the intentionality of minds. You would have to appeal to other physical states or the totality of physical states, both of which are insufficient to fix the meaning of “dog”.

    To the last, if each of those parts are a coherent whole (a group of pixels versus a single pixel), then I do not believe that your objection follows. The context of all groups of pixels, that is their relationships to each other, is what determines the meaning of the whole. Again, as you agreed, Meaning is relative to context.

    Each pixel or group of pixels is consistent with a number of possible meanings. It is precisely their place within the totality that permits this range of meanings. How exactly does appealing to the same totality that supports a range of meanings allow you to delimit that range into a single meaning? It is the same totality in both cases, or is there a difference, and if so, then what exactly is the difference? It would be like saying that X justifies P1 having M1 and M2, and that X also delimits P1 to mean only M1. How can X do both?

    ReplyDelete
  79. Just to be clear: my point is that I don't need any "wider operational networks" in order to fix the specific meaning of a thought about my cat. Even if I'm a newborn baby and the cat is the first thing I ever see, that cat is the intentional object of my wordless perceptual judgment even though I don't yet know that it's a "cat."

    And it's probably about time we scotched this identification of "representation" with intentionality anyway. When I think about my cat, I'm thinking about my cat, not about some "representation" of her. In addition to its other faults, your account threatens to erect an impenetrable barrier between our minds and the objective reality of which we're trying to think.

    ReplyDelete
  80. @dguller


    Are you saying that the addition of information will not help to determine meaning? Or that, if it did from a physical perspective, it would need to be magic?

    Perhaps you can tell me exactly how the brain works so I can understand your rejection of the brain's capacity to do what you claim it "in principle" cannot do.

    It seems like you are saying that x is impossible because you can't understand how it works.

    Maybe you can tell me how your theory of mind works. Explain intentionality and meaning using your preferred method.

    ReplyDelete
  81. @dguller

    Each pixel or group of pixels is consistent with a number of possible meanings. It is precisely their place within the totality that permits this range of meanings.

    And the more information you add, in this context, the smaller the range of meanings becomes.
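[A small combinatorial sketch of this claim, not from the original exchange, using a hypothetical toy dictionary: each added constraint shrinks the candidate set of readings, though, notably, it need not shrink it to exactly one.]

```python
# Adding context (more fixed letters) narrows the candidate readings
# of a partial word, though it need not narrow them to a single one.
words = ["dog", "dig", "dug", "den", "bog", "big"]  # toy dictionary

def candidates(pattern):
    """Return words matching a pattern in which '?' marks an unknown letter."""
    return [w for w in words
            if len(w) == len(pattern)
            and all(p in ("?", c) for p, c in zip(pattern, w))]

print(candidates("???"))   # no constraints: all six words remain
print(candidates("d??"))   # one letter fixed: four candidates remain
print(candidates("d?g"))   # two letters fixed: three candidates remain
```

Note that even the tightest pattern here leaves more than one candidate, which is exactly the rejoinder dguller goes on to press.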

    ReplyDelete
  82. @Bob:

    "I am obviously unconvinced that it is, in principle, impossible for a collection of physical states to determine meaning.

    One pixel versus many pixels."

    I think you're trading on a bit of ambiguity here. The issue at hand isn't "meaning" but intentionality.

    Sure, given the existence of minds that mean, it's possible for a large number of pixels to determine a meaning even though one pixel doesn't. Maybe, as you say in one of your examples, a large group of pixels spells "dog."

    But a whole bunch of non-intentional physical states giving rise to intentionality through sheer accumulation of numbers is another matter. That's rather like trying to construct three-dimensional space out of dimensionless points without assuming that you already have three-dimensional space to put them in.

    If you think it can be done, it remains for you to show how. I think it can't.

    ReplyDelete
  83. Bob:

    Are you saying that the addition of information will not help to determine meaning?

    I am saying that no appeal solely to physical states is sufficient to determine meaning. You can certainly appeal to information that is not simply a physical state, but you also should recognize that appealing to information is already to appeal to something that is loaded with determinate intentionality, and thus would just beg the question.

    Or that, if it did from a physical perspective, it would need to be magic?

    Actually, it would be worse than magic. Magic is just something that occurs that is thought to be naturally impossible. What you are proposing is metaphysically impossible, and as likely as a square circle. To claim that something that cannot possibly determine meaning nonetheless determines meaning is to endorse an incoherent and contradictory premise.

    Perhaps you can tell me exactly how the brain works so I can understand your rejection of the brain's capacity to do what you claim it "in principle" cannot do.

    Well, since I’m arguing that the mental capacity in question is immaterial, I don’t see why an appeal to how the brain works is relevant. Sure, for this mental capacity to work optimally, the brain is required, but the brain, being a physical entity, is simply incapable of accounting for the mental capacity in question, i.e. the intentional focus upon a determinate meaning.

    It seems like you are saying that x is impossible because you can't understand how it works.

    I’m not making an argument from incredulity. This is a metaphysical argument involving the nature of physical entities and intentionality. Any given physical entity will be open to a number of possible meanings. The question is how one can narrow down that range of meanings to a single meaning. Any appeal to another physical state will not work, because the second physical state will itself be indeterminate between a range of meanings. Any appeal to a system of interconnected and interrelated physical states will not work, because it is precisely the physical state’s location within that system of physical states that accounts for the range of possible meanings at all, and thus that very same system cannot also be used to fix one possible meaning from that range. So, you must appeal to something else, which is neither a physical state nor a system of physical states.

    And notice that I’m already giving your account too much by assuming that there is a range of meanings for a single physical state at all. Technically speaking, there is no range of meanings at all. There is just the physical state. What accounts for the range of meanings is an intentional mind that can fix meaning, including a range of possible meanings. What makes a shape refer to some meaning is nothing intrinsic to the shape itself, but rather to how that shape can be used by a mind.

    ReplyDelete
  84. @Scott

    But a whole bunch of non-intentional physical states giving rise to intentionality through sheer accumulation of numbers is another matter. That's rather like trying to construct three-dimensional space out of dimensionless points without assuming that you already have three-dimensional space to put them in.

    I am not sure I understand your analogy.

    Can you define what you mean by intentionality?

    For instance, if an amoeba moves toward its dinner, is it doing so intentionally, by your definition?

    ReplyDelete
  85. @Bob:

    "Can you define what you mean by intentionality?"

    I mean what Brentano meant by it when he developed the concept (from Scholasticism) and used it to characterize the mental as distinguished from the physical. Any mental phenomenon has an "object" of sorts: we think about x, we judge that y, we desire z. "Intentionality" just means that sort of about-ness.

    ReplyDelete
  86. @dguller

    I am saying that no appeal solely to physical states is sufficient to determine meaning. You can certainly appeal to information that is not simply a physical state, but you also should recognize that appealing to information is already to appeal to something that is loaded with determinate intentionality, and thus would just beg the question.

    No, by information I am just referring to physical states. I am not sure what you mean by “loaded with determinate intentionality”. Seems like the reification of information (read: physical states) to me.

    If you think it is metaphysically impossible, you may need to check your metaphysics. I see nothing impossible about positing the workings of the brain as the source of determining meaning; in fact, this seems the most plausible explanation for it.

    I do think that your argument is still missing the forest for the trees. It is almost like you wish to draw an arrow from one state to the next and disregard the fact that the arrow probably goes both ways.

    What gives the range of meanings of x is the experience of x within various contexts. The range of meanings is reduced by the immediate context.

    ReplyDelete
  87. @Scott

    I mean what Brentano meant by it when he developed the concept (from Scholasticism) and used it to characterize the mental as distinguished from the physical. Any mental phenomenon has an "object" of sorts: we think about x, we judge that y, we desire z. "Intentionality" just means that sort of about-ness.

    You may need to unpack this more if you don't mind, because you lost me.

    ReplyDelete
  88. I should also perhaps clarify that the reason "meaning" is ambiguous is that we can speak of the "meaning" of a word or symbol without making explicit our assumption that there's a mind somewhere for whom the word or symbol has that meaning. The "meaning" of the word dog involves a sort of derived intentionality that depends on the original, underived intentionality of minds that use the word to mean/intend.

    ReplyDelete
  89. @Bob:

    "You may need to unpack this more if you don't mind, because you lost me."

    There's not a lot to unpack; the idea is very basic and doesn't appear to be reducible to anything else. When I think about my cat, my thought has an object: my cat. When I want to eat dinner, my desire has an object: dinner. Mental phenomena are characterized by this sort of about-ness, this having of objects.

    Brentano, as I mentioned, took intentionality in this sense to be what distinguishes mental from physical phenomena. Arguing that intentionality can be reduced to the purely physical requires that one show how that sort of about-ness can arise or emerge from phenomena that lack it.

    I think a failure to get clear on this point is at the root of a lot of the discussion in this thread.

    ReplyDelete
  90. @Bob:

    "No, by information I am just referring to physical states. I am not sure what you mean by 'loaded with determinate intentionality'. Seems like the reification of information (read: physical states) to me."

    What dguller means (and I hope he won't mind if I presume to speak for him here) is that regarding physical states as "information" presumes the existence of minds or intellects for whom such "information" can be . . . well, informative. Physical states in and of themselves aren't "about" anything; only to minds do they constitute information.

    That's precisely why an account of intentionality that treats physical states as information begs the question at issue. To call them "information" at all already loads them, as dguller says, with determinate intentionality.

    ReplyDelete
  91. Bob:

    And the more information you add, in this context, the smaller the range of meanings becomes.

    Yes, but the range never shrinks to one possible meaning. There is always a range of possible meanings when physical states are involved. Thus, the determinacy of thought becomes impossible if thought is a physical state.

    No, by information I am just referring to physical states. I am not sure what your mean by load with determinate intentionality. Seems like the reification of information (read physical states) to me.

    First, by “determinate intentionality”, I mean that one thing can be about or refer to a specific content. For example, when I think about cats, I am actually thinking about cats, and not about undetached cat parts, or spatiotemporal cat slices, or cat shapes, or animals that purr, and so on.

    Second, if information is synonymous with physical states, then if the argument is correct, then information is itself indeterminate, because physical states are indeterminate.

    If you think it is metaphysically impossible, you may need to check your metaphysics. I see nothing impossible about positing the workings of the brain as the source of determining meaning, in fact, this seem the most plausible explanation for it.

    But if the brain is a totality of physical states, and physical states are indeterminate, then the output of the brain itself would also have to be indeterminate. An appeal to the brain will only help you if you can establish that it is possible to take a single physical state with a range of possible meanings, and simply on the basis of other physical states delimit that range of possible meanings to a single meaning. Just saying that “the brain does it” will not work at all, especially if the brain is a physical entity, and the precise issue at hand is how physical entities can fix meaning.

    I do think that your argument is still missing the forest for the trees. It is almost like you wish to draw an arrow from one state to the next and disregard that fact that the arrow probably goes both ways.

    Or, as Scott said, it is like adding more points to a two-dimensional plane and expecting to reach the third dimension. It just cannot be done.

    ReplyDelete
  92. What gives the range of meanings of x is the experience of x within various contexts. The range of meanings is reduced by the immediate context.

    And the question is: what is it about the context that delimits the range of meanings? Take the shape, “Δ”. You agree that there is absolutely nothing about that shape that fixes a determinate meaning, because there is a range of possible meanings that are consistent with the same shape. You appeal to context to fix a determinate meaning, but what is the context? The context would have to be people with minds discussing a particular topic, which would then fix the meaning. For example: if that topic involved triangles, then the shape is probably a triangle; if that topic involved ordering pizza, then the shape is probably a pizza slice; if that topic involved physics, then the shape is probably the symbol for change; and so on. But notice that you are not appealing to a physical state consisting in something describable by physics or chemistry, but rather to linguistic subjects with minds saturated by intentionality, which is precisely what determines the meaning. It is not the context, per se, but rather the context involving minds that determines the meaning, and not any physical features of the shape itself, or any surrounding physical states.

    You yourself imply this position when you appeal to “the experience of x within various contexts”. There must be a mind that is experiencing x in different contexts in order for there to be “the experience of x within various contexts”, and so, once again, it is nothing about the physical state of x that fixes meaning, but rather a mind that fixes meaning. And if this mind is itself a physical state, then like all physical states, it would be open to a range of possible meanings, and be incapable of determining a single meaning at all. And if you want to say that the thought that p just is the set of neural pathways activated in a certain physical situation, then you have pretty insurmountable problems with a causal theory of reference (see http://edwardfeser.blogspot.ca/2011/02/putnam-on-causation-intentionality-and.html).

    ReplyDelete
  93. 1. NiV and Bob seem to be arguing for physicalism from opposite positions. To NiV, almost all physical states can map to almost all others; to Bob, by adding more context, the mapping/meaning narrows to one or a few.

    2. NiV's argument seems to be that the very indeterminacy it entails, is what intentionality is, or at least, is built into intentionality. This is part of the reason intentionalists won't buy it.

    3. Bob, I think, might argue that our uses of words are never really determinate. That is, even when we are speaking or thinking of an X, our actual concept of X is less than truly determinate to what X really is.

    I'm not trying to make a point here, beyond just clarifying what is being argued, in my own mind.

    ReplyDelete
  94. @dguller:

    "Take the shape, ''.  . . . [I]f [the] topic [under discussion] involved triangles, then the shape is probably a triangle[.]"

    Hmm, I'm pretty sure that even under those circumstances, Δ wouldn't "probably" be a triangle. ;-)

    ReplyDelete
  95. Scott:

    Jeez, you're such a square.

    ReplyDelete
  96. "So, just so I understand you. Any physical state simultaneously refers to or intends any other physical state, because there is always a possible transition from the former to the latter."

    No. Intentionality is a property not of states but of physical systems, which combine both states and operations. One system 'represents' another system if the pattern of behaviour of each system is similar, such that both the states and operations *could* be mapped from one onto the other in such a way as to preserve all (or at least most of) the relationships. The states of a representing system are the 'symbols' of a representation and the operations it can perform constitute the 'interpretation'. The interpreter has *many* states/symbols, connected in a broad network by the operations for transitioning from one to another, and it is the entire pattern formed by this context that constrains the possible meanings.

    Simple patterns are very much more common than particular intricate ones, so simple systems tend to be able to represent more things. A system with only one state and no operations can be applied to absolutely anything, but is a system empty of meaning. The information content is zero.
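[This picture of representation as a structure-preserving mapping between systems can be given a concrete sketch, a hypothetical toy formalization rather than anything from the exchange itself: a system is a set of states plus transitions, and one system "represents" another when its states can be mapped onto the other's so that every transition is preserved. The sketch also bears out the claim that a one-state, no-operation system maps onto anything, and so carries no information.]

```python
from itertools import product

# A system = (list of states, dict mapping (state, operation) -> next state).
def homomorphisms(sys_a, sys_b):
    """All maps from A's states to B's states that preserve every transition of A."""
    states_a, trans_a = sys_a
    states_b, trans_b = sys_b
    found = []
    for values in product(states_b, repeat=len(states_a)):
        h = dict(zip(states_a, values))
        if all(trans_b.get((h[s], op)) == h[t]
               for (s, op), t in trans_a.items()):
            found.append(h)
    return found

trivial = (["*"], {})                       # one state, no operations
rock = (["still"], {})                      # also structureless
clock = (["tick", "tock"],
         {("tick", "step"): "tock", ("tock", "step"): "tick"})

print(len(homomorphisms(trivial, clock)))   # 2 -- '*' can map to either state
print(len(homomorphisms(trivial, rock)))    # 1 -- maps trivially
print(len(homomorphisms(clock, rock)))      # 0 -- 'step' cannot be preserved
```

The structureless system "represents" everything and so means nothing, while the richer system is constrained in what it can map onto, which is the trade-off the paragraph above describes.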

    "Even above, you mention “the speaker” and “the word ‘my’”, which refers to the speaker, which is an intentional being, and that seems to beg the question, yet again."

    I used the word "my" only because that was the example I was given. The speaker, on the other hand, is objectively determinable. It's the system outputting the representation "My cat". Robots can speak. So to assume that speakers are necessarily intentional *would* be begging the question.

    "Because intentionality presupposes final causality, which is supposed to be prohibited from any account of materialism or naturalism."

    Does it? And is it?

    I thought the claim was that final causes involve a sort of 'pointing' that was *similar to* intentionality. The same sort of issues arise for each, but that they are not the same thing. Do you have a reference to the argument that shows intentionality presupposes final causality?

    Final cause explanations are rife in materialism/naturalism - they're very useful and intuitive. What's forbidden is that they be the *only* explanation. Most systems can be viewed in several different physically equivalent ways, each with its own advantages and disadvantages. Final cause arguments form an important part of the toolbox. But there's always another explanation that doesn't require them.

    Final causation is a simplified model, a shortcut for reasoning, conveniently describing the high-level behaviour of a particular class of system. It works perfectly well as a model, so I'd count it 'useful', but is not the most fundamental level of explanation. Teleology is a 'constructed' property of composite systems.

    ReplyDelete
    "'My cat' does, but my cat doesn't. And I needn't think the words in order to think of her."

    There are objective facts about your cat that correspond to ownership. And your wordless mental representation of her no doubt links to many other properties that rule out other cats.

    "And it's probably about time we scotched this identification of "representation" with intentionality anyway. When I think about my cat, I'm thinking about my cat, not about some "representation" of her."

    There are two distinct situations - one where you are simply thinking about your cat, when the pattern of neurons firing is the representation. The second is when you are thinking about the words "my cat" which is a second-level model that models both a representation of the words and a representation of your cat, and asserts a synchrony between the states and operations of the two sub-systems. You can do both.



    "1. NiV and Bob seem to be arguing for physicalism from opposite positions. To NiV, almost all physical states can map to almost all others; to Bob, by adding more context, the mapping/meaning narrows to one or a few."

    We're arguing from exactly the same position. I was very impressed with the way Bob expressed it so concisely. :-)

    ReplyDelete
  98. 1. "Robots can speak. So to assume that speakers are necessarily intentional *would* be begging the question."

    Whoa, whoa, whoa... and how'd those robots manage to do that? Nobody said that speakers are necessarily intentional; dguller said that the speaker in question was intending 'my cat'. So that I understand a little better, though: are you saying that "I" am not an intentional being, but that intentionality is just a term that refers to this operation that supervenes on all the lower, 'dumber' activities going on inside my skull?

    2. regarding representation:

    I want to agree with what Scott said here. This is a highly problematic term. Let us say I produce a painting and, since I am a poor artist, I convince my wife to pose for it (nothing scandalous, mind you). Upon completing the painting, I title it "Athena Going to War". You see it in my studio and remark what a lovely likeness it is of my wife. I say that it is not my wife. You insist that she must have modeled for it. I say that yes, she did, but the painting is of the goddess, not of my wife. At this point I would think it very strange if you kept insisting, "but it looks like your wife, IT looks exactly like her!" I have already agreed that it looks like her, but it is still not OF her. This seems to go as well for the firing pattern of neurons as representation.

    " Teleology is a 'constructed' property of composite systems."

    ah, and so is intentionality?

    ReplyDelete
  99. Robots can speak. So to assume that speakers are necessarily intentional *would* be begging the question.

    Au contraire, I think it is question-begging to say that robots can speak.

    It seems most sensible to me to construe "speak" as an intentional verb (otherwise, I imagine we are speaking past each other). In that case, we can say, "If robots speak, then robots are capable of intentionality." But to be able to produce sounds that are recognizable by a human or otherwise intelligent listener is not to speak. A tape recorder, I think we can agree, only "speaks" by analogy. The burden on the defender of robots is to show why the robot is more similar to a human speaker than to a tape recorder.

    ReplyDelete
  100. @NiV:

    "Robots can speak. So to assume that speakers are necessarily intentional *would* be begging the question."

    If all you mean is that robots can make sounds, then that's no big deal. So can squeaky wheels and falling rocks. Since you chose robots as your example, I have to assume you mean something more than this.

    And if what you mean is that robots can "speak" in the full and proper sense—that they can use words to mean, to intend, to refer to things—then yes, that does require intentionality, and there's nothing remotely question-begging about that statement.

    "[T]he pattern of neurons firing is the representation [of my cat.]"

    Your claim, then, I take it, is that in principle someone could examine the pattern of my neurons firing and determine, simply from that examination, that its states and the transitions between them were just like those of my cat, and thus conclude that I was thinking of my cat without my ever saying so.

    Suppose (as is surely possible in principle) there were a similar pattern of states and transitions in the molecular motions within a rock. Would that then mean that the rock was also thinking of my cat?

    ReplyDelete
  101. @NiV:

    I think Matt Sheean's point about representation is a very good one, and I look forward to your response.

    ReplyDelete
  102. NiV:

    Intentionality is a property not of states but of physical systems, which combine both states and operations. One system 'represents' another system if the pattern of behaviour of each system is similar, such that both the states and operations *could* be mapped from one onto the another in such a way as to preserve all (or at least, most of) the relationships. The states of a representing system are the 'symbols' of a representation and the operations it can perform constitute the 'interpretation'. The interpreter has *many* states/symbols, connected in a broad network by the operations for transitioning from one to another, and it is the entire pattern formed by this context that constrains the possible meanings.

    First, basing the intentionality upon similarity is problematic, because everything is similar to everything else in some way. So, you would still be left with the problem of having any state (or system) referring to every other state (or system), and thus compromising determinate content.

    Second, in some ways, I don’t think that you are too far off in your above account. What I am reading into your comment is that there must be something in the representational system that is isomorphic with something in the referential system, i.e. similar to it, and that this commonality is what grounds the intentionality of the systems towards one another. To an Aristotelian, this commonality would be the form. For example, what makes a mental representation of a cat about a cat is that the mental representation of a cat and a cat both share the same form, i.e. a feline nature.

    I thought the claim was that final causes involve a sort of 'pointing' that was *similar to* intentionality. The same sort of issues arise for each, but that they are not the same thing. Do you have a reference to the argument that shows intentionality presupposes final causality?

    The idea is that all things display teleology by virtue of having a nature that they are actively striving to express to its fullest. This is true even of mindless beings, but once you start talking about beings with minds, then intentionality comes into the picture, because intentionality is specifically about mental states referring to or directed towards particular contents. But intentionality piggy-backs upon the more basic teleology that exists throughout the natural world. It is only because more basic organisms are primordially directed towards certain outcomes, irrespective of whether they are aware of this fact or not, that higher cognitive functions can be about anything at all.

    Final cause explanations are rife in materialism/naturalism - they're very useful and intuitive. What's forbidden is that they be the *only* explanation. Most systems can be viewed in several different physically equivalent ways, each with its own advantages and disadvantages. Final cause arguments form an important part of the toolbox. But there's always another explanation that doesn't require them.

    I agree that final causes aren’t the only explanation, but they are an essential part of any explanation. Aristotle would say that formal and final causality are different aspects of the same thing, i.e. a nature that is striving to express itself to the fullest, and that even efficient causality is simply the actualization of formal and final causality by an external agent of some kind. Ultimately, formal, final and efficient causality are different sides of the same underlying reality. Only material causes stand alone as passive receivers of the activity of the formal-final-efficient cause(s).

    ReplyDelete
  103. @NiV:

    "Teleology is a 'constructed' property of composite systems."

    Interesting. Who or what "constructs" it? (Be sure not to include anything teleological/purposeful/intentional in your account of this construction.)

    ReplyDelete
  104. @dguller:

    "To an Aristotelian, this commonality would be the form."

    That's right, except that the Aristotelian account is utterly at odds with materialism: an intellect that receives the form of a cat without becoming a cat must be immaterial.

    It's also, I think, at odds with representationalism: the form of the cat that my intellect receives doesn't "represent" but is formally identical with the form of the cat itself.

    ReplyDelete
  105. (Incidentally, that's why I don't care for the "correspondence theory of truth"; it seems to me to entail some sort of representationalism and thereby to inherit a host of problems, not least that of imprisoning us within an "iron ring of ideas." The term I prefer is "conformity," which I've often seen used in A-T writings without, however, any careful or systematic attempts to distinguish it from correspondence.)

    ReplyDelete
  106. "Nobody said that speakers are necessarily intentional"

    "It seems most sensible to me to construe "speak" as an intentional verb"

    Really? :-)

    I interpreted it as a means to convey information, like a speak-your-weight machine. I don't make any assumptions about the intentionality of the source.

    "are you saying that "I" am not an intentional being"

    No, of course not. But I would suggest that there is an objective way to connect the words to the speaker of the words that does not depend on the speaker's intentionality. I only used the term 'speaker' as a simple way to refer to the origin of the indexical's coordinate system.

    When you read the instruction "Press here", you don't generally interpret "here" as the location of the person who wrote the instruction, but the location of the written instruction itself. Perhaps "speaker" was the wrong way to describe that.

    "Your claim, then, I take it, is that in principle someone could examine the pattern of my neurons firing and determine, simply from that examination, that its states and the transitions between them were just like those of my cat, and thus conclude that I was thinking of my cat without my ever saying so."

    Yes, more or less.

    "Suppose (as is surely possible in principle) there were a similar pattern of states and transitions in the molecular motions within a rock. Would that then mean that the rock was also thinking of my cat?"

    Yes. It would also mean that you could ask the rock questions, and get sensible answers that were true of your cat. I think in those circumstances it would be reasonable to say the rock's patterns were "about" your cat.

    Silicon chips could be thought of as a sort of rock, so I'd say it was more than a possibility "in principle".

    ReplyDelete
  107. @NiV:

    "It would also mean that you could ask the rock questions, and get sensible answers that were true of your cat."

    It would? Why?

    ReplyDelete
  108. "Silicon chips could be thought of as a sort of rock, so I'd say it was more than a possibility 'in principle'."

    Because the pattern of molecular motion within some silicon chips includes states and state-transition behavior that resemble those of my cat? What on earth makes you think that's the case?

    ReplyDelete
  109. "I say that yes, she did, but the painting is of the goddess, not of my wife."

    That's OK. Remember, it's the states and operations of the interpreter that define the meaning. If the interpreter (you) relates the picture to concepts about Greek gods and wisdom, courage, law, technology, and so on, the way the painting fitted into the pattern of relationships would mean that the painting 'meant' Athena, rather than your wife.

    However, if the interpreter's operations are related to the geometry of the face, colour of the eyes, and so on, the pattern would match your wife more closely.

    It's not simply the *state*, it's the *context*.

    "The idea is that all things display teleology by virtue of having a nature that they are actively striving to express to its fullest. This is true even of mindless beings, but once you start talking about beings with minds, then intentionality comes into the picture, because intentionality is specifically about mental states referring to or directed towards particular contents."

    I don't follow the logic, there. Things would surely be striving to fully express their *own* nature, not the nature(?) of something else. Unless the thing represented is part of the nature of the thing representing it?

    ReplyDelete
  110. Really? :-)

    I interpreted it as a means to convey information, like a speak-your-weight machine. I don't make any assumptions about the intentionality of the source.


    Well, I am not Matt Sheean. But I imagine we are using different understandings of "speaking," mine being intentional and his not. Either way is fine; I think the point we are both trying to get across is that the production of sounds understandable to some intelligent listener is not sufficient for intentionality. If you want to call that "speaking," that is fine, but then it does not follow that there is a relevant similarity between the robot and the human.

    To speak of it as "information" only obfuscates the matter as to why something qualifies as "information."

    ReplyDelete
  111. @NiV:

    "I would suggest that there is an objective way to connect the words to the speaker of the words that does not depend on the speaker's intentionality."

    What you need in order to avoid begging the question is a way to connect them without depending on anyone's intentionality, including the robot's programmer or the person who made the tape recording. We're still waiting, and I have a feeling we're going to be waiting for a long time.

    ReplyDelete
  112. @NiV:

    "Remember, it's the states and operations of the interpreter that define the meaning. . . . . It's not simply the *state*, it's the *context*."

    Your account doesn't provide any basis for regarding anyone as an "interpreter." If System A resembles System B in states and transitions, then System A "intends" System B and vice versa; end of story. Whatever else you think you mean by "context" is irrelevant.

    ReplyDelete
  113. "It would? Why?"

    Because I could ask *you* those questions and get the same answers. If the pattern of operations connecting questions to answers is the same in you and the rock, then the rock will answer them the same way.

    "Because the pattern of molecular motion within some silicon chips includes states and state-transition behavior that resemble those of my cat?"

    No, because silicon chips can be thought of as a sort of rock, and can also contain general patterns of behaviour rich in information, such as would be required to realise this thought experiment.

    ReplyDelete
  114. @NiV:

    "Because I could ask *you* those questions and get the same answers. If the pattern of operations connecting questions to answers is the same in you and the rock, then the rock will answer them the same way."

    I said nothing whatsoever about connecting questions to answers. I said that there's a pattern of molecular motion in the rock that "resembles" (the states and transitions of) my cat in the same way that the pattern of neural firings does when I'm thinking of my cat.

    ReplyDelete
  115. "What you need in order to avoid begging the question is a way to connect them without depending on anyone's intentionality, including the robot's programmer or the person who made the tape recording."

    I already have. All you've said is that the speaker happens to be intentional, not shown that the indexical's meaning depends on that intentionality.

    ReplyDelete
  116. "I said nothing whatsoever about connecting questions to answers. I said that there's a pattern of molecular motion in the rock that "resembles" (the states and transitions of) my cat in the same way that the pattern of neural firings does when I'm thinking of my cat."

    Yes, so if it resembles the states and transitions of your cat, it can answer questions about the states and transitions of your cat.

    ReplyDelete
  117. @Scott:

    "It would? Why?"

    I think it is because there is a first-order model in your head about which there could be a theoretical map that maps the theory of the model to the model of the rock. The second order model that models the map between models, also models the word "models", and therefore has a model that tells us, by looking at the states of the rock and the state transition tables what the rock model is saying about the cat of the model. The map is a homomorphism of state cats and the Church-Turing-Model machine universally thesis tells us every Deutsch rabbit trail of a model intentionaly there is.

    But I could be wrong.

    ReplyDelete
  118. @NiV:

    "Yes, so if it resembles the states and transitions of your cat, it can answer questions about the states and transitions of your cat."

    Again, why? My cat can't, so why should something that resembles my cat be able to?

    ReplyDelete
  119. @grodrigues:

    NiV has been as clear as day that (gender-inclusive-)his account of intentionality doesn't depend on any second-order models. According to him, all that's required for System A to "intend" System B (and vice versa) is that there exist (Platonically) a map from the states of System A to the states of System B preserving the binary relation that gives us the transitions between states.

    I'm thinking of my cat—not answering questions about her, not talking about her, just thinking of her and nothing else. NiV says that my thought is "about" my cat because the states and transitions in the patterns of my neural firings can be mapped to the states and transitions of the cat herself. I'm saying that exactly the same thing might be true of the patterns of molecular motion in a rock, and asking whether the rock is therefore also thinking about my cat—period. Introducing anything "second-order" here just confuses the issue.
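    To make the worry concrete, here is a minimal sketch (hypothetical Python; the toy "systems" are mine, standing in for the neurons, the cat, and the rock). Even a trivial three-state cycle admits several distinct transition-preserving maps onto any other three-state cycle, whatever its substrate:

```python
from itertools import permutations

# Toy "systems" given as deterministic transition maps (state -> next
# state). The same abstract 3-cycle could equally be neurons firing,
# a cat's states, or molecular motions in a rock.
system_a = {0: 1, 1: 2, 2: 0}
system_b = {'x': 'y', 'y': 'z', 'z': 'x'}

def preserves_transitions(mapping, a, b):
    # A map preserves transitions when mapping(a's successor of s)
    # equals b's successor of mapping(s), for every state s.
    return all(mapping[a[s]] == b[mapping[s]] for s in a)

def find_isomorphisms(a, b):
    # Brute-force every bijection from a's states onto b's states.
    return [m for m in (dict(zip(a, perm)) for perm in permutations(b))
            if preserves_transitions(m, a, b)]

# Even this trivial pair admits three distinct structure-preserving
# maps -- and any 3-cycle in any substrate would qualify equally well.
print(len(find_isomorphisms(system_a, system_b)))  # prints 3
```

    If mere existence of such a map suffices for "aboutness," then the rock's 3-cycle is as much "about" the cat as my neurons' 3-cycle is.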

    ReplyDelete
  120. NiV,

    How can the indexical's meaning NOT depend on any intentionality? It wouldn't mean anything then, other than whatever relation was being instantiated at that point.

    But in order for this to go through, YOU have to mean something by it. For all I know, right now your behavior here could be the product of your desire to tell all of us just how good olive oil flavored ice cream is and how surprised you were at this discovery.

    But! you retort... I wouldn't be justified in thinking that you were trying to tell me about ice cream. Well, why? Because your speech indicates to me that I should think that you INTEND to convince me that intentionality really is relations between states and operations. Maybe not, though. It's all just probability judgements about what the most likely thing you mean is, or not what you mean, but what your map is "trying" to map onto my map. But surely, if that is the case, you must mean it.

    ReplyDelete
  121. @NiV:

    You answered me thus:

    '"1. NiV and Bob seem to be arguing for physicalism from opposite positions. To NiV, almost all physical states can map to almost all others; to Bob, by adding more context, the mapping/meaning narrows to one or a few."

    We're arguing from exactly the same position. I was very impressed with the way Bob expressed it so concisely. :-)'

    I don't see how this can be so. You had very clearly stated previously that virtually any physical state/sequence can be mapped to any other one. (Presumably one of a fit level of complexity.) Bob expressly was advocating a view that, as complexity increases (that is, context is more and more defined), the possible "mapping" is increasingly narrowed. I do not see how these are "the same."

    Another question about this would be: granted your view that there are innumerable possible mappings, how is one to say that *this* case is about wave motions in water, as opposed to the Battle of Gettysburg, or a menu, or anything else? To answer "whichever it is being used to map" won't do; I don't see how that can work.

    ReplyDelete
  122. Second, if everything that exists is a physical state of some kind, then the totality itself would also have to be a physical state of some kind. And if all physical states are radically indeterminate, because they could always mean something else, then the totality itself would also be radically indeterminate. After all, if the totality is the context within which the part derives its meaning, then the totality itself can always be placed in another higher and more all-encompassing totality, which would be required to determine its meaning, and this process can proceed ad infinitum.

    I don't understand this objection. It seems to be saying that individual pixels have the same meaning content as letters, words, sentences, paragraphs, and so on. Moreover, it seems to contradict dguller's own classical theistic views that meaning is ultimately derived from an all-encompassing totality.

    ReplyDelete
  123. "Again, why? My cat can't, so why should something that resembles my cat be able to? "

    Your cat can. Does your cat like fish? Put some fish down and see if the cat goes to eat it. Or add the mental representation of fish to the mental representation of cat and see if it transitions into a state associated with the cat eating the fish. Does your cat have black fur? Look at the cat's fur, and see if it's black. Or operate on the mental model of the cat with an operation that identifies colour, and see if the resulting state maps to black.

    "I don't see how this can be so. You had very clearly stated previously that virtually any physical state/sequence can be mapped to any other one."

    No, various other people were arguing that that was what I was arguing, but I wasn't. Simple systems can often be mapped; as systems get more complex the possible mappings get more specific.

    ReplyDelete
  124. @dguller

    Yes, but they never shrink to one possible meaning. There is always a range of possible meanings when physical states are involved. Thus, the determinacy of thought becomes impossible if thought is a physical state.


    You are trying to prove too much. There is a range of possible meanings for any stimulus. Some stimuli will have a greater range, while some have a lesser range. These ranges are based on experience, memory of experiences.

    Context is the relationships between the stimuli, derived from the experience of them: the intersection between the various sets of ranges. The number of sets necessary to reduce the range of possible meanings for a stimulus to one is then just a relationship to the initial range of possible meanings for that stimulus. This holds whether or not a correct determination results; minds do make mistakes from time to time, as when experience of the specific stimulus is new, for example.

    There would seem to be nothing “impossible” about it.

    First, by “determinate intentionality”, I mean that one thing can be about or refer to a specific content. For example, when I think about cats, I am actually thinking about cats, and not about spatiotemporal cat slices, or cat shapes, or animals that purr, and so on.

    Second, if information is synonymous with physical states, then if the argument is correct, then information is itself indeterminate, because physical states are indeterminate.


    To the first, the “one thing” likely relates to a range of experiences, which in context, relate to cats, perhaps even to a specific cat, such as one you have some direct experience of.

    To the second, yes, information is only determinate in context, as I have been arguing.

    But if the brain is a totality of physical states, and physical states are indeterminate, then the output of the brain itself would also have to be indeterminate.

    A fallacy of composition, as I have already said, and I have already provided a kind of outline of how to limit a range of possibilities. I am not even sure I understand your objection beyond that you seem to be saying that it is impossible because it is impossible.

    Or, as Scott said, it is like adding more points to a two-dimensional plane and expecting to reach the third dimension. It just cannot be done.

    Really?

    Feeding a bunch of marks on paper into one end of an orchestra can produce the glorious 9th Symphony out of the other.

    ReplyDelete
    Is it not also the case that when I am thinking of cats in the abstract, the universal cat, I am necessarily thinking of something that is not physical, if one defines the physical or material as referring to empirical reality?
    Bob writes,

    There is a range of possible meanings for any stimulus. Some stimuli will have a greater range, while some have a lesser range. These ranges are based on experience, memory of experiences.

    Context is the relationships between the stimuli, derived from the experience of them: the intersection between the various sets of ranges. The number of sets necessary to reduce the range of possible meanings for a stimulus to one is then just a relationship to the initial range of possible meanings for that stimulus. This holds whether or not a correct determination results; minds do make mistakes from time to time, as when experience of the specific stimulus is new, for example.
    Surely this is simply question begging. It leaves unanswered how this experience and these stimuli create the initial meaning, or even how it is reshaped.

    ReplyDelete
  126. @Jeremy Taylor

    Surely this is simply question begging. It leaves unanswered how this experience and these stimuli create the initial meaning, or even how it is reshaped.

    I fail to see this as question begging. Obviously, I do not know exactly how the brain actually does this. I simply showed, I hope, that it is not, as was asserted, impossible in principle.

    Secondly, I would say that any "initial meaning" must itself result from the surrounding context. Experience does not occur within a vacuum and I would think that, in the case of a completely unique type of experience, one without any experienced precursor, it could be that there will be no range of meaning attached to such an experience.

    Have you heard the story about the natives who could not "see" the sailing ships just off the coast when first visited by western explorers?

    There was nothing in the previous experience of these natives that gave context to the visual stimulus that they were experiencing, leaving them mentally "blind", though the ships were right in front of them.

    ReplyDelete
    A fallacy of composition, as I have already said, and I have already provided a kind of outline of how to limit a range of possibilities

    A supposed fallacy of composition is not always one. Cf. here:

    "not every inference from part to whole commits a fallacy of composition; whether an inference does so depends on the subject matter. If each brick in a wall of Legos is red, it does follow that the wall as a whole is red...If A and B are of the same length, putting them side by side is going to give us a whole with a length different from those of A and B themselves. That just follows from the nature of length. If A and B are of the same color, putting them side by side is not going to give us a whole with a color different from those of A and B themselves. That just follows from the nature of color."

    ---

    Feeding a bunch of marks on paper into one end of an orchestra can produce the glorious 9th Symphony out of the other.

    Except the marks had to be placed in a particular way according to the will of the mind of Beethoven. "Feeding marks" indeed.

    ReplyDelete
  128. Except the marks had to be placed in a particular way according to the will of the mind of Beethoven. "Feeding marks" indeed.

    And?

    The "mind of Beethoven" in the analogy is analogous to the contextual experience of stimulus I have been describing.

    ReplyDelete
  129. The "mind of Beethoven" in the analogy is analogous to the contextual experience of stimulus I have been describing.

    Still smuggling in intentionality through the back door without providing an account of how it got there from a non-intentional state. Kicking the problem to "context" just pushes the problem up another level, where you have to provide an account of why context should have any specific directed-ness.

    ReplyDelete
  130. Kicking the problem to "context" just pushes the problem up another level, where you have to provide an account of why context should have any specific directed-ness.

    What is this directed towards?

    *

    ReplyDelete
  131. @Scott:

    "According to him, all that's required for System A to "intend" System B (and vice versa) is that there exist (Platonically) a map from the states of System A to the states of System B preserving the binary relation that gives us the transitions between states."

    From February 2, 2014 at 12:30 PM, NiV wrote (my emphasis):

    "What the rest of you mean by it is I think any of a set of different multi-layered composite systems built up from the basic building block. Models of models of models... But the brain is so good at it, you don't notice."

    And then again at February 2, 2014 at 10:44 PM:

    "Representations acquire more specific meaning by being included in wider operational networks, with more extensive and complicated patterns. "My cat" includes the word "my", which maps to real world features like the cat that lives in the same house as the speaker, eating food bought by the speaker, etc. that nail down one specific cat (although not literally, of course)."

    Which I take to mean that inside the model lives another model, which is a model of the model plus extensions (e.g. the cat and the word "cat"), and so on ad infinitum, like a Matryoshka of models (or a model of a Matryoshka?). It must be so. By appealing to the same similarity relations to which NiV appeals, all that is needed for such models modeling models to exist is that a suitable map exists. Or simply by noting that thinking about a cat is, say, a certain pattern of neurons firing, and therefore is itself a physical state and therefore is itself similar to many other things. And such maps almost certainly exist -- by messing arbitrarily with encodings or by appealing to the Church-Turing-Deutsch universality thesis to which NiV appeals. So the point is not that such higher-order models are needed to account for intentionality, but that on his account such models exist necessarily. And therefore I intend my own intentional thought when I think??

    Which I suppose ends up being the same point you are making.

    ReplyDelete
  132. @Scott:

    Some further points. Let me start with some technical notes. First a minor quibble: state transitions are not given by a binary relation, for a state can transition to more than one state. Also, state machines have less computational power than universal models of computation such as Turing machines -- although I am not sure if this is relevant.

    (1) Since state machines do not exist in nature, the only sense that I can make of this talk is that a state is a certain configuration of matter and a state transition is an evolution of the system from one configuration to another. But then, the only sense it can be made of a map between states respecting state transitions is just to say the two physical systems are equivalent qua physical systems. But this is not enough to support the operations of intellect which are indeterminate to form and infinitely receptive to it.

    (2) NiV will surely object and stack a layer of abstraction, but then he has to justify what he abstracts without smuggling in intentionality -- an impossible task. However, let us grant this for the sake of argument. Since different state machines, with different numbers of states and state transition tables, can recognize the same regular language (e.g. look up state minimization algorithms), presumably they are equivalent, but then it is not clear in what sense the map is supposed to preserve state transitions. Maybe only the maps that delete or add unreachable and indistinguishable states? And what is an unreachable state? Is, say, the state of matter arranged elephant-wise unreachable from a state of matter arranged pink flamingo-wise? Why or why not? And if yes, does that mean that when I think of an elephant I also think of everything in the universe, since everything else is either in a reachable or an unreachable state from the state of matter arranged elephant-wise?
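    For concreteness, the minimization point can be sketched like this (hypothetical Python; the example machines are mine): two automata with different state sets and transition tables that nevertheless recognize exactly the same language.

```python
def run_dfa(delta, start, accept, word):
    # Run a deterministic finite automaton: delta[state][symbol]
    # gives the unique next state.
    state = start
    for symbol in word:
        state = delta[state][symbol]
    return state in accept

# Minimal DFA accepting binary strings with an even number of 1s.
dfa_min = {'even': {'0': 'even', '1': 'odd'},
           'odd':  {'0': 'odd',  '1': 'even'}}

# A non-minimal DFA for the very same language: 'e1' and 'e2' are
# indistinguishable states, the kind a minimization algorithm collapses.
dfa_big = {'e1': {'0': 'e2', '1': 'o'},
           'e2': {'0': 'e1', '1': 'o'},
           'o':  {'0': 'o',  '1': 'e1'}}

# Both machines agree on every input, despite having different state
# sets and different transition tables.
for w in ['', '1', '11', '101', '1001', '0110']:
    assert run_dfa(dfa_min, 'even', {'even'}, w) == \
           run_dfa(dfa_big, 'e1', {'e1', 'e2'}, w)
```

    Since the two machines have different numbers of states, no bijection of states can preserve the transitions at all, yet they "do" the same thing -- which is the sense in which the required map is underdetermined.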

    (3) And what about indistinguishable states? Given an arbitrary encoding, any state can stand in for anything (think of a call instruction in a programming language), so the radical indeterminacy problem is left exactly as it was. My computer, if left alone, will cool down, so it will be in different states which presumably are indistinguishable. But they are not *physically* indistinguishable, so what is the difference? Is it not the answer that the difference lies in an interpretative act of a mind?

    (4) If for A to intend B is for a map between the state machines to exist, and if maps are mathematical concepts that exist only within minds, then the only sense I can make of it is saying that for A to intend B is for it to be possible for a mind C, given enough information, to construct a suitable map from the states of A to the states of B. And since such maps are what intentionality is on NiV's account, then this can only mean that for A to intend B is for a mind C to interpret A as B.

    I am not as disposed as you and others seem to be to grant any value in what NiV is proposing: it does not even rise to the dignity of being wrong. My suspicion is that this mishegoss was picked up from Hofstadter (who does suggest that meaning is a map). While he may be good for many things, philosophy of mind is not one of them.

    edit: correction of some annoying typos.

    ReplyDelete
  133. grodrigues,

    First a minor quibble: state transitions are not given by a binary relation, for a state can transition to more than one state.

    I had understood, though perhaps mistakenly, that Scott had reduced my example to something much simpler, something which involves, let us say, the XOR'ing of an ostensible bit. If a bit, ostensible or not, is, e.g., 0, what are the multiple states it can be transitioned to via XOR?
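    A quick sketch of the point (hypothetical Python): under XOR, each (state, input) pair has exactly one successor, so the transition is a function of state and input, not a one-to-many relation on states alone.

```python
# Transition of a single bit under XOR with an input bit.
def xor_step(state: int, inp: int) -> int:
    return state ^ inp

# Enumerate every (state, input) pair: each maps to a unique next state.
table = {(s, i): xor_step(s, i) for s in (0, 1) for i in (0, 1)}

# With the input fixed, a bit in state 0 has exactly one place to go;
# it reaches a *different* state only under a *different* input.
assert table[(0, 0)] == 0 and table[(0, 1)] == 1
```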

    I am not as disposed as you and others seem to be, to grant any value in what NiV is proposing:

    I think it safe to say that most "grant[ing] of value" to what NiV proposes has been of the kind that is often, though clearly not always, accompanied by words such as, "However, let us grant this for the sake of argument."

    - - - - -

    Also, when Scott said, "Introducing anything 'second-order' here just confuses the issue", it seems clear that he wasn't saying that "second-order" stuff hasn't been involved in what NiV has proposed. Rather, what he was saying, or so I think (but I'll gladly accept the correction if I'm wrong), was that at that particular stage in the discussion, when a very specific point was being addressed, introducing "second-order" would confuse the issue -- not the larger issue which you were addressing, but the smaller issue (of a very specific point at that particular stage in the discussion) which Scott was addressing with NiV. They were on a roll, and it would have been interesting to see where Scott's sober-mindedness might have led NiV.

    ReplyDelete
  134. ...then this can only mean that for A to intend B is for a mind C to interpret A as B.

    And in Ross's scheme, so far as I understand it, A would be a phantasm of B and it's that phantasm that needs to be interpreted/judged by mind C, and it is that judgement that counts as determinate thought...not what 'A' is.

    ReplyDelete
    I wish I had time to read more comments, but I must say it's refreshing to see people debating God's existence on both sides agreeing about the importance and inescapability of assumptions in what we believe and how we view the world. Most atheists I come across think they're absolutely objective, and that everyone who disagrees with them is somehow irrational and biased.

    ReplyDelete
  136. Scott:

    That's right, except that the Aristotelian account is utterly at odds with materialism: an intellect that receives the form of a cat without becoming a cat must be immaterial.

    I agree.

    It's also, I think, at odds with representationalism: the form of the cat that my intellect receives doesn't "represent" but is formally identical with the form of the cat itself.

    Here, I think matters are a little more complicated. To say that X represents Y means (a) Y is absent in some sense, and (b) X makes Y present in some way. That is why it is called re-present-ation to begin with. What is absent in the mind is the actual cat, but what is present in the mind is the form of the cat, which – as you said – is formally identical between the mind and the actual cat. The way that I think about it is that the form in the intellect is the lens through which the mind refers to objects. That is what I consider to be a representation.

    Incidentally, that's why I don't care for the "correspondence theory of truth"; it seems to me to entail some sort of representationalism and thereby to inherit a host of problems, not least that of imprisoning us within an "iron ring of ideas." The term I prefer is "conformity," which I've often seen used in A-T writings without, however, any careful or systematic attempts to distinguish it from correspondence.

    I think that representationalism of the variety that you are criticizing is a problem only if one assumes that the representation is utterly distinct and different from the object being represented, such that there is an ontological and epistemological abyss between the two that is unbridgeable. However, if you accept that there is an isomorphism of some kind between the representation and the object, then there is a bridge between them that avoids the “iron ring of ideas” that you mention. To continue my lens analogy, although someone could plausibly say that a brick wall was an epistemological obstruction between me and the other side, no-one would argue that a brick wall with a window is equally an epistemological obstruction, because one can see through the window to what is on the other side, and thus acquire knowledge about it.

    ReplyDelete
  137. Bob:

    You are trying to prove too much. There is a range of possible meanings for any stimulus. Some stimuli will have a greater range, while some have a lesser range. These ranges are based on experience, memory of experiences.

    But again, you are presupposing the existence of a mind that experiences and remembers those experiences in order to provide the necessary context to constrict the possible range of meanings.

    There would seem to be nothing “impossible” about it.

    The impossibility lies in restricting the possible range of meanings without a mind capable of intentionality, which you keep presupposing in your account, thus demonstrating my point. If you want to prove me wrong, then show me how one can delimit the range of possible meanings associated with a physical state to a single meaning without appealing to a mind capable of intentionality.

    To the first, the “one thing” likely relates to a range of experiences, which in context, relate to cats, perhaps even to a specific cat, such as one you have some direct experience of.

    I don’t think so. My thought about a cat is not about “a range of experiences”, but rather about a cat.

    To the second, yes, information is only determinate in context, as I have been arguing.

    And the question is what this “context” is supposed to be. If the context is nothing but physical states, or even the totality of physical states, then meaning remains indeterminate. It is only when you include a mind with intentionality in the context that the meaning becomes determinate, and this mind must be other than a physical state or a totality of physical states.

    A fallacy of composition, as I have already said and I already provided a kind of outline of how to limit a range of possibilities. I am not even sure I understand your objection beyond that you seem to be saying that it is impossible because it is impossible.

    It is not a fallacy of composition. Physical states in themselves are semantically indeterminate, and a totality of physical states is equally semantically indeterminate. Therefore, whatever restricts the range of possible meanings of a physical state cannot be a physical state or a totality of physical states. Unless you are arguing that the brain is neither a physical state nor a totality of physical states, then I’m not too sure how you can claim that the brain somehow restricts the range of possible meanings.

    Feeding a bunch of marks on paper into one end of an orchestra can produce the glorious 9th Symphony out of the other.

    Sure, if the orchestra and the marks on paper were designed by a mind with intentionality. It is not the physical characteristics of that arrangement that produces the music, but rather the intentional production of the device by a mind that does so. And that mind cannot itself be entirely physical, which is precisely what the argument is about.

    What is this directed towards?

    You claimed that “The "mind of Beethoven" in the analogy is analogous to the contextual experience of stimulus I have been describing”, which means that when you are proposing that the context is what determines meaning, you are actually appealing to a mind with intentionality that determines meaning within certain circumstances.

    And remember what this entire discussion was initially about. You claimed that the mind could not exist independently of the brain, and I replied that this was a metaphysical claim, and that there were arguments that purported to show that the mind must exist independently of the brain, because the brain is a physical entity, and no physical entity can fix meaning, which means that there are mental activities that cannot possibly be physical. Since all activities of the brain are physical, then the brain cannot account for non-physical mental activities.

    ReplyDelete
  138. @NiV:

    You replied to me:

    '"I don't see how this can be so. You had very clearly stated previously that virtually any physical state/sequence can be mapped to any other one."

    No, various other people were arguing that that was what I was arguing, but I wasn't. Simple systems can often be mapped; as systems get more complex the possible mappings get more specific.'

    But surely you did say that.

    2/2 3:37: "What I said is that the intentionality exists if such a mapping is *possible*. That's different from such a mapping actually being constructed in some mind. So when you said "be mapped", I suspected that you had misunderstood the *possibility* of a mapping - that a map 'exists' only in a theoretical/Platonic sense - for the *physical instantiation* of such a mapping, which is not required."

    2/2 12:30pm: To the question
    "Apparently any two systems are now to be regarded as "intending" each other if any sort of mapping between them, no matter how complex or arbitrary, "exists" in a mathematical sense."

    You answered "Yes, that's the idea!"

    2/2 at 12:56pm You again replied to a question:
    '"In other words, a mind can take any of them to be a representation of the others."

    Any of them *is* a representation of any the others. A mind can take them so or not, as it chooses (if so, it would be implementing a second-level model such as I described above). But each of them still represents the others even if there are no other minds around to see them.'

    Now, from this it certainly appears that the "mapping" is something someone does, using one system/sequence to model another. But it is clear that we can use any number of these as symbols for any others -- as you admit. It is a convention decided on by the mapper. Now you are saying that isn't what you mean? Surely, the more we put into a system (Bob's added context, for instance), the more we can map with it, not less. For we can always use a set of several data points to refer to a single datum in the referent.

    And, just to be clear, it is precisely the "mapping" of X to Y which is meant by "intention". But you seem always to refer this to a yet higher level of mapping. A sort of third man, I suppose.



    ReplyDelete
  139. @Glenn:

    Just confirming that you're correct about what I meant in each case. Thanks.

    @grodrigues:

    In my own example, the state transitions are described by a binary relation, namely the one I gave, which takes each of the two possible states to the other; I wasn't making a general claim about state transition systems.

    However, it's also my understanding that the transition function of a state transition system is often (though perhaps not always) understood to be a binary relation (by which I mean a subset of SxS where S is the set of states). If a state p in S can transition to more than one state, then I would expect that either (a) that subset will contain more than one ordered pair with p as its first element, or (b) the transition "function" isn't well-defined as we have to expand S to include more states. (Or both.)
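    [Ed.: a minimal sketch, my own illustration rather than anything from the thread, of Scott's point that a transition relation is a well-defined transition *function* only if each state has exactly one successor. The set names and the helper `is_function` are hypothetical.]

```python
# A state transition system modeled as a binary relation: a subset of S x S.
S = {"on", "off"}

# Scott's two-state example: each state transitions to the other.
R = {("on", "off"), ("off", "on")}

def is_function(relation, states):
    """The relation is a well-defined transition *function* only if each
    state appears as the first element of exactly one ordered pair."""
    return all(
        sum(1 for (p, q) in relation if p == s) == 1
        for s in states
    )

print(is_function(R, S))   # True: deterministic, function-like

# Give "on" a second successor and the "function" is no longer well-defined,
# as case (a) in the comment above describes.
R2 = R | {("on", "on")}
print(is_function(R2, S))  # False
```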

    I agree with your other observations, subject to Glenn's helpful caveats.

    ReplyDelete
  140. @NiV:

    "Your cat can. Does your cat like fish? Put some fish down see if the cat goes to eat it."

    That's very obviously not what's meant by "answering questions" in the present context.

    At any rate, though, my question about the states and transitions in the molecular motions within the rock didn't in any way presume that the rock would continue "representing" my cat if I started doing stuff to the rock. My question was about the states and transitions in the molecular motions within the rock now, quite apart from any "questions" I might put to it afterward, indeed quite apart from my or anyone else's knowledge of the molecular motions within the rock at all.

    ReplyDelete
  141. @dguller:

    I see your point about forms and representation and I don't entirely disagree, but I still think "conformity" is a better term than "correspondence" for the A-T theory of truth even if the latter isn't really wrong. This subject would take us pretty far afield, though, so let's see whether it comes up again in another thread where it's not so off-topic.

    ReplyDelete
  142. @dguller:

    Correction: the transition "function" isn't well-defined and we have to expand S to include more states.

    ReplyDelete
  143. Yikes. That last post was @drodrigues. Sorry.

    ReplyDelete
  144. Obviously the d beginning of dguller's name has transitioned over to Scott's map of grodrigues. :)

    ReplyDelete
  145. @Step2:

    [facepalm] I think you're right. In one of my earlier posts I actually typed "drodrigues" and caught my error before submitting it. Not this time.

    ReplyDelete
  146. This bit from NiV that George LeSauvage has quoted is exactly what I had in mind earlier when I said that NiV didn't think second-order models were necessary for representation (which he identifies with intentionality) to exist:

    Any of them *is* a representation of any the others. A mind can take them so or not, as it chooses (if so, it would be implementing a second-level model such as I described above). But each of them still represents the others even if there are no other minds around to see them.

    NiV doesn't deny that second-order models do exist; indeed he's quite clear that they do. But I think he's also quite clear that, on his account, "basic" intentionality doesn't depend on the existence of those second-order models.

    ReplyDelete
  147. @godrigues:

    NiV also said this earlier:

    What I said is that the intentionality exists if such a mapping is *possible*. That's different from such a mapping actually being constructed in some mind.

    He further specified that by "the *possibility* of a mapping" he means "that a map 'exists' only in a theoretical/Platonic sense."

    So I think he would deny that, in the sense relevant for intentionality, "maps are mathematical concepts that exist only within minds."

    ReplyDelete
  148. @George LeSauvage:

    "But surely you did say that."

    Not quite, but I think NiV's position, as stated, does imply that.

    What I think NiV wants to say is that two systems "intend" one another if and only if there's a transition-preserving map from one to the other. He seems to think that he can distinguish simpler systems from more complex ones, though, so he wouldn't agree that pretty much any system can be mapped onto any other.

    The problem with this view (one of them, anyway) is that we can always treat a more complex system as a simpler one just by changing the way we describe it. Suppose a system has ten, a hundred, or a thousand possible states when modeled in a certain way; then it has two when we model it as a two-state system with states p and not-p, where p is any subset of the ten and not-p is the complementary subset. And whatever its more "complex" transitions may be, all we have to do is restrict our interest to the transitions between p and not-p and we have the on-off system I described earlier.
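    [Ed.: Scott's partition construction can be made concrete. The sketch below is my own gloss, with hypothetical names (`coarse`, the particular subset `p`): any richer system can be re-described as the on-off system just by lumping its states into a subset and its complement.]

```python
# A "complex" ten-state system.
states = set(range(10))
p = {0, 1, 2, 3, 4}  # any subset we care to pick; not-p is its complement

def coarse(state):
    """Map each fine-grained state to 'p' or 'not-p'."""
    return "p" if state in p else "not-p"

# Any transition in the original system induces a transition between
# p and not-p; restricting attention to those yields the two-state system.
fine_transitions = [(0, 7), (7, 2), (2, 9)]
coarse_transitions = [(coarse(a), coarse(b)) for a, b in fine_transitions]
print(coarse_transitions)  # [('p', 'not-p'), ('not-p', 'p'), ('p', 'not-p')]
```

    The choice of subset is arbitrary, which is the subjectivity Scott flags below.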

    Yes, this means the choice of system is subjective, and no, NiV doesn't seem to realize the importance of this fact even though he's occasionally acknowledged that what system we use may depend on our interests. But the key point here is that he seems to be committed to the view that just about any physical "system" can be mapped onto any other (by a suitable assignment of states and transitions in our "model"), and therefore that pretty much any two physical systems "intend" each other.

    He doesn't intend that consequence, of course, but then that's more ironic than significant.

    ReplyDelete
  149. I see that I've elevated "godrigues" to divine status . . .

    ReplyDelete
  150. But the key point here is that he seems to be committed to the view that just about any physical "system" can be mapped onto any other (by a suitable assignment of states and transitions in our "model") and therefore that pretty much any two physical systems therefore "intend" each other.

    Wouldn't this also demonstrate the indeterminacy of any purely physical system, which I thought at some point he was arguing against?

    ReplyDelete
  151. @Steve:

    "Wouldn't this also demonstrate the indeterminacy of any purely physical system, which I thought at some point he was arguing against?"

    Yes, I think it would. dguller points out something along those lines earlier in the thread as well.

    ReplyDelete
  152. @dguller

    But again, you are presupposing the existence of a mind that experiences and remembers those experiences in order to provide the necessary context to constrict the possible range of meanings.

    No, experience and memory are parts of what we refer to as the mind, not distinct from it. Semantic limitations aside.


    I think we need to clarify some things as this is getting muddled.

    Let's step back a minute and try to strip down the complexity and work from there up.

    Consider a functioning brain that has absolutely no experiences or memories, nor access to any.

    In this specific context, based on your position, describe intentionality and meaning.

    ReplyDelete
  153. Bob:

    Consider a functioning brain that has absolutely no experiences or memories, nor access to any.

    In this specific context, based on your position, describe intentionality and meaning.


    In this context, the brain is not “functioning” at all. I actually have no idea what it is doing, if it is not processing experiences or memories of prior experiences. In fact, I don’t see how there can be intentionality and meaning in this scenario. There are no experiences, and so I can’t be thinking about anything that involves sensory information at all, which rules out the empirical world. And since we derive our concepts from our interaction with the empirical world, there can be no concepts. And even if you assume an a priori set of rational principles that are always present, irrespective of one’s experiences of the world, and which mirror the empirical world’s formal features in some way, then you would require a functional memory in order to access them at all. After all, if you could make no reference to anything prior to the present moment, then you have no idea whether the principles that you are accessing have remained the same at all, and thus cannot refer to them, either. So, I think that your scenario is incredibly problematic, to say the least.

    ReplyDelete
  154. Scott:

    I see your point about forms and representation and I don't entirely disagree, but I still think "conformity" is a better term than "correspondence" for the A-T theory of truth even if the latter isn't really wrong. This subject would take us pretty far afield, though, so let's see whether it comes up again in another thread where it's not so off-topic.

    I wouldn't disagree with anything that you wrote above.

    ReplyDelete
  155. @dguller

    Thanks.

    In my view, you are correct when you say:

    In fact, I don’t see how there can be intentionality and meaning in this scenario.

    Though I disagree with you saying that the brain is "not functioning", because I specifically said it was functioning.

    I did not say that the brain could not process experiences and memories. I simply said that it had no access to any.

    Nor do I see anything problematic with the scenario in the sense of a thought experiment.

    We have a brain in a vat with its senses turned off.

    So, just to be clear. In the scenario as described, you would agree that intentionality and meaning are most likely absent.

    Would you agree?

    ReplyDelete
  156. Bob:

    In my view, you are correct when you say:

    In fact, I don’t see how there can be intentionality and meaning in this scenario.


    Then your scenario cannot possibly help us to understand how intentionality and meaning are possible, other than that neither is possible under your scenario.

    Though I disagree with you saying that the brain is "not functioning", because I specifically said it was functioning.

    I know that you said it, and I replied that I have no idea what it is doing under your scenario that would be similar to what a functional brain does.

    I did not say that the brain could not process experiences and memories. I simply said that it had no access to any.

    How can it process that which it lacks any access to? That’s like saying that I’m digesting the fries that I never ate.

    We have a brain in a Vat with it's senses turned off.

    But even a brain in a vat in that scenario has access to its memories of prior experiences that it had before the senses were turned off.

    So, just to be clear. In the scenario as described, you would agree that intentionality and meaning are most likely absent.

    Yes.

    ReplyDelete
  157. This is a fascinating discussion.

    May I interject a question here? I'd like to hear what both sides have to say.

    I do not understand the A/T argument that something that can take the form of another thing must be immaterial. In Feser's Aquinas book, for example, it is simply asserted that for the mind, grasping the form of the triangle, to therewith have a real triangle within it is "absurd." Why is it absurd? What is a "real triangle" such that it cannot exist in a mind? Similarly, Scott says "the Aristotelian account is utterly at odds with materialism: an intellect that receives the form of a cat without becoming a cat must be immaterial." Why is this the case?

    Would it be fair for me to say that what NiV and Bob are basically claiming is that you can have a cat form made out of one kind of material, the stuff made with cat DNA, call it a "physical" cat, and the same cat form in a different material, the stuff of a human brain, call it a "mental" cat.

    ReplyDelete
  158. Natural Mind:

    I do not understand the A/T argument that something that can take the form of another thing must be immaterial. In Feser's Aquinas book, for example, it is simply asserted that for the mind, grasping the form of the triangle, to therewith have a real triangle within it is "absurd." Why is it absurd? What is a "real triangle" such that it cannot exist in a mind? Similarly, Scott says "the Aristotelian account is utterly at odds with materialism: an intellect that receives the form of a cat without becoming a cat must be immaterial." Why is this the case?

    The idea is that forms, i.e. that which determines what something is supposed to be, do not exist independently of their presence within substances. Substances are either material or immaterial. When a form is present in matter, then you have a material entity of some kind, but when a form is present in an immaterial intellect, then you do not have a material entity of some kind, but rather the form of the material entity. For example, an actual material cat is not present in my intellect when I think about cats, but rather the form of feline nature is present in my intellect, which I abstracted from the material cat’s impact upon my senses. Actually, one of the defining features of an intellect is that it can assume a form without becoming what that form is about. For example, when matter acquires the form of a feline nature, then the matter becomes a particular material cat, but when an intellect acquires the form of a feline nature, then the intellect does not become a particular material cat.

    ReplyDelete
  159. @dguller

    I think I may have confused you. Apologies.


    I know that you said it, and I replied that I have no idea what it is doing under your scenario that would be similar to what a functional brain does.

    At this point, all one can probably say about such a brain is that it is not dead.

    How can it process that which it lacks any access to? That’s like saying that I’m digesting the fries that I never ate.

    Obviously it is not processing anything; I never said it was. However, that is not the same as saying that it cannot process something, if in fact it had something to process.

    But even a brain in a vat in that scenario has access to its memories of prior experiences that it had before the senses were turned off.

    You are introducing elements that were not specified in the explanation of the experiment. This brain has no prior experiences.
    ................................


    Since we agree that, in this brain's current state, intentionality and meaning are likely absent, we can move along.

    Our mad scientist will now activate the brain's auditory receptor and introduce the sound of rain falling on a window.

    What meaning, if any, do you think the brain can attach to this sound?

    Do you get the gist of this experiment?

    ReplyDelete
  160. @Scott:

    "I see that I've elevated "godrigues" to divine status"

    And it suits me mighty fine. Now, lowly mortals, let me pour out a tidbit of my infinite wisdom on you. Hopefully with no stupid typos or silly mistakes.

    "However, it's also my understanding that the transition function of a state transition system is often (though perhaps not always) understood to be a binary relation (by which I mean a subset of SxS where S is the set of states). If a state p in S can transition to more than one state, then I would expect that either (a) that subset will contain more than one ordered pair with p as its first element, or (b) the transition "function" isn't well-defined as we have to expand S to include more states. (Or both.)"

    In general state machines, state transitions are driven by the inputs, so yes, the transition function, if you mean what I think you mean, is not just not well-defined, it is not defined at all. In NiV's account, the role of inputs would be played by the interactions with the surrounding environment -- with all the attendant, well-known problems. But this is just a minor technical quibble over a general definition, and has no bearing on your example.
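    [Ed.: grodrigues's point, that in general state machines the transitions depend on inputs and not on the current state alone, can be illustrated with the textbook turnstile example. The sketch below is my own, not from the thread; `delta` and `step` are hypothetical names.]

```python
# Transition map keyed on (state, input) pairs, not on state alone:
# the classic coin-operated turnstile.
delta = {
    ("locked", "coin"): "unlocked",
    ("locked", "push"): "locked",
    ("unlocked", "push"): "locked",
    ("unlocked", "coin"): "unlocked",
}

def step(state, inp):
    """Advance the machine one step; undefined without an input."""
    return delta[(state, inp)]

state = "locked"
for inp in ["push", "coin", "push"]:
    state = step(state, inp)
print(state)  # 'locked'
```

    Without the input sequence there is simply no fact about which transition occurs, which is the sense in which the transition "function" of state alone is not defined at all.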

    I will not say anything else about NiV's account, because, just in case it is not obvious, to disentangle what follows logically from it from what NiV intends would need his clarifying comments (there is ample irony in there).

    ReplyDelete
  161. Bob:

    Our mad scientist will now activate the brain's auditory receptor and introduce the sound of rain falling on a window.

    What meaning, if any, do you think the brain can attach to this sound?


    Whatever meaning it can attach to the sound will not be determinate, but rather could only consist of a range of possible meanings. The neural pathways that are activated by the auditory stimulus could refer to the sound of rain, the sound of rain falling on a window, the sound of rain falling on a flat surface, the sound of water, the sound of fluid, the sound of a certain frequency, the sound of a weather state, and so on. There is nothing about the neural pathway itself that determines the meaning, but rather one must refer to the wider context, as you mentioned. One thing that you will notice, though, is that part of this wider context is the presence of the mad scientist who is interpreting the findings of his experiment, and thus already introducing the intentionality of a mind into the equation. Your task was to demonstrate how physical states alone could determine meaning without presupposing an interpreting mind with intentionality. So far, not so good.

    ReplyDelete
  162. Bob:

    Also, the neural pathway could refer to that particular sound at that particular time, as well. So, even if you winnowed every possible stimulus down to a single one, and the exact same neural pathway is activated, even then you still could not distinguish, solely on the basis of the physical states, between the neural stimulus being about the sound at time t1 versus the sound at time t2.

    ReplyDelete
  163. @dguller

    You keep trying to introduce things on your own. Not sure why you are doing so.

    Forget the mad scientist. I did not ask you what the mad scientist was interpreting. We are talking specifically about the brain in the vat.

    Suffice it to say that, at this point, there would seem to be nothing to which the brain can refer, as its only experience is the sound. So, my position would be that, at this point, intentionality and meaning are still absent.

    Would you disagree with that?

    ReplyDelete
    Also, the neural pathway could refer to that particular sound at that particular time, as well. So, even if you winnowed every possible stimulus down to a single one, and the exact same neural pathway is activated, even then you still could not distinguish, solely on the basis of the physical states, between the neural stimulus being about the sound at time t1 versus the sound at time t2.

    Not sure how the concept of time would be meaningful to this brain, at the moment. What would the brain be basing this concept on?

    ReplyDelete
  165. @dguller

    Further to the previous post.

    At the present, the brain only has the experience of the sound, whatever that entails neurologically.

    There is no T2.

    ReplyDelete
  166. Bob:

    Forget the mad scientist. I did not ask you what the mad scientist was interpreting. We are talking specifically about the brain in the vat.

    You mentioned that context was what determines meaning, and the mad scientist was part of the context of your scenario. That’s the only reason I brought him (or her) up. But regarding the brain in the vat, all we can say about it is that in response to a certain indeterminate stimulus, a certain neural pathway is activated in the brain.

    Suffice it to say that, at this point, there would seem to be nothing to which the brain can refer, as its only experience is the sound. So, my position would be that, at this point, intentionality and meaning are still absent.

    At the very least, there is certainly no determinate intentionality and meaning, but I would agree that there is no intentionality and meaning at all, as well.

    Not sure how the concept of time would be meaningful to this brain, at the moment. What would the brain be basing this concept on?

    I was just saying that if you wanted to perform an inquiry into what the neural pathway might mean by trying different stimuli and seeing which resulted in the same neural pathway to fire, then that procedure would fail to result in determinate meaning, as well, for the reasons that I mentioned.

    ReplyDelete
  167. @dguller

    I was just saying that if you wanted to perform an inquiry into what the neural pathway might mean by trying different stimuli and seeing which resulted in the same neural pathway to fire, then that procedure would fail to result in determinate meaning, as well, for the reasons that I mentioned.

    Well, since I believe that the brain deals with the time coordinate similarly to how it deals with other coordinates, I would think that T1 and T2 themselves would have their own bits of neural pathways. In the way that one can differentiate eating noodles last night from eating noodles a week ago.

    ReplyDelete
  168. Bob:

    Well, since I believe that the brain deals with the time coordinate similarly to how it deals with other coordinates, I would think that T1 and T2 themselves would have their own bits of neural pathways. In the way that one can differentiate eating noodles last night from eating noodles a week ago.

    Even worse for your account, then. That would mean that even if you managed to whittle down the stimuli to a single stimulus that necessarily excluded every other possible factor, then you would still have different neural pathways activated, because you would have a different neural pathway for different times of the stimuli. And even if you want to say that only part of the neural pathway would correspond to the temporal periods of the stimuli, and the rest of the neural pathway would be about the stimulus itself, then you are still left with the problem that there are still an infinite number of possible meanings that would correspond to that stimulus (i.e. quus-like scenarios), and so under the best of scenarios, meaning is still indeterminate.
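    [Ed.: the "quus-like scenarios" dguller invokes are from Kripke's Wittgenstein on Rules and Private Language. A minimal sketch of the point, my own illustration with hypothetical names: two distinct functions agree on every case observed so far, so the finite physical record underdetermines which function is "meant".]

```python
CUTOFF = 57  # Kripke's arbitrary bound

def plus(x, y):
    return x + y

def quus(x, y):
    # Agrees with plus for small arguments, returns 5 otherwise.
    return x + y if x < CUTOFF and y < CUTOFF else 5

# Every computation actually performed happens to involve small numbers...
observed = [(2, 3), (10, 20), (7, 7)]
print(all(plus(x, y) == quus(x, y) for x, y in observed))  # True

# ...yet the two functions diverge outside the observed range.
print(plus(60, 2), quus(60, 2))  # 62 5
```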

    ReplyDelete
  169. @dguller

    At the very least, there is certainly no determinate intentionality and meaning, but I would agree that there is no intentionality and meaning at all, as well.

    Cool.


    So, at what point do you think there would be meaning and intentionality in our BIV?

    I would probably say that there would have to be sufficient related experiences for a brain to derive any meaning for the sound at all.

    Would you agree?

    And if so, doesn't this make it plausible to say that meaning is, in some sense, derivative from the sum of related experiences, since without related experience, there would seem to be no meaning possible?

    I am using experience in the sense of the introduction of stimuli and the brain's capacity to capture these stimuli as memories, not in any broader sense.

    ReplyDelete
    Even worse for your account, then. That would mean that even if you managed to whittle down the stimuli to a single stimulus that necessarily excluded every other possible factor, then you would still have different neural pathways activated, because you would have a different neural pathway for different times of the stimuli. And even if you want to say that only part of the neural pathway would correspond to the temporal periods of the stimuli, and the rest of the neural pathway would be about the stimulus itself, then you are still left with the problem that there are still an infinite number of possible meanings that would correspond to that stimulus (i.e. quus-like scenarios), and so under the best of scenarios, meaning is still indeterminate.

    No, not a problem, as I said that it is plausible that specific meanings are derived from the intersection of various experiences with varying ranges of associations. The more experiences, the more precise the meaning.

    ReplyDelete
  171. Bob:

    So, at what point do you think there would be meaning and intentionality in our BIV?

    I would probably say that there would have to be sufficient related experiences for a brain to derive any meaning for the sound at all.

    Would you agree?


    That depends upon what you mean by “sufficient related experiences”. To me, what would make an experience “sufficient” to delimit meaning would be one that is experienced by a mind capable of intentionality that could then narrow the range of possible meanings to a single one. The experience itself could not do so, and the brain, as a physical entity, could not do so, no matter how many sensory stimuli it is exposed to.

    And if so, doesn't this make it plausible to say that meaning is, in some sense, derivative from the sum of related experiences, since without related experience, there would seem to be no meaning possible?

    Meaning depends upon experiences if the meaning is about those experiences, but it does not follow that if you just combined a sufficiently complex range of experiences, then determinate meaning would follow. In fact, it is the mind that determines meaning from a range of possible meanings that are consistent with the same experiences. The question is whether this mind is physical or not. I am arguing that it cannot be physical, because all physical states are consistent with a possible range of meanings, and thus something that is not physical must be involved to delimit that range of meanings to a single one.

    No, not a problem, as I said that it is plausible that specific meanings are derived from the intersection of various experiences with varying ranges of associations. The more experiences, the more precise the meaning.

    And I agreed that one could narrow down the range of meanings by accumulated experiences, but the issue is not narrowing down to a smaller range of possible meanings, but narrowing down to a single meaning. Your account can accommodate the former, but not the latter, and it is the latter that is the issue at hand.

  172. @dguller and Bob:

    Intentionality is broader than just deriving intellectual meaning from sensory experience. I'd say that sensory experience is itself "intentional" in the sense that it has an object—in the current example, a cluster of auditory qualia that the rest of us would recognize/interpret as the sounds of rain falling on a window, but the brain in question has as yet no basis for doing so.

    I'd therefore also say that if[!] the brain (or the person whose brain it is, assuming there is one) is actually hearing the sound, then that sort of intentionality is present whether or not the brain has any way to interpret the sound as the sound of something.

  173. (It's probably already clear, but I'll mention anyway, that I agree with dguller that the physical state of the brain alone would not be sufficient to determine the quale or qualia it was experiencing. It's probably also the case that more than one physical state/process would be consistent with the having of that experience, so the indeterminacy would likely run both ways.)

  174. That depends upon what you mean by “sufficient related experiences”. To me, what would make an experience “sufficient” to delimit meaning would be one that is experienced by a mind capable of intentionality that could then narrow the range of possible meanings to a single one. The experience itself could not do so, and the brain, as a physical entity, could not do so, no matter how many sensory stimuli it is exposed to.

    I am not sure I understand how you can conclude that the brain, as a physical entity, could not do so.

    I just showed you a state where we agreed that no meaning or intentionality was present. You are saying that regardless of how much stimulus is supplied to our BIV, it would never be able to intend anything at all?

    Meaning depends upon experiences if the meaning is about those experiences, but it does not follow that if you just combined a sufficiently complex range of experiences, then determinate meaning would follow. In fact, it is the mind that determines meaning from a range of possible meanings that are consistent with the same experiences. The question is whether this mind is physical or not. I am arguing that it cannot be physical, because all physical states are consistent with a possible range of meanings, and thus something that is not physical must be involved to delimit that range of meanings to a single one.

    I am sorry, but that just seems like you are making a giant leap to me.

    I do not see how it follows that since all physical states are consistent with a possible range of meanings, there must be something non-physical to delimit particular meaning.

    What about a scenario where we have five related physical states? Four of these states have a wide range of possible meanings; one of them has only a couple. Is your position that the intersection of these ranges cannot, in principle, delimit the range of meanings to a single one?

  175. @Scott

    Intentionality is broader than just deriving intellectual meaning from sensory experience. I'd say that sensory experience is itself "intentional" in the sense that it has an object—in the current example, a cluster of auditory qualia that the rest of us would recognize/interpret as the sounds of rain falling on a window, but the brain in question has as yet no basis for doing so.

    I am not sure I would accept this definition of intentional. It seems to me that the word implies something active and not passive.

  176. Bob:

    I just showed you a state where we agreed that no meaning or intentionality was present. You are saying that regardless of how much stimulus is supplied to our BIV, it would never be able to intend anything at all?

    I’m saying that at the most, the brain could delimit a larger range of possible meanings to a smaller range of possible meanings, but it would never be able to determine a single meaning.

    I do not see how it follows that since all physical states are consistent with a possible range of meanings, there must be something non-physical to delimit particular meaning.

    Because no matter how many more physical states you add to the equation, you never reach a single meaning. If you agree that it is possible for human beings to intend single meanings, then this must be due to something non-physical, because anything physical will only make the range of possible meanings smaller without ever reaching a single meaning.

    What about a scenario where we have five related physical states? Four of these states have a wide range of possible meanings; one of them has only a couple. Is your position that the intersection of these ranges cannot, in principle, delimit the range of meanings to a single one?

    Walk me through how this would work, and I’ll respond more adequately.

  177. @dguller,

    Walk me through how this would work, and I’ll respond more adequately.

    I only have limited time at the moment, but I'll work something up and post it soon.

    Thanks for the discussion so far.

  178. @Bob:

    "I am not sure I would accept this definition of intentional. It seems to me that the word implies something active and not passive."

    In its standard philosophical usage it just means directedness, with no implications as to activity or passivity. But if you prefer just to call it "directedness" (or "phenomenal consciousness") instead, it doesn't matter to my point, which is that sensory experience has this sort of directedness.

  179. Bob,
    If you have access to this paper, I would give it a read.

    http://www.pdcnet.org/acpq/content/acpq_2013_0087_0001_0001_0032

  180. @dguller,

    thank you very much for your reply!

    I do understand what the A-T position is: that the form of cat/triangle in the intellect is of an immaterial substance.

    What I don't understand, however, is why it is considered absurd/impossible that the intellectual substance be material. Why can't we say that the form of the cat or triangle in one material -- say, cat cells with cat DNA, or three lines drawn in chalk -- is what we call a physical cat, and the form of the cat/triangle in a different (but still perfectly material!) material -- say, neurological states and processes in a human brain -- is what we call a mental cat? I fail to see the link between the form's instantiation in the mind and the necessity for its immateriality.

  181. Natural Mind:

    What I don't understand, however, is why it is considered absurd/impossible that the intellectual substance be material. Why can't we say that the form of the cat or triangle in one material -- say, cat cells with cat DNA, or three lines drawn in chalk -- is what we call a physical cat, and the form of the cat/triangle in a different (but still perfectly material!) material -- say, neurological states and processes in a human brain -- is what we call a mental cat? I fail to see the link between the form's instantiation in the mind and the necessity for its immateriality.

    Perhaps considering sensible species would be useful here. When an empirical entity with form F interacts with the sensory apparatus of a complex biological organism, then F is passed from the entity to the senses of the organism as a sensible species. So, you have F-in-entity and F-in-sense (= sensible species), both of which are material. It is F-in-sense that subsequently causes a subjective experience of the entity as a quale. Again, all of which occurs in the material part of the mind, and does not require an immaterial intellect. In fact, the above process occurs even in animals with sufficiently sophisticated neurological systems.

    However, F-in-sense is still a particular, and thus cannot be the basis for scientific knowledge, which is necessarily of universal principles and causes. And it is at this point that the intellect abstracts an intelligible species F from the sensible species F, and uses the intelligible species F, or F-in-intellect, to understand the formal and universal aspects of the empirical entity. This part of the process must be immaterial, because materiality is necessarily particular and individual, and thus not amenable to universal knowledge. And as has been argued on this thread, materiality is also unable to provide determinate thought content, and thus such conceptual knowledge must be immaterial in nature.

  182. @Natural Mind:

    "I do understand what the A-T position is: that the form of cat/triangle in the intellect is of an immaterial substance."

    The A-T position is that the cat itself is a substance, and thus (on a hylemorphic account of substances) a compound of form and matter. The form of the cat is not a "substance," and it doesn't become an "immaterial substance" just by being in the intellect. (Aquinas would say that angels are immaterial substances, but he wouldn't say that about forms in the intellect.)

    "What I don't understand, however, is why it is considered absurd/impossible that the intellectual substance be material."

    Basically, because in A-T "matter" just is whatever a "form" must be joined with in order for a substance with that form to exist. The point is that because the intellect can receive the form of a cat without itself becoming a substantial cat, the intellect itself can't be what A-T means by "matter."

  183. By the way, I'm not disagreeing with dguller; we're just addressing different issues.

    The word "material" is used in varying senses within A-T, and I think that in the context of sensory perception it tends to be used to mean something like "bodily." I agree with dguller that sensory experience is material in that sense. The intellect is immaterial in both senses, and dguller's reply can be taken as complementary to mine in explaining why.

  184. @dguller

    Walk me through how this would work, and I’ll respond more adequately.

    I am basing this on the following premises.

    The total possible number of common referents, that is, referents shared between the sets, can be no more than the number of referents in the least populous set.

    No actual physical state will have an infinity of possible referents, and any physical state will have at least one possible referent, even if that referent is no referent, a null set (which might even be the same as the infinite set, since referencing absolutely everything is not much different from referencing absolutely nothing).

    A physical state can be said to be related to another physical state if it contains at least one referent in common with the other.

    A physical state must have at least one referent not in common with another in order for it to be a distinct physical state.

    Therefore, it is in principle not only logically possible, but physically possible to delimit the range of referents from a group of related physical states to a single referent.
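    Bob's premises can be given a toy sketch: treat each physical state as a finite set of candidate referents and intersect the related states. The referent labels below are illustrative stand-ins, not anything from the discussion, and the sketch simply assumes the finiteness premise rather than defending it.

```python
from functools import reduce

# Each physical state carries a finite set of candidate referents
# (illustrative labels only).
states = [
    {"cat", "mat", "animal", "furniture"},
    {"cat", "animal", "pet"},
    {"cat", "mat"},
    {"cat", "pet", "whiskers"},
    {"cat"},  # the least populous set: a single candidate referent
]

# Intersecting related states narrows the range of candidates; the result
# can be no larger than the smallest set (Bob's first premise).
candidates = reduce(set.intersection, states)
print(candidates)  # {'cat'}
```

    On these premises the intersection can indeed shrink to a singleton; dguller's objections below target the premises themselves, chiefly that the candidate sets are finite and that shared characteristics suffice for reference.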

  185. @Scott

    In its standard philosophical usage it just means directedness, with no implications as to activity or passivity. But if you prefer just to call it "directedness" (or "phenomenal consciousness") instead, it doesn't matter to my point, which is that sensory experience has this sort of directedness.

    Do you mean to say that specific stimuli are the cause of the specific neural patterns to which they relate?

  186. @Steve

    I cannot access it; paywall. Can you summarise the main argument being put forward? I assume it is similar to the one that dguller is espousing.

  187. @Scott,

    Thanks for the explanation.

    I would contend, though, that a cat in the mind is a substantial cat, instantiated in the material of the mind/brain, just as a cat on the mat is a substantial cat, instantiated in the material of cat cells. It's a radically different instantiation of the cat form, but a no less material one. It's a cat concept. I don't see what immateriality buys you, what "problem" it solves.

    There's nothing absurd about there being quite real, quite material cats and triangles in the mind. Being made out of different material, they have different character, but sharing the same form, they are both cats and triangles.

    I think this is close to NiV's and Bob's position earlier. Would you agree, NiV and/or Bob?

  188. Bob,
    I was hoping you could read it (maybe Ed could email it?) to gain more background/details to the general argument. I'm at work now, and heading to a basketball game tonight, but will summarize for you ASAP.

  189. Bob:

    The total possible number of common referents, that is, referents shared between the sets, can be no more than the number of referents in the least populous set.

    If you are talking about referents that are members of the set, then I agree.

    No actual physical state will have an infinity of possible referents

    Why not? You can see the falsity of this premise by just looking at a physical state of a cat on a mat. Does that physical state refer to a cat on a mat, or a cat, or a mat, or an animal on a mat, or an animal on a piece of furniture, and so on? And you could add an infinite number of potential referents by just adding numbers to the referent. For example, does the physical state refer to one cat on one mat, or to two divided by two cat on two divided by two mat, or three divided by three cat on three divided by three mat, and on and on, to infinity?

    and any physical state will have at least one possible referent, even if that referent is no referent, a null set (which might even be the same as the infinite set since referencing absolutely everything is not much different than referencing absolutely nothing).

    I give you this one, even though it is debatable under some circumstances.

    A physical state can be said to be related to another physical state if it contains at least one referent in common with the other.

    Okay.

    A physical state must have at least one referent not in common with another in order for it to be a distinct physical state.

    Okay.

    Therefore, it is in principle not only logically possible, but physically possible to delimit the range of referents from a group of related physical states to a single referent.

    How does this conclusion follow?

  190. Bob:

    All that follows from your argument is that if a physical state A has some characteristics in common with a physical state B, then A can refer to B solely on the basis of the characteristics that they have in common. It does not show that A can exclusively refer to just one characteristic of B that they share in common. For example, suppose that A = {D, E, F} and B = {D, E, G}; then you can say that A refers to B in terms of D and E. But it does not follow that A refers to B only in terms of D, or only in terms of E. That cannot be determined on the basis of A and B alone.
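    dguller's A/B example can be made concrete: the shared characteristics come out as the set intersection of A and B, which still contains two candidates, so A and B alone do not fix a single referent. This is just a sketch of the point as stated, with the letters used as placeholder labels.

```python
# A and B share D and E, so A can refer to B "in terms of" their common
# characteristics -- but nothing in A and B alone picks out D rather
# than E as *the* referent.

A = {"D", "E", "F"}
B = {"D", "E", "G"}

common = A & B
print(common)           # {'D', 'E'} (display order may vary)
assert len(common) > 1  # the physical facts leave the reference indeterminate
```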

  191. @dguller:

    "You can see the falsity of this premise by just looking at a physical state of a cat on a mat. Does that physical state refer to a cat on a mat, or a cat, or a mat, or an animal on a mat, or an animal on a piece of furniture, and so on?"

    Moreover, let's take a cat to represent coffee, the relation of being on to represent the relation of being in, and a mat to represent a cup. Now the cat's physically being on the mat represents there being coffee in a cup.

    If fanciful "representations" like this are ruled out, it's not on the basis of the physics alone.

    @Bob:

    "Do you mean to say that specific stimuli are the cause of the specific neural patterns to which they relate?"

    No, I mean to say that when I (e.g.) hear a sound, my experience has that sound as an "object" (what I hear is the sound) and is thus in some sense "directed" toward it, even if I don't have any idea what it's the sound of.
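    Scott's coffee-in-a-cup point above can be sketched as two interpretation schemes consistent with one and the same physical arrangement; the mappings below are illustrative stand-ins.

```python
# One and the same physical state...
physical_state = ("cat", "on", "mat")

# ...admits more than one representation scheme:
literal = {"cat": "cat", "on": "on", "mat": "mat"}
fanciful = {"cat": "coffee", "on": "in", "mat": "cup"}

def interpret(state, scheme):
    """Read the physical state under a given representation scheme."""
    return tuple(scheme[part] for part in state)

print(interpret(physical_state, literal))   # ('cat', 'on', 'mat')
print(interpret(physical_state, fanciful))  # ('coffee', 'in', 'cup')
# Nothing in the physics alone rules the second scheme out.
```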

  192. Bob,

    I apologize if my reentry into this thread changes it from a conversation to a barrage. I have a question, though.

    Would you say that the correct referent ends up being selected for? For instance, that when I look at a car, things like 'tire' get selected for rather than 'donut', at least over time. Maybe when I was a toddler I thought cars moved around on donuts. Over time a context develops in my brain that causes it, in the presence of this or that object, to select for what has been the most suitable referent for that object?

  193. Bob:

    Also, just because two physical states share properties in common does not mean that one refers to the other solely on the basis of those properties. So, although common properties might be a necessary condition for reference, they certainly aren't a sufficient condition for reference.

  194. So, although common properties might be a necessary condition for reference

    I think Scott also illustrated that 'properties in common' explode, especially when we include representative properties and substitution.

  195. @Bob:

    "[A]ny physical state will have at least one possible referent, even if that referent is no referent[.]"

    dguller is willing to grant you this arguendo, but I'm not sure I am. In what sense does a physical state "refer"? Depending on precisely what you mean here, you may be begging the very question at issue.

  196. I have a question on the defense of one of the premises of Classical Theism and the argument from change.

    Part of the defense of the argument entails the proposition that the world is essentially ordered (as opposed to accidentally). What proof does the Classical Theist have for this proposition?

  197. @anon

    The fact that everything that is a combination of act and potency derives its actuality from the first member at any instant. A temporal account won't do here, and so we end up with a hierarchically ordered series.

    Accidentally ordered series, however, could continue to exist even with their first member falling out of existence.

    You might want to check this out:
    http://vimeo.com/60979789

  198. @Daniel

    The fact that everything that is a combination of act and potency derives its actuality from the first member at any instant. A temporal account won't do here, and so we end up with a hierarchically ordered series.

    Can you please elaborate on this a little more? The video you posted is what actually generated my question. I'm having trouble conceptualizing a proof as to why the universe MUST be essentially ordered.
