Does the proven unsolvability of Turing's halting problem mean that, though we know that either the axiom of choice or the axiom of determinacy is the case, we cannot know which is the case?
― M.V., Friday, 25 September 2009 06:49 (sixteen years ago)
Not at all. The halting problem concerns Turing machines, and says that there is no Turing machine that can decide, for an arbitrary program and input, whether that program will halt. (Note that just running the program and waiting doesn't work: if the program never halts, you wait forever and never get to report an answer. The actual proof is a diagonal argument: any purported halting decider can be fed a program built from the decider itself, rigged to do the opposite of whatever the decider predicts.)
Now Turing machines are important because they are a model of computation, and the Church-Turing thesis says that anything effectively computable at all is computable by a Turing machine. The standard models of computation (lambda calculus, recursive functions, and so on) have been proven equivalent, in the sense that they permit precisely the same computations. So the unsolvability of the halting problem is a result about all models of computation.
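To make the diagonal argument concrete, here's a minimal sketch in Python. The function halts below is hypothetical, a stand-in for the impossible decider; the whole point is that no such function can exist:

```python
def halts(f, x):
    """Hypothetical decider: returns True iff f(x) would halt."""
    raise NotImplementedError  # stands in for the impossible decider

def diag(f):
    """Do the opposite of whatever halts predicts f does on itself."""
    if halts(f, f):
        while True:       # predicted to halt, so loop forever
            pass
    else:
        return            # predicted to loop, so halt immediately

# If halts were total and correct, diag(diag) would halt exactly when
# halts(diag, diag) says it doesn't -- a contradiction, so no correct
# halts can exist.
```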
What import does this have for the knowability of mutually incompatible axioms like the two you're describing? (And they really are mutually incompatible; see the note at the end of this post.) None, unless you believe both that:
(1) the problem of deciding which of these two axioms is true is a problem that a Turing machine, or any other model of computation, cannot solve.
(2) our minds are computers. More precisely, that *our* ability to decide between the two of them is a computational ability, and that this ability implements a model of computation falling under the Church-Turing thesis.
I see no good reason to believe (1) or (2).
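(For the record, the incompatibility itself is settled mathematics, not something up for grabs:

$$\mathrm{ZF} \vdash \neg(\mathrm{AC} \wedge \mathrm{AD}),$$

since AC yields a non-measurable set of reals via Vitali's construction, while AD implies that every set of reals is Lebesgue measurable. What's open, if anything, is which axiom to adopt, and that's not obviously a computational question.)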
― Euler, Friday, 25 September 2009 07:56 (sixteen years ago)
Would be interested to hear more about (2). I read Penrose a long time ago, didn't find him that convincing, iirc his argument went something like "oooOOOOoooo Gödel oooOOOOoooo quantum physics oooOOOOOOoooooo".
― this must be what FAIL is really like (ledge), Friday, 25 September 2009 09:13 (sixteen years ago)
what good reason is there not to believe (2)?
― tomofthenest, Friday, 25 September 2009 09:56 (sixteen years ago)
I don't have a knock-down argument against (2), but it's not as though the only option remaining is to accept (2). Especially for a claim as massive as (2), shouldn't the default stance be to wait for good evidence before accepting it?
And I don't know what the evidence at this point is supposed to be. AI hasn't panned out. My own views at this time are like Hubert Dreyfus' in What Computers Can't Do. You can get a good summary of those views in the interview here.
I think a main reason people accept (2) is ideological: it comes from a desire to see nature as mechanized. But just because you want to believe nature is mechanized doesn't mean that it is.
― Euler, Friday, 25 September 2009 10:23 (sixteen years ago)
Speaking for myself it's not a desire, it's just that, despite my strong dualist tendencies, I don't know what it means, for something to be non-computable. The arguments against computationalism are all negative - "consciousness|qualia|intuition|expertise|whatever seems to be [not even is, just seems to be] non-computable, therefore computationalism is false". But no-one knows, if it's not computation, what it is instead.
But I kinda promised myself I'd stop thinking about this, 'cause it's always just the same old merry-go-round.
― this must be what FAIL is really like (ledge), Friday, 25 September 2009 11:00 (sixteen years ago)
huh I have now heard two references to Hubert Dreyfus in two days, having never heard of him before.
This is not my area, but my inclination for (2) is to say that it's different because the mind isn't a closed system in the way that a computer program is? Both in terms of the mind's immediate and contingent access to extraneous information, and through ongoing sensory perception. And also in that the mind's computational processes can adapt as it works through a problem. Excuse me if I'm missing the point entirely.
― Akon/Family (Merdeyeux), Friday, 25 September 2009 11:13 (sixteen years ago)
Yeah, I don't know exactly what cognition is, if not computation, but I don't want to conclude from that that cognition is computation. We just don't understand cognition very well at this point.
― Euler, Friday, 25 September 2009 11:40 (sixteen years ago)
xpost, interested in what you mean by "AI hasn't panned out". Do you mean, we don't have robots you can have a chat with yet?
― tomofthenest, Friday, 25 September 2009 12:22 (sixteen years ago)
Re. AI: sure, that would be one sign of success. I'm talking about what's called "strong AI": a machine that isn't just good at some specialized task like playing chess, but is flexible across domains. We've nothing like that even on the horizon.
― Euler, Friday, 25 September 2009 12:54 (sixteen years ago)
It'll creep up on us, and I'm sure it won't appear to be "human being" intelligent for a long time.
Then again, how do you know I'm not a bot?
― tomofthenest, Friday, 25 September 2009 14:08 (sixteen years ago)
We now have flesh-eating robots, what more do you want? (xpost)
― Garnet Memes (James Redd and the Blecchs), Friday, 25 September 2009 14:09 (sixteen years ago)
otm.
― caek, Friday, 25 September 2009 14:13 (sixteen years ago)
I like the analogy John Searle came up with, which was something like: "the planets are not calculating differential equations as they go through their orbits. They are not in that line of business."
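To make the contrast concrete, here's a toy sketch (the constants and step size are illustrative, not physical) of what actually being in that line of business looks like: a computer simulating an orbit really does calculate the differential equation, step by step, whereas the planet just moves.

```python
# Forward-Euler integration of the two-body equation r'' = -GM * r / |r|^3.
# This is what "calculating the orbit" looks like; the planet does
# nothing of the sort.
def simulate_orbit(x, y, vx, vy, steps=1000, dt=0.01, gm=1.0):
    for _ in range(steps):
        r3 = (x * x + y * y) ** 1.5
        ax, ay = -gm * x / r3, -gm * y / r3   # gravitational acceleration
        vx, vy = vx + ax * dt, vy + ay * dt   # update velocity
        x, y = x + vx * dt, y + vy * dt       # update position
    return x, y

# Roughly circular orbit: unit radius, unit tangential speed, GM = 1.
print(simulate_orbit(1.0, 0.0, 0.0, 1.0))
```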
― Garnet Memes (James Redd and the Blecchs), Friday, 25 September 2009 14:37 (sixteen years ago)