Posts by Kracklite

  • Hard News: A thing that rarely ends well,

    I mean has he actually tried to do what his thought experiment suggests would be a piece of piss?

    Well, the point of thought experiments is that you don't do them physically, but think about their logical implications. There's no point in actually putting a cat in a box. All you get is an angry cat. And a very dangerous angry cat if it's Greebo.

    Thought experiments are useful in generating testable hypotheses and in the end, observation trumps hypothesis.

    Still, Searle may be stacking his deck, but biologists, neurologists in particular, seem to be genuinely baffled as to why the brain bothers spending its huge metabolic budget on consciousness. It certainly interests me and I wonder if our big brains and minds are a sort of evolutionary equivalent of Tulipmania.

    It may seem that way to a human but it sure doesn't to an engineer.

    Um, you might want to consider the implications of that sentence.

    I take your point however, because

    it's the low level stuff that he/she does display, like picking up which object in a sentence you are talking about from context.

    I have a feeling that we'll find out what consciousness is by digging into the squishy stuff rather than through ontological discussions with AIs... and then I wonder if there would be crises in hospitals because the signs for the oncology departments are misspelled and patients are admitted with ontological problems and suffer acute existence failures. Then I have a shot of Scotch and things are much better.

    The idea in the mind is perhaps not exactly the same thing as any word or set of them

    And thus were launched the careers of a thousand poststructuralists.

    Beware the Derrida gang.

    The Library of Babel • Since Nov 2007 • 982 posts Report

  • Hard News: A thing that rarely ends well,

    Come to think of it, much of the confusion and many of the hopes that people have about AI may come down to language. I remember an article in the back pages of a New Scientist a few weeks back (can't remember the exact one) on Native American languages and the difficulties of translation. One couldn't, for instance, find a noun like 'parrot', so translators ended up with all sorts of ambiguous terms. Instead of straight, simple nouns (in our terms), some languages would have 'being [verb] of parrotness here with me'. A dead parrot would be something like 'process of being an ex-parrot decomposing here.' All very experiential and contextually contingent, so that everything is identified according to its existence within a system. Discrete packages of 'I' and 'it' were not tenable.

    In an Aristotelian system, there 'must' be binaries of self/other, so if a computer does something, it is because it's an entity, rather than one's use of it creating a loop of action and feedback.

    Blame English. Language is a virus and all that.

    The Library of Babel • Since Nov 2007 • 982 posts Report

  • Hard News: A thing that rarely ends well,

    And on the AI front,

    There is no disputing that she 'comprehends' what is going on each way, with the aid of that tool.

    Here's where I run away and hide under Donna Haraway's skirts and tell you that you're married to a cyborg. That indeed is the principal objection to Searle - that the room itself comprehends.

    While your wife is conversing in German with intent, in Searle's model, the little man in the room is indulging in purely formal play, so I don't think that the analogy is exact.

    Searle's ultimate point is one of scientific parsimony - that comprehension is not necessary for the room to operate.

    Actually, on the cyborg issue, Samuel Butler wrote about that wayyy back in the mid-late 19th century in some letters to the Christchurch Press and later in Erewhon. A person who incorporates tools into the system of their being is, from one point of view, simply a highly organised person. Lynn Margulis, the microbiologist who advanced the symbiotic model for cellular evolution, devotes significant parts of some of her books (with Dorion Sagan) to exploring the future evolutionary implications of our integration with technology.

    If people like, say, Damasio are correct (his model is outlined in a book called The Feeling of What Happens), then consciousness, me, the feeling of 'I', the Pointy-Haired Boss, is the little man and my brain is the Chinese Room. Who cares if I connect it by various means to technological prostheses (be they the internet or a performance-enhancing drug such as caffeine)?

    Interesting times...

    I doubt it. An amusing parody could be written, but everyone would know the difference, the moment they flamed it and it failed to grasp anything about what they were saying, or display any understanding of anything outside of simple abuse

    Are you sure of that? That's exactly what they do. Even if there were subtle lacunae in their responses that would be visible to anyone who looked closely, most people wouldn't. In fact, much of theory of mind is really projection - and we project a lot onto inanimate, nonsentient objects such as cars, swords, ships, robots (toy models of the Sojourner Mars rover sold out in record time), dolls and so on all the time. These things offer the most minimal cues and yet people - not just children - attribute intentions and characters to them.

    Then how do I know that you're real and I'm not a brain in a jar?

    Anyway, I don't have a point to prove here really, since this started with a throwaway quip and I'm grateful for the discussion. I'm intrigued by your descriptions of people's responses to 'smart' versus 'dumb' machines, certainly.

    The Library of Babel • Since Nov 2007 • 982 posts Report

  • Hard News: A thing that rarely ends well,

    Hell, I'm off to the airport. Bloody Clark.

    ... and other ironic and earnest comments on that topic...

    OK, my experience in a small segment, academia. There are a number of reasons for emigration that I can cite that have little to do with KKKlark, the effing Treaty, dandruff etc.

    One: in a number of professions, architecture and design among them, universities are churning out far more graduates than there are jobs. This is the result of 'aspirational' marketing and enrolment (and that's why I hate that particular buzzword). This is compounded by the fact that these professions have international cultures and no-one can get real experience without at least a wanderjahr. Also, architecture in particular is very cyclic, depending on levels of investment in building, which can fluctuate wildly. Where to go when the market stagnates but overseas? Housing, for example, is in trouble now...

    (Actually, a friend of mine is involved in new fitouts of buildings put up during the eighties boom and he has some horror stories about the shoddy quality of construction. Now, on paper they met earthquake safety regulations, but in construction, well... Anyone working in a Wellington office building built in the eighties should start looking for a new job - quickly.)

    As for the universities themselves, of course they depend on international exchange and high specialisation. Most of my friends and colleagues there have gone to, or come from, overseas. Business is no doubt similar; globalisation is simply the rule now, and has been for a long time.

    With increasingly specialised work and limited contracts replacing permanent employment, a mobile workforce is inevitable and unstoppable.

    The only reasons I haven't gone are the climate, and that Australia is no less isolated than New Zealand from the sources of information I need for my research. London, on the other hand, offers easy access to the continent in a mere couple of hours' flight time. New Zealand can't change that fact without some seriously accelerated plate tectonics (all the more reason to move out of those eighties office blocks!).

    Eventually, when I finish my PhD, I'll probably be off to the UK - and not necessarily because I want to - my heart's firmly rooted in Dunedin (but then I've lived in Wellington since 1985 anyway).

    Certainly there are many factors involved in emigration; people can articulate complex motivations through simple frustrations, and these can be put into simple, crude statistics and used to support any hare-brained bit of rhetoric. There are liars, damned liars and political opportunists who use statistics - but we know that, don't we?

    The Library of Babel • Since Nov 2007 • 982 posts Report

  • Hard News: A thing that rarely ends well,

    Hi Ben, most interesting.

    I think Turing thought sentience or consciousness was an offshoot of intelligence.

    Well, I can't model Turing's actual thoughts on that (heh), but in the early days of computer science, some optimistic people were thinking that consciousness was an inevitable corollary of intelligence, which they thought was demonstrated by raw calculating ability. That line of thought was quickly refuted. A common scientific joke is that hypersonics/string theory/fusion/AI is very promising - and has been for a long time.

    Searle's thought experiment of the Chinese Room seeks to illustrate his contention that a complex system following purely formal rules can give the appearance of comprehension without actual comprehension.
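
    To make the shape of that concrete, here's a toy sketch in Python (every pattern and phrase below is invented for the occasion, and it's nothing like a serious model of language): a purely formal rule book can hand back plausible replies while representing no meaning at all.

        import re

        # A toy illustration (not Searle's argument itself): replies come from pure
        # pattern matching on symbols. The patterns and phrases are invented for this
        # example; no meaning is represented anywhere in the programme.
        RULE_BOOK = [
            (re.compile(r"\bhello\b|\bhi\b", re.I), "Hello! How are you today?"),
            (re.compile(r"how are you", re.I), "Very well, thank you. And you?"),
            (re.compile(r"weather", re.I), "Mild and pleasant, I believe."),
        ]

        def room_reply(message: str) -> str:
            """Look the symbols up in the rule book and hand back the matching string."""
            for pattern, reply in RULE_BOOK:
                if pattern.search(message):
                    return reply
            return "How interesting. Do go on."  # default deflection, still purely formal

        print(room_reply("Hello there"))         # looks like comprehension...
        print(room_reply("How's the weather?"))  # ...but it is only table lookup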

    Consciousness is a very controversial topic at the moment, but, as a consequence, very interesting as various disciplines try to pin it down or show that unconscious processes are more efficient (Peter Watts, biologist and sf writer, puts it thus: if your brain were Dilbert, your consciousness would be the Pointy-Haired Boss).

    I'm certainly not going to get into the essentialist argument that there is some unique quality to the human mind that no machine can ever duplicate, but current thinking is that intelligence is not consciousness and simply increasing its quantity does not produce the quality of consciousness. Very dumb creatures can be conscious; very 'intelligent' machines that can model weather systems and so forth show no signs of consciousness whatsoever. As you say,

    I think we still have quite a long way to go before there's any kind of general intelligence in machines that even vaguely resembles human intelligence.

    Or ESL people as you say, yes. I did see a transcript of a conversation between a human interrogator and a programme that gave an impression of an obsessive Trekkie. I'm sure someone could write a programme that simulates the posts of Redbaiter or DFJ - and I don't mean that as a joke. Their monocausotaxophilia (attributing everything to one cause - 'socialism'/'dykeocracy' in their cases), rigidly stereotyped statements and limited vocabulary are eminently imitable.
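
    To labour the point with a toy sketch (the single 'cause' and the templates below are placeholders of my own, not anyone's actual words), a rigidly monocausal poster needs very little machinery:

        import random

        # Toy sketch of how little machinery a rigidly stereotyped poster needs.
        # The single "cause" and the templates are invented placeholders, not quotes.
        CAUSE = "the usual bogeyman"
        TEMPLATES = [
            "Once again, {cause} is to blame for {topic}.",
            "Wake up! {topic} is exactly what {cause} wants.",
            "Typical. {topic}? Look no further than {cause}.",
        ]

        def stereotyped_post(topic: str) -> str:
            """Map every topic onto the same single cause: monocausotaxophilia by template."""
            return random.choice(TEMPLATES).format(cause=CAUSE, topic=topic)

        for topic in ("the exchange rate", "traffic in Wellington", "the weather"):
            print(stereotyped_post(topic))

    Swap in the appropriate obsession and a scraped vocabulary and it would survive a casual skim, which is rather the point.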

    And

    But who knows, it could just be a small change in paradigm and architecture.

    Indeed.

    Certainly the hardware is powerful enough now, something we couldn't really say when I was first studying AI.

    There's been some interesting work lately in simulating a mouse brain (well, half of one), so in principle, yes...

    You probably know about that, right?

    Google, for instance, is better than most human librarians at finding you the info you need.

    I know some librarians who would probably like to make balloon animals with your intestines for saying that. :)

    I don't know about Web 1.0/2.0, but the 'semantic web' or 'Web 3.0' would probably start to demonstrate Chinese Room-like qualities successfully.

    Of course a lot of the debate about AI and consciousness comes down to it actually being two different questions: 'What is the feeling of "I"?' and 'How can we make this machine perform better?' One's a philosophical issue that has been taken up by neuroscientists, with practical and ethical consequences in medicine, and the other is a technological one with philosophical implications.

    (As one commentator at the latter extreme put it, 'Who cares if submarines really swim or not?')

    Of course I'd like to be confident that I could have the choice of downloading my mind into a machine when this body finally gives up rather than it simply being the equivalent of an autobiographical hypertext or an animated portrait.

    I'm getting into lecturer mode... thank you for giving me the opportunity.

    The Library of Babel • Since Nov 2007 • 982 posts Report

  • Hard News: A thing that rarely ends well,

    Come to think of it, I would consider voting for Max Headroom.

    The Library of Babel • Since Nov 2007 • 982 posts Report

  • Hard News: A thing that rarely ends well,

    I believe the Turing test is intended to be a sufficient but not necessary proof of "intelligence", not humanity.

    Actually it is a test of neither, but - supposedly - of sentience or consciousness. Alan Turing proposed it as a means of examining, though not necessarily settling, the question of whether computers actually think or merely respond without comprehension. One questions a computer and a real human and compares their responses, and the computer is considered to have passed once it is indistinguishable from the human (the bare shape of the exchange is sketched at the end of this post). To do so, it would have to display some 'theory of mind', that is, the ability to model the inner thought processes of another and to reflect upon one's own situation through discourse. Thus far no software has been able to simulate a human being for long, but in theory one could. Would it actually be conscious, however, or just a skilled mimic?

    Well, that's a philosophical can of worms, becoming now a neurological can of worms as neuroscientists start asking, 'Well what do we mean by "consciousness" anyway?'

    Then there's Searle's Chinese Room for hours and hours of more fun...
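
    And since the test itself is only a protocol, here is a toy sketch of its shape in Python (the respondents are stand-in functions of my own; in the real game one channel would be a person at another terminal, and the judge would be a person too, not code):

        import random

        def machine(question: str) -> str:
            # Stand-in respondent; a real candidate programme would go here.
            return "That is an interesting question. Could you rephrase it?"

        def human(question: str) -> str:
            # Stand-in for the person at the other terminal.
            return input(question + "\n> ")

        def imitation_game(questions, judge) -> bool:
            """Hide both respondents behind anonymous labels; the judge must name the machine."""
            labels = {"A": machine, "B": human}
            if random.random() < 0.5:
                labels = {"A": human, "B": machine}
            transcript = {label: [(q, respond(q)) for q in questions]
                          for label, respond in labels.items()}
            guess = judge(transcript)            # the judge returns "A" or "B"
            return labels[guess] is not machine  # True: the machine escaped detection

    Run over enough rounds and enough judges, the candidate 'passes' when the guesses do no better than chance.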

    The Library of Babel • Since Nov 2007 • 982 posts Report

  • Hard News: A thing that rarely ends well,

    My distrust of him stems not from any personal dislike (how can one dislike a void?), but that he tries too hard to say what he thinks people think he should say (now there's a hall of mirrors!). It doesn't give me confidence that he has coherent principles or a structure of reasoning to generate those principles - just reflexes and an obsequious desire never to be caught saying 'no'.

    Someone said of someone else (both famous) that he was like a cushion, bearing the imprint of whoever sat on him last. I get that feeling about Key - and in government he'll surround himself with all the old plutocrats and technocrats.

    Well, I tend to be cynical about any man in a suit.

    I wonder if the wrong interview technique is being used. Time for the Turing Test perhaps? Or maybe one could set up some interesting tape loop experiments by playing recordings of him back to himself?

    The Library of Babel • Since Nov 2007 • 982 posts Report

  • Hard News: A thing that rarely ends well,

    This morning at water coolers all the talk was "jobs for boys" and there was surprise when the cooler people discovered it to be an "honourary" position.

    Missed the Key interview on this matter on Nat Rad because I was putting the rubbish and recyclables out (OK, laboured simile, sorry), but I caught the listener feedback before the nine o'clock news and it was unanimously negative. RNZ tries to be 'balanced' these days and there is no shortage of Colonel Blimps in their audience, but the most charitable description of Key read out was 'lightweight'.

    The TV news people may have their story to tell, but that observation, and that there was 'surprise' rather than, say, scepticism around the water cooler at this manufactured scandal (or as Sean Plunket put it, 'scandelette'), would suggest that people are starting to notice a whiff from Key's, um, 'fertiliser'.

    The Library of Babel • Since Nov 2007 • 982 posts Report

  • Hard News: Not so much ironic as outrageous,

    Indeed, best wishes and hopes.

    The Library of Babel • Since Nov 2007 • 982 posts Report
