Week 12: April 7 (Easter)
Assigned Readings
1. Mandel, T. (1997). "Social User Interfaces and Intelligent Agents". In Elements of User Interface Design.
2. Video: Beyond Human
Additional References
1. Knuth, D.E. (2001). 'Things a Computer Scientist Rarely Talks About'. CSLI Publications, Stanford, CA. ISBN 1-57586-327-8
2. Stewart, I., and Cohen, J. (1997). 'Figments of Reality: The Evolution of the Curious Mind'. Cambridge University Press. ISBN 0-521-57155-3
3. Turing Test, from The Alan Turing Scrapbook
4. Turing Test: http://cogsci.ucsd.edu/~asaygin/tt/ttest.html#intro [excerpt below]
5. Turing Machines: http://www.turing.org.uk/turing/scrapbook/machine.html and http://plato.stanford.edu/entries/turing-machine/
6. Searle's Chinese Room (a response to the Turing Test): http://www.artsci.wustl.edu/~philos/MindDict/chineseroom.html
7. A lovely description of Strong vs. Weak AI by Dr. Stephen Westland from the University of Derby: http://colour.derby.ac.uk/colour/people/westland/cfls7.html
Response
A.I. Equipment

[Photo: Katrin demonstrates how to wear the A.I. equipment. (You'll have to find instructions on its proper use elsewhere.)]
A couple of excerpts from Knuth's book (Donald Knuth is one of the undisputed fathers of modern programming):

From a panel discussion on "Creativity, Spirituality, and Computer Science", Nov. 17, 1999. [Panel members: Donald Knuth; Anne Foerst, MIT; Harry Lewis, Dean of Harvard College; Guy L. Steele Jr., Sun; Manuela Veloso, CMU; Mitch Kapor, Lotus]

A question from the audience: "I have a two-year-old and a computer. I am quite certain that the two-year-old is conscious and quite certain that the computer is not. However, the two-year-old is not fully conscious and the computer shows bits of consciousness at times. It seems to me that there is something motivating the two-year-old that is not present in the computer. Until we get some sense of where that comes from, we are not going to make a whole lot more progress on the AI stuff. Consciousness seems to me to be an interesting source. I think this question of materialism is key. Do you believe that consciousness arises out of material events? Or, do you believe that material events arise out of consciousness?"

Guy L. Steele Jr. [Distinguished Engineer at Sun], in reply to something Hofstadter (of "Goedel, Escher, Bach" fame) wrote in 1981: "There is a possibility that the physical structure of the universe may be such that the only feasible embedding of intelligence - in a small enough space that you are not subject to speed of light considerations, and can interact with human beings in real time, at their natural speed - may be the biochemical one. In fact, we may run into problems trying to build electrical, silicon, or whatever computers out of other stuff than what our heads have been made out of, trying to get it into a small enough space that the pieces can interact quickly enough so that they can have conversations with us. That is a possible technical limitation that we shouldn't overlook in the debate."
Strong vs. Weak AI: While I doubt you will find agreement even among the so-called experts, people dealing and working with AI fall roughly into two categories: the "believers", or Strong-AI proponents, and the non-believers, the Weak-AI folks. Since there is no conclusive proof behind most of what is done in AI, the discipline often ends up taking on cult status (zealous devotion to a person, ideal, or thing). The believers will tell you it is only a matter of time before we create an intelligent machine. The non-believers are not so self-assured. While many Strong-AI folks exist among computer scientists, the 'buy-in' to the religion is far greater among those who do not actually know how computers work. I think this is telling.
"The future of computing will be 100% driven by delegating to, rather than manipulating, computers." While this may be true for many, and maybe even most, users now, the fundamental functioning of the computer has not changed in the last 50 years. Someone must still know how to actually make those delegating utilities work.

There is also a complaint that whenever something works in AI, it gets called something else. Perhaps this is because once we get it working we realize it is NOT intelligence after all.

Computer Science is in the unfortunate position of being a discipline that can call nothing its own. Everything we know and do comes from another discipline. This makes us a little defensive. It is also a discipline that many sophisticated computer users believe they understand by virtue of the fact that they know how to use the applications software, or can piece together components. According to some of us, this is *not* computer science, nor does it qualify as understanding the machine. A colleague of mine once said that architects have a similar problem: everyone lives in a house or apartment, so everyone seems to think they 'know' architecture.

People who are experienced computer users wonder what computer scientists do. Over the last decade or so, there has been a quiet divergence of researchers into two fairly distinct groups: those who use the computer in sometimes very sophisticated ways but who remain 'users', and those who are able to create new tools. The gap between the tool-users and the tool-makers is as wide as it has ever been. The tools are sometimes very complex and require considerable expertise to use effectively, but they do not come about by spontaneous generation: someone must create, design, and implement them. These are computer scientists.

"Any sufficiently advanced technology is indistinguishable from magic." - Arthur C. Clarke. There is no denying that software has become very complex and incredibly sophisticated.
The complexity has outstripped the ability of even many computer scientists (and virtually all software engineers) to comprehend. This does NOT imply, though, that it should be mistaken for magic, NOR for intelligence.

One of the claims of the article is that the use of intelligent agents, expert systems, and AI will allow users to interact with the computer in human ways (and language), rather than computer ways (and language). Even Alan Turing thought we'd be there by now. This should not be surprising: the theoretical machine on which all modern computers are based, known as the Turing Machine, is in principle capable of computing anything that is computable. Unfortunately, human forms of communication are so context-sensitive and subjective that we have not in fact come very far at all. We still don't understand enough about how we do things to mimic them convincingly.

True speech recognition, as opposed to the simple training systems currently available, is still in its infancy. Computer vision is hard. Computer hearing is even farther behind. Living organisms have a truly amazing capacity for classification and pattern recognition. Much of the optimism shown by Strong-AI disciples comes from a lack of understanding of what is actually involved in making these things happen. The promise that voice recognition is just around the corner is a classic example. The AI believers have been making such promises for over 30 years, yet real, functioning systems remain mysteriously out of reach. The fact is, we still have nothing that is reliable enough to be usable.

Perhaps an example will help put it into perspective: suppose an AI missionary claims his intelligent voice recognition software can achieve 98% accuracy. That sounds pretty good, doesn't it? Well, let's apply that to the words in a book. An average novel contains roughly 8 words per line and 35 lines per page, so one page holds an average of 8 × 35 = 280 words.
If we got 98% of the words right, that would mean we get 2% of the words wrong: five or six words per page. I would not be willing to read a book that had five or six words wrong on every page; it would be too distracting.

It should not be surprising that we are so willing to impose personalities onto our computers. Even with just a small amount of apparent social 'consciousness', people are apt to anthropomorphize their computers. We attribute personalities to our cars too, and our musical instruments, our weapons, not to mention our pets, no matter how lowly. Just because I have given my car a name does not mean that the car possesses some form of intelligence.
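The error-rate arithmetic above can be checked in a few lines. This is a quick sketch using the rough averages assumed in the text (8 words per line, 35 lines per page):

```python
# Back-of-the-envelope check of the 98%-accuracy recognition example.
words_per_line = 8
lines_per_page = 35
accuracy = 0.98

words_per_page = words_per_line * lines_per_page   # 280 words on a page
errors_per_page = words_per_page * (1 - accuracy)  # 2% of them are wrong

print(words_per_page)             # 280
print(round(errors_per_page, 1))  # 5.6
```

So a "98% accurate" recognizer mangles between five and six words on every single page of an average novel.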
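The Turing Machine mentioned above is simple enough to simulate directly, which is part of why it is such a useful theoretical model. Here is an illustrative sketch; the rule table (a toy machine that flips every bit of a binary string and halts at the first blank) is invented for this example and does not come from the readings:

```python
# A minimal Turing machine simulator: a tape, a head, a state, and a
# rule table mapping (state, symbol) -> (new_state, symbol_to_write, move).
def run_tm(tape, rules, state="start", blank="_"):
    cells = dict(enumerate(tape))  # sparse tape: position -> symbol
    head = 0
    while state != "halt":
        symbol = cells.get(head, blank)
        state, write, move = rules[(state, symbol)]
        cells[head] = write
        head += 1 if move == "R" else -1
    return "".join(cells[i] for i in sorted(cells)).strip(blank)

# Toy rule table: invert each bit, moving right; halt on the first blank.
flip = {
    ("start", "0"): ("start", "1", "R"),
    ("start", "1"): ("start", "0", "R"),
    ("start", "_"): ("halt", "_", "R"),
}

print(run_tm("1011", flip))  # -> 0100
```

The point is not that this toy machine is intelligent; it is that every modern computer is, at bottom, an elaborate version of exactly this kind of mechanical rule-following.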
Copyright (C) 2004 Katrin Becker