Katrin Becker
Back to my Home Page
EDER 679.12
Reading Response 12
Last update: Wednesday, April 07, 2004 12:43 PM

Back to 679 main page
Computer Based Learning II

Week 12 April 7 Easter
Are computers intelligent? Are robots our offspring?
Relating with agents and social user interfaces.
Assigned Readings
1. Mandel, T. (1997). Social User Interfaces and Intelligent Agents. In Elements of User Interface Design.
2. Video: Beyond Human

Additional References
1. Knuth, D.E. (2001). Things a Computer Scientist Rarely Talks About. CSLI Publications, Stanford, CA. ISBN 1-57586-327-8
2. Stewart, I., and Cohen, J. (1997). Figments of Reality: The Evolution of the Curious Mind. Cambridge University Press. ISBN 0-521-57155-3
3. The Turing Test, from The Alan Turing Scrapbook
4. Turing Machines:
http://www.turing.org.uk/turing/scrapbook/machine.html
http://plato.stanford.edu/entries/turing-machine/
5. Searle's Chinese Room (a response to the Turing Test): http://www.artsci.wustl.edu/~philos/MindDict/chineseroom.html
6. A lovely description of Strong vs Weak AI by Dr. Stephen Westland of the University of Derby (http://colour.derby.ac.uk/colour/people/westland/cfls7.html)
And, if you're up for the challenge:
Hofstadter, D. R. (1979). Gödel, Escher, Bach: An Eternal Golden Braid. New York: Basic Books.
Hofstadter, D. R. (1985). Metamagical Themas. New York: Basic Books.
Hofstadter, D. R. and the Fluid Analogies Research Group (1995). Fluid Concepts and Creative Analogies: Computer Models of the Fundamental Mechanisms of Thought. New York: Basic Books.
Response
A.I. Equipment.....
Katrin demonstrates how to wear the A.I. equipment..... (You'll have to find instructions on its proper use elsewhere)
A couple of excerpts from Knuth's Book: (Donald Knuth is one of the undisputed fathers of modern programming.)

From a Panel Discussion on "Creativity, Spirituality, and Computer Science" Nov. 17, 1999 [Panel members: Donald Knuth; Anne Foerst, MIT; Harry Lewis, Dean of Harvard College; Guy L. Steele Jr., Sun; Manuela Veloso, CMU; Mitch Kapor, Lotus]

A question from the audience: "I have a two-year-old and a computer. I am quite certain that the two-year-old is conscious and quite certain that the computer is not. However, the two-year-old is not fully conscious and the computer shows bits of consciousness at times. It seems to me that there is something motivating the two-year-old that is not present in the computer. Until we get some sense of where that comes from, we are not going to make a whole lot more progress on the AI stuff. Consciousness seems to me to be an interesting source. I think this question of materialism is key. Do you believe that consciousness arises out of material events? Or, do you believe that material events arise out of consciousness?"

Guy L. Steele Jr. [Distinguished Engineer at Sun], in reply to something Hofstadter (of "Goedel, Escher, Bach" fame) wrote in 1981. "There is a possibility that the physical structure of the universe may be such that the only feasible embedding of intelligence - in a small enough space that you are not subject to speed of light considerations, and can interact with human beings in real time, at their natural speed - may be the biochemical one. In fact, we may run into problems trying to build electrical, silicon, or whatever computers out of other stuff than what our heads have been made out of, trying to get it into a small enough space that the pieces can interact quickly enough so that they can have conversations with us. That is a possible technical limitation that we shouldn't overlook in the debate."

Step 1: Define "Intelligence". This is KEY, and, to my mind, still open for debate. Until we can agree on a common definition for intelligence, there can be no common description of artificial intelligence either.
"We could define the intelligence of a machine in terms of the time needed to do a typical problem and the time needed for the programmer to instruct the machine to do it."
John Nash, 1954
"I tell my students that artificial intelligence is a property that a machine has if it astounds you."
Herbert Freeman

Can we separate intelligence from consciousness; from awareness; from free will?
Strong vs Weak AI:

While I doubt you will find agreement even among the so-called experts, people dealing and working with AI fall roughly into two categories: the "believers", or Strong-AI proponents, and the non-believers, who are the 'Weak-AI' folks. Since there is no conclusive proof behind most of what is done in AI, the discipline often ends up taking on cult status (zealous devotion to a person, ideal, or thing). The believers will tell you it is only a matter of time before we create an intelligent machine. The non-believers are not so self-assured. While many strong-AI folks exist among computer scientists, the 'buy-in' to the religion is far greater among those who do not actually know how computers work. I think this is telling.
"The future of computing will be 100% driven by delegating to, rather than manipulating, computers." While this may be true for many users, and maybe even most by now, the fundamental functioning of the computer has not changed in the last 50 years. Someone must still know how to actually make those delegating utilities work.

There is also a complaint that whenever something works in AI, it gets called something else. Perhaps this is because once we get it working we realize it is NOT intelligence after all.

Computer Science is in the unfortunate position of being a discipline that can call nothing its own. Everything we know and do comes from another discipline. This makes us a little defensive. It is also a discipline that many sophisticated computer users believe they understand by virtue of the fact that they know how to use the applications software, or can piece together components. According to some of us, this is *not* computer science, nor does it qualify as understanding the machine. A colleague of mine once said that architects have a similar problem: everyone lives in a house or apartment so everyone seems to think they 'know' architecture. People who are experienced computer users wonder what computer scientists do.

Over the last decade or so, there has been a quiet divergence of researchers into two fairly distinct groups: those who use the computer in sometimes very sophisticated ways but who remain 'users', and those who are able to create new tools. The gap between the tool-users and the tool-makers is as wide as it's ever been. The tools are sometimes very complex and require considerable expertise to use effectively - but they do not come about by spontaneous generation: someone must create, design, and implement them. These are computer scientists.

"Any sufficiently advanced technology is indistinguishable from magic." Arthur C. Clarke

There is no denying that software has become very complex and incredibly sophisticated. The complexity has outstripped the ability of even many computer scientists (and virtually all software engineers) to comprehend it. This does NOT imply, though, that it should be mistaken for magic, NOR for intelligence.

One of the claims of the article is that the use of intelligent agents, expert systems and AI will allow users to interact with the computer in human ways (and language), rather than computer ways (and language). Even Alan Turing thought we'd be there by now. This should not be surprising: the theoretical machine on which all modern computers are based is one that is capable of doing anything, and is known as the Turing Machine. Unfortunately, human forms of communication are so context-sensitive and subjective that we have not in fact come very far at all. We still don't understand enough about how we do things to mimic them convincingly. True speech recognition, as opposed to the simple training systems currently available, is still in its infancy. Computer vision is hard. Computer hearing is even farther behind. Living organisms have a truly amazing capacity for classification and pattern recognition.
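To make the Turing Machine less abstract: it is nothing more than a tape, a read/write head, and a table of transition rules - yet that is the whole theoretical basis of every modern computer. Here is a minimal sketch of one in Python (the simulator and the binary-increment rule table are my own illustration, not taken from any of the readings):

```python
# A minimal Turing machine: a tape, a head, and a transition table
# mapping (state, symbol) -> (new state, symbol to write, move L/R).

def run_turing_machine(tape, transitions, state="start", blank="_"):
    """Run until the machine enters the 'halt' state; return the tape."""
    tape = list(tape)
    head = 0
    while state != "halt":
        symbol = tape[head] if 0 <= head < len(tape) else blank
        state, write, move = transitions[(state, symbol)]
        # Grow the (conceptually infinite) tape as the head wanders off it.
        if head < 0:
            tape.insert(0, blank)
            head = 0
        if head >= len(tape):
            tape.append(blank)
        tape[head] = write
        head += 1 if move == "R" else -1
    return "".join(tape).strip(blank)

# Example rule table: increment a binary number written on the tape.
transitions = {
    ("start", "0"): ("start", "0", "R"),  # scan right over the digits
    ("start", "1"): ("start", "1", "R"),
    ("start", "_"): ("carry", "_", "L"),  # ran off the end: turn around
    ("carry", "1"): ("carry", "0", "L"),  # 1 + carry = 0, keep carrying
    ("carry", "0"): ("halt",  "1", "L"),  # 0 + carry = 1, done
    ("carry", "_"): ("halt",  "1", "L"),  # all 1s: prepend a new digit
}

print(run_turing_machine("1011", transitions))  # 1011 + 1 -> 1100
```

That a device this simple can, in principle, compute anything any computer can is precisely why people expected "thinking machines" to follow quickly - and why it is striking that they haven't.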

Much of the optimism shown by 'strong AI' disciples comes from a lack of understanding of what is actually involved in making these things happen. The promise that voice recognition is just around the corner is a classic example. The AI believers have been making such promises for over 30 years, yet real, functioning systems remain mysteriously out of reach. The fact is, we still have nothing that is reliable enough to be usable. Perhaps an example will help put it into perspective: suppose an AI missionary claims his intelligent voice recognition software can achieve 98% accuracy. That sounds pretty good, doesn't it? Well, let's apply that to the words in a book. An average novel contains roughly 8 words per line, 35 lines per page. On one page, we have an average of 8 X 35 = 280 words. If we got 98% of the words right that would mean we get 2% of the words wrong: about 5 or 6 words per page. I would not be willing to read a book that had 5 or 6 words wrong per page - it would be too distracting.
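The back-of-the-envelope arithmetic above is easy to check directly (the page dimensions are, of course, just the rough averages assumed in the paragraph):

```python
# Rough novel dimensions assumed above, not measured from any real book.
words_per_line = 8
lines_per_page = 35
words_per_page = words_per_line * lines_per_page     # 280 words per page

accuracy = 0.98                                      # the claimed 98%
errors_per_page = words_per_page * (1 - accuracy)    # words misrecognized

print(words_per_page, round(errors_per_page, 1))     # 280 errors: 5.6
```

So "98% accurate" really does mean five or six wrong words on every single page - a useful reminder that accuracy figures only sound impressive until you multiply them out.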

NOTE: Intelligent agents ARE sophisticated macros. They are the same thing. One of the biggest barriers to real advances in machine intelligence may well be an inability or unwillingness to accept that sophisticated programs are still just programs.

It should not be surprising that we would be so willing to impose personalities onto our computers. Even with just a small amount of apparent social 'consciousness', people are apt to anthropomorphize their computers. We attribute personalities to our cars too - and our musical instruments, our weapons, not to mention our pets, no matter how lowly. Just because I have given my car a name does not mean that the car possesses some form of intelligence.

This article predates the term "spyware". The authors devote great energy to describing how idyllic computer interactions will be once we have 'intelligent' machines, but fail to recognize the cost.

FROM THE IMITATION GAME TO THE TURING TEST
The Turing Test was introduced by Alan M. Turing (1912-1954) as "the imitation game" in his 1950 article (now available online) Computing Machinery and Intelligence (Mind, Vol. 59, No. 236, pp. 433-460), which he so boldly began with the following sentence:

I propose to consider the question "Can machines think?" This should begin with definitions of the meaning of the terms "machine" and "think."

The Turing Test is meant to determine whether a computer program has intelligence. Quoting Turing, the original imitation game can be described as follows:

The new form of the problem can be described in terms of a game which we call the "imitation game." It is played with three people, a man (A), a woman (B), and an interrogator (C) who may be of either sex. The interrogator stays in a room apart from the other two. The object of the game for the interrogator is to determine which of the other two is the man and which is the woman. He knows them by labels X and Y, and at the end of the game he says either "X is A and Y is B" or "X is B and Y is A." The interrogator is allowed to put questions to A and B.

When people talk about the Turing Test today, what is generally understood is the following: the interrogator is connected to one person and one machine via a terminal, and therefore cannot see her counterparts. Her task is to find out, only by asking questions, which of the two candidates is the machine and which is the human. If the machine can "fool" the interrogator, it is deemed intelligent.
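The modernized protocol is simple enough to write down as pseudocode-like Python. Everything here is my own illustrative framing (the function names, the canned participants, the round count are all invented): the three participants are just functions, the labels are shuffled so the interrogator cannot know in advance which terminal is which, and the machine "wins" if the final guess is wrong.

```python
import random

def imitation_game(ask, guess_machine, human, machine, rounds=3):
    """Sketch of one session of the modernized Turing Test.

    ask(transcript) produces the interrogator's next question;
    guess_machine(transcript) returns her final verdict, 'X' or 'Y';
    human(q) and machine(q) each answer a question with a string.
    """
    labels = ["X", "Y"]
    random.shuffle(labels)                    # hide who sits at which terminal
    players = {labels[0]: human, labels[1]: machine}
    machine_label = labels[1]

    transcript = []                           # (label, question, answer) triples
    for _ in range(rounds):
        q = ask(transcript)
        for label in ["X", "Y"]:
            transcript.append((label, q, players[label](q)))

    guess = guess_machine(transcript)         # which terminal she thinks is the machine
    return guess != machine_label             # True: the machine fooled her

# A trivial run with canned (made-up) participants:
fooled = imitation_game(
    ask=lambda t: "What is 2 + 2?",
    guess_machine=lambda t: "X",              # an interrogator who always guesses X
    human=lambda q: "four, I think",
    machine=lambda q: "4",
)
```

Notice how much the sketch leaves out - which is exactly the criticism: the test says nothing about what intelligence *is*, only about whether an interrogator can be deceived for a few rounds of questions.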

This test has been subject to different kinds of criticism and has been at the heart of many discussions in AI, philosophy and cognitive science for the past 50 years.

Copyright (C) 2004 Katrin Becker