"Body, Brain, and Communication" Iain A. Boal

An Interview with George Lakoff

 

Iain A. Boal, an Irish social historian of science and technics, teaches at the University of California, Berkeley. He is working on a book and film about charisma and healing in 17th-century Ireland and England. George Lakoff (b. 1941) is professor of linguistics at the University of California, Berkeley. His books include (with Mark Johnson) Metaphors We Live By (University of Chicago Press, 1980); Women, Fire, and Dangerous Things: What Categories Reveal About the Mind (University of Chicago Press, 1987); and What Conservatives Know That Liberals Don't (University of Chicago Press, 1995). For more information about Lakoff's field of cognitive linguistics, see his "Conceptual Metaphors" Web site at http://cogsci.Berkeley.edu. This interview appeared in Resisting the Virtual Life: The Culture and Politics of Information (City Lights, 1995), a collection of essays edited by Boal and James Brook that offers, as they write in the preface, a critical view of technology and its associated "values, in our view, too often detrimental to a more human life."

Q. George, I understand you want to make a disclaimer about computers before we begin?

A. Yes. I simply want to say that I am not a computer curmudgeon. Whatever I say today has nothing to do with feeling that the clock ought to be turned back, that computers are terrible things for mankind, or anything of that sort. I work on a computer; I love it. I communicate by e-mail, and it is very important that I do so. I do research with people who design computational models of mind. I have the greatest respect for them as colleagues and for their work, and I think there are enormous and quite obvious advantages in computer technology that are for the better. So, with that disclaimer, let me talk about things that perhaps are mistaken or oversold.

Q. Perhaps we could start this way: You are well known for your work on language and metaphor, and in particular for a criticism of the conduit metaphor in relation to language. Can you tell us what the conduit metaphor is, why you are critical of it, and how it relates to computers?

A. The conduit metaphor is a basic metaphor that was discovered by Michael Reddy. He observed that our major metaphor of communication comes out of a general metaphor for the mind in which ideas are taken as objects and thought is taken as the manipulation of objects. An important part of that metaphor is that memory is "storage." Hence when you store something in memory, you either have to retrieve it or get it to come to you: you recall it. As Reddy observed, communication in that metaphor is the following: ideas are objects that you can put into words, so that language is seen as a container for ideas, and you send ideas in words over a conduit, a channel of communication, to someone else who then extracts the ideas from the words.

Reddy shows that this is the major metaphor that we have for communication, and he gives lots of examples: "I got that idea across to him" or "Did you get what I was saying?" or "It went right over my head" or "You try to pack too many ideas into too few words." A great many expressions are based on the conduit metaphor. One implication is that the meaning is right there in the words.

Q. What is implied by this view of language as communication by conduit?

A. One entailment of the conduit metaphor is that the meaning, the ideas, can be extracted and can exist independently of people. Moreover, when communication occurs, what happens is that somebody extracts the same object, the same idea, from the language that the speaker put into it. So the conduit metaphor suggests that meaning is a thing, that the hearer pulls out the same meaning from the words, and that it can exist independently of beings who understand words.

Q. That probably does seem like an attractive idea to a telephone engineer. It seems to describe quite well what is going on.

A. You are bringing up the question of information theory--the whole understanding of information theory in the popular domain as opposed to information theory as a technical subject, which has to do with signals. Information theory as a popular idea is very much like the conduit metaphor. This, as Reddy points out, is the most common view of what communication and information are. And theories of teaching are based on it. When you say, "We are going to stuff this into your mind" and "You have to regurgitate it on the exam" and so on, you are talking about the conduit metaphor, and in this view of teaching what the teacher tries to communicate to the students is actually communicated to them.
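To make the technical sense concrete (an illustration added here, not Lakoff's wording): in Shannon's theory, the "information" of a source is just its entropy,

    $$ H(X) = -\sum_{i} p_i \log_2 p_i \quad \text{(bits)}, $$

where the p_i are the probabilities of the possible signals. It measures only how statistically uncertain the signal is; Shannon himself set the semantic aspects of communication aside as irrelevant to the engineering problem, which is exactly the gap between the technical subject and the popular, conduit-style reading of it.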

Now that is an attractive idea, and there is a set of cases where it seems to work. For example: We are now drinking tea. If I say to you, "There is tea in my cup," there is no reason to think that you would have any problem understanding what a cup is, what tea is, and what it means for tea to be in the cup. The conduit metaphor works pretty well as a way of understanding what is involved in that communication. But there are a lot of cases where it just fails; in fact, it fails in most cases.

For example, in order for the conduit metaphor to work, the speaker and the hearer must be speaking the same language. If I speak in English to someone who doesn't know English, obviously it isn't going to work. Not only must people be speaking the same language, but they have to have the same conceptual system. They have to be able to conceptualize things in the same way. So if I speak to another speaker of English, from a very different subculture, about a subject where the difference in subcultures matters a great deal, then we may not be communicating. My ideas will not be "extracted" from my words.

The other person I am talking to has to have the right conceptual system to be able to understand what it is I am saying--to make anything like the same sense out of it. In addition, the person I am talking to may have to have pretty much the same kinds of relevant life experiences; he must understand the context in pretty much the same way. If someone understands the context in a totally different way, then the conduit metaphor fails. There is no lack of ways in which the conduit metaphor fails. The conduit metaphor says that if you put your ideas in the right words, communication should just work. But communication isn't so simple. Communication is difficult and it takes a lot of effort. What the conduit metaphor does is hide all the effort involved in communication.

The view of information as something that is separable from human beings is an entailment of the conduit metaphor. It seems natural because that is our major metaphor for communication. Most people don't even see it as a metaphor; they see it as just a definition of communication. As a result, again as Reddy points out, one of the consequences of this is that people think that information is in books. If ideas can be put into words and words are in books, then the ideas can be in books, and the books are in the libraries--or the ideas can be coded into the computer and therefore the information can be in the computer.

Q. How is that wrong?

A. It is wrong in the following way: Let's suppose that we have books on ancient Greek philosophy. Let's suppose we stop training people to speak ancient Greek. Suppose nowhere in the world can people speak ancient Greek, and suppose no one learns ancient Greek philosophy anymore. Can you just go to those books in ancient Greek, about Greek philosophers, and understand them? Clearly, the answer is no. So there is no information in the books per se.

You have to have people who understand the language, who understand the historical context, who understand the ideas involved and the conceptual systems involved. The same thing is true of "information" in the computer. In order for anybody to understand "information," they have to put an interpretation on what comes out of the machine. This is a major problem for all software designers. It is not news to anyone who actually designs software, because the problem for software designers is that people are likely to misinterpret what the designer intends. "User-friendly" software is software that is likely to be understood by the person using it. Information is not straightforwardly in the computer--you have to have human beings trained in specific ways before it makes any sense to talk about having "information" in a computer. What are the consequences of that? Well, there are a great many. For example, take the claim that we now have more information at our fingertips than ever before.

Q. A very common claim. But it's true, isn't it?

A. It is not clear that it is true. Let's take an example: On the World Wide Web there is a lot of software that I have available to me that I could put on my computer. But I don't know how to use most of that software. That software is not all information for me. It might become information for me, if I were to learn certain things, but right now it isn't information for me.

Now, let's take another kind of case. One of the awful things about the conduit metaphor is that it assumes that meaning is objective. So, for example, let's take a clear case where meaning need not be objective. Suppose you consider the FBI files. They're encoded on computers. There are all kinds of data in those files, collected by agents over the past forty or fifty years. For all I know there might be a file on me! I would doubt that what an FBI agent wrote about me twenty-five years ago is objective. What goes into the FBI computer is not information in any neutral sense. It is something that has been subject to interpretation, and upon being seen in a different context it can be interpreted in a different way.

Thus it is not obvious that the FBI's computer has a lot of "information" about some particular person on whom it has a large file. It has what somebody has put in the computer given what they understood and what they took that to mean. But that is not objective information about that person. Does the FBI computer contain objective "information" about people? It may very well not. The FBI files are an extreme case. If you want to take an even more extreme case, look at the KGB files. Do you trust what the KGB has in its files? Do they have a lot of "information" about Americans in their files? It is a very funny idea to think that they have "information" about us, given what has been put in, under what circumstances, and for what purposes.

You go from there to the information in your credit file. There is "information" in your credit file about when you did and didn't pay your credit card on time and things of this sort. That in some way is "objective" information, but of course there are circumstances, interpretations, and so on, because that information is used for a purpose. It is used for the purpose of deciding whether you should get a loan or get credit--it has to do with whether you are trustworthy. That is not an objective matter. Your trustworthiness is not information that can be in a computer. The only information that can be in the computer is whether a certain bill got paid on time, and things of that sort.

Q. Now, I take it that this is always going to be a problem if language is ever reduced to writing. Are you suggesting that it is now acutely more of a problem, given the recent advances in the technics of communication and information?

A. That is exactly right. It is. Of course, it was already a problem with writing. But it is more of a problem when you have artificial intelligence programs taking databases and then reconfiguring them, interpreting them in other ways, making computations based on them. These so-called "intelligent" programs aren't intelligent. The programs just follow algorithms that someone made up. And a conclusion can be arrived at on the basis of such an algorithm. An algorithm might be applied to your credit-rating file to decide whether you should get a loan. The algorithm doesn't know you and cannot decide if you are trustworthy. Such algorithms are being used to make decisions about your life on the basis of the kind of so-called "information" that is in some computers.
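A rough sketch, in Python, of the kind of rule-following being described here; the fields and thresholds are invented for illustration and are not drawn from any real credit-scoring system:

    # Hypothetical illustration: a loan "decision" as a fixed set of rules
    # applied to a record. Nothing here knows the person behind the file.
    def approve_loan(credit_file):
        late_payments = credit_file.get("late_payments", 0)
        income = credit_file.get("annual_income", 0)
        requested = credit_file.get("requested_amount", 0)

        # Each rule is something a programmer made up in advance.
        if late_payments > 2:
            return False
        if income == 0 or requested > 0.4 * income:
            return False
        return True

    # The same record, run through a different rule set, would yield a
    # different "decision" -- the data do not carry their meaning with them.
    print(approve_loan({"late_payments": 1,
                        "annual_income": 40000,
                        "requested_amount": 12000}))   # True under these rules

The procedure applies whatever conditions its author anticipated; it does not, and cannot, judge trustworthiness.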

Q. How did the epithet "intelligent" ever get to be applied to algorithms in a computer?

A. That is a long and interesting story. The first part of the story has to do with formal logic. Gottlob Frege and Bertrand Russell were the developers of mathematical logic. Russell claimed that human rationality could be characterized by mathematical logic. Now, mathematical logic is the precursor to computer programs. The computer database is based on what is called a model for predicate calculus. It has a bunch of entities with properties and relations. All the standard models for first-order logic look like that. And the way in which symbols are manipulated in a computer program comes out of the same kind of mathematics that was developed for the theory of proofs--sometimes also called "recursive function theory," sometimes "the theory of formal systems"--but it's all the same form of mathematics. The idea was that if humans reasoned using mathematical logic, then a computer could reproduce that form of reasoning. If mathematical logic could characterize what human intelligence was, the computer could be intelligent.
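A toy version of what "a bunch of entities with properties and relations" looks like in practice (the names are invented; this is a sketch of the standard picture of a first-order model, not of any particular database):

    # A miniature "model for predicate calculus": a domain of entities,
    # extensions for properties, and extensions for relations.
    entities = {"anna", "ben", "carol"}
    properties = {"employed": {"anna", "carol"}}
    relations = {"manages": {("anna", "ben"), ("anna", "carol")}}

    def holds(prop, x):
        # P(x) is true just in case x is in the extension of P.
        return x in properties[prop]

    def related(rel, x, y):
        # R(x, y) is true just in case the pair (x, y) is in the extension of R.
        return (x, y) in relations[rel]

    # Evaluate "there is a y such that manages(anna, y) and employed(y)"
    # purely by set lookup -- symbol manipulation, with no understanding.
    print(any(related("manages", "anna", y) and holds("employed", y)
              for y in entities))   # True in this toy model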

That's what lies behind the idea. It's false, an utterly false notion, but a lot of people believe it; a lot of people still think it's true. There are several things behind this that are metaphorical. We saw in the conduit metaphor that people don't realize that the conduit metaphor is a metaphor; similarly, a lot of people don't realize that the metaphor "thought is mathematical logic" is a metaphor. It isn't true; it is very far from being accurate. So it is important to understand, one, that it is a metaphor, and, two, that it is false in a great many ways.

Q. Do you say it is false because you know it is some other way?

A. Yes, we know a number of reasons why it is false. The first reason is that it is based on the assumption that reason is disembodied, that reason can be separated from the body and the brain, that it can be characterized in terms of pure form. This is an idea that goes back at least to Descartes.

What has been discovered in the cognitive sciences in the last fifteen or twenty years is that reason is embodied, that concepts are embodied--they have to do with how we function in the world, how we perceive things, how our brains are organized, and so on. It is not a matter of disembodied computation. Moreover, the mechanisms of reason have turned out to be not at all just mathematical logic. There are many other, very different mechanisms at work.

Humans think in terms of what are called "image schemas"--these are schematic spatial relations. For example, if you take the concept "in," it is based on what is called the "container schema," a bounded region of space. The concepts "from" and "to" are based on a "source-path-goal schema," and so on. Different languages organize these schemas in different ways. The schemas are embodied: they are not just disembodied symbols. They have topological and orientational properties that have to do with the way bodies are organized. Mathematical logic just does not capture all of this.

Secondly, there is a lot of reasoning that is metaphorical. As we saw, the conduit metaphor is part of a larger metaphor for understanding what thought is. In general, the way we understand thought is through a set of metaphors. These metaphors are not characterizable in mathematical logic. They do have entailments, but not of the kind that logicians have talked about. For example, take classical categories as defined within mathematical logic, namely by a list of necessary and sufficient conditions. For the most part, human beings don't think in terms of such categories. Humans think in terms of categories that have very different properties: they may be graded (or fuzzy), they may be radial (having central members and extending to other noncentral members), they may have a "prototype" structure, where you reason in terms of typical cases, ideal cases, stereotypes of a social nature, and so on. In short, most of the actual reasoning that humans do is not characterizable by mathematical logic.
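The contrast can be sketched in a few lines (a hypothetical illustration, not Lakoff's formalism; the features and numbers are made up):

    # Classical category: membership defined by necessary and sufficient
    # conditions -- all or nothing.
    def classical_bird(has_feathers, lays_eggs):
        return has_feathers and lays_eggs

    # Graded (fuzzy) category: membership comes in degrees, so some members
    # can be better examples than others (prototype effects).
    def graded_bird(typical_features, total_features=5):
        return typical_features / total_features

    # A robin and a penguin both satisfy the conditions, so the classical
    # category cannot say that one is a better example than the other.
    print(classical_bird(True, True))      # True for robin and penguin alike
    print(graded_bird(5), graded_bird(2))  # 1.0 vs 0.4 -- prototype effects appear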

Q. Does anything follow, then, vis-à-vis modern technologies of communication, from the central fact that human reasoning is embodied in the ways you describe? Is it grounds for relating face-to-face, for "keeping it oral"? Isn't it an argument against certain kinds of mediation, against virtuality?

A. There is indeed a lot that follows for face-to-face communication--and I don't mean face-to-face communication over a video screen. I mean where there is a body present, where there is body language being shown, where there is emotion being shown. For instance, in a book I just received today, Descartes' Error, Antonio Damasio, a neuroscientist who works with patients who have brain injuries, discusses the case of a man who has all his rational faculties--he can reason abstractly quite well--but has lost the capacity to feel. He can feel nothing about poetry, music, sex. It turns out that he does very badly in reasoning about his own life. His life is a mess. Reasoning about his own life seems to depend upon emotional involvement. Damasio's claim suggests that if we turn over important policy decisions to computer programs, then our lives will be a mess, because the emotional component would be absent from decision-making.

Q. That would be a threat to Descartes.

A. Yes, if what Damasio says is true, it suggests that reason isn't separate from emotion, that reason has everything to do with the capacity for feeling. That again would shoot down the idea that mere logical manipulation would be sufficient to maximize self-interest in a situation. That is one part of the problem. The other part has to do with understanding. Computers don't understand anything.

Q. How so?

A. They don't have bodies. They cannot experience things. Most of our abstract concepts are extensions of bodily based concepts that have to do with motion and space, and objects we manipulate, and states of our bodies, and so on. They then get projected by metaphor onto abstract concepts. We understand through the body. Computers don't have bodies.

This does not mean that important aspects of reason cannot be modeled on a computer, and indeed I work with people who are engaged in modeling small aspects of mind. Each small aspect requires a monumental task of analysis and representation, which is not likely to be incorporated into computer technology in anything like the foreseeable future. Perhaps it's so complicated it never will be. But beyond that, there is now no reason whatever to think that the kinds of computations that are done in artificial intelligence programs are "intelligent" in the way that human beings are. All they can do is follow algorithms. Now that does not mean there is no utility in them--in fact, they can be very useful. But it is important to understand that they are not intelligent in the way human beings are and that they don't understand anything at all.

Q. But humans can be reduced to doing the kinds of things that can be done "algorithmically," and surely that is what a lot of labor consists of, especially in modern times.

A. Yes, that is one of the sad things about industrialization--it tries to turn people into machines. Do computers do that, or do they liberate people from machine-like work? To a significant extent, the computer can turn you into even more of a machine. One of the things that disturbs me in working on computers is certain forms of repetition that make me machine-like, and that's what I simply loathe in interacting with a computer. It could be that future user-friendly computers will eliminate that. I hope so.

But there is another important issue we haven't discussed yet: human limitations. You asked whether it was true that there is more information available to us than before. Well, we cannot possibly process all the information that we could understand. There is no way for a human being to do it. I've had to get off many, many e-mail lists simply because, when I get a thousand messages a week, there is no way I can read them. One of the good things about computers is that they enable people to write more; that is, more than you can read.

In many disciplines, largely because of computer technology, more work is produced than anybody can take in. So academic fields are becoming more and more fragmented. Certainly more is being done, but no one can grasp all that is being done or have an overall view of a discipline, as was possible twenty years ago. As a result, new "information" out there is not really knowable. It's not information for you--or for any other human being. There's only so much one can comprehend.

Q. So you must find the "information superhighway" metaphor misleading . . .?

A. Very misleading. Sure, some things about it seem to make sense. A huge array of things may become potentially available to you directly--lectures, texts, movies, whatever.

That is fine, except that every time you take advantage of it, there's something else you can't do. If you think of information as relative to a person, there are only a certain number of waking hours in a lifetime--and you don't want to spend all of them at a computer. Add to that limit the limit on what you can understand and the training it takes to be able to achieve understanding, and there is a strict limit on how much information is available to each person. Already what is available has passed the limit that any person can possibly use. The amount for you cannot grow any further. So however many more different sorts of things may become available to you, it is not more information.

Q. Your account is in terms of bits of information--there is nothing in it of affect or intensity of experience. It seems flattened out.

A. You're right. Actually, most of the so-called interactive stuff is pretty uninteractive! It has to do with some fixed menu, not with being able to probe as you would a person, or to judge or be moved as you would in a live interaction. There have to be canned answers and canned possibilities. The idea of interactive video is rather minimal now and not likely to be very rich or interesting for a very long time.

Q. In an ample life, then, how much weight would one attach to technologies such as the computer and video?

A. One of the sad things is that the increase in computer technology does not get you out into the world more, into nature, into the community, dancing, singing, and so on. In fact, as the technology expands, there is more expectation that you will spend more of your life at a screen. That is not, for my money, the way one should live one's life. The more that the use of computers is demanded of us, the more we shall be taken away from truly deep human experiences. That does not mean you should never be at a computer screen. Nor does it mean that if you spend time at a computer, you will never have any deep human experiences. It just means that current developments tend to put pressure on people to live less humane lives.

Q. Less humane, because, for example, at an automatic teller one has to conform in a mechanical way to the pacing and protocols of a machine?

A. Right, you have to conform, and even if you could say to the automatic teller, "Machine, give me money," you'd still have to say a form of words, the magic words that will get you the money, and you'd still not be interacting with another human in any sense. Similarly, if you have a computer program that enables you to sing with a recorded orchestra, that is very different from singing with live musicians whom you can groove with--who adjust to you and you to them, and with whom you have a human relationship. That doesn't mean that people using good judgment can't know when to stop.

Q. Given what you have said about the powers and limits of human bodies and the new machines, I take it you find chilling recent speculation about "artificial life."

A. This talk about virtual reality and artificial life is at once interesting and silly and weird. Let's start with the positive parts. I could imagine some interesting and fun things to do with virtual reality, and some important ones--for example, ways of guiding surgical operations via virtual reality--so I don't want to put it down. On the other hand, the idea of virtual interactions replacing interaction with real humans or with things made of wood, of paper, of natural materials, with plants, flowers, and animals--that I do find chilling. The more you interact not with something natural and alive but with something electronic, the more it takes the sense of the earth away from you, takes your embodiment away from you, robs you of embodied experiences. That is a deep impoverishment of the human soul.

"Artificial life" is a different kind of issue. There is interesting work going on in complexity theory and in the study of what�s being called "artificial life." But again, it�s being done under certain metaphors, which like the conduit metaphor, are not always understood as metaphors.

Take the idea, common in the study of artificial life, that life is just the organization of matter, and that the organization can be separated from the thing that's organized. Therefore, if you can represent the organization in the machine, then life would be in the machine. A weird idea. That form of reasoning is metaphorical reasoning, extremely strange metaphorical reasoning, yet a form that seems natural given our metaphorical conceptual system.

There is a very general metaphor called the "properties-as-possessions" metaphor. In expressions like "I have a headache" and "My headache went away," you understand your headache as a possessible object, something that you have, that you can lose. The same headache can even return to you. This metaphor suggests that a headache can exist independently of you--which is a very bizarre idea, a metaphorical entailment, a way of understanding aspects of ourselves as if they were objects.

Similarly, there are aspects of ourselves that are organized, but once you see the organization as a possessible object separable from the organism--which it isn't--then you can think of this property as existing independently. Now, thinking that way can be useful--architects think that way. If you isolate the structure of a house, you can draw architectural plans; you can then design buildings more easily. That does not mean, however, that what you have on the plans is the actual structure of a house. The architectural plan is a separate entity, which bears a very indirect relationship to the structure of the house. As soon as you think of the structure of a house as being the architectural plan, that is when the metaphorical entailment takes over. That is where the mistake is.

The same mistake applies in the understanding of artificial life. If the organization is what gives a thing life, then the life is seen as in the organization. Purely a metaphorical idea. And if organization can be modeled in the computer, and life is in the organization, then the metaphorical logic says that life is in the computer. This is a metaphorical inference made by some people who study artificial life.

Q. What is at stake in this whole discussion of metaphor and the new technologies of information and communication? What, if you like, are the politics in these metaphors?

A. There is a great deal at stake, both in terms of politics and economics. To begin with economics: the effects on our lives are likely to be enormous. It won't be long before everybody has perhaps half a dozen wires coming into the house, wires they pay for, not just cable TV. The Internet, for example, is not going to be free for very long. There is a very large economic incentive to make people more and more dependent on this technology. Part of the propaganda behind it is that you will have more information at your fingertips. Well, it will be different information, not more information.

Q. An argument that you have demolished.

A. Yes, in the sense that all this information could not possibly be more information for you. If you have 500 TV channels, how many programs can you watch, even if you wanted to? Then there is the question of who is going to control it. Sometimes that's fine--you and I can put things on the Internet. But advertisers and politicians will, as time goes on, learn to control what is on the Internet in ways they cannot do now.

As you know, I had a remarkable experience putting my paper "Metaphor and War" on the Internet. That was one of the most widely distributed papers ever on the Internet, and it was because, when the Gulf War was about to start, there were many people around the world who found that paper useful and they kept forwarding it to recipients on more and more bulletin boards across the Internet. For me, that was a marvelous thing: the paper was read by millions of people.

I suspect that the Internet is now too big for something like that to ever happen again. People are already too jaded. Eventually, much of what will end up on the Internet will be corporate stuff, advertising, entertainment, material from government agencies, and so on. The possibilities for exercising social control are quite remarkable. Take the way Ross Perot tried to set up these community forums around the country, as if they were real community forums. Fifty million people all with access to Perot--that's ridiculous! Perot is there for an hour: how many can ask him a single question, let alone follow up? Twenty? Well, twenty people have "access" to Perot, not fifty million, and he still controls the format. Politicians will want to make this look like a serious form of inquiry. It isn't.

References

Damasio, Antonio R. 1994. Descartes' Error: Emotion, Reason, and the Human Brain. New York: G. P. Putnam's Sons.

Emmeche, Claus. 1994. The Garden in the Machine: The Emerging Science of Artificial Life. Princeton: Princeton University Press.

Lakoff, George. 1992. "Metaphor and War." In Confrontation in the Gulf. Edited by Harry Kreisler. Berkeley: Institute for International Studies.

Reddy, Michael. 1993. "The Conduit Metaphor." In Metaphor and Thought. 2nd ed. Edited by Andrew Ortony. Cambridge: Cambridge University Press.