Against the Power of Computers and the Destruction of Reason. For the Development of Human Creativity and Judgement. Remembering the Critical Intellectual Joseph Weizenbaum on the Occasion of his 100th Birthday

Klaus Fuchs-Kittowski

Leibniz-Sozietät der Wissenschaften, Berlin, Germany, fuchs-kittowski@t-online.de


Abstract: This article is a reflection on the relevance of Joseph Weizenbaum’s ethics today on the occasion of his 100th birthday. Today, there are many debates about the impact of AI technologies such as ChatGPT on society. Weizenbaum understood himself not as a computer and AI critic, but as a critic of society. He situated the problems of computing in the context of society. The paper shows that, in the spirit of Weizenbaum, we should remind ourselves, also in the contemporary age of advanced AI, that computers cannot understand, do not have feelings, and therefore cannot do many things that humans actively and consciously do.

Keywords: Joseph Weizenbaum’s 100th birthday, informatics and society, AI and society, Artificial Intelligence, ChatGPT

Acknowledgement: This article was translated from German to English by Christian Fuchs. A German version of this article will be published in FIfF-Kommunikation (see https://www.fiff.de/publikationen/fiff-kommunikation)

1.   My First Encounter with Joseph Weizenbaum

I met Joseph Weizenbaum for the first time at the IFIP (International Federation for Information Processing) conference “Human Choice and Computers II” in Baden near Vienna in 1979. The meeting was not entirely unexpected, as I had read his book Computer Power and Human Reason: From Judgement to Calculation (Weizenbaum 1976) on the recommendation of a member of staff at the German National Library. The librarian had called me and said: “A new book has arrived that you should get before it goes through the registration process. It says a lot of the same things you say in your lectures”. The name Weizenbaum had also been mentioned to me before, by the Cologne molecular biologist Benno Müller-Hill. At Samuel Mitja Rapoport’s invitation, we discussed Müller-Hill’s ideas on biology and philosophy, in which he shows how racist thinking runs through biology, from Plato to Ernst Haeckel. At the end of the discussion, Müller-Hill said to me: “I was recently in the USA. I met at least one real personality there, a professor at MIT, Joe Weizenbaum. He would be an important dialogue partner for you”.

I spontaneously invited Joseph Weizenbaum to Humboldt University right after the welcome at the IFIP/TC9 conference. I had never invited anyone before, and certainly not an American in the middle of the Cold War. Weizenbaum took a step back and then came up to me again and said: “The invitation to Humboldt University would be a satisfaction for me”. I took a step back too and asked: “Why?” He replied: “Don’t you know that I’m from Berlin? As a young Jewish boy, I always had to sneak past this university. If this university invites me now, it would be a satisfaction for me”.

In the working group on “Computers and Ethics” for which I was responsible at the IFIP/TC9 conference, Weizenbaum formulated his minimal moral imperative for computer scientists:

 

“Don’t use computers to do what people ought not do” (in Fuchs-Kittowski 1980, 279).

 

At the very least, you shouldn’t do anything with a computer that you shouldn’t do as a human being! We are currently experiencing just how topical this moral imperative is. With autonomous weapons and the use of armed drones, the distance between those who kill and the atrocities of the theatre of war becomes so great that the inhibition threshold for war and killing is lowered to an alarming degree.

2.   A Seminar on the Critique of AI with Joe Weizenbaum at Humboldt University

The joint seminar with Joseph Weizenbaum, which dealt with the problems addressed in his book Computer Power and Human Reason (Weizenbaum 1976) and with our fundamental distinction between the automaton as an information transformer and the creatively active human being capable of generating information, took place seven days after the outbreak of the Soviet Union’s war against Afghanistan.


Weizenbaum explained to the Vice-Chancellor for Social Sciences:

 

“I am here, not because I am a particular friend of the GDR, but because I am an American patriot. As a patriot, I am against the arms race because it is ruining our economy. It’s like sharpening a pencil and throwing it in the wastebasket and repeating it over and over again. It’s grossly dangerous to all our lives!”

 

After his stay at Humboldt University, Joe Weizenbaum called me from Zurich and said: “Klaus, yesterday I spoke in front of more than 1000 people in a church in Zurich! That was really great”. Weizenbaum’s international advocacy of détente and disarmament had an impact right up to the dramatic decisions in the circle around Gewandhaus director Kurt Masur in Leipzig (see Fuchs-Kittowski 2004). Kurt Masur was the conductor of the Gewandhaus Orchestra in Leipzig. On October 9, 1989, the day of the Leipzig Monday demonstrations, Masur was one of the six prominent Leipzigers who wrote the appeal “No violence!”. This appeal was broadcast several times during the demonstration over the loudspeakers of Leipzig’s city radio and contributed significantly to its peacefulness.

As an expert and critical intellectual, you can only mobilise many people if you believe in the principle that all terrible developments are possible and warn against them emphatically. Joe Weizenbaum understood how to do this. Once, for example, when the World Council of Churches met at MIT, Weizenbaum pointed out to the international representatives that dangerous weapons were being designed and developed. This led to the World Council of Churches seriously addressing the issue of disarmament for the first time. A young theologian from Leipzig was very impressed by this. It is therefore no coincidence that he was one of the supporters of Gewandhaus conductor Kurt Masur, who, with his appeal, prevented a “Chinese solution”. The Leipzig demonstrations against the GDR regime were, in contrast to the 1989 Tiananmen Square protest in China, not violently suppressed. When we commemorate Joseph Weizenbaum’s 100th birthday, it is important to remember his commitment to general disarmament. This also includes his founding, together with the famous AI research pioneer Terry Winograd, of the movement Computer Professionals for Social Responsibility (CPSR) in the USA and, together with the German computer science pioneer Christiane Floyd, of the Forum Informatikerinnen und Informatiker für Frieden und gesellschaftliche Verantwortung (FIfF, Computer Scientists for Peace and Social Responsibility).

3.   Joseph Weizenbaum as a Critic of Society

After a seminar organised with us at Humboldt University, Technical University Berlin did not award Joseph Weizenbaum a planned honorary doctorate. My attempt to obtain an honorary doctorate for Weizenbaum at Humboldt University was also rejected, not least because at least one committee member interpreted Weizenbaum’s criticism of Artificial Intelligence, of which he is regarded as a co-founder, as general hostility to technology. Joe Weizenbaum’s basic concern and approach was the following: “I am not an AI critic. I am a critic of society”.

In one of his final talks, Weizenbaum stressed again that the computer has no feelings. The computer lacks basic human capacities. Weizenbaum pointed out that Marvin Minsky had admitted not having succeeded in instilling feelings in robots.

For me, this is the real legacy of Joseph Weizenbaum’s work, the basis of his criticism of AI and society. What is essential for humans is alien to the computer and remains alien to it: feelings. We have to ask AI enthusiasts such as the former director of the Carnegie Mellon Mobile Robots Laboratory (see Moravec 1988) how they think they can transfer a mother’s smile towards her child into a computer memory if they believe that the replacement of human society by a computer society is possible and propagate this in the name of modern science.

Weizenbaum fought against anti-human reductionism. He was, for example, opposed to Herbert Simon’s (1969) idea that ants, computers, and humans are systems of the same kind because they are all information processing systems. This reduction of the human being to the computer, which is inherent in the information processing approach and the physical symbol system hypothesis, is an ideological attitude that is extremely dangerous. Joseph Weizenbaum pointed out in his work Computer Power and Human Reason that this identification of automata and humans distorts reality and that one is then tempted to accept this distortion as a “complete and exhaustive” representation (Weizenbaum 1976, 128). This is what “the computer scientist Herbert A. Simon” describes as “his own fundamental theoretical orientation” (Weizenbaum 1976, 128). Just as previous world wars were fought under the banner of racism and the reduction of humans to animals, today’s wars can be ideologically based on the reduction of humans to their technical creations, to machines.

In his circle of friends, Joe repeatedly made the following remark, which in my opinion deserves much more attention: “I’m not an AI critic, I’m a critic of society”. This is an important statement to bear in mind if you really want to do justice to Joe Weizenbaum’s concerns. He came to the working group on Computers and Ethics that I headed remarking that he was not interested in some AI research group but in ethics. He wasn’t so much interested in discussing the limits of AI development. He only took up this topic later in his lectures on how information is created and where its meaning comes from (Weizenbaum 2002). He was more concerned with the ethical question: If you could in principle do anything with AI, should you do it?

Joseph Weizenbaum’s name is now repeatedly mentioned in connection with the development of ChatGPT by the start-up OpenAI and of Google’s Bard. Weizenbaum created some of the essential foundations for the development of generative AI with his Eliza programme. He was critical of the use of his AI programme Eliza. On the one hand, there was a lack of understanding of his critique. On the other hand, he also received much praise for it. On the occasion of Weizenbaum’s 80th birthday, Hans-Alfred Rosenthal and I wrote a greeting address entitled “J. Weizenbaum – ein kritischer Wissenschaftler par excellence” (J. Weizenbaum: a critical academic par excellence). We said:

“There are perhaps many critical academics, in varying degrees and with different aims of criticism. But those who invent or develop something important or fundamental for the further development of the science in question and then critically scrutinise or even question everything connected with it, including the societal value of their findings and inventions, even if they may be correct per se, are extremely rare specimens of the Homo sapiens species. Joe is such a person, one could almost say such a case. He is a Blue Mauritius of science. Universities and academic associations in Europe invite him to give lectures, and recently even the President of the Czech Republic took notice of him because he is one of those who go against the grain, and awarded and honoured him with a very special shepherd’s crook.

Joe had developed a computer programme that allowed the computer to give ‘answers’ to simple questions that seemed to testify to a kind of human intelligence. Joe was not pleased with the impact his computer had made. After all, he had contributed to making other scientists believe that it might be possible to replace human intelligence, and therefore human nature and ultimately human beings in general, by vastly increasing computing power - which is all a computer can do. But Joe rightly says that the computer doesn’t recognise human emotions such as hope, sadness, joy, chastity, affection, hate, love, and many others, because this can’t be done with computing. He is also one of those who believe that information consists of syntax, semantics, and pragmatics, and we believe that this also applies to genetic information, which is not understood by all molecular biologists. Quite a few experts believe that DNA, as it is present in a cell, is information. However, it is only the syntactic form of genetic information. Even the protein molecules synthesised based on the DNA genome are not yet complete information. We have invented an example to illustrate this circumstance:

Imagine that an experienced molecular biologist, who is also a zoologist but has the shortcoming of never having heard, read or otherwise experienced anything molecular-biological in the field of ornithology in his entire life, is presented with the complete DNA of a chicken. The molecular biologist sequences this DNA and finds out how many genes it represents. With the help of the latest techniques, many of which do not even exist today, the scientist also finds out which proteins are encoded, what their structure and function are, and how they interact with each other. Can the scientist, based on this knowledge alone, visualise in his or her mind what the chicken is like, what its life cycle is like, and what complex functions it can perform? No. The scientist can only recognise the biochemical details. Neither the DNA nor the individual proteins tell us how they interact to form a complex organism. Although DNA is very important, it is not everything that makes up life. The proteins are also very important, but not everything that makes up life. Only the interaction of all components, which we will probably never be able to grasp, not even with the new generations of computers that do not yet exist, makes up life. We must therefore dampen our hopes that DNA research and biochemistry will solve the riddles of life. But the overall view alone will probably not help us much either.

The great Berlin physiologist Emil Du Bois-Reymond (1818-1896) once said: Ignoramus et ignorabimus. There is nothing to add to this except: Happy Birthday, dear Joe, and: Ad me’ah v’esrim – עַד מֵאָה וְעֶשְׂרִים. That is only half of the distance already travelled” (Rosenthal and Fuchs-Kittowski 2003).

4.   Why was Joseph Weizenbaum so Critical of the Use of his AI Programme Eliza?

Weizenbaum repeatedly made clear why he was critical of Eliza. The structure of Eliza’s questions and answers roughly corresponded to the conversational therapy developed by Carl Rogers. Psychologists therefore came up with the idea of using Eliza for such therapeutic conversations. At this point Weizenbaum objected: that is not possible, it is cheating! To be able to conduct psychotherapy, you need to understand the client’s specific situation. The computer does not have such an understanding and cannot have it because it has no feelings. When it comes to assessing complex life situations, calculations tend to mislead. Hence the subtitle of the original edition of Weizenbaum’s (1976) book: From Judgement to Calculation.
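To illustrate what Eliza actually did, here is a minimal Python sketch of the kind of keyword matching and pronoun reflection on which Eliza’s scripts were based. It is an illustrative reconstruction under simplifying assumptions, not Weizenbaum’s original implementation; the rules and canned responses are invented for the example. The replies are produced by pattern matching alone, which is precisely why Weizenbaum insisted that no understanding of the client is involved.

```python
import re
import random

# Pronoun reflections used to mirror the user's statement back as a question
REFLECTIONS = {
    "i": "you", "me": "you", "my": "your", "mine": "yours",
    "am": "are", "you": "I", "your": "my", "yours": "mine",
}

# A few invented Rogerian-style rules: (keyword pattern, response templates)
RULES = [
    (re.compile(r"\bi need (.+)", re.I), ["Why do you need {0}?", "Would it really help you to get {0}?"]),
    (re.compile(r"\bi am (.+)", re.I), ["How long have you been {0}?", "Why do you think you are {0}?"]),
    (re.compile(r"\bmy (.+)", re.I), ["Tell me more about your {0}.", "Why is your {0} important to you?"]),
]
FALLBACKS = ["Please go on.", "Can you say more about that?", "How does that make you feel?"]

def reflect(fragment: str) -> str:
    """Swap first- and second-person words, e.g. 'my work' -> 'your work'."""
    return " ".join(REFLECTIONS.get(word, word) for word in fragment.lower().split())

def respond(statement: str) -> str:
    """Return a canned, keyword-triggered reply; no understanding is involved."""
    for pattern, templates in RULES:
        match = pattern.search(statement)
        if match:
            return random.choice(templates).format(reflect(match.group(1)))
    return random.choice(FALLBACKS)

if __name__ == "__main__":
    # Prints something like: "How long have you been unhappy about your work?"
    print(respond("I am unhappy about my work"))
```

Such a script reflects the client’s words back without grasping their situation, which is exactly what made its use as therapy, in Weizenbaum’s words, cheating.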

Despite all the new developments in the field of AI research, the successes achieved based on increased computing speed and storage capacity, as well as the paradigm shift in AI research, Weizenbaum’s basic ethical assumption still holds today.

Joe Weizenbaum also took up the discussion on the creation of information and thus on the limits of computers and AI systems. He posed the question: “Where does meaning come from and how is information created?” (Weizenbaum 2002, 2001b). In these texts, Weizenbaum refers to the virologist Hans-Alfred Rosenthal and to the thought experiment of the chicken already mentioned above (see also Rosenthal 2002). Weizenbaum makes it clear that the computer scientist is in a comparable epistemological situation, since the computer does not process information but only signals or data and therefore knows nothing about the overall process. Because Joseph Weizenbaum’s description of this epistemological situation was taken up and used to argue against exaggerations in AI research, these arguments have gained considerable attention and thus acceptance in both molecular biology and computer science (see Fuchs-Kittowski 1998; Fuchs-Kittowski, Rosenthal and Rosenthal 2005a, 2005b).

In both cases, one is led to the decisive conclusion that recognising the syntactic structure alone is not sufficient. The meaning of the information is always required as well, and it is only gained through the interpretation of the structure in interaction with the environment. In the case of DNA, this therefore requires the involvement of the living cell; in the case of data processing, it requires the consciously active human being in the social organisation.

This is the decisive argument against AI researchers and philosophers, such as Daniel Dennett (2005), who want to reduce mental processes to their underlying syntactic structures, to neuronal interactions. This confirms one of the basic statements of the evolutionary steps model of information (Fuchs-Kittowski 1992): At no level of the organisation of matter can information be reduced to its syntactic structure, genetic information cannot be reduced to DNA, mental processes cannot be reduced to the neuronal structures of the brain, and social information processes cannot be reduced to data processing.

In biology, the important question is whether ontogenesis is purely a transformation of information. Is only genetic information read from the DNA? So, is everything preformed, or is new information added in the course of ontogenesis, even if it is not genetic information?

In computer science, we actually know that when data is processed by a computer, no fundamentally new element is added to the amount of input data. We just need to reassure ourselves that this also applies to the generation of texts and the processing of big data. Computers, including those that learn based on large amounts of data and artificial neural networks, are not creative. Nothing fundamentally new is created.

With its chatbot ChatGPT, the Californian company OpenAI has certainly launched a powerful new piece of AI software on the market. It demonstrates the enormous power of technical “superintelligence”. But, as Joe Weizenbaum tried to demonstrate from the very beginning of AI research, such technologies in no way mean the disempowerment of humans or even the displacement of humanity. After all, AI systems are not capable of creative thinking, of creating genuinely new information and knowledge. The question of whether a computer can create beautiful music or a sophisticated poem is probably as old as computer applications. Joe Weizenbaum had an apt answer to this question even before AI language models existed: Why shouldn’t the computer be able to generate another beautiful poem from many good poems presented to it? The decisive difference is that the poet wants to tell us something with his poem, and that is why one can say (see Weizenbaum 2001b): There are things the computer can’t do!
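To make this point concrete, the following minimal sketch (a hypothetical illustration, not any particular production system) shows a word-level n-gram generator in Python: fed a handful of example lines, it produces superficially new ones purely by recombining word sequences it has already seen. Nothing is meant by its output; it only transforms the data presented to it.

```python
import random
from collections import defaultdict

def build_model(lines, order=2):
    """Record which word follows each (order)-word prefix in the given lines."""
    model = defaultdict(list)
    for line in lines:
        words = line.split()
        for i in range(len(words) - order):
            prefix = tuple(words[i:i + order])
            model[prefix].append(words[i + order])
    return model

def generate(model, max_words=10, seed=None):
    """Produce a 'new' line purely by sampling continuations already seen."""
    rng = random.Random(seed)
    prefix = rng.choice(list(model.keys()))
    out = list(prefix)
    order = len(prefix)
    for _ in range(max_words):
        followers = model.get(tuple(out[-order:]))
        if not followers:
            break
        out.append(rng.choice(followers))
    return " ".join(out)

if __name__ == "__main__":
    # Invented miniature "corpus" of poem-like lines for illustration
    poems = [
        "the rose is red and the violet is blue",
        "the rose of dawn opens over the quiet sea",
        "over the quiet sea the red sun rises",
    ]
    model = build_model(poems)
    # Recombines fragments of the input lines; nothing is meant by the result
    print(generate(model))
```

Modern language models are vastly larger and statistically far more sophisticated, but the same point stands: the output is a transformation of the input data, not something the machine wants to tell us.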


References

Dennett, Daniel C. 2005. Sweet Dreams. Philosophical Obstacles to a Science of Consciousness. Cambridge, MA: The MIT Press.

Fuchs-Kittowski, Klaus. 2004. Die kleinen Schritte der Verständigung – Können Wunder erklärt werden? FIfF-Kommunikation 2004 (2): 46-50.

Fuchs-Kittowski, Klaus. 1998. Information und Biologie: Informationsentstehung – eine neue Kategorie für eine Theorie der Biologie. Sitzungsberichte der Leibniz-Sozietät 22: 5-17.

Fuchs-Kittowski, Klaus. 1992. Reflections on the Essence of Information. In Software Development and Reality Construction, edited by Christiane Floyd, Heinz Züllighoven, Reinhard Budde, and Reinhard Keil-Slawik, 416-432. Berlin: Springer Verlag.

Fuchs-Kittowski, Klaus. 1980. Report of Working Group: Computer and Ethics. In Human Choice and Computers, 2, edited by Abbe Mowshowitz. Amsterdam: North-Holland. 

Fuchs-Kittowski, Klaus, Hans A. Rosenthal, and André Rosenthal. 2005a. Die Entschlüsselung des Humangenoms – ambivalente Auswirkungen auf Gesellschaft und Wissenschaft. Erwägen, Wissen, Ethik – Streitforum für Erwägungskultur 16 (2): 149-163.

Fuchs-Kittowski, Klaus, Hans A. Rosenthal, and André Rosenthal. 2005b. Replik: Geistes- und Naturwissenschaften im Dialog. Erwägen, Wissen, Ethik – Streitforum für Erwägungskultur 16 (2): 218-234.

Moravec, Hans. 1988. Mind Children. The Future of Robot and Human Intelligence. Cambridge, MA: Harvard University Press.

Rosenthal, Hans-Alfred. 2002. Zu einem Aspekt der genetischen Information: Geist und Materie in der frühen biologischen Evolution. In Stufen zur Informationsgesellschaft. Festschrift zum 65. Geburtstag von Klaus Fuchs-Kittowski, edited by Christiane Floyd, Christian Fuchs, and Wolfgang Hofkirchner, 225-232. Frankfurt am Main: Peter Lang.

Rosenthal, Hans-Alfred and Klaus Fuchs-Kittowski. 2003. J. Weizenbaum – ein kritischer Wissenschaftler par excellence. Unpublished.          

Simon, Herbert. 1969. The Sciences of the Artificial. Cambridge, MA: The MIT Press.  

Weizenbaum, Joseph. 2002. Wo kommt Bedeutung her und wie wird Information erzeugt? In Stufen zur Informationsgesellschaft. Festschrift zum 65. Geburtstag von Klaus Fuchs-Kittowski, edited by Christiane Floyd, Christian Fuchs, and Wolfgang Hofkirchner, 233-239. Frankfurt am Main: Peter Lang.    

Weizenbaum, Joseph. 2001a. Kunst und Computer. In Computermacht und Gesellschaft, edited by Gunna Wendt and Franz Klug, 98-103. Frankfurt am Main: Suhrkamp Verlag.

Weizenbaum, Joseph. 2001b. Wo kommt Bedeutung her und wie wird Information erzeugt? In Computermacht und Gesellschaft, edited by Gunna Wendt and Franz Klug, 7-14. Frankfurt am Main: Suhrkamp Verlag.

Weizenbaum, Joseph. 1976. Computer Power and Human Reason. From Judgement to Calculation. New York: W. H. Freeman and Company.

About the Author

Klaus Fuchs-Kittowski

Prof. Dr. phil. habil. Klaus Fuchs-Kittowski is a German computer scientist and philosopher of science. He was a Professor of Information Processing at Humboldt University in Berlin. He was born on December 31st, 1934, in Berlin. He is the grandson of Emil Fuchs, a religious socialist who contributed to the philosophy of Humanist Socialism and was active in the resistance against German fascism. Klaus Fuchs-Kittowski studied philosophy in Leipzig and undertook postgraduate training in biochemistry, biology, the mathematical foundations of cybernetics, and philosophy of science at Humboldt University. He earned a PhD in philosophy on the problem of determinism and cybernetics in molecular biology. In 1964, he was among the founders of the University’s Computer Center and, in 1968, of its Department of Economic Cybernetics and Operations Research, which later became the Department for Theory and Organization of Science. He was Vice Director of the Department and Head of the Division of Information System Design and Automated Information Processing. In 1972, he was awarded the Rudolf Virchow Prize for medical research. He collaborated with the IIASA groups on Modelling of Healthcare Systems and on Data Communication. He became a member of IFIP/TC9 (International Federation for Information Processing, Technical Committee 9 – Interaction of Computers and Society). For six years, he was Chairman of the “Computers and Work” Working Group 1 of IFIP’s TC9. Fuchs-Kittowski held visiting professorships at the University of Hamburg’s Department of Informatics and at Johannes Kepler University Linz’s Institute of Business Informatics. He also taught at the University of Applied Sciences (HTW) Berlin in the field of Environmental Informatics and Society. In 1992, the International Federation for Information Processing (IFIP) awarded its Silver Core to Klaus Fuchs-Kittowski for his work in Technical Committee 9, which deals with the interaction of computers and society, and in its working group on Computers and Work (TC9, WG9.1). In 2022, he was awarded the Wiener-Schmidt Prize by the German Society for Cybernetics (Deutsche Gesellschaft für Kybernetik).