Digitalisation Today as the Capitalist Appropriation of People’s Mental Labour

Friedrich Krotz

Centre for Media, Communication and Information Research (ZeMKI), Bremen, Germany

This paper deals with the question of how the process of digitalisation on the technical basis of the computer can be described in Marxist categories and what consequences are foreseeable as a result. To this end, the first section shows, based on a historical analysis of the emergence of the computer, that this apparatus was invented as an instrument of a division of human mental labour and is thus complementary to the division of physical labour. It is therefore necessary to analyse computers and digitalisation in their relation to human beings and human labour. In the second section, the central ideology of digitalisation is elaborated, which is supposed to make the current form of digitalisation appear meaningful for people and society: the anthropomorphisation of the computer, which was said to be increasingly able to think, speak, and learn like humans, to become more and more intelligent, and to be able to do everything better than humans once the technical singularity had been reached. This claim, which has been propagated again and again, is contradicted on various levels. The computer operates on about two dozen simple mathematical, logical, and technical commands and can do nothing but run one programme at a time, developed and entered by programmers on the basis of behavioural or physical data. This sometimes produces amazing results, because the computer can work quickly and systematically as well as reliably. But in contrast to humans, it faces the world as a behaviouristic machine that can neither understand meaning nor reflect on its own or on human behaviour. The computer also "sees" and "hears" its environment only on a physical basis, and it "thinks" at best on a statistical basis if the programme tells it to do so. The apparatus can therefore simulate mechanical machines, but in interaction with humans its actions and reactions are, like those of any machine, not socially oriented, but dependent on whether humans interpret them as meaningful and useful.

The third section elaborates on the complementarity of the mental and physical divisions of labour. This is a central theme for a critical Marxist analysis of digitalisation today, a Marxism that has so far understood capitalism in terms of the division of physical labour. Even though some theoreticians have contributed to this, there is as yet no comprehensive theory of it.

Therefore, section 4 aims to contribute to such a theory by collecting empirical observations in an interpretive way regarding the related questions. In this way, it becomes clear how the division of people’s intellectual labour made possible by the computer is being dealt with today: Capitalism is reorganising more and more areas of human life such as mobility, social relations, education, medicine, etc. through the use of the computer. As a result, first and foremost the business fields of the digital economy are expanding. Moreover, capitalism no longer has to limit itself to controlling the field of production but is increasingly intervening in the whole symbolic world of people. Consequently, according to the thesis, we are heading for an expanded capitalism that will increasingly restrict and reduce both democracy and people’s self-realisation. Section 5 emphasises once again that a different digitalisation is also possible, one that serves humanity and not capitalism. Some summarising remarks and comments are added there as well.

Keywords: digitalisation, mediatisation, computer, division of mental work, division of intellectual work, behaviouristic machine, anthropomorphisation, datafication, so-called “Artificial Intelligence”, capitalism

1.   Historical Background: the Division of Human Intellectual Labour, the Invention of the Computer[1] and its Use as a Machine for the Further Development of Capitalism

This paper deals with the question of how the process of digitalisation on the technical basis of the computer can be described in Marxist categories and what consequences are foreseeable as a result.

In 1792, the revolutionary government in France commissioned the mathematician Gaspard Riche de Prony to calculate and publish a series of table volumes. To understand the background, it is important to keep in mind that the Great French Revolution of 1789 not only aimed at political freedom, but also had an economic component and aimed to free the economic structures of the time from feudalism and the rule of the aristocrats.

To this end, the planned volumes of tables were intended to facilitate calculations that were necessary, for example, for shipping, the military, architecture, or the use of technologies. Among other things, tables were to be developed for trigonometric functions as well as for the use of logarithms, but also, for example, a table containing the squares of all integers from 1 to 200,000. At that time, there were no suitable technical aids for such calculations, and due to the lack of general public education, most people could only calculate as far as was needed in everyday life – i.e., they could add and subtract numbers of at most three digits. As is well known, there was no compulsory education anywhere in Europe at that time.

De Prony knew that he alone could never have reliably calculated all these tables in his lifetime. But he also knew that it was possible to produce such volumes of tables using only addition and subtraction. Normally, one calculates the square of a number by multiplying it by itself. However, when calculating all consecutive square numbers, one can instead use a modification of a binomial formula. Namely, if you know the squares of two consecutive numbers n-1 and n, you can find the square of n+1 without any multiplication, only by addition and subtraction, according to the following formula:

(n+1)² = n² + 2n + 1 = n² + n² - (n-1)² + 2.


So, for example, if for n=3 you know the squares 4 (of 2) and 9 (of 3), then the square of 4 results as 9 + 9 - 4 + 2 = 16, and for this you did not have to multiply. It was such knowledge that de Prony used to have the corresponding table volumes calculated by people who could only add and subtract. For the organisation of the computational work, he drew on the considerations of Adam Smith, who had described the division of physical labour using the example of pin manufacture and had worked out that this division of labour enabled the much faster production of better pins (cf. Wikipedia, "Adam Smith", accessed November 22, 2021; cf. also Babbage 1832). De Prony thus founded two manufactories for calculations, where up to sixty people were employed to calculate the corresponding squares according to a given scheme from n=1 to n=200,000 – two manufactories that did the same work in parallel in order to catch possible errors.

Each of these manufactories was divided into three so-called sections. The first consisted of a few well-paid mathematicians, who developed the respective schemes. The second section consisted of laymen familiar with calculation and work organisation, who were expected to organise the concrete computations; for this purpose they developed forms for individual calculation steps, carried out sample calculations, and guided and controlled the work of the third section. The third section then consisted essentially of former barbers of the nobility who had become unemployed during the revolution and could be hired cheaply. They did the actual calculations: They could add and subtract comparatively well, and so a first calculator could enter the initial values n+1, n, n-1, n², and (n-1)² into a corresponding form, a second then calculated n² + n², a third then subtracted (n-1)² from it, and a fourth then determined the result by adding 2. One must imagine this division of mental labour roughly in this way – and the whole was to be carried out two hundred thousand times.
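De Prony's addition-only scheme can be sketched in a few lines of present-day code (a hypothetical illustration, of course, not part of the historical record): each loop iteration performs exactly the three addition/subtraction steps that the third section's calculators carried out on their forms.

```python
# De Prony's scheme for consecutive squares:
#   (n+1)^2 = n^2 + n^2 - (n-1)^2 + 2
# Each step uses only addition and subtraction, as in section three.

def squares_by_addition(limit):
    """Return the squares of 1..limit using only + and -."""
    squares = [1]             # 1^2, the starting value
    prev_sq, cur_sq = 0, 1    # (n-1)^2 and n^2, starting at n = 1
    for _ in range(limit - 1):
        step1 = cur_sq + cur_sq   # second calculator: n^2 + n^2
        step2 = step1 - prev_sq   # third calculator: ... - (n-1)^2
        next_sq = step2 + 2       # fourth calculator: ... + 2
        squares.append(next_sq)
        prev_sq, cur_sq = cur_sq, next_sq
    return squares

print(squares_by_addition(5))  # [1, 4, 9, 16, 25]
```

Run with limit=200,000, this reproduces the full table of the two manufactories without a single multiplication.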

What de Prony had invented with this is obvious: the division of mental labour, quite analogous to the division of manual labour on which the emerging capitalism of the 19th century was based. It was a kind of human calculating machine that he had created, and it produced the desired results. Whether the so-called adders who performed the calculations understood what they were doing it for is not known. That this kind of division of intellectual labour could be connected with de-skilling is shown by a remark of Charles Babbage (1832), who had studied and then generalised de Prony's manufactory, as will be explained below. He referred to the strange fact, as he called it, that "nine tenths" of these calculators from section three knew only addition and subtraction, but that their calculation results were altogether more accurate than those of calculators who were more comprehensively acquainted with arithmetic, i.e., who could even multiply or divide.

It seems, moreover, that the alternative to the calculation of such tables, namely a better education of the people, for instance through training courses open to all or general compulsory schooling, was obviously not considered – complex arithmetic skills were evidently to be reserved for the specialists at that time. This can certainly be seen as a privatisation of arithmetic skills, just as other skills, such as calculating compound interest, were mostly known only to merchants and could thus be used as an instrument of power.

It was then, a few decades later, the inventor of the computer, the economist and mathematician Charles Babbage, who recognised the significance of de Prony's approach, generalised it, and developed the machines to go with it – culminating in the computer as we use it today. Babbage became famous in the Europe of those years for two things in particular. First, he invented his so-called Difference Engine, a complex mechanical calculating machine, which could calculate such tables as de Prony was to produce, and which actually worked. A few years later, he further developed this calculating machine into the prototype of the computer.

Just as important, however, is his second focus of work: He wrote a book that was widely read in Europe (Babbage 1832) and translated into German as early as 1833, in which he described the capitalist-oriented division of physical labour as a kind of royal road to economic development and also propagated the division of mental labour according to de Prony. Babbage had read de Prony's notes during a stay in Paris. In contrast to de Prony's goals – the support of human calculation and the production of verified tabular values – he emphasised the aspect that with the division of physical as well as mental labour, the workers involved could be paid according to their contribution, i.e., differently, and thus a lot of money could be saved in the production of goods. This would also allow the products to stand up to the competition. Such a motivated division of labour was, and to some extent still is, referred to in economics as the Babbage principle. According to Dyer-Witheford (1999), Babbage was primarily concerned with eliminating the human factor from the production process. In particular, Babbage's ideas were later taken up by the founder of scientific management, Frederick Winslow Taylor, according to Mattelart (2003, 37ff.).

The computer that Babbage theoretically invented was a mechanical device that could calculate up to fifty decimal places. It could be fed data and programmes by means of punched cards, like those used to transfer weaving patterns on mechanical looms, and its mechanical gears were to be driven by a steam engine. The Analytical Engine, as it was called, could even handle if-then branching, something not all of the computers built on an electrical basis in the 1940s and 1950s could do. Ada Lovelace, Babbage's occasional collaborator, described this potential in a note published as a footnote in the following way: "The engine is capable under certain circumstances, of feeling about to discover which of two or more possible contingencies has occurred, and of then shaping its future course accordingly" (Lovelace in Menabrea 1842, footnote 3). This shows how impressive this machine already was at that time, but also that a humanisation of this apparatus was already taking place then. We will come back to this.

Babbage later attempted to actually build such a machine, but despite financial help from the English government, it was never completed. The apparatus consisted of many thousands of metal parts that had to be specially and very precisely manufactured for the purpose, and it was supposed to be able to print out its results. That such a mechanical computer would actually have worked is shown by the replica of Babbage's Difference Engine No. 2, built using 19th-century materials and tolerances to mark his bicentenary. This device can be seen at the Science Museum in London and on YouTube.

Babbage's Analytical Engine, while admired by many during his lifetime, was soon forgotten after his death because its usefulness was not apparent. Presumably, this has to be seen in the context of the fact that in the 19th century there was simply too little data to analyse, evaluate, and computerise. Despite all kinds of efforts, including those of one of his sons, neither science nor the state saw any benefit in investing further in the construction of a computer at that time.

From these descriptions of the prehistory of the computer, we draw the following conclusions, which will be elaborated and supplemented below:

·      The computer in its present form came into being as an instrument of a division of people's mental labour. It executes a programme based on entered data. In this respect, an analysis of its social significance must always focus on the relationship between humans and machines; Konrad Zuse (1968) saw it the same way. Due to the gigantic size and, for the time, high complexity of this machine, and also of the machines that were then actually built from 1940 onwards, these apparatuses could only be located in a fixed place and used there. Babbage's concept was also directly aimed at using this machine to advance capitalism. The organisation and control of the operation and the results is done not by the operators but by the specialists (like the programmers today) and the factory, which is usually run in an authoritarian way. The people who were in charge of the computer also did not have to know exactly what programmes were running on it and what they were for. Because of the fixed location, no one had the idea that the computer had to be protected against improper inputs – which explains the naiveté in dealing with security issues that persists to this day, even in the age of networking, although a networked apparatus can in principle be hijacked from the outside.

·      The division of manual and mental labour in its present form is the basis of today's capitalism, as we will see in more detail. The division of mental labour has rarely been studied in detail, and a theory that captures its potentials and problems does not yet exist. Nor has its significance for the further development of capitalism ever been examined closely. The computer, as the machinery enabling the industrialisation of mental labour, became a relevant social factor only in the second half of the 20th century.

·      In terms of a materialist perspective, the division of mental labour has always had a form and function complementary to that of physical labour, but it plays an important role not only in the factory or in professional work, but also in many other areas of human life. It seems to be developing today in a way analogous to the division of bodily labour, but on the basis of the computer it is being used in the twenty-first century quite independently and purposefully in entirely new fields: Capitalism, as we shall see, uses the advent of the computer to open up new potentials for itself.

·      The computer as an instrument of a division of intellectual labour helps with intellectual activities, for example by performing calculations, formatting and correcting letters, translating texts, collecting data, or putting names to faces. The computer is thus the machinery that situates intellectual labour in developed capitalist society and at the same time the basis for many new machines that follow on from it. It can therefore be called the "steam engine of the mind" in that it at least provides speed and accuracy, although what is to be processed quickly and accurately depends on the particular programme. As is well known, the steam engine was the most important of the early machines in terms of the capitalist organisation of physical labour. It generated energy and made possible in a new way the transformation and deformation of objects and materials, as well as the transportation of people and the generation of power. Just as the steam engine made machine-based physical work possible, the computer today makes the machine processing of people's mental activities possible. In the case of the steam engine, too, the machinery determined what workers had to contribute to its operation and the work process. In this respect, this insight is also useful for an analysis of capitalism today.


How the computer has developed as the means of the division of mental labour in capitalism, what new potentials it implies for capitalism in the 21st century, and how all this is to be assessed, is the central topic of the present paper. To this end, section 2 analyses the hegemonic ideology with respect to the computer and digitalisation, namely the idea that the computer is actually a human-like being with, in the long run, far more capabilities than humans. It will be shown why the associated claims are delusions that justify, above all, the practices of today's gigantic digital enterprises. Here, the concept of the delusion nexus comes from Adorno (1975), who dealt extensively with the difference between people's imaginings and objective reality. Further, in section 3 we will deal with the hitherto incomplete Marxist reflections on the division of mental labour and the computer. Section 4 will approach mental labour historically and empirically. Section 5 then draws some further conclusions and summarises some results. It also emphasises that the social and democratic problems which have come up in the context of the use of the computer do not depend on the computer itself, but on how it is controlled and used by the economy.

2.   The Anthropomorphisation of the Computer as the Basis of the Ideology that Should Help to Situate Computers and Digitalisation into Capitalism

2.1.    Anthropomorphisation as Ideological Justification for the Control of Digitalisation by the Digital Industry

In order to describe the way in which, and the ideological and practical basis on which, digitalisation and capitalism have come together, we do not start here from a Marxist position, as other texts do (e.g., Fuchs 2016, Dyer-Witheford 1999), but from a critical analysis of historical development. As we have seen, the computer originated, both in Babbage's and Zuse's work, as a calculating machine that could and should do mental work for humans. Today, the computer can do much more than compute, but even today the relationship between humans and computers, as a constellation of a division of intellectual labour, must be in the foreground when thinking about the computer and its role in society.

However, such considerations are hardly common today. Ever since its emergence, this steam engine of the mind has been staged and treated as an independent technical apparatus that operates similarly to humans and masters a multitude of operations previously reserved for humans, some of which, at best, higher primates have been able to perform. Already in 1950, the German magazine SPIEGEL propagated the term "electronic brain", which today sounds rather outdated (Wikipedia (German), "Elektronengehirn", accessed on August 15, 2020). There was also talk of the "thinking machine". Likewise, science fiction books and films presented all kinds of far-reaching conceptions, by no means all of which were thought through. As a result, the activities of the computer were and are usually described in terms that were previously used only for humans: The computer thinks and decides, communicates and speaks, understands and is intelligent, and it is now even supposed to learn feelings and empathy. The so-called AI-based programmes that are meant to be evidence of this, and are even said to make decisions, are spreading faster and faster in the networks, but they are based on human programming like everything else a computer does.

Especially with the famous Dartmouth conference, proposed in 1955 and held in 1956, at which the scientific elite of the time wanted to teach the computer language and other human abilities, these efforts received their scientific consecration: "An attempt will be made to find how to make machines use language, form abstractions and concepts, solve kinds of problems now reserved for humans, and improve themselves" (cf., accessed on 5 May 2022). However, a final report to the Rockefeller Foundation, which financed the summer camp, was never submitted.

Nevertheless, many computer scientists and other apologists of this anthropomorphisation, such as PR specialists paid by the digital industry, assume that we are rapidly heading toward the so-called technical singularity, i.e., the point in time when the first computers will be superior to humans and take over control of the world. "The singularity in the context of AI refers to a point in time when machines become intelligent enough to evolve and improve themselves, leading to uncontrollable intelligence" (Kaplan 2017, 158, emphasis in original).

There is a wide range of evidence for the widespread use of such ideologically confused expressions and claims (for a summary of such theses, see, for example, Tegmark 2019). The claim that the computer will become a human being has been made, for example, by AI researcher Hans Moravec (1999) in a very naive way. He claims that robots will constantly observe their environment, learn from it, and thus in four stages ultimately become a kind of artificial human being – first with the intelligence of an insect, then of a dog, then of a monkey. However, such development models, which at the same time draw on the evolution of humankind and the development of children, are hardly advocated today. Nevertheless, many computer scientists still dream of a technically perfect robot world. They do not realise that the computer is a behaviouristic apparatus, as we will see – a technology functioning in the form of stimulus and response, which has nothing in common with sense-directed human action. Presumably, however, a computer that did what it considered right and important all by itself would be switched off just as immediately as one that decided, perhaps quite autonomously, to stop collecting data about people because doing so contradicts human rights. The computer scientists paid by the digital industry would be the first to prevent that.

The equation of humans and computers ennobles the guild of computer scientists and, of course, even more so the digital industry, which therefore also stabilises the associated delusion and hegemonically secures it against the laity. But there is no substance behind this idea, just hope, as many indications show. One must therefore speak of an ideology of humanisation or anthropomorphisation of the machine, which accompanied the advent of the computer and still accompanies it today. In the Marxist sense, this is a process of reification, insofar as the results of human production processes, which include the computer and its programming, are (supposed to be) stripped of their past and appear as independent things, even though they are controlled by the digital industry. This ideology is supposed to benefit the economy interested in AI by justifying why more and more areas of people's lives are being superficially digitised. This supposedly benefits people, but in fact it creates ever new potential for the digital industry to use such areas for its profit-making purposes. This is already demonstrated today by service providers such as Uber, Airbnb, Facebook, and Google, by digitally assisted medicine, by the transformation of education, etc., all of which are supposed to make everything better, but instead elevate everything to the business level.

In the following enumeration, we name the most important problems and the often unaddressed basic conditions of the operation of computers in the world, which also make it quite unlikely that the computer should be regarded as an ever-better human being. It is undeniable that the computer can do some things better, faster, and more accurately than humans. But it is equally undeniable that a computer can only ever do what its programme tells it to do at any given time. These can be amazing and very helpful operations, but they in no way cover what humans can do and need, and what is important for democracy and human rights.

2.2 An Ideology Critique of Anthropomorphisation: The Most Important Differences Between Humans and Machines

The computer is a machine that processes input data by running a computer programme command by command and jump by jump. The programme is sometimes called an algorithm, which certainly doesn’t help lay people take a closer look at how it works. This description alone distinguishes the apparatus from the human being, even though computer science has long tried to regard the functioning of humans and computers as the same and to attribute to humans a brain that is constructed and functions like a computer (see, for example, Lenzen 2002).


·      The processing of a programme takes place in the computing core of the apparatus (Brinkschulte and Ungerer 2010, Wüst 2006). Today's computers have about two dozen hardware-installed basic instructions that trigger certain mathematical and logical operations. These include, for example, adding 1 to an integer, moving a decimal number's decimal point, transporting data from the memory to the computing core and results from there back to the memory, and so on. These commands are executed in a sequence that is specified in the programme. This is supported by the operating system, a complex system of programme modules made up of sequences of commands that allow, for example, the multiplication of two decimal numbers or logical comparisons of numbers or texts. The representation of data, programming commands, and arithmetic operations takes place within the computer by means of electrical and magnetic technology. In addition to the input channels through which data and programmes are entered or sent by sensors and cameras from the environment, the computer has output channels such as a screen, printer, or agents that control other machines. As Alan Turing (2002) has shown, such a computer can simulate and control any mechanical machine, and thus any technical medium. But humans are not mechanical machines. Humans can think and act logically, but also not logically, and therefore the computer cannot control humans on the basis of its mathematical abilities.

·      As a rule, the computer is regarded as a symbolic apparatus, the results of which then have a meaning for its environment, for example, when controlling a machine. However, this is a view that does not take note of the fact that the computer has no knowledge about an environment and also does not “know” that its operations have a meaning or that there is an environment at all. Any internal computation does not care about such questions. The apparatus only runs its programme, into which programmers may indeed enter programme modules, so that the apparatus can react to its environment by means of sensors and agents. But this must not be misunderstood in such a way that the apparatus is conscious of its environment or even has a knowledge about the world independent of data. It just runs its programme, it does not know what it is doing in the process, and it cannot reflect that because it has no consciousness. If data refer to something else, this does not play a role for the computer if it is not explicitly considered in the programme.

·      This is especially evident when one understands how a computer "sees" or "hears". In both cases, these are not learned social skills as in humans, but physically defined operations – the microphone stores all sound waves that arrive there, the video camera stores all light waves, which are then transformed into pixels, each of which has a specific brightness and colour. If the computer is to recognise something on this basis, for example a face visually or a spoken word vocally, it must first be taught how the machine recognises such a thing. And no apparatus can teach itself such operations, because their terms and indicators come from the symbolic world of humans, to which a computer has no access from its mere technology. Each apparatus can take over modes of functioning from another apparatus, but at the beginning of such a chain there is always the work of a human being. So, when the apparatus receives data from a video camera, these are sent one after the other as data of pixels in a certain arrangement. The apparatus then operates with this data only according to a pattern specified in its software. The software looks, for example, for lines or areas, as in the case of face recognition, that can distinguish one face from another. Otherwise, it knows nothing about faces. The computer thus "sees" in a physical sense, but what can be done with what it sees must again be predetermined in the programme. Similarly with hearing as a recording of sound waves – what is a shot and what is a kiss must be given to the programme as an analysis pattern. In this sense, the computer operates in a very different way from the human. The symbolicity, the symbolic character of the signs of the code that the computer uses, thus arises only from the human being. The computer knows nothing of a relation of the signs to an external world. Without humans and the meaning established by humans, what the computer does remains meaningless.

·      This is also true in a broader sense if one takes a look at the language capabilities of the computer. The production of sentences is done on the basis of gigantic amounts of data, which has become evident in the case of ChatGPT. For this purpose, the apparatus uses statistical methods and helpful criteria such as the reference of a word to others. Thus, the computer and also listeners do not know whether something said by the computer is true or not. Understanding does not take place at all, because the computer has no access to the meaning realised in human action.

·      Moreover, in understanding, in analysing images, and in all interactions with humans, the computer operates as a behaviourist stimulus-response system. The computer cannot understand, cannot make sense of the world, and cannot reflect on anything. It uses a heard sentence as a stimulus and constructs a reaction based on its data with logical and statistical steps. Only through the human being does the computer become a symbolic machine. This is the reason why a technical singularity, if it were ever reached, would be the end of the computer: it would be stuck in meaninglessness.

·      This also becomes clear when one looks, for example, at how computers today are supposed to learn feelings and even empathy. According to McStay (2018), computers are supposed to recognise emotions in a purely behaviourist way by physiological measurements of, for example, the colour of the face, the resistance of the skin, or certain behaviours such as crying or trembling – the empathic computer as a kind of extended lie detector. Empathy, in contrast, is understood by Chang and Weng (2019) as something that arose when men hunted in prehistoric times because it increased the size of the hunting prey. From there, they conclude that the degree of verbal empathy today is proportional to the increase in income that can be obtained through it. On this basis, they programme a corresponding function for measuring the success of computers in dialogues with humans, with the computer learning how to achieve this. Why computers should learn something about feelings and even this kind of empathy, the two Chinese scientists have put down in the title of their paper: "Reaching Cooperation Using Emerging Empathy and Counter-Empathy" (see Krotz 2022 for a detailed discussion).

·      All this does not mean that the computer cannot produce complex and often astonishing results. But on the one hand, the computer and its programmes are always potential manipulators of humans when they interact with them. On the other hand, the computer’s abilities are limited to only a small part of what humans can do – some things, however, such as sorting a million surnames into alphabetical order, which could occupy humans for years, it can do flawlessly at lightning speed. And a computer will never be able to programme other computers for a task which requires comprehensive knowledge of the world, or which deals with something it was not itself programmed to do, because it cannot form and use analogies.
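The purely statistical, meaning-blind mode of language production described in these points can be illustrated with a minimal sketch in Python. The toy corpus and the simple bigram model are invented for illustration; systems such as ChatGPT work on a vastly larger scale with far more elaborate statistics, but the point holds in kind:

```python
from collections import defaultdict, Counter

# A toy illustration of purely statistical sentence production: the
# program counts which word follows which in a tiny invented corpus
# and then always emits the most frequent continuation. It has no
# access to meaning and no way of knowing whether its output is true.
corpus = ("the computer counts words the computer emits words "
          "the human understands meaning").split()

bigrams = defaultdict(Counter)
for w1, w2 in zip(corpus, corpus[1:]):
    bigrams[w1][w2] += 1

def continue_text(word, length=3):
    """Append the statistically most frequent successor, word by word."""
    out = [word]
    for _ in range(length):
        followers = bigrams.get(out[-1])
        if not followers:
            break
        out.append(followers.most_common(1)[0][0])
    return " ".join(out)

print(continue_text("the"))  # a fluent-looking but meaning-blind continuation
```

However large the corpus, the procedure remains the same in kind: it selects likely word sequences, and whether they are true or meaningful is decided only by the humans who read them.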

2.3 Humans and Computers: Social Action in a Symbolic World vs. Logical/Mathematical Processes in Stimulus-Response Contexts

Considering these aforementioned limitations and, at the same time, the advantages that apply to the mental work of computers, it also seems important to conceptualise the human being more precisely as the being who must initiate, control, judge, and evaluate the mental work of this apparatus. For this purpose, the concept of the human being as Animal Symbolicum, developed by Ernst Cassirer (2007), is particularly suited, although we of course need a materialistic version of it.

According to this concept, the human being is an animal originating from nature with its material needs and living conditions. However, this animal lives, works, consumes, and exists at the same time, with its abilities, forms of expression and action, with its thinking and speaking, in a symbolic world. This symbolic world emerged historically and is still emerging ever anew. It is based in particular on human language and everyday practice and must be understood as a fundamental form of human community for the fulfilment of human needs. In this respect, the symbolic world is based on human material existence. In order to adequately take into account this symbolicity of humans as one of the peculiarities of the genus in its relation to the computer, it seems appropriate to develop a general concept of human action which ties in with the characteristics of human nature and thus also takes into account the symbolic world in which humans live. Human action must therefore be distinguished from mere behaviour.

For this purpose, Max Weber’s concept of social action is particularly suited (Weber 1978). According to Weber, all forms of human action can generally be understood as behaviour based on subjective meaning. Social action is then an action which, in its intended sense, is related to the behaviour of others. Through this subjective sense of their actions, human beings are always related to the symbolic world in which they live. All actions, in contrast to behaviour, are symbolically mediated. The concept of symbolic interaction explicitly or implicitly underlies the theories of George Herbert Mead, Alfred Schütz, and Sigmund Freud, even if, in the works of other authors following on from them, further and quite different determinants play a role. Karl Marx also thought in this direction when he treated the human being as characterised by his or her language (Marx and Engels 1848; see, for example, Waldenfels 1978). In this respect, the human being is fundamentally different in his operations and potentials from the computer. The computer, in contrast to humans, is not a sense-making being.

To sum up, we hold that the computer is a machine that is designed to co-operate with humans, whereby this co-operation is guided by humans. Even though computer programmes can work in an automated way, it is only through human interpretation and use that the results of a computer acquire a symbolic, referential character and thus possibly meaning outside the computer. The apparatus is thereby limited to operations based on mathematics and formal logic, and only on these. In this respect, it is useful for certain activities – for example, for controlling machines, as Turing (2002) has demonstrated. On the other hand, the computer can only represent people and their actions as stimulus-response beings, without understanding what the meaning of their actions is. The apparatus also has no means of reflecting on its own operations. Moreover, all its operations are in principle suspect of manipulation, because the software it processes – and it can do nothing else – can be produced in such a way that it serves the interests of the programmers and their clients.

An example which enhances the plausibility of this thesis is provided by programmes which are supposed to be able to interact linguistically with humans. The apparatus does not understand in terms of human understanding; at best, it can construct an answer based on the statistical optimisation of human-made answers according to certain criteria (Sieber 2019, Ertel 2017, Flasinski 2016).

In this respect, while it can be said that the computer is an impressive machine that can work out amazing results and is arguably superior to humans in terms of speed and accuracy, it is also true that the computer can be used as a tool for the development of new technologies. On this basis, industrially programmed and suitably networked computers with appropriate software labelled as Artificial Intelligence can fundamentally change the world. This is because they can be used to digitally reorganise more and more areas of society. This can happen in such a way that it benefits people and democracy, but it can also happen in such a way that more and more of these human spheres of life are reorganised and controlled by the economy, thus also hindering democracy and people’s self-determination. The anthropomorphic claims that digital technology surpasses humans in all fields and that a technical singularity will inevitably occur are only an ideology that elevates machines over humans.

3.   Problems of Marxism: The Missing Theory of Mental Labour and its Relation to Physical Labour

3.1.    The Historical Perspective as the Basis of a Theory of Digitalisation Based on Marx

The computer is thus historically linked to the division of mental labour as well as to the emergence of capitalism. From a theoretical point of view, however, this connection is not inevitable, but ultimately due only to contingent historical conditions. The computer can be a great help to humankind and lead to a leap forward in development – but only in co-operation with humans and not as the primary servant of the economy. The problems of digitalisation described in section 2 are not the result of digitalisation and the computer in and of themselves, but of the current dependence of digitalisation on the digital economy and its steering by it.

In quite a few Marxist perspectives, however, intellectual labour has not played a special role so far. Karl Marx did examine Babbage’s writings (Marx 1990, 470, footnote) and refers to them several times in the Grundrisse, where he also treats the case in which, due to machinery, only little manual labour is necessary at a single point in time, which means that he was already thinking about extensive automation (Marx 1973, 285). But, as is well known, he focused on the analysis of productive physical labour, on the concept of the commodity and the process of the exchange of commodities for money, and on the consequences for humans in capitalist society.

Consequently, one question is how the digitalisation of the world on the basis of the computer in turn also changes capitalism – whether it slows it down, as was often expected in the early days of digitalisation, or develops it further in its essential potentials. The considerations offered in the following are in this respect fundamental on the one hand, but on the other hand also to be regarded as preliminary.

From a historical perspective on the emergence and development of the digitalisation process, it seems helpful to examine developments to date beyond the prehistory outlined in section 1 as phase 1 of digitisation (cf. Krotz 2022). It then seems plausible to distinguish five further phases up to the present day.

In phase 2, between 1940 and the mid-1960s, the so-called mainframe phase, the first computers emerged. In addition, fundamental decisions were made – also under the influence of the U.S. military and the economy as a whole – about the technology of future digitalisation (see, for example, Friedman 2005, Heintz 1993). Various fields of application for computers were also tested. In a subsequent third phase, computers were developed that could be used at individual workplaces and in households. In this context, the first standardised software produced as a commodity emerged. This combination still forms the basis of digitalisation today and continues to spread around the world. In a fourth phase starting in the 1980s, computers were increasingly networked and transformed into interfaces of computer networks that could also be manipulated from the outside. In addition, the economy took over the command, management, and further development of digitalisation because corporations recognised the advantages of this technology for realising their business interests. In the new millennium, the fifth phase started, in which the focus was on datafication. Datafication refers to the massive and ruthless collection of all possible data in computerised form, as well as its evaluation and use by gigantic digital companies on the one hand and creative start-ups on the other. In the sixth phase, starting around 2010, the automation of digitalisation began under the label of so-called “Artificial Intelligence” (AI). This phase is taking place under the extensive control of the economy, which is transforming more and more areas and forms of human activity and life to meet its interests. Digitalisation is thus ubiquitous, and present and effective across time and into the future.
This development and the resulting social forms are therefore also rightly understood as a form of digital capitalism, although the facts elaborated here about the computer as an instrument of a division of intellectual labour have not been fundamentally considered so far, as far as can be seen (Fuchs 2022, 2023; Dyer-Witheford 1999).

The developing digitalisation thus influences more and more successfully the symbolic world of people, their constructions of meaning, routines, and ways of acting, but also the meso- and macro-levels of economy and society. In the sense of mediatisation (Krotz 2019), digitalisation is transforming the sphere of production as well as the sphere of reproduction, above all by changing human communication. If one believes Tegmark’s utopia (2019), the financial sector, surveillance and war, the state and state institutions, and much more will in future disappear due to Artificial Intelligence. In this utopia, all people will subordinate themselves to this intelligence and thus be content and happy. However, what will happen to capitalism in Tegmark’s utopia is not so clear, and how exactly the rule of this intelligence will be secured is also not explained further. Perhaps one can refer against this background already now to China and North Korea, where admittedly not an insurmountable intelligence but state representatives organise state-ordained happiness and satisfaction, and also take care that everyone knows and agrees to the status quo.

Instead of simply waiting to see what will happen to humanity in the future, it seems necessary to develop a theory of intellectual work based on Marx’s considerations that takes into account the growing importance of digitisation for these parts of human activity. To this end, one must also pursue the question of how, in the final analysis, capitalistically oriented manual labour and capitalistically oriented mental labour interact and what this means for a society shaped by them. This is also important because the future of human societies under the influence of rapidly developing digitisation still lies somewhat in the dark. But it cannot be left there, because it fundamentally affects all people.

Such an analysis cannot be provided in detail here. However, it is possible to refer in the following to some of the problems of Marxism with respect to the associated questions and to offer reflections on them, which will be done in the next subsection.

3.2.    Reflections on a Further Development of Marxism

For Karl Marx, as is well known, capitalism begins with the separation of manual and mental labour. In contrast, in the Middle Ages and under feudalism, all individual craftsmen produced their products in basically the same way. An institutionalised division of labour existed only between the trades. As a result, capital could initially be accumulated only by means of trade, above all by means of the then increasing long-distance trade, i.e., by the distribution of goods (cf. also Marx 1990).

On the basis of the idea of the separation of manual and mental labour, the manufactory emerged, in which production was based on the division of labour: “Capital, however, establishes itself as production capital through the act in which it takes possession of the artisan’s means of labour and employs the direct producer as a wage labourer in his production facility” (Sohn-Rethel 1976, 104). The capitalist is then connected to the manufactory only through his capital and the power based on it, and no longer through any further participation in, say, productive labour. The capitalist can thus organise it from the outside according to his own interests. “Capitalist production disempowered the craft, but it did not abolish it; it subjugated it in the manufactory, dismantled and reorganized it, brought the time screw of exploitation to bear” (Sohn-Rethel 1976, 108). But as long as “the mechanism of manufacture as a whole possesses no objective framework which would be independent of the workers themselves, capital is constantly compelled to wrestle with the insubordination of the workers” (Marx 1990, 489/490). The new forms of organisation that emerge are not oriented toward the interests of the workers. In this respect, this concept of the insubordination of the workers serves as a useful shorthand for all efforts at change and resistance based on the interests of the workers; in the following, it will also be used here for what the capitalist wanted and wants to avoid.

For this purpose, capitalism must, in a further[2] step, create for itself a structure of production which is anchored in the machinery and in which the worker is needed only as a supplement to the machine and is thus, in tendency, interchangeable. This machinery, based for example on waterpower and later on the steam engine, already requires for its use a specific organisation of production based on the division of labour, which must at least already exist as an idea so that the steam engine can be used. The machine then also functions in its mechanics according to the principles of the successful division of labour in the manufactory and partially replaces the worker, i.e., the human productive element, in the production process of the commodity. With machinery, there is then an objective skeleton of factory production, independent of the workers, which sets the technological constraints and to which human labour must adapt. In this process, especially in the case of physical labour, human muscle power is replaced by machine power (Sohn-Rethel 1976, 108/109).

As is well known, the capitalist-led division of labour thus became established and was developed to an extreme degree in the first half of the 20th century, by Frederick Winslow Taylor and then by Henry Ford, into assembly-line activity. These developments then led, despite widespread resistance and all insubordination, to an ever more far-reaching division of labour, as Marx (1990) indeed described in detail, because this offers economically exploitable advantages and then leads to the increasing use of machinery.

Surprisingly, this scheme can also be applied to the division of intellectual labour: it too first emerged as the organisation of a manufactory, namely that of de Prony, who in turn, according to Babbage (1832), modelled himself on the organisation of manufactories with physical labour. Babbage then developed his computer precisely with this division of labour in mind, in that his apparatus could take on all sorts of mental work if it could only be put into a programme that called up the appropriate basic operations in a predetermined sequence. The computer thus intervened as a machine in people’s mental work in the same way that the steam engine took over or supported physical power. In the case of mental work, as with de Prony’s calculating manufactory, this is computational, intellectual, planning, or even argumentative work. In this respect, the use of the computer is the reorganisation of human mental labour so that it can take place under the control and in the interest of capitalism.

On this basis, a Marxist theory of the computer in capitalism can be developed that ties in with the division of intellectual labour, which is conceived as co-operation between the human being and the machine.

Marx (1990, 455) also points out in the chapter on the manufactory that the organisational structure of the division of labour must be in place beforehand so that the machine can then be used. This then also applies to the use of the computer. And, conversely, it means that what the computer takes over from humans in the division of labour could, in principle, also be done by humans within the framework of the same organisation. In practice, this is probably not always feasible, because computers can, for example, carry out a great many activities very precisely and very quickly, which would perhaps keep thousands of people busy for years – but theoretically, it is clearly not possible to claim without further ado that the programmed computer can solve problems that humans cannot possibly solve[3].
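The asymmetry described here – flawless speed on well-defined tasks that would occupy humans for years, while the result acquires meaning only through human use – can be made tangible with a short Python sketch. The random eight-letter "surnames" are invented purely for illustration:

```python
import random
import string
import time

# A million invented eight-letter "surnames" (random strings, used
# purely to illustrate scale; no real data is involved).
random.seed(0)
names = ["".join(random.choices(string.ascii_lowercase, k=8))
         for _ in range(1_000_000)]

# The computer sorts them flawlessly in a moment, a task that could
# occupy humans for years.
start = time.perf_counter()
ordered = sorted(names)
print(f"sorted {len(ordered):,} names in {time.perf_counter() - start:.2f} s")

# The result is correct only in the formal sense: each name precedes
# its successor alphabetically. Whether the list is useful is decided
# by the humans who read it.
assert all(a <= b for a, b in zip(ordered, ordered[1:]))
```

The machine executes the same organisational scheme a human clerk would follow; it simply does so at a speed and with a reliability no human can match.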

In this respect, a viable Marxist theory of mental labour and its division between humans and computers can make important contributions to an analysis of digitisation on the technical basis of the computer. At present, however, such a theory is not in evidence. The Marxist philosopher Alfred Sohn-Rethel, after all, dealt with related issues in his life’s work. We have drawn on some of his considerations above, but his approach is much broader and controversial. The next subsection will focus on Sohn-Rethel, because at least some things can be learned from his considerations; we will not take a stand here in the controversy about his theses.

3.3. The Approach of Alfred Sohn-Rethel

Sohn-Rethel did deal with intellectual work in a Marxist perspective, but he pursued different goals than the ones we are dealing with here. His work was designed to find out why people who lived and still live in a world of concrete things can nevertheless think abstractly and use abstract concepts. Sohn-Rethel was thus concerned with an epistemological problem that arises from materialism when confronted with Kant’s basal categories such as space and time. In his investigations, Sohn-Rethel thus refers in many places to the work of the historian George Thomson (1968), who, among other things, has attempted to reconstruct the emergence of human forms of thought and communication, such as mathematics, on the basis of historical and philosophical investigations.

“If Marxism does not succeed in removing the ground from the timeless theory of truth of the dominant scientific doctrines of knowledge, then the abdication of Marxism as a standpoint of thought is a mere question of time” (Sohn-Rethel 1972, 17). One of Sohn-Rethel’s central concepts then is that of “social synthesis”. By this “we understand the function[s] (F.K.) which, in different epochs of history, mediate the ‘Daseinszusammenhang’ (the main connections of the common existence of mankind, F.K.) of human beings into a viable society” (Sohn-Rethel 1972, 19).

Thus, Sohn-Rethel can “formulate the basic insight that the socially necessary thought structures of an epoch are in the closest formal connection with the forms of social synthesis of this epoch. Fundamental transformations in social synthesis occur when there is a change in the nature of the actions whose relation to one another sustains the human context of existence, e.g., whether these are productive or consumptive activities in which man is in exchange with nature, or else actions of interpersonal appropriation which take place on the back of such exchanges of nature and have the character of exploitation, even if they take the reciprocal form of commodity exchange” (Sohn-Rethel 1972, 20). “In commodity-producing societies, money constitutes the vehicle of social synthesis and requires for this function certain formal properties of the highest level of abstraction” (Sohn-Rethel 1972, 20). These are based on formal properties abstracting from use-value, and these are what Sohn-Rethel wants to determine in essence. This then gives rise to the socially constituted forms of thought, for example those that Kant described as existing a priori, which enables Sohn-Rethel to speak of money as “the bare coin of the a priori” (Sohn-Rethel 1976, 35).

Sohn-Rethel’s overall project will not be pursued here. Kratz (1980), for example, has presented Sohn-Rethel’s reflections and also partially referenced the discussion around them. His account is critical because Sohn-Rethel also undertakes a revision of some of Marx’s considerations that both Kratz and other Marxists do not share. Nevertheless, the question of how people come to think and speak in categories of space and time and with the help of abstractions, and what meaning mathematics has, is of course highly relevant for any materialism. It also raises the question of whether natural science and mathematics today are really universal or merely historical. In this connection, it would also have to be taken into account that mathematics cannot be justified without contradiction even as the basis of what a computer does (Krotz 2022 with further references; see also Heintz 1993).

Nevertheless, such a theory would of course be helpful in elaborating the meaning of the division of intellectual labour within the metaprocess of digitisation. If it is indeed money and the process of exchange that enable the human capacity for abstract thought, then one could also start today with the question of what will happen to money if digitisation continues – if money exists only as electronic symbols, the use of which is then precisely accessible to companies through data. The old idea that coins and bills are actually representatives of gold stored in the vaults of state banks will then no longer be true – today, money is rather a variable offer of an overdrawn financial system protected by blockchain technology, and thus a speculative object on which even the livelihood of the speculators still depends. Who benefits from Bitcoin and the so-called digital euro? For whom is it beneficial when it is no longer the state that provides a stabilised payment system through which people are supposed to transact their forms of reproduction, and when people can instead be expropriated in the process by fraudsters and speculators, thus turning their survival into a coincidence that can no longer be controlled? This does not yet even consider that ever faster computers will render most present security systems obsolete within a few years. In the following, we will set these theoretical questions aside and, on the basis of today’s state of knowledge, attempt to systematise considerations on digitisation via the division of intellectual labour, by examining the role of the computer as an instrument of this division.

4.   Empirical Considerations on a Materialistic Theory of Economic and Social Developments in the Process of Digitalisation

4.1.    Intellectual Work in the Context of Human Activity as Forms of Thinking, Acting, Communicating, Perceiving and Interpreting Human Beings

Today’s capitalist business management (, accessed on 14 August 2023) remains decidedly superficial on such issues. It is said there quite simply that mental work is the result of thinking processes, while physical work, on the other hand, is performed with the body. But physical work also requires thinking processes, quite apart from the fact that human thinking is itself a physical event. In addition, a distinction is made between dispositive work (which is reserved for management) and executive work, which takes place directly on the object (Wikipedia (German), Work (betriebswirtschaftlich), accessed on 13 September 2023) – mental work, however, can be both dispositive and executive. And it can play a role for the capitalist economy in quite other spheres, because today almost all mental activities of people can be accompanied by computers, even if often only with regard to a protocol of what is observable.

In this respect, following Marx (1990) and the considerations presented so far, we assume here that physical labour concretely consumes energy and transports or transforms matter, while mental labour describes actions that are primarily composed of symbolic operations in given contexts. This includes, for example, thinking processes such as the construction of meaning, but also communication and interpretation processes, steering and control processes, dispositive, planning and ordering activities, dialogues and arguments, also perception and interpretation, analogy and context formation.

Given this diversity, it is difficult to systematically categorise human mental work in terms of computer interventions. Nevertheless, core differences can be taken into consideration.

On the one hand, there are forms of mental work that take place in factories and companies: Organisational and management work, planning and development work, programming work, activities of individuals based on the division of labour, such as the evaluation of data, work with interactive and with automatic programs. In addition, there is control and monitoring work. (This list is probably not complete).

Furthermore, intellectual work does not only take place in the context of factories and production, but as explained, also as activities of various kinds in the context of and outside of professional work. Thus, there is housework, educational work, substitution work, relationship work and care work, and buying and selling as core processes in capitalism that also require mental activities. Even the counting of money and the cutting of coupons by private individuals can be understood as work. Likewise, there is ‘paperwork’ when one interacts with government agencies or has to do something according to bureaucratic rules.

In an overarching way, then, following the considerations in section 2, it can be said that mental activities as typically human activities basically include all activities that have to do with the symbolic world of human beings, ultimately all activities that the Animal Symbolicum performs. This does not deprive Marx’s theory of its force, but rather expands it, because Marx, after all, had to concentrate on work in the realm of economics for his purposes. In this respect, at any rate, it is true that a theory of mental labour must be complementary to Marx’s theory of physical labour. The difference is that mental labour takes place not only in the sphere of production, but also in the sphere of consumption, in the context of human relations, and in general in all spheres of human life.

4.2.    Forms of Interaction Between the Human Being and the Machine

In the context of digitisation, we are now especially interested in that intellectual work which a computer can perform or in which the machine can participate. If we look at the historical phases of digitisation outlined in section 3.1, in phase 4 capitalism first appropriated the organisation and control of the computer and digitisation, as well as control over the further development of this technology. Since then, computers have become more and more numerous, their interconnections more and more diverse, and the software more and more complex and also more error-prone – and all this happens under the control of the digital companies, behind which the entire economy stands with its interests.

For this reason, a fundamental distinction must be made between two cases:

·      Either a computer controls a mechanical machine that it can simulate – this is possible according to Turing (2002), as explained earlier. This is the case, for example, when robots on the assembly line intervene in the production process and are programmed to do exactly that.

·      Or people are directly affected by the operations of the computer in some way – for example, when a self-driving car encounters other cars controlled by humans, or pedestrians are in the vicinity; or when someone plays chess or an MMORPG with a computer; or when a computer, as a drone and independently of further human intervention, drops bombs on a human. In the case of a chess game, the human involved has only a few, well-defined options of what to do. In the other cases, the human may have numerous options available, which the programmer can by no means always foresee or take into account. This is also true, for example, of an exchange of words between a human and a machine staged as a dialogue, and of a self-driving car, when the programmer programs the car to drive in Switzerland but it is then sold to India, with quite different driving conditions.

Therefore, in most cases of human-machine operations, machine activities will be problematic, and in many cases will end badly, when the computer controls them, since they are reduced to the options the programmer has taken into consideration. This is because such programmes are usually created under the control of companies that must pursue their own interests under penalty of bankruptcy and, if only for reasons of cost or lack of experience, do not then consider all possibilities. For example, a diagnostic computer may only ever suggest treatments with drugs that a pharmaceutical company has paid the programmer to name. It is such facts that can lead to racism and evil dilemmas, to death and disease. This is all the more true when it comes to programmes that control information.

But it is also important to keep in mind that computers operate in their own way, which differs from the way humans act, as we pointed out in section 2. This is most obvious in activities such as social action and understanding, which computers do not command and cannot account for. Consequently, computers can by no means replace or support humans in all their activities. The transformation that the computer sets in motion in such cases can then fundamentally change the areas of human activity, but also at the same time restrict or otherwise ruin them in some way. This is true, for example, of people’s social relationships, which have been substantially transformed by Facebook, Tinder, TikTok, and other computer-based forms of organisation. And it also applies, for example, to politics, which faces major problems due to new forms of self-expression and the participation of many or more and more individual participants, as well as a changing political public sphere – hate speech and fake news are ubiquitous. This is also because writing lacks a control mechanism that is built into speech: those who speak hear themselves, but those who write need not read the filth they have written if they post it right away. But there are also cases like Twitter, which its new owner is currently transforming into a socially harmful instrument.

All in all, then, it must be stated that a computer is competent for dealing with machines, but not for dealing with people. This was already shown by the example above of computers that are supposed to learn empathy. Nevertheless, computers are increasingly used for such tasks, whereby they often have a residual function stored in them: whenever a computer cannot cope with a human client, the institution does not provide for a human to be called in; instead, the computer falls back on this residual function and no longer cares about the human client. It is convenient for the company that the computer has no morals and no consciousness and that it does not know what it is doing. It thus easily overrides what one could call the insubordination of consumers and customers. In this respect, regulation would have to ensure that human-machine interactions always have a functioning human-human redirection that is accessible to all.
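The residual function described above can be sketched in a few lines of Python (all keywords and canned phrases are invented for illustration): a handful of stimulus-response rules, and a fallback that simply closes the exchange instead of redirecting to a human.

```python
# A sketch of the "residual function" described above: a service
# dialogue program with a few stimulus-response rules and a fallback
# that simply terminates the exchange instead of calling in a human.
# All keywords and canned phrases here are invented for illustration.
RULES = {
    "invoice": "Your invoice has been sent again.",
    "password": "A reset link has been mailed to you.",
}

def respond(utterance: str) -> str:
    """Match the stimulus against the rule table; otherwise drop the client."""
    for keyword, answer in RULES.items():
        if keyword in utterance.lower():
            return answer
    # The residual function: no human is called in; the exchange ends.
    return "This conversation is now closed."

print(respond("I never received my invoice"))
print(respond("My heating has failed and it is freezing"))
```

Whatever the client says outside the rule table, however urgent, lands in the same dead end – which is precisely why a mandatory human-human redirection would be a matter for regulation.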

4.3.    Companies and Their Instruments and Resources: Datafication and So-called Artificial Intelligence (AI)

Complementary to the forms of human action, it is necessary to look at how companies deal with computers and digitalisation and thus with the division of intellectual labour. It is clear that companies use digitalisation to optimise interactions for their business purposes. This inevitably creates new, difficult forms of work for the workers who remain – they are replaced, more closely monitored, and pushed into different kinds of jobs, which can often dehumanise their work and de-skill them. Just think of de Prony’s adders, who had to perform the same arithmetic step two hundred thousand times.

This also brings to the fore phases 5 and 6 of digitalisation, in which companies have developed digitalisation further in a direction that serves their interests above all, but often at the expense of their employees, and also at the expense of human rights and democracy. What is meant here is the collection of all data that exist. In this regard, we refer to the writings of Hofstetter (2018) and Zuboff (2018, 2019).

In a text concerned with the further development of capitalism, however, one must emphasise an additional consideration: unlimited datafication forms a crucial basis for expanding and securing the domination of the capitalist economy. Through it, companies control their customers, whom they can describe and influence. Moreover, it enables them to optimise the planning and design of their products, and thus to be comparatively sure that they will make a profit and not go broke. For this reason, the bon mot has long circulated among the population that one no longer knows what to do when Amazon recommends to its customers precisely the books and media that are already lying on their bedside tables.

It can therefore be said that companies today also control the consumption and buying behaviour of their potential customers in a new way, insofar as they can influence them far better than before, when only advertising and marketing activities were available. To that extent, they can gear their production planning to this behaviour. This is not only a powerful instrument for sales, but also an instrument for optimising production, and thus altogether a step towards a new capitalism that can now control not only production and trade, but also purchasing and often also usage behaviour. This is the first reason why we must speak of a qualitatively expanded capitalism in connection with digitalisation: production no longer has to be adapted to what is merely predicted from experience; planning is based on diverse and precise data about what will be sold.

It is true that it can be argued that computers only have access to human behavioural data, since they do not understand humans but can only address them as reactive beings by means of stimuli. However, computer programmes have numerous possibilities to induce people to behave in certain ways at the stimulus-response level – for example, through the manipulative form of nudging (Thaler and Sunstein 2009). As a supplement, one must therefore probably also assume that people living in a fully digitalised capitalist society will unlearn many of their typical human characteristics because these are no longer needed; their role in a capitalist economy will be reduced to stimulus-response behaviour. This would be a decidedly problematic development.

A second reason why one must speak of a qualitatively new capitalism lies in the potential of digital automation, which is now available to companies under the title of artificial intelligence. So-called AI operates primarily on behaviours that can be predicted ever more precisely from companies’ data stores; it operates without further human intervention, as a form of automation.

So-called AI programmes, as explained above, are nothing more than more or less complex sequences of simple commands that run in an automated fashion when the apparatus interacts with humans. The latter, when they search for something or have something to do and encounter AI programmes in the process, have to adapt to the corresponding specifications of these machines.
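This point can be illustrated with a deliberately simple sketch of a dialogue “AI” as nothing more than an automated sequence of simple commands, including the fallback routine discussed above. All rules, keywords, and responses here are invented for illustration; real systems are larger, but the stimulus-response principle is the same.

```python
# A hypothetical, minimal dialogue "AI": the apparatus matches the client's
# input against stored patterns and emits a canned response. It understands
# nothing; the "intelligence" lies entirely in the programmers' prepared rules.

RULES = [
    ("refund", "Please fill in form R-1 on our website."),
    ("broken", "Have you tried restarting the device?"),
]
# The fallback routine: when no rule matches, no human is called in.
FALLBACK = "I am sorry, I did not understand your request."

def respond(client_input: str) -> str:
    text = client_input.lower()
    for keyword, answer in RULES:
        if keyword in text:   # stimulus ...
            return answer     # ... triggers the stored response
    return FALLBACK

print(respond("My device is broken"))  # → Have you tried restarting the device?
```

Whatever the client writes, the programme can only execute this fixed sequence; any appearance of understanding is an interpretation supplied by the human user.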

Conversely, it is always said that computers can learn, but it is not said what exactly learning means. A closer analysis shows that instead of learning, the term dressage – training more or less aimed at desired outcomes – would be more appropriate. So-called machine learning, so-called neural learning, and the impressively named deep learning always take place on the basis of collected data stocks, and thus on the basis of the behavioural data of observed computer users. As dressage at the stimulus-response level, computer learning in all its forms has nothing to do with human learning. This is shown in particular by the analysis of such programmes (Nguyen/Zeigermann 2018, Flasinski 2016, Ertel 2017).

In supervised learning, which mostly involves categorising individual cases, the trainer knows which of the results produced by the computer belong where and can provide positive or negative feedback accordingly. In unsupervised learning, a so-called AI independently forms clusters of matching data; it remains uncertain, however, whether this ultimately produces something the client can use. If not, the approach can be modified; in this respect, as with cluster analyses, this is ultimately also a procedure adapted to, or radically simplified towards, expected results, so that the necessary computational procedures function as intended (cf. e.g. Nguyen/Zeigermann 2018, 105ff.). Finally, there is so-called reinforcement learning, in which the AI is likewise supposed to learn without specifications, but is steered in a certain direction by rewarding certain results. All of these methods allow for a variety of simplifying or adapting interventions by the programmers that can influence the outcome. In this respect, one cannot claim that computers arrive at results independently of biases or interests. All these procedures depend not only on the programming, but also on the data used and on the manipulations by which the programmers obtain useful results. It is obvious that regulation is necessary here.
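The role of the trainer’s positive or negative feedback in supervised learning can be made concrete in a few lines. The sketch below is a toy perceptron with invented behavioural data (minutes spent on a shop page versus a purchase), not any specific product’s algorithm: the weights of a simple equation are nudged after each correction until the outputs match the desired labels – dressage at the level of numbers, nothing more.

```python
# Supervised "learning" as dressage: the trainer knows the desired label and
# feeds back corrections; the programme merely adjusts numerical weights.

def train(samples, labels, epochs=100, lr=0.1):
    w, b = 0.0, 0.0
    for _ in range(epochs):
        for x, y in zip(samples, labels):
            pred = 1 if w * x + b > 0 else 0
            error = y - pred        # the trainer's positive/negative feedback
            w += lr * error * x     # weight adjustment, nothing more
            b += lr * error
    return w, b

# invented behavioural data: minutes on page -> bought (1) or not (0)
xs = [1, 2, 3, 8, 9, 10]
ys = [0, 0, 0, 1, 1, 1]
w, b = train(xs, ys)

def predict(x):
    return 1 if w * x + b > 0 else 0

print(predict(2), predict(9))
```

After training, the programme reproduces the trainer’s categorisation of the observed behaviour; it has not understood why anyone buys anything.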

Ultimately, these problems are due to the fact that the computer, as noted above, must face reality as a stimulus-response apparatus controlled by digital enterprises, because it does not understand what it is doing. The underlying learning concepts are therefore borrowed from behaviourism. In this discipline, learning always means conditioning processes such as those Pavlov demonstrated in his famous dog experiments: the dog learned to salivate not only when it saw food, but already when a previously “learned” sound announced it. Following this, the behaviourist psychologist Donald Hebb (1973), who also developed the basics of so-called neural learning in the brain and is therefore often quoted by computer scientists, criticised the concept of learning as too general. Although he admits that a reduction of all learning processes to conditioning processes is “an oversimplification” (Hebb 1973, 205), this is ultimately the only form of learning that a behaviourist can investigate and thus understand as learning.

As a supplement, one can note that the neural learning of computers, a special case of machine learning, also goes back to a thesis of the behaviourist psychologist Hebb, who explained the interconnections of neurons observable in the human brain in a behaviourist way and represented their functioning by a linear system of equations: the neurons react to stimulation, and learning processes in the brain, so the conclusion runs, can be adequately represented by adapting the weights in these linear equations. Why this should be so remains obscure. Even if this is correct for neurons, the human brain functions differently in any case, because human learning processes do not simply end in a layer of neurons, but are anchored in insights and can be reflected upon – their results can manifest themselves in human consciousness. Today’s psychology is also further along in this regard.
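Hebb’s picture can be stated in a few lines: a “neuron” is a weighted sum of its inputs, and “learning” is nothing but adapting the weights of this linear equation when input and output are active together. The numbers below are arbitrary illustrations of the principle, not a model of any real brain.

```python
# The Hebbian picture of a "neuron": output = weighted sum of inputs;
# "learning" = increasing a weight when pre- and postsynaptic activity
# coincide ("cells that fire together wire together").

def neuron(weights, inputs):
    return sum(w * x for w, x in zip(weights, inputs))

def hebbian_update(weights, inputs, output, rate=0.1):
    # weight grows in proportion to joint activity of input and output
    return [w + rate * x * output for w, x in zip(weights, inputs)]

w = [0.0, 0.0]
for _ in range(5):
    x = [1.0, 1.0]            # repeated joint stimulation
    y = neuron(w, x) + 1.0    # external drive keeps the neuron active
    w = hebbian_update(w, x, y)

print(w)  # the weights have grown; nothing has been "understood"
```

This is the whole mechanism: adjusting coefficients in a linear equation. Human learning, anchored in insight and reflection, is plainly not exhausted by it.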

In all of this, we see that digital companies, and those trying to become such, can achieve useful results through such programming. At the same time, in the sense of anthropomorphisation, humans are becoming increasingly unimportant, also with regard to their own decisions: technology, it is said in an ideologically blinded way, is simply better in principle. In this context, Mayer-Schönberger/Cukier write sweepingly: “The biggest impact of big data will be that data driven decisions are poised to augment or overrule human judgment”, and go on to say that “statistical analyses force people to reconsider their instincts. Through big data, this becomes even more essential” (Mayer-Schönberger/Cukier 2013, 141).

Thus, it becomes clear here again that AI programmes, on the one hand, are intended to take influence away from people and, on the other hand, are used primarily because they function without further human support – they mark the step from digitalisation to automation. Humans have to adapt to all this because they increasingly encounter such automatic programmes, which they can or must deal with for a variety of reasons. Conversely, institutions and companies protect themselves against insubordination on the part of their customers, and of all those whom they fob off with such automations. Ultimately, this creates a society in which people are individualised and altogether powerless in the face of the automata on the net or on the telephone, because they are increasingly surrounded by such automatically running programmes. The empathy that computers are supposed to learn is then supposed to help people experience this dependency less clearly. The corporate power that thus emerges must be seen as the second reason why the capitalism of the future is a qualitatively expanded one compared to that of the past.

4.4.    New Forms of Work for People Based on Digitalisation

The fact that a number of new forms of work have meanwhile developed that point to new forms of capitalist exploitation is now increasingly reflected in the literature and in empirical studies.

These include the so-called gig economy and the so-called coworking forms. One of the best-known “places on the net” where such forms of work are mediated is Amazon’s “Mechanical Turk” platform, whose name is probably intended to indicate that here, too, behind the apparently great technology there are people who have to add what the machine cannot do. Such platforms offer jobs to people familiar with computers, which digital companies advertise because they rely on such support (see de Ruyter et al. 2018/2019). For example, hundreds of thousands of people have observed traffic conditions in their neighbourhoods for a few euros so that navigation systems could incorporate up-to-date data. Other jobs relate to image recognition, which computers cannot do reliably. The pay is usually lousy (Bonse 2022), and the working conditions are nasty and contrary to human rights (Moreschi, Pereira, and Cozman 2020). The emerging new capitalism has not unlearned the old forms of exploitation.

Other jobs of a new kind consist of well-educated people in the Global South having to evaluate images that are to be posted on Twitter, Facebook, and other platforms but may violate legal rules. Such jobs consist of judging endless sequences of images of violence and more or less violent sexuality in a Tayloristic setting – a job that can break people. Other examples include computer gamers who, as participants in computer games, do not play but work: they aim to gain certain attributes and symbolic objects that they sell to richer and less skilled players, who in turn can boast about them.

5. And the Society of the Future?

Digitalisation is increasingly leading to an ever more comprehensive takeover of people’s forms of intellectual work by computers. Computers, software, networks, data, and AI almost always serve the interests of digital companies and the economy behind them, and contribute to more and more areas of people’s everyday lives and society becoming accessible to the interests of digital companies. People must increasingly adapt to this, also because more and more relevant forms of action are triggered by digital operations whose hegemonic demands fewer and fewer people can avoid and which cannot be changed individually – medicine, social relationships, mobility, knowledge and learning, etc. are being embedded in capitalism in new ways. Given their massive spread, it is also hard to resist or avoid the manipulations that become possible in this way. The problems associated with capitalist control over digitalisation have been elaborated here.

As a thesis, then, it can be stated that, with the digital industry, a newly powerful sector – with often ignorant and power-conscious new elites interested not in democracy but only in technology – is pushing its way between people and their ways of life on the one hand and society and its forms of functioning on the other. Behind it, the traditional economy is already waiting to take over the profitable operations of the digital pioneers. In this respect, it is to be expected that capitalism will experience an upswing as a result of digitalisation and that a qualitatively expanded capitalism will come about.

For just as the sphere of production has been capitalistically organised and controlled up to now, so the spheres of consumption, reproduction, and distribution, but also the spheres of people’s lives and their culture, which were previously only indirectly controlled, will now be controlled directly and in a new way by the big companies via computers and digital networks, and instrumentalised for their interests. Until now, it was the threat of bankruptcy and of leaving the markets that the capitalists fought against; meanwhile, they have data and forms of automation at their disposal to avert this threat and to subject people directly to these interests in all their spheres of life. The aim of excluding the insubordination of the workers of course remains.

The relations of production are changing on the basis of the changing productive forces, namely the “steam engine of the mind”. As a result, capitalism creates the society it needs. Thus, Herbert Marcuse’s (1970) announcement of the one-dimensional human being takes on a new meaning, because in the process of digitalisation the human being is moreover reduced to a stimulus-response being. The manifold indirect hegemonic consequences of previous capitalism, which Adorno and Horkheimer, Lukács, Fromm, and Marcuse, and in a certain sense also Bourdieu and genuinely democratic feminism, have described, are also changing: if capitalism so far had only indirect access to society, this access is becoming more and more direct and immediate with the help of digitalisation. It should not be forgotten, however, that there are also inherent contradictions, which arise not least from competition and greed, but which are also structurally inherent because, for example, humans are not stimulus-response systems.

What is to be done? We emphasise once again that a different digitalisation is possible, one that serves humanity and not capitalism. The problems mentioned are not a consequence of the computer, but a consequence of the capitalist use of the computer. A sophisticated theory of intellectual work could reveal many open fronts. Perhaps the thesis of Ivan Illich that humankind should not use technologies that it does not understand and that endanger democracy (Illich 1975) is also valid. Above all, the ideology of anthropomorphisation would have to be criticised in science and in public. Capitalism has dominated the world for centuries. Today it also threatens the world in a new way, via the climate crisis and the destruction of the natural bases of life. The necessary change is becoming obvious to more and more people.

Perhaps there is a chance to link the fight against these threats to a radical restriction of capitalism. We all have to get involved.


Adorno, Theodor W. 1975 [1966]. Negative Dialektik. Frankfurt am Main: Suhrkamp.

Babbage, Charles. 1832. On the Economy of Machinery and Manufactures. London: Charles Knight.

Bonse, Eric. 2022. Online Jobs haben die Hoffnungen enttäuscht. Die Taz, 17 February, 7.

Brinkschulte, Uwe and Theo Ungerer. 2010. Mikrocontroller und Mikroprozessoren. Third edition. Heidelberg: Springer.

Cassirer, Ernst. 2007. Versuch über den Menschen. Second, improved edition. Hamburg: Felix Meiner.

Chen, Jize and Changhong Wang. 2019. Reaching Cooperation Using Emerging Empathy and Counter-Empathy. In Proceedings of the 18th International Conference on Autonomous Agents and Multiagent Systems (AAMAS 2019), edited by Edith Elkind, Manuela Veloso, Noa Agmon and Matthew E. Taylor, 746-753. New York: ACM & International Foundation for Autonomous Agents and Multiagent Systems.

De Ruyter, Alex, Martyn Brown and John Burgess. 2018/19. Gig Work and the Fourth Industrial Revolution. Journal of International Affairs 72 (1): 37-50.

Dyer-Witheford, Nick. 1999. Cybermarx. Urbana: University of Illinois Press.

Ertel, Wolfgang. 2017. Introduction to Artificial Intelligence. Second Edition. Cham: Springer Nature.

Flasinski, Mariusz. 2016. Introduction to Artificial Intelligence. Cham: Springer Nature.

Friedman, Ted. 2005. Electric Dreams. Computers in American Culture. New York: New York University Press.

Fuchs, Christian. 2022. Digital Capitalism: Media, Communication and Society Volume Three. New York: Routledge.

Fuchs, Christian. 2023. Der digitale Kapitalismus. Arbeit, Entfremdung und Ideologie im Informationszeitalter. Weinheim: Beltz Juventa.

Fuchs, Christian. 2016. Critical Theory of Communication. New Readings of Lukács, Adorno, Marcuse, Honneth and Habermas in the Age of the Internet. London: University of Westminster Press.

Hebb, Donald O. 1973. Einführung in die moderne Psychologie. 7th edition. Weinheim: Beltz.

Heintz, Bettina. 1993. Die Herrschaft der Regel. Zur Grundlagengeschichte des Computers. (Dissertationsschrift). Frankfurt: Campus.

Hofstetter, Yvonne. 2018. Das Ende der Demokratie. 2. Edition. München: Bertelsmann Penguin.

Kaplan, Jerry. 2017. Künstliche Intelligenz. Frechen: mitp.

Kratz, Steffen. 1980. Sohn-Rethel zur Einführung. Hannover: SOAK.

Krotz, Friedrich. 2022. Die Teilung geistiger Arbeit per Computer. Eine Kritik der digitalen Transformation. Weinheim: Beltz Juventa. (Open Access).

Krotz, Friedrich. 2017. Explaining the Mediatization Approach. Javnost – The Public 24 (2): 103-118.

Lenzen, Manuela. 2002. Natürliche und Künstliche Intelligenz. Frankfurt: Campus.

Illich, Ivan. 1975. Selbstbegrenzung. Eine politische Kritik der Technik. Second, improved edition. Reinbek bei Hamburg: Rowohlt.

Marcuse, Herbert. 1970. Der eindimensionale Mensch. Neuwied: Luchterhand.

Marx, Karl. 1990 [1867]. Capital Volume I. London: Penguin.

Marx, Karl. 1973 [1857/1858]. Grundrisse. Introduction to the Critique of Political Economy. London: Penguin.

Marx, Karl and Friedrich Engels. 1845/1846. The German Ideology. Critique of Modern German Philosophy According to its Various Prophets. London: Penguin Classics.

Mattelart, Armand. 2003. Kleine Geschichte der Informationsgesellschaft. Berlin: Avinus.

Mayer-Schönberger, Victor and Kenneth Cukier. 2013. Big Data. Boston: Houghton Mifflin Harcourt Publishing.

McStay, Andrew. 2018. Emotional AI. The Rise of Empathic Media. London: Sage.

Menabrea, Luigi F. 1842. Ein Konzept für die Analytische Maschine. Eine Erfindung von Charles Babbage. Mit Notizen aus der Denkschrift der Übersetzerin, Ada Augusta, Herzogin von Lovelace. Bibliothèque Universelle de Genève, October 1842, No. 42. German translation by Jürgen Buchmüller, including seven longer commentaries (Notes A to G) and 20 short supplementary notes by Ada Lovelace. Accessed on 10 August 2019.

Moravec, Hans. 1999. Fernziel Roboter mit Bewußtsein? Ein Gespräch über Automaten des späten 21. Jahrhunderts. In Intelligenz zwischen Mensch und Maschine: Von der Hirnforschung zur künstlichen Intelligenz. Begleitbuch zum Neuen Funkkolleg „Die Zukunft des Denkens“, edited by Karl-Heinz Wellmann and Utz Thimm, 170-184. Münster: Lit.

Moreschi, Bruno, Gabriel Pereira and Fabio B. Cozman. 2020. The Brazilian Workers in Amazon Mechanical Turk. Contracampo – Brazilian Journal of Communication 39 (1): 44-64.

Nguyen, Chi Nhan and Oliver Zeigermann. 2018. Machine Learning kurz & gut. Heidelberg: dpunkt.

Sieber, Armin. 2019. Dialogroboter. Wie Bots und künstliche Intelligenz Medien und Massenkommunikation verändern. Wiesbaden: Springer Fachmedien VS.

Sohn-Rethel, Alfred. 1976. Das Geld, die bare Münze des Apriori. In Beiträge zur Kritik des Geldes, edited by Paul Mattick and Alfred Sohn-Rethel, 35-117. Frankfurt am Main: Suhrkamp.

Sohn-Rethel, Alfred. 1972. Geistige und körperliche Arbeit. Frankfurt am Main: Suhrkamp.

Tegmark, Max. 2019. Leben 3.0. Mensch sein im Zeitalter künstlicher Intelligenz. Berlin: Ullstein.

Thaler, Richard H. and Cass R. Sunstein. 2009. Nudge. Wie man kluge Entscheidungen anstößt. Berlin: Ullstein.

Thomson, George. 1968. Die ersten Philosophen. Berlin: Akademie.

Turing, Alan M. 2002. Kann eine Maschine denken? In Künstliche Intelligenz. Philosophische Probleme, edited by Walter C. Zimmerli, 39-78. Stuttgart: Reclam.

Watson, John B. 1913. Psychology as the Behaviourist Views it. Psychological Review 20: 158-177.

Waldenfels, Bernhard, ed. 1978. Phänomenologie und Marxismus 3: Sozialpsychologie. Frankfurt am Main: Suhrkamp.

Weber, Max. 1978. Soziologische Grundbegriffe. 4th, revised edition. Tübingen: Mohr Siebeck.

Wüst, Klaus. 2006. Mikroprozessortechnik. 2. Updated and enhanced edition. Wiesbaden: Vieweg.

Zuboff, Shoshana. 2019. The Age of Surveillance Capitalism: The Fight for a Human Future at the New Frontier of Power. London: Profile Books.

Zuboff, Shoshana. 2018. Das Zeitalter des Überwachungskapitalismus. Frankfurt am Main: Campus.


Wikipedia (German): „Arbeit (betriebswirtschaftlich)“, accessed on 13 September 2023.

Wikipedia (German), accessed on 14 August 2023, accessed on 5 May 2022

About the Author

Friedrich Krotz

Prof. em. Dr. habil. Friedrich Krotz holds a diploma in mathematics and a diploma in sociology. Among other work, he taught and researched as a mathematician at the University of Saarland, as a sociologist at the University of Hamburg and the FU Berlin, and as a communication scientist at the Hans-Bredow-Institute for Broadcasting and Television. Since 2001 he has held professorships at the Universities of Münster, Erfurt, and Bremen. Besides research in Germany, he has conducted research projects in Mexico, Japan, and the USA, and with other teams in Europe. For eight years he was Editor-in-Chief of Communications – The European Journal of Communication Research; he was Head of Section in the IAMCR, where he was also an elected representative on the International Council. In the six years before his retirement, he was the founder and coordinator of the DFG-funded priority programme “Mediatized Worlds” with a total of 35 projects at various universities in Germany and Austria. He is currently particularly engaged in the analysis and critique of the computer, its use in society, and digitalisation.

[1] This paper develops a number of considerations from Krotz 2022, but also occasionally uses some formulations that were already used there.

[2] We add that, according to Marx, an additional step is then necessary for the full development and stabilisation of capitalism: the machine tools with which the machinery is produced must themselves no longer be produced by hand, but likewise through the division of labour. This would correspond to the industrial production of the computer and its programming as it takes place today.

[3] This is not to say that computers can always be replaced by humans; at least there are no studies on this. But the claim that computers will someday have the capabilities to eliminate all the evils of this world is probably more a deification of the machine that goes beyond anthropomorphisation.