… instead of control by IT and AI, or: From Norbert Wiener’s Cybernetics to ChatGPT
The starting point for discussions about supposedly “intelligent” technical systems was the series of ten interdisciplinary Macy Conferences held from 1946 to 1953. The aim of these conferences was to develop a universal science that could describe the functioning of both the human brain and electronic systems (especially computers). Today these systems are being introduced into schools as AI. By Ralf Lankau, published in: Fromm Forum (German edition, 2025, Tübingen)
Article in English (24 pages): Lankau (2025) Human intelligence and autonomy instead of control by IT and AI
Article in German (18 pages): Lankau (2025) Humane Intelligenz und Autonomie statt Steuerung durch IT und KI
“I like to compare our current naive approach to digital technologies with the way the Native Americans welcomed the Spanish conquistadors. These people had no way of knowing the significance of the arrival of a new power that brought their eventual subjugation.” (Shoshana Zuboff 2018b)
Prologue: The school
The scientist and science fiction author Isaac Asimov published a short story about the “school of the future” in 1954. In the year 2157, there are no schools at all anymore; instead, a small classroom is set up next to every child’s room, in which children and young people are taught by a mechanical teacher (a machine with a screen and a slot for inserting homework). This teaching machine is perfectly adjusted to the abilities of the respective child and can therefore teach them in the best possible way. But machines can break down. Eleven-year-old Margie is quizzed again and again in geography by her mechanical teacher; she gives the correct answers but receives ever lower marks. She despairs, her mother notices, and the school inspector is called in to fix the machine.
“Margie had hoped he wouldn’t know how to put it together again, but he knew how all right, and, after an hour or so, there it was again, large and black and ugly, with a big screen on which all the lessons were shown and the questions were asked. That wasn’t so bad. The part Margie hated most was the slot where she had to put homework and test papers. She always had to write them out in a punch code they made her learn when she was six years old, and the mechanical teacher calculated the mark in no time.” (Asimov 2016, p. 146.)
In just a few sentences, Asimov describes what is already common practice today or, if the providers of such systems have their way, is to become standard teaching practice. Children sit at screens and are taught by machines. Teaching and testing are delegated to machines and software and thereby depersonalized. Learning itself is reduced to what can be tested automatically, because neither a learning machine nor a piece of software “understands” anything of what it displays on the screen. Underlying this is the idea that learning processes can be calculated and controlled: metrics and prognostics instead of pedagogy. After all, what Norbert Wiener called cybernetics in 1948, what has been called artificial intelligence (AI) since 1956, and what was meant to enable “programmed learning” is essentially measurement and control technology (measure, steer, regulate) based on data and automated data processing. The aim is system and process control.
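The “art of control” at the heart of cybernetics is the closed feedback loop: measure a deviation, act against it, repeat. A minimal sketch in Python illustrates the principle; the “room temperature” scenario, the set point and the gain value are invented for illustration, not taken from Wiener:

```python
# Minimal cybernetic feedback loop: measure, steer, regulate.
# The "room temperature" model and all constants are illustrative
# assumptions.

def regulate(temperature, target=20.0, gain=0.5, steps=20):
    """Proportional controller: each step measures the deviation
    from the set point and adjusts the system to reduce it."""
    history = []
    for _ in range(steps):
        error = target - temperature   # measure the deviation
        temperature += gain * error    # steer: act against the deviation
        history.append(round(temperature, 4))
    return history

history = regulate(temperature=15.0)
print(history[-1])  # approaches the 20.0 set point
```

The point of the analogy: the controller “knows” nothing about warmth or comfort; it only minimizes a measured deviation. That is the precise sense in which cybernetic systems “control” without understanding.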
Cybernetics and artificial intelligence (AI)
The starting point for discussions about supposedly “intelligent” technical systems was the series of ten interdisciplinary Macy Conferences held from 1946 to 1953 (Pias 2016). The aim of these conferences, to which only selected scientists were invited, was to develop a universal science that could describe the functioning of both the human brain and electronic systems (especially computers). Luminaries of their disciplines such as Gregory Bateson, Heinz von Foerster, Warren McCulloch, Margaret Mead and John von Neumann discussed neural networks, (group) communication and language, computers and automated pattern recognition, as well as the emerging neurosciences. In 1948, Norbert Wiener published the mathematical generalization of this “art of control” as a book: Cybernetics or Control and Communication in the Animal and the Machine (1948). The basic idea that still runs through the models and applications of automated data processing (today: AI) is the assumption that organisms and social communities, too, can be described as machines or technical systems and can be calculated and controlled using corresponding (mathematical) models. (In his 1950 book The Human Use of Human Beings, Wiener already warned against the inhumane use of humans (p. 26 f.), but the models were in the world.)
The tools for this are the computers (calculating machines), databases and networks that have been developed since the mid-1930s. This includes the World Wide Web and its applications (apps). Anyone who uses a smartphone or tablet is constantly working with AI applications (search engines, route planners, social media platforms, streaming services and much more), albeit unknowingly and without being asked. It is therefore important to remember that the automated control of technical systems, but also of people and social communities, is the core idea of cybernetics, even if the field was renamed “artificial intelligence” (AI) in 1956 for marketing reasons. It was easier to attract funding for the promise of developing a system with “artificial intelligence” than for a project to “control human behavior using computers”.
The basic function of cybernetic systems is automated process control. This must be kept in mind, because in the context of the generative AI systems currently under discussion, the boundary between human and machine is being dissolved in both directions, and AI systems are being presented by providers as “assistants, coaches, teachers, partners”. In its 2023 statement Man and Machine – Challenges Posed by Artificial Intelligence, the German Ethics Council calls this interplay of anthropomorphism (the humanization of machines and the attribution of human characteristics to them) and animism (the attribution of cognitive and mental characteristics to machines) a historical constant – and error (Ethics Council 2023, p. 107 f.). After all, it is humans as (originally) social beings who assign human attributes to machines, because they need an (at least imaginary) counterpart. Machines do not communicate or interact; they function correctly or, like Margie’s school machine, have to be repaired or (re)programmed. Sabine Bendiek, head of Microsoft Germany, puts it plainly: “AI can do a lot of great things, but ultimately it calculates on the basis of large amounts of data” (Armbruster 2019). The humanization of (in this case: computing) machines gains a special dynamic from the fact that these systems are already in widespread use without users being able to prevent or influence this. Siri and Cortana can be switched off; the flow of data cannot.
The staged hype surrounding “generative AI”
In November 2022, the US company OpenAI put the ChatGPT text generator online. Since then, more than 100 million users worldwide have been working as unpaid beta testers. Neither AI systems nor chatbots are new – it was already possible to chat with Joseph Weizenbaum’s “Eliza” in 1966 – but Silicon Valley needed a new “big thing”:
November 30, 2023 (Süddeutsche Zeitung) “A year ago, a US company put a program online that was internally classified as a ‘low-threshold research preview’. Then all hell broke loose. […] Experts are not very impressed. They have known for years what AI is capable of, all the components of ChatGPT were ready. […] [But AI; rl] is currently filling an important gap for Silicon Valley. It is finally providing the tech industry with a grand narrative again that the golden future is being created by tech companies.” (Brühl 2023, p. 18.)
A grand narrative about a golden future brought about by tech companies … Once again, data processing technology is being attributed the potential for both world salvation and the end of the world – doctrine of salvation or doomsday prophecy? Marc Andreessen, who co-developed the first popular graphical browser, Mosaic, in the early 1990s (and went on to found Netscape), expects nothing less than the salvation of the world from AI. In the essay “Why AI will save the world”, he announces the good news that AI will not destroy the world, but “perhaps even save it”. A new era will begin:
“Everyone will have an AI assistant / coach / mentor / trainer / counsellor / therapist who is infinitely patient, infinitely compassionate, infinitely knowledgeable and infinitely helpful. The AI assistant will be there for all of life’s opportunities and challenges, maximizing each person’s results.” (Andreessen 2023)
Thanks to ubiquitous assistance systems, economic growth and productivity would accelerate dramatically; scientific breakthroughs, new technologies and medicines would be developed faster; the creative arts would enter a golden age; children would realize their full potential “with the machine version of infinite love”; and even wars would become shorter and less bloody thanks to better strategic and tactical decisions (ibid.). In its promise of salvation, this is reminiscent of John Perry Barlow’s “A Declaration of the Independence of Cyberspace” and its utopia of a global village, a peaceful and non-violent world community (Barlow 1996).
Renowned scientists, by contrast, warn against the unreflected use of AI systems. In May 2023, more than 1,300 IT and AI experts warned of the consequences of this technology and compared the potential dangers of AI with those of the atomic bomb or a pandemic (Center for AI Safety 2023). In an open letter, more than 33,000 scientists called for a moratorium (a pause for thought) and an interdisciplinary discourse on the consequences of such applications (Open Letter 2023). But economic interests weigh more heavily than technology assessment and responsibility for possible consequences. OpenAI’s main sponsor, Microsoft, has held a 49% stake since the beginning of 2019 (via the for-profit subsidiary OpenAI Global, LLC). Investments of more than USD 13 billion to date, and pledges of further double-digit billion sums in subsequent years, are the argument for this “huge social experiment”, as Judith Simon, member of the German Ethics Council, called the release of ChatGPT at the Munich Economics Debates in January 2024 (Brühl 2024). Following market logic, competitors have released their own AI tools (and multiplied the problem). After the beginnings in 1956 and the “expert systems” of the 1980s, this is already the third AI hype – each of its predecessors was followed by an “AI winter” because exaggerated promises could not be kept. (On the history of AI, see Seising 2021 and Manhart 2022.)
User control through digital twins
At best, “artificial intelligence” is the “simulation of intelligence” (Hansch 2023). These are data processing machines that work on the mathematical basis of pattern recognition, probability calculation and statistics and that, in the case of generative AI systems, can generate texts, graphics, videos or computer code from large amounts of data – output that formally looks as if it were produced by humans. Formally, because an AI does not “know” what it is calculating. The deception succeeds because all human communication and sign systems are rule-based, i.e. logically and structurally ordered (linguistics and semiotics). Such rule-based systems can be modeled mathematically and run just as mechanically as clockwork. But a clock does not know what time it is, no matter how precisely it runs. A clock does not even know what time is!
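How a statistical text generator produces plausible-looking sequences without “knowing” anything can be sketched with a toy bigram model. The tiny corpus and the function names are invented for illustration; real systems use vastly larger models, but the principle – continuing text purely from observed frequencies – is the same:

```python
import random
from collections import defaultdict

# Toy corpus; a real system trains on billions of words.
corpus = "the clock runs and the clock strikes and the bell rings".split()

# Count which word follows which: pattern recognition as pure statistics.
follows = defaultdict(list)
for a, b in zip(corpus, corpus[1:]):
    follows[a].append(b)

def complete(word, length=5, seed=0):
    """Continue a text by sampling observed successors of the last word."""
    rng = random.Random(seed)
    out = [word]
    for _ in range(length):
        options = follows.get(out[-1])
        if not options:           # no known successor: stop
            break
        out.append(rng.choice(options))
    return " ".join(out)

print(complete("the"))  # grammatical-looking, but nothing is "understood"
```

The output obeys the rules embedded in the corpus, which is exactly why it looks like language – and exactly why no comprehension is involved at any point.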
That is why all the fuss about “machines taking over and ruling” is science fiction. Machines want neither domination nor power. They have neither consciousness nor intentions or goals. One need not be afraid of (digital) systems, but of what people do with them. As with the web and social media services, we are already inevitably discussing the misuse of AI systems (cyberbullying, fake news and deep fakes, stalking, sexting, etc.). The misuse of media is nothing new. However, one design flaw of the web is that companies such as Alphabet/Google, Meta/Facebook and others have been able to define themselves as platform operators who are not responsible for the content published on their systems. With the Digital Services Act, the EU is correcting this, at least for Europe. What makes these systems critical is that AI is used to control not only technical systems but also people, their (consumer) behavior and their attitudes. Personality profiles (digital twins) can be generated from data and used to predict behavior and to test offers on digital subjects before they are deployed on real people.
“Data is fed into adaptive algorithms that create a digital double of us that behaves similarly to us. This can be used to test which information tempts us to buy certain products, download a computer virus, or hate refugees or other religions.” (Helbing, 2018, p. 2)
The individual at the digital end device is both a data donor for the big data collections of the data economy and an addressee for digital offers. While the control of machines and processes through IT and AI is the explicit task of engineers and computer scientists, the control of people violates their right to privacy and self-determination. The US economist Shoshana Zuboff described digital systems as control systems as early as 1988: automate, digitize, control (Zuboff 1988). This has since developed into an “age of surveillance capitalism” (Zuboff 2018). Frank Schirrmacher called this infrastructure technological totalitarianism back in 2015. Digital technology as a control tool is therefore unsuitable for social systems – and for educational institutions in particular.
Economic interests vs. pedagogy
Such concerns are rarely addressed in the public debate. Belief in progress and technological determinism (and the work of numerous lobbyists) dominate the discourse in educational institutions as well. AI systems are on the market, are used by teachers and pupils anyway, and therefore – so the argument goes – need to be integrated into schools. The question of which media are used for teaching is left to the providers of these systems. Digital providers compensate for the demonstrable lack of benefits and added value of IT systems (Balslev 2020) by claiming that the very question of the added value and benefits of IT (and now AI) in schools is wrong.
This is the argument of the Forum Bildung Digitalisierung e.V., an association of private foundations that advocates a “systemic digital transformation in the education sector”. The companies represented in the forum by their foundations come from the IT and telecommunications industry and/or are themselves players in the digital economy or the education industry. Their credo: “In projects, publications and events, we identify the conditions for success for digital change in schools and navigate through the necessary change processes” (FBD 2022). In the wake of such foundations come digitally minded teachers, bloggers and, more recently, “educational influencers” who adopt these positions and likewise only want to discuss education “under the conditions of digitalization” (Krommer 2018).
A look at school practice
Teaching practice, on the other hand, confirms the importance of face-to-face teaching, as studies from two corona years of pandemic-induced, digitally supported distance learning emphatically show. “No medium can replace the teacher as the person who structures and leads the lesson,” states Heidelberg education professor Karl-Heinz Dammer in his expert opinion (Dammer 2022, p. 5). The study by Engzell et al. (2021) shows that even pupils at technically very well-equipped Dutch schools, who were used to working with digital technology in face-to-face lessons, developed learning deficits through distance learning that correspond to the duration of the school closures. Among children from educationally disadvantaged families with a migration background, the learning deficits are significantly greater (Maldonado et al.).
A Frankfurt research group concludes: “Distance learning is as effective as summer vacation” (Hammerstein et al. 2021). Manfred Spitzer has compiled studies on nurseries and elementary schools (Spitzer 2022). Klaus Zierer adds studies on physical and psychological consequences (Zierer 2021, p. 37 f.). The studies by Andresen (Jugend und Corona) and Ravens-Sieberer (CoPsy I–III) likewise show serious consequences of forced social isolation for both physical and mental health, in addition to learning deficits. The same applies to university students: anxiety, depression, mental disorders and dropping out of university (DZHW 2021). In addition, many children and young people gained weight during the contact restrictions owing to lack of exercise and, connected with this, increased screen time.
UNESCO has examined the use of digital technology in schools worldwide and presented the results in the 2023 Global Education Monitoring Report (UNESCO 2023), subtitled “Technology in education: a tool on whose terms?”. The result: current IT concepts for educational institutions focus on the economic interests of IT providers and aspects of the data economy, not on benefits for learning and education. The first countries are reacting. In December 2023, the Danish Minister for Children and Education, Mattias Tesfaye, apologized for the fact that the Danish government had turned young people into “guinea pigs in a digital experiment” and in February published a series of strict recommendations for the use of digital devices in school and leisure (!), including a general ban on cell phones in schools and the blocking of irrelevant websites.
Following a report by the Karolinska Institute (2023), the Swedish government reversed its decision to make digital devices compulsory in preschools. Sweden’s Education Minister Lotta Edholm banned tablets from preschools and elementary schools, had textbooks printed again and requires teachers to spend time reading with their classes every day.
In January 2024, the French government commissioned the study “Enfants et écrans” (Children and Screens). The experts’ recommendations were published on April 30, 2024, including a general ban on digital media in kindergartens. Children up to the age of 13 should not be given a smartphone (at most a cell phone without internet access from the age of 11), and adolescents from the age of 15 should only use “ethical” (non-commercial) social networks such as Mastodon that define themselves as non-profit. Access to profit-oriented networks such as Instagram, Facebook, Snapchat or TikTok should only be permitted from the age of 18. The recommendations and bans are justified by psychosomatic effects such as depression, anxiety, screen addiction, attachment disorders (technoference), effects on sleep, lack of exercise and obesity.
More and more countries (currently one in six according to the UNESCO report, and the trend is rising) are banning private (!) devices in schools in order to make regular, face-to-face teaching and conversation possible again. There is no other way to counter the pull of the smartphone on pupils accustomed to social media and messaging – even devices switched off and stowed in school bags remain a distraction (Haidt 2023a; 2023b; Böttger et al. 2023). The consequence: banning smartphones throughout the school, including during breaks, so that pupils can act, play and communicate “normally” again. Devices brought to school are switched off and stored in lockers at the start of lessons and only handed back at the end of the school day. In return, schools must provide the digital devices required for lessons.
Teaching as a dialogical process
Teaching as an interpersonal process is always tied to people and direct dialog. The pedagogical triangle describes the three elements involved: teachers and learners, who must be present and face each other if one is to speak of pedagogical work and teaching, and, as the third corner, the subject matter – the topic or object of the lesson. Where appropriate and depending on the age group, topic and occasion, (analog and digital) media complement teaching and learning. It goes without saying that media belong in the classroom, but the teacher must be able to decide on their selection and use.
Fig. 1: The pedagogical triangle
Joint teaching and learning takes place in a social space and is a dialogical, discursive process. The classic term for this is “classroom discussion”, and it goes back to the Socratic dialog. “Socratic midwifery” (maieutics) is the art by which the teacher asks questions and the students gain knowledge through their own reflection and by formulating possible answers themselves. It is a two-way dialog in which the teacher leaves it to the students to learn to formulate and argue in their own words. If the teacher is removed from this triangle, as happens with IT and AI systems, teaching and learning become media-based instruction.
This is a possible form of learning, especially for advanced, intrinsically motivated and above all disciplined learners, but it is not teaching. Independent learning with media also requires prior knowledge, the ability to reflect and the power of judgment. Media-supported self-learning phases are a goal of education and teaching in order to introduce young people and, above all, adults (at university) to independent reading, learning and research that is also self-determined in terms of content. It is not a skill that can be expected or taken for granted in children and young people. This curiosity and willingness to make an effort is rarely found even in adults.
Learning at learning stations and with IT systems is neither self-determined nor individualized. The learning objective is just as predefined as the learning paths; “individualization” means nothing more than variance in the number of possible intermediate steps. It is algorithmically controlled “teaching to the test”, with small-scale integrated test loops and the calculation of the next tasks by learning-control software. The goal is predetermined and is to be reached as efficiently as possible. Unlike in Asimov’s story, technical systems today work with digitally generated figures (avatars as pseudo-characters) and computer-generated voices.
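The logic of such learning-control software – test, score, select the next task – can be made explicit in a few lines. In this sketch the task pool, the level rules and the simulated answers are invented for illustration; it shows that “individualization” here means nothing more than branching between predefined items:

```python
# Sketch of an adaptive drill loop: the "individual learning path"
# is just a rule for picking the next predefined item from a pool.
# Task pool and level rules are illustrative assumptions.

TASKS = {1: "easy item", 2: "medium item", 3: "hard item"}

def next_task(level, correct):
    """Move one level up after a correct answer, one down after a wrong one."""
    if correct:
        return min(level + 1, 3)
    return max(level - 1, 1)

# Simulated answer sequence: right, right, wrong, right.
level = 1
for correct in [True, True, False, True]:
    level = next_task(level, correct)
print(level, TASKS[level])  # ends at level 3: "hard item"
```

However many intermediate levels one adds, the structure stays the same: a predetermined goal, predefined items, and a scoring rule – nothing in it resembles a dialog.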
But Asimov anticipated the principle: social isolation in front of the display or touchscreen, the reduction of learning processes to the completion of predefined tasks with the aim of immediate verifiability, and technically generated rather than interpersonal interaction. The social space of school, with classrooms and playgrounds, is eliminated by the use of devices, as are learning together and helping each other. Learners in front of the display are ultimately reduced to passivity: you cannot have a discussion with a machine. Instruction by mechanical teachers (today: digital systems) reduces learning to knowledge that can be queried and repeated, and to checks of learning status for age and performance levels (today: competence grids and levels). Pedagogy becomes metrics and prognostics.
AI and chatbots in schools
The even more important question for educational institutions is: what do such systems do to pupils? The answer: young people are being accustomed to working at screens and to supposedly omnipotent systems that they can neither understand nor question. AI systems are black boxes with regard to how they work (algorithms), the underlying parameters (attributes, values) and the data itself. The database changes with every input. Every user input (prompt), together with the automatically generated response, becomes part of the database for subsequent queries and increases the data pool – without being checked for validity or relevance. These bots work with huge amounts of data. To obtain it, they scan the entire network; they are veritable “data vacuum cleaners”, with no regard for copyright or usage rights. As a result, more and more legal proceedings are pending.
As these systems modify themselves through “machine learning” (a euphemism for the automated generation of further algorithms based on pattern recognition and statistics) without humans controlling – or being able to control – these additions and modifications, not even the experts know what these systems are doing, or why. The concept of data maximization without quality control already leads to these systems delivering more and more erroneous results the longer they are used.
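One mechanism behind this degradation is discussed in the research literature as “model collapse”: when a statistical model is retrained on its own output, rare patterns disappear and are never recovered. A toy simulation makes the effect visible with nothing more than repeated resampling; the numbers are invented, and “retraining” is reduced here to drawing a new dataset from the old one:

```python
import random

# Toy illustration of "model collapse": each generation "trains" on
# the previous generation's output. All numbers are illustrative.

def resample_generations(data, generations=30, seed=0):
    """Draw each generation's 'training data' from the previous one;
    items lost by chance can never reappear, so diversity shrinks."""
    rng = random.Random(seed)
    for _ in range(generations):
        data = [rng.choice(data) for _ in range(len(data))]
    return data

start = list(range(100))               # 100 distinct "patterns"
end = resample_generations(start)
print(len(set(start)), len(set(end)))  # diversity shrinks sharply
```

The dataset size stays constant, but the variety within it collapses – an analogue, under these simplified assumptions, of a data pool that grows by absorbing its own unchecked output.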
Added to this is the arbitrariness of the results. Demonstrations show that bots can generate text or images in a matter of seconds from a prompt. But it is like a kaleidoscope: something different emerges with every turn, yet it remains arbitrary and random. The same applies to texts. Formally they are text forms such as reports or poems, but they remain generic. There is no learning or comprehension process for the users, because the results of the prompt are arbitrary. In this way, pupils are trained to function like a machine (generate a prompt for text X on subject Y) instead of initiating the learning and comprehension processes that allow young people to become communicative individuals. For that, they would have to experience curiosity and joy through their own active doing and gain knowledge in their own working process. The educator Gottfried Böhme identifies the even more profound danger posed by the use of AI systems in schools: “AI ruins the motivational structure of conventional teaching,” he explains:
“Artificial intelligence is breaking the backbone of schools as they exist today. There has never been an invention in the history of educational institutions that has so infamously called into question the entire motivational structure of the learning system as this atomic bomb AI – to put it bluntly. We are currently raising a generation of young people who, for a while, can still fool their teachers into believing that what ChatGPT or another program has given them is their achievement, and soon no longer know why learning should be worthwhile at all.” (Böhme 2023, p. 9)
ChatBots get young people used to trusting technical systems and their calculations as supposedly valid aids. Convenience and laziness are well-known human characteristics, but unfortunately not very helpful when it comes to learning. The supposedly simple production of texts, images or presentations prevents people from engaging with language and other sign systems and undermines the creative act of generating ideas and concepts. The education (seduction) of convenience through the use of bots suppresses the play of fantasy and imagination, which should be at the beginning of every work, whether essay, composition or drawing.
What to do?
“Problems can never be solved with the same way of thinking that created them,” Albert Einstein is said to have remarked. Anyone who has followed the sobering results of the last PISA test in 2022 or of the IQB Education Trend 2021 (Stanat et al. 2022) on the academic performance of primary school pupils in Germany will see no solution in the use of digital technology in schools, pushed for more than 40 years, or in the empirical turn in educational science implemented with the first PISA test. Weighing the pig does not make it fat. It would be intelligent to end the orientation of schools towards neoliberal guidelines (keywords: human capital theory, the school as enterprise) as well as the cybernetic attempts at control by means of psychotechnologies (Stern et al. 1903).
We need a pedagogical turnaround. Instead of defining school and teaching in terms of media and digital technology and measurable learning achievements, the focus of educational work must once again be on educating and training people and “teaching understanding” (Andreas Gruschka). Instead of social isolation on the display (including the resulting social and psychological problems), dialog in and with the class and the learning community in presence must once again become the core of pedagogical practice.
Educational institutions are not institutions for measuring competence. They must reflect on their origins and their goal: To be a place of leisure, imparting values and education, in which people become mature, self-responsible individuals who contribute to society out of intrinsic motivation and conviction. At a school leadership symposium back in 2017, the following criteria for an adequate education for an open future were formulated as educational policy goals: “a stronger perspective orientation towards personal development, maturity, promotion of a sense of community, personal responsibility, responsible participation in democracy and respectful treatment of the fragile environment” (Simanowski, 2021). None of these learning objectives can be taught using IT and AI. Values as a basis for one’s own actions only arise through commitment and trust, in dialog and discourse.
A quote attributed to Albert Einstein reads: “If you want your children to be intelligent, read them fairy tales. If you want them to be more intelligent, read them more fairy tales.” Images, characters and stories are created in the mind. It’s called fantasy and imagination. This can give rise to new worlds in science, art and culture. This is one of the most important goals of education: To be able to develop ideas and imagination. We – as parent and student representatives, teachers and school authorities – must break through the logic of the data economy and education industry and regain autonomy over educational institutions and their technical systems (keyword: digital sovereignty). Federal President Frank-Walter Steinmeier formulated this for the political sphere in 2019:
“It is not the digitalization of democracy that we need to concern ourselves with first and foremost, but the democratization of the digital! […] Reclaiming the political space – against the brutalization and truncation of language and debates, but also against the enormous concentration of power in a handful of data giants from Silicon Valley – that is the most urgent task!” (Steinmeier 2019.)
The goal is not the digital transformation of educational institutions, but the reclaiming of educational space – and sovereignty over one’s own thoughts, feelings and actions. Israeli historian Yuval Noah Harari was asked why he didn’t own a smartphone. The answer of the scientist, who is decidedly concerned with the effects of digitalization on human behaviour, should make you think: he is not naive and knows that he can be followed in an increasingly smart environment even without a smartphone. It’s about more than that:
“The main point is to keep distractions away. I know how difficult it is to control the mind, to stay focused. And also, the people on the other side of the smartphone – the smartest people in the world – have spent the last 20 years learning how to hack the human brain through the smartphone. I’m no match for them. If I have to compete against them, they will win. So I won’t give them my screen, I won’t give them direct access to my brain.” (Matthes 2021.)
Margie’s teaching machine can hack brains today. We need to rethink and move away from the fixation on digital technology. If adults prostitute themselves online with their data, that is their business. Parents and school authorities are responsible for children and minors and must protect them from data collectors. In the USA, the Children’s Online Privacy Protection Act (COPPA), which makes the storage and analysis of data of minors under the age of 13 a punishable offense, has been in force since 1998. The EU has no corresponding regulation that consistently cuts off the back channel for minors’ data. The next question is whether digital and network technologies can be used in educational institutions to emancipate people and promote their autonomy and freedom of action (Lankau 2020), or whether IT systems will continue to serve the business interests of the data economy and remain instruments for the (behavioral) control of people.
Epilogue: She was thinking about the fun they had
In the second part of Asimov’s story, thirteen-year-old Tommy finds an old book in the attic. It describes how school used to be organized: children were not taught by machines but by men. Margie protests: no man was clever enough for that, no man could know as much as a mechanical teacher. She says:
“I wouldn’t want a strange man in my house to teach me.” Tommy screamed with laughter. “You don’t know much, Margie. The teachers didn’t live in the house. They had a special building and all the kids went there.” “And all the kids learned the same thing?” “Sure, if they were the same age.” “But my mother says a teacher has to be adjusted to fit the mind of each boy and girl it teaches and that each kid has to be taught differently.” “Just the same they didn’t do it that way then. If you don’t like it, you don’t have to read the book.” (Asimov 2016, p. 157.)
Margie begs to be allowed to read the book. But her mother calls her to order: she must return to her school machine. As she sits alone again in front of the now repaired mechanical teacher, she imagines what it would be like to learn together with other children in a classroom, to play together and to help each other. The last sentence, which gives the story its English title, has Margie thinking how happy the children must have been in the old days: “She was thinking about the fun they had.”
Definition of terms
- Algorithm: instructions for solving a task; in computers, rules for processing data.
- Artificial intelligence (formerly cybernetics): mathematical methods (pattern recognition, statistics, probability calculation) for automated data processing.
- ChatGPT (from “to chat”): a program that completes texts using statistical methods, on the basis of a database.
- Digitization: the technical transformation of any signal into a machine-readable format (digitized data).
- Digital transformation: the reorganization of any process (communication, teaching, learning, production …) so that it can be digitally recorded and evaluated/controlled. As a result, only what can be digitally represented counts.
- Generative AI: automated data-processing systems that simulate originally human skills such as writing texts and designing graphics, presentations, videos and much more.
- GPT: “Generative Pre-trained Transformer”, a large language model (LLM) that is trained on large amounts of text and can then generate text itself without “knowing” what it is calculating (generating).
- Cybernetics: control; the term derives from the Greek word for helmsman. Cybernetic systems are used for process control and optimization (measure, regulate, control).
- Post-digitality: the complete datafication of human behavior and living spaces is no longer questioned because it is omnipresent.
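The “statistical methods” named in the ChatGPT and GPT entries can be made concrete with a deliberately minimal sketch: a bigram model that counts which word most often follows which in its training text and then “completes” a prompt from those frequencies alone. This is not how GPT works internally (GPT uses a neural transformer over sub-word tokens), but it shows in miniature the sense in which such a system generates text without “knowing” what it calculates; the names used here (`corpus`, `complete`) are illustrative, not part of any real system.

```python
from collections import defaultdict, Counter

# Deliberately tiny training "corpus". Real language models are trained
# on billions of words; this sketch only illustrates the principle.
corpus = (
    "the machine teaches the child and the child answers the machine "
    "the teacher asks and the child answers"
).split()

# Count, for each word, how often each possible successor follows it.
following = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    following[current][nxt] += 1

def complete(prompt_word, length=5):
    """Continue a text greedily: always append the most frequent successor."""
    words = [prompt_word]
    for _ in range(length):
        successors = following.get(words[-1])
        if not successors:  # word never seen, or seen only at the very end
            break
        words.append(successors.most_common(1)[0][0])
    return " ".join(words)

print(complete("the"))  # → the child answers the child answers
```

Replacing the greedy choice with weighted random sampling over the counted frequencies would already produce the varying outputs familiar from chatbots – the text remains a calculation over frequencies, not an act of understanding.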
References
(In parentheses: date of last access, for online sources.)
Andreessen, Marc (2023): Why AI Will Save the World; (19.6.2023).
Armbruster, Alexander (2019): Nicht jeder muss ein Informatiker sein, Interview mit Microsoft-Deutschland-Chefin Sabine Bendiek, Frankfurter Allgemeine Zeitung (FAZ) vom 1. April 2019.
Asimov, Isaac (2016): Die Schule [1954], in: Geliebter Roboter, 3. Aufl., München (Heyne), S. 154-158.
Balslev, Jesper (2020): Evidence of a potential. The political arguments for digitizing education 1983 – 2015. Ph.D. Dissertation of Jesper Balslev, Department of Communication and Arts, Roskilde University, January 2020.
Barlow, John Perry (1996): A Declaration of the Independence of Cyberspace; (12.05.2024)
Böhme, Gottfried (2023): ChatGPT bricht der Schule das Rückgrat, in: FAZ vom 14. September 2023.
Böttger, Tobias; Poschik, Michael; Zierer, Klaus (2023): Does the Brain Drain Effect Really Exist? A Meta-Analysis, in: Behavioral Sciences, Band 13, S. 751. https://doi.org/10.3390/bs13090751.
Brühl, Jannis (2023): Ein Jahr Chat-GPT: Keine Hoffnung auf Luxuskommunismus, in: SZ vom 30. November 2023, S. 18.
Brühl, Jannis (2024): KI ist bald überall, in: Süddeutsche Zeitung (SZ) vom 7. Februar 2024, S. 15.
Center for AI Safety (2023): Statement on AI Risk: AI experts and public figures express their concern about AI risk. https://www.safe.ai/work/statement-on-ai-risk (26.5.2024).
Dammer, Karl-Heinz (2022): Gutachten zur Digitalstrategie der KMK des Landes NRW (2022), PDF: https://phvnrw.de/wp-content/uploads/2022/09/PhVNRWGutachtenDigitaleWeltimDiskurs150dpi.pdf.
Delgado, Pablo; Vargas, Cristina; Ackerman, Rakefet; Salmerón, Ladislao (2018): Don’t throw away your printed books: A meta-analysis on the effects of reading media on reading comprehension, in: Educational Research Review, Band 25, S. 23–38. https://doi.org/10.1016/j.edurev.2018.09.003.
Deutscher Ethikrat (2023): Stellungnahme «Mensch und Maschine – Herausforderungen durch Künstliche Intelligenz» des Deutschen Ethikrats vom 20. März 2023, hier Kap. 3.4.2 Der Mensch als Maschine – die Maschine als Mensch?, S. 107 f.
DZHW (2021): Deutsches Zentrum für Hochschul- und Wissenschaftsforschung: Studieren in Deutschland zu Zeiten der Corona-Pandemie, Rubrik: Publikationen; https://www.dzhw.eu/forschung/projekt?pr_id=665 (30.8.2022).
Engzell, Per; Frey, Arun; Verhagen, Mark (2020, October 29): Learning Inequality During the Covid-19 Pandemic. https://doi.org/10.31235/osf.io/ve4z7 (5. Mai 2024).
FBD (2022): Forum Bildung Digitalisierung: https://www.forumbd.de/verein/; 20.11.2023 (3.6.2024).
Gelhard, Andreas (2011): Kritik der Kompetenz, Zürich (Diaphanes).
Gruschka, Andreas (2011): Verstehen lehren. Ein Plädoyer für guten Unterricht, Ditzingen (Reclam).
Haidt, Jonathan (2023a): The Case for Phone-Free Schools. The research is clear: Smartphones undermine attention, learning, relationships, and belonging, https://jonathanhaidt.substack.com/p/phone-free-schools (6.6.2023 / 29.8.2023).
Haidt, Jonathan (2023b): Get Phones Out of Schools Now. They impede learning, stunt relationships, and lessen belonging, https://www.theatlantic.com/ideas/archive/2023/06/ban-smartphones-phone-free-schools-social-media/674304/ (June 6 2023).
Haidt, Jonathan (2024): Generation Angst, Hamburg (Rowohlt).
Hammerstein, Svenja; König, Christoph; Dreisörner, Thomas; Frey, Andreas (2021): Effects of COVID-19-Related School Closures on Student Achievement — A Systematic Review. 2021 https://psyarxiv.com/mcnvk/; https://doi.org/10.3389/fpsyg.2021.746289 (3.6.2024)
Hansch, Dieter (2023): Der ehrlichere Name wäre «Simulierte Intelligenz», in: FAZ vom 1. März 2023, S. N2.
Helbing, Dirk (2018): Untertanen des Digitalen, in: SZ vom 25. März 2018, S. 2.
IQB-Bildungsbericht (2022): https://www.iqb.hu-berlin.de/bt/BT2021/Bericht/.
Karolinska-Institut (2023): Stellungnahme des Karolinska-Institutes zur nationalen Digitalisierungsstrategie in der Bildung (2023). – https://die-pädagogische-wende.de/wp-content/uploads/2023/07/Karolinska-Stellungnahme_2023_dt.pdf.
Krommer, Axel (2018): Wider den Mehrwert! Oder: Argumente gegen einen überflüssigen Begriff, https://axelkrommer.com/2018/09/05/wider-den-mehrwert-oder-argumente-gegen-einen-ueberfluessigen-begriff/ (12.6.2023).
Lankau, Ralf (2020): Alternative IT-Infrastruktur für Schule und Unterricht. Wie man digitale Medientechnik zur Emanzipation und Förderung der Autonomie des Menschen einsetzt, statt sich von IT-Systemen und Algorithmen steuern zu lassen, Köln (Gesellschaft für Bildung und Wissen).
Lankau, Ralf (Hg.) (2024): Die pädagogische Wende. Über die notwendige (Rück-)Besinnung auf das Unterrichten, Heidelberg (Beltz).
Maldonado, Joana; De Witte, Kristof (2020): The effect of school closures on standardised student test outcomes. https://www.researchgate.net/publication/344367883_The_effect_of_school_closures_on_standardised_student_test_outcomes (5. Juli 2021).
Manhart, Klaus (2022): Eine kleine Geschichte der Künstlichen Intelligenz (7 Folgen); https://www.computerwoche.de/a/eine-kleine-geschichte-der-kuenstlichen-intelligenz,3330537 (20.8.2023).
Matthes, Sebastian (2021): Sie haben gelernt, unser Gehirn zu hacken, Interview mit dem Historiker Yuval Noah Harari, in: Handelsblatt vom 30. Dezember 2021 bis 2. Januar 2022, Nr. 253, S. 16-18, https://futur-iii.de/2022/01/sie-haben-gelernt-unser-gehirn-zu-hacken/ (12.6.2023).
Open Letter (2023): Pause Giant AI Experiments: An Open Letter; https://futureoflife.org/open-letter/pause-giant-ai-experiments/, publiziert am 22. März 2023 (26.5.2024).
Pias, Claus (2013): Eine kurze Geschichte der Unterrichtsmaschinen, in: FAZ vom 10. Dezember 2013; www.faz.net/aktuell/feuilleton/forschung-und-lehre/automatisierung-der-lehre-eine-kurze-geschichte-der-unterrichtsmaschinen-12692010.html (30.6.2022).
Pias, Claus (Hg.) (2016): Cybernetics | Kybernetik 2. The Macy-Conferences 1946–1953. Band 2. Documents/Dokumente.
Ravens-Sieberer, Ulrike, et al. (2021): Seelische Gesundheit und psychische Belastungen von Kindern und Jugendlichen in der ersten Welle der COVID-19-Pandemie – Ergebnisse der COPSY-Studie, 01. März 2021; PDF: https://link.springer.com/content/pdf/10.1007/s00103-021-03291-3.pdf (5.7.2021).
Schirrmacher, Frank (Hg) (2015): Technologischer Totalitarismus, Berlin (Suhrkamp).
Seising, Rudolf (2021): Es denkt nicht. Die vergessenen Geschichten der KI, Frankfurt (Büchergilde Gutenberg).
Simanowski, Roberto (2021): Digitale Revolution und Bildung, Weinheim (Beltz).
Spitzer, Manfred (2022): Digitalisierung in Kindergarten und Grundschule schadet der Entwicklung, Gesundheit und Bildung von Kindern. Kommentar. Zum Gutachten der Ständigen Wissenschaftlichen Kommission der KMK vom 19.9.2022, in: Geist und Gehirn. Nervenheilkunde 2022, Heft 41, S. 797–808; https://www.thieme-connect.com/products/ejournals/pdf/10.1055/a-1826-8225.pdf (18.11.2022).
Stanat, Petra; Schipolowski, Stefan; Schneider, Rebecca; Sachse, Karoline A.; Weirich, Sebastian; Henschel, Sofie (Hg.) (2022): IQB-Bildungstrend 2021. Kompetenzen in den Fächern Deutsch und Mathematik am Ende der 4. Jahrgangsstufe im dritten Ländervergleich, https://box.hu-berlin.de/f/e907cc6bb64440de8408/?dl=1 (29.11.2022).
Steinmeier, Frank (2019): Rede zur Eröffnung der Podiumsdiskussion «Zukunftsvertrauen in der digitalen Moderne» beim 37. Deutschen Evangelischen Kirchentag am 20. Juni 2019 in Dortmund, https://www.bundesregierung.de/breg-de/service/bulletin/rede-von-bundespraesident-dr-frank-walter-steinmeier-1640914 (27.2.2020).
Stern, William (1903): Angewandte Psychologie, in: L. W. Stern, E. Bernheim (Hg.): Beiträge zur Psychologie der Aussage : mit besonderer Berücksichtigung von Problemen der Rechtspflege, Pädagogik, Psychiatrie und Geschichtsforschung, Band 1: Beiträge zur Psychologie der Aussage. Leipzig (Barth), 1903–1904, S. 4–45.
UNESCO (2023): Technology in Education – A tool on whose terms? https://www.unesco.org/gem-report/en/technology, German translation and excerpts: https://die-pädagogische-wende.de/unesco-bericht-zu-it-in-schulen-fordert-mehr-bildungsgerechtigkeit/ (24. Mai 2024).
Weizenbaum, Joseph (1976): Die Macht der Computer und die Ohnmacht der Vernunft, Frankfurt (Suhrkamp).
Wiener, Norbert (1948): Cybernetics or Control and Communication in the Animal and the Machine, Cambridge (MIT Press); dt.: Kybernetik: Regelung und Nachrichtenübertragung im Lebewesen und in der Maschine, Düsseldorf (Econ), 1963.
Wiener, Norbert (1950): The Human Use of Human Beings. Cybernetics and Society, New York (Doubleday Anchor); dt.: Mensch und Menschmaschine. Kybernetik und Gesellschaft, Frankfurt (Vittorio Klostermann), 2022.
Zierer, Klaus (2021): Ein Jahr zum Vergessen. Wie wir die Bildungskatastrophe nach Corona verhindern, Freiburg, Basel, Wien (Herder).
Zuboff, Shoshana (1988): In the Age of the Smart Machine. The Future of Work and Power, New York (Basic Books).
Zuboff, Shoshana (2018a): Zeitalter des Überwachungskapitalismus, Frankfurt (Campus).
Zuboff, Shoshana (2018b): Shoshana Zuboff über Überwachungskapitalismus: «Pokémon Go – ein wahres Menschenexperiment», Spiegel-Interview 40/2018.