Under certain rare conditions, ants may find themselves in what is referred to as a circular mill. Because each ant follows the scent trails of the ants scurrying in front of it, and those in front follow the trails of those following them, the whole insect collective can get stuck in a dangerous behaviour pattern, frantically circling. Without external intervention, the ants cannot recognise what they are doing, so they persist in this circular mill until they drop dead.
This is an example from nature of something that systems theorists refer to as a problem of uninterrupted positive feedback. In his book Systemantics, John Gall notes the following:
“Alternating positive and negative feedback produces a specific form of stability represented by endless oscillation between two polar states or conditions.”
In other words, systems are most likely to achieve functional stability when both positive and negative feedback are permitted. An aeroplane is a more mechanical example: the internal structure of the plane needs to cohere (that’s positive feedback), but it also needs to be sufficiently capable of responding to often formidable environmental conditions (negative feedback) to stay airborne. Positive feedback alone is especially dangerous. A similar analogy can be drawn from sound systems. Gall writes, “Positive feedback in electronic systems leads to a loud squeal and loss of function.” Loud sound distorts. Aeroplanes crash. Ants die.
This is not to say, of course, that negative feedback is ultimately absent in any of the above examples. Rather, it is simply postponed. A certain form of temporality is especially evident in any system that totalises positive feedback by reinforcing only what is immediately apparent. Positive feedback grips onto immediacy. It takes time, after all, to allow negative feedback. And if no time is allowed, eventually the ultimate source of negative feedback, reality itself, catches up to the system. When it does, the delayed negative feedback arrives all at once. The system collapses into a state beyond any hope of recovery.
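To make the systems-theoretic point slightly more concrete, here is a minimal toy sketch, entirely my own illustration rather than anything drawn from Gall, of a single state variable updated under two feedback regimes: positive feedback alone, which runs away without limit, and alternating positive and negative feedback, which settles into the bounded oscillation Gall describes. The function names, gains, and thresholds are arbitrary choices made purely for illustration.

```python
# Toy sketch (my own illustration, not from Gall): one state variable,
# two feedback regimes, printed as short trajectories.

def run(steps, feedback):
    """Repeatedly nudge a state by whatever the feedback rule returns."""
    state, history = 1.0, [1.0]
    for _ in range(steps):
        state += feedback(state)
        history.append(round(state, 2))
    return history

# (a) Positive feedback only: every step reinforces the current tendency,
#     so the state grows without limit (the circular mill, the squeal).
positive_only = run(12, lambda s: 0.5 * s)

# (b) Alternating feedback: push down when the state is too high, up when
#     it is too low, giving endless oscillation between two polar states.
alternating = run(12, lambda s: -0.8 if s > 1.0 else 0.8)

print("positive only:", positive_only)
print("alternating  :", alternating)
```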
In our current academic context (something I think about often because it is the context I inhabit), now that Large Language Model (LLM) AI is on the scene, we are naturally drawn to questions concerning what this technological development means for university education. Nevertheless, I am mindful of Martin Heidegger’s insight that we should not be too quickly allured into considering only the consequences of any technology but should also ask what the technology itself is a consequence of. In other words, we need a reference point outside the technological frame from which to judge the technological frame. If we are to avoid the problem of the circular mill, negative feedback is indispensable.
For this reason, I attempt to supply the often-missing negativity; and, in fact, I argue that the ‘cure’ for our problems with AI is not yet another technological solution but a decisively non-technological and fundamentally human one, namely (at least, this is my primary focus) free time, especially time away from electronic technologies. My proposal of free time (time to read, ponder, imagine, write, time to adopt philosophy as a way of life, and so on) stems from recognising that AI unavoidably operates from and within what I call a ‘schizotemporal’ frame, devoid of rhythm and duration, and is therefore antagonistic towards what has always been at the root of the academic enterprise, namely contemplation.
Unfortunately, the university in its most modern form has operated in keeping with this schizotemporal mode for quite a while now, since long before I started playing the game. AI simply continues and entrenches the trend; it reinforces what has long been a system of redoubling positive feedback. I want to examine what current debates on AI in higher education tend to overlook, namely how AI contributes to the ‘schizotemporality’ of the contemporary university environment. AI attacks what academics need most if they are to achieve the kind of quality work they are (hopefully) after.
Here, very briefly, is a list of what is currently being debated in the discourse around AI in higher education: plagiarism and academic integrity; job displacement and the changing roles of educators; the biases of AI; privacy, data security and data ownership (including so-called data colonisation); the uncertainty about what further technological developments in AI are likely; and a general concern that using AI will have unforeseen repercussions. Overwhelmingly, this focus suggests that the consequences of AI ought to be our main consideration. Such debates fall into the widespread trap of thinking that digital technology is so integral to education that anything beyond the digital must be disregarded automatically, that is, in the manner of an automaton. Because of this, they suggest that the question of what to make of AI is one of control or, to use a softer term, management.
In Heidegger’s thinking, technology, in its very essence, sets specific limits on how reality can manifest. These limits predetermine and thus predefine what we are allowed to encounter, which tends to have a decisive funnelling effect on perception. What is eradicated from the technological frame is deemed irrelevant, even if it is anything but. With every technological intervention, the world becomes ever smaller; in a way, you could even say it disappears. Byung-Chul Han, in The Scent of Time, describes this as follows:
Modern technology moves the human being away from the earth. Aeroplanes and spaceships pull the human being away from the earth’s gravitational field. The further one moves away from the earth, the smaller it gets. And the faster one moves on the earth, the more it shrinks. Every removal of distance on the earth brings with it an increasing distancing of the human being from the earth, thus estranging the human being from it. The internet and electronic mail let geography, even the earth itself, disappear. Electronic mail carries no mark indicating the place from which it was sent; it is without space. Modern technology de-terrestrializes human life.
Although Han does not mention AI directly, the resonances with what it does should be fairly obvious. With this in mind, consider Hartmut Rosa’s assessment of modernity: it has nearly perfected the human being’s “ability to establish a certain distance from the world while at the same time bringing it within our manipulative reach.” Everything becomes atomic, endlessly removable from context and combinable with other abstract atomic structures or nominalist designations. This is how big data works, and it is also how machine learning works. Certain ideological forms in our time become plausible because nothing supposedly belongs to a world; everything belongs only to whichever abstract category it has been assigned. What is distant, i.e. removed from direct personal experience, becomes all-important. What is near disappears.
Universities have been at the forefront of training their subjects to regard this dispositional option as natural, or even as imperative. We scholars are all trained into unworldedness. We are conditioned, perhaps propagandised, into thinking that meaning must always be imposed from without and mediated by the most recent technological interventions. But this, as Han and Rosa argue, is precisely to destroy meaning itself. Without access to reality, and by this I do not mean our merely methodological constrictions regarding what we are allowed to discover in reality, we lose contact with our ability to be open to the world.
A non-electronic technological example of this disposition may help to clarify my point. Whalers with their tools do not seek to encounter a whale; they do not want the whale to be manifest in all of its whale-ishness; rather, their entire technological intention is to destroy that beautiful creature for the sake of its parts. “The essence of modern technology is enframing,” Heidegger writes. “Enframing belongs within the destining [i.e. constricting or truncating] of revealing.” In other words, by placing an artificial frame around things, technology constrains our access to the fullness of discoverable meaning. Technology is essentially impoverishing, although I realise that the quality of impoverishment will differ depending on the tools we use. Perhaps some impoverishment is helpful in certain research contexts, but admitting this does not amount to insisting that a certain kind of impoverishment ought to be universal. AI undoubtedly enhances, one might even say perfects, de-terrestrialisation. Every aspect of the technology demonstrates this. In its very essence, in its very being, it refuses what Han calls the “re-terrestrialization” and “re-factualization” of the human being. And since space and time are inextricably linked, de-terrestrialisation (de-siting) implies a certain de-temporalisation (de-timing). Machine-time becomes entirely abstracted from the world.
What concerns me is what Neil Postman notes, following McLuhan, about all technologies. Technologies present us with “ungraded” and invisible curricula. They train us to think according to what they are. Keep in mind that any medium is best understood not by what you say about it but by what it does. Just as television “educates by teaching children to do what television-viewing requires of them,” AI educates and shapes education by surreptitiously teaching scholars what AI requires of them. To understand all of this rightly means making explicit those aspects of AI that are not discussed. The counter-intuitive implication of this is that we should try to understand everything AI does apart from its most explicit functions.
Here is a description of LLM AI as inherently impoverishing (@jeffowski, 13 July 2024):
“The underlying purpose of AI is to allow wealth to access skill while removing from the skilled the ability to access wealth.”
Yes, of course, this is not all that AI does, and yet it is unreasonable to ignore this assessment. Technology is never a merely neutral thing but always operates as a total environmental overhaul. Technologies always impose dramatic epistemological and ethical shifts. They force us to interpret the world differently.
A brilliant articulation of this is found in the classic 1946 work The Failure of Technology, written by Heidegger’s friend Friedrich Georg Jünger. As I will get to, Jünger perceives particularly acutely how technology affects time and freedom, and he therefore helps me to bridge current concerns about AI with the issue of academic freedom. First, though, we need a few of his other argumentative coordinates in mind. We should keep in mind, for instance, his contention that “within the realm of technology there exist solely technical purposes.” By implication, when we adopt any new technology, we find ourselves almost forced to ask technical questions concerning that new technology. We lose touch with the question of meaning, as meaning becomes subordinate to method.
To be clear, it is in the nature of all technology to restrict thinking. What concerns me is the loss of a certain richness, layeredness, and flexibility of thought directly connected to living human experience. A pencil is one technology that doesn’t stop me from daydreaming, doodling, drawing, mind-mapping, writing backwards, or adopting any number of other modes of thought. It is fairly easy to put a pencil down while I let my mind wander all over the place. Inductive, deductive and abductive ways of thinking are left to roam and interact fairly freely. The problem comes in with especially complex technological mediations, which result in a certain hermeneutical impoverishment. Whatever the technology organises, it organises in keeping with its own being. If its own being requires more of us, as is the case with AI more than with hammers and knives, our capacity to think gets reconfigured by the technology itself. I say reconfigured but not improved, because we would need a way to ascertain improvement that transcends strictly technological confines.
Moreover, the “object of the organization,” Jünger writes, “cannot be what is already organized; the organization must necessarily seize upon things as yet unorganized.” But this is done, as I have already suggested, according to the principles of organisation embedded in the technology itself. Here we sense a certain redefinition of freedom. The specific technology becomes a rule or a law, often unconsciously adhered to, according to which things must be done. It becomes a system to be conformed to. Of course, in research, constraints are inevitable. But AI, being a particularly complex technological intervention, presents an unusual, and as yet far from fully determinate, set of restrictions.
AI is also, by nature, not entirely transparent in terms of its functions; its indeterminacies are far from apparent even in the simplest instances. If AI does not give me what I ask for, it is only after the machine has wasted my time that I can know that it has wasted my time. That is to say, the constraints the technology imposes in advance of any academic task are not necessarily explicit ahead of my using it. With AI, a certain paradoxical consequence of this indeterminacy is especially evident: while the technology itself insists on a managerial frame, the elements to be managed are in a state of constant flux. This fact at least still leaves some room for the possibility, perhaps the necessity, of a genuinely intelligent human presence in any AI-mediated process.
Let’s take a simple example of one AI development in progress, one that speaks more directly to the plagiarism debate around AI and that undoubtedly has implications for academic freedom. Implicit in this debate is the largely unexamined and typically misunderstood value of academic integrity. Because this value is not spelled out, it easily becomes subordinate to technical issues. The risk-management paradigm gets swallowed up in a series of further technical questions: AI programs are developed to detect plagiarism, then adjusted and improved as other AI programs are designed to bypass that detection, while yet other AI programs are developed to trick the AI-plagiarism-detection software. None of this is to say that the programs themselves perform well. To speak more concretely, we have LLMs like ChatGPT that imitate human writing. Detection tools like Turnitin’s have been built to flag such machine-generated prose, with worrying levels of reliability. And, as we might have predicted, other tools like StealthGPT have been developed to evade detection. I’m setting aside, for the moment, the fact that any thinking person can often spot algorithmic writing better than AI-detection programs can.
Here it is worth turning to another of Jünger’s points about technological developments and their technical focus. Jünger says that once the technological-managerial paradigm has been adopted, a ballooning bureaucracy is inevitable. No one who has worked in a university for any length of time would deny that, as more and more electronic tools have been added (tools for presentations, online teaching, online administration, and so on), the job of the academic, usually described as concerned only with teaching and research, has become predominantly administrative. It has yielded to what René Guénon, in 1945, referred to as the reign of quantity. It is one of the laws of systems that “people in systems do not do what the system says they are doing.” In academia, as the reliance upon technologies has increased and as the bureaucracy has grown, teaching and research have become different facets of the academic’s main function, which is more managerial than scholarly. As student numbers grow, management becomes trickier, and more technological solutions, including therapeutic managerialism, are added. But to add only technological solutions to technological problems is like providing cyanide as a cure for food poisoning. The solution is an echo, and perhaps a further manifestation, of the existing problem.
Already in my example of AI plagiarism, we can see that we are no longer dealing with something as simple as a student writing an assignment and submitting it to a professor for assessment. Scholarly aims and intentions become diversions from the technological frame. We are dealing with plagiarism management: students try, as some of mine have, to make use of different AI to help them with their research, while lecturers now make use of AI to check whether the writing is the result of the student’s own thought processes. But just think of the programmers involved, the marketing and salespeople, the purchase orders, the application demonstrations, the transactions that go into buying the software, and so on. Consider the various email exchanges, debates, meetings, and seminars going on, together with the multiple academic misconduct hearings and further administrative repercussions that have happened and have yet to happen, simply to address the more than tenuous question of what we ought to do about the fact that students are using AI to cheat (even as AI steals the work of others to learn how to cheat or how to catch cheaters). A student submits a paper and, before you know it, hundreds of people are involved, directly or indirectly, visibly or invisibly, in this technological-managerial instance.
Given some of the debates around AI ethics, and here I mean the way ethics committees might tackle how AI is used in research, the technical questions arising are almost too numerous to count, depending, of course, on what kinds of AI might be used during the research. It is not unimaginable that entirely separate research ethics committees might need to be set up later to deal with the problem. But this is unlikely to be a decision made by any individual university. If a few universities start doing this, and if various councils for higher education catch wind of it, the trend will become contagious. Universities are, after all, primarily domains of scholarly and administrative fashion. Soon, perhaps for no good reason, hours of scholars’ precious time will go into further AI management.
My point in mentioning all of this is not to dismiss the possibilities that AI brings with it. I’m not saying AI should be scrapped. I am aware of developments in AI suited to more quantitative and statistically driven disciplines. Still, I mention all of this because what I have noticed about the AI debates raging in higher education is the neglect of the question that ought to be at the heart of our discussions. I am not even talking about the vital question of whether any of this makes us better, happier people. I’m also not talking about the question of what the purpose of universities might be. I mean only the question of whether AI is helping us to be better thinkers. I can’t answer this question fully, but I can suggest an answer by referring to one essential requirement for good thinking: time.
Jünger notes that modern technological achievements have required a total reconfiguration of how we think about and experience time: “The beholder of the clock becomes conscious of time only in its emptiness; all time that enters our consciousness in this fashion is dead time.” Automation, and AI is one example of this, does not free up time. That automation frees up time is a lie that too many believe. Having looked at the few examples of electronic automation I’ve experienced in my career, I can confirm that I now have less time. Technological solutions always create technological problems, and because we live and move and have our being within a technological frame, further technological solutions are soon offered. Automation costs time. But it also changes the quality of the time we have. “An automaton,” writes Jünger, “gives us the same feeling of lifeless, mechanically repetitious time; it is nothing essentially but a timepiece, which performs smoothly with dead clock time. Without clocks, there are no automatons.”
Jünger has in mind a certain non-recursive sense of time, one that does not have the anti-gravitational indeterminacies of digital programs like AI. We therefore need to modify some of what he is saying. What we can keep, however, is his basic observation that time becomes dead, filled with moments of emptiness. This emptiness has not disappeared now that everything moves more quickly. “A speed that is too fast dissipates meaning,” says Han. Meaning loses its gravity. Time whizzes by, oscillates, and accelerates. It becomes disjointed. Time shatters and gives way to a series of interruptions, notifications, micro-tasks, the fine-tuning of algorithms, and so on.
I call this schizotemporality to suggest a widespread breakdown of duration and continuity. It suggests, among other things, a breakdown in the relationship between feeling, thinking, and acting, leading to a loss of coherence in general. Personal resonances are shredded and the drama of being becomes one of delusion, dilution, fantasy, and mental fragmentation. Memory becomes degraded, something Yoko Ogawa meditates on in her beautiful novel The Memory Police (2020). Even truth “is a temporal phenomenon. It is a reflection of the lasting, eternal present,” says Han. The “tearing away of time, the shrinking and fleeting present, makes [truth] void.” Schizotemporality is a primary precondition for the spread of misinformation. De-terrestrialisation is also de-temporalisation; the loss of contextualising, durational time means the loss of a context for thinking well.
The only real antidote to schizotemporality is contemplation. Contemplative time is the best way to develop as a thinker. I don’t mean that contemplation will make us producers of knowledge for the knowledge machine; contemplation is not so easily instrumentalised. I mean that only contemplation allows time to become integrated; and, by extension, contemplation is our best shot at becoming integrated ourselves. “Experience is due to a giving and a receiving,” says Han in his Vita Contemplativa. “Its medium is listening. But the noise of contemporary information and communication puts an end to the ‘community of listeners.’ No one is listening; everyone is playing to the gallery.”
I have long referred to this community of bad listeners, always so eager to take up fashionable ideologies and fashionable technologies, as performative scholars. Scholarship itself becomes essentially consumptive. Depth is eradicated as PDFs are downloaded and searched but seldom read. And more and more studies are produced that grant little joy to the reader. Yes, I am aware of exceptions, but I am trying to identify the pattern, which functions like a rule. “Today,” Han writes, “the consumerist form of life prevails everywhere. In this form of life, every need must be satisfied at once. We are impatient if we are told to wait for something to slowly ripen.” All that matters, as efficiency starts to overrule excellence, are “short-term effects and quick gains. Actions are reduced to reactions. Experiences are diluted and become events. Feelings are impoverished, and become emotions or affects. We have no access to reality—reality reveals itself only to contemplative attention.”
If anything, I mean to highlight how AI might help us to engage in a thorough revaluation of values, a revaluation of the importance of what we may have already lost before AI walked onto the stage of the drama of being. In the face of some novelty, the temptation is to let the novelty itself determine what is valuable, whereas I would suggest that higher-order values, like wisdom, knowledge, and depth, ought to help us determine not just how we use AI but whether we use it at all. I would hope that contemplative attention might be taken seriously as the measure of good thinking, rather than the schizotemporal attention that AI extends. Contemplation, after all, assures our connection with reality and with ourselves. And is it not this connection to ourselves and reality that we, especially those of us in the humanities, are here to understand?