Against sloppification
Resisting the temptation to automate thought
In the year I was born, Harvard Law professor Roger Fisher proposed the rather shocking and, as it turns out, unwelcome idea that America’s nuclear launch codes should be implanted next to a volunteer’s heart so that the President would have to personally murder that volunteer to retrieve the codes and unleash doom upon the world. “My suggestion was quite simple,” Fisher explained. “Put that needed code number in a little capsule, and then implant that capsule right next to the heart of a volunteer. The volunteer would carry with him a big, heavy butcher knife as he accompanied the President. If ever the President wanted to fire nuclear weapons, the only way he could do so would be for him first, with his own hands, to kill one human being.” The President would have to look the volunteer in the eye and say, “George, I’m sorry, but tens of millions must die.” He’d need to know, first-hand, what he was about to do. “Blood on the White House carpet” and blood on his hands would bring the reality home to him.
Fisher’s intention was not difficult to discern. It shouldn’t be easy to unleash terror on the world. Launching nuclear weapons should really feel like the very last resort. Fisher’s friends in the Pentagon had only this to say to him: “My God, that’s terrible. Having to kill someone would distort the President’s judgment. He might never push the button!” Of course, that was precisely Fisher’s point. And I cannot help but wonder how many advancements in the name of the ideology of progress we’d halt immediately if we could look in the eyes of someone we love and know what it’d do to them. We can’t always know all the effects of every step we take into the future. However, we are also certainly not as uninformed about the consequences of our technological optimism as we may have been a century ago.
I’ve heard teachers and professors around me, as well as those who lurk in various corners of the internet, argue that their students will use AI (“It’s inevitable and unavoidable,” they say), so it is vital to teach them how to use it well. To my ears, this sounds like suggesting that we should teach students how to take arsenic well or kill themselves more effectively. All of these people, with their Summer-childlike good intentions, are making a fundamental conceptual error. They’re assuming that the thing itself, this specific technology, changes its nature when you simply adjust your intentions towards it.
I understand why people think like this nowadays. Someone “intends” to be called a woman, even when he is a biological male, and whole hosts of reality-haters, in the name of some strange emotional overidentification, will go along with it. But this is to treat human intentions like magic spells, and no one could really live in a world run by spells. Let’s take a much less palatable example to make the point clear. Imagine a known rapist telling a jury that he had only good intentions while raping his most recent victim. Then imagine the jury nodding sympathetically as he tells his story before turning to the victim and saying, “We know he hurt you, but don’t you see, he meant well?” This would be nothing but the most barbaric and horrific judgment, and yet the logic is much the same as what we get all the time from large corporations and institutions. Again, you do not change a thing’s nature simply by having the best intentions towards it.
I’ll come back to this point, as I explain below how to interpret what AI is by what it does to us, but I want to first pause to note how the mindset that requires adoption and not resistance conforms to the ever-present, albeit implicit, modern injunction to keep oneself ever-open to the new, the now, and the next. The openness-imperative, which I have referred to before as ossified openness, is the engine of the modern machine-world. It demands no negativity and no friction. The unspoken rule, symbolised in so many technological products, seems to be that everything must be smooth, quick, slick, unimpeded, and efficient.
And yet, of course, this thorough lubrication of the contentsphere or ideaspace that governs our thoughtlife, this de-frictionalising of reality, is tantamount to the negation of reality. Reality is precisely that which manifests itself to us as resisting mere consumption. Moreover, consciousness can be considered the epitome of negativity; at its alert best, it generates friction in its encounters with reality to foster understanding. It is the friction in the encounter between self and other that allows thoughts to flourish. I don’t mean that consciousness merely negates or critiques, for negativity is not equal to sheer opposition; rather, consciousness hesitates in imposing its assumptions on the given for the sake of understanding it properly. In experiential terms, to hesitate is to be found, rather than to be lost.
When considering AI, I am more than mindful of the many developments in the field that are geared towards assisting people. I know, to take just one example, that AI is used to great effect in many hearing aids, helping those who use them to hear far better than they would otherwise. There are good reasons, I think, for not simply dismissing the many ways that AI promises to assist rather than to degrade people. And yet, I remain stubbornly, perhaps idiotically, opposed to the use of AI in education. I am just as opposed to having AI take over any meaningful work that the human mind can do in the same way most people would be opposed to having machines do their lovemaking for them. I think my reasons are sound enough, and so here I want to revisit some of my thinking in and around what I regard as poison to the human mind.
To orientate ourselves, let’s take a moment to consider the origin of the idea of artificial intelligence. The term was coined in 1955 by John McCarthy in preparation for the 1956 Dartmouth Conference on the machine simulation of human intelligence. This event is still widely regarded as the birthplace of AI as a field. There, AI was understood as being when “a machine” is to “behave in ways that would be called intelligent if a human were so behaving.” In this comparison, AI is not equated with human intelligence as a whole, but with human intelligence engaged in a specific act or behaviour. Human intelligence remains, in this, the priority and measure of all simulated versions. AI is best regarded as derivative.
In fact, the moment we forget the real difference between human intelligence and artificial intelligence is the moment we risk reducing the former to the latter. This would be a terrible error. A Faustian sacrifice of humanity to human inventions. After all, human intelligence is the faculty of the person in his or her embodied entirety. It involves the whole breadth of the human experience, including abstraction, emotions, creativity, aesthetic appreciation, morality, values, as well as religious sensibilities and spiritual sensitivities. In contrast, AI offers a merely functional orientation fixated on problem-solving. Problem-solving, I need hardly mention, is certainly an aspect of human intelligence. And yet, it is also by no means all there is to human intelligence. Most of the best stuff in life is not a problem to be solved but a mystery to be basked in.
Recognising the machine’s orientation towards problem solving helps us to better understand something like the Turing Test, which rests on the hope that a person might not be able to tell if something is a machine when its performance of a task cannot be distinguished from the performance of the same task by a human being. In other words, the measure is still not human intelligence as a whole but human intelligence at its most constricted and utilitarian. Such a measure fits modernity’s persistent externalist bias. The problem with closing the gap between human intelligence and artificial intelligence by means of a false equivalence is that we may far too easily consider ourselves as what Hannah Arendt calls homo faber — a mere task-performing functionary. In effect, this is to remake man in the image of the machine.
However, the grand problem of thinking of people in mechanical terms goes back to modernity’s earliest roots. It was not uncommon for early moderns like Descartes and Leibniz to consider animals as mere meat machines. If a cat gets trodden on by a horse, the moderns might say that its agonising shrieks are to be thought of only as gears grinding against other gears. The poor animal would not be in the possession of a soul, but would be a mere machine, albeit a sophisticated one. And yet, the concept-creep of such thinking has only expanded. Today, when the meat machine called the human being no longer functions, it is easy for the mechanistically-minded to think that it should be simply decommissioned; i.e. euthanised.
Of course, various thinkers have noted the reduction of human thinking to its mere functional abilities long before AI even arrived on the world’s stage. This is to say, shrinking expectations around human thought-life have accompanied us from the moment we were all born. I think especially of Max Scheler, who, in his 1926 book Cognition and Work, noted that, in a post-Cartesian world, cognition has seldom been thought about in its true fullness and variety. One kind of cognition (the utilitarian kind) has been confused with every other kind. More recently, Iain McGilchrist has shown us how we have mistaken the explicit aspects of thought for the more vital, implicit dimensions of thought. We have lost contact with our inwardness, which means, also, that we have lost contact with who we really are.
I mention this because I cannot stress enough that AI is not, in some sense, new. It is a symptom of the kind of thinking that has dominated the West for centuries. Oddly enough, the more functionalistic we become, the more we will think of AI as a miracle. The more human we become, the more we will see it for the monstrosity that it is. To the more mechanistically-minded, AI is wonderful news. “At last,” they will say, “we have a way to destroy the remnants and residues of human inefficiencies!” They would welcome hell because they have misunderstood heaven.
Take a brief moment to consider what AI can’t do. It can’t feel, it can’t take delight in ideas, weep at bad student research, marvel at the way light plays on the clouds at golden hour, or anything like that. It can’t be consciously self-aware, intuitive or wise. It can’t pause or contemplate or meditate, and it certainly can’t just ‘be’. It can’t really make moral or ethical judgments. It can’t empathise or sympathise. And it cannot even make a fair guess beyond its training data. Can it be spontaneous? No. Can it make gut-instinct decisions? Also no. It cannot employ philosophical or metaphysical reasoning; it can’t understand people in their embodied context; and it can’t play or demonstrate childlike curiosity. AI is trained on representations of human articulations of certain conceptual models, and has no access to reality itself. With AI, there is no spiritual or transcendent thinking, no awe, no faith, and no connection with something greater. Indeed, despite the adjective “generative” that is often placed in front of LLM AI, it cannot actually be creative. It is “replicative” by nature, and thus entirely without depth.
To be human, as Robert Sokolowski explores, is to be caught up and interested in truth. The desire to know and to seek out truth is part of human nature itself. And this desire is not reducible to the postmodern emphasis on brute power. I’d even risk the suggestion that the postmodern obsession with power-dynamics, a binary world of one against zero, has provided a particularly potent preparation for AI. Truth, in contrast to such a reduction, is not to be found in mere explicit representation. It exists in and through the manifolds of being’s manifestations. Identities are not, thus, reducible to any specific manifestation. Through our intelligence, we can integrate truth without squashing it into an impossibly small conceptual container. Truth is found in a lively participation with reality and not just in articulation. Truth is found in being first, and very much secondarily in language. And, as Bernard Lonergan explores, through our participation in truth, we can aim towards what AI can never acquire, namely, insight.
I draw attention to the fact that AI is a symptom of modernity rather than a merely new problem because believing the latter would have us buy into the very paradigm that created AI. In modernity, ontology (which looks at what is real) tends to be subservient to epistemology (which looks at how we understand things) in much the way that, as in Sartre’s famous formula, essence becomes subservient to existence. When this happens, we get caught up in thinking of the world in terms of mere efficient causality. We address problems by providing solutions, without realising that the solutions themselves produce further problems, and so they also demand further solutions. And so on. To see AI as a new problem is to refuse to attack the weed at its root. I’ve lost track of how many times I’ve seen people respond to AI with the encouragement that we should be sure to get our intentions in order, as I suggested above. “If we control the AI, it won’t control us,” say the efficient cause defenders. This is such a stupid response to AI, if only because it refuses to see that AI has a certain kind of being. It works on us in a certain way over and above our best intentions. Its nature does not change just because we have pleasant thoughts in our heads about it.
The best way to understand AI is to attempt to situate it in a larger context, in terms of formal causality rather than efficient causality. We need to have a clear sense of what AI is — a sense, that is, of how it happens in the happening of being, over and above any explicit articulation of its functions. To get a sense of this, it helps to turn to the work of Marshall McLuhan, whose insight that “the medium is the message” never gets old.
McLuhan’s point was that the explicit function of a technology does not account for the technology’s form, which is most evident in its effects. Here’s an old example to make the point. The maker of one of the first steam-powered cars didn’t realise he was making a murder weapon. He thought he was making a means of transportation. And yet, in 1869, Mary Ward became the first person killed by a car when that very invention, by no means even at great speed, crushed her beneath its weighty wheels. No one explicitly intended for the car to be a loaded gun, and yet it was. And that remains true, in potential, for every car. Car manufacturers are also murder weapon manufacturers. And that’s the point. Hammers are useful tools and thumb-destroyers. Electricity is good for televisions and electrocutions. Every single technology exists beyond the intentional frame in the effects it actually produces. In a certain sense, although I wouldn’t want to take this too far, the thing is what it does and not just what we think it does.
For McLuhan, and for me, what the medium or technology does to us extends far beyond what we are consciously aware of. He writes: “The effects of technology do not occur at the level of opinions or concepts, but alter sense ratios or patterns of perception steadily and without any resistance.” Thus:
“The content or uses of [any] medium are as diverse as they are ineffectual in shaping the form of human association. Indeed, it is only too typical that the ‘content’ of any medium binds us to the character of the medium … Our conventional response to all media, namely that it is how they are used that counts, is the numb stance of the technological idiot. For the ‘content’ of a medium is like the juicy piece of meat carried by the burglar to distract the watchdog of the mind.”
To get a better feel for the form of a medium, McLuhan offers us his famous tetrad of media effects. This demands answers to four simple, illuminating questions. Before I take these questions up, however, I want to mention one of Martin Heidegger’s insights, since it adds an extra layer to McLuhan’s tetrad that I find helpful for explaining the nature of AI. Heidegger notices that technology is a mode of revealing the world; in particular, it reveals the world as a warehouse full of stockpile reserves or resources. Technology forces the world to reveal only a small aspect of itself, like destroying an entire landscape through mining to reveal a single diamond. When the sheer horror of this is grasped, we start to see how terrible it is that we have human resources departments and that we have turned entire educational institutions into factories producing the human tools for human resource departments to manage. When we consider this with McLuhan’s tetrad, we understand certain of AI’s more nefarious aspects.
The first of the tetrad questions is this: What does AI enhance? This question is more or less answered by naming the explicit function of the technology; the figure, not the ground. Very simply, AI enhances the internet, which is an enhancement of the human central nervous system, albeit without the protection of a myelin sheath. If you wonder why we live in an age of anxiety, this is a significant part of it. Anxiety arises when we have no clear agency or aim, and that’s not going to get better when you take away people’s agency and aims using generative AI. But, of course, everyone knows that AI is an enhancement. Everyone knows that this is what it does. What’s less obvious, but perhaps more important, is what it undoes.
So when we get to the second question of the tetrad, which asks what AI obsolesces, we’re looking for what it downplays, hides, and/or obscures. One way to answer this is simply to look at what’s suffering now that AI is here. For one thing, all the downsides of technologies that use AI are perpetuated—and yes, I am still thinking specifically of AI in the context of teaching and learning, since this is the context I work within. On this, there is ample research, and more is forthcoming; although I should add that all of this was entirely foreseeable.
Actually, before we even get to AI, consider some studies from 2008 to 2016 that reveal to us what happens when you shift from ordinary handwriting to typing on a computer, or even writing on a tablet. All the research shows a definite decline in cognitive activity when electronic media are used. This makes sense, since handwriting activates broader neural networks associated with memory and learning compared to typing. When note-taking, the slower pace of handwriting encourages deeper processing by forcing people to summarise and synthesise information, which only bolsters comprehension and retention. Handwriting improves both low-level (e.g., letter formation) and higher-level (e.g., composition quality) writing outcomes compared with digital writing tools like tablets (Wollscheid et al. 2016, Computers & Education). The tactile feedback from pen and paper enhances letter recognition, spelling accuracy, and memory retention, especially in young learners (Longcamp et al. 2008, Acta Psychologica; Mangen & Velay 2010, Advances in Haptics). Handwriting also improves deep attention (Aguilar-Roca et al. 2012, Applied Cognitive Psychology). It’s no surprise that digital devices diminish our capacity to pay attention (Smoker et al. 2009, Journal of Educational Psychology), or that students taking notes on a computer do worse on conceptual questions, despite taking more notes (Mueller and Oppenheimer 2014, Psychological Science). The very nature of handwriting is that it engages more of the mind. And there’s simply nothing a computer can do to change that fact.
Jonathan Haidt’s research is pretty famous by now, and so it’s worth just noting what he’s found. He has documented a clear mental health decline across the population since the advent of social media. All of us have felt it, but that hasn’t exactly stopped us from continuing down the path of self-harm. Still, it’s also worth noting the positive effects of quitting the internet. One 2025 study showed that two weeks without smartphone internet significantly improved sustained attention. The effects were similar to being ten years younger (Castelo et al. 2025, PNAS Nexus). The same study showed that being online less made people happier. Quitting the internet proved to be better than antidepressants.
Now, let’s look at adding AI to the digital mix. What do we get? Well, we are only catching some hints of this, given that science always takes a while to catch up to metaphysics. But, so far, there’s that one famous MIT study, which shows close to a 50% drop in the cognitive activity of students who use AI when compared to those who don’t. One study of oncologists showed that even a short period of letting AI read patient scans diminished their own capacity to read patient scans. AI encourages what is called “cognitive offloading.” Think of any skill you have. Now, think of getting a machine to use that skill. Then, come back to it yourself. The effect of not using the skill turns out to be exactly like muscle atrophy. What you don’t use, you lose.
Still, I don’t think people have generally understood all of this in line with McLuhan’s insights. The deficiency-creator is not AI alone. AI functions mostly as the glue that keeps us stuck on screens, screens, and more screens. The very act of being screen-bound (at least, overly screen-bound) is already atrophying the brain. And yet it’s also becoming clear that there are spiritual and psychological harms involved that few have reckoned with. We know that people, including children, have killed themselves following the so-called “advice” of AI. And there have already been indications that people with a propensity towards psychosis have been institutionalised with psychotic episodes nurtured by AI. We know that electronic media diminish psychological engagement and corrode happiness, and so it should be no surprise that AI just makes this worse. AI obsolesces the body, as is perhaps nowhere more evident than in the quite literal environmental destruction fostered by the machines that do the work.
The third tetrad question may seem silly to ask and answer, because it looks at historical precedents, and yet we can learn a great deal from understanding that even new technologies echo pre-existing patterns. What does AI retrieve? In a word: collage. That’s the art form created by sticking various materials, like text, photographs, pieces of paper or fabric, onto a backing. The result seems like a new synthesis, but there is little about it that is really generative. The mere combination of bits and pieces does not, on its own, constitute novelty and understanding. You can make a collage without any insight. And that’s just what AI does. It is an advanced photocopy machine. But this copying-function reminds us how ridiculously unethical the entire AI industry has been. Multiple lawsuits are currently underway against big tech companies for training their models on the work of writers, filmmakers, artists, and other creators without permission. Of course, we don’t see where the original collage comes from. AI deceives us into accepting as original what is always, in every single case, derivative. To this, I’d just add that AI encourages the IKEA effect. I’ve lost count of how often I see AI companies talk about the creativity of users, when in fact, all the user did was type in a prompt. Creativity is worth understanding better, and I’ll write about this more in the future (as I have already done in the past). Suffice it to say, for now, that real creativity involves precisely those skills that AI cannot contribute: contemplation, the receiving of being as a gift, the ability to experience awe, and so on.
The last question is this: What does AI reverse into when pushed to an extreme? There’s one word for this, too, and you already know it well: Slop. Feeding recursive outputs back into AI, which is already happening as so much AI output is vomited onto the net, causes the quality of the AI output to decline. One inadvertent consequence of this is a lowering of the (already rather low) bar of what is published. The less obvious consequence is this: if people feed themselves on slop, their own intellects will suffer. The constantly lowered average of AI outputs will encourage a lower average intelligence in people. The thing is, more of the average doesn’t mean more of the same. It means more and more of what is worse and worse. Of course, there are other reversals, and some of them may be positive: as the ease with which AI can manufacture music, art, writing, and so on increases, and we become ever more drowned in content, there’s a pretty strong likelihood that we will want to recover some sense of real human experience. All those things AI can’t do will become more precious. The result of pummeling people with the same sort of hyperactive simulation is not increased attention to the screen, but a stronger desire to get out into the world and to touch grass. To use a simple analogy, if you frequently feel bloated from eating too much, you are likely to start wanting to eat less.
Already, I have likely said too much, and so I will close with this. I’ve offered the above to make a simple point: the nature of a thing does not belong to democracy or big-tech or to anything resembling a consensus; and it does not even belong to any individual who sees only what they want to see. Even my own account of AI above is, unavoidably, incomplete — although I have attempted to distil a wide range of philosophical insights, many of which are not my own. It is worth knowing this for at least one reason: the propaganda around AI is immense right now. Big tech, in particular, is employing every means of persuasion available to it, including the most prodigious use of the bandwagon fallacy I have ever seen. And many of us are likely to feel the pressure to simply go along with all of it. It takes effort, after all, to resist — to employ our own profound capacity for negativity and friction-generation. But it is only by taking a step back that we stand a chance of recovering meaning and depth and experience in a world obsessed with surfaces. When you’re faced with a flood, it’s not an invitation to merely succumb and drown. It’s an invitation to get into the Ark building business.