Computers began as human beings. In black-and-white photographs dating back to the middle of the twentieth century, human computers appear in the form of women with well-sharpened pencils and paper, calculating the trajectories of spaceships, rockets and atoms. Occasionally they may be helped by unwieldy machines or calculators but, mostly, they are reliant on their own brainpower and mathematical skills.
Human computers – Jet Propulsion Laboratory employees. Photo: NASA, public domain, via Wikimedia Commons
In the decades that followed, their electronic counterpart developed at a dizzying speed. Today no one would associate a computer with a position or job, let alone a human being. Increasingly, humans themselves are being understood as computers, sophisticated yet rudimentary, with countless neurons, genes and environmental factors that affect and react to one another, but are housed in a body liable to all kinds of defects.
Though they should know better, the story goes, humans remain stubbornly attached to their ideas about freedom and autonomy, thereby thwarting their own efficiency. Supposedly, homo sapiens is fundamentally an information-processing system. You input something on one side and something else comes out the other. It’s only a matter of knowing what to put in so you get the result you want after it’s been processed. Garbage in, garbage out, as programmers say. If the data you start with is flawed, if the software is carelessly written, you can’t expect to get anything other than a heap of shit in return. And anyone can produce shit.
Ever more frequently, cutting-edge technology serves as a metaphor for describing humans or the human mind. In earlier times, this honour was bestowed on the hydraulic device, the mechanical timepiece and the telegraph. Now it is the computer’s turn – or, more specifically, software’s – to supply the blueprint for what makes human beings human.
In fact, it goes further than this: the computer has become a universal model for explaining not just how humans operate, but how the entire world functions. Ultimately, the metaphor speaks far less of what makes humans human, than of evidence that humanity in general isn’t so very special. The same rules apply to human beings as to the rest of the universe. If you map out exactly what goes into them and how their information processing functions, you can predict what will come out. Based on the prediction you can then adjust your input and try to manipulate the process to achieve a different outcome. It is a response to the desire for optimization, which is just as applicable to production or organisational processes as it is to human beings.
‘Pancomputationalism,’ as the philosopher Luciano Floridi calls this way of thinking, distinguishes itself through universal presumptions that make any challenge seem fallacious. It is therefore better to refer to it as a belief than a scientific framework or philosophical perspective. In this it is similar to dataism. The two – pancomputationalism and dataism – cannot do without each other. If the newly styled human computer is to work, then data (large volumes of it) will always be needed. Data is the resource that keeps computational thinking going and makes further optimization possible.
Dataism is like a child that has found stardom on stage and become a household name world-wide. Though still a minor, it is nevertheless larger than life.
In 2013, the American journalist David Brooks coined the term ‘data-ism’ to express the rising aspiration to achieve optimized knowledge and monitoring through large-scale data collection. Dataism enables us to look at the world differently, he wrote. Instead of relying on intuition and carefully formulated questions or hypotheses, we can use data to discover patterns that once escaped our notice. Success in sports, the effectiveness of political ads and the use of language by liars are examples where data analysis has yielded new insights. However, whether data will also enable us to predict the future and the way we make decisions remains to be seen. Brooks admitted to being sceptical about our desire to reduce everything to the quantitative.
Less than a decade later, it is clear that data has redeemed its promise. In fact, data’s predictive power, its function and value, seem to have been elevated to an unassailable absolute. Scepticism has vanished; data is unequivocally considered to be the universal code into which everything can be translated. That it makes the world predictable and brings it under control is beyond dispute; that algorithms which process data make better decisions than humans, and that we had better outsource decisions as a result, is evident. According to the prophets of dataism, the process of outsourcing itself is likely to herald the end of humankind, yet we should be grateful for it. Larger than life, indeed… because if everything in the world is mapped out with the help of data, the long road to domination over nature will be complete. As a part of nature, old-style human beings (us) cannot escape being subjugated themselves. They will have to make way for their successor: a being that is more or less all-knowing, and consequently, divine. It is significant that the bestselling book by historian Yuval Noah Harari, which definitively put dataism on the map, is called Homo Deus: A Brief History of Tomorrow.
In general, dataism is not seen as a religion, or as a philosophy in the way Brooks presented it. Instead it is perceived as a next step in the development of science. This is also true of Harari’s reading of the term. His futurological speculations are based on a specific understanding of the history of science. It is not a particularly complex interpretation of the subject. According to Harari, one hundred and fifty years of scientific research can be summarized in the words ‘organisms are algorithms’ – which signifies that every organism, from bacteria, via bananas and baboons, all the way to humans, can be traced back to a road map. Predictable biological processes and physical laws guide the map, or algorithm, and leave nothing to chance.
This is the ultimate form of pancomputationalism. According to the principles behind this view of the universe, plant and animal, human and machine all work in the same way. In comparison with a machine, human beings are hopelessly inefficient and lacking in organization. For this reason, humankind suffers from ‘Promethean shame’ – a condition the philosopher Günther Anders identified in the mid-twentieth century. His phrase expresses the shame of being unlike machines, not manufactured, and thus impossible to perfectly optimize. We are ashamed of our submissiveness, not to devices, but to our own insufficient ability to communicate, and our failing bodies. We are ashamed of ugliness, of our own careless intelligence. According to Anders, we recognize in robots something that we ourselves lack, and for this reason want to do our best to serve them.
Today, in the era of dataism as Harari describes it, this human failure is easy enough to resolve. Science plumbs the depths of organic algorithms, and technology provides the means for adapting them. In the near future, organic algorithms such as humans will merge with digital algorithms, and this will result in a species of algorithmic human-being that, Harari believes, will differ more from us in two- or three-hundred years than we currently differ from the Neanderthals. It is a mechanistic worldview, extrapolated into a not-so-distant future where we will all function like a computer.
In this vision, downfall and progress go hand in hand. The inefficient human faces certain demise. Dataism is a cynical kind of faith depicting today’s world as a deplorable intermediate stage on the path to something better. Equally, it resembles a traditional faith in which the fall of humans – enacted either through individual death or global apocalypse – can often be a precondition for progress or paradise.
Harari’s purported ability to predict the future has conferred on him the status of a visionary. In January 2018, he gave a speech at the World Economic Forum in Davos under the title ‘Will the future be human?’. As the presenter who introduced him to the audience put it, not many historians find themselves on a stage ‘sandwiched between Angela Merkel and Macron’. Merkel, the most powerful woman in the world, the presenter added, had even stepped out of the ranks backstage to introduce herself to Harari and tell him she had read his book.
One can easily hazard a guess at the answer to the question posed in the title of Harari’s presentation. No, the future will not be human. It belongs to a being waiting in the wings, one that will be as distant from us as we are from the Neanderthals. And it will be formed by means of data, bio- and brain engineering.
If there is enough data available, Harari has famously said, the individual human – who functions as a computer – can be hacked. This makes data a very valuable asset: it provides a means of control over those hackable beings known as woman and man. As the word ‘asset’ indicates, the race to acquire data is currently taking place in the market. Politics hardly interferes at all. However, because this is all about control over what life itself will look like, Harari argues, it is extremely important to pay attention politically to what is currently seen mostly as an economic issue.
Harari warns that people who own the data – those who will be developing the new, algorithmic human-type or homo deus – also own the future. He offers his assembled audience of world leaders a dystopian vision in which humanity is divided not only into rich and poor, or dominant and oppressed, but into different biological species. In this world, homo deus – the new species – will be much better off and considerably more powerful than the old homo sapiens. The latter will likely only be able to survive as a slave to the former.
I have always regarded Harari as a paradoxical figure. When I read or listen to him, I get the sense that the chef is serving up the last supper in a way that’s just a little too enticing. Does he not believe too much in his own role as a prophet? Is he seeking to avert the arrival of homo deus or does he in fact welcome it? And what is it about Harari’s warning that ultimately fails to convince me?
First, there is his proclaimed reductionism, as expressed in the slogan ‘organisms are algorithms’. Harari sounds the alarm about an approaching end time, but offers no room for an alternative vision. Perhaps no other possibility is feasible if you believe, in a mechanistic way, that the future originates in the past. The idea that the world is a conglomerate of predictable algorithms does not allow for any other outcome.
In consequence, Harari’s warning seems somewhat unfounded. Although he occasionally incorporates a degree of reservation or ironic distance into his vision, and into Homo Deus, Harari seems to believe in what he criticizes. He appears convinced that data can give you an exact picture of individuals, that algorithms make better decisions, that choices are nothing more than options, and that there is such a thing as a best option.
Even more troubling than the proclaimed death of homo sapiens is the fact that a reductionist and mechanistic worldview of this kind has reached the highest echelons of politics and economics, never mind all those kiosks in airports and railway stations. It is a vision that leaves little room for ethical questions. Much as Harari calls for data to be treated as a political issue, rather than a purely economic one, he too gets stuck on the question of ownership.
Using a personal example, he outlines what might happen if data and algorithms fall into the wrong hands. For years, he says, he lived in denial about his sexual orientation. He had no idea who he was. Now suppose there was an algorithm that had told him straightaway: you’re gay, and this is where you are on the spectrum between gay and straight. In whose possession would you want such an algorithm to end up? That of companies which will use it to target ads? That of the international secret services? Or would you rather have it be available to young people who are grappling with who they are?
What Harari does not question is how such an algorithm is created and how certain it is that blood pressure and eye movements ‘betray’ your nature. He does not ask how it interprets which terms, what a spectrum of sexual preference looks like, or what it actually means to know yourself. The starting point that organisms are algorithms provides an answer to all those questions in advance.
One key concept that can help highlight the fixations of dataism is friction. With the help of technology, this must be eliminated, exponents of dataism will say. After all, friction stops the movement that produces data – data that will and must make the world predictable. Dataism derives its authority from its ability to make predictions. It is focused on a future that holds few surprises, as long as its gaze is not disturbed. If I experience too much friction – in the city, through apps and on the Internet – if it is too complicated to use my smartwatch, my smart thermostat or lighting app, then I will disappear from view. My profile will fade away. Frictionless or seamless design has therefore been the ideal for software and hardware development since the 1990s. It is also an important dogma of dataism.
At the same time, there’s no avoiding the fact that the total elimination of friction would lead to a standstill, which would necessarily affect the data collection machine. Movement is friction. Consequently, dataism casts its own shadow because, without initial friction or unpredictable behaviour, there would be no data to retrieve. It provides a paradoxical image of the dataist end time: if everything is datified, all friction has been eliminated and every movement follows familiar patterns, the world comes to a halt. A wholly predictable future is not a future, but a continuous present.
In his book ‘Saving Beauty’, the philosopher Byung-Chul Han describes the desire for frictionlessness as the aesthetics of ‘the smooth’. Han sees the pursuit of the elimination of any resistance, where digital technology is highly developed, as a social zeitgeist. It’s not just about collecting as much data as possible; the smooth embodies a society under the spell of positivity. ‘What is smooth does not injure. Nor does it offer any resistance. It is looking for Like. The smooth object deletes its Against. Any form of negativity is removed’, he writes.
In other words, frictionlessness can be understood as an elaborate design for the path of least resistance. But is that path a dead end? If I try to sit very still in order not to produce any data, I may look like the epitome of ultimate surrender. I might appear to be giving in to total predictability, and it could seem that the friction has finally levelled out. Yet my body still houses thoughts. Turbulence rages within me. These are perhaps the greatest types of friction. I do not want to believe that such turbulence can be captured in data, brought under control and made predictable. Because nobody knows what’s going on inside my head. And this is exceptional: no one knows. Not even me.
In ‘The Ethics of Ambiguity’, the existentialist philosopher Simone de Beauvoir explains how friction, both within and between humans and the world, forms the basis of ethics. She compares the relationship between herself and the outside world with the image of a hiker in a snowy landscape. Between the two there is an unbridgeable distance. ‘I cannot appropriate the snow field where I slide. It remains foreign, forbidden.’ Though it is true that we long for an effortless, undemanding relationship with our environment, de Beauvoir says, perhaps even for union with it, this is something we are not granted. ‘I should like to be the landscape which I am contemplating,’ she writes. ‘I should like this sky, this quiet water to think themselves within me, that it is I whom they express in flesh and bone, and I remain at a distance.’ The distance doesn’t matter. On the contrary, it is precisely what ensures that humankind has a place in the world, ‘that the sky and water exist before me’.
Distance may be a source of friction, but it is also fruitful. A relationship characterized by friction carries a degree of risk and exposes any notion of control (such as predictability) to be illusory. But that means triumph not defeat, joy not torment, de Beauvoir says. It is an ambiguity unthinkable in a society under the spell of positivity.
Friction lies at the very heart of ‘ambiguous morality’ and, for this reason, offers a useful tool for reflection on ethics and dataism. If friction is closely linked to ethics, then the elimination of friction undoubtedly produces an ethical dilemma. The landscape that extends before me (connected by cameras, sensors and my own screen addiction) seems a smooth icy surface. Set foot on it and you slip. Rather than be able to calculate a future with few or no surprises, I would prefer to imagine the future as an open space extending in all directions, including backwards and inwards. The hills, lakes, forests and mountains beyond the horizon cannot be seen.
Friction means rebellion. I keep my mouth shut to avoid the risk of being heard by eavesdroppers. Still the words long to escape, they want a way out. Within my inert body, resistance rages.
In his essay ‘The wild garden of the imagination’, cultural philosopher Kris Pint discusses the effects of imaginative resistance, which chips away at the prevailing view of humanity and the world, and unfolds a different landscape around us. Though it is true that this form of mental resistance is constantly distracted and lulled to sleep by objects, advertisements and notifications, it cannot easily be constrained. It exists in silence, in refusal and in listening to what stirs within our innermost self, where imaginative resistance can germinate, Pint writes. But it cannot be limited to this. Silence is ultimately unsatisfying.
Where is the turbulence to go? How can the counter-imagination begin to speak? There must be stories and strategies to help me navigate more consciously and freely through the data-producing landscape I am a part of. If the world, with me in it, is translated into data, there must be another language that counters it. One that isn’t only mine but can be shared, one that breaks open instead of closing up.
In her book Brain Beast, Marjan Slob pursues a similar mission. She subjects discourse about the brain to a hermeneutical examination and describes how the popularized image of the brain suppresses a ‘time-honoured language’ – the language of the humanities. Brain science, Slob writes, ‘provides profiles, not portraits’. The same applies to the process of translation into data, even if it eats up not just the brain but the world in its entirety.
As our ‘time-honoured language’ becomes crudely replaced by the scan, we lose a whole arsenal of options for thinking about ourselves and shaping our lives, says Slob. Brain scans and similar images will never fully map out our rich and manifold inner worlds, and instead run the risk of drawing up something that ‘will look suspiciously like an atlas for the world of experience’.
Must we then go back to our old, time-honoured language? In my view, that is just as doomed to fail. Whose language was it, after all? Was it not appropriated by scholars, men and the West? Should we not be exploring other languages which, though embedded in tradition, the humanities, or other cultural conventions, provide means of description appropriate to this particular historical period, possibilities for thinking about ourselves in the here and now, stories that specifically challenge the monopoly of the profile language?
By interpreting brain science as a profile language and emphasizing its linguistic features – which also apply to dataism, despite its number fetish – Slob opens a path leading not only to an old language, but also potentially to a new one. Quoting the writer Herta Müller, she writes: ‘“Language was and is, nowhere and never, an apolitical area. Each time you have to listen to what it intends.” I think the following is a good question: What does your language intend?’
Language is never neutral. The Jamaican poet Ishion Hutchinson says, for example, that he wants to reconquer the language he was taught to speak in order to reinvent it. He is referring, of course, to English – the language of his country’s oppressors. Hutchinson expresses ‘the wish never to remain passive, never to take language on a given platform but to break it, break into it’. He speaks of a faraway time, centuries ago, with an intensity and violence of feeling I am lucky to have been spared. Yet I feel moved and touched by his words. They speak to my silent yet raging inertia. Breaking the language, breaking it in, and assembling another story from the fragments – wouldn’t that also be possible with the language of data?
Data vacuums would suck the very stars from the sky if they could. Hutchinson imagines how Julius Caesar might have looked up at those stars and said: ‘“to whatever end they are, they are mine.”’ It is up to us to deny that rapacious emperor the sense that he has the ability to grab, possess and constrict even the stars in the night sky, Hutchinson writes. Or, in the words of Ta-Nehisi Coates in Between the World and Me, we must query the logic of the claim – instead of trying to change something within the existing order, we should question the order itself. Only then can the New emerge and escape from what you are attempting to oppose.
Coates says he is fighting against what he calls ‘the Dream’ – the illusion cherished by white Americans that they live in a world that is wholly just and good, in which it is self-evident that everyone prefers to go along with anything on offer. The Dream justifies the white American position, while at the same time wiping out any notion of an alternative way. It is ‘the enemy of all art, courageous thinking, and honest writing’, Coates writes. But enmity is inherent in all dreams, he continues. Reactive dreams will exhibit the same defects as the Dream they replace. There is only one thing to do: go back to the drawing board and try to build something different.
Or is it now no longer possible to do something that is not immediately co-opted and converted into data? In ‘The wild garden of the imagination’, Kris Pint asks a similar question. Pint tries to liberate the homo economicus, a presence entangled in market or efficiency thinking and reduced to a being defined by its own needs, an algorithmic organism that has evolved solely in order to consume. Yet any attempt by this being to resist is met with the threat that it will be instantly appropriated and rendered harmless by the very adversary it is trying to resist. Socially conscious shopping becomes transformed into a brand strategy; the yoga retreat gives you the energy to get back to work; being lazy provides the modern worker with indispensable creativity. Is there anything one can think of that really escapes the logic of the market, all those dominant ideas about how the world and humanity function?
The imagination of resistance that Pint calls for is intended to create stories, performances and practices that challenge this monopoly of ideas. ‘The first necessary step,’ Pint writes, ‘is to be able to say “no.”’ That seems simple enough, but it isn’t easy to refuse to go along with what is offered to you. World literature offers us a number of such objectors, however. Take Yeong-Hye from Han Kang’s ‘The Vegetarian’, who one day stops eating meat. ‘I won’t eat it,’ she says simply, and, ‘I don’t eat meat.’ She doesn’t provide much more information or additional clarification. ‘I had a dream,’ she says. For her, that is enough.
Yeong-Hye is a member of what could be termed the ‘collectivity of the unwilling’. The most famous member of the club is Bartleby, the eponymous clerk from Herman Melville’s short story. Bartleby is employed as a scrivener in a small office on Wall Street. One day he simply stops working, while still continuing to show up smartly at his desk. He responds with the words ‘I would prefer not to’ to everything his boss asks him to do. Would you like to copy out this letter neatly? ‘I would prefer not to.’ Could you just get a box of paper? ‘I would prefer not to.’ It is both maddening and intriguing. How is it possible that a low-grade clerk should suddenly start behaving in such an outlandish way? Does he initially get away with it because he remains so quiet, polite and good-natured?
What Bartleby does is frustrate communication. He doesn’t even say ‘no’, as Pint recommends; he says neither yes nor no. He is compliant and recalcitrant at once. His refusal remains implicit, though stubbornly sustained and supported by physical resistance. However little he delivers, he keeps his options open. In doing so, he eludes the systematic determination that could be converted into something useful or valuable, such as profit or data.
As a literary character, Bartleby is beloved among philosophers, political activists and system critics. In his taciturn mysteriousness, he has become an archetypical figure. The Belgian philosopher Isabelle Stengers, for example, presents the reluctant clerk as the embodiment of ‘the idiot’, the out-of-step wanderer who has been turning up in literary fiction for centuries to disturb the existing order – evasive and stubborn, but also mild-mannered and calm.
The idiot is a conceptual character. A killjoy and a heresiarch of the hesitant kind, a sceptic and a murmurer, absent-mindedly asking for directions along a known route (which becomes unfamiliar through the question itself), swallowed up by seemingly insignificant things, slow – not in spirit but in choice. ‘The cosmic idiot’s murmur is indifferent to the argument of urgency, as to any other,’ Stengers says. You could also add that the idiot causes friction.
‘Faire l’idiot’ – ‘Act like an idiot!’ Gilles Deleuze saw idiocy as a task for philosophy. Philosophers should be idiots, slowpokes who take their time yet suffer from chronic unrest. The idiot is that peculiar friend who causes a commotion – something the datified human could well use.
Deleuze finds a model of the primal idiot in Dostoevsky’s novel of the same name. Something strange goes on with Dostoevsky’s characters. They cannot sit still and are distracted by trivial happenings. As Deleuze says: ‘A character goes out on the street and says: “Tanya, the woman I love, asks for my help. I am going, quickly, quickly, she will die if I don’t go to her!” He walks down the stairs and runs into a friend or sees a dog that’s been run over, and forgets everything. He completely forgets that Tanya is waiting for him, dying. He starts talking, meets another acquaintance, has tea with him and then suddenly says again: “Tanya is waiting for me, I have to go!”’
Dostoyevsky’s characters are constantly getting involved in all kinds of emergency situations, Deleuze points out, yet there is always something even more urgent. But what is it? This eludes them as much as it does us.
Is that then a shining example for us today? Running around all over the place, forgetting people in need, being in a constant state of excitement… it doesn’t sound very alluring. But what makes the idiot a figure worth following is their fundamental inefficiency, a refusal to go along with the rationalization of choices and actions. Idiots trust intuition and the realization that intuition cannot be explained. Therein lies their innocence. For the idiot should not, under any circumstances, be confused with a charlatan – who lies, deceives, and seeks to keep the truth well hidden. Charlatans continue to communicate. They want to deliver a message and act with purpose. They provoke reactions and try to convince using any available techniques, which is exactly what the market loves. The idiot, on the other hand, ‘does not deny articulated knowledge, does not denounce it as lies, is not the hidden source of knowledge that transcends them’, as Isabelle Stengers writes.
The idiot is not interested in truth, and does not proclaim any other truth of his or her own. To idiots, it is all about suspension of the truth. They have no message to proclaim, no causal inferences. ‘And so’ does not appear in their vocabulary. Everything is open. What follows need have nothing to do with what has occurred before: an encounter with an acquaintance, a dog that has been run over, a cup of tea. In this way, the idiot puts a spoke in the wheels of the smooth machine of present-day communication.
In Psychopolitics, Byung-Chul Han equates idiocy with one of the few ways of exercising freedom still available to us today. An idiot feels at liberty to ignore the requirement of intelligibility, as Bartleby does, and so remains indefinite, being neither this nor that.
The freedom of non-understanding and indeterminacy is a poetic freedom. ‘The idiot does not exist as a subject – he is more like a flower: an existence simply open to light,’ writes Han, referring to Botho Strauss. It is not an easy existence, because ‘the idiot spins about like a plucked rose in the whirling river of single-minded people – people in consent; those who have been incorporated and belong to a wondrous, common understanding.’ So be it. Making things harder, not easier, is simply what the idiot likes to do.
If things are too easy, then automatism is just around the corner. Walking from A to B, in accordance with the route shown on your screen, is like flying on autopilot. Buy what is offered, respond to every notification immediately, do what you are asked to do: the same applies.
What we need is a politics of de-automation. Hence my own idiot is a de-automaton. She simply refuses to do what is asked, to buy what is offered, to answer as soon as someone requests her attention. Idiots insist on thinking about meanings, possibilities and interpretations. They do not bid farewell to progress, but question the technology. They take their time, without knowing for certain if an answer will follow. They love the electric beast but, equally, they care about friction.
Back in the 1910s, the Russian formalist Viktor Shklovsky described how our perception of reality and its expression in language was becoming increasingly automated. The ideal is a mathematics that replaces things with symbols, he remarked in ‘Art as Technique’: ‘through algebraization, the automation of things, we achieve the greatest possible economy in our efforts to perceive.’ In other words, real perception occurs less and less often. We believe what we see, since we don’t have to look for it. Meanwhile, Shklovsky writes, ‘life is lost, disappearing into nothingness. Automation swallows things up, your clothes, your furniture, your wife and your fear of war.’
Art has the capacity to break through automation and renew perception. It does this through a process of ‘defamiliarization’, or making what we see appear unfamiliar again. Where automation is a feature of algebraic communication, de-automation begins with defamiliarization through the word. Literature, poetry in particular, is the language of de-automation. It disrupts the economy of perception and restores your sight. Life regains colour and meaning.
This is not in any sense a frivolous task. ‘Poetry is not a luxury,’ writes Audre Lorde. The language we use frames and illuminates the world. It influences how we live and what we do, refreshes our perception of what exists, and has creative power. Poetry, says Lorde, ‘is the way we help give name to the nameless so it can be thought’. What poetic language creates – a certain possible reality, so to speak – is not reductionist but complex. It mints a language of possibility, and of freedom.
Elsewhere, Lorde asked a question that would become famous: ‘What are the words you do not yet have? What do you need to say?’ Her text carries the revealing title: ‘The transformation of silence into language and action’. The task Lorde formulates is to think about alternative tactics for producing reality, and build up imaginative resistance. It is, as I read it, a task aimed at de-automation.
This essay is an excerpt from Friction – Ethics in Times of Dataism (Frictie: Ethiek in tijden van dataïsme), published in Dutch by De Bezige Bij, May 2020.
The author is a member of the Eurozine Board of Editors.