The new infinity

Merkur 4/2026

The challenge of regulating AI when it cannot be defined; AI and the devaluation of work; AI and the future of productivity; why engineered anthropomorphism is here to stay.

‘What do a chatbot, a smart fridge, a predictor of payment-default risk, an automated translator, a self-driving car, an email spam filter, and an earthquake predictor have in common?’

Artificial intelligence resists stable definition, writes Paola Lopez in Merkur (Germany), creating not only conceptual confusion but concrete legal problems. The term ‘AI’ encompasses a broad and shifting range of systems, from generative models to more limited algorithmic tools.

And it is not just in kind that AI systems vary. Technologies evolve rapidly and new versions of models supersede old ones with bewildering speed. The result is a moving target that resists clear categorization. This ‘mercurial’ fluidity has major legal implications. How can we regulate something without first defining ‘what is being regulated’?

Efforts to legislate risk being either too narrow, failing to capture emerging systems, or too broad, lumping together fundamentally different technologies. Moreover, as the debate around regulation intensifies, firms are deliberately downplaying their use of AI.

PredPol, for example, was the market-leader in predictive policing until the city of Santa Cruz, where it is based, banned the use of such technologies. The company responded by changing its name to Geolitica and claiming that it had never offered predictive tools in the first place. Similar situations are bound to happen again, suggests Lopez: ‘At first, everyone wants to “use AI” and everything “has AI”, because it’s easy to ride the AI hype wave. But as soon as regulation kicks in … nobody will want to have been associated with AI.’

The devaluation of work

The rise of AI in the workplace does not simply jeopardize workers performing automatable tasks; it also reshapes the very value of work. Lisa Herzog distinguishes four dimensions of work value being undermined by AI: ‘didactic, community-building, meaning-creating, and political’.

At the didactic level, work is a space for acquiring and refining skills. By automating complex tasks, AI limits opportunities for learning through practice, making it harder to gain expert knowledge. This can also impact motivation, ‘if it was precisely the opportunity to practise and develop certain skills that attracted someone to a particular profession’.

Second, work drives social integration by bringing together people who might otherwise never have met, but algorithmic management isolates workers, making it harder to develop a sense of shared culture.

The third dimension of work is its meaning. ‘Human action is structurally polysemic’, with smaller, more tedious tasks made worthwhile when we know they contribute to a broader goal. AI-managed platforms strip work of this meaning by outsourcing numerous mini jobs to temporary workers with no knowledge of their ultimate purpose. Labour begins to feel like navigating ‘an obstacle course of tiny, intricate hurdles’ rather than achieving anything of real value.

Finally, workplaces ‘are important loci of politicization’. They enable conversations about working conditions and workers’ rights and so foster political consciousness and action. By restructuring labour into individualized tasks and pitting zero-hours workers against each other in a race to pick up jobs, AI negates this aspect of work.

The new infinity

Economics is ‘an attempt to overcome finite resources by multiplying possibilities of access’, writes Birger P. Priddat. Economic history can be understood as a succession of ‘field regimes’ that expand in different dimensions, starting with fields in a literal sense.

First, the move from horizontal agriculture to vertical mining marked a shift from domestic husbandry to the extraction of finite resources. Global trade then allowed European economies to enlarge their resource base without intensifying production. With the borderless, pathless sea as the new geometric field, ships functioned as vectors in the ‘spatial appropriation of the fruits of foreign continents’. The next field was the temporal: the industrial economy was driven by investment in future returns and a shift from seasonal growth cycles to constant productivity.

In the twenty-first century, with the world’s physical resources exhausted, the field has expanded internally, into human behaviour itself: ‘Just as Locke defined indigenous land as “empty”, Google, Meta and the like define our private data … as “raw” and “ownerless” until processed by their algorithms.’

And what of the future? Priddat suggests that the sixth field will be the biological. We have reached the point of no return with the climate: no amount of carbon capture can restore the system we have destroyed. Our only choice is to harness the power of AI to remake our world: solar geoengineering, heat-resistant corals, plastic-eating bacteria, lab-grown proteins.

‘The new infinity lies not in the expansion of space, but the density of design. We are not at the end of the history of productivity, but at its most dangerous and productive point: the transition from the unconscious destruction to the conscious composition of planetary life.’

Engineered anthropomorphism

LLMs are becoming increasingly human-like, producing responses that feel conversational, empathetic, and self-aware. Far from being incidental, this anthropomorphic quality is systematically introduced through a sequence of design choices, writes Max Beck. Even the decision to present interactions in the form of a ‘chat’, rather than, say, ‘node-based workflows or command-line tools’, is a deliberate design choice, as is ‘the display of generated tokens in the chat interface as flowing text reminiscent of human typing’.
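
To see how little machinery this effect requires: below is a minimal sketch, assuming a hypothetical stream_tokens helper and an arbitrary delay (neither taken from any real product), of how a stream of model tokens can be dressed up as human-like typing.

```python
import sys
import time

def stream_tokens(tokens, delay=0.03):
    """Write tokens one at a time with a short pause between them,
    so the output appears as flowing, human-like typing."""
    for token in tokens:
        sys.stdout.write(token)
        sys.stdout.flush()  # push each token to the screen immediately
        time.sleep(delay)
    sys.stdout.write("\n")

# The token list stands in for the incremental output of an LLM API;
# the same text could just as well be printed all at once.
stream_tokens(["Hello", ",", " how", " can", " I", " help", " you", " today", "?"])
```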

The process of creating an LLM begins with the base model, trained on vast text corpora to generate statistically plausible language. At this stage, ‘the form of the response is determined purely on the basis of probability theory from the disparate training data, which is not always conversational.’ Fine-tuning then adapts the model to more specific tasks and improves relevance and fluency.
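
What ‘purely on the basis of probability theory’ amounts to can be shown with a toy example. The sketch below substitutes an invented bigram table for a real training corpus: it generates statistically plausible word sequences with no conversational intent whatsoever.

```python
import random

# Toy 'base model': a bigram table mapping each word to candidate next
# words with probabilities. All entries are invented for illustration;
# a real model conditions on long contexts with billions of parameters.
BIGRAMS = {
    "the": [("cat", 0.5), ("dog", 0.3), ("weather", 0.2)],
    "cat": [("sat", 0.6), ("purred", 0.4)],
    "dog": [("barked", 0.7), ("sat", 0.3)],
    "weather": [("changed", 1.0)],
}

def generate(start, max_words=4):
    """Sample a continuation purely from the probability table.

    Nothing here 'wants' to converse: the output is whatever the
    statistics of the (toy) training data make likely.
    """
    words = [start]
    for _ in range(max_words):
        candidates = BIGRAMS.get(words[-1])
        if not candidates:
            break
        next_words, weights = zip(*candidates)
        words.append(random.choices(next_words, weights=weights)[0])
    return " ".join(words)

print(generate("the"))  # e.g. 'the cat sat' -- plausible, not conversational
```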

The next step is ‘Reinforcement Learning from Human Feedback’ (RLHF), where human evaluators rank and compare outputs, rewarding those that appear helpful, polite or friendly. This process gives the model its ‘personality’ by embedding human communicative norms into its responses. The result is a style that often mimics emotional awareness.
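
A minimal sketch of the pairwise preference loss standardly used to train RLHF reward models (the scores below are invented for illustration): the model is rewarded for scoring the human-preferred response above the rejected one, which is how raters’ preferences for polite, friendly answers get baked in.

```python
import math

def preference_loss(reward_chosen, reward_rejected):
    """Pairwise (Bradley-Terry style) loss for reward modelling:
    -log sigmoid(r_chosen - r_rejected).

    Minimizing it pushes the reward model to score the response
    human raters preferred above the one they rejected.
    """
    margin = reward_chosen - reward_rejected
    return -math.log(1.0 / (1.0 + math.exp(-margin)))

# Hypothetical scores for two replies to the same prompt: a warm,
# apologetic one vs a terse one. If raters preferred the warm reply,
# training widens the gap -- baking politeness into the reward signal.
print(round(preference_loss(reward_chosen=1.2, reward_rejected=0.4), 3))  # 0.371
```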

This engineered anthropomorphism has clear financial advantages and is unlikely to go away any time soon. More human-like systems are easier and more pleasant to use, increasing ‘stickiness’ and prolonging interaction time: ‘Ultimately, use-time is the currency of all interactive platforms.’ Anthropomorphism is thus a strategy that aligns user experience with the economic priorities of AI developers and operators.

Review by Cadenza Academic Translations

Published 22 April 2026
Original in English
First published by Eurozine

Contributed by Merkur © Eurozine
