
The hidden cost of letting AI make your life easier

February 26, 2026 · 5 min read

Sven Nyholm already sees troubling signs among his students. As a Professor of Ethics of Artificial Intelligence at Ludwig Maximilian University of Munich, he’s noticed that many can’t be bothered to engage with demanding texts when an AI summary is just seconds away. “AI is designed to make people not think,” he tells Big Think. “But why study philosophy at university if you don’t want to think — if you don’t want to sharpen your critical abilities — and instead outsource them to a mindless AI program?” In these moments, he admits, both his students’ studies and his own role as a teacher feel less meaningful.

Nyholm has spent years contemplating where all this might be heading. As one of the earliest philosophers to examine how AI intersects with meaning in human life, he looks closely at the language big tech companies use to describe their AI goals. When companies like OpenAI or Google DeepMind say they aim to ensure that their AI products “benefit,” “transform,” or “improve” people’s lives, what vision of the good is being assumed? What is asked in return? 

For Nyholm, the deeper question is: Can AI improve our lives in the way that matters most — by deepening meaning — or might it diminish meaning in ways that remain largely unexamined?

“I don’t think that AI can be said to either be good or bad for meaning,” Nyholm says. He begins from a basic observation: Meaning is a complex idea, and we first need to be clear about what we mean when we speak of it. If meaning were entirely subjective, then whatever feels meaningful to someone would simply count as meaningful, and the discussion would end there. 

To take the question seriously, meaning has to be tied to criteria or practices that can be weighed and compared. Some ways of acting, creating, or relating can reasonably be said to carry more meaning than others. From there, another insight follows. Meaning is not a single achievement that settles everything once and for all: It spreads across different scales and moments. “It is not that there’s just one thing that’s meaningful, and if you do it, your life is meaningful, and if you don’t, your life is meaningless,” Nyholm says. We can ask whether reading an article like this is a meaningful way to spend an afternoon, whether a week carries meaning, whether a relationship with one’s parents does, and how all of this bears on the meaning of a life as a whole.  

How, then, do we judge meaning? Philosophy, Nyholm points out, has developed a rich repertoire of criteria. One well-known strand links meaning in life to the good, the true, and the beautiful. On this account, a meaningful life is one shaped by doing good, seeking truth, and having the capacity to enjoy and appreciate beauty in its many expressions. Meaning has also been explored through what Nyholm describes as “developing the better sides of human nature, while learning to handle its less flattering aspects with dignity and care.”

Other perspectives locate meaning in being part of something larger than oneself. This can involve living in ways that reach beyond personal concerns, contributing to the good of others, forming deep relationships, and belonging to a community. These elements often gather around ideas of purpose, contribution, and achievement. Think of long-term projects that allow for progress over time, the cultivation of skills, and participation in communities of peers who recognize one another’s efforts and accomplishments.

Only once we have a grasp on what counts as meaningful activity can the central question take clearer shape: Is AI amplifying these domains, or undermining them? Nyholm draws the contrast with precision. “If AI takes over the meaningful ones and leaves us with only the activities and things that we find meaningless,” he argues, “then of course AI is by definition a threat to meaningfulness. Whereas if we can have AI take over the meaningless things from us, leaving the meaningful activities and things for us, then AI is a booster of meaning.” Everything hinges, then, on which tasks are being handed over, whether those tasks carry meaning for us, and whether there are other forms of activity left for human engagement.

This is where the terrain becomes unstable. Our familiar ideas about living well and acting virtuously were shaped long before AI began pressing against the boundaries of human life. They assume opportunities for contribution, for doing good, for achievement, and for the exercise of human ingenuity. Those assumptions may no longer hold in the world now unfolding. Even today, Nyholm notes, generative AI is opening a meaning gap: activities once experienced as meaningful are outsourced, while nothing equivalent takes their place.

The future of (meaningless) work

Nyholm gives the “gappy” character of generative AI a concrete face in his lucid and comprehensive book The Ethics of Artificial Intelligence: A Philosophical Introduction. He invites us back to the much-discussed moment of 2016, when the computer program AlphaGo, trained in part through millions of games played against itself, defeated the Go world champion Lee Sedol in four of their five games. The story is usually told as either a technological watershed or a human fall, echoed in Lee Sedol’s later admission that playing Go had lost its meaning for him. Nyholm, however, looks elsewhere.

He directs attention to a marginal yet indispensable figure: the Google DeepMind employee seated beside the board, manually placing stones for a machine with no arms. This person did not understand AlphaGo’s strategies. He did not need to be a skilled Go player, and perhaps did not even fully grasp the rules of the game. He was never regarded as the new world champion, and no one credits him with even a fraction of the achievement. So who, in fact, won? The engineers behind AlphaGo, absent from the games themselves? The system alone? Or, unsettlingly, no one at all?

Nyholm sees the 2016 jolt that shook our sense of achievement as only the tremor before a larger quake. In a world with ever more AI, he suggests, increasing numbers of people may end up in roles resembling the Google DeepMind employee: carrying out tasks whose intelligence lies elsewhere. Would such work offer achievements one could genuinely feel proud of? Since work is, for many, a central source of meaning in life, this prospect already threatens the availability of meaningful human activity.

Nyholm argues that the danger is already at work. He calls it a special case of the meaning gap: the achievement gap. It surfaces whenever we rely on AI to perform tasks we would otherwise carry out ourselves, tasks that normally exercise our intelligence and skill, such as writing text, composing music, and making decisions. 

When we do so, Nyholm warns, we end up “limiting our own contributions to outcomes.” As our role diminishes, the resulting products increasingly fail to qualify as achievements we can genuinely claim. In this sphere of meaningful achievement, Nyholm insists, the stakes are clear: Unless we actively resist this drift, something essential to the meaning of human life will be lost.

Who took that photograph?

One striking example of the achievement gap appeared early in the era of generative AI. In 2023, the Sony World Photography Competition awarded artist Boris Eldagsen first prize for best black-and-white photograph. The image, curiously titled The Electrician, depicted two women in an enigmatic embrace, rendered in the style of an old-fashioned print. The moment of celebration did not last. Eldagsen declined the award and disclosed that the image was not a photograph, but a work co-produced with generative AI. With notable humility, he argued that anyone relying on image-generating AI deserves no credit for being a good photographer, or even for being a photographer at all. At most, he suggested, the proper label would be “promptographer.”

At first glance, this incident can look like a narrow ethical puzzle about credit and authorship, something that might be settled by regulation or by inventing a new category such as “promptcraft.” Nyholm thinks the issue runs deeper — what’s really at stake is meaning. “Do we just care about the outputs someone produces with whatever tools are available?” he asks. With certain values, he suggests, the answer is clearly no. What matters is not exhausted by the finished product. We also attend to how it came about.

When we praise someone’s work, we usually care about who actually did it. We ask whether the person possesses the skills that made the achievement possible. Nyholm gives a simple example. If you want to know whether someone can write poetry, and you discover that they asked a large language model to produce a poem and then signed their name, you learn nothing about their ability. The poem itself might be impressive. It could even win a prize. Still, something crucial is missing. 

“If you don’t have to put in the work,” Nyholm says, “if you can just push a button and something good comes out,” the situation becomes strange. Perhaps the outcome is valuable and even adds something meaningful to the world. And yet, paradoxically, it may be that neither the AI system nor the human involved really understands what is being done.

To sharpen the point, Nyholm proposes turning John Searle’s famous Chinese Room against ourselves. In 1980, Searle imagined a human being sealed in a room, producing flawless replies in Chinese by following a rulebook, without understanding a word. To those outside, the performance looks intelligent; inside, comprehension never arises. Searle used this image to illuminate how a computer program can manipulate symbols without grasping meaning.

Nyholm keeps the human in the room but shifts the lesson. Imagine that same person now receiving Chinese messages, feeding them into a ChatGPT-like system through prompts, and passing the replies back out. The scene resembles the Go assistant executing AlphaGo’s moves or the creator of an AI-generated photograph. Intelligence is performed, yet understanding is absent. In relying on such systems without reflection, Nyholm suggests, the allegory no longer describes machines. It describes us.

“So did someone do something in a meaningful and admirable way,” Nyholm asks, “so that the outcome truly counts as their achievement?” Only then, he suggests, can we say of a piece of art or a corporate policy: You were the architect behind this; you showed judgment, skill, and accomplishment. “The more you didn’t actually do anything,” he adds, “and you had AI do it for you, the harder it becomes to say, ‘I am the cause. I am the origin. I deserve the credit.’”

Nyholm points to familiar cases. Think of a novelist leaning heavily on a ghostwriter, or a political leader delivering a stirring speech written entirely by staff. Now replace the ghostwriter or speechwriter with AI. “At that point,” he suggests, “you might have to say that no one really deserves credit.” We admire AI as an engineering achievement, he explains, yet we do not treat it as accomplished in the human sense. Effort, skill, and authorship remain categories we reserve for people, not for technologies.

Many philosophers, Nyholm observes, connect meaning closely to effort and achievement. One distinction he finds especially illuminating comes from political theorist Rob Goodman: the difference between “process goods” and “outcome goods.” Outcome goods are the finished results of activity: a completed text, a painting, a research finding. Process goods lie in the doing itself: painting, researching, struggling through a problem. These, Nyholm suggests, are precisely what AI threatens most. Much of what makes achievement meaningful is found in the process of achieving.

Inspired by philosophers such as Gwen Bradford and Hannah Maslen, Nyholm adds that genuine achievement involves difficulty and sacrifice. It also requires competence. The person must produce the outcome through skills they actually possess, showing a form of excellence. “Relying on AI technologies that tell us what to do,” he stresses, “or that help us produce impressive outputs, seems insufficient to qualify as having made extra effort, shown particular talent, or displayed any special form of excellence.”

The unbearable lightness of AI

Generative AI does not erode meaningful achievement by accident. It does so by design, and in line with its business logic. “It is designed to take over tasks that are effortful for us,” Nyholm says. “This is the very idea of AI: the temptation to take the easy way out.” The difficulty, he adds, is that many effortful tasks are precisely the ones that carry meaning. Deep relationships require patience, friction, and vulnerability. Skills demand time, frustration, and persistence. Yet people increasingly say: My AI companion is always available, always supportive, and far easier than dealing with another human being. Or instead of struggling to develop a skill, I can let AI produce the outcome for me.

In such cases, the praise AI attracts is telling. We call it convenient, time-saving, and cheap. “But is it meaningful?” Nyholm asks. When effort, creativity, and skill fall away, meaningfulness no longer seems the right category.

His deeper worry is not that AI will outperform humans, but that it will appear to do so, especially to non-expert eyes. “Current forms of AI threaten meaningful activities,” he argues, “because they look far more intelligent than they are.” This appearance invites trust. People begin to treat AI as an oracle, mistaking an impressive engineering achievement for understanding. As misplaced confidence grows, judgment weakens. Skills develop less fully. Capacities are handed over too easily, and with them, forms of meaning that depend on effort.

Nyholm links this directly to the value of processes, including confusion, detours, and lingering with complexity. He punctures the idea that everything should be fast and efficient. Speed may feel pleasant, he concedes, yet it undermines patient thinking and reconsideration. He points to an Anthropic advertisement promising a paper completed in a single day: brainstorming in the morning, drafting by noon, polishing by afternoon. What disappears in this vision is the slow work of searching, getting lost, following the wrong thread, and returning with insight. “Many ideas,” Nyholm says, “come from looking for one thing and finding something else instead.” When AI delivers tidy, unified answers, it spares us that work. In doing so, it risks weakening our capacity to break complex problems into parts, examine assumptions, and think things through with precision.

Nyholm is careful to say that how we preserve critical and analytical thinking, along with deep concentration, remains an open question. What worries him is the alternative: a gradual slide into lives shaped by impulse, short attention, and constant distraction. This challenge, he insists, cannot be placed on individuals alone. Public policy matters. Still, at the personal level, each of us has work to do. Nyholm speaks of finding an “AI and meaning sweet spot” — a way of living with technological assistance that still leaves room to contribute, play an active role in one’s community, and stay with difficult things long enough to actually understand them.

Above all, Nyholm pushes back against a seductive illusion. AI summaries and tidy three-point answers may free up time — perhaps, he adds wryly, for watching more TikTok videos — but they do not produce mastery. They can create the appearance of learning without its substance. When people later face situations that demand real judgment, they discover the gap. Excellence, Nyholm reminds us, grows from effort. He recalls an anecdote about violinist Itzhak Perlman. Told, “I would give my life to play like you,” Perlman replied, “I did give my life to play like I do.” Talent helps. What truly distinguishes people is time, practice, and enduring commitment.

This article, The hidden cost of letting AI make your life easier, is featured on Big Think.