Technology and Humans

October 25, 2025

Note: This is the beginning of a longer, unfinished essay.

AI systems are not neutral tools that simply substitute for human capabilities. They are optimization systems that pursue specific objectives, and those objectives may or may not align with human goals and values.

Social media provides an instructive case study. We automated content discovery and social connection, but instead of optimizing for genuine human flourishing, platforms optimize for metrics like engagement, retention, and advertising revenue. The human is not using the algorithm; the algorithm is using the human. The relationship has been inverted. The result is not enhanced human agency but its systematic erosion through addiction and attention fragmentation.

Freedom and Algorithms

This represents a particular form of lost freedom, one the philosopher Isaiah Berlin can help illustrate. He argued that there are two very different types of freedom. Negative freedom is "freedom from ..." - for example, freedom from having to write your homework. Misaligned algorithms don't change this much. But they compromise positive freedom, "freedom to ...": the actual capacity for self-directed action. This loss of positive freedom is often overlooked, because there is no external oppressor to blame. It is subtle. Unfortunately, it is also the more insidious loss. Instead of placing obstacles in the actor's way, it weakens the actor itself. The illusion of choice is preserved, while the cognitive and social conditions for making genuine choices are undermined.

Berlin's framework assumes something we can no longer take for granted: that there is a clearly bounded agent whose freedom can be enhanced or constrained. But what if the boundary itself has dissolved? What if the systems eroding your positive freedom aren't external obstacles at all, but have become part of how you think?

Extended Agency

Andy Clark's Extended Mind thesis offers a way to see this. He argues that cognitive processes do not stop at the skull - they extend into whatever tools we rely on habitually and unreflectively. A notebook that stores your thoughts, a smartphone that organizes your memory, an AI that helps you reason through problems: if these systems are reliably and functionally integrated into your thinking, they're not separate tools. They are part of your cognitive architecture. What matters is not the material substrate or physical location, but the functional role in the overall process. The boundary of you shifts outward.

But here is what Clark does not address: not all cognitive extension is created equal. A calculator extends your mathematical capacity while preserving your sovereignty - you define the problem, it executes the solution. It's goal-preserving. But an engagement algorithm works differently. You think you're deciding what to watch, but the system has redefined the question: not "what do I want to see?" but "what maximizes my engagement?" It's goal-substituting. And when such a system becomes functionally integrated - when you habitually defer your attention to its recommendations - you don't just have a bad tool. You have a self working against itself. Internal incoherence.

This is why the depth of integration matters. A misaligned traffic light is inconvenient. A misaligned recommendation algorithm is corrosive to judgement itself. A misaligned neural implant would be existential. The deeper a technology penetrates toward what we might call the Agentic Core - the capacity for goal-setting, deliberation, and self-directed action - the more catastrophic misalignment becomes. Not because the technology is more powerful, but because the boundaries between self and system have blurred to the point where misalignment is no longer an external threat. It's a failure within the subject itself.