Scaling AI Without Scaling Entropy: The Case for Agent Skills
A Knowledge-Centric Perspective
AI Scales Output, Not Knowledge
Large Language Models are rapidly increasing the speed at which organizations produce code, documents, and decisions. Yet beneath this apparent productivity gain lies a deeper structural failure: information decays faster than it is accumulated.
In most AI-assisted workflows, intent is weakly specified, procedural knowledge is implicit, and execution varies from one interaction with the LLM to the next. Each use of an LLM becomes a fresh act of reconstruction: what the task is, how it should be approached, and what constitutes a correct outcome. As intent moves from idea to prompt, from prompt to model reasoning, and from reasoning to artifacts, meaning is repeatedly compressed, distorted, or lost altogether.
This manifests as a high Information Loss between intent and outcome. Better models reduce local errors, but they do not address the systemic loss of information across these handoffs. In fact, higher throughput often accelerates the information loss: more outputs are produced faster, but each is grounded in a fragile reconstruction of knowledge.
At the same time, organizations repeatedly rediscover the same procedural knowledge. Teams "figure out again" how to write tests, define specifications, research existing code, or structure architectural decisions, despite having solved these problems before. From a knowledge-centric perspective, this is a decrease in Knowledge Discovery Efficiency (KEDE): knowledge exists, but it is not operationalized in a form that can be reused.
Compounding this, LLMs exhibit far more behavioral variety than most tasks require. Without explicit constraints, the model's response space overwhelms the problem space, leading to inconsistent outputs, unpredictable quality, and low trust. This is a classic mismatch with Ashby's Law of Requisite Variety: the system's variety is not regulated to match the variety of the task.
In addition, much of the missing information resides in tacit knowledge: experienced practitioners know what to do and how to do it, but never explicitly encode it. That knowledge remains locked in people's heads, inaccessible to AI systems and fragile under turnover. As a result, AI usage leaves no durable organizational memory. When individuals change teams or leave, the "way AI is used here" disappears with them.
The net effect is stark: organizations adopt AI at scale, yet fail to accumulate knowledge at scale. Output grows, but understanding does not. Capability stagnates while entropy compounds.
Faster Work, Fragile Capability
When information loss and rediscovery dominate AI-assisted work, organizations appear to move faster while in reality becoming structurally weaker.
Teams generate more artifacts (code, tests, documents, analyses), but those artifacts are increasingly detached from the initial intent. Because meaning is reconstructed anew on each interaction, small differences in prompts, context, or personnel produce large differences in outcomes. Quality becomes variable, reviews become harder, and trust in AI-assisted outputs erodes. The organization compensates with more oversight, more rework, and more coordination overhead, quietly negating the promised productivity gains.
As procedural knowledge is repeatedly rediscovered rather than reused, learning fails to compound. New hires still take just as long to become effective. Senior staff remain bottlenecks, not because knowledge is scarce, but because it is not operationalized. From a knowledge-centric perspective, the organization remains inefficient at converting existing knowledge into effective action, regardless of how many AI tools are deployed.
Uncontrolled variation in AI behavior further amplifies this fragility. Without explicit constraints, the system's responses fluctuate beyond what the task requires. Outputs may look plausible yet differ subtly in assumptions, structure, or rigor. These inconsistencies accumulate across teams and projects, making systems harder to reason about and outcomes harder to predict. What appears as "creativity" at the task level becomes loss at the company level.
The absence of organizational memory compounds the problem over time. AI usage patterns are neither standardized nor retained; improvements remain local and temporary. Each team, and often each individual, evolves its own way of working with AI, leading to fragmentation rather than convergence. When people leave, their AI practices leave with them, resetting progress and reintroducing avoidable errors.
The result is a familiar but dangerous pattern: throughput increases while capability stagnates. Decisions are made faster, but with thinner grounding. Systems grow more complex, but less coherent. Risk rises quietly, not because AI is unreliable, but because knowledge is not preserved.
In this state, AI does not function as a learning amplifier. It becomes an entropy accelerator.
Agent Skills as Executable Knowledge
The failure described above is not a modeling problem and not a tooling problem. It is a knowledge problem. Addressing it requires a mechanism that preserves intent, constrains behavior, and allows learning to accumulate across executions.
One solution is the concept of Agent Skills, recently introduced by both Anthropic and OpenAI.
A skill is not a better prompt, and it is not another agentic tool exposed to the LLM. A skill is a durable encoding of procedural knowledge: a structured description of how a recurring task should be performed, under what constraints, and how correctness is evaluated. In practice, a skill takes the form of explicit instructions, optional helper logic, and validation rules that an agent must consistently follow.
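To make this concrete, here is a minimal sketch of those ingredients in Python. The structure and field names are hypothetical and purely illustrative; actual vendor formats differ, but the anatomy is the same: instructions, constraints, and a rule for judging correctness.

```python
from dataclasses import dataclass
from typing import Callable

# Hypothetical, minimal representation of a skill; the names and fields
# below are illustrative, not any vendor's actual format.
@dataclass
class Skill:
    name: str
    intent: str                      # the recurring task this skill performs
    instructions: list[str]          # explicit procedural steps the agent must follow
    constraints: list[str]           # what the agent may and may not do
    validate: Callable[[str], bool]  # how correctness of the output is evaluated

# Example: encoding "how we write commit messages here" as executable knowledge.
commit_message_skill = Skill(
    name="commit-message",
    intent="Write a conventional commit message for a staged diff",
    instructions=[
        "Summarize the change in an imperative subject line under 72 characters.",
        "Explain the motivation, not the mechanics, in the body.",
        "Reference the issue ID only if one appears in the branch name.",
    ],
    constraints=[
        "Do not invent issue IDs.",
        "Do not describe files that are not part of the diff.",
    ],
    validate=lambda output: len(output.splitlines()[0]) <= 72,
)
```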
Tools expand what an LLM can do; Agent Skills define how it should do it here. From a knowledge-centric perspective, this distinction is crucial. By externalizing decision logic and execution structure, Agent Skills insert a control layer between intent and action — one that is stable across time, people, and contexts.
This control layer directly addresses information loss. By forcing intent to be expressed in a canonical form and carried through execution unchanged, Agent Skills reduce the residual entropy between intent and outcome. Meaning is no longer reconstructed from scratch on every interaction; it is preserved by design. As a result, the Information Loss drops, not because the model reasons better, but because less information is allowed to dissipate in the first place.
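What "carried through unchanged" can mean in practice is sketched below, continuing the hypothetical Skill structure from above: the canonical description of the procedure is rendered into the model's context the same way on every execution, instead of being re-described ad hoc in each prompt.

```python
def render_task(skill: Skill, task_input: str) -> str:
    """Render a skill and a concrete input into the agent's context deterministically."""
    steps = "\n".join(f"{i + 1}. {step}" for i, step in enumerate(skill.instructions))
    rules = "\n".join(f"- {rule}" for rule in skill.constraints)
    return (
        f"Task: {skill.intent}\n\n"
        f"Input:\n{task_input}\n\n"
        f"Follow these steps exactly:\n{steps}\n\n"
        f"Constraints:\n{rules}\n"
    )
```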
Agent Skills also eliminate rediscovery. Procedural knowledge that previously lived in experience, conventions, or tribal memory becomes an executable artifact. Each execution reuses that knowledge rather than re-deriving it, increasing Knowledge Discovery Efficiency. Learning compounds because discovery builds on prior discovery instead of restarting from zero.
Equally important, Agent Skills regulate variety. By explicitly constraining permissible actions, tool usage, and output structure, they align the behavioral variety of the LLM with the variety required by the task. This resolves the mismatch that plagues unconstrained AI use, replacing unpredictability with repeatability without sacrificing effectiveness.
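One rough way to picture that regulation, again reusing the hypothetical sketch above: outputs that do not satisfy the skill's validation rule are simply not accepted, which narrows the response space to the variety the task actually requires. Here `call_model` is a placeholder for whatever LLM call the agent makes.

```python
def run_skill(skill: Skill, task_input: str, call_model, max_attempts: int = 3) -> str:
    """Execute a skill: render the canonical prompt, call the model, accept only valid output."""
    prompt = render_task(skill, task_input)
    for _ in range(max_attempts):
        output = call_model(prompt)  # the underlying LLM call, injected by the caller
        if skill.validate(output):
            return output
    raise ValueError(f"No output satisfied the validation rule of skill '{skill.name}'")
```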
Finally, Agent Skills create organizational memory. Because they are explicit, versionable, and shareable, they persist beyond individual contributors. Improvements can be captured, reviewed, and propagated. Over time, the organization accumulates not just outputs, but knowledge about how to produce good outputs.
In this sense, Agent Skills are best understood as executable knowledge artifacts. They transform AI from a stateless assistant into a regulated component of a learning system — one where intent is preserved, behavior is constrained, and knowledge is allowed to compound.
From AI Adoption to Knowledge-Centric Capability
When Agent Skills are introduced as a structural layer, the nature of AI usage changes fundamentally. The organization no longer relies on individual prompt craftsmanship or local experimentation to extract value from AI. Instead, it begins to engineer capability.
Information loss is no longer an accepted side effect of speed. Intent is preserved through explicit structure, execution is constrained by reusable procedure, and uncertainty is surfaced rather than erased. As a result, outcomes become more predictable, reviewable, and trustworthy — not because humans intervene more, but because entropy is prevented upstream.
Organizational knowledge also changes character. Instead of repeatedly rediscovering how work should be done, teams build on accumulated procedural knowledge. Each improvement to an Agent Skill raises Knowledge Discovery Efficiency for everyone who uses it thereafter. Learning compounds across people, teams, and time. Onboarding accelerates, senior experts stop acting as bottlenecks, and expertise becomes a property of the system rather than the individual.
Crucially, organizational memory emerges where none existed before. Executable procedural knowledge persists in the form of Agent Skills. They can be refined, versioned, audited, and shared. When people move on, the knowledge embedded in their ways of working remains. AI usage stops being ephemeral and starts to resemble an evolving body of institutional practice.
At scale, this produces a decisive shift. AI is no longer an entropy amplifier that increases output while quietly degrading coherence. It becomes a regulated component of a learning system: one that aligns behavior with intent, matches variety to task, and converts prior knowledge into reliable action.
This is the real promise of Agent Skills: not better prompts, faster outputs, or clever automation, but the ability to scale knowledge itself.
Next Step
If your AI usage still depends on individual prompts and personal habits, start by identifying one recurring task where outcomes vary more than they should. Extract the tacit "how" behind successful executions, encode it as executable procedural knowledge with explicit structure, constraints, and validation, and reuse it deliberately. Treat Agent Skills as living knowledge artifacts, measure their impact on rework, predictability, and learning speed, then refine them as you would any other core capability.
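As a purely illustrative example of what that deliberate reuse might look like, continuing the hypothetical sketches above (`call_model` and `staged_diff` are placeholders for a real LLM client and real input):

```python
def call_model(prompt: str) -> str:
    # Placeholder: substitute your actual LLM client here.
    return "fix: correct off-by-one error in pagination\n\nThe page index previously started at 1."

staged_diff = "diff --git a/pagination.py b/pagination.py ..."

# The same encoded procedure is reused every time the task recurs,
# instead of being re-derived in a fresh, ad hoc prompt.
message = run_skill(commit_message_skill, task_input=staged_diff, call_model=call_model)
print(message)
```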
That is how AI stops scaling entropy and starts scaling productivity.
Dimitar Bakardzhiev