Aligning Team Mental Models with AI Coding Agents

A Knowledge-Centric Perspective

Abstract

As GenAI coding agents become increasingly embedded in software development workflows, teams face a new kind of challenge — not just how to use AI, but how to make the most of it. Traditional developer productivity hinges on shared mental models: internal representations that help individuals and teams interpret, predict, and act with minimal friction. Now, those same principles must be extended to artificial collaborators.

This article introduces a knowledge-centric perspective on GenAI adoption, arguing that productive human–AI collaboration depends on mental model alignment. GenAI agents can act as powerful knowledge amplifiers, but only when they understand the assumptions, conventions, and goals that govern a team's work. Without access to that context, they generate code that may be technically correct but cognitively misaligned, leading to inefficiency, rework, and mistrust.

We explore:

  • How shared mental models have historically enabled human–human collaboration in software teams
  • Why GenAI tools help in knowledge-scarce contexts, but hinder when they lack alignment
  • What it means for an AI agent to “share” a mental model with a developer
  • How to codify tacit knowledge — through documentation, prompts, fine-tuning, and retrieval techniques — so AI agents can access and apply it
  • How feedback loops between developers and AI lead to emergent alignment over time
  • And why AI integration is ultimately a knowledge management problem, not just a tooling one

The article concludes that organizations will gain the most from GenAI not by chasing more automation, but by investing in structured knowledge sharing — making context accessible not just to teammates, but to machines. In this new environment, alignment is the new leverage, and the future belongs to teams who treat AI not as a tool, but as a teammate that learns.

1. Mental Models as Cognitive Bridges

AI coding agents are becoming part of the modern software development workflow—producing code, suggesting tests, writing documentation, and even refactoring functions. But as these tools take on increasingly active roles, a critical question emerges:

How can humans and AI work together effectively when they don't share the same understanding of the system?

In software development, success rarely comes from simply following a checklist. Progress depends on how well individuals and teams interpret the current situation, predict what might happen next, and choose what to do. Behind every judgment call—whether it’s estimating a task, naming a function, or debugging a production issue—is a mental model.

A mental model is an internal representation of how something works. It compresses complexity into a form we can reason about, enabling us to make decisions with incomplete information. Developers rely on mental models constantly: of how the system behaves, how users think, how the CI pipeline operates, or how teammates approach their work[1].

In a knowledge-centric view of software development, progress hinges on acquiring the right knowledge at the right time — whether through experience, documentation, code reading, or collaboration. Mental models serve as the bridges between what we already know and what we still need to figure out. They help reduce uncertainty and guide productive action.

But these models aren’t just individual. When teams share mental models — of architecture, workflows, or product intent — they coordinate more effectively and avoid costly misunderstandings. When models diverge, friction increases: decisions conflict, code diverges, and rework grows.

Now, AI agents are entering this space. They are no longer passive tools but active participants in knowledge work. Yet unlike human teammates, they lack access to the tacit knowledge that lives in people’s heads — naming conventions, architectural boundaries, team norms, and historical trade-offs. Without this shared context, their suggestions often misfire, introducing friction instead of flow.

This article argues that for GenAI coding agents to be genuinely useful, they must be aligned with the mental models of the humans they assist. That means teams must do more than adopt AI — they must externalize their internal knowledge, encode it in accessible forms, and deliberately cultivate shared assumptions between people and machines.

We will explore:

  • Why mental models are foundational to both human and AI effectiveness
  • When GenAI tools amplify knowledge — and when they add noise
  • What it means to share a mental model with an AI agent
  • How teams can codify and transmit their knowledge to AI systems
  • And why alignment — not automation — is the real frontier of AI-enhanced development

In a knowledge-centric view of software development, the challenge isn’t just building the right system — it’s aligning the right minds, human and artificial, around a shared understanding of how that system works.

2. The Knowledge-Centric Role of Mental Models

Software development is not just the act of producing code — it is fundamentally a form of knowledge work. Every meaningful action in development requires bridging a gap between what is known and what needs to be known. This includes understanding the problem space, choosing appropriate abstractions, adapting to changing user requirements, and reasoning about the behavior of complex systems. In each case, progress depends not on output volume, but on acquiring, refining, and applying knowledge.

This is where mental models play a central role. A mental model provides a cognitive shortcut — a way to reason efficiently in uncertain environments. Rather than holding every detail in working memory, developers use mental models to compress complexity, reduce the number of possibilities to consider, and guide decision-making under time and attention constraints. Whether it’s imagining how data flows through a service or how a teammate might interpret a code review comment, mental models enable developers to act without needing full certainty.

Crucially, mental models are not static beliefs or rigid frameworks. They are dynamic approximations — tentative maps of reality that evolve as developers learn from experience, feedback, and collaboration. When a defect surfaces, when a test fails unexpectedly, or when a design decision has unforeseen consequences, mental models are updated. This continuous refinement is not a side effect of development — it is the core of the learning process.

In this sense, mental models are working knowledge. They represent the developer’s best current guess about how things function, what matters, and what tradeoffs are acceptable. And as with all knowledge work, the quality of output depends not just on individual effort, but on how well these internal models are aligned, updated, and shared across the team.

3. Examples of Mental Models in Software Development

Mental models are not abstract concepts — they show up in the everyday decisions developers make. From system architecture to sprint planning, software professionals rely on internal representations to interpret situations and act efficiently. Below are several domains where mental models are especially critical, along with examples of how they guide reasoning—and what happens when they diverge.

System Architecture

Mental models of system architecture compress knowledge about how components interact, where data flows, and where responsibilities lie. For example, a developer might think of the system as a cleanly layered stack: API → service → database.

  • When accurate: This model helps the developer quickly identify where a bug might originate, or where a new feature should be added.
  • When divergent: If the actual architecture is event-driven or polyglot, the developer’s assumptions may lead them to make ineffective changes or search for issues in the wrong place.
  • Knowledge compression: Reduces the mental cost of tracing behavior across components.

Code Design Principles

Developers carry mental models of what "good code" looks like — often informed by principles like separation of concerns, DRY (Don’t Repeat Yourself), or SOLID.

  • When aligned: A shared understanding of code quality leads to fewer debates in reviews and more maintainable systems.
  • When divergent: One developer may refactor for reuse while another optimizes for readability, leading to confusion and churn.
  • Knowledge compression: Guides naming, modularization, and responsibility allocation with minimal deliberation.

User Behavior

Product managers, designers, and developers alike form mental models of how users will interact with the system. These models influence everything from feature prioritization to error handling.

  • When accurate: Teams build interfaces that feel intuitive and anticipate edge cases.
  • When flawed: Developers may overestimate user expertise or underestimate user expectations, leading to frustrating experiences.
  • Knowledge compression: Substitutes for constant user feedback by enabling reasonable prediction of behavior.

Agile Workflow Assumptions

Agile teams operate based on models of how work flows — from ticket creation to deployment. One developer may imagine work as progressing linearly through a sprint, while another sees it as iterative refinement.

  • When aligned: Teams manage expectations around scope, velocity, and collaboration.
  • When divergent: Misunderstandings arise over what “done” means, how much planning is required, or how feedback should be integrated.
  • Knowledge compression: Coordinates team behavior without needing to restate process rules constantly.

When Mental Models Diverge

In each of these domains, friction emerges when mental models are misaligned:

  • A developer debugs the wrong layer.
  • Code reviews become battles over style vs. structure.
  • Teams misjudge user needs.
  • Merge conflicts multiply.
  • Sprint planning turns into micromanagement or chaos.

Such friction isn’t just annoying — it’s a symptom of knowledge divergence. When teams don’t share core models of how things work, coordination costs rise, and the likelihood of rework increases. This is why investing in shared understanding is not a luxury — it's a prerequisite for high-performing teams.

4. From Human–Human to Human–AI Collaboration

Software teams have always relied on more than just tools and processes to be productive—they rely on shared mental models. These internal representations allow developers to anticipate how their teammates write code, interpret requirements, or debug a system. When team members share these models, they coordinate with minimal friction: communication is smoother, handoffs are cleaner, and surprises are rare.

In this context, mental model alignment is a prerequisite for efficient collaboration. It’s how developers manage complexity without constantly re-explaining decisions or retracing each other’s steps. Whether it’s understanding the domain logic, navigating the codebase, or agreeing on what “done” means, shared mental models act as cognitive glue.

Now, with the introduction of GenAI coding agents, a new kind of collaborator has entered the workflow. These agents write code, suggest refactors, generate tests, and even propose design changes. But unlike human teammates, they lack firsthand experience, institutional memory, and tacit understanding. They don’t attend standups, participate in retrospectives, or absorb team culture over time. And yet, they operate inside the same workflows where shared mental models are essential.

This shift introduces a new challenge:

If GenAI agents are to contribute meaningfully, they must share key assumptions, styles, and goals with the humans they assist.

Without that alignment, developers and AI tools operate on divergent models of how the system should behave, how the code should look, and what constraints must be respected. This misalignment leads to:

  • Code that breaks architectural boundaries
  • Test cases that don't reflect real edge cases
  • Refactorings that violate naming conventions or implicit contracts

These are not technical errors—they are cognitive mismatches. The AI isn’t incompetent; it’s out of sync with the team’s model of the system.

We can call this problem cognitive misalignment between human and AI agents.

Just as misaligned mental models between teammates lead to misunderstandings, rework, and friction, the same applies to AI coding agents. When humans and machines operate with different internal models, trust erodes, and the promise of AI-assisted development is lost in a sea of irrelevant suggestions and corrective effort.

To avoid this, we must approach AI not as a generic tool, but as a collaborative agent — one that, like any teammate, needs context, clarity, and calibration. This begins with the recognition that mental model alignment is no longer just a human concern — it’s a human–AI requirement.

5. When GenAI Helps and When It Hurts

GenAI is often described as a productivity booster, but in practice its impact is conditional. It doesn’t always help, and sometimes it actively hinders. Understanding when GenAI creates value, and when it introduces friction, is essential for teams that want to integrate these tools effectively.

At its core, GenAI acts as a conditional knowledge amplifier. Its usefulness depends on the relationship between what the developer knows and what the AI can supply.

When GenAI Helps

GenAI delivers clear value when developers face genuine knowledge gaps — situations where they lack specific, contextual, or syntactic knowledge that the model can provide quickly and accurately. Examples include:

  • Using an unfamiliar API or library
    The AI can autocomplete function calls, suggest valid parameters, and provide idiomatic usage patterns.
  • Writing boilerplate or repetitive code
    The model can offload mechanical tasks, freeing the developer’s attention for higher-order thinking.
  • Generating test cases or configuration templates
    It speeds up work by filling in predictable patterns based on minimal input.

In these cases, the developer uses the AI to bridge a knowledge gap — treating it as a reference or an assistant that accelerates learning and execution.

When GenAI Hurts

Problems arise when GenAI enters areas where the developer already holds well-formed mental models that the AI doesn’t share. In these cases, the AI becomes more of a distraction than a support. This includes situations where:

  • The team has established coding conventions that the AI violates.
  • The project follows specific architectural constraints the AI isn’t aware of.
  • The AI suggests changes that would break implicit contracts or legacy quirks known only to the team.
  • The developer is deep in a complex reasoning task, and AI suggestions are off-topic, superficial, or irrelevant.

This is where AI friction emerges: the effort required to review, correct, or discard unhelpful output exceeds the value of the suggestion itself. Even worse, poor suggestions may subtly mislead developers, introducing bugs or inconsistencies that only appear later.

The Root Problem: Absence of Shared Context

It’s tempting to blame these issues on model limitations or training data, but the root problem is more fundamental:

The AI doesn’t share the developer’s context, assumptions, or goals.

This is not a problem of raw capability — it’s a problem of cognitive misalignment. The AI is operating with one mental model of the task, and the developer is operating with another. Just as two engineers with different mental models of the system would clash, so too does the AI clash when it lacks alignment with the human it’s trying to help.

This insight shifts the conversation:

  • The goal is not simply to improve model performance.
  • The goal is to establish a shared operational context between human and AI.
  • Productivity gains come not from smarter models, but from better-aligned ones.

In the next section, we’ll explore what it really means to align mental models with an AI agent, and how teams can close the cognitive gap that often separates developers from their digital collaborators.

6. What It Means to Align Mental Models with AI

In human teams, shared mental models form the basis of smooth collaboration. They allow individuals to make assumptions about how others will interpret tasks, write code, or handle exceptions—without needing to spell everything out. This shared understanding reduces uncertainty, minimizes rework, and enables teams to move faster with less communication overhead.

As GenAI coding agents become collaborators in development workflows, the same principle applies:

To be effective, GenAI must operate with a mental model that aligns with the team’s own.

This means going beyond syntax and generic correctness. It requires the AI to reflect key team-specific assumptions—about how systems are structured, how code should be written, how testing works, and what constraints must be respected.

Expanding “Shared Understanding” to Human–AI Interaction

To align mental models between a developer and an AI assistant, the AI must internalize and respond to the same knowledge objects that guide the developer. This includes:

  • Architecture assumptions
    Does the AI understand the intended separation of services, boundaries between layers, and rules for cross-cutting concerns (e.g., auth, logging, metrics)?
  • Naming conventions and idioms
    Does it generate code that uses domain-relevant terms, follows existing naming patterns, and fits naturally into the codebase?
  • Testing strategy
    Does it suggest unit vs. integration tests correctly? Does it understand mocking strategies or the role of end-to-end tests?
  • Security constraints
    Does it avoid patterns that would violate team policies or regulatory requirements (e.g., hardcoding secrets, bypassing auth)?

Without access to these elements, the AI generates output that may be technically valid but cognitively misaligned with how the team thinks and works.

    Comparison: Human–Human vs. Human–AI Mental Model Alignment

    Aspect | Human–Human Alignment | Human–AI Alignment
    Architecture Understanding | Shared through meetings, docs, and experience | Must be encoded via documentation, context windows, or fine-tuning
    Naming Conventions | Picked up implicitly via code reviews and pairing | Requires exposure to project code or guidance via prompts
    Testing Practices | Reinforced through culture and CI feedback | Must be taught through examples, test patterns, and conventions
    Security Constraints | Communicated via policies and team norms | Needs explicit encoding in rules, linters, or fine-tuned safeguards
    Intent Interpretation | Clarified through real-time conversation | Must be inferred from prompt framing and code structure
    Feedback Loops | Immediate correction via social interaction | Happens asynchronously via edits, re-prompts, or reinforcement

    The table shows a key insight: humans align mental models through context-rich interaction, while AI systems require explicit encoding of that context in machine-readable form.
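
    To make "machine-readable form" concrete, here is a minimal sketch in Python: team conventions captured as structured data and rendered into a preamble that can be placed in an AI agent's context window. The field names and rules are illustrative assumptions, not a standard format.

        # Team context made explicit so it can be prepended to an AI agent's prompt.
        # Field names and values are illustrative, not a standard.
        TEAM_CONVENTIONS = {
            "architecture": "Layered: API -> service -> repository; views never access the DB directly",
            "naming": "snake_case functions; use domain terms from the billing glossary",
            "testing": "pytest; unit tests mock external services, integration tests run only in CI",
            "security": "never hardcode secrets; every endpoint goes through the auth middleware",
        }

        def render_context_preamble(conventions: dict[str, str]) -> str:
            """Turn team conventions into a plain-text preamble for a prompt or system message."""
            lines = ["Follow these team conventions:"]
            lines += [f"- {topic}: {rule}" for topic, rule in conventions.items()]
            return "\n".join(lines)

        print(render_context_preamble(TEAM_CONVENTIONS))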

    Alignment Is a Design Task, Not Just an Engineering One

    Aligning mental models with AI tools isn’t just a question of prompt engineering or choosing the right model. It’s a design challenge:

    • What knowledge does the AI need to act like a competent teammate?
    • Where is that knowledge currently stored—code, docs, tribal memory?
    • How can we surface and encode it in ways the AI can access and use?

    Answering these questions reframes AI adoption as a knowledge modeling effort—an extension of software architecture and team cognition into the human–AI interface.

    In the next section, we’ll explore how teams can codify their internal mental models in practice — translating assumptions, conventions, and patterns into forms that GenAI can consume and learn from.

    7. Codifying Mental Models for AI Consumption

    GenAI coding agents cannot read minds. For them to behave as effective collaborators, teams must translate their internal mental models — often tacit and informal — into machine-consumable knowledge. This is not a matter of tuning prompts or tweaking APIs. It is a knowledge management challenge: How do we make the assumptions, constraints, and patterns that live in developers’ heads explicit and accessible to AI systems?

    In traditional teams, knowledge alignment happens through osmosis: pairing, reviews, conversations.
    With AI agents, that alignment must be engineered.

    Why Tacit Knowledge Must Be Made Explicit

    Tacit knowledge is what allows experienced developers to "just know" why something is named that way, why a workaround exists, or why a helper function shouldn’t be touched. It includes:

    • Naming conventions
    • Project idioms
    • Legacy quirks and constraints
    • Preferred patterns and anti-patterns
    • Deployment rituals
    • Error-handling philosophies

    AI has no access to this unless it is externalized—written down, encoded, demonstrated, or exposed through interfaces.

    Practical Methods for Codifying Mental Models

    1. Documentation

    • Use architecture decision records (ADRs), onboarding guides, and internal wikis to spell out “why things are the way they are.”
    • Include high-level reasoning, trade-offs, and design intentions—beyond what’s in the code.
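
    For illustration, a lightweight ADR in the widely used status/context/decision/consequences format captures exactly the reasoning an AI agent would otherwise never see. The record below is a hypothetical example, not taken from a real project.

        ADR-017: Use an outbox table for order events

        Status: Accepted
        Context: Order events must not be lost when the message broker is unavailable.
        Decision: Write events to an outbox table in the same transaction as the order,
                  then publish them asynchronously via a relay process.
        Consequences: Downstream consumers are eventually consistent; the relay must be monitored.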

    2. Examples and Inline Comments

    • Curate representative code snippets that illustrate how things should be done.
    • Use comments to explain non-obvious constraints (“Do not rename this—it’s used in legacy pipeline XYZ”).
    • GenAI models trained on codebases can pick up these cues as stylistic or functional norms.
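
    For example, a constraint that otherwise lives only in a senior developer's head can be surfaced as a comment exactly where the AI will encounter it. This is a hypothetical Python snippet; the pipeline and function names are illustrative.

        # NOTE: Do not rename this function or change its signature.
        # It is invoked by name from the legacy nightly-reconciliation pipeline,
        # which is configured outside this repository.
        def reconcile_ledger(batch_id: str) -> None:
            """Re-run ledger reconciliation for a single nightly batch."""
            ...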

    3. Prompt Engineering

    • Provide context-rich instructions in your prompts to simulate shared understanding.
      • Instead of: “Write a function to validate a user”
      • Try: “Write a function to validate a user in the context of a Django REST framework API, using our standard error response format.”
    • This adds "local knowledge" into the model’s decision boundary.
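
    One way to make such context-rich prompts repeatable is to wrap them in a small template, so the local knowledge is supplied every time instead of being retyped. A minimal sketch, assuming a conventions preamble like the one rendered earlier is available:

        def build_prompt(task: str, conventions_preamble: str) -> str:
            """Combine a task description with team context so the model sees both."""
            return (
                f"{conventions_preamble}\n\n"
                "Project: Django REST Framework API using our standard error response format.\n"
                f"Task: {task}\n"
                "Follow the conventions above and return only the code."
            )

        prompt = build_prompt(
            task="Write a function to validate a user.",
            conventions_preamble="Follow these team conventions:\n- errors: use the standard error response format\n- naming: snake_case, domain terms from the glossary",
        )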

    4. Fine-Tuning and Retrieval-Augmented Generation (RAG)

    • Fine-tune language models on internal codebases, docs, or examples to encode team-specific patterns.
    • Use RAG pipelines to fetch relevant code snippets, design docs, or wiki entries during generation, enriching the model’s context window.
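
    The retrieval idea can be sketched without committing to any particular vector database: score internal documents against the developer's request and prepend the best matches to the prompt. The keyword-overlap scoring below is a deliberately crude stand-in for a real embedding model, and the knowledge-base entries are illustrative.

        def relevance(query: str, document: str) -> float:
            """Crude score: fraction of query words that appear in the document.
            A production pipeline would use embeddings and cosine similarity instead."""
            query_words = set(query.lower().split())
            return len(query_words & set(document.lower().split())) / max(len(query_words), 1)

        def retrieve_context(query: str, documents: list[str], k: int = 2) -> list[str]:
            """Return the k internal documents (ADRs, wiki pages, snippets) most relevant to the query."""
            return sorted(documents, key=lambda d: relevance(query, d), reverse=True)[:k]

        knowledge_base = [
            "ADR-009: all public endpoints return errors as {'error': {'code': ..., 'message': ...}}.",
            "Wiki: background jobs must be idempotent and use the shared retry decorator.",
            "ADR-014: user validation lives in accounts/validators.py, never inline in views.",
        ]

        query = "Write a view that validates a user and returns our standard error format"
        context = "\n".join(retrieve_context(query, knowledge_base))
        prompt = f"Relevant team knowledge:\n{context}\n\nTask: {query}"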

    From Prompt Engineering to Knowledge Engineering

    Most teams think of AI productivity in terms of prompt engineering. But the deeper, longer-term value comes from knowledge engineering—the intentional structuring and surfacing of organizational knowledge so it can be understood and used by non-human agents.

    • Structuring documentation with AI in mind
    • Making architectural decisions findable and reusable
    • Embedding semantic meaning into code structure and metadata
    • Creating reusable prompt patterns aligned with internal conventions

    Codification as an Ongoing Process

    Codifying mental models is not a one-time activity. As systems evolve and teams learn, so too must the externalized knowledge:

    • Outdated docs lead to misalignment.
    • Code without context leads to hallucinations.
    • Static prompts fail in dynamic systems.

    This means knowledge hygiene—reviewing, updating, and curating shared artifacts—becomes a core practice in AI-enhanced software development.

    Bottom line: You can’t collaborate with an AI agent on the basis of intuition.
    You must feed it the context your team already takes for granted.

    In the next section, we’ll examine how alignment improves over time through feedback loops between humans and GenAI—and how those loops reinforce shared mental models with the machine.

    8. Feedback Loops Between Developers and AI

    Alignment between humans and AI doesn’t happen all at once — it emerges over time through interaction. Just as developers refine their shared understanding through conversation, review, and experience, AI coding agents can be gradually steered toward alignment through feedback loops embedded in daily development activities.

    Every time a developer edits, rejects, or re-prompts an AI suggestion, they are signaling what “good” looks like.

    This feedback — though often informal — is a powerful source of cognitive calibration. It nudges the AI toward patterns, preferences, and boundaries that reflect the team’s mental model of how things should work.

    How Feedback Works in Practice

    Edits

    When a developer modifies AI-generated code — renaming variables, restructuring logic, or adapting error handling — they’re embedding corrections into the output. In systems that support learning from usage, these edits can be used to fine-tune future suggestions.

    Prompting

    Developers often prompt iteratively, refining their instructions based on prior responses. This chain of prompts acts as a real-time alignment process — each new instruction gives the AI more insight into the developer’s intent and context.

    Example:

    					"Write a function that retrieves a user."
    					 → "Actually, make it async and follow our logging format."
    					 → "Also include input validation using our custom validate_input() utility."
    					

    Each step teaches the AI what matters in this environment.

    Rejections

    Ignoring or deleting a suggestion is also a signal — especially when done consistently. Over time, rejected patterns reveal what styles, structures, or assumptions don’t fit the team’s model. If this signal can be captured (through telemetry, explicit feedback buttons, or fine-tuning data), it helps prevent future misalignment.
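
    Capturing that signal does not require sophisticated tooling. Below is a minimal sketch, with illustrative field names, of logging what was suggested and what the developer did with it, so recurring rejections can later feed documentation updates or fine-tuning data.

        import json
        import time
        from dataclasses import dataclass, asdict

        @dataclass
        class SuggestionFeedback:
            """One accepted, edited, or rejected AI suggestion, kept for later analysis."""
            file: str
            prompt: str
            suggestion: str
            outcome: str                  # "accepted" | "edited" | "rejected"
            final_code: str | None = None
            timestamp: float = 0.0

        def log_feedback(record: SuggestionFeedback, path: str = "ai_feedback.jsonl") -> None:
            """Append the record as one JSON line so rejected patterns can be mined later."""
            record.timestamp = record.timestamp or time.time()
            with open(path, "a") as f:
                f.write(json.dumps(asdict(record)) + "\n")

        log_feedback(SuggestionFeedback(
            file="orders/service.py",
            prompt="Add retry logic to publish_event",
            suggestion="while True: ...",
            outcome="rejected",           # repeated rejections here -> document the team's retry decorator
        ))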

    Emergent Alignment Through Interaction

    These small interactions accumulate into emergent alignment. The AI begins to conform more closely to team norms, preferred idioms, and architectural expectations — not because it was told explicitly, but because it was nudged iteratively.

    This mirrors how junior developers learn: not from one document, but from working alongside experienced teammates and receiving continual guidance.

    With AI agents, the same principle applies:

    • The more contextual feedback they receive,
    • The more aligned their suggestions become,
    • And the more trust the team builds in their output.

    Designing for Feedback-Rich Interaction

    To harness this effect, teams can:

    • Choose tools that support learning from edits or team-level customization
    • Establish norms around iterative prompting and prompt re-use
    • Use feedback affordances (e.g., thumbs up/down, structured correction prompts)
    • Monitor where AI fails repeatedly and feed those gaps into documentation or fine-tuning pipelines

    The goal is to treat feedback not as error correction, but as model training — each interaction an opportunity to align the AI more closely with team mental models.

    Alignment is not a configuration setting — it’s a relationship. And like any productive relationship, it deepens with quality feedback over time.

    In the next section, we’ll consider the broader strategic implications: what this shift toward human–AI model alignment means for engineering practices, onboarding, and long-term team performance.

    9. Strategic Implications for Engineering Teams

    The rise of GenAI coding agents is not just a technological shift — it’s an inflection point for how engineering teams think about collaboration, knowledge, and productivity. As AI agents become embedded in daily workflows, the challenge is no longer simply how to use AI, but how to make the most of it.

    This means that engineering teams must now build shared mental models not only with each other but with their AI tools.

    AI Adoption Becomes a Design and Documentation Priority

    Historically, documentation and architectural rationale have been seen as secondary to "real" engineering work. But in a world where AI agents generate, review, and even propose code, these knowledge artifacts become critical interfaces — the primary means by which AI agents access the team’s context.

    If your design principles, naming conventions, or testing philosophy aren’t encoded, your AI won’t know them — and won’t respect them.

    This shifts the priority:

    • From “How can we write code faster?”
    • To “How can we make our knowledge accessible to both humans and machines?”

    Impacts Across the Engineering Lifecycle

    Aligning mental models with AI tools has far-reaching implications across the software development process:

    Onboarding

    New developers often struggle not with syntax, but with team-specific norms and constraints. A well-aligned AI can act as a guide — reinforcing those norms and accelerating ramp-up. But only if the relevant knowledge has been codified.

    Code Quality

    AI-generated code can either improve or erode quality, depending on how well it mirrors team conventions. Misaligned AI introduces subtle inconsistencies. Aligned AI becomes a force multiplier for consistency and maintainability.

    Velocity

    When developers trust the AI to generate code that "fits," they move faster with less hesitation. When they don’t, they slow down to review and correct. Alignment reduces this friction and creates true throughput gains.

    Cognitive Load

    The real value of AI is not in replacing human cognition but reducing unnecessary cognitive effort. Aligned AI handles the boilerplate, the lookups, and the repetitive decisions—freeing developers to focus on creative and strategic problem-solving.

    Knowledge Infrastructure as a Source of Competitive Advantage

    Ultimately, the return on investment (ROI) from GenAI tools does not depend primarily on model quality — it depends on organizational knowledge infrastructure.

    Organizations that can:

    • Encode what they know,
    • Maintain that knowledge over time,
    • And make it machine-consumable,

    ...will see greater performance gains, faster adoption, and fewer integration pitfalls. Those that treat AI as a drop-in solution will struggle with inconsistency, trust erosion, and underwhelming results.

    AI is not just a tool—it’s a collaborator.
    And like any collaborator, its effectiveness is only as good as the clarity of the knowledge it receives.

    In the final section, we’ll summarize why cognitive alignment, not raw automation, is the true unlock for next-generation software teams working with GenAI.

    10. Conclusion: From Tools to Teammates

    The introduction of GenAI coding agents marks a turning point in software development. These tools are no longer just productivity enhancers — they are active participants in the development process. But unlocking their full potential requires a fundamental shift in how we relate to them.

    The core insight of this article is simple but powerful:

    Mental model alignment is the foundation for productive human–AI collaboration.

    Just as high-performing teams depend on shared understanding to coordinate efficiently, human–AI collaboration requires that developers and AI agents operate with compatible assumptions, constraints, and expectations. Without alignment, even the most sophisticated AI introduces friction, inconsistency, and cognitive overhead. With alignment, AI becomes an extension of the team’s intent—an intelligent amplifier of human effort.

    The path to alignment is not technical alone. It is cognitive, cultural, and organizational. It requires that teams:

    • Externalize the tacit knowledge that governs their decisions,
    • Codify it in forms AI can interpret,
    • And engage with their tools through iterative, feedback-rich interaction.

    In this new landscape, teams that treat GenAI as a teammate — one that needs context, feedback, and shared understanding — will outperform those that treat it as a generic assistant. They will move faster, maintain quality, and reduce the mental tax of navigating misaligned suggestions.

    In a knowledge-centric world, alignment is the new leverage.

    Engineering leaders who recognize this will invest not just in AI tooling, but in knowledge modeling, communication practices, and feedback loops. Because ultimately, the value of AI depends not on how much it can do, but on how well it understands what we want it to do and why.

    Reference

    1. Robert W. Andrews, J. Mason Lilly, Divya Srivastava & Karen M. Feigh (2023) The role of shared mental models in human-AI teams: a theoretical review, Theoretical Issues in Ergonomics Science, 24:2, 129-175, DOI: 10.1080/1463922X.2022.2061080

    Dimitar Bakardzhiev
