What AI Can Give Your Team — and What Only Coaching Provides
AI tools can already facilitate retrospectives, surface communication patterns, and generate coaching-style language. They do all of this at the level of knowing. What they cannot produce is KNOWING: the deep familiarity that arrives when a team's understanding of its own pattern snaps together and comes to life. This is not a deficiency of the AI's language model — it is structural.
The question that came in the corridor
An Agile coach was leaving a retrospective when a director stopped her. The director had been reading about AI facilitation tools. "I've been looking at what these things can do," he said. "They can run retrospectives, analyse team communication patterns, give feedback on how meetings are going. They seem pretty good. What exactly is it that you're providing that they're not?"
It was a serious question, asked genuinely. The coach did not have a crisp answer. She knew the answer was not "warmth" or "human connection" — those are real but they are not a professional justification. She knew the answer was not "experience" — AI tools have been trained on more facilitation patterns than any single human practitioner will ever encounter. She knew the answer had something to do with the quality of what coaching produces versus what AI tools produce. She could not yet say it precisely.
Sedgwick's knowing/KNOWING distinction provides the framework for the precise answer. It is not an argument against AI tools. It is an account of what they can and cannot produce — and why the gap is structural rather than a capability limitation that further development will close.
What AI tools can do, and do well
Current AI facilitation and coaching tools are genuinely capable. They can structure retrospectives with consistency and patience. They can track sentiment across conversation data and surface patterns that a human facilitator might miss. They can generate contextually relevant prompts — the right question at a predictable moment, the appropriate reflective mirror when a team is stuck in a loop.
They can do all of this at the level of knowing. In Sedgwick's terms, knowing is bare command of facts — the ability to recognise, articulate, and work with information. An AI system trained on thousands of retrospectives has a vast command of knowing about team patterns, retrospective dynamics, and facilitation approaches. It can deploy that knowing with a consistency and breadth that no individual practitioner can match.
The limitation is not in the AI's performance at the level of knowing. It is that knowing is not the level at which the most significant coaching outcomes are achieved.
Why the gap is structural
KNOWING, in Sedgwick's account drawing on Brandom, is deep familiarity — the moment when a way of seeing things snaps together and comes to life, when a person can commit to its full meaning and resonance, not just its content. KNOWING carries what Brandom calls authority: the entitlement to act from a position, to navigate genuine uncertainty from within a framework that is not just intellectually held but inhabited. It is what distinguishes a practitioner who genuinely understands a domain from one who can reproduce its vocabulary.
Sedgwick's account of language shows that understanding is always tracked against a background of lived experience — of commitments and entitlements accumulated through inhabiting a form of life. A human coach working with a team brings their own history as a participant in organisational life: their embodied knowledge of what it feels like to be under delivery pressure, to navigate a difficult sponsor relationship, to be the person in the room whose voice doesn't carry. That embodied background is what gives the coach the authority to recognise, from within, what the team is experiencing.
An AI system has no such background. It can score-keep commitments and entitlements in a conversation — identifying when something has been said and responded to, tracking the logic of an exchange, noting patterns in what is spoken and what is avoided. But score-keeping is not the same as inhabiting. The system has no life to inhabit. It has no absences shaped by context, no frame of reference with gaps formed by experience. It processes the signals of what is present in the data. It cannot know — in the KNOWING sense — what those signals mean in the context of a real organisational life.
What this means for producing KNOWING in a team
KNOWING — the genuine shift in a team's understanding of its own pattern, the moment when a way of seeing snaps together and produces real navigational change — requires encounter with someone who has navigated the same territory. Not information about the territory, but encounter with someone who has been there.
When a human coach names something in a team that the team has not been able to name for itself, the recognition that follows — "yes, that's exactly it" — is not just cognitive. It is the recognition of a fellow inhabitant. The coach is not delivering data; they are meeting the team at the level of a shared form of life. That meeting creates the conditions in which KNOWING can form.
An AI system can produce a structurally similar sentence. It can identify the pattern from data and generate a phrase that accurately describes it. But the phrase lands differently because it comes from a system that has not inhabited what it describes. The recognition — if it comes — is recognition of accuracy rather than of fellow inhabiting. That is a different experience, and it produces knowing rather than KNOWING.
This is not a limitation of current AI capability. It is a structural account of what understanding requires. KNOWING is not a higher resolution of information processing. It is a different kind of thing — one that requires a subject who has lived in a form of life, accumulated the commitments and entitlements that experience provides, and can meet another subject at that level. No further development of current AI architectures closes this gap, because the gap is not about processing capability. It is about what kind of entity can produce KNOWING in another.
How to articulate the value without defensiveness
The worst response to the director's question is a defensive one. "AI cannot replace human connection" sounds like a practitioner protecting their livelihood rather than offering a genuine account of value. "AI lacks empathy" is vague and not quite accurate — AI systems can model empathic responses effectively at the knowing level.
The precise answer, drawn from Sedgwick's framework, is this: AI tools are excellent at producing knowing — shared understanding, surface articulation, and systematic pattern recognition. They do this consistently, at scale, without the variability that human practitioners bring. For teams that need knowing — better articulation of their patterns, cleaner facilitation of their retrospectives, more systematic surfacing of communication data — AI tools are a genuinely valuable option.
What human coaches provide, at their best, is the conditions for KNOWING — the genuine pattern shift that knowing alone does not produce. This requires a subject who has inhabited similar territory, who can meet the team at the level of lived experience rather than at the level of processed data, and whose recognition of the team's pattern carries the authority of having been in the same landscape.
Not every coaching engagement requires KNOWING. If a team needs better facilitation, cleaner retrospectives, or more consistent pattern identification, AI tools deliver this more cheaply and at greater scale than human practitioners. The value of human coaching is most legible precisely when KNOWING is the outcome that matters — when the team has had the insight repeatedly and it is not producing change, when something deeper is stuck, when the moment that will shift the pattern requires recognition from a fellow inhabitant rather than data from a sophisticated pattern-matcher.
The coach's answer in the corridor
The coach might have answered the director like this: "AI tools can do very well what I do when I run a retrospective or surface a communication pattern. That's a real part of this work. What they cannot do is produce the moment when something that a team has understood repeatedly suddenly shifts — when the insight becomes something they can actually navigate from. That shift requires encounter with someone who has been in the same territory. I've been in this territory. The AI hasn't."
This is not a claim about human superiority. It is a precise claim about what kind of work produces what kind of outcome, and which kind of entity is capable of producing each. It is the kind of answer that is worth having when the question is asked seriously — and it is being asked seriously, with increasing frequency, across the entire coaching and facilitation field.
Practitioners who can make this distinction clearly — who understand what they uniquely provide and can say so without defensiveness or mystification — are better placed for the next decade of practice than those who compete with AI at the level of knowing, which is where AI will always win on consistency and scale.