Your client shares something vulnerable in a session. An AI processes it. Who is responsible for what happens next? The answer is less clear than you think.
Your client finishes a session and asks ChatGPT to help them process what you discussed. They paste in quotes from your conversation. They describe the leadership challenge they are working on. They ask the AI for advice. You have no idea this is happening. And right now, you have no framework for thinking about whether it should.
AI is already in your practice
The conversation about AI in coaching tends to focus on whether coaches should use AI. That question is already outdated. Your clients are using it. Early research suggests a growing number of coaching clients are already using generative AI tools between sessions to reflect on their development. They are not waiting for permission.
This creates a governance gap. When a client uses ChatGPT to process a coaching session, who is responsible for the quality of that reflection? Who is responsible if the AI gives advice that contradicts your coaching approach? Who is responsible if the client shares sensitive information with a tool that uses it for model training?
The answer, right now, is nobody. And that is the problem.
What governance means in a coaching context
In regulated industries - finance, healthcare, insurance - AI governance is a formal discipline. It involves policies, oversight structures, risk assessments, and accountability frameworks. Coaching is not a regulated industry in the same way. But it is a profession built on ethical commitments, and those commitments do not stop at the edge of the session.
AI governance for coaching means answering three questions:
- Who decides what the AI does? If an AI tool sends your client a reflection prompt, who wrote it? Who approved it? Who is accountable if it takes the client in a direction that is unhelpful or harmful?
- Who sees the data? When your client interacts with an AI between sessions, where does that conversation go? Is it stored? Is it used for training? Can anyone else access it?
- Who maintains the relationship? Coaching works because of the relationship between coach and client. If an AI is part of that relationship, even informally, how do you ensure it supports the work rather than undermining it?
These are not theoretical questions. They are practical ones that every coach using AI - or whose clients are using AI - needs to think about.
The supervision gap
The ICF Code of Ethics requires coaches to "fulfil their ethical and legal obligations to their clients and other parties independently of the technology systems utilised, including AI." The EMCC Global Code of Ethics includes similar provisions around confidentiality and "demonstrating respect for the diverse ethically informed approaches to coaching, including the use of data technologies and AI".
The gap is not that these ethical frameworks are wrong. It is that they do not yet address how to manage the specific risks that AI introduces. When your client pastes session notes into ChatGPT, that is a confidentiality issue - but the ethics codes do not tell you how to handle it. When an AI tool generates coaching-style advice without your knowledge, that is a supervision issue - but existing frameworks do not cover how to work with unsupervised AI interactions.
This is where governance becomes practical. You need a framework that addresses what happens between sessions, not just during them.
The difference between AI tools
Not all AI in coaching is the same, and the governance implications depend on the architecture.
General-purpose AI tools - ChatGPT, Claude, Gemini - have no knowledge of your client's coaching history. They have no understanding of your methodology. They operate without supervision. When your client uses one of these tools to reflect on a session, the AI is improvising. It may give thoughtful responses. It may give terrible ones. You will never know.
AI coaching tools - the kind built into enterprise coaching platforms - are trained on aggregate data from many coaches and clients. They can be useful for pattern recognition at scale, but they dilute individual coaching approaches into an average. The governance question here is: whose methodology is the AI following? If it is trained on thousands of different coaching conversations, the answer is "everyone's and no one's."
Coach-supervised personalised AI is a different model entirely. The AI is fine-tuned on your specific sessions, so it generates content based on your coaching approach and the conversations you have had with your clients. And critically, nothing reaches the client without your review and approval. The governance is built into the architecture: you remain the decision-maker at every step.
The distinction matters because governance is not just about policy. It is about structure. A tool that requires coach supervision by design is fundamentally different from one that operates autonomously.
See how coach-supervised AI keeps you in control at every step.
What the EU AI Act means for coaching
The EU AI Act, which entered into force in August 2024 with provisions applying in stages through 2027, is the world's first comprehensive AI regulation. It classifies AI systems by risk level, ranging from minimal to high and unacceptable risk, and regulates general-purpose AI models. High-risk systems - those used in employment, education, and essential services - face strict requirements around transparency, human oversight, and data governance. Coaching AI is not classified as high-risk per se, but it can be if it is used in work-related decisions such as promotion or termination, or to assess personal traits or behaviour.
More immediately, the EU AI Act establishes transparency obligations - primarily for providers of AI systems, but with implications for anyone deploying them. If your client is interacting with an AI, they have a right to know that, and AI-generated content must be marked as such. If the AI is influencing decisions or recommendations, the deployer must ensure human oversight. If the AI processes personal data (which most do), GDPR applies in full.
The EU AI Act builds on the European Commission's Ethics Guidelines for Trustworthy AI, which call for AI systems to be designed and deployed with fundamental rights, data security, privacy, ethics and inclusiveness at the forefront. Even if an AI system does not qualify as high-risk under the EU AI Act, the common assumption is that in practice the requirements applied to high-risk systems - rigorous risk assessment, data governance, human oversight, security, among others - will extend to all AI systems.
For coaches, the practical takeaway is this: the principles embedded in the regulatory environment require exactly the kind of oversight that good coaching practice already demands. Transparency, ethics, non-bias and access to a human, for example, are among the core principles in the ICF AI Coaching Framework.
Building AI governance into your practice
You do not need a legal team to implement AI governance. You need clarity on four things:
- Transparency. Tell your clients that you use AI in your practice, what AI tools you use, what they do, and how they handle data. This should be part of your coaching agreement, not a footnote.
- Boundaries. Define what the AI does and does not do. If you use AI for session notes, say so. If you use it to generate reflection prompts, explain how those prompts are created and that you review them before they reach the client.
- Human oversight. There are two common approaches. In the first, no AI-generated content reaches your client without your review. In the second, you give your client access to your private session intelligence, so they can continue to reference insights through a private coaching chat built on your sessions; you stay in the loop by receiving reports on the topics explored, with the option to engage personally when you see fit. This model can extend coaching impact beyond scheduled sessions. Either way, be clear about human oversight upfront to manage boundaries and expectations - it preserves the coaching relationship and protects both you and your client.
- Data policy. Know where your clients' data goes. If you use AI tools that store or process session data, ensure they comply with the data privacy laws where you and your clients are located, and do not use client data for AI model training.
The client conversation
One of the most valuable things you can do right now is have a direct conversation with your clients about AI. Not a policy document. A conversation.
Ask them whether they are using AI tools to reflect on coaching sessions. Many will say yes. That is not a problem - it is an opportunity. You can acknowledge the value of between-session reflection, explain the risks of using unsupervised generic tools, and offer a better alternative.
That alternative might be coach-supervised and personalised AI that carries your voice and methodology. It might be structured reflection exercises you design yourself. The point is not to control what your client does between sessions. It is to ensure that what happens between sessions supports the coaching, rather than running parallel to it without your knowledge.
The question is not whether AI will be part of coaching. It is whether coaches will govern it, or whether it will govern itself.
Try it free with your first client.
Full access, no credit card. Or join the free webinar first - 2 x 60 minutes, no pitch.
About the author
Laura Foltina is a certified AI Governance Professional (AIGP) and certified Information Privacy Professional (CIPP/E). She guides companies on AI governance frameworks, EU AI Act compliance and AI vendor governance and privacy. She serves as a Founding AI Governance Advisor to CoachNova.