AI Meeting Disclosure Guide: What to Tell People Before the Worker Joins
If you deploy an AI worker into a meeting, who needs to know? This guide covers what to say, when to say it, and when disclosure is legally required.

If you deploy an AI worker into a meeting, the first question most people ask is: do I have to tell anyone?
The short answer is yes, and in many cases you are legally required to do so.
But disclosure is not just a legal obligation. It is also the right approach, and for most teams it helps rather than hurts.
Here is a practical guide to what you need to say, who you need to tell, and when.
Why disclosure matters
AI workers are visible participants. They join meetings with a name and a role. So in most cases, the people in the meeting already know something is there. The question is whether they understand what it is and why it is in the room.
Clear disclosure does three things. It sets accurate expectations. It prevents the awkward moment when someone notices the participant list and asks who that is. And it builds the kind of trust that makes AI-assisted meetings more productive, not more tense.
Trying to hide an AI participant is also a bad strategy. It creates exactly the suspicion that disclosure prevents. And in many jurisdictions, recording a conversation without consent is not just uncomfortable; it is illegal.
Who you need to tell
External participants — clients, candidates, prospects
Tell them before the meeting starts. A single sentence in the invite or at the opening of the call is enough. You do not need to write a legal paragraph. Confirm that an AI worker will be present, explain its role, and make space for questions.
Internal participants — your own team
If the worker is attending an internal sync, a brief note to the team before the first session is sufficient. After that, the worker's presence in the participant list is self-explanatory.
Interview candidates
This is one of the most important disclosure moments. Candidates who discover mid-interview that an AI worker was present — and were not told — will have a negative experience. In some regions, you may also have a legal problem. Tell candidates upfront, in writing, as part of the interview confirmation.
Clients on regulated calls
If you operate in financial services, legal, healthcare, or any regulated industry, check your obligations before deploying an AI worker on client calls. Requirements vary by jurisdiction. When in doubt, get explicit written consent before the meeting.
What to actually say
For an external call:
"I want to let you know that I use an AI worker to help manage follow-up and structured notes from this call. It will join as [worker name] in the participant list. Is that okay with you?"
For an interview:
"This screening includes an AI assistant called Rx · Recruiting. It asks structured questions and captures your responses so we can evaluate candidates consistently. Do you have any questions before we start?"
For an internal meeting:
"Starting this week, our recurring [meeting name] will include an AI worker. It handles notes and structured output so we can focus on the discussion. Let me know if you have questions about how it works."
These are short statements. They are not legal documents. They treat the other person like an adult, which is exactly the right tone.
When disclosure is legally required
Recording consent laws vary. Some US states require only one party's consent to record a conversation, while others, including California, Florida, and Illinois, require all parties to consent. In the EU, the GDPR applies to any personal data captured in a meeting, including voice recordings.
If your AI worker speaks as well as listens, which is one of its core capabilities, it is a participant in the conversation. Many jurisdictions treat an active participant differently from a silent transcription tool, but the underlying recording still triggers consent requirements in many places.
This is not legal advice. The practical rule is: when in doubt, disclose. The downside of disclosing too much is minimal. The downside of under-disclosing can be significant.
How DelegateWorker handles this by design
DelegateWorker workers are transparent by default. They join meetings as named AI participants. They do not join covertly, impersonate humans, or hide their presence.
That design choice is intentional. Visible participation builds more trust than invisible recording. Participants know the worker is there. They know its role. That clarity makes the meeting more productive — not less.
For situations where you need to manage disclosure carefully — regulated industries, candidate interviews, sensitive client calls — prepare your disclosure statement before the first session. One paragraph, sent in advance, handles most scenarios cleanly.
The bottom line
Disclose before the meeting. Keep it short. Focus on what the worker does, not on the technology behind it. And in regulated environments, check your legal obligations before deploying.
Transparency is not a liability in AI-assisted meetings. It is a design feature. The teams that approach this openly tend to get better outcomes than the ones that try to minimize the worker's presence.
→ Learn how DelegateWorker handles transparency: use cases for Zoom meetings
Deploy your first AI worker.
DelegateWorker turns AI models into named participants for Zoom meetings, live calls, and operational roles. Join the waitlist and start testing in under 10 minutes.