EU AI Act, ACX, and the disclosure rules every TTS user should know in 2026

Synthetic voice is now regulated. The EU AI Act puts hard disclosure obligations on AI-generated audio starting August 2026, with penalties up to 15 million euros. ACX still bans AI narration for general distribution. Here is what changes, when, and how to comply without panic.

For most of the synthetic-voice era, the legal framing was vague: "you should disclose AI-generated content as a best practice." That framing is over. The European Union's AI Act has hard disclosure requirements for synthetic audio that come into force on 2 August 2026, with penalties up to fifteen million euros or three percent of global turnover. ACX, the production platform for Audible, has already published a clear policy: AI-narrated audiobooks are not approved for general distribution. Several other distribution platforms have moved to disclosure-required regimes.

If you publish, distribute, or commercially use TTS-generated audio in 2026, you are in a regulated space. This is the practical guide for what changes, when, and how to comply without overreacting.

The headline regulatory facts

Three regulatory items dominate the landscape in 2026.

EU AI Act, Article 50. Article 50 of the EU AI Act sets transparency obligations for AI systems and their deployers. For audio specifically, two obligations apply. Providers of AI systems that generate or manipulate synthetic audio must ensure outputs are marked in a machine-readable format and detectable as artificially generated or manipulated. Deployers (the people who use the AI to produce content) must disclose to recipients that the content has been artificially generated or manipulated, with the obligation strongest for content that constitutes a deepfake of a real person. The transparency obligations become applicable on 2 August 2026. The first draft of the European Commission's Code of Practice on AI-generated content transparency was published in December 2025, with a final version expected in mid-2026. Penalties for breach range up to fifteen million euros or three percent of global annual turnover, whichever is higher.

ACX (Audible). ACX requires human narration for general audiobook distribution. AI-narrated content disclosed as such is not approved for distribution. A small narrator-replica beta is currently inviting U.S.-based professional narrators to opt in to having an AI replica of their own voice produced and used for narration jobs they would not otherwise have time to record live. This is the only AI-narration path to ACX distribution as of early 2026.

Other distributors. Findaway Voices and several smaller distributors accept AI-narrated content if it is disclosed in product metadata. Apple Books and Google Play Books have moved toward disclosure-required policies. Library distribution networks vary. The patchwork is still moving.

What "disclose" actually means under the EU AI Act

The Article 50 obligation is not "put it somewhere on your website." The Code of Practice is being drafted to specify what disclosure looks like in practice. The draft published in December 2025 indicates the following directions, subject to revision before the final code lands.

Visible disclosure at first exposure. The user encountering AI-generated content must see (or hear, in the case of audio) a clear indication that the content is AI-generated. The disclosure cannot be buried in fine print, in metadata only, or in terms of service.

Machine-readable marking. The audio file itself, or a structured metadata layer attached to it, must contain a machine-readable signal that the content is AI-generated. This is the substance of the Article 50 provider obligation: the AI system must produce outputs that downstream tools can detect as synthetic.
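No single marking format has been finalized; content-credentials and watermarking schemes are still converging. As a purely illustrative sketch (the sidecar convention and field names here are assumptions, not any mandated standard), a machine-readable marking could be as simple as a JSON sidecar that travels with the audio file:

```python
import json
from pathlib import Path

def write_ai_marking(audio_path: str, model: str, generated_at: str) -> Path:
    """Write a machine-readable 'AI-generated' marker as a JSON sidecar.

    Illustrative only: these field names are hypothetical, not a mandated
    EU AI Act format. Real deployments should follow whatever the final
    Code of Practice and content-credentials standards specify.
    """
    sidecar = Path(audio_path).with_suffix(".ai.json")
    sidecar.write_text(json.dumps({
        "ai_generated": True,          # the core Article 50 signal
        "generator_model": model,      # which TTS system produced the audio
        "generated_at": generated_at,  # ISO 8601 timestamp
    }, indent=2))
    return sidecar

def is_marked_ai_generated(audio_path: str) -> bool:
    """Check the sidecar for the AI-generated flag; no sidecar means no claim."""
    sidecar = Path(audio_path).with_suffix(".ai.json")
    if not sidecar.exists():
        return False
    return json.loads(sidecar.read_text()).get("ai_generated", False)
```

The point of the sketch is the shape of the obligation, not the format: the marker is structured, attached to the asset, and readable by downstream tools without a human opening the file.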

Common labeling icon. The Code of Practice draft proposes a common icon to signal AI-generated content across platforms, used consistently and at first exposure.

Carve-out for evidently artistic, creative, satirical, fictional, or analogous works. For these, the obligation is limited to disclosure of the existence of generated or manipulated content "in a manner that does not hamper display or enjoyment of the work." This exception is narrower than it might sound; it does not exempt commercial advertising, editorial content, or anything that purports to represent a real person.

The enforcement model is administrative. National competent authorities in EU member states can investigate and fine non-compliant deployers and providers. The fine range is significant; Article 50 breaches sit in the EUR 15 million / 3 percent of global turnover band.

What this means in practice for someone using TTS to produce a podcast or a video voice-over: yes, you are likely a "deployer" if your audience includes EU users, and yes, you are required to disclose. The disclosure does not have to be obtrusive; it does have to be there, and the practical bar is "a reasonable user would understand the content includes AI-generated voice."

[Figure: timeline of the EU AI Act phases relevant to synthetic audio: Article 50 publication, December 2025 first draft Code of Practice, planned mid-2026 finalization, and August 2026 applicability of the transparency obligations]

What ACX's policy means for indie authors

ACX is the production gateway to Audible, which is the dominant audiobook retailer in English-speaking markets. ACX's stated policy: human narration is required, and AI-narrated audiobooks are not approved for general distribution. A book that is disclosed as AI-narrated is rejected; a book whose AI narration goes undisclosed is grounds for removal if the platform discovers it.

This is not regulatory law (it is platform policy), but it has a similar practical effect for indie authors: if you want shelf space at the audiobook retailer where most listeners shop, you need a human narrator, or you need to be in the narrator-replica beta.

The narrator-replica beta is the platform's compromise. Narrators on the platform can opt in to having an AI replica of their voice produced; that replica is treated as the narrator's voice and counts as human narration for ACX distribution. The narrator gets paid, the audiobook ships, the platform avoids the labor and quality concerns that have driven its base policy. The beta is small and U.S.-based for now.

Other distribution platforms are more permissive. If you are willing to forgo the largest channel and publish through Findaway Voices, Apple Books, Google Play Books, and library networks, AI-narrated audiobooks are increasingly viable. The trade-off is real: you lose access to the platform with the most listeners, in exchange for a faster and cheaper production path.

A practical compliance framework

Here is what a TTS user should do, by use case, to stay on the right side of the rules.

Personal-use synthesis. No disclosure obligation. Audio you generate for your own listening, your own draft revision, your own internal development is not "deployed" in the regulatory sense and falls outside the disclosure obligations. Use freely.

Internal-only training, internal-only documentation. Probably no public-facing disclosure required, but as a matter of practice, label internal AI-generated audio as such in your asset library so that downstream uses do not lose track of what the content is.

Public commercial content (podcast, marketing video, course, audiobook). Disclose. Specifically:

  1. Add an audible or visible indication that the audio is AI-generated, at first exposure to the listener. For a podcast, this is a single line in the show notes ("Narration produced with AI text-to-speech") plus an audible disclosure in the first few minutes if your audience expects to hear human hosts. For a video, the "voice-over by AI" credit in the video's description and ideally on-screen during the first ten seconds. For a course, a visible note at module entry. For an audiobook on a platform that allows AI narration, the platform-required metadata flag plus a disclosure in the product description.
  2. Keep records. The deployer is responsible for being able to demonstrate compliance. A simple log of "this episode's voice-over was generated on date X using settings Y" is enough for most purposes.
  3. Do not impersonate real people. Generating a voice that closely resembles a specific real person and using it without consent is a separate problem from the general disclosure obligation. The deepfake provisions of Article 50 are stronger on impersonation; consent matters here.
  4. Match the platform's specific requirements. ACX rejects AI narration; do not submit. Findaway requires disclosure in a specific metadata field; populate it. Each platform has its own form of compliance. The Article 50 obligation is the floor, not the only requirement.
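The record-keeping step above is easy to automate. A minimal sketch, assuming nothing beyond the standard library (the field names and the JSON Lines log format are illustrative choices, not a prescribed compliance format):

```python
import json
from datetime import datetime, timezone

def log_production(log_file: str, asset: str, tts_system: str, settings: dict) -> dict:
    """Append one production record to an append-only JSON Lines compliance log.

    A plain log like this is usually enough to demonstrate deployer-side
    compliance: what was generated, when, with which system, and where
    the listener-facing disclosure appears.
    """
    record = {
        "asset": asset,                  # e.g. "episode-42.mp3"
        "tts_system": tts_system,        # which provider/model was used
        "settings": settings,            # voice, language, etc.
        "generated_at": datetime.now(timezone.utc).isoformat(),
        "disclosure": "show notes + audible intro",  # where the disclosure lives
    }
    with open(log_file, "a") as f:
        f.write(json.dumps(record) + "\n")
    return record
```

One line per generated asset, appended at production time, is the kind of lightweight record the "keep records" step calls for; anything a reviewer can reconstruct the production history from will do.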

Audiobooks specifically. Decide your distribution strategy first. If Audible distribution matters, you need a human narrator or the replica beta. If Findaway and direct-to-listener channels are sufficient for your project, AI narration is workable with disclosure. Do not try to slip AI narration onto Audible; the policy is clear, the rejection is fast, and any past acceptance is being walked back.

Voice that resembles a real person. Special caution. Even with technical compliance on the marking and disclosure side, generating a voice that resembles a specific identifiable person without their consent runs into a separate body of law (publicity rights in the U.S., personality rights in many EU jurisdictions, the Article 50 deepfake-specific obligations) that is stricter than the general disclosure regime. Do not do this for commercial content without a clear consent path.

What providers are required to do

The Article 50 provider obligation falls on the AI-system providers, not the end users. Providers of TTS systems that produce content reaching EU users are required to mark their outputs in a machine-readable, detection-friendly way starting in August 2026. The technical mechanism (watermarking, signed metadata, a standardized header) has not been universally settled, but the field is converging on content-credentials approaches and audio-domain watermarking schemes.

For end users, the provider obligation is mostly invisible: the audio file you receive will carry a marking layer you do not see in normal playback but that detection tools can read. Do not strip these markings (re-encoding through a lossy codec sometimes preserves them and sometimes does not; this is one of the technical questions the Code of Practice is working through). For most content uses, leave the audio as the provider gives it, and let the marking carry through to the listener-facing platform, which may surface or honor it.

[Figure: two-row diagram connecting provider obligations (machine-readable marking, detection-friendly output, content-credentials or watermark metadata) to deployer obligations (disclosure at first exposure, production records, platform-specific requirements, special caution for impersonation), showing the chain end-to-end]

What this does not change

It is worth stating what is not affected by these rules.

You can still use TTS for revising your manuscript, for prototyping a voice-over, for testing your script, for personal listening, for assistive playback. None of that requires disclosure; none of it triggers the regulatory frame.

You can still publish AI-narrated content commercially. The rules require disclosure, not abstention. A clearly disclosed AI-narrated podcast is fine; the audience that minds will self-select away, and the audience that does not mind is increasingly large.

You can still distribute on platforms that accept AI narration. Findaway, several direct-sale audiobook networks, podcast platforms generally, video platforms generally, and most marketing distribution paths accept AI narration with disclosure.

You can still work with human narrators. The market for human voice work is still healthy, particularly for narrative, soft-skills, brand-flagship, and audiobook content where the voice is the product. The regulatory shift does not make synthetic voice mandatory; it makes synthetic voice traceable.

The honest summary

The August 2026 deadline is real but not catastrophic. The disclosure obligations are achievable for any team that takes them seriously. The penalties are large but are aimed at non-compliant scale operators, not at podcasters who put a disclosure line in their show notes. The likeliest enforcement path in the first year is the obvious cases: undisclosed deepfake political audio, undisclosed brand impersonation, undisclosed mass-produced commercial content. A podcaster who discloses AI narration in the show notes, an L&D team that flags AI narration on course modules, an indie author who publishes AI-narrated audiobooks through the platforms that allow it: these are not the targets.

What changes is that "disclosure as best practice" becomes "disclosure as legal obligation." That is a small operational shift for anyone already operating in good faith, and a large operational shift for anyone who was banking on opacity. Plan for the former; the latter strategy was never durable.

For most users of synthetic voice in 2026, the action item is simple: add the disclosure, keep the records, respect the platform-specific rules, and ship the content. The regulatory backdrop is taking shape; it does not have to be a barrier.
