Trust

Protecting Your Voice — Our Identity Verification

Published 22 April 2026 · 8 min read

Quick answer. Four verification layers: identity verification of the creator (passport + liveness), a voiceprint match (a live recording compared against the uploaded material), a voice lookup against a registry of known voice actors and public figures, and a signed consent document. Submissions built on stolen or deepfaked voices are refused and, where possible, reported to the person impersonated.

The threat model

Three classes of bad actor:

  • Uploader A uploads a voice that belongs to someone else (sibling, colleague, public figure) without consent.
  • Uploader B uploads a deepfake generated from off-platform material.
  • Uploader C is a legitimate creator, but a downstream user tries to abuse the persona (impersonation, explicit content).

Each class has a different defence layer.

Layer 1 — creator identity verification

All publishing creators complete identity verification (government-issued ID + liveness selfie). This ties a legal identity to the creator account. Anonymous publishing is not supported.

Verification is a one-time step, takes under 10 minutes, and is handled by a partner KYC provider under standard data-processing terms.

Layer 2 — voiceprint match

During publishing, the creator records a live phrase provided at submission time. The system compares the live recording’s acoustic fingerprint with the uploaded voice-model material. A mismatch flags the submission for review.
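The comparison described above can be sketched as a similarity check between two acoustic-fingerprint vectors. This is an illustrative sketch only: the embedding representation, the `cosine_similarity` measure, and the `MATCH_THRESHOLD` value are assumptions, not the platform's actual pipeline.

```python
import math

MATCH_THRESHOLD = 0.85  # hypothetical tuning value, not the platform's real setting


def cosine_similarity(a: list[float], b: list[float]) -> float:
    """Cosine similarity between two acoustic-fingerprint vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)


def voiceprint_matches(live: list[float], uploaded: list[float]) -> bool:
    """True when the live phrase resembles the uploaded voice-model
    material closely enough; otherwise the submission is flagged."""
    return cosine_similarity(live, uploaded) >= MATCH_THRESHOLD
```

In practice a production system would use learned speaker embeddings rather than raw vectors, but the flag-on-mismatch logic is the same.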

Exception: heritage-tier uploads (home videos, archival recordings of a deceased or incapacitated person). The creator cannot record a live phrase matching the deceased; instead, the consent manifest must include a verified death certificate or power-of-attorney document, and the listing is flagged “heritage” in the catalogue.

Layer 3 — public-voice lookup

The platform maintains a registry of:

  • Known voice actors (with their consent to be listed).
  • Major public figures (politicians, celebrities, athletes) who are protected against impersonation by default.
  • Known deepfaked voices reported by actor associations.

Every upload’s acoustic fingerprint is checked against this registry. A match to a registered voice actor whose identity does not match the creator’s identity causes the submission to be refused. A match to a public figure is refused unless a verified permission document accompanies the submission.
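The refusal rules above amount to a small decision table. The sketch below is illustrative: the `RegistryHit` type, the `kind` values, and the function name are assumptions introduced for clarity, not the platform's real API.

```python
from dataclasses import dataclass
from typing import Optional


@dataclass
class RegistryHit:
    kind: str                 # "voice_actor" | "public_figure" | "reported_deepfake"
    registered_identity: str  # legal identity the registry entry is bound to


def registry_decision(hit: Optional[RegistryHit],
                      creator_identity: str,
                      has_permission_document: bool) -> str:
    """Apply the Layer 3 rules: no hit passes; a deepfake report refuses;
    a voice-actor hit requires identity match; a public-figure hit
    requires a verified permission document."""
    if hit is None:
        return "pass"
    if hit.kind == "reported_deepfake":
        return "refuse"
    if hit.kind == "voice_actor":
        return "pass" if hit.registered_identity == creator_identity else "refuse"
    if hit.kind == "public_figure":
        return "pass" if has_permission_document else "refuse"
    return "hold_for_review"  # unknown registry entry kinds go to a human
</ ```

Note the default: anything the rules do not recognise is held for human review rather than silently passed.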

Layer 4 — signed consent manifest

Every upload carries a consent manifest (see the ethical checklist). The manifest is signed either by the voice owner directly (most cases) or by a verified surrogate (heritage cases).
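A manifest distinguishing direct-owner signatures from heritage-case surrogates might look roughly like this. Every field name here is illustrative; the platform's actual schema lives in the ethical checklist and is not reproduced here.

```python
# Illustrative manifest shape; field names are assumptions, not the real schema.
manifest = {
    "voice_owner": "Jane Doe",
    "signed_by": "voice_owner",       # or "verified_surrogate" for heritage cases
    "signature": "<detached signature>",
    "scope": ["tts", "commercial"],
    "surrogate_documents": [],        # e.g. a death certificate for heritage tier
}


def is_heritage(m: dict) -> bool:
    """Heritage uploads are signed by a surrogate and must attach
    supporting documentation, per Layer 2's heritage exception."""
    return m["signed_by"] == "verified_surrogate" and bool(m["surrogate_documents"])
```

The key structural point is that a surrogate signature alone is never enough; it must travel with verifiable documentation.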

What happens when a submission is flagged

A human reviewer examines four signals: the identity match, the voiceprint match, the registry lookup, and the consent manifest. Escalation paths:

  • Clear-cut rejection (non-owner upload of a registered voice) — rejected, uploader account reviewed.
  • Ambiguous case (legitimate-looking but close match to a registered voice) — held pending creator evidence (contract, performance history).
  • Heritage case without a valid death certificate — held pending documentation.
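The three escalation paths above can be summarised as a triage function. This is a simplified sketch: the real workflow is human-driven, and the signal names are assumptions.

```python
def triage(registry_refusal: bool,
           heritage: bool,
           has_death_certificate: bool,
           close_registry_match: bool) -> str:
    """Map review signals onto an escalation outcome, in priority order:
    clear-cut rejections first, then documentation holds, then
    evidence holds, and only then approval."""
    if registry_refusal:
        return "reject_and_review_account"
    if heritage and not has_death_certificate:
        return "hold_pending_documentation"
    if close_registry_match:
        return "hold_pending_creator_evidence"
    return "approve"
```

Ordering matters here: a clear-cut non-owner upload is rejected even if it also happens to be a heritage submission missing paperwork.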

Reporting stolen voices

If you believe your voice has been uploaded without consent, file a takedown at persona.gera.services/takedown. We process takedowns within 48 hours for verified claims; an expedited path exists for active-harm cases.

Ongoing detection

Post-publish, we continue to match new uploads and new uses against the protected-voice registry. If a voice enters the registry after a match already exists on the platform, the existing match is re-reviewed.
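The re-review trigger can be sketched as a scan of the existing catalogue whenever a new registry entry arrives. Function and parameter names are illustrative; the real matching runs over acoustic fingerprints, not the string placeholders used here.

```python
def rescan_on_registry_update(new_entry_fingerprint,
                              catalogue: dict,
                              match_fn) -> list:
    """Return the catalogue items whose stored fingerprint matches a
    newly registered voice, so each can be queued for human re-review."""
    return [item_id for item_id, fingerprint in catalogue.items()
            if match_fn(new_entry_fingerprint, fingerprint)]
```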

Why this is not perfect

Deepfake technology improves continually. Voiceprint matching against off-platform material is not a solved problem. A well-resourced adversary with access to many clean recordings of a target voice may produce a submission that passes initial checks. Our defences shift over time; we document them transparently and invite third-party research.

What we refuse, regardless

  • Voice clones of public figures without explicit permission.
  • Voice clones intended for financial or political deception.
  • Explicit-content voice clones of any identifiable person.
  • Voices of minors in any commerce context.

Cross-links

See also: ethical checklist, persona API spec.
