Secure verifications for every industry
We provide templated identity verification workflows for common industries and can further design tailored workflows for your specific business.

March 2026
Security researcher Celeste (vmfunc) and two colleagues shocked the identity verification industry on February 19, 2026, when they revealed that Persona, the verification vendor responsible for Discord's UK age verification trial, had exposed its entire government dashboard codebase on a public endpoint.
53 megabytes. 2,456 files. Sitting unprotected on a FedRAMP-authorised government server at withpersona.gov.com.
Within days, Discord announced it had ended its relationship with Persona. But the damage extended far beyond a single partnership. What the researchers uncovered raises fundamental questions about what happens to your identity data once you hand it to a centralised verification provider, and who else might be looking at it.
The researchers discovered that a development configuration path (/vite-dev/) had somehow reached production on a Google Cloud server connected to the Federal Risk and Authorization Management Program (FedRAMP). This wasn't a sophisticated hack. The files were simply there, accessible to anyone who knew where to look.
But the technical exposure was only the beginning. What the codebase revealed about Persona's capabilities was far more consequential.
When users submitted their identity documents to Persona, whether for Discord age verification, Reddit account verification, or any of the other platforms that route through the service, they likely assumed they were undergoing a straightforward identity authentication process. Name, date of birth, document authenticity. Standard KYC.
The researchers found something very different.
Persona's platform performs 269 distinct verification checks on user data. These include screenings against adverse media databases across 14 categories, including terrorism, espionage, human trafficking, and organised crime. The platform can file Suspicious Activity Reports directly to FinCEN (the US Treasury's financial crimes unit) and Canada's FINTRAC.
Internal codenames found in the exposed codebase, including "Project SHADOW" and "Project LEGION", suggest capabilities that extend well beyond simple identity verification into active intelligence-gathering territory.
To be clear: users who submitted their passport or driver's licence for what they believed was a simple age check on Discord were potentially having their identity run through counter-terrorism and espionage screening databases, with the results reportable to federal law enforcement agencies.
The exposure also revealed Persona's data retention practices. The platform can retain identity data, including IP addresses, browser and device fingerprints, government ID numbers, phone numbers, names, faces, and a battery of biometric analytics (pose detection, age inconsistency checks, and suspicious entity detection) for up to three years.
For a user who spent 30 seconds verifying their age to access a Discord server, that's three years of retained biometric and identity data, held by a company they may never have heard of, potentially accessible to government agencies they never consented to share data with.
Both Persona and Discord confirmed that the partnership was dissolved less than a month after it began. Discord stated that it will not be proceeding with Persona for identity verification.
But Discord's swift exit underscores a deeper problem that exists across the industry. When you integrate a centralised identity verification provider, you're not just outsourcing a compliance check; you're entrusting your users' most sensitive data to a third party whose full capabilities, data-sharing arrangements, and government relationships may not be fully visible to you.
Discord, a platform with hundreds of millions of users, apparently didn't have full visibility into what was happening with the identity data its users were submitting. If Discord can't see the full picture, can your organisation?
This incident exposes a critical gap in how consent works for centralised identity verification.
Users consented to verify their age on Discord. They did not consent to having their identity documents screened against terrorism databases. They did not consent to their biometric data being retained for three years. They did not consent to the possibility that their verification data could be reported to federal law enforcement agencies.
In a centralised model, consent is effectively binary: submit your data or don't use the service. Once submitted, the data enters a black box. The user has no visibility into how it's processed, who else sees it, or how long it's retained. They can't selectively share attributes (proving they're over 18 without revealing their full name and address), and they can't revoke access after the fact.
This isn't just a privacy concern; it's a compliance liability for every organisation that integrates these providers. Under GDPR, consent must be specific, informed, and freely given. If your verification vendor is performing 269 checks that your users didn't consent to, your organisation's compliance posture is built on a foundation of uninformed consent.
The Persona incident isn't an isolated failure. It's a structural consequence of how centralised identity verification works.
When identity data flows through a centralised provider, that provider becomes a single point of control and a single point of failure. They decide what checks to run. They decide who to share data with. They decide how long to retain it. And when their infrastructure is misconfigured, everyone's data is at risk simultaneously.
A decentralised approach to identity verification eliminates these problems by design.
In a decentralised architecture, users maintain control of their own identity data. Verified credentials are encrypted, sharded, and stored in the user's personal vault, not in a centralised database that can be exposed through a misconfigured development path. When an organisation needs to verify a user's identity, they receive a cryptographic proof, not a copy of the raw data.
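To make the sharding idea concrete, here is a minimal sketch of n-of-n secret splitting using XOR, where no single shard (and no subset of shards) reveals anything about the underlying credential. This is an illustration only, not Zyphe's implementation; production systems would typically use threshold schemes such as Shamir's secret sharing on top of encrypted data.

```python
import os

def split_secret(secret: bytes, n_shards: int) -> list[bytes]:
    """Split a secret into n XOR shards; all n are required to reconstruct."""
    shards = [os.urandom(len(secret)) for _ in range(n_shards - 1)]
    last = secret
    for shard in shards:
        # XOR the secret with each random shard to produce the final shard
        last = bytes(a ^ b for a, b in zip(last, shard))
    shards.append(last)
    return shards

def reconstruct(shards: list[bytes]) -> bytes:
    """XOR all shards together to recover the original secret."""
    out = bytes(len(shards[0]))
    for shard in shards:
        out = bytes(a ^ b for a, b in zip(out, shard))
    return out

# Hypothetical credential payload, for illustration only
credential = b"dob=1990-01-01;doc=passport"
shards = split_secret(credential, 3)
assert reconstruct(shards) == credential  # all three shards recover it
```

Because every shard except the last is uniformly random, any incomplete subset of shards is statistically independent of the credential: a leak of one storage location exposes nothing.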
Critically, this architecture supports selective disclosure. A user who needs to prove they're over 18 can do exactly that, without revealing their full name, address, government ID number, or biometric data. No surplus data collection. No 269 undisclosed checks. No three-year retention of biometric profiles.
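The selective-disclosure idea can be sketched as a signed predicate: a trusted issuer who has seen the full document attests only to the minimal claim ("over 18"), and the relying platform verifies that attestation without ever receiving the underlying record. The sketch below uses an HMAC as a stand-in for an issuer signature; real systems use asymmetric signatures or zero-knowledge proofs (e.g. BBS+ credentials), and the key and claim names here are purely illustrative.

```python
import hmac
import hashlib
import json

ISSUER_KEY = b"demo-issuer-key"  # hypothetical; real issuers use asymmetric keys

def issue_attestation(claims: dict) -> dict:
    """Issuer signs only the minimal predicate, not the full identity record."""
    payload = json.dumps(claims, sort_keys=True).encode()
    tag = hmac.new(ISSUER_KEY, payload, hashlib.sha256).hexdigest()
    return {"claims": claims, "sig": tag}

def verify_attestation(att: dict) -> bool:
    """Relying platform checks the issuer's signature over the claims."""
    payload = json.dumps(att["claims"], sort_keys=True).encode()
    expected = hmac.new(ISSUER_KEY, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(att["sig"], expected)

# The issuer saw the passport once; the platform only ever sees this:
proof = issue_attestation({"over_18": True})
assert verify_attestation(proof)
assert "name" not in proof["claims"] and "dob" not in proof["claims"]
```

The design point is that the platform's trust anchor is the issuer's signature, not possession of the document, so there is nothing for the platform (or its vendor) to retain, screen, or leak.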
If you're integrating a centralised identity verification provider, the Persona-Discord incident should prompt three urgent questions:
1. Do you know what your verification vendor actually does with the data?
Not what their marketing materials say. Not what their standard contract states. What their platform is technically capable of doing, and what it is actually doing. The gaps between those three can be significant.
2. Can your users exercise meaningful consent?
If your vendor is running hundreds of undisclosed checks, screening against government databases, and retaining biometric data for years, your users' "consent" is a legal fiction. That's your compliance risk, not just the vendor's.
3. Does your architecture protect against vendor failure?
When Persona's frontend was exposed, every organisation that routed through the platform was affected. In a decentralised model, there is no central frontend to expose, no central database to leak, and no single vendor whose misconfiguration can compromise your entire user base.
At Zyphe, we built our identity verification platform on a fundamentally different model. User data is decentralised by design, encrypted, sharded, and stored in a way that ensures no single entity (including Zyphe) can access it without the user's explicit, cryptographic consent.
We don't run undisclosed background checks. We don't retain biometric data in centralised repositories. And we don't have a government dashboard codebase that could be left on a public endpoint, because that's not how our architecture works.
The Persona-Discord incident is a wake-up call, but it shouldn't be a surprise. When you centralise the world's identity data, you centralise the world's identity risk. The answer isn't better perimeter security around the same flawed model. The answer is a model that doesn't create the risk in the first place.
Want to see how decentralised identity verification puts users in control of their own data? Book a demo with Zyphe and learn how our privacy-first architecture eliminates the risks exposed by the Persona-Discord incident.