

Cheqd, DataHive, Nuklai, and Datagram have launched the Sovereign AI Alliance (SAIA), a new initiative aimed at developing an open-source framework for decentralized artificial intelligence built on user-owned data.
Announced on May 1 via a press release shared with crypto.news, the alliance will focus on building technical infrastructure that supports privacy-preserving AI systems. At the core of the project is the proposed Intention Network Protocol (INP), which aims to enable AI agents to collaborate securely without compromising user control over personal data.
INP is structured around three key components: “Intention Anchors” to capture user inputs with data ownership guarantees; the “Intention Mesh,” a decentralized environment for AI communication; and “Execution Nodes,” which act on user intentions with built-in privacy protections.
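The full INP specification has not been published. As a rough illustration of how the three components relate, they could be modelled along these lines in TypeScript; every name and field below is an assumption for the sketch, not the alliance's actual schema:

```typescript
// Hypothetical shapes for the three INP components described above.
// The protocol spec is not public; names and fields are illustrative only.

interface IntentionAnchor {
  id: string;            // unique identifier for the captured intention
  owner: string;         // DID of the user who owns the underlying data
  intention: string;     // the user's stated goal, e.g. "find a flight under $300"
  dataRefs: string[];    // pointers to user-owned data, never raw copies
  consent: { grantedAt: Date; revocable: true };
}

interface ExecutionNode {
  nodeId: string;
  // Acts on an intention; implementations would check consent is still valid.
  execute(anchor: IntentionAnchor): Promise<string>;
}

// The Intention Mesh routes anchors to nodes without exposing raw personal data.
class IntentionMesh {
  private nodes: ExecutionNode[] = [];

  register(node: ExecutionNode): void {
    this.nodes.push(node);
  }

  async dispatch(anchor: IntentionAnchor): Promise<string[]> {
    // Fan the intention out to all registered nodes; each sees only
    // references to the data, preserving the owner's control.
    return Promise.all(this.nodes.map((n) => n.execute(anchor)));
  }
}
```

The key design idea the sketch captures is that execution nodes receive references to user data rather than copies, which is what would keep revocation meaningful.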
The alliance also outlined a broader roadmap that includes decentralized data storage, open-source AI models, and tools for AI agents to transact and collaborate autonomously. According to the founders, this effort is meant to move the data economy from “attention-based” to “intention-based,” with users at the center.
Why this alliance matters
SAIA enters the scene as scrutiny intensifies over centralized AI models’ use of personal data while regulators in Europe and beyond demand more transparency and user control.
Fraser Edwards, CEO of cheqd, told crypto.news in an interview that current AI models often trade in user data without meaningful consent. “Users pay with their personal data, which is extracted, aggregated, and commodified,” he said.
The alliance’s approach, according to Edwards, enables users to selectively share data, potentially monetize it, and maintain the ability to revoke access, offering a level of control that centralized platforms struggle to provide.
While previous efforts to “pay users for data” have stumbled, SAIA’s architecture is designed to treat user data as a reusable and verifiable asset within a consent-based ecosystem. It also includes compliance-focused tools, such as audit trails and selective disclosure, to help applications meet global privacy standards like GDPR.
A full Q&A with Fraser Edwards on SAIA’s plans, challenges, and user incentives follows below.
crypto.news: The Sovereign AI Alliance’s vision is to build open-source, decentralized AI frameworks with user-owned data. That sounds interesting in theory, but what advantages will this deliver over Big Tech’s AI offerings? AI services are offered “for free” in exchange for users’ data, and this has been accepted as normal for years by the vast majority of people. Why would users and developers switch to a self-sovereign model? What real incentive (monetary, privacy, or otherwise) makes your user-owned data approach compelling enough to drive adoption?
Fraser Edwards: The core advantage of the Sovereign AI Alliance’s model is alignment: AI that works for you, not for platforms monetising your attention or data. Today’s “free” AI tools aren’t truly free. Users pay with their personal data, which is extracted, aggregated, and commodified, often in ways that can’t be controlled or audited. This fuels engagement-driven systems that prioritise virality and retention over individual benefit.
A real-world example is the backlash against Meta’s AI being embedded into WhatsApp. Despite Meta’s claims of respecting privacy, users can’t turn it off, and few trust the company to act in their best interests. This erodes confidence in Big Tech’s data stewardship.
In contrast, Sovereign AI is built around user-owned data, where individuals control how their information is used, shared, and monetised. The self-sovereign model empowers people to own their digital selves in a way that Big Tech can’t offer without dismantling its entire business model.
For AI to be truly personalised and useful, it needs access to rich, cross-contextual data with full permission. If data is siloed across platforms, AI remains limited and impersonal. Sovereign AI enables intelligent systems that are both more powerful and more respectful of the people they serve.
This model creates three incentives to drive adoption:
- Privacy by Design: Data isn’t siphoned into opaque corporate systems. Instead, it remains in users’ control via decentralised identity and storage infrastructure.
- Monetary Incentive: With frameworks like cheqd’s payment rails, users and developers can receive compensation for sharing verified data or training AI agents, creating a real data economy.
- Real Personalisation: Personal AIs and agents, like those being developed by DataHive, can act on your behalf based on your true intentions, not what maximises ad revenue.
CN: A core idea here is that individuals control (and potentially monetize) their own data. However, many past projects promised “get paid for your data” and struggled, partly because an average user’s data isn’t worth much at an individual level or worth the time. How will SAIA change that equation? Are you planning to reward users directly for contributing data or AI training feedback, or is the benefit more indirect (e.g. better personalization and privacy)? If users can’t earn significant income, what’s the draw for them to proactively share and manage their data in this network?
FE: It’s true that past “get paid for your data” models often failed because they treated data like a one-off commodity rather than part of an ongoing, intent-driven ecosystem. The Sovereign AI Alliance changes the equation by reimagining data as a reusable, high-value asset within a decentralised, intention-based AI framework.
For cheqd, we approach this by enabling individuals to control and reuse their own data, but just as importantly, we’re building the infrastructure that incentivises companies and data silos to release that data back to individuals in the first place.
Rather than promising quick micro-rewards for isolated data points, cheqd’s infrastructure supports a model where users can selectively share verified, reusable data, such as credentials, preferences, or consent, on their terms. This opens up the potential for more meaningful long-term value, whether that’s in personalised AI services, selective monetisation, or simply better control. The real shift is in rebalancing power: making it viable for users to take their data with them and use it across different AI systems, without being trapped in silos.
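To illustrate the selective-sharing idea Edwards describes, here is a minimal TypeScript sketch of a credential presentation that reveals only the claims a user chooses to disclose. The `Credential` shape, the DIDs, and the claim names are hypothetical, not cheqd's actual API:

```typescript
// Illustrative only: a user presents a subset of claims from a verified
// credential, keeping the rest private.

interface Credential {
  issuer: string;                   // DID of the issuing party
  subject: string;                  // DID of the user
  claims: Record<string, unknown>;  // e.g. { ageOver18: true, country: "UK" }
}

// Build a presentation revealing only the requested claims.
function selectivePresentation(
  credential: Credential,
  reveal: string[],
): Credential {
  const disclosed: Record<string, unknown> = {};
  for (const key of reveal) {
    if (key in credential.claims) disclosed[key] = credential.claims[key];
  }
  return { ...credential, claims: disclosed };
}

// Example: share proof of age with an AI service without exposing nationality.
const vc: Credential = {
  issuer: "did:cheqd:testnet:issuer-123",
  subject: "did:cheqd:testnet:user-456",
  claims: { ageOver18: true, country: "UK" },
};
console.log(selectivePresentation(vc, ["ageOver18"])); // claims: { ageOver18: true }
```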
CN: With user-owned data at the center, how are you addressing privacy and compliance from the start? If personal data is being used to train or inform AI models, do users have fine-grained control and the ability to opt-out or revoke data? Recent regulatory actions show this is critical – e.g. Italy’s data protection authority temporarily banned ChatGPT over the “absence of any legal basis” for its mass collection of personal data for training. How will SAIA’s approach differ in handling user data so that it remains compliant with privacy laws (GDPR and others) and ethical norms?
FE: Compliance and privacy are fundamental to SAIA’s architecture. The foundation of SAIA is user sovereignty by default, without the need for central data silos, in contrast to typical AI models that collect data in bulk without meaningful consent.
- Data stays under user control
Users can grant, revoke, or limit access to their data at any time, leveraging decentralised identity (cheqd) and storage protocols (Datagram and others). This aligns with GDPR principles such as data minimisation and the right to be forgotten.
- Consent is explicit and revocable
We are developing systems that allow users to express explicit consent to the use of their data, such as for credential sharing, agent interactions, or AI training. With verifiable consent records and intention-specific use cases, a legal basis and traceability are guaranteed.
- Built-in compliance tooling
We’re creating embedded compliance features, like audit trails, data provenance, and selective disclosure, so that AI agents and applications using SAIA’s framework are not just privacy-aligned but provably compliant with global regulations.
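As a minimal sketch of how revocable consent with an audit trail could work, assuming a simple in-memory log (a production system would presumably anchor these records to verifiable, decentralised storage):

```typescript
// Sketch only: revocable consent backed by an append-only audit trail.

type ConsentEvent = { action: "grant" | "revoke"; purpose: string; at: Date };

class ConsentLedger {
  private trail: ConsentEvent[] = []; // append-only log for auditability

  grant(purpose: string): void {
    this.trail.push({ action: "grant", purpose, at: new Date() });
  }

  revoke(purpose: string): void {
    this.trail.push({ action: "revoke", purpose, at: new Date() });
  }

  // Consent holds only if the most recent event for the purpose is a grant,
  // mirroring GDPR's requirement that consent be withdrawable at any time.
  isActive(purpose: string): boolean {
    const last = [...this.trail].reverse().find((e) => e.purpose === purpose);
    return last?.action === "grant";
  }

  auditTrail(): readonly ConsentEvent[] {
    return this.trail;
  }
}

// Usage: grant consent for AI training, later revoke it.
const ledger = new ConsentLedger();
ledger.grant("ai-training");
ledger.revoke("ai-training");
console.log(ledger.isActive("ai-training")); // false
console.log(ledger.auditTrail().length);     // 2 — full history preserved
```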
CN: cheqd brings decentralized identity (self-sovereign identity, or SSI) and verification infrastructure to the alliance. How will SSI be integrated into the AI framework? For instance, will Intention Anchors use digital identities to verify data provenance or the reputation of AI agents? And given that decentralized identity is still nascent, even identity experts note that adoption is the biggest challenge due to added complexity for users and organizations, how will you encourage and bootstrap the use of SSI within SAIA?
FE: SSI plays a key role in the Sovereign AI Alliance’s architecture. It is the layer of trust that connects AI agents, people, and data. Here is how cheqd’s decentralised identity infrastructure fits into the framework, and how we intend to overcome the obstacles to adoption.
This also applies to AI agents themselves: by issuing verifiable credentials to agents, we can establish reputation, capability, and authenticity. This is vital for trust in agent-to-agent or agent-to-user interactions.
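As a rough sketch, a credential issued to an AI agent might look like the following, loosely modelled on the W3C Verifiable Credentials data model. The `AgentCapabilityCredential` type and its claims are assumptions for illustration, not a published SAIA schema:

```typescript
// Hypothetical agent credential; signature handling is omitted for brevity.
// A counterparty would verify the issuer's proof before trusting the agent.

interface AgentCredential {
  "@context": string[];
  type: string[];
  issuer: string;                  // DID of the accrediting party
  credentialSubject: {
    id: string;                    // the agent's DID
    capabilities: string[];        // what the agent is authorised to do
    reputationScore?: number;      // optional attested track record
  };
  issuanceDate: string;
}

const agentVc: AgentCredential = {
  "@context": ["https://www.w3.org/2018/credentials/v1"],
  type: ["VerifiableCredential", "AgentCapabilityCredential"],
  issuer: "did:cheqd:mainnet:accreditor-789",
  credentialSubject: {
    id: "did:cheqd:mainnet:agent-001",
    capabilities: ["negotiate-price", "share-preferences"],
    reputationScore: 0.92,
  },
  issuanceDate: new Date().toISOString(),
};

console.log(agentVc.credentialSubject.capabilities);
```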
Adoption is indeed the biggest challenge for SSI today. Whilst cheqd operates at the infrastructure layer, DataHive is directly customer-facing, offering consumers a product into which they can bring their own data.
