Everything you've ever told your AI is stored on someone else's server. That company can read it, analyse it, sell insights from it, or lose it in a breach. It's time to ask who actually owns your relationship with your AI.
Nicholas Templeman
Founder, MEOK AI LABS · Built in England
Nicholas founded MEOK AI LABS because he believed your AI should serve you, not harvest you. He is not a solicitor; this article is for informational purposes and does not constitute legal advice.
Think about the last conversation you had with an AI. Perhaps you described an argument with your partner. Maybe you talked through a health scare. You might have shared your salary, your location, your anxieties about the future. You probably typed it all without a second thought, because the interface felt intimate, private, almost like a journal.
It is not a journal. Every word you have ever typed into a mainstream AI assistant is sitting on a server in a data centre you have never visited, owned by a company whose priorities are not yours. That company can read it. Their contractors can read it. A regulator can subpoena it. A hacker can steal it. And in most cases, if that company is acquired tomorrow, the new owners inherit every confession you have ever made.
This article is not here to frighten you. It is here to give you the facts: the legal reality, the technical reality, and the questions you should be asking before you trust any AI with your inner life.
The legal answer, under most current terms of service, is: they do. Or more precisely, you retain nominal ownership of your content but you grant the AI company a broad, perpetual, irrevocable licence to use it for almost any purpose they care to define.
OpenAI's terms of service state that you own your input and output, but that you grant OpenAI "a worldwide, non-exclusive, royalty-free, transferable, sub-licensable licence" to use your content to provide and improve the service. The word "improve" is doing significant work there. Google's terms contain similar language. Anthropic's are somewhat narrower, but still reserve rights to use conversations for safety and quality purposes.
The practical consequence is this: if you disclose something sensitive to a mainstream AI today, you have, in effect, handed a copy to a corporation on terms that heavily favour them. You cannot take it back. You cannot audit how it was used. You can request deletion under GDPR, and they must comply, but you have no way to verify that the deletion was complete, or that derived data (model weights adjusted by your words) was ever removed.
The key distinction
Owning your data means controlling it: who can read it, where it lives, and when it is destroyed. A licence-back arrangement is not ownership. It is a gift with conditions attached.
Four things, roughly in order of how often they happen: your conversations are used to train and improve models; they are read by human reviewers and contractors; they are exposed in a breach; or they are transferred to a new owner when the company is acquired.
There is a fifth category that rarely gets discussed: regulatory disclosure. If a government body subpoenas an AI company for records relating to a user, and that company holds plaintext conversation logs, they have no technical defence against compliance. They hand over the data. This is not hypothetical; it has happened in criminal investigations.
Let's go through the major players specifically, because the details matter.
OpenAI (ChatGPT). Free-tier users are opted in to model improvement by default. To opt out: Settings → Data Controls → toggle off "Improve the model for everyone." Even with opt-out, your conversations transit their servers in plaintext. ChatGPT Enterprise and API customers have stronger protections: no training by default, and contractual guarantees. The 600 million people on the free tier do not have those protections.
Anthropic (Claude). Anthropic states it does not use Claude.ai conversations to train its models by default. However, conversations are retained for up to 90 days for safety and trust-and-safety review purposes. Anthropic's privacy policy is more restrained than OpenAI's, but your conversations still exist in plaintext on their servers during that window.
Google (Gemini). Google uses Gemini conversations to "improve Google products and machine-learning technologies" unless you turn off Gemini Apps Activity in your Google account. Conversations can be reviewed by trained reviewers. Given Google's broader data ecosystem, Gemini conversations can also inform personalisation across Google's advertising infrastructure.
Microsoft (Copilot). Microsoft's data practices vary substantially by product tier. Consumer Copilot follows Microsoft's general privacy policy, which permits use of data to improve services. Microsoft 365 Copilot in an enterprise context has significantly stronger protections. If you are using Copilot through a personal Microsoft account, assume training opt-out is not the default.
The pattern
In every case, the privacy protection is a setting (a policy instrument), not a structural constraint. Policies change. Settings reset. Companies get acquired. Cryptography does not change retroactively.
UK GDPR, retained post-Brexit as domestic law under the Data Protection Act 2018, gives you a meaningful set of rights over personal data held by any organisation processing data about UK residents. Here is what you are entitled to:
Right of access (Subject Access Request)
You can request a copy of all personal data an AI company holds about you. They must respond within one calendar month. This includes conversation logs, any profile or memory data, and records of how your data has been used.
Right to erasure ('right to be forgotten')
You can request that your personal data be deleted. AI companies must comply unless they have a compelling legal basis for retention. Crucially, this should cover derived data (memories, embeddings, fine-tuning inputs), not just the raw transcript.
Right to data portability
Under Article 20, you have the right to receive your personal data in a structured, commonly used, machine-readable format and to transmit it to another controller. This is the legal basis for AI memory portability, but in practice, no mainstream AI platform honours it meaningfully.
Right to rectification
If an AI system holds inaccurate personal data about you โ incorrect memories, wrong profile information โ you can require it to be corrected.
These rights exist on paper. Exercising them is another matter. Filing a Subject Access Request with a US AI company as a UK citizen involves identifying the correct legal entity, understanding their transfer mechanisms, and waiting. The system works, slowly and with friction. It was not designed for an era in which 600 million people are having intimate daily conversations with AI.
Data portability, in the context of AI, should mean this: everything your AI has learned about you (your preferences, your memories, your communication style, your history) should be yours to take with you. If you leave ChatGPT for Claude, or Claude for MEOK, your AI should be able to greet you as a person it knows, not a stranger.
This does not happen. Your ChatGPT "memory", the collection of facts the model has been instructed to remember about you, cannot be exported to Claude. Your Gemini personalisation does not transfer to Copilot. The moment you leave a platform, you start again from zero. The AI knows everything about you. You own none of it.
The reason no mainstream AI offers genuine portability is straightforward: your personalised relationship with their AI is a retention mechanism. The longer you use a platform, the more it knows about you, and the harder it becomes to leave. This is deliberate lock-in, dressed up as a feature. Under GDPR Article 20, you have a legal right to portability, but enforcing it against a company whose servers hold your data in a proprietary format requires legal action, not a download button.
True data sovereignty in AI has four properties: you, not the provider, hold the encryption keys; your data is encrypted so that no one else can read it; deletion is provable, not merely promised; and your data is exportable in an open format you can take anywhere.
This is not a fantasy. This is an engineering decision. The reason most AI companies have not built it is not that it is technically impossible; it is that it is commercially inconvenient. Data is worth money. Giving users genuine control over their data means giving up a revenue stream.
MEOK was built on the premise that your AI companion should belong to you, not to the company that made it. Here is what that means technically:
User-held encryption keys
MEOK derives encryption keys from factors held on your device. These keys are never transmitted to MEOK's servers. We store only ciphertext. Even with full database access, a MEOK engineer sees encrypted blobs, not your conversations.
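MEOK's exact derivation scheme is not documented in this article, but the general technique is standard: stretch a user-held secret into a strong key on the device itself, so that only ciphertext (and a non-secret salt) ever reaches a server. A minimal sketch using PBKDF2, with illustrative parameters:

```python
import hashlib
import os

def derive_key(passphrase: str, salt: bytes) -> bytes:
    """Derive a 256-bit AES key on the user's device.

    The passphrase and derived key never leave the device; only the
    salt (which is not secret) is stored alongside the ciphertext.
    """
    return hashlib.pbkdf2_hmac("sha256", passphrase.encode(), salt, 600_000, dklen=32)

salt = os.urandom(16)
key = derive_key("a secret only the user knows", salt)
assert len(key) == 32  # 32 bytes = an AES-256 key
```

Because derivation is deterministic for a given secret and salt, the same key can be recreated on a new device without the server ever holding it.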
AES-256-GCM encryption
All memories and personal data are encrypted using AES-256-GCM, the same standard used by governments and financial institutions. The GCM mode provides authenticated encryption, meaning any tampering with stored data is detectable.
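To illustrate what authenticated encryption buys you, here is a sketch using the third-party Python `cryptography` library (not MEOK's actual code): flipping a single byte of the stored blob makes decryption fail outright rather than silently return corrupted data.

```python
import os
from cryptography.exceptions import InvalidTag
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

key = AESGCM.generate_key(bit_length=256)  # in MEOK's model, the user holds this
aead = AESGCM(key)
nonce = os.urandom(12)                     # must be unique per message

ciphertext = aead.encrypt(nonce, b"a private memory", None)
assert aead.decrypt(nonce, ciphertext, None) == b"a private memory"

# GCM authenticates as well as encrypts: tampering raises an error.
tampered = bytearray(ciphertext)
tampered[0] ^= 0xFF
tamper_detected = False
try:
    aead.decrypt(nonce, bytes(tampered), None)
except InvalidTag:
    tamper_detected = True
```

The authentication tag is what distinguishes GCM from plain encryption modes: a server-side attacker cannot modify a stored memory without the change being detected on decryption.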
Cryptographic deletion
When you delete your data, MEOK deletes the encryption key. The ciphertext becomes permanently unreadable: not recoverable by a future data request, a court order, or an acquirer. This is deletion with mathematical proof.
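The mechanism is simple to demonstrate (again using the `cryptography` library as an illustration): once the only copy of the key is destroyed, nobody, including the operator who still stores the ciphertext, can decrypt it.

```python
import os
from cryptography.exceptions import InvalidTag
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

key = AESGCM.generate_key(bit_length=256)
nonce = os.urandom(12)
blob = AESGCM(key).encrypt(nonce, b"a memory to erase", None)

# While the key exists, the owner can read the data.
assert AESGCM(key).decrypt(nonce, blob, None) == b"a memory to erase"

# Cryptographic deletion: destroy the key. The ciphertext may linger on
# backups or an acquirer's disks, but it can no longer be decrypted.
del key
unreadable = False
try:
    AESGCM(AESGCM.generate_key(bit_length=256)).decrypt(nonce, blob, None)
except InvalidTag:
    unreadable = True
```

Recovering the plaintext without the destroyed key would mean brute-forcing a 256-bit keyspace, which is computationally infeasible; that is the "mathematical proof" behind the deletion.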
Export in open format
Your memories are exportable as structured JSON, a documented, open standard. You can import this into any compatible system. Your AI relationship is portable because it is yours.
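MEOK's actual export schema is not reproduced here; the field names below are purely illustrative. The point is what an open, structured format gives you: a round trip that is lossless and parseable by any system, with no proprietary reader required.

```python
import json

# Hypothetical export shape -- illustrative field names, not MEOK's schema.
export = {
    "format": "memory-export/1.0",
    "exported_at": "2025-06-01T12:00:00+00:00",
    "memories": [
        {
            "id": "mem-001",
            "text": "Prefers plain-English explanations",
            "created_at": "2025-01-12T09:30:00+00:00",
            "tags": ["preferences"],
        },
    ],
}

payload = json.dumps(export, indent=2)   # what the user downloads
restored = json.loads(payload)           # what any compatible system re-imports
assert restored == export                # the round trip is lossless
```

Contrast this with a PDF of conversation snippets: the JSON version carries the structure (IDs, timestamps, tags) that another AI would need to rebuild the relationship.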
Yes: fully and at any time. MEOK provides a complete export of all your memories and personal data as structured JSON.
The export is GDPR-compliant and satisfies the Article 20 right to data portability in both letter and spirit. You do not need to file a Subject Access Request or wait 30 days. The download is available directly from your account dashboard.
This is how it should work for every AI. It is how it works for MEOK.
This is the question most AI companies hope you will not ask. Here is our answer.
If MEOK is ever acquired, the acquiring company inherits our servers and our infrastructure. What they do not inherit is the ability to read your data. Because encryption keys are held client-side, on your device and derived from factors you control, the ciphertext on our servers is mathematically useless without your key. An acquirer cannot decrypt your memories. They cannot read your conversations. They cannot train a new model on your personal data.
This is not a contractual promise. It is not "we agree not to share your data with the acquirer." It is a structural property of the system. No contract can unlock ciphertext encrypted with a key the acquirer has never held.
The only way an acquisition changes your privacy is if MEOK changes the architecture before that point, and that change would require your explicit re-consent to a new encryption model. We would rather close the company.
Eight data rights dimensions across ChatGPT, Claude, Gemini, and MEOK.
| Data right | ChatGPT | Claude | Gemini | MEOK |
|---|---|---|---|---|
| You legally own your data (not licensed back to you). Most platforms take a broad perpetual licence over submitted content; MEOK does not. | ✗ | ✗ | ✗ | ✓ |
| No training on conversations by default. OpenAI and Google train by default; Anthropic retains for 90 days (safety); for MEOK it is architecturally impossible. | ✗ | ✓ | ✗ | ✓ |
| Encryption keys held by the user. MEOK derives encryption keys client-side; the server stores only ciphertext. | ✗ | ✗ | ✗ | ✓ |
| Full memory export in open format. ChatGPT and Gemini offer limited data exports; neither exports structured AI memories in portable form. | ✗ | ✗ | ✗ | ✓ |
| Cryptographic deletion (provably irrecoverable). MEOK deletes the encryption key, rendering all stored ciphertext permanently unreadable. | ✗ | ✗ | ✗ | ✓ |
| GDPR Subject Access Request (30-day). All four comply with SAR obligations, though the scope of data returned varies. | ✓ | ✓ | ✓ | ✓ |
| Memory portability (take your memories to another AI). MEOK exports memories as structured JSON; no mainstream competitor offers this. | ✗ | ✗ | ✗ | ✓ |
| Data remains private if company is acquired. MEOK's cryptographic architecture means an acquirer inherits only ciphertext, not your plaintext. | ✗ | ✗ | ✗ | ✓ |
Eight questions to ask any AI before trusting it with your life.
Who holds the encryption keys?
If the company holds the keys, they can read your data. Full stop. Ask specifically whether keys are client-side or server-side.
Is training opt-in or opt-out?
Opt-out means you are already contributing to training until you find the setting. Opt-in means your explicit consent is required. Demand the latter.
What happens to your data in a breach?
If conversations are stored in plaintext (or decryptable server-side), a breach exposes everything you have ever said. Ask whether the architecture limits breach impact.
Can you actually delete your data?
Not just remove it from the UI โ actually delete it, including derived data and model contributions. Ask whether deletion is cryptographic or merely administrative.
Can you export your memories?
A full, structured export โ not a PDF of conversation snippets. Your AI relationship should be portable. If it is not, you are locked in.
What happens to your data if the company is acquired?
Read the privacy policy for change-of-control language. If it says 'your data may be transferred as part of a business transaction,' your data is a business asset.
Who can see your conversations?
Employees? Contractors? Reviewers? Regulators? Ask specifically what categories of people have access to plaintext conversation data and under what circumstances.
Is the privacy claim architectural or contractual?
A contractual promise can be broken, renegotiated, or overridden by a court order. An architectural constraint, like client-side encryption, cannot. Know the difference.
MEOK is the only AI companion where you hold the encryption keys, can export every memory, and can delete everything, cryptographically and permanently, whenever you choose.
Meet your sovereign AI →

Why MEOK Can Never Be Trained on Your Conversations
It's not a policy. It's architecture. The technical proof.
UK Law · Sovereign AI in the UK: What the Data Protection Act Means for Your AI Companion
UK GDPR, the ICO, and the Children's Code, and how MEOK meets every obligation.
Comparison · Sovereign AI vs Cloud AI: What's the Difference?
A plain-English guide to the architectural choices that determine your privacy.
AI Memory · The Memory Problem: Why AI Can't Remember You Safely (Yet)
Why AI memory and privacy are in tension, and how MEOK resolves it.