Confer: Signal’s Founder Builds an AI Chatbot That Can’t Spy on You

Scott Dooley
6 min read · Jan 17, 2026

When you type a message into ChatGPT, Claude, or Google’s Gemini, you’re sharing information with a company that stores, analyses, and may use that data in ways you cannot control. This creates a problem that Moxie Marlinspike—the cryptographer behind Signal’s encryption protocol—believes we’re underestimating.

Marlinspike’s solution is Confer, an AI chatbot launched in late 2025 that encrypts your conversations so thoroughly that even the company running it cannot read them. It’s the same principle that made Signal the gold standard for private messaging, now applied to artificial intelligence.

This article explains how Confer works, why AI privacy matters, and what this means for the broader tech industry.

The Privacy Problem With AI Chatbots

Most AI chatbots feel private. You’re alone with your screen, typing questions you might never ask another person. But that perception is misleading. Every prompt you submit, every answer you receive, and every piece of context you provide typically ends up on a company’s servers—stored, logged, and potentially used for training future models.

Marlinspike puts it bluntly: AI represents “the first major tech medium that actively invites confession.” People share their anxieties with chatbots, ask for medical advice, discuss relationship problems, seek help with sensitive work documents, and explore thoughts they would never write in an email. The intimacy of these interactions makes the data extraordinarily valuable—and extraordinarily sensitive.

How This Data Could Be Misused

Today’s AI providers say they use your data to improve their services. But as Marlinspike argues, the information people share with AI chatbots could eventually enable new forms of manipulation.

Imagine advertisers with access to your therapy-like conversations. They would know your insecurities, your aspirations, your vulnerabilities. Marlinspike describes this potential as “a profoundly more powerful—and manipulative—form of advertising,” comparing it to paying a therapist to convince you of something.

The risk extends beyond advertising. AI chat histories could be subpoenaed in legal proceedings, exposed in data breaches, or accessed by governments. Once data exists on a company’s servers, the user loses control over its fate.

How Confer Encrypts AI Conversations

Confer applies end-to-end encryption to AI interactions using a combination of techniques developed for secure messaging and secure computing.

End-to-End Encryption

When you send a prompt to Confer, it’s encrypted on your device before transmission. The encrypted data travels to Confer’s servers, where it must be decrypted to generate a response. This is where Confer differs from traditional encrypted messaging: a Signal server merely relays ciphertext it never needs to read, but an AI model cannot process encrypted text directly.

To solve this, Confer uses confidential computing, a technology that processes data inside hardware-enforced secure enclaves called Trusted Execution Environments (TEEs). Your data is decrypted only inside this protected space, processed by the AI model, and the response is encrypted before leaving. The company’s engineers cannot access the data even while it’s being processed.
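
To make that flow concrete, here is a minimal TypeScript sketch of the client side using the browser’s Web Crypto API. It is illustrative only, not Confer’s actual code: the endpoint URL is hypothetical, and it assumes a session key has already been negotiated with an attested enclave (for example, via a Diffie–Hellman exchange against the enclave’s public key).

```typescript
// A minimal sketch of the client-side flow, not Confer's actual code.
// Assumes `sessionKey` is an AES-GCM key already negotiated with an
// attested enclave, so only the enclave can decrypt what we send.

async function sendPrompt(prompt: string, sessionKey: CryptoKey): Promise<string> {
  const iv = crypto.getRandomValues(new Uint8Array(12)); // unique nonce per message

  // Encrypt on-device: the plaintext prompt never leaves the client.
  const ciphertext = await crypto.subtle.encrypt(
    { name: "AES-GCM", iv },
    sessionKey,
    new TextEncoder().encode(prompt),
  );

  // Hypothetical endpoint; the server outside the enclave sees only ciphertext.
  const res = await fetch("https://api.example.com/v1/chat", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({
      iv: btoa(String.fromCharCode(...iv)),
      ciphertext: btoa(String.fromCharCode(...new Uint8Array(ciphertext))),
    }),
  });

  // The enclave decrypts, runs the model, and encrypts the reply in kind.
  const { iv: replyIv, ciphertext: replyCt } = await res.json();
  const plaintext = await crypto.subtle.decrypt(
    { name: "AES-GCM", iv: Uint8Array.from(atob(replyIv), (c) => c.charCodeAt(0)) },
    sessionKey,
    Uint8Array.from(atob(replyCt), (c) => c.charCodeAt(0)),
  );
  return new TextDecoder().decode(plaintext);
}
```

The important property sits in the first few lines: the plaintext prompt exists only on your device and, later, inside the enclave.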

Passkeys Instead of Passwords

Confer doesn’t use traditional passwords. Instead, it relies on passkeys—authentication tied to your device’s biometrics (Face ID, Touch ID) or device unlock PIN. Encryption keys are derived from secrets held by the device itself, so there is no password to phish, reuse, or leak in a breach.
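
In a browser, one plausible way to implement this pattern is the WebAuthn PRF extension, which asks the authenticator to evaluate a pseudo-random function over an app-chosen salt using a secret that never leaves the device. The sketch below shows that general technique; the function name and salt label are invented for illustration, and none of it is Confer’s actual code.

```typescript
// A sketch of deriving an encryption key from a passkey via the
// WebAuthn PRF extension. Illustrative only; not Confer's implementation.

async function encryptionKeyFromPasskey(credentialId: BufferSource): Promise<CryptoKey> {
  const assertion = (await navigator.credentials.get({
    publicKey: {
      challenge: crypto.getRandomValues(new Uint8Array(32)),
      allowCredentials: [{ id: credentialId, type: "public-key" }],
      // The authenticator mixes our salt with a per-credential secret
      // that never leaves the device.
      extensions: { prf: { eval: { first: new TextEncoder().encode("e2ee-key-v1") } } },
    },
  })) as PublicKeyCredential;

  // Older TypeScript DOM typings may not know about `prf` yet.
  const ext = assertion.getClientExtensionResults() as any;
  const secret: ArrayBuffer | undefined = ext.prf?.results?.first;
  if (!secret) throw new Error("authenticator lacks PRF support");

  // The 32-byte PRF output becomes raw material for an AES-256-GCM key.
  return crypto.subtle.importKey("raw", secret, "AES-GCM", false, ["encrypt", "decrypt"]);
}
```

In practice you would typically stretch the PRF output through a key-derivation step such as HKDF rather than using it directly, but the shape of the flow is the same: the key material originates in the authenticator, not on a server.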

Verifiable Privacy

One challenge with privacy claims is trust. How do you know a company actually does what it says? Confer addresses this through remote attestation—a way for users to verify exactly what software is running on the servers processing their data, rather than taking the company’s word for it.

Confer publishes its full software stack openly and digitally signs every release. Any user can cryptographically verify that the code running on Confer’s servers matches the published code. This isn’t perfect—you still need to trust the hardware manufacturers—but it’s a significant step toward provable privacy.
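
For intuition, here is a deliberately simplified TypeScript sketch of the comparison at the heart of attestation. It is an assumption about the general technique rather than Confer’s protocol: real TEE platforms compute measurements by platform-specific rules and sign them with vendor keys, both of which are reduced to a plain hash comparison here.

```typescript
// A simplified sketch of the idea behind attestation checking, not
// Confer's actual protocol. Real platforms (SGX, SEV-SNP, etc.) compute
// enclave measurements by their own rules; here we reduce it to a hash.

interface AttestationQuote {
  measurement: string;          // hex hash of the running enclave image,
                                // reported and signed by the hardware
  vendorSignature: ArrayBuffer; // signature verification is elided here
}

async function matchesPublishedRelease(
  quote: AttestationQuote,
  publishedImage: ArrayBuffer, // the openly published, signed build artifact
): Promise<boolean> {
  // Hash the published release locally...
  const digest = await crypto.subtle.digest("SHA-256", publishedImage);
  const expected = [...new Uint8Array(digest)]
    .map((b) => b.toString(16).padStart(2, "0"))
    .join("");

  // ...and require the hardware-reported measurement to match. If the
  // server were running different code, the measurement would differ.
  return quote.measurement === expected;
}
```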

Why Major AI Providers Won’t Do This

Google, OpenAI, Microsoft, and Anthropic are unlikely to adopt Confer’s approach. Their business models depend on access to user data. Training AI models requires vast datasets, and user interactions provide valuable signals for improvement. End-to-end encryption would eliminate this feedback loop.

There’s also a regulatory dimension. Governments increasingly want the ability to access AI conversations for law enforcement and national security purposes. Companies that cannot access their users’ data cannot comply with such requests—a position that invites regulatory pressure.

Marlinspike’s bet is that a privacy-first alternative can succeed despite these forces. Signal proved that encrypted messaging could attract hundreds of millions of users even when competing against services with far more resources. Confer attempts the same disruption in AI.

Trade-offs and Limitations

Privacy comes with costs. Confer charges $34.99 per month for its premium tier—significantly more expensive than competitors. The higher price reflects the additional infrastructure required for confidential computing and the inability to subsidise costs through data monetisation.

The service uses open-source AI models rather than the proprietary models from OpenAI or Anthropic. While open-source models have improved dramatically, they may not match the capabilities of the most advanced commercial offerings. Confer compensates by using different models for different tasks, but some users may find the responses less capable than those from competitors.

There’s also the question of trust. Confer’s architecture is designed to minimise trust requirements, but using any cloud service requires some level of confidence in the provider. Remote attestation helps, but most users will not verify the cryptographic proofs themselves.

What This Means for AI Privacy

Confer’s launch signals growing awareness that AI privacy is a distinct and serious issue. The intimate nature of AI interactions—combined with the permanence of digital records—creates risks that existing privacy frameworks weren’t designed to address.

For businesses handling sensitive information, Confer offers an alternative to either prohibiting AI use or accepting the privacy trade-offs of mainstream providers. Legal professionals, healthcare workers, journalists, and anyone dealing with confidential data may find value in a service that cannot access their conversations.

For individuals, Confer represents a choice. You can continue using free or cheaper AI services that collect your data, or you can pay for privacy. That trade-off has always existed in technology, but AI makes it more consequential. The information you share with a chatbot over months or years paints a detailed psychological portrait—one that most people would prefer to keep private.

Conclusion

Moxie Marlinspike built Signal because he believed private communication should be accessible to everyone, not just those with technical expertise. With Confer, he’s making the same argument about AI: your conversations with machines deserve the same protection as your conversations with people.

Whether Confer succeeds commercially matters less than the question it raises. As AI becomes embedded in daily life—helping us work, think, and make decisions—do we accept that every interaction will be recorded and analysed? Or do we demand something better?

For now, Confer offers a proof of concept. Private AI is technically possible. The question is whether enough people will value it.

Author

Scott Dooley is a seasoned entrepreneur and data protection expert with over 15 years of experience in the tech industry. As the founder of Measured Collective and Kahunam, Scott has dedicated his career to helping businesses navigate the complex landscape of data privacy and GDPR compliance.

With a background in marketing and web development, Scott brings a unique perspective to data protection issues, understanding both the technical and business implications of privacy regulations. His expertise spans from cookie compliance to implementing privacy-by-design principles in software development.

Scott is passionate about demystifying GDPR and making data protection accessible to businesses of all sizes. Through his blog, he shares practical insights, best practices, and the latest developments in data privacy law, helping readers stay informed and compliant in an ever-changing regulatory environment.
