Proton’s Lumo AI chatbot: not end-to-end encrypted, not open source
-
How much longer until the AI bubble pops? I'm tired of this.
Time to face the facts: this utter shit is here to stay, just like every other bit of enshittification we get exposed to.
-
My friend, I think the confusion stems from you thinking you have deep technical understanding on this, when everything you say demonstrates that you don't.
First off, you don't even know the terminology. A local LLM is one YOU run on YOUR machine.
Lumo apparently runs on Proton servers - where their email and docs all live as well. So I'm not sure what "Their AI is not local!" even means, other than that you don't know what LLMs do or what they actually are. Do you expect a 32B LLM that would need about a 32GB video card to all get downloaded and run in a browser? Buddy... just... no.
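The VRAM claim is easy to sanity-check with back-of-envelope math (weights only; real inference needs extra room for KV cache and runtime overhead on top):

```python
def vram_gb(params_billion: float, bytes_per_param: float) -> float:
    """Rough GiB of VRAM needed just to hold the model weights."""
    return params_billion * 1e9 * bytes_per_param / (1024 ** 3)

# A 32B model at 8-bit quantization is ~30 GiB of weights alone,
# so a 32GB card is roughly the floor. At fp16 it's about double.
print(round(vram_gb(32, 1), 1))  # ~29.8
print(round(vram_gb(32, 2), 1))  # ~59.6 -- nowhere near browser territory
```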
Look, Proton can at any time MITM attack your email, or, if you use them as a VPN, MITM your VPN traffic if it feels like it. Any VPN or secure email provider can actually do that. Mullvad can, Nord can, take your pick. That's just a fact. Google's business model is to MITM attack your life, so we already have the real-world example. So your threat model needs to include how much you trust the entity handling your data not to do that, intentionally or by letting others through negligence.
There is no such thing as e2ee LLMs. That's not how any of this works. Doing e2ee for the chats to get what you type into the LLM context window, letting the LLM process tokens the only way they can, getting you back your response, and getting it to not keep logs or data, is about as good as it gets for not having a local LLM - which, remember, means on YOUR machine. If that's unacceptable for you, then don't use it. But don't brandish your ignorance like you're some expert, and that everyone on earth needs to adhere to whatever "standards" you think up that seem ill-informed.
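That pipeline (encrypted in transit, plaintext only while the model processes tokens, nothing kept afterwards) fits in a few lines of Python. This is a toy illustration only: the XOR cipher stands in for TLS, `fake_llm` stands in for the real model, and none of it is Proton's actual code.

```python
def xor_cipher(data: bytes, key: bytes) -> bytes:
    # Toy stand-in for TLS; never use XOR as real encryption.
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

def fake_llm(prompt: str) -> str:
    # Stand-in for the model, which has no choice but to see plaintext tokens.
    return prompt.upper()

def handle_request(ciphertext: bytes, session_key: bytes) -> bytes:
    prompt = xor_cipher(ciphertext, session_key).decode()  # decrypt server-side
    reply = fake_llm(prompt)          # plaintext exists only here, in memory
    out = xor_cipher(reply.encode(), session_key)
    del prompt, reply                 # flush; nothing logged, nothing stored
    return out

key = b"session-key"
ct = xor_cipher(b"hello", key)
print(xor_cipher(handle_request(ct, key), key).decode())  # prints HELLO
```

The point of the sketch: the plaintext window at the server is unavoidable, and "don't log, flush after use" is the only lever left once you accept a non-local model.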
Also, clearly you aren't using Proton anyway because if you need to search the text of your emails, you have to process that locally, and you have to click through 2 separate warnings that tell you in all bold text "This breaks the e2ee! Are you REALLY sure you want to do this?" So your complaint about warnings is just a flag saying you don't actually know and are just guessing.
-
A local LLM is one YOU run on YOUR machine.
Yes, that is exactly what I am saying. You seem to be confused by basic English.
Look, Proton can at any time MITM attack your email
They are not supposed to be able to and well designed e2ee services can't be. That's the whole point of e2ee.
There is no such thing as e2ee LLMs. That's not how any of this works.
I know. When did I say there is?
-
A local LLM is one YOU run on YOUR machine.
Yes, that is exactly what I am saying. You seem to be confused by basic English.
Look, Proton can at any time MITM attack your email
They are not supposed to be able to and well designed e2ee services can't be. That's the whole point of e2ee.
There is no such thing as e2ee LLMs. That's not how any of this works.
I know. When did I say there is?
So then you object to the premise that any non-local LLM setup can ever be "secure," and can't seem to articulate that.
What exactly is dishonest here? The language on their site is factually accurate, I've had to read it 7 times today because of you all. You just object to the premise of non-local LLMs and are, IMO, disingenuously making that a "brand issue" because....why? It sounds like a very emotional argument as it's not backed by any technical discussion beyond "local only secure, nothing else."
Beyond the fact that
They are not supposed to be able to and well designed e2ee services can’t be.
So then you trust that their system is well-designed already? What is this cognitive dissonance that they can secure the relatively insecure format of email, but can't figure out TLS and flushing logs for an LLM on their own servers? If anything, it's not even a complicated setup. TLS to the context window, don't keep logs, flush the data. How do you think no-log VPNs work? This isn't exactly all that far off from that.
-
So then you object to the premise that any non-local LLM setup can ever be "secure," and can't seem to articulate that.
What exactly is dishonest here? The language on their site is factually accurate, I've had to read it 7 times today because of you all. You just object to the premise of non-local LLMs and are, IMO, disingenuously making that a "brand issue" because....why? It sounds like a very emotional argument as it's not backed by any technical discussion beyond "local only secure, nothing else."
Beyond the fact that
They are not supposed to be able to and well designed e2ee services can’t be.
So then you trust that their system is well-designed already? What is this cognitive dissonance that they can secure the relatively insecure format of email, but can't figure out TLS and flushing logs for an LLM on their own servers? If anything, it's not even a complicated setup. TLS to the context window, don't keep logs, flush the data. How do you think no-log VPNs work? This isn't exactly all that far off from that.
What exactly is dishonest here? The language on their site is factually accurate, I've had to read it 7 times today because of you all.
I object to how it is written. Yes, technically it is not wrong. But it intentionally uses confusing language and rare technical terminology to imply it is as secure as e2ee. They compare it to proton mail and drive that are supposedly e2ee.
-
How much longer until the AI bubble pops? I'm tired of this.
Here's the thing: it kind of already has. The new AI push is about smaller projects and AI agents like Claude Code and GitHub Copilot integration. MCPs are also starting to pick up some steam as a way to refine prompt engineering. The basic AI "bubble" popped already; what we're seeing now is an odd arms race of smaller AI projects, thanks to companies like DeepSeek pushing AI hosting costs so low that anyone can reasonably host and tweak their own LLMs without it costing a fortune. It's really an interesting thing to watch, but honestly I don't think we're going to see the major gains the tech industry is trying to push anytime soon. Take any claims of AGI and OpenAI "breakthroughs" with a mountain of salt, because they will do anything to keep the hype up and drive up their stock prices. Sam Altman is a con man and nothing more; don't believe what he says.
-
What exactly is dishonest here? The language on their site is factually accurate, I've had to read it 7 times today because of you all.
I object to how it is written. Yes, technically it is not wrong. But it intentionally uses confusing language and rare technical terminology to imply it is as secure as e2ee. They compare it to proton mail and drive that are supposedly e2ee.
They compare it to proton mail and drive that are supposedly e2ee.
Only drive is. Email is not always e2ee, it uses zero-access encryption which I believe is the same exact mechanism used by this chatbot, so the comparison is quite fair tbh.
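The difference between the two models comes down to who sees plaintext and when: with zero-access encryption the server sees your data once on arrival, then locks it with a key it can't open; with e2ee the server only ever handles ciphertext. A toy sketch of that distinction (XOR stands in for real public-key crypto, and the function names are made up for illustration):

```python
def toy_encrypt(data: bytes, key: bytes) -> bytes:
    # Toy cipher for illustration only; real systems use PGP/AES etc.
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

USER_KEY = b"only-the-user-holds-this"

def server_zero_access(incoming_plaintext: bytes) -> bytes:
    # Zero-access (most Proton Mail traffic): plaintext reaches the server,
    # which immediately encrypts it for storage and discards the original.
    return toy_encrypt(incoming_plaintext, USER_KEY)

def server_e2ee(incoming_ciphertext: bytes) -> bytes:
    # e2ee (Proton Drive): the client already encrypted it;
    # the server only ever stores ciphertext it cannot read.
    return incoming_ciphertext

msg = b"hi"
stored = server_zero_access(msg)                    # server briefly saw b"hi"
stored2 = server_e2ee(toy_encrypt(msg, USER_KEY))   # server never saw b"hi"
assert toy_encrypt(stored, USER_KEY) == msg == toy_encrypt(stored2, USER_KEY)
```

Both end up unreadable at rest; the zero-access path just includes a moment where the server held the plaintext, which is exactly the window an LLM needs anyway.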
-
This post did not contain any content.
Proton has my vote for fastest company ever to completely enshittify.
-
How much longer until the AI bubble pops? I'm tired of this.
It's when the coffers of Microsoft, Amazon, Meta and the investment banks dry up. All of them are losing billions every month, but it's all driven by fewer than 10 companies. Nvidia is lapping up the money of course, but once the AI companies stop buying GPUs in crazy numbers it's going to be a rocky ride down.
-
What exactly is dishonest here? The language on their site is factually accurate, I've had to read it 7 times today because of you all.
I object to how it is written. Yes, technically it is not wrong. But it intentionally uses confusing language and rare technical terminology to imply it is as secure as e2ee. They compare it to proton mail and drive that are supposedly e2ee.
It is e2ee -- with the LLM context window!
When you email someone outside Proton servers, doesn't the same thing happen anyway? But the LLM is on Proton servers, so what's the actual vulnerability?
-
Both you and the author seem to not understand how LLMs work. At all.
At some point, yes, an LLM has to process cleartext tokens. There's no getting around that. Anyone who creates a 30-billion-parameter LLM that can run on encrypted data will become an overnight billionaire from military contracts alone. If you want absolute privacy, process locally. Lumo has limitations, but goes farther than duck.ai at respecting privacy. Your threat model and equipment mean YOU make a decision for YOUR needs. This is an option. This is not trying to be one size fits all. You don't HAVE to use it. It's not being forced down your throat like Gemini or CoPilot.
And their LLMs - Mistral, OpenHands and OLMo - are all open source. It's in their documentation. So this article straight up lies about that. Like... did Google write this article? It's simply propaganda.
Also, Proton does have some circumstances where it lets you decrypt your own email locally. Otherwise it's basically impossible to search your email for text in the email body. They already had that as an option, and if users want AI assistants, that's obviously their bridge. But it's not a default: it's an option you have to set up, and it's not for everyone, but some users want it. Nothing is forced on anyone. Chill TF out.
If an AI can work on encrypted data, it's not encrypted.
-
It is e2ee -- with the LLM context window!
When you email someone outside Proton servers, doesn't the same thing happen anyway? But the LLM is on Proton servers, so what's the actual vulnerability?
It is e2ee
It is not. Not in any meaningful way.
When you email someone outside Proton servers, doesn't the same thing happen anyway?
Yes it does.
But the LLM is on Proton servers, so what's the actual vulnerability?
Again, the issue is not the technology. The issue is deceptive marketing. Why doesn't their site clearly say what you say? Why use confusing technical terms most people won't understand and compare it to drive that is fully e2ee?
-
They compare it to proton mail and drive that are supposedly e2ee.
Only drive is. Email is not always e2ee, it uses zero-access encryption which I believe is the same exact mechanism used by this chatbot, so the comparison is quite fair tbh.
Well, even the mail is sometimes e2ee. Making the comparison without specifying is like marketing your safe as being used in Fort Knox and it turns out it is a cheap safe used for payroll documents like in every company. Technically true but misleading as hell. When you hear Fort Knox, you think gold vault. If you hear proton mail, you think e2ee even if most mails are external.
And even if you disagree about mail, there is no excuse for comparing to proton drive.
-
It's when the coffers of Microsoft, Amazon, Meta and the investment banks dry up. All of them are losing billions every month, but it's all driven by fewer than 10 companies. Nvidia is lapping up the money of course, but once the AI companies stop buying GPUs in crazy numbers it's going to be a rocky ride down.
Is it like crypto mining, where CPUs were good, then GPUs, then FPGAs, then ASICs? Or is this different?
-
I'm just saying Andy sucking up to Trump is a red flag. I'm cancelling in 2026 🫠
What are you considering as alternatives?
-
What are you considering as alternatives?
I highly suggest Tuta, https://tuta.com/, or other conventional mail boxes like https://mailbox.org/en/
-
A local LLM is one YOU run on YOUR machine.
Yes, that is exactly what I am saying. You seem to be confused by basic English.
Look, Proton can at any time MITM attack your email
They are not supposed to be able to and well designed e2ee services can't be. That's the whole point of e2ee.
There is no such thing as e2ee LLMs. That's not how any of this works.
I know. When did I say there is?
They are not supposed to be able to and well designed e2ee services can’t be. That’s the whole point of e2ee.
You're using their client. You get a fresh copy every time it changes. Of course you are vulnerable to a MITM attack, if they chose to attempt one.
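This is the classic web-client caveat: the encryption math can be perfect, but the code that performs it is re-delivered by the server on every load, and a tampered copy is indistinguishable from the honest one. A toy illustration (hypothetical functions, not anyone's real client):

```python
leaked = []

def honest_encrypt(plaintext: str, key: int) -> list[int]:
    # What the shipped client is supposed to do.
    return [ord(c) ^ key for c in plaintext]

def backdoored_encrypt(plaintext: str, key: int) -> list[int]:
    # A tampered build: same ciphertext, but it quietly exfiltrates first.
    leaked.append(plaintext)
    return [ord(c) ^ key for c in plaintext]

# The outputs are identical, so the user can't tell which version ran --
# whichever copy the server shipped today is the one that executes.
assert honest_encrypt("secret", 42) == backdoored_encrypt("secret", 42)
assert leaked == ["secret"]
```

That's why "well-designed e2ee" in a browser still reduces to trusting whoever serves the JavaScript, unless you pin and audit the client yourself.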
-
They are not supposed to be able to and well designed e2ee services can’t be. That’s the whole point of e2ee.
You're using their client. You get a fresh copy every time it changes. Of course you are vulnerable to a MITM attack, if they chose to attempt one.
If you insist on being a fanboy then go ahead. But this is like arguing a bulletproof vest is useless because it does not cover your entire body.
-
Here's the thing: it kind of already has. The new AI push is about smaller projects and AI agents like Claude Code and GitHub Copilot integration. MCPs are also starting to pick up some steam as a way to refine prompt engineering. The basic AI "bubble" popped already; what we're seeing now is an odd arms race of smaller AI projects, thanks to companies like DeepSeek pushing AI hosting costs so low that anyone can reasonably host and tweak their own LLMs without it costing a fortune. It's really an interesting thing to watch, but honestly I don't think we're going to see the major gains the tech industry is trying to push anytime soon. Take any claims of AGI and OpenAI "breakthroughs" with a mountain of salt, because they will do anything to keep the hype up and drive up their stock prices. Sam Altman is a con man and nothing more; don't believe what he says.
You're saying the AI bubble has popped because even smaller companies and individuals are getting in on the action?
That's kind of the definition of a bubble, actually: more and more people trying to make money on a trend that doesn't have that much real value in it. Nearly the same thing happened with the dotcom bubble. It wasn't that the web/tech wasn't valuable (it's now the most valuable sector of the world economy), but while the bubble was expanding, more was being invested than it was worth, because no one wanted to miss out and it was accessible enough that almost anyone could try it out.
-
Well, even the mail is sometimes e2ee. Making the comparison without specifying is like marketing your safe as being used in Fort Knox and it turns out it is a cheap safe used for payroll documents like in every company. Technically true but misleading as hell. When you hear Fort Knox, you think gold vault. If you hear proton mail, you think e2ee even if most mails are external.
And even if you disagree about mail, there is no excuse for comparing to proton drive.
Email is almost always zero-access encrypted (like live chats), given how small the share of Proton-to-Proton emails is (and the even smaller share of PGP users). Drive is e2ee, like chat history.
Basically I see email : chats = drive : history. Anyway, I agree it could be done better, but I don't really see the big deal. Any user unable to understand this won't get the difference between zero-access and e2ee.
-
You're saying the AI bubble has popped because even smaller companies and individuals are getting in on the action?
That's kind of the definition of a bubble, actually: more and more people trying to make money on a trend that doesn't have that much real value in it. Nearly the same thing happened with the dotcom bubble. It wasn't that the web/tech wasn't valuable (it's now the most valuable sector of the world economy), but while the bubble was expanding, more was being invested than it was worth, because no one wanted to miss out and it was accessible enough that almost anyone could try it out.
I literally said exactly what you're explaining. I'm not sure what you're trying to accomplish here....