
Proton’s Lumo AI chatbot: not end-to-end encrypted, not open source

Technology
  • This was it for me, cancelled my account. Fuck this Andy moron

    Well, I'm keeping mine. I'm actually very happy with it. This article is full of slop, with loads of disinformation and an evident lack of research. It looks like it was made with some AI bullshit and the writer didn't even check what that thing vomited.

  • For a critical blog, the first few paragraphs sound a lot like they're shilling for Proton.

    I'm not sure if I'm supposed to be impressed by the author's witty wording, but "the cool trick they do" is - full encryption.

    Moving on.

    But that’s misleading. The actual large language model is not open. The code for Proton’s bit of Lumo is not open source. The only open source bit that Proton’s made available is just some of Proton’s controls for the LLM. [GitHub]

    In the single most damning thing I can say about Proton in 2025, the Proton GitHub repository has a “cursorrules” file. They’re vibe-coding their public systems. Much secure!

    oof.

    Over the years I've heard many people claim that Proton's servers being in Switzerland makes them more secure than servers in other EU countries - well, there's also this now:

    Proton is moving its servers out of Switzerland to another country in the EU they haven’t specified. The Lumo announcement is the first that Proton’s mentioned this.

    No company is safe from enshittification - always look for, and base your choices on, the legally binding stuff, before you commit. Be wary of weasel wording. And always, always be ready to move* on when the enshittification starts despite your caution.


    * regarding email, there are redirection services, a.k.a. eternal email addresses - in some cases run by venerable non-profits.

    Switzerland has a surveillance law in the works that will force VPNs, messaging apps, and online platforms to log users' identities, IP addresses, and metadata for government access.

  • I'm just saying Andy sucking up to Trump is a red flag. I'm cancelling in 2026 🫠

  • How much longer until the AI bubble pops? I'm tired of this.

    Time to face the facts, this utter shit is here to stay, just like every other bit of enshittification we get exposed to.

  • My friend, I think the confusion stems from you thinking you have deep technical understanding on this, when everything you say demonstrates that you don't.

    First off, you don't even know the terminology. A local LLM is one YOU run on YOUR machine.

    Lumo apparently runs on Proton servers - where their email and docs all are as well. So I'm not sure what "Their AI is not local!" even means, other than that you don't know what LLMs do or what they actually are. Do you expect a 32B LLM that would use about a 32GB video card to all get downloaded and run in a browser? Buddy....just...no.

    Look, Proton can at any time MITM attack your email, or if you use them as a VPN, MITM your VPN traffic if it feels like it. Any VPN or secure email provider can actually do that. Mullvad can, Nord, take your pick. That's just a fact. Google's business model is to MITM attack your life, so we have the counterexample already. So your threat model needs to include how much you trust the entity handling your data not to do that, intentionally or through negligence.

    There is no such thing as e2ee LLMs. That's not how any of this works. Doing e2ee for the chats to get what you type into the LLM context window, letting the LLM process tokens the only way they can, getting you back your response, and getting it to not keep logs or data, is about as good as it gets for not having a local LLM - which, remember, means on YOUR machine. If that's unacceptable for you, then don't use it. But don't brandish your ignorance like you're some expert, and that everyone on earth needs to adhere to whatever "standards" you think up that seem ill-informed.

    Also, clearly you aren't using Proton anyway because if you need to search the text of your emails, you have to process that locally, and you have to click through 2 separate warnings that tell you in all bold text "This breaks the e2ee! Are you REALLY sure you want to do this?" So your complaint about warnings is just a flag saying you don't actually know and are just guessing.
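To put rough numbers on the "32B LLM in a browser" point above, here is some back-of-envelope arithmetic (illustrative figures, not anything from Proton's documentation):

```python
# Rough memory needed just to hold an LLM's weights in VRAM,
# ignoring the KV cache and activations (illustrative only).
def weight_memory_gb(params_billion: float, bytes_per_param: float) -> float:
    """Weights-only footprint in GiB for a model of the given size."""
    return params_billion * 1e9 * bytes_per_param / 1024**3

fp16_gb = weight_memory_gb(32, 2)    # 16-bit weights for a 32B model
int4_gb = weight_memory_gb(32, 0.5)  # aggressive 4-bit quantization
print(round(fp16_gb, 1), round(int4_gb, 1))  # prints: 59.6 14.9
```

Even heavily quantized, that is more VRAM than most consumer machines have, which is why "local" in practice means a serious GPU box, not a browser tab.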

  • A local LLM is one YOU run on YOUR machine.

    Yes, that is exactly what I am saying. You seem to be confused by basic English.

    Look, Proton can at any time MITM attack your email

    They are not supposed to be able to and well designed e2ee services can't be. That's the whole point of e2ee.

    There is no such thing as e2ee LLMs. That's not how any of this works.

    I know. When did I say there is?

  • So then you object to the premise any LLM setup that isn't local can ever be "secure" and can't seem to articulate that.

    What exactly is dishonest here? The language on their site is factually accurate, I've had to read it 7 times today because of you all. You just object to the premise of non-local LLMs and are, IMO, disingenuously making that a "brand issue" because....why? It sounds like a very emotional argument as it's not backed by any technical discussion beyond "local only secure, nothing else."

    Beyond the fact that

    They are not supposed to be able to and well designed e2ee services can’t be.

    So then you trust that their system is well-designed already? What is this cognitive dissonance that they can secure the relatively insecure format of email, but can't figure out TLS and flushing logs for an LLM on their own servers? If anything, it's not even a complicated setup. TLS to the context window, don't keep logs, flush the data. How do you think no-log VPNs work? This isn't exactly all that far off from that.
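The "TLS to the context window, don't keep logs, flush the data" flow described above can be sketched in a few lines. This is a hypothetical design sketch, not Proton's actual implementation; all names are made up:

```python
# Sketch of a no-log inference handler: the prompt arrives over an
# encrypted channel, the model sees it in plaintext (the only way an
# LLM can process tokens), and nothing is logged or persisted.

class EchoModel:
    """Stand-in for a real LLM backend (hypothetical)."""
    def generate(self, prompt: str) -> str:
        return f"echo: {prompt}"

def handle_request(prompt: str, model) -> str:
    reply = model.generate(prompt)  # plaintext exists only in this scope
    # Deliberately no logging and no storage: the prompt and reply go
    # out of scope once the response is returned, mirroring how a
    # no-log VPN discards traffic metadata.
    return reply

print(handle_request("hello", EchoModel()))  # prints: echo: hello
```

The whole "no-log" guarantee lives in what this code does *not* do, which is exactly why it can only be verified by audit, not by inspecting traffic from outside.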

  • What exactly is dishonest here? The language on their site is factually accurate, I've had to read it 7 times today because of you all.

    I object to how it is written. Yes, technically it is not wrong. But it intentionally uses confusing language and rare technical terminology to imply it is as secure as e2ee. They compare it to proton mail and drive that are supposedly e2ee.

  • How much longer until the AI bubble pops? I'm tired of this.

    Here's the thing, it kind of already has. The new AI push is related to smaller projects and AI agents like Claude Code and GitHub Copilot integration. MCPs are also starting to pick up some steam as a way to refine prompt engineering. The basic AI "bubble" popped already; what we're seeing now is an odd arms race of smaller AI projects, thanks to companies like DeepSeek pushing AI hosting costs so low that anyone can reasonably host and tweak their own LLMs without it costing a fortune. It's really an interesting thing to watch, but honestly I don't think we're going to see the major gains that the tech industry is trying to push anytime soon. Take any claims of AGI and OpenAI "breakthroughs" with a mountain of salt, because they will do anything to keep the hype up and drive up their stock prices. Sam Altman is a con man and nothing more; don't believe what he says.

  • They compare it to proton mail and drive that are supposedly e2ee.

    Only drive is. Email is not always e2ee, it uses zero-access encryption which I believe is the same exact mechanism used by this chatbot, so the comparison is quite fair tbh.

  • Proton has my vote for fastest company ever to completely enshittify.

  • How much longer until the AI bubble pops? I'm tired of this.

    It's when the coffers of Microsoft, Amazon, Meta and investment banks dry up. All of them are losing billions every month, but it's all driven by fewer than 10 companies. Nvidia is lapping up the money of course, but once the AI companies stop buying GPUs in crazy numbers it's going to be a rocky ride down.

  • It is e2ee -- with the LLM context window!

    When you email someone outside Proton servers, doesn't the same thing happen anyway? But the LLM is on Proton servers, so what's the actual vulnerability?

  • Both your take, and the author, seem to not understand how LLMs work. At all.

    At some point, yes, an LLM model has to process clear text tokens. There's no getting around that. Anyone who creates an LLM that can process 30 billion parameters while encrypted will become an overnight billionaire from military contracts alone. If you want absolute privacy, process locally. Lumo has limitations, but goes farther than duck.ai at respecting privacy. Your threat model and equipment mean YOU make a decision for YOUR needs. This is an option. This is not trying to be one size fits all. You don't HAVE to use it. It's not being forced down your throat like Gemini or CoPilot.

    And their LLM - it's Mistral, OpenHands and OLMo, all open source. It's in their documentation. So this article is straight up lying about that. Like.... Did Google write this article? It's simply propaganda.

    Also, Proton does have some circumstances where it lets you decrypt your own email locally. Otherwise it's basically impossible to search your email for text in the email body. They already had that as an option, and if users want AI assistants, that's obviously their bridge. But it's not a default setup. It's an option you have to set up. It's not for everyone. Some users want that. It's not forced on everyone. Chill TF out.

    If an AI can work on encrypted data, it's not encrypted.

  • It is e2ee

    It is not. Not in any meaningful way.

    When you email someone outside Proton servers, doesn't the same thing happen anyway?

    Yes it does.

    But the LLM is on Proton servers, so what's the actual vulnerability?

    Again, the issue is not the technology. The issue is deceptive marketing. Why doesn't their site clearly say what you say? Why use confusing technical terms most people won't understand and compare it to drive that is fully e2ee?

  • Well, even the mail is sometimes e2ee. Making the comparison without specifying is like marketing your safe as being used in Fort Knox and it turns out it is a cheap safe used for payroll documents like in every company. Technically true but misleading as hell. When you hear Fort Knox, you think gold vault. If you hear proton mail, you think e2ee even if most mails are external.

    And even if you disagree about mail, there is no excuse for comparing to proton drive.

  • It's when the coffers of Microsoft, Amazon, Meta and investment banks dry up. All of them are losing billions every month, but it's all driven by fewer than 10 companies. Nvidia is lapping up the money of course, but once the AI companies stop buying GPUs in crazy numbers it's going to be a rocky ride down.

    Is it like crypto, where CPUs were good, then GPUs, then FPGAs, then ASICs? Or is this different?

  • I'm just saying Andy sucking up to Trump is a red flag. I'm cancelling in 2026 🫠

    What are you considering as alternatives?

  • What are you considering as alternatives?

    I highly suggest Tuta, https://tuta.com/, or other conventional mailboxes like https://mailbox.org/en/

  • They are not supposed to be able to and well designed e2ee services can’t be. That’s the whole point of e2ee.

    You're using their client. You get a fresh copy every time it changes. Of course you are vulnerable to a MITM attack, if they choose to attempt one.

  • If you insist on being a fanboy then go ahead. But this is like arguing a bulletproof vest is useless because it does not cover your entire body.
