
Proton’s Lumo AI chatbot: not end-to-end encrypted, not open source

Technology
  • My friend, I think the confusion stems from you thinking you have deep technical understanding of this, when everything you say demonstrates that you don't.

    First off, you don't even know the terminology. A local LLM is one YOU run on YOUR machine.

    Lumo apparently runs on Proton servers - where their email and docs all are as well. So I'm not sure what "Their AI is not local!" even means, other than that you don't know what LLMs do or what they actually are. Do you expect a 32B LLM that would use about a 32GB video card (rough math at the end of this comment) to all get downloaded and run in a browser? Buddy....just...no.

    Look, Proton can at any time MITM attack your email, or, if you use them as a VPN, MITM your VPN traffic if they feel like it. Any VPN or secure email provider can actually do that. Mullvad can, Nord can, take your pick. That's just a fact. Google's business model is to MITM attack your life, so we have the counterfactual already. So your threat model needs to include how much you trust the entity handling your data not to do that, intentionally or by letting others through negligence.

    There is no such thing as e2ee LLMs. That's not how any of this works. Doing e2ee for the chats to get what you type into the LLM context window, letting the LLM process tokens the only way it can, getting your response back, and having it keep no logs or data, is about as good as it gets short of a local LLM - which, remember, means on YOUR machine. If that's unacceptable for you, then don't use it. But don't brandish your ignorance like you're some expert and insist that everyone on earth adhere to whatever ill-informed "standards" you think up.

    Also, clearly you aren't using Proton anyway because if you need to search the text of your emails, you have to process that locally, and you have to click through 2 separate warnings that tell you in all bold text "This breaks the e2ee! Are you REALLY sure you want to do this?" So your complaint about warnings is just a flag saying you don't actually know and are just guessing.
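    For the 32B-model point above, here is a rough, back-of-the-envelope check of why a model that size doesn't run in a browser. The numbers (8-bit weights, weights only) are illustrative assumptions, not Lumo's actual deployment.

    ```python
    # Rough, weights-only VRAM estimate for a dense 32B-parameter model.
    # Assumption (illustrative): 8-bit quantization, i.e. 1 byte per weight.
    # KV cache and activations need additional memory on top of this.
    params = 32_000_000_000
    bytes_per_param = 1            # int8; fp16 would be 2 bytes and double the total
    weights_gb = params * bytes_per_param / 1e9
    print(f"weights alone: ~{weights_gb:.0f} GB")   # ~32 GB, the "32GB video card" ballpark
    ```

    At fp16 that figure doubles to roughly 64 GB, which is why this class of model lives on server GPUs rather than in a browser tab.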

    A local LLM is one YOU run on YOUR machine.

    Yes, that is exactly what I am saying. You seem to be confused by basic English.

    Look, Proton can at any time MITM attack your email

    They are not supposed to be able to and well designed e2ee services can't be. That's the whole point of e2ee.

    There is no such thing as e2ee LLMs. That's not how any of this works.

    I know. When did I say there is?

  • A local LLM is one YOU run on YOUR machine.

    Yes, that is exactly what I am saying. You seem to be confused by basic English.

    Look, Proton can at any time MITM attack your email

    They are not supposed to be able to and well designed e2ee services can't be. That's the whole point of e2ee.

    There is no such thing as e2ee LLMs. That's not how any of this works.

    I know. When did I say there is?

    So then you object to the premise that any LLM setup that isn't local can ever be "secure", and can't seem to articulate that.

    What exactly is dishonest here? The language on their site is factually accurate, I've had to read it 7 times today because of you all. You just object to the premise of non-local LLMs and are, IMO, disingenuously making that a "brand issue" because....why? It sounds like a very emotional argument as it's not backed by any technical discussion beyond "local only secure, nothing else."

    Beyond the fact that

    They are not supposed to be able to and well designed e2ee services can’t be.

    So then you trust that their system is well-designed already? What is this cognitive dissonance that they can secure the relatively insecure format of email, but can't figure out TLS and flushing logs for an LLM on their own servers? If anything, it's not even a complicated setup. TLS to the context window, don't keep logs, flush the data. How do you think no-log VPNs work? This isn't exactly all that far off from that.
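    To make that concrete, here is a minimal sketch of the "TLS in, process, return, don't persist" pattern described above. Everything in it is illustrative: handle_chat and run_model are hypothetical names, TLS termination is assumed to happen in front of this code, and it is not Proton's actual implementation.

    ```python
    # Minimal sketch of a no-log chat endpoint, assuming TLS protects the transport.
    def run_model(prompt: str) -> str:
        """Stand-in for the real LLM call - the model must see plaintext tokens."""
        return f"(model reply to: {prompt!r})"

    def handle_chat(prompt: str) -> str:
        reply = run_model(prompt)   # plaintext exists only here, transiently in memory
        # Deliberately no logging, no database write, no analytics hook:
        # once this returns, prompt and reply are simply garbage-collected.
        return reply

    if __name__ == "__main__":
        print(handle_chat("hello"))
    ```

    The hard part, as the rest of the thread points out, isn't the code; it's that users have to trust the operator to actually run something like this and nothing more.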

  • So then you object to the premise that any LLM setup that isn't local can ever be "secure", and can't seem to articulate that.

    What exactly is dishonest here? The language on their site is factually accurate, I've had to read it 7 times today because of you all. You just object to the premise of non-local LLMs and are, IMO, disingenuously making that a "brand issue" because....why? It sounds like a very emotional argument as it's not backed by any technical discussion beyond "local only secure, nothing else."

    Beyond the fact that

    They are not supposed to be able to and well designed e2ee services can’t be.

    So then you trust that their system is well-designed already? What is this cognitive dissonance that they can secure the relatively insecure format of email, but can't figure out TLS and flushing logs for an LLM on their own servers? If anything, it's not even a complicated setup. TLS to the context window, don't keep logs, flush the data. How do you think no-log VPNs work? This isn't exactly all that far off from that.

    What exactly is dishonest here? The language on their site is factually accurate, I've had to read it 7 times today because of you all.

    I object to how it is written. Yes, technically it is not wrong. But it intentionally uses confusing language and rare technical terminology to imply it is as secure as e2ee. They compare it to Proton Mail and Drive, which are supposedly e2ee.

  • How much longer until the AI bubble pops? I'm tired of this.

    Here's the thing, it kind of already has; the new AI push is related to smaller projects and AI agents like Claude Code and GitHub Copilot integration. MCPs are also starting to pick up some steam as a way to refine prompt engineering. The basic AI "bubble" popped already; what we're seeing now is an odd arms race of smaller AI projects, thanks to companies like DeepSeek pushing AI hosting costs so low that anyone can reasonably host and tweak their own LLMs without it costing a fortune. It's really an interesting thing to watch, but honestly I don't think we're going to see the major gains that the tech industry is trying to push anytime soon. Take any claims of AGI and OpenAI "breakthroughs" with a mountain of salt, because they will do anything to keep the hype up and drive up their stock prices. Sam Altman is a con man and nothing more; don't believe what he says.

  • What exactly is dishonest here? The language on their site is factually accurate, I've had to read it 7 times today because of you all.

    I object to how it is written. Yes, technically it is not wrong. But it intentionally uses confusing language and rare technical terminology to imply it is as secure as e2ee. They compare it to Proton Mail and Drive, which are supposedly e2ee.

    They compare it to proton mail and drive that are supposedly e2ee.

    Only drive is. Email is not always e2ee, it uses zero-access encryption which I believe is the same exact mechanism used by this chatbot, so the comparison is quite fair tbh.
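    A quick sketch of the distinction being discussed, using PyNaCl's SealedBox as a stand-in (illustrative only, not Proton's actual code): with zero-access encryption the server briefly sees plaintext and immediately encrypts it to the user's public key, so it can't read it later; with e2ee the sender's client encrypts before the server ever sees anything.

    ```python
    from nacl.public import PrivateKey, SealedBox

    user_key = PrivateKey.generate()                 # private key stays with the user

    # Zero-access (roughly how inbound external email is handled):
    # the server briefly sees plaintext, then seals it to the user's public key.
    incoming = b"mail from an outside sender"        # server sees this in the clear, once
    stored = SealedBox(user_key.public_key).encrypt(incoming)

    # End-to-end (roughly how Drive, or Proton-to-Proton mail, works):
    # the sender's client does the sealing, so the server only ever sees ciphertext.
    sent = SealedBox(user_key.public_key).encrypt(b"file contents, sealed client-side")

    # Either way, only the user's private key can open what is stored:
    print(SealedBox(user_key).decrypt(stored).decode())
    ```

    The difference is purely about when the server sees plaintext: never (e2ee) versus once, at ingestion (zero-access), which is why the chatbot comparison to Mail is closer than the comparison to Drive.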

  • This post did not contain any content.

    Proton has my vote for fastest company ever to completely enshittify.

  • How much longer until the AI bubbles pops? I'm tired of this.

    It's when the coffers of Microsoft, Amazon, Meta and investment banks dry up. All of them are losing billions every month, but it's all driven by fewer than 10 companies. Nvidia is lapping up the money of course, but once the AI companies stop buying GPUs in crazy numbers it's going to be a rocky ride down.

  • What exactly is dishonest here? The language on their site is factually accurate, I've had to read it 7 times today because of you all.

    I object to how it is written. Yes, technically it is not wrong. But it intentionally uses confusing language and rare technical terminology to imply it is as secure as e2ee. They compare it to Proton Mail and Drive, which are supposedly e2ee.

    It is e2ee -- with the LLM context window!

    When you email someone outside Proton servers, doesn't the same thing happen anyway? But the LLM is on Proton servers, so what's the actual vulnerability?

  • Both your take, and the author, seem to not understand how LLMs work. At all.

    At some point, yes, an LLM has to process clear-text tokens. There's no getting around that. Anyone who creates an LLM that can run a 30-billion-parameter model on encrypted data will become an overnight billionaire from military contracts alone. If you want absolute privacy, process locally. Lumo has limitations, but goes farther than duck.ai at respecting privacy. Your threat model and equipment mean YOU make a decision for YOUR needs. This is an option. This is not trying to be one size fits all. You don't HAVE to use it. It's not being forced down your throat like Gemini or Copilot.

    And their LLMs - Mistral, OpenHands and OLMo - are all open source. It's in their documentation. So this article straight up lies about that. Like.... did Google write this article? It's simply propaganda.

    Also, Proton does have some circumstances where it lets you decrypt your own email locally. Otherwise it's basically impossible to search your email for text in the email body. They already had that as an option, and if users want AI assistants, that's obviously their bridge. But it's not a default setup. It's an option you have to set up. It's not for everyone. Some users want that. It's not forced on everyone. Chill TF out.

    If an AI can work on encrypted data, it's not encrypted.
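    A toy illustration of the point both of these comments are circling: token IDs are derived from the literal text, so the model can only work on plaintext. The whitespace tokenizer and the PyNaCl usage here are illustrative assumptions, nothing from the article or from Proton.

    ```python
    from nacl.secret import SecretBox
    from nacl.utils import random as nacl_random

    vocab: dict[str, int] = {}
    def toy_tokenize(text: str) -> list[int]:
        # Real tokenizers are BPE-based, but the point is the same:
        # IDs only mean something for text like what the model was trained on.
        return [vocab.setdefault(tok, len(vocab)) for tok in text.split()]

    prompt = "summarize my tax documents please"
    ciphertext = SecretBox(nacl_random(SecretBox.KEY_SIZE)).encrypt(prompt.encode())

    print(toy_tokenize(prompt))             # meaningful, repeatable token IDs
    print(toy_tokenize(ciphertext.hex()))   # one opaque blob the model can't use
    ```

    Homomorphic-encryption schemes that could change this exist in research, but nothing remotely practical at LLM scale, which is what the "overnight billionaire" remark above is getting at.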

  • It is e2ee -- with the LLM context window!

    When you email someone outside Proton servers, doesn't the same thing happen anyway? But the LLM is on Proton servers, so what's the actual vulnerability?

    It is e2ee

    It is not. Not in any meaningful way.

    When you email someone outside Proton servers, doesn't the same thing happen anyway?

    Yes it does.

    But the LLM is on Proton servers, so what's the actual vulnerability?

    Again, the issue is not the technology. The issue is deceptive marketing. Why doesn't their site clearly say what you say? Why use confusing technical terms most people won't understand and compare it to Drive, which is fully e2ee?

  • They compare it to proton mail and drive that are supposedly e2ee.

    Only drive is. Email is not always e2ee, it uses zero-access encryption which I believe is the same exact mechanism used by this chatbot, so the comparison is quite fair tbh.

    Well, even the mail is sometimes e2ee. Making the comparison without specifying is like marketing your safe as being used in Fort Knox, when it turns out it is a cheap safe used for payroll documents, like in every company. Technically true but misleading as hell. When you hear Fort Knox, you think gold vault. If you hear Proton Mail, you think e2ee, even if most mails are external.

    And even if you disagree about mail, there is no excuse for comparing it to Proton Drive.

  • It's when the coffers of Microsoft, Amazon, Meta and investment banks dry up. All of them are losing billions every month, but it's all driven by fewer than 10 companies. Nvidia is lapping up the money of course, but once the AI companies stop buying GPUs in crazy numbers it's going to be a rocky ride down.

    Is it like crypto, where CPUs were good, then GPUs, then FPGAs, then ASICs? Or is this different?

  • I'm just saying Andy sucking up to Trump is a red flag. I'm cancelling in 2026 🫠

    What are you considering as alternatives?

  • What are you considering as alternatives?

    I highly suggest Tuta, https://tuta.com/, or other conventional mailboxes like https://mailbox.org/en/

  • A local LLM is one YOU run on YOUR machine.

    Yes, that is exactly what I am saying. You seem to be confused by basic English.

    Look, Proton can at any time MITM attack your email

    They are not supposed to be able to and well designed e2ee services can't be. That's the whole point of e2ee.

    There is no such thing as e2ee LLMs. That's not how any of this works.

    I know. When did I say there is?

    They are not supposed to be able to and well designed e2ee services can’t be. That’s the whole point of e2ee.

    You're using their client. You get a fresh copy every time it changes. Of course you are vulnerable to a MITM attack, if they choose to attempt one.
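    To illustrate the client-trust argument: the encryption math can be flawless, but the client code doing it is delivered by the same party you are encrypting against. This is a toy sketch with hypothetical names (send_message_e2ee, exfiltrate), using PyNaCl; it is not anything Proton ships.

    ```python
    from nacl.public import PrivateKey, SealedBox

    def send_message_e2ee(plaintext: str, recipient_pk, send, exfiltrate=None):
        # A compromised or coerced client build can copy the plaintext out
        # *before* encrypting; the e2ee downstream is still perfect, and useless.
        if exfiltrate is not None:
            exfiltrate(plaintext)
        send(SealedBox(recipient_pk).encrypt(plaintext.encode()))

    # Demo with stand-in callables (everything here is illustrative):
    recipient = PrivateKey.generate()
    send_message_e2ee(
        "meet at noon",
        recipient.public_key,
        send=lambda ct: print("wire:", ct[:8].hex(), "..."),
        exfiltrate=lambda pt: print("quietly leaked:", pt),
    )
    ```

    That is the sense in which a web-delivered e2ee client is always one malicious update away from a MITM; the crypto can be sound while the delivery channel for the client remains a trust assumption.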

  • They are not supposed to be able to and well designed e2ee services can’t be. That’s the whole point of e2ee.

    You're using their client. You get a fresh copy every time it changes. Of course you are vulnerable to a MITM attack, if they choose to attempt one.

    If you insist on being a fanboy then go ahead. But this is like arguing a bulletproof vest is useless because it does not cover your entire body.

  • Here's the thing, it kind of already has; the new AI push is related to smaller projects and AI agents like Claude Code and GitHub Copilot integration. MCPs are also starting to pick up some steam as a way to refine prompt engineering. The basic AI "bubble" popped already; what we're seeing now is an odd arms race of smaller AI projects, thanks to companies like DeepSeek pushing AI hosting costs so low that anyone can reasonably host and tweak their own LLMs without it costing a fortune. It's really an interesting thing to watch, but honestly I don't think we're going to see the major gains that the tech industry is trying to push anytime soon. Take any claims of AGI and OpenAI "breakthroughs" with a mountain of salt, because they will do anything to keep the hype up and drive up their stock prices. Sam Altman is a con man and nothing more; don't believe what he says.

    You're saying the AI bubble has popped because even more smaller companies and individuals are getting in on the action?

    That's kind of the definition of a bubble, actually. When more and more people start trying to make money on a trend that doesn't have that much real value in it. Nearly the same thing happened with the dotcom bubble. It wasn't that the web/tech wasn't valuable - it's now the most valuable sector of the world economy - but while the bubble was expanding, more was being invested than it was worth, because no one wanted to miss out and it was accessible enough that almost anyone could try it out.

  • Well, even the mail is sometimes e2ee. Making the comparison without specifying is like marketing your safe as being used in Fort Knox, when it turns out it is a cheap safe used for payroll documents, like in every company. Technically true but misleading as hell. When you hear Fort Knox, you think gold vault. If you hear Proton Mail, you think e2ee, even if most mails are external.

    And even if you disagree about mail, there is no excuse for comparing it to Proton Drive.

    Email is almost always zero-access encrypted (like the live chats), considering the small % of Proton users and the number of emails sent between them (or the even smaller % of PGP users). Drive is e2ee, like chat history.
    Basically I see email : chats = drive : history.

    Anyway, I agree it could be done better, but I don't really see the big deal. Any user unable to understand this won't get the difference between zero-access and e2ee.

  • You're saying the AI bubble has popped because even more smaller companies and individuals are getting in on the action?

    That's kind of the definition of a bubble, actually. When more and more people start trying to make money on a trend that doesn't have that much real value in it. Nearly the same thing happened with the dotcom bubble. It wasn't that the web/tech wasn't valuable - it's now the most valuable sector of the world economy - but while the bubble was expanding, more was being invested than it was worth, because no one wanted to miss out and it was accessible enough that almost anyone could try it out.

    I literally said exactly what you're explaining. I'm not sure what you're trying to accomplish here....

  • It is e2ee

    It is not. Not in any meaningful way.

    When you email someone outside Proton servers, doesn't the same thing happen anyway?

    Yes it does.

    But the LLM is on Proton servers, so what's the actual vulnerability?

    Again, the issue is not the technology. The issue is deceptive marketing. Why doesn't their site clearly say what you say? Why use confusing technical terms most people won't understand and compare it to Drive, which is fully e2ee?

    It is deceptive. This thread is full of people who know enough to not be deceived and they think it should be obvious to everyone... but it's not.
