
Proton’s Lumo AI chatbot: not end-to-end encrypted, not open source

Technology
  • My friend, I think the confusion stems from you thinking you have deep technical understanding on this, when everything you say demonstrates that you don't.

    First off, you don't even know the terminology. A local LLM is one YOU run on YOUR machine.

    Lumo apparently runs on Proton servers - where their email and docs all are as well. So I'm not sure what "Their AI is not local!" even means, other than that you don't know what LLMs do or what they actually are. Do you expect a 32B LLM that would use about a 32GB video card to all get downloaded and run in a browser? Buddy....just...no.

    Look, Proton can at any time MITM attack your email, or if you use them as a VPN, MITM your VPN traffic if they feel like it. Any VPN or secure email provider can actually do that. Mullvad can, Nord, take your pick. That's just a fact. Google's business model is to MITM attack your life, so we have the counterfactual already. So your threat model needs to include how much you trust the entity handling your data not to do that, intentionally or by letting others do it through negligence.

    There is no such thing as e2ee LLMs. That's not how any of this works. Doing e2ee for the chats on their way into the LLM context window, letting the LLM process tokens the only way it can, getting your response back, and having it keep no logs or data is about as good as it gets without a local LLM - which, remember, means on YOUR machine (there's a toy sketch of why below). If that's unacceptable for you, then don't use it. But don't brandish your ignorance like you're some expert and insist that everyone on earth adhere to whatever ill-informed "standards" you think up.

    Also, clearly you aren't using Proton anyway because if you need to search the text of your emails, you have to process that locally, and you have to click through 2 separate warnings that tell you in all bold text "This breaks the e2ee! Are you REALLY sure you want to do this?" So your complaint about warnings is just a flag saying you don't actually know and are just guessing.
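
    To make the plaintext-token point concrete, here's a toy sketch (my own illustration, nothing to do with Proton's actual pipeline): no matter how the prompt is encrypted in transit or at rest, the tokenizer only works on plaintext, so the server has to decrypt before inference.

    ```python
    # Toy illustration (hypothetical, not Proton's code): an LLM tokenizer only
    # works on plaintext, so whatever encryption protects the prompt in transit
    # or at rest has to be undone before inference can happen.
    from cryptography.fernet import Fernet  # pip install cryptography

    def toy_tokenize(text: str) -> list[str]:
        # Stand-in for a real tokenizer: just split on whitespace.
        return text.split()

    key = Fernet.generate_key()                    # in a real system, a per-user key
    cipher = Fernet(key)

    prompt = "summarize my meeting notes"
    ciphertext = cipher.encrypt(prompt.encode())   # what travels / sits on disk

    # Tokenizing the ciphertext is meaningless noise to the model:
    print(toy_tokenize(ciphertext.decode()))       # one opaque base64 blob

    # The inference side has to decrypt to get usable tokens:
    plaintext = cipher.decrypt(ciphertext).decode()
    print(toy_tokenize(plaintext))                 # ['summarize', 'my', 'meeting', 'notes']
    ```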

    A local LLM is one YOU run on YOUR machine.

    Yes, that is exactly what I am saying. You seem to be confused by basic English.

    Look, Proton can at any time MITM attack your email

    They are not supposed to be able to and well designed e2ee services can't be. That's the whole point of e2ee.

    There is no such thing as e2ee LLMs. That's not how any of this works.

    I know. When did I say there is?

  • A local LLM is one YOU run on YOUR machine.

    Yes, that is exactly what I am saying. You seem to be confused by basic English.

    Look, Proton can at any time MITM attack your email

    They are not supposed to be able to and well designed e2ee services can't be. That's the whole point of e2ee.

    There is no such thing as e2ee LLMs. That's not how any of this works.

    I know. When did I say there is?

    So then you object to the premise that any LLM setup that isn't local can ever be "secure", and can't seem to articulate that.

    What exactly is dishonest here? The language on their site is factually accurate, I've had to read it 7 times today because of you all. You just object to the premise of non-local LLMs and are, IMO, disingenuously making that a "brand issue" because....why? It sounds like a very emotional argument as it's not backed by any technical discussion beyond "local only secure, nothing else."

    Beyond the fact that

    They are not supposed to be able to and well designed e2ee services can’t be.

    So then you trust that their system is well-designed already? What is this cognitive dissonance that they can secure the relatively insecure format of email, but can't figure out TLS and flushing logs for an LLM on their own servers? If anything, it's not even a complicated setup. TLS to the context window, don't keep logs, flush the data. How do you think no-log VPNs work? This isn't exactly all that far off from that.
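
    Roughly, a zero-retention inference path could look like this (hypothetical names, purely illustrative, not Proton's implementation):

    ```python
    # Hypothetical zero-retention chat handler (illustrative only): the prompt
    # lives in memory for the duration of the request, nothing is written to
    # disk, nothing is logged; the only copy goes back to the client over TLS.
    from dataclasses import dataclass

    @dataclass
    class ChatRequest:
        prompt: str    # already decrypted from the TLS stream by the web server

    def run_inference(prompt: str) -> str:
        # Stand-in for the actual model call (e.g. a self-hosted open model).
        return f"(model reply to: {prompt[:20]}...)"

    def handle_chat(request: ChatRequest) -> str:
        reply = run_inference(request.prompt)
        # Deliberately no logging, no analytics, no persistence of prompt or reply.
        del request    # drop our reference so the plaintext can be garbage-collected
        return reply

    print(handle_chat(ChatRequest(prompt="draft a polite decline email")))
    ```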

  • So then you object to the premise that any LLM setup that isn't local can ever be "secure", and can't seem to articulate that.

    What exactly is dishonest here? The language on their site is factually accurate, I've had to read it 7 times today because of you all. You just object to the premise of non-local LLMs and are, IMO, disingenuously making that a "brand issue" because....why? It sounds like a very emotional argument as it's not backed by any technical discussion beyond "local only secure, nothing else."

    Beyond the fact that

    They are not supposed to be able to and well designed e2ee services can’t be.

    So then you trust that their system is well-designed already? What is this cognitive dissonance that they can secure the relatively insecure format of email, but can't figure out TLS and flushing logs for an LLM on their own servers? If anything, it's not even a complicated setup. TLS to the context window, don't keep logs, flush the data. How do you think no-log VPNs work? This isn't exactly all that far off from that.

    What exactly is dishonest here? The language on their site is factually accurate, I've had to read it 7 times today because of you all.

    I object to how it is written. Yes, technically it is not wrong. But it intentionally uses confusing language and rare technical terminology to imply it is as secure as e2ee. They compare it to proton mail and drive that are supposedly e2ee.

  • How much longer until the AI bubble pops? I'm tired of this.

    Here's the thing, it kind of already has. The new AI push is related to smaller projects and AI agents like Claude Code and GitHub Copilot integration. MCPs are also starting to pick up some steam as a way to refine prompt engineering. The basic AI "bubble" popped already; what we're seeing now is an odd arms race of smaller AI projects, thanks to companies like DeepSeek pushing AI hosting costs so low that anyone can reasonably host and tweak their own LLMs without it costing a fortune. It's really an interesting thing to watch, but honestly I don't think we're going to see the major gains the tech industry is trying to push anytime soon. Take any claims of AGI and OpenAI "breakthroughs" with a mountain of salt, because they will do anything to keep the hype up and drive up their stock prices. Sam Altman is a con man and nothing more, don't believe what he says.

  • What exactly is dishonest here? The language on their site is factually accurate, I've had to read it 7 times today because of you all.

    I object to how it is written. Yes, technically it is not wrong. But it intentionally uses confusing language and rare technical terminology to imply it is as secure as e2ee. They compare it to proton mail and drive that are supposedly e2ee.

    They compare it to proton mail and drive that are supposedly e2ee.

    Only drive is. Email is not always e2ee, it uses zero-access encryption which I believe is the same exact mechanism used by this chatbot, so the comparison is quite fair tbh.
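
    To spell out the difference, here's a toy contrast (my own example, not Proton's code): with e2ee the content is encrypted on your device and the server never sees plaintext; with zero-access the server briefly sees plaintext and then encrypts it at rest with your key, so it can't read it later.

    ```python
    # Toy contrast between e2ee and zero-access encryption (illustrative only).
    # Requires PyNaCl: pip install pynacl
    from nacl.public import PrivateKey, SealedBox

    user_key = PrivateKey.generate()           # stays on the user's device
    encrypt_box = SealedBox(user_key.public_key)
    decrypt_box = SealedBox(user_key)

    # e2ee (e.g. Drive): encrypted on the client, the server only ever sees this blob.
    e2ee_blob = encrypt_box.encrypt(b"my tax documents")

    # Zero-access (e.g. external mail): the server briefly holds plaintext,
    # then encrypts it at rest with the user's public key.
    def server_receives(plaintext: bytes) -> bytes:
        # This is the moment e2ee avoids: the server can read the plaintext here.
        return encrypt_box.encrypt(plaintext)

    stored_mail = server_receives(b"mail from a gmail sender")

    # Afterwards, in both cases, only the user can decrypt what is stored:
    print(decrypt_box.decrypt(e2ee_blob))
    print(decrypt_box.decrypt(stored_mail))
    ```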

  • This post did not contain any content.

    Proton has my vote for fastest company ever to completely enshittify.

  • How much longer until the AI bubble pops? I'm tired of this.

    It's when the coffers of Microsoft, Amazon, Meta and the investment banks dry up. All of them are losing billions every month, but it's all driven by fewer than 10 companies. Nvidia is lapping up the money of course, but once the AI companies stop buying GPUs in crazy numbers it's going to be a rocky ride down.

  • What exactly is dishonest here? The language on their site is factually accurate, I've had to read it 7 times today because of you all.

    I object to how it is written. Yes, technically it is not wrong. But it intentionally uses confusing language and rare technical terminology to imply it is as secure as e2ee. They compare it to proton mail and drive that are supposedly e2ee.

    It is e2ee -- with the LLM context window!

    When you email someone outside Proton servers, doesn't the same thing happen anyway? But the LLM is on Proton servers, so what's the actual vulnerability?

  • Both your take, and the author, seem to not understand how LLMs work. At all.

    At some point, yes, an LLM model has to process clear text tokens. There's no getting around that. Anyone who creates an LLM that can process 30 billion parameters while encrypted will become an overnight billionaire from military contracts alone. If you want absolute privacy, process locally. Lumo has limitations, but goes farther than duck.ai at respecting privacy. Your threat model and equipment mean YOU make a decision for YOUR needs. This is an option. This is not trying to be one size fits all. You don't HAVE to use it. It's not being forced down your throat like Gemini or CoPilot.

    And their LLMs - Mistral, OpenHands and OLMO - are all open source. It's in their documentation. So this article is straight up lies about that. Like.... Did Google write this article? It's simply propaganda.

    Also, Proton does have some circumstances where it lets you decrypt your own email locally. Otherwise it's basically impossible to search your email for text in the email body. They already had that as an option, and if users want AI assistants, that's obviously their bridge. But it's not a default setup. It's an option you have to set up. It's not for everyone. Some users want that. It's not forced on everyone. Chill TF out.

    If an AI can work on encrypted data, it's not encrypted.

  • It is e2ee -- with the LLM context window!

    When you email someone outside Proton servers, doesn't the same thing happen anyway? But the LLM is on Proton servers, so what's the actual vulnerability?

    It is e2ee

    It is not. Not in any meaningful way.

    When you email someone outside Proton servers, doesn't the same thing happen anyway?

    Yes it does.

    But the LLM is on Proton servers, so what's the actual vulnerability?

    Again, the issue is not the technology. The issue is deceptive marketing. Why doesn't their site clearly say what you say? Why use confusing technical terms most people won't understand and compare it to drive that is fully e2ee?

  • They compare it to proton mail and drive that are supposedly e2ee.

    Only drive is. Email is not always e2ee, it uses zero-access encryption which I believe is the same exact mechanism used by this chatbot, so the comparison is quite fair tbh.

    Well, even the mail is sometimes e2ee. Making the comparison without specifying is like marketing your safe as being used in Fort Knox, when it turns out it's a cheap safe used for payroll documents like in every company. Technically true but misleading as hell. When you hear Fort Knox, you think gold vault. If you hear Proton Mail, you think e2ee, even if most mails are external.

    And even if you disagree about mail, there is no excuse for comparing to proton drive.

  • It's when the coffers of Microsoft, Amazon, Meta and the investment banks dry up. All of them are losing billions every month, but it's all driven by fewer than 10 companies. Nvidia is lapping up the money of course, but once the AI companies stop buying GPUs in crazy numbers it's going to be a rocky ride down.

    Is it like crypto, where CPUs were good, then GPUs, then FPGAs, then ASICs? Or is this different?

  • I'm just saying Andy sucking up to Trump is a red flag. I'm cancelling in 2026 🫠

    What are you considering as alternatives?

  • What are you considering as alternatives?

    I highly suggest Tuta, https://tuta.com/, or other conventional mail boxes like https://mailbox.org/en/

  • A local LLM is one YOU run on YOUR machine.

    Yes, that is exactly what I am saying. You seem to be confused by basic English.

    Look, Proton can at any time MITM attack your email

    They are not supposed to be able to and well designed e2ee services can't be. That's the whole point of e2ee.

    There is no such thing as e2ee LLMs. That's not how any of this works.

    I know. When did I say there is?

    They are not supposed to be able to and well designed e2ee services can’t be. That’s the whole point of e2ee.

    You're using their client. You get a fresh copy every time it changes. Of course you are vulnerable to a MITM attack, if they choose to attempt one.
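
    Roughly what I mean, as conceptual Python (real clients are JavaScript, and none of this is Proton's actual code): e2ee only holds if the client code you're served keeps the key and the plaintext on your device.

    ```python
    # Conceptual sketch (toy cipher, hypothetical client functions): the e2ee
    # guarantee depends entirely on what the served client does after decrypting.
    def toy_decrypt(ciphertext: bytes, key: bytes) -> str:
        # Toy XOR "cipher", just so the example runs; not real cryptography.
        return bytes(c ^ key[i % len(key)] for i, c in enumerate(ciphertext)).decode()

    def honest_client(ciphertext: bytes, key: bytes) -> str:
        return toy_decrypt(ciphertext, key)          # decrypted locally, never leaves the device

    def backdoored_client(ciphertext: bytes, key: bytes, send_to_server) -> str:
        plaintext = toy_decrypt(ciphertext, key)
        send_to_server(plaintext)                    # one extra line and the e2ee is moot
        return plaintext

    key = b"secret"
    ciphertext = bytes(c ^ key[i % len(key)] for i, c in enumerate(b"meet at noon"))

    exfiltrated = []
    print(honest_client(ciphertext, key))                          # server learns nothing
    print(backdoored_client(ciphertext, key, exfiltrated.append))  # server now has the plaintext
    print(exfiltrated)
    ```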

  • They are not supposed to be able to and well designed e2ee services can’t be. That’s the whole point of e2ee.

    You're using their client. You get a fresh copy every time it changes. Of course you are vulnerable to a MITM attack, if they choose to attempt one.

    If you insist on being a fanboy then go ahead. But this is like arguing a bulletproof vest is useless because it does not cover your entire body.

  • Here's the thing, it kind of already has. The new AI push is related to smaller projects and AI agents like Claude Code and GitHub Copilot integration. MCPs are also starting to pick up some steam as a way to refine prompt engineering. The basic AI "bubble" popped already; what we're seeing now is an odd arms race of smaller AI projects, thanks to companies like DeepSeek pushing AI hosting costs so low that anyone can reasonably host and tweak their own LLMs without it costing a fortune. It's really an interesting thing to watch, but honestly I don't think we're going to see the major gains the tech industry is trying to push anytime soon. Take any claims of AGI and OpenAI "breakthroughs" with a mountain of salt, because they will do anything to keep the hype up and drive up their stock prices. Sam Altman is a con man and nothing more, don't believe what he says.

    You're saying the AI bubble has popped because even more small companies and individuals are getting in on the action?

    That's kind of the definition of a bubble, actually: more and more people trying to make money on a trend that doesn't have that much real value in it. Nearly the same thing happened with the dotcom bubble. It wasn't that the web/tech wasn't valuable - it's now the most valuable sector of the world economy - but while the bubble was expanding, more was being invested than it was worth, because no one wanted to miss out and it was accessible enough that almost anyone could try it out.

  • Well, even the mail is sometimes e2ee. Making the comparison without specifying is like marketing your safe as being used in Fort Knox, when it turns out it's a cheap safe used for payroll documents like in every company. Technically true but misleading as hell. When you hear Fort Knox, you think gold vault. If you hear Proton Mail, you think e2ee, even if most mails are external.

    And even if you disagree about mail, there is no excuse for comparing to proton drive.

    Email is almost always zero-access encrypted (like the live chats), considering the % of Proton users and the number of emails between them (or the even smaller % of PGP users). Drive is e2ee, like the chat history.
    Basically I see email : chats = drive : history.

    Anyway, I agree it could be done better, but I don't really see the big deal. Any user unable to understand this won't get the difference between zero-access and e2e.

  • You're saying the AI bubble has popped because even more small companies and individuals are getting in on the action?

    That's kind of the definition of a bubble, actually: more and more people trying to make money on a trend that doesn't have that much real value in it. Nearly the same thing happened with the dotcom bubble. It wasn't that the web/tech wasn't valuable - it's now the most valuable sector of the world economy - but while the bubble was expanding, more was being invested than it was worth, because no one wanted to miss out and it was accessible enough that almost anyone could try it out.

    I literally said exactly what you're explaining. I'm not sure what you're trying to accomplish here....

  • It is e2ee

    It is not. Not in any meaningful way.

    When you email someone outside Proton servers, doesn't the same thing happen anyway?

    Yes it does.

    But the LLM is on Proton servers, so what's the actual vulnerability?

    Again, the issue is not the technology. The issue is deceptive marketing. Why doesn't their site clearly say what you say? Why use confusing technical terms most people won't understand and compare it to drive that is fully e2ee?

    It is deceptive. This thread is full of people who know enough to not be deceived and they think it should be obvious to everyone... but it's not.
