
Proton’s Lumo AI chatbot: not end-to-end encrypted, not open source

Technology
  • For a critical blog, the first few paragraphs sound a lot like they're shilling for Proton.

    I'm not sure if I'm supposed to be impressed by the author's witty wording, but "the cool trick they do" is - full encryption.

    Moving on.

    But that’s misleading. The actual large language model is not open. The code for Proton’s bit of Lumo is not open source. The only open source bit that Proton’s made available is just some of Proton’s controls for the LLM. [GitHub]

    In the single most damning thing I can say about Proton in 2025, the Proton GitHub repository has a “cursorrules” file. They’re vibe-coding their public systems. Much secure!

    oof.

    Over the years I've heard many people claim that Proton's servers being in Switzerland is more secure than other EU countries - well, there's also this now:

    Proton is moving its servers out of Switzerland to another country in the EU they haven’t specified. The Lumo announcement is the first that Proton’s mentioned this.

    No company is safe from enshittification - always look for, and base your choices on, the legally binding stuff, before you commit. Be wary of weasel wording. And always, always be ready to move* on when the enshittification starts despite your caution.


    * regarding email, there's redirection services a.k.a. eternal email addresses - in some cases run by venerable non-profits.

    Really? This article reads like it's AI slop reproducing Proton copy then pivoting to undermine them with straight up incorrect info.

    You know how Microsoft manages to make LibreOffice throw errors on Windows 11? You really didn't stop to think that Google might contract out some slop farms to shit on Proton?

  • Both you and the author seem to not understand how LLMs work. At all.

    At some point, yes, an LLM has to process clear text tokens. There's no getting around that. Anyone who creates an LLM that can process 30 billion parameters while encrypted will become an overnight billionaire from military contracts alone. If you want absolute privacy, process locally (a minimal sketch of what that looks like is at the end of this comment). Lumo has limitations, but goes farther than duck.ai at respecting privacy. Your threat model and equipment mean YOU make a decision for YOUR needs. This is an option. This is not trying to be one size fits all. You don't HAVE to use it. It's not being forced down your throat like Gemini or Copilot.

    And their LLM: it's Mistral, OpenHands, and OLMo, all open source. It's in their documentation. So this article is straight-up lying about that. Like.... Did Google write this article? It's simply propaganda.

    Also, Proton does have some circumstances where it lets you decrypt your own email locally. Otherwise it's basically impossible to search your email for text in the email body. They already had that as an option, and if users want AI assistants, that's obviously their bridge. But it's not a default setup. It's an option you have to set up. It's not for everyone. Some users want that. It's not forced on everyone. Chill TF out.
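    To make "process locally" concrete, here's a minimal sketch (my own illustration, not anything Proton or the article describes) using llama-cpp-python to run an open-weights model entirely on your own machine, so the prompt never leaves it. The GGUF filename is a placeholder for whatever model you have downloaded.

    ```python
    # Local inference sketch: the prompt and the reply never leave this machine.
    # pip install llama-cpp-python ; the GGUF path below is a placeholder for
    # whatever open-weights model you have downloaded.
    from llama_cpp import Llama

    llm = Llama(model_path="./mistral-7b-instruct.Q4_K_M.gguf", n_ctx=2048)

    out = llm(
        "Explain in two sentences why zero-access encryption is not e2ee.",
        max_tokens=128,
    )
    print(out["choices"][0]["text"])
    ```

    That's the trade-off in one screen: full privacy, but you supply the hardware and settle for a model small enough to fit on it.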

    Their AI is not local, so adding it to your email means breaking e2ee. That's to some extent fine. You can make an informed decision about it.

    But Proton is not putting warning labels on this. They are trying to confuse people into thinking it is the same security as their e2ee mails. Just look at the "zero trust" bullshit on Proton's own page.

  • This post did not contain any content.

    This was it for me, cancelled my account. Fuck this Andy moron

  • How much longer until the AI bubble pops? I'm tired of this.

    Depends on what and with whom. Based on my current jobs with smaller companies and start-ups? Soon. They can't afford the tech debt they've brought onto themselves. Big companies? Who knows.

  • This post did not contain any content.

    Who, Proton???? Nooo come on… who could have ever seen this coming? 🐸🍲

  • For a critical blog, the first few paragraphs sound a lot like they're shilling for Proton.

    I'm not sure if I'm supposed to be impressed by the author's witty wording, but "the cool trick they do" is - full encryption.

    Moving on.

    But that’s misleading. The actual large language model is not open. The code for Proton’s bit of Lumo is not open source. The only open source bit that Proton’s made available is just some of Proton’s controls for the LLM. [GitHub]

    In the single most damning thing I can say about Proton in 2025, the Proton GitHub repository has a “cursorrules” file. They’re vibe-coding their public systems. Much secure!

    oof.

    Over the years I've heard many people claim that Proton's servers being in Switzerland is more secure than other EU countries - well, there's also this now:

    Proton is moving its servers out of Switzerland to another country in the EU they haven’t specified. The Lumo announcement is the first that Proton’s mentioned this.

    No company is safe from enshittification - always look for, and base your choices on, the legally binding stuff, before you commit. Be wary of weasel wording. And always, always be ready to move* on when the enshittification starts despite your caution.


    * regarding email, there's redirection services a.k.a. eternal email addresses - in some cases run by venerable non-profits.

    Over the years I've heard many people claim that Proton's servers being in Switzerland is more secure than other EU countries

    Things change. They are doing it because Switzerland is proposing legislation that would definitely make that claim untrue.
    Europe is no paradise, especially certain countries, but it still makes sense.

    From the lumo announcement:

    Lumo represents one of many investments Proton will be making before the end of the decade to ensure that Europe stays strong, independent, and technologically sovereign. Because of legal uncertainty around Swiss government proposals to introduce mass surveillance — proposals that have been outlawed in the EU — Proton is moving most of its physical infrastructure out of Switzerland. Lumo will be the first product to move.

    This shift represents an investment of over €100 million into the EU proper. While we do not give up the fight for privacy in Switzerland (and will continue to fight proposals that we believe will be extremely damaging to the Swiss economy), Proton is also embracing Europe and helping to develop a sovereign EuroStack for the future of our home continent. Lumo is European, and proudly so, and here to serve everybody who cares about privacy and security worldwide.

  • Their AI is not local, so adding it to your email means breaking e2ee. That's to some extent fine. You can make an informed decision about it.

    But Proton is not putting warning labels on this. They are trying to confuse people into thinking it is the same security as their e2ee mails. Just look at the "zero trust" bullshit on Proton's own page.

    Where does it say "zero trust" 'on Proton's own page'? It does not say "zero-trust" anywhere; it says "zero-access". The data is encrypted at rest, so it is not e2ee. They never mention end-to-end encryption for Lumo, except for ghost mode, and there they are talking about the chat once it's complete and you choose to leave it there to use later, not about the prompts you send in.

    Zero-access encryption

    Your chats are stored using our battle-tested zero-access encryption, so even we can’t read them, similar to other Proton services such as Proton Mail, Proton Drive, and Proton Pass. Our encryption is open source and trusted by over 100 million people to secure their data.

    Which means that they are not advertising anything they are not doing or cannot do.

    By posting this disinformation, all you're achieving is pushing people back to all the shit "free" services out there, because many will start believing that privacy is way harder than it actually is ('so what's the point?') or, even worse, that 'no alternative will help me be more private, so I might as well just stop trying'.

  • Their AI is not local, so adding it to your email means breaking e2ee. That's to some extent fine. You can make an informed decision about it.

    But Proton is not putting warning labels on this. They are trying to confuse people into thinking it is the same security as their e2ee mails. Just look at the "zero trust" bullshit on Proton's own page.

    Scribe can be local, if that's what you are referring to.

    They also have a specific section on it at https://proton.me/support/proton-scribe-writing-assistant#local-or-server

    Also, emails for the most part are not e2ee; they can't be, because the other party is not using encryption. Proton uses "zero-access" encryption instead, which is different: Proton gets the email in clear text, encrypts it with your public PGP key, deletes the original, and sends it to you (a rough sketch of that flow follows the quote below).

    See https://proton.me/support/proton-mail-encryption-explained

    The email is encrypted in transit using TLS. It is then unencrypted and re-encrypted (by us) for storage on our servers using zero-access encryption. Once zero-access encryption has been applied, no-one except you can access emails stored on our servers (including us). It is not end-to-end encrypted, however, and might be accessible to the sender’s email service.
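    To make the "zero-access" idea concrete, here's a rough sketch in Python of that flow, using PyNaCl sealed boxes as a stand-in for the PGP public-key encryption Proton actually uses. It's an illustration of the principle only, with made-up function names, not Proton's code.

    ```python
    # Zero-access storage sketch: the server sees the plaintext once on arrival,
    # encrypts it to the user's PUBLIC key, and keeps only the ciphertext.
    # pip install pynacl
    from nacl.public import PrivateKey, SealedBox

    # The user generates the keypair; only the user ever holds the private key.
    user_private_key = PrivateKey.generate()
    user_public_key = user_private_key.public_key

    def server_store_incoming_mail(plaintext: bytes) -> bytes:
        """Server side (hypothetical name): mail arrives in clear text over TLS,
        gets encrypted to the user's public key, and the plaintext is dropped."""
        ciphertext = SealedBox(user_public_key).encrypt(plaintext)
        return ciphertext  # the server cannot decrypt this later

    def user_read_mail(ciphertext: bytes) -> bytes:
        """Client side: only the holder of the private key can decrypt."""
        return SealedBox(user_private_key).decrypt(ciphertext)

    stored = server_store_incoming_mail(b"Meeting moved to 3pm")
    assert user_read_mail(stored) == b"Meeting moved to 3pm"
    ```

    The key point, and the reason it's "zero-access" rather than e2ee: the server did see the plaintext for a moment at ingestion, so you're trusting it to encrypt-and-forget rather than being cryptographically prevented from reading it.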

  • Their AI is not local, so adding it to your email means breaking e2ee. That's to some extent fine. You can make an informed decision about it.

    But Proton is not putting warning labels on this. They are trying to confuse people into thinking it is the same security as their e2ee mails. Just look at the "zero trust" bullshit on Proton's own page.

    My friend, I think the confusion stems from you thinking you have deep technical understanding on this, when everything you say demonstrates that you don't.

    First off, you don't even know the terminology. A local LLM is one YOU run on YOUR machine.

    Lumo apparently runs on Proton servers - where their email and docs all are as well. So I'm not sure what "Their AI is not local!" even means, other than that you don't know what LLMs do or what they actually are. Do you expect a 32B LLM that would use about a 32GB video card to all get downloaded and run in a browser? Buddy....just...no. (The rough math behind that 32GB figure is sketched at the end of this comment.)

    Look, Proton can at any time MITM attack your email, or if you use them as a VPN, MITM your VPN traffic if it feels like it. Any VPN or secure email provider can actually do that. Mullvad can, Nord, take your pick. That's just a fact. Google's business model is to MITM attack your life, so we have the counterfactual already. So your threat model needs to include how much you trust the entity handling your data not to do that, whether intentionally or by letting others in through negligence.

    There is no such thing as e2ee LLMs. That's not how any of this works. Doing e2ee for the chats to get what you type into the LLM context window, letting the LLM process tokens the only way they can, getting you back your response, and getting it to not keep logs or data, is about as good as it gets for not having a local LLM - which, remember, means on YOUR machine. If that's unacceptable for you, then don't use it. But don't brandish your ignorance like you're some expert, and that everyone on earth needs to adhere to whatever "standards" you think up that seem ill-informed.

    Also, clearly you aren't using Proton anyway because if you need to search the text of your emails, you have to process that locally, and you have to click through 2 separate warnings that tell you in all bold text "This breaks the e2ee! Are you REALLY sure you want to do this?" So your complaint about warnings is just a flag saying you don't actually know and are just guessing.
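    For anyone wondering where "a 32B LLM would use about a 32GB video card" comes from, it's roughly parameter count times bytes per parameter for the weights alone. A back-of-the-envelope estimate (my own numbers, not Proton's):

    ```python
    # Rough VRAM needed just to hold a dense model's weights, by precision
    # (ignores KV cache and activation overhead, so real needs are higher).
    def weights_vram_gb(params_billion: float, bytes_per_param: float) -> float:
        return params_billion * 1e9 * bytes_per_param / 1024**3

    for label, bytes_per_param in [("fp16", 2.0), ("int8", 1.0), ("int4", 0.5)]:
        print(f"32B @ {label}: ~{weights_vram_gb(32, bytes_per_param):.0f} GB")
    # 32B @ fp16: ~60 GB
    # 32B @ int8: ~30 GB
    # 32B @ int4: ~15 GB
    ```

    So the ~32 GB figure lines up with an 8-bit quantized 32B model, and either way it is not something that gets downloaded and run inside a browser tab.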

  • This was it for me, cancelled my account. Fuck this Andy moron

    Well, I'm keeping mine. I'm actually very happy with it. This article is pure slop, with loads of disinformation and an evident lack of research. It looks like it was made with some AI bullshit and the writer didn't even check what that thing vomited.

  • For a critical blog, the first few paragraphs sound a lot like they're shilling for Proton.

    I'm not sure if I'm supposed to be impressed by the author's witty wording, but "the cool trick they do" is - full encryption.

    Moving on.

    But that’s misleading. The actual large language model is not open. The code for Proton’s bit of Lumo is not open source. The only open source bit that Proton’s made available is just some of Proton’s controls for the LLM. [GitHub]

    In the single most damning thing I can say about Proton in 2025, the Proton GitHub repository has a “cursorrules” file. They’re vibe-coding their public systems. Much secure!

    oof.

    Over the years I've heard many people claim that Proton's servers being in Switzerland is more secure than other EU countries - well, there's also this now:

    Proton is moving its servers out of Switzerland to another country in the EU they haven’t specified. The Lumo announcement is the first that Proton’s mentioned this.

    No company is safe from enshittification - always look for, and base your choices on, the legally binding stuff, before you commit. Be wary of weasel wording. And always, always be ready to move* on when the enshittification starts despite your caution.


    * regarding email, there's redirection services a.k.a. eternal email addresses - in some cases run by venerable non-profits.

    Switzerland has a surveillance law in the works that will force VPNs, messaging apps, and online platforms to log users' identities, IP addresses, and metadata for government access

  • This post did not contain any content.

    I'm just saying Andy sucking up to Trump is a red flag. I'm cancelling in 2026 🫠

  • How much longer until the AI bubble pops? I'm tired of this.

    Time to face the facts: this utter shit is here to stay, just like every other bit of enshittification we get exposed to.

  • My friend, I think the confusion stems from you thinking you have deep technical understanding on this, when everything you say demonstrates that you don't.

    First off, you don't even know the terminology. A local LLM is one YOU run on YOUR machine.

    Lumo apparently runs on Proton servers - where their email and docs all are as well. So I'm not sure what "Their AI is not local!" even means, other than that you don't know what LLMs do or what they actually are. Do you expect a 32B LLM that would use about a 32GB video card to all get downloaded and run in a browser? Buddy....just...no.

    Look, Proton can at any time MITM attack your email, or if you use them as a VPN, MITM your VPN traffic if it feels like it. Any VPN or secure email provider can actually do that. Mullvad can, Nord, take your pick. That's just a fact. Google's business model is to MITM attack your life, so we have the counterfactual already. So your threat model needs to include how much you trust the entity handling your data not to do that, whether intentionally or by letting others in through negligence.

    There is no such thing as e2ee LLMs. That's not how any of this works. Doing e2ee for the chats to get what you type into the LLM context window, letting the LLM process tokens the only way they can, getting you back your response, and getting it to not keep logs or data, is about as good as it gets for not having a local LLM - which, remember, means on YOUR machine. If that's unacceptable for you, then don't use it. But don't brandish your ignorance like you're some expert, and that everyone on earth needs to adhere to whatever "standards" you think up that seem ill-informed.

    Also, clearly you aren't using Proton anyway because if you need to search the text of your emails, you have to process that locally, and you have to click through 2 separate warnings that tell you in all bold text "This breaks the e2ee! Are you REALLY sure you want to do this?" So your complaint about warnings is just a flag saying you don't actually know and are just guessing.

    A local LLM is one YOU run on YOUR machine.

    Yes, that is exactly what I am saying. You seem to be confused by basic English.

    Look, Proton can at any time MITM attack your email

    They are not supposed to be able to and well designed e2ee services can't be. That's the whole point of e2ee.

    There is no such thing as e2ee LLMs. That's not how any of this works.

    I know. When did I say there is?

  • A local LLM is one YOU run on YOUR machine.

    Yes, that is exactly what I am saying. You seem to be confused by basic English.

    Look, Proton can at any time MITM attack your email

    They are not supposed to be able to and well designed e2ee services can't be. That's the whole point of e2ee.

    There is no such thing as e2ee LLMs. That's not how any of this works.

    I know. When did I say there is?

    So then you object to the premise that any LLM setup that isn't local can ever be "secure", and can't seem to articulate that.

    What exactly is dishonest here? The language on their site is factually accurate, I've had to read it 7 times today because of you all. You just object to the premise of non-local LLMs and are, IMO, disingenuously making that a "brand issue" because....why? It sounds like a very emotional argument as it's not backed by any technical discussion beyond "local only secure, nothing else."

    Beyond the fact that

    They are not supposed to be able to and well designed e2ee services can’t be.

    So then you trust that their system is well-designed already? What is this cognitive dissonance that they can secure the relatively insecure format of email, but can't figure out TLS and flushing logs for an LLM on their own servers? If anything, it's not even a complicated setup. TLS to the context window, don't keep logs, flush the data. How do you think no-log VPNs work? This isn't exactly all that far off from that.
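    As a rough sketch of what "TLS to the context window, don't keep logs, flush the data" could look like (a toy illustration with a stand-in model call, not Proton's implementation):

    ```python
    # Toy "no-log" inference path: the prompt exists only in memory for the
    # lifetime of the request; nothing is written to disk or to a logger.

    def run_model(prompt: str) -> str:
        # Stand-in for real LLM inference running on the provider's GPUs.
        return f"(model reply to a {len(prompt)}-character prompt)"

    def handle_request(prompt: str) -> str:
        """Hypothetical handler: TLS terminates here, the prompt goes straight
        into the model, and the reply goes straight back to the client.
        Deliberately no logging and no persistence of prompt or reply."""
        return run_model(prompt)

    print(handle_request("hello"))
    ```

    Like a no-log VPN, that is a policy promise rather than a cryptographic guarantee.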

  • So then you object to the premise that any LLM setup that isn't local can ever be "secure", and can't seem to articulate that.

    What exactly is dishonest here? The language on their site is factually accurate, I've had to read it 7 times today because of you all. You just object to the premise of non-local LLMs and are, IMO, disingenuously making that a "brand issue" because....why? It sounds like a very emotional argument as it's not backed by any technical discussion beyond "local only secure, nothing else."

    Beyond the fact that

    They are not supposed to be able to and well designed e2ee services can’t be.

    So then you trust that their system is well-designed already? What is this cognitive dissonance that they can secure the relatively insecure format of email, but can't figure out TLS and flushing logs for an LLM on their own servers? If anything, it's not even a complicated setup. TLS to the context window, don't keep logs, flush the data. How do you think no-log VPNs work? This isn't exactly all that far off from that.

    What exactly is dishonest here? The language on their site is factually accurate, I've had to read it 7 times today because of you all.

    I object to how it is written. Yes, technically it is not wrong. But it intentionally uses confusing language and rare technical terminology to imply it is as secure as e2ee. They compare it to Proton Mail and Drive, which are supposedly e2ee.

  • How much longer until the AI bubble pops? I'm tired of this.

    Here's the thing: it kind of already has. The new AI push is related to smaller projects and AI agents like Claude Code and GitHub Copilot integration. MCPs are also starting to pick up some steam as a way to refine prompt engineering. The basic AI "bubble" popped already; what we're seeing now is an odd arms race of smaller AI projects, thanks to companies like DeepSeek pushing AI hosting costs so low that anyone can reasonably host and tweak their own LLMs without it costing a fortune. It's really an interesting thing to watch, but honestly I don't think we're going to see the major gains the tech industry is trying to push anytime soon. Take any claims of AGI and OpenAI "breakthroughs" with a mountain of salt, because they will do anything to keep the hype up and drive up their stock prices. Sam Altman is a con man and nothing more; don't believe what he says.

  • What exactly is dishonest here? The language on their site is factually accurate, I've had to read it 7 times today because of you all.

    I object to how it is written. Yes, technically it is not wrong. But it intentionally uses confusing language and rare technical terminology to imply it is as secure as e2ee. They compare it to Proton Mail and Drive, which are supposedly e2ee.

    They compare it to Proton Mail and Drive, which are supposedly e2ee.

    Only Drive is. Email is not always e2ee; it uses zero-access encryption, which I believe is the exact same mechanism used by this chatbot, so the comparison is quite fair tbh.

  • This post did not contain any content.

    Proton has my vote for fastest company ever to completely enshittify.

  • How much longer until the AI bubble pops? I'm tired of this.

    It's when the coffers of Microsoft, Amazon, Meta, and the investment banks dry up. All of them are losing billions every month, but it's all driven by fewer than 10 companies. Nvidia is lapping up the money, of course, but once the AI companies stop buying GPUs in crazy numbers it's going to be a rocky ride down.
