Proton’s Lumo AI chatbot: not end-to-end encrypted, not open source

Technology
  • How much longer until the AI bubble pops? I'm tired of this.

    Time to face the facts: this utter shit is here to stay, just like every other bit of enshittification we get exposed to.

  • My friend, I think the confusion stems from you thinking you have deep technical understanding on this, when everything you say demonstrates that you don't.

    First off, you don't even know the terminology. A local LLM is one YOU run on YOUR machine.

    Lumo apparently runs on Proton servers, where their email and docs all are as well. So I'm not sure what "Their AI is not local!" even means, other than that you don't know what LLMs do or what they actually are. Do you expect a 32B LLM that would need about a 32GB video card to get downloaded and run in a browser? Buddy... just... no.

    Look, Proton can at any time MITM attack your email, or, if you use them as a VPN, MITM your VPN traffic if it feels like it. Any VPN or secure email provider can actually do that. Mullvad can, Nord can, take your pick. That's just a fact. Google's business model is to MITM attack your life, so we already have the counterexample. So your threat model needs to include how much you trust the entity handling your data not to do that, intentionally or by letting others through negligence.

    There is no such thing as e2ee LLMs. That's not how any of this works. Doing e2ee for the chats to get what you type into the LLM context window, letting the LLM process tokens the only way they can, getting your response back, and having it keep no logs or data is about as good as it gets without a local LLM, which, remember, means on YOUR machine. If that's unacceptable for you, then don't use it. But don't brandish your ignorance like you're some expert, insisting that everyone on earth adhere to whatever ill-informed "standards" you think up.

    Also, clearly you aren't using Proton anyway because if you need to search the text of your emails, you have to process that locally, and you have to click through 2 separate warnings that tell you in all bold text "This breaks the e2ee! Are you REALLY sure you want to do this?" So your complaint about warnings is just a flag saying you don't actually know and are just guessing.

    A local LLM is one YOU run on YOUR machine.

    Yes, that is exactly what I am saying. You seem to be confused by basic English.

    Look, Proton can at any time MITM attack your email

    They are not supposed to be able to and well designed e2ee services can't be. That's the whole point of e2ee.

    There is no such thing as e2ee LLMs. That's not how any of this works.

    I know. When did I say there is?
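
The flow described in the comment above (encrypt in transit, let the model see plaintext only inside the context window, return an encrypted response, persist nothing) can be sketched as a toy. The XOR keystream below is a throwaway stand-in for TLS, and `serve_chat` is a hypothetical stand-in for the LLM endpoint, not Proton's actual code:

```python
import hashlib
import secrets

def keystream_xor(key: bytes, data: bytes) -> bytes:
    """Toy stream cipher (stand-in for TLS); NOT real security."""
    out = bytearray()
    counter = 0
    while len(out) < len(data):
        block = hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        out.extend(block)
        counter += 1
    return bytes(x ^ y for x, y in zip(data, out))

def serve_chat(encrypted_prompt: bytes, session_key: bytes) -> bytes:
    """Hypothetical server: plaintext exists only inside this function."""
    prompt = keystream_xor(session_key, encrypted_prompt)  # decrypted here
    reply = f"echo: {prompt.decode()}".encode()            # stand-in for the LLM
    return keystream_xor(session_key, reply)               # encrypted response
    # prompt/reply go out of scope: nothing is logged or persisted

# Client side: the transport is encrypted, but the server had the plaintext.
key = secrets.token_bytes(32)
ct = keystream_xor(key, b"hello")
answer = keystream_xor(key, serve_chat(ct, key))
print(answer.decode())  # → echo: hello
```

The point of the sketch: encryption covers the wire, but the server-side function necessarily holds the plaintext while the model runs, which is exactly why this is transport encryption plus a no-logs promise, not e2ee.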

  • So then you object to the premise any LLM setup that isn't local can ever be "secure" and can't seem to articulate that.

    What exactly is dishonest here? The language on their site is factually accurate, I've had to read it 7 times today because of you all. You just object to the premise of non-local LLMs and are, IMO, disingenuously making that a "brand issue" because....why? It sounds like a very emotional argument as it's not backed by any technical discussion beyond "local only secure, nothing else."

    Beyond the fact that

    They are not supposed to be able to and well designed e2ee services can’t be.

    So then you trust that their system is well-designed already? What is this cognitive dissonance that they can secure the relatively insecure format of email, but can't figure out TLS and flushing logs for an LLM on their own servers? If anything, it's not even a complicated setup. TLS to the context window, don't keep logs, flush the data. How do you think no-log VPNs work? This isn't exactly all that far off from that.

  • What exactly is dishonest here? The language on their site is factually accurate, I've had to read it 7 times today because of you all.

    I object to how it is written. Yes, technically it is not wrong. But it intentionally uses confusing language and rare technical terminology to imply it is as secure as e2ee. They compare it to Proton Mail and Drive, which are supposedly e2ee.

  • How much longer until the AI bubble pops? I'm tired of this.

    Here's the thing: it kind of already has. The new AI push is related to smaller projects and AI agents like Claude Code and GitHub Copilot integration. MCPs are also starting to pick up steam as a way to refine prompt engineering. The basic AI "bubble" popped already; what we're seeing now is an odd arms race of smaller AI projects, thanks to companies like DeepSeek pushing AI hosting costs so low that anyone can reasonably host and tweak their own LLMs without it costing a fortune. It's really an interesting thing to watch, but honestly I don't think we're going to see the major gains the tech industry is trying to push anytime soon. Take any claims of AGI and OpenAI "breakthroughs" with a mountain of salt, because they will do anything to keep the hype up and drive up their stock prices. Sam Altman is a con man and nothing more; don't believe what he says.

  • They compare it to Proton Mail and Drive, which are supposedly e2ee.

    Only Drive is. Email is not always e2ee; it uses zero-access encryption, which I believe is the exact same mechanism used by this chatbot, so the comparison is quite fair tbh.
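
The zero-access vs. e2ee distinction mentioned above comes down to when the server can see plaintext, not which cipher is used. A toy sketch (the XOR function is a stand-in for real public-key crypto, and `Server` is hypothetical, not Proton's actual design):

```python
import hashlib
import secrets

def xor(key: bytes, data: bytes) -> bytes:
    """Toy repeating-keystream cipher; stand-in for real public-key crypto."""
    stream = hashlib.sha256(key).digest() * (len(data) // 32 + 1)
    return bytes(a ^ b for a, b in zip(data, stream))

class Server:
    """Hypothetical mail server showing WHEN plaintext is visible to it."""
    def __init__(self):
        self.mailbox = []         # what the server stores at rest
        self.plaintext_seen = []  # what the server could have read in transit

    def receive_external(self, plaintext: bytes, user_key: bytes):
        # Zero-access: mail arrives in the clear (e.g. plain SMTP), the
        # server sees it once, then seals it so it is unreadable at rest.
        self.plaintext_seen.append(plaintext)
        self.mailbox.append(xor(user_key, plaintext))

    def receive_e2ee(self, ciphertext: bytes):
        # E2EE: the sender already encrypted; the server never sees plaintext.
        self.mailbox.append(ciphertext)

user_key = secrets.token_bytes(32)
server = Server()
server.receive_external(b"mail from gmail.com", user_key)
server.receive_e2ee(xor(user_key, b"proton-to-proton mail"))
print(server.plaintext_seen)  # only the external mail was ever exposed
```

Both messages end up equally unreadable at rest; the difference is that the zero-access path had one moment where the operator could have read (or logged) the mail, which is the same trust assumption the chatbot requires.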

  • This post did not contain any content.

    Proton has my vote for fastest company ever to completely enshittify.

  • How much longer until the AI bubble pops? I'm tired of this.

    It's when the coffers of Microsoft, Amazon, Meta and the investment banks dry up. All of them are losing billions every month, but it's all driven by fewer than 10 companies. Nvidia is lapping up the money of course, but once the AI companies stop buying GPUs in crazy numbers it's going to be a rocky ride down.

  • It is e2ee -- with the LLM context window!

    When you email someone outside Proton servers, doesn't the same thing happen anyway? But the LLM is on Proton servers, so what's the actual vulnerability?

  • Both your take, and the author, seem to not understand how LLMs work. At all.

    At some point, yes, an LLM has to process clear-text tokens. There's no getting around that. Anyone who creates an LLM that can process 30 billion parameters while encrypted will become an overnight billionaire from military contracts alone. If you want absolute privacy, process locally. Lumo has limitations, but goes farther than duck.ai at respecting privacy. Your threat model and equipment mean YOU make a decision for YOUR needs. This is an option. It is not trying to be one-size-fits-all. You don't HAVE to use it. It's not being forced down your throat like Gemini or Copilot.

    And their LLMs: Mistral, OpenHands and OLMo, all open source. It's in their documentation. So this article straight up lies about that. Like... did Google write this article? It's simply propaganda.

    Also, Proton does have some circumstances where it lets you decrypt your own email locally; otherwise it's basically impossible to search your email for text in the email body. They already had that as an option, and if users want AI assistants, that's obviously their bridge. But it's not the default setup; it's an option you have to turn on, and it's not forced on anyone. Some users want that. Chill TF out.

    If an AI can work on encrypted data, it's not encrypted.
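
The "models need plaintext tokens" point above can be made concrete: tokenizers map recognizable text onto vocabulary IDs, and ciphertext matches nothing in any vocabulary. The tokenizer and vocabulary below are made up for illustration (real LLMs use BPE over ~100k entries, but the principle is the same):

```python
import secrets

# Tiny fixed vocabulary standing in for a real tokenizer's vocab.
VOCAB = {"what": 0, "is": 1, "the": 2, "capital": 3, "of": 4, "france": 5}
UNK = len(VOCAB)  # id for anything out of vocabulary

def tokenize(text: str) -> list[int]:
    """Toy whitespace tokenizer: known words become meaningful ids."""
    return [VOCAB.get(word, UNK) for word in text.lower().split()]

prompt = "what is the capital of france"
print(tokenize(prompt))  # [0, 1, 2, 3, 4, 5] -- meaningful input

# "Encrypt" the prompt (toy one-time-pad XOR) and tokenize the ciphertext:
key = secrets.token_bytes(len(prompt))
ciphertext = bytes(a ^ b for a, b in zip(prompt.encode(), key))
print(tokenize(ciphertext.hex()))  # [6] -- one unknown blob, nothing usable
```

Encrypted input tokenizes to noise the model cannot attend to, which is why the prompt must be decrypted before it reaches the context window.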

  • It is e2ee

    It is not. Not in any meaningful way.

    When you email someone outside Proton servers, doesn't the same thing happen anyway?

    Yes it does.

    But the LLM is on Proton servers, so what's the actual vulnerability?

    Again, the issue is not the technology. The issue is deceptive marketing. Why doesn't their site clearly say what you say? Why use confusing technical terms most people won't understand and compare it to Drive, which is fully e2ee?

  • Well, even the mail is sometimes e2ee. Making the comparison without specifying is like marketing your safe as being used in Fort Knox when it turns out to be a cheap safe used for payroll documents, like in every company. Technically true but misleading as hell. When you hear Fort Knox, you think gold vault. If you hear Proton Mail, you think e2ee, even if most mails are external.

    And even if you disagree about mail, there is no excuse for comparing to Proton Drive.

  • Is it like crypto, where CPUs were good, and then GPUs, and then FPGAs, then ASICs? Or is this different?

  • I'm just saying Andy sucking up to Trump is a red flag. I'm cancelling in 2026 🫠

    What are you considering as alternatives?

  • I highly suggest Tuta, https://tuta.com/, or other conventional mailboxes like https://mailbox.org/en/

  • They are not supposed to be able to and well designed e2ee services can’t be. That’s the whole point of e2ee.

    You're using their client. You get a fresh copy every time it changes. Of course you are vulnerable to a MITM attack, if they chose to attempt one.

  • If you insist on being a fanboy, then go ahead. But this is like arguing a bulletproof vest is useless because it does not cover your entire body.

  • You're saying the AI bubble has popped because even more small companies and individuals are getting in on the action?

    That's kind of the definition of a bubble, actually: more and more people trying to make money on a trend that doesn't have that much real value in it. Nearly the same thing happened with the dotcom bubble. It wasn't that the web/tech wasn't valuable; it's now the most valuable sector of the world economy. But while the bubble expanded, more was being invested than it was worth, because no one wanted to miss out and it was accessible enough that almost anyone could try it out.

  • Email is almost always zero-access encrypted (like live chats), considering the % of Proton users and the amount of emails between them (or the even smaller % of PGP users). Drive is e2ee, like chat history.
    Basically, I see email : chats = drive : history.

    Anyway, I agree it could be done better, but I don't really see the big deal. Any user unable to understand this won't get the difference between zero-access and e2ee anyway.

  • I literally said exactly what you're explaining. I'm not sure what you're trying to accomplish here.
