An analysis of X(Twitter)'s new XChat features shows that X can probably decrypt users' messages, as it holds users' private keys on its servers

Technology
  • This post did not contain any content.

    That's not what "private" means. If they have both keys, the wording "might be able to" is at best extremely misleading.

  • No it doesn’t, and I defined E2EE exactly one way. E2EE stands for “End to end encryption”, which means it’s encrypted at one end, decrypted at the other end, and not in the middle.

    It doesn’t matter if they store a copy of your message on an intermediary server, the keyword there is intermediary. They are not the recipient, so they should not have the ability to decrypt the content of the message, only the recipient should. If they are able to decrypt your message, despite not being the recipient, it’s not E2EE.

    A cloud drive is an entirely different case because the cloud drive is not an intermediary. They literally are the second E in E2EE. A cloud drive can have the ability to decrypt your data and still be E2EE because they are the recipient. You both seem to be under the impression that a cloud drive is an “intermediary” between your devices but it’s not. It’s a destination.

    To explain it a bit simpler, imagine we’re in elementary school sitting at our desks and you’re sitting two desks away from me with one person between us:

    E2EE = I encrypt my note with a simple cipher that I shared with you, and only you, before class. I pass my note to the kid between us to pass to you. He can’t read the note, and even if he writes down a copy before passing it along, it doesn’t matter: he still can’t read it, because he doesn’t have the cipher. He’s not the recipient; you are. He passes you the note and you can do whatever you want with it, including decrypting it, because you know the cipher. All the E2EE has done is ensure the kid in the middle can’t read the note. It has nothing to do with whether or not you can read the note.

    Zero Access Encryption = I encrypt my note with a cipher that only I know. The kid in the middle can’t read this note, and neither can you. Then I use E2EE to wrap it in a different cipher, the one that you do know, and hand the note to the kid in the middle to hand to you. The kid in the middle still can’t read the note. When you receive it, you can strip off the outer E2EE cipher, but you still can’t read the note, because only I know the inner cipher.
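    The note analogy above can be sketched in a few lines of code. This is a toy illustration only: XOR with a repeating pad stands in for a real cipher, and all the key names are made up. Do not use XOR like this for actual secrecy.

    ```python
    def xor_cipher(data: bytes, key: bytes) -> bytes:
        # XOR is its own inverse, so the same function encrypts and decrypts.
        return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

    note = b"meet at recess"
    shared_key = b"you-and-me-only"   # the cipher shared with the recipient
    my_key = b"only-i-know-this"      # a key only the sender holds

    # E2EE: one layer, keyed for the recipient. The kid in the middle
    # sees only ciphertext; the recipient can decrypt.
    e2ee_note = xor_cipher(note, shared_key)
    assert e2ee_note != note                          # middleman can't read it
    assert xor_cipher(e2ee_note, shared_key) == note  # recipient can

    # Zero access: an inner layer with the sender-only key, then the
    # E2EE layer on top. The recipient strips the outer layer but still
    # holds ciphertext, because only the sender knows the inner key.
    zero_access = xor_cipher(xor_cipher(note, my_key), shared_key)
    outer_removed = xor_cipher(zero_access, shared_key)
    assert outer_removed != note                      # recipient still locked out
    assert xor_cipher(outer_removed, my_key) == note  # only the sender can finish
    ```

    The point the asserts make: E2EE controls who besides the recipient can read the note; zero-access adds a layer that locks out the recipient too.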


  • You probably didn't understand me. I'm saying that a company can just arbitrarily decide (like you did) that the server is the "end" recipient (which I disagree with). That can be done for chat messages too.

    You send the message "E2EE" to the server, to be stored there (like a file, unencrypted), so that the recipient(s) can - sometime in the future - fetch the message, which would be encrypted again, only during transport. This fully fits your definition for the cloud storage example.

    By changing the recipient "end", we can arbitrarily decode the message then.

    I would argue that the cloud provider is not the recipient of files uploaded there. In the same way a chat message meant for someone else is not meant for the server to read, even if it happens to be stored there.

    Alternatively, we need to stop saying E2EE is safe at all, for any type of data, because of this arbitrary usage.

  • I'm saying that a company can just arbitrarily decide (like you did) that the server is the "end" recipient (which I disagree with).

    They cannot. That’s not how E2EE works. If they can arbitrarily decide that, then it isn’t E2EE.

    That can be done for chat messages too.

    It cannot, if you’re using E2EE.

    You send the message "E2EE" to the server, to be stored there (like a file, unencrypted), so that the recipient(s) can - sometime in the future - fetch the message, which would be encrypted again, only during transport.

    That’s not how E2EE works. What you are describing is encryption that is not end-to-end. E2EE was designed to solve exactly the issue you’re describing.

    This fully fits your definition for the cloud storage example.

    It does not. Cloud storage is a product you’d use to store your data for your own use at your own discretion.

    I would argue that the cloud provider is not the recipient of files uploaded there

    It is, if you uploaded files to it on purpose.

    You’re confusing E2EE and non E2EE encryption.
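    The distinction being argued here — transport-only encryption versus end-to-end encryption — can be shown concretely. A hypothetical sketch, with a toy XOR pad standing in for real ciphers and made-up key names:

    ```python
    def xor_pad(data: bytes, key: bytes) -> bytes:
        # XOR with a repeating pad; a stand-in for a real cipher.
        return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

    alice_server_key = b"alice<->server"  # transport key Alice shares with the server
    bob_server_key = b"bob<->server"      # transport key Bob shares with the server
    alice_bob_key = b"alice<->bob"        # end-to-end key the server never sees

    msg = b"hello bob"

    # Transport-only encryption: the server decrypts Alice's hop before
    # re-encrypting for Bob's hop, so it holds the plaintext in between.
    at_server = xor_pad(xor_pad(msg, alice_server_key), alice_server_key)
    assert at_server == msg  # the intermediary can read the message

    # E2EE: the same transport hops wrap an inner ciphertext the server
    # cannot open, because it never had alice_bob_key.
    inner = xor_pad(msg, alice_bob_key)
    relayed = xor_pad(xor_pad(inner, alice_server_key), alice_server_key)
    assert relayed == inner and relayed != msg      # server sees only ciphertext
    received = xor_pad(xor_pad(relayed, bob_server_key), bob_server_key)
    assert xor_pad(received, alice_bob_key) == msg  # only Bob can finish decrypting
    ```

    With transport-only encryption the server is, cryptographically, a recipient at every hop; with E2EE it never holds a key that opens the inner layer, no matter how long it stores the ciphertext.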

  • We don’t need to stop saying E2EE is safe, because it is. There is no arbitrary usage: either it’s E2EE or it isn’t. If a company tells you something is E2EE when it’s not, that’s not arbitrary usage, it’s just a lie.

  • You are obviously not interested in listening to a word I'm saying. Goodbye.

  • You’re talking about things you don’t understand on a fundamental level. Maybe stick to things you do understand?

  • Are you so sure Apple doesn't have your keys? How are they migrating the keys to your new device? It's all closed source

    The actual key management and encryption protocols are published. Each new device generates a new key pair and reports its public key to an Apple-maintained directory. When a client wants to send a message, it checks the directory to learn which devices it should send the message to, and the public key for each device.

    Any newly added device doesn't have the ability to retrieve old messages. But history can be transferred from old devices if they're still working and online.

    Basically, if you've configured things for maximum security, you will lose your message history if you lose or break your only logged-in device.

    There's no real way to audit whether Apple's implementation follows the protocols they've published, but we've seen no indicators that they aren't doing what they say.
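    The per-device flow described above can be roughly sketched as follows. Everything here is an illustrative simplification — the directory shape, function names, and XOR "encryption" are placeholders, not Apple's actual protocol or data model:

    ```python
    # Directory: user id -> {device id: device public key}. A "public key"
    # here is just bytes; real systems use asymmetric keypairs.
    directory: dict[str, dict[str, bytes]] = {}

    def register_device(user: str, device: str, pubkey: bytes) -> None:
        # Each new device generates its own key and publishes the public half.
        directory.setdefault(user, {})[device] = pubkey

    def toy_encrypt(msg: bytes, pubkey: bytes) -> bytes:
        # Placeholder for public-key encryption (XOR keeps the sketch runnable).
        return bytes(b ^ pubkey[i % len(pubkey)] for i, b in enumerate(msg))

    def send_message(to_user: str, msg: bytes) -> dict[str, bytes]:
        # The sender looks up every device and encrypts one copy per device.
        return {dev: toy_encrypt(msg, pk) for dev, pk in directory[to_user].items()}

    register_device("bob", "phone", b"phone-pubkey")
    register_device("bob", "laptop", b"laptop-pubkey")

    copies = send_message("bob", b"hi bob")
    assert set(copies) == {"phone", "laptop"}  # one ciphertext per device

    # A device added later gets future messages, but nothing in `copies`
    # was encrypted to it — which is why it can't read old history.
    register_device("bob", "tablet", b"tablet-pubkey")
    assert "tablet" not in copies
    ```

    The key property: the server only ever stores public keys and ciphertexts; each message is encrypted client-side, once per registered device.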


  • That's good to know, thanks.

  • Yes. I can't imagine that they will go after individuals. Businesses can't be so cavalier. But if creators don't pay the extra cost to make their models compliant with EU law, then they can't be used in the EU anyway, so it probably doesn't matter much. The Llama models with vision have the no-EU clause because Meta wasn't allowed to train on Europeans' data under GDPR. The pure LLMs are fine. They might even be compliant, but we'll have to see what the courts think.
  • US Senate strikes AI regulation ban from Trump megabill

    No one likes little teddy, it appears.
  • Pornaroma Review a Detailed Comparison with Top Adult Sites
  • You said it yourself: extra places that need human attention ... those need ... humans, right? It's easy to say "let AI find the mistakes". But that tells us nothing at all. There's no substance. It's just a sales pitch for snake oil. In reality, there are various ways one can leverage technology to identify various errors, but that only happens through the focused actions of people who actually understand the details of what's happening. And think about it here. We already have computer systems that monitor patients' real-time data when they're hospitalized. We already have systems that check for allergies in prescribed medication. We already have systems for all kinds of safety mechanisms. We're already using safety tech in hospitals, so what can be inferred from a vague headline about AI doing something that's ... checks notes ... already being done? ... Yeah, the safe money is that it's just a scam.
  • The Trump Mobile T1 Phone looks both bad and impossible

    "Components" means in this case the phone and the sticker.
  • After some further reading it seems obvious that the two incidents are entirely unrelated, but it was a fun rabbit hole for a sec!
  • Active ISA would be a disaster. My fairly modern car is unable to reliably detect posted or implied speed limits. Sometimes it overshoots the limit by more than double, and sometimes it mandates a speed more than 3/4 slower than posted. The problem is that detection is, and will have to be, done optically. GPS speed measurement can also be surprisingly unreliable, especially in underground settings like long underpasses and tunnels. A system based on something reliable, like local wireless communication with speed limit postings, would be a different matter - though it would also come with a significant risk of abuse. Also, the passive ISA was the first thing I disabled. And I abide by posted speed limits.
  • I wish the batteries were modular/interchangeable. You could just pull into a station, remove the spent battery and replace it with a full one; the spent one can then get recharged and stored at the station for the next user to swap in. You could even bring some extra ones in the trunk for a long trip!