An analysis of X(Twitter)'s new XChat features shows that X can probably decrypt users' messages, as it holds users' private keys on its servers

Technology
  • XChat is an IRC client, though.

    This is the first thing that came to mind. I used it for so many years, then moved on to HexChat.

  • This post did not contain any content.

    No way. Impossible. Of course convenience never has a price tag.

    /s for typical users of today's Web

  • The third paragraph contradicts your other point. You define E2EE in two wildly different ways.

    The chat messages are most likely stored on an intermediary server, which would qualify for the same loophole you pointed out in the cloud storage example.

    No it doesn’t, and I defined E2EE exactly one way. E2EE stands for “End to end encryption”, which means it’s encrypted at one end, decrypted at the other end, and not in the middle.

    It doesn’t matter if they store a copy of your message on an intermediary server, the keyword there is intermediary. They are not the recipient, so they should not have the ability to decrypt the content of the message, only the recipient should. If they are able to decrypt your message, despite not being the recipient, it’s not E2EE.

    A cloud drive is an entirely different case because the cloud drive is not an intermediary. They literally are the second E in E2EE. A cloud drive can have the ability to decrypt your data and still be E2EE because they are the recipient. You both seem to be under the impression that a cloud drive is an “intermediary” between your devices but it’s not. It’s a destination.

    To explain it a bit simpler, imagine we’re in elementary school sitting at our desks and you’re sitting two desks away from me with one person between us:

    E2EE = I encrypt my note with a simple cipher that I shared with you, and only you, before class. I pass my note to the kid between us to pass to you. He can’t read the note, and even if he writes down a copy before passing it along, it doesn’t matter: he still can’t read it, because he doesn’t have the cipher. He’s not the recipient; you are. He passes you the note and you can do whatever you want with it, including decrypting it, because you know the cipher. All the E2EE has done is ensure the kid in the middle can’t read the note. It has nothing to do with whether or not you can read the note.

    Zero Access Encryption = I encrypt my note with a cipher that only I know. The kid in the middle can’t read this note, and neither can you. Then I use E2EE to encrypt that with a different cipher, the one that you do know, and hand the note to the kid in the middle to hand to you. The kid in the middle still can’t read the note, and even after you strip off the outer E2EE layer, neither can you, because only I know the inner cipher.
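    The note-passing analogy above can be sketched in a few lines of Python. This is purely illustrative: a toy repeating-key XOR stands in for a real cipher, and all names are made up.

```python
# Toy illustration of E2EE vs. zero-access encryption (NOT real crypto:
# a repeating-key XOR stands in for a proper cipher).

def xor_cipher(data: bytes, key: bytes) -> bytes:
    # XOR is its own inverse, so the same function encrypts and decrypts.
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

note = b"meet me after class"
shared_key = b"recipient-key"   # shared with the recipient before class
private_key = b"sender-only"    # known only to the sender

# E2EE: the kid in the middle sees only ciphertext...
ciphertext = xor_cipher(note, shared_key)
assert ciphertext != note
# ...but the recipient, who holds the shared key, can read the note.
assert xor_cipher(ciphertext, shared_key) == note

# Zero-access: inner layer with a key only the sender knows,
# then an outer E2EE layer with the shared key.
inner = xor_cipher(note, private_key)
outer = xor_cipher(inner, shared_key)
# The recipient can strip the outer layer but still can't read the note.
assert xor_cipher(outer, shared_key) == inner
assert inner != note
```

    The point the code makes is the same as the analogy: E2EE controls who can decrypt in transit, while zero-access adds a layer that even the legitimate recipient cannot remove.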

  • To extend this, that includes YOU giving your key to another application to decrypt those messages.

    For example, if you use an app or browser extension, that app or browser extension has access to that key. Additionally, the browser itself or the operating system has access to the key.

    Now they may be fully audited. They may have a great reputation. You may trust them. But they are part of the decryption (and, when sending, encryption) process.

    It's a chain of trust, you have to trust the whole chain.

    Including the entire other side of the conversation. E2EE in a group chat still exposes the group chat if one participant shares their own key (or the chats themselves) with something insecure. Obviously any participant can copy and paste things, archive/log/screenshot things. It can all be automated, too.

    Take, for example, iMessage. We have pretty good confidence that Apple can't read your chats when you have configured it correctly: E2EE, no iCloud archiving of the chats, no backups of the keys. But do you trust that the other side of the conversation has done the exact same thing correctly?

    Or take, for example, the stupid case of senior American military officials accidentally adding a prominent journalist to their war-plans Signal chat. It wasn't a technical failure of Signal's encryption, but a mistake by one of the participants, who invited the wrong person, who then published the chat to the world.

  • It's a chain of trust, you have to trust the whole chain. […]

    Are you so sure Apple doesn't have your keys? How are they migrating the keys to your new device? It's all closed source

  • This post did not contain any content.

    I mean, no yes-man would enforce the fascist technocrat's order to read all those messages. You know, the same technocrat who bought Twitter with Saudi money to cripple resistance movements and steer the public toward the alt-right. The one with a thing for eugenics.

  • This post did not contain any content.

    That's not what "private" means. If they have both keys, the wording "might be able to" is at best extremely misleading.

  • No it doesn’t, and I defined E2EE exactly one way. E2EE stands for “End to end encryption”, which means it’s encrypted at one end, decrypted at the other end, and not in the middle. […]

    You probably didn't understand me. I'm saying that a company can just arbitrarily decide (like you did) that the server is the "end" recipient (which I disagree with). That can be done for chat messages too.

    You send the message "E2EE" to the server, to be stored there (like a file, unencrypted), so that the recipient(s) can - sometime in the future - fetch the message, which would be encrypted again, only during transport. This fully fits your definition for the cloud storage example.

    By changing which "end" counts as the recipient, the message can then be arbitrarily decrypted.

    I would argue that the cloud provider is not the recipient of files uploaded there. In the same way a chat message meant for someone else is not meant for the server to read, even if it happens to be stored there.

  • You probably didn't understand me. I'm saying that a company can just arbitrarily decide (like you did) that the server is the "end" recipient (which I disagree with). […]

    Alternatively, we need to stop saying E2EE is safe at all, for any type of data, because of this arbitrary usage.

  • I'm saying that a company can just arbitrarily decide (like you did) that the server is the "end" recipient (which I disagree with).

    They cannot. That’s not how E2EE works. If they can arbitrarily decide that, then it isn’t E2EE.

    That can be done for chat messages too.

    It cannot, if you’re using E2EE.

    You send the message "E2EE" to the server, to be stored there (like a file, unencrypted), so that the recipient(s) can - sometime in the future - fetch the message, which would be encrypted again, only during transport.

    That’s not how E2EE works. What you are describing is encryption that is not end-to-end. E2EE was designed to solve exactly the issue you’re describing.

    This fully fits your definition for the cloud storage example.

    It does not. Cloud storage is a product you’d use to store your data for your own use at your own discretion.

    I would argue that the cloud provider is not the recipient of files uploaded there

    It is if you uploaded files to it on purpose.

    You’re confusing E2EE and non E2EE encryption.

  • Alternatively, we need to stop saying E2EE is safe at all, for any type of data, because or the arbitrary usage.

    We don’t need to stop saying E2EE is safe, because it is. There is no arbitrary usage: either it’s E2EE or it isn’t. If a company tells you it’s E2EE and it’s not, that’s not arbitrary usage, it’s just a lie.

  • We don’t need to stop saying E2EE is safe, because it is. There is no arbitrary usage: either it’s E2EE or it isn’t. If a company tells you it’s E2EE and it’s not, that’s not arbitrary usage, it’s just a lie.

    You are obviously not interested in listening to a word I'm saying. Goodbye.

  • You are obviously not interested in listening to a word I'm saying. Goodbye.

    You’re talking about things that you don’t understand on a fundamental level. Maybe stick to things you do understand?

  • Are you so sure Apple doesn't have your keys? How are they migrating the keys to your new device? It's all closed source

    The actual key management and encryption protocols are published. Each new device generates a new key pair and reports its public key to an Apple-maintained directory. When a client wants to send a message, it checks the directory to know which devices it should send the message to, and the public key for each device.

    Any newly added device doesn't have the ability to retrieve old messages. But history can be transferred from old devices if they're still working and online.

    Basically, if you've configured things for maximum security, you will lose your message history if you lose or break your only logged-in device.

    There's no real way to audit whether Apple's implementation follows the protocols they've published, but we've seen no indicators that they aren't doing what they say.
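    The directory-based fan-out described above can be modeled in a short sketch. This is a hypothetical simplification, not Apple's actual code or API: the names are invented, and a toy XOR stands in for real public-key encryption.

```python
# Hypothetical model of a per-device public-key directory: each device
# registers its own key, and the sender encrypts one copy per device.

directory: dict[str, list[bytes]] = {}  # user id -> list of device public keys

def register_device(user: str, public_key: bytes) -> None:
    # Each new device publishes its own key; existing device keys stay listed.
    directory.setdefault(user, []).append(public_key)

def encrypt_for(public_key: bytes, message: bytes) -> bytes:
    # Stand-in for real public-key encryption; XOR is only for illustration.
    repeated = public_key * (len(message) // len(public_key) + 1)
    return bytes(b ^ k for b, k in zip(message, repeated))

def send(user: str, message: bytes) -> list[bytes]:
    # The sender looks up every registered device and encrypts a copy for each.
    return [encrypt_for(pk, message) for pk in directory[user]]

register_device("alice", b"phone-key")
register_device("alice", b"laptop-key")
copies = send("alice", b"hello")
assert len(copies) == 2  # one ciphertext per registered device
```

    The design consequence matches the comment above: because each message is encrypted per device, a newly registered device has no way to decrypt messages sent before its key existed.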

  • The actual key management and encryption protocols are published. […]

    That's good to know, thanks.
