• Quantum Cog@sh.itjust.works
    4 months ago

    I understand Signal’s stance on this. For this vulnerability, the attacker needs physical access to the computer. If the attacker has already gained physical access, they can already access your messages, crypto wallets, and password managers.

    Many password managers have this flaw too. For example, someone with physical access to a PC where the user is already logged in can change the KeePass master password and lock them out of all their accounts.

    • Thurstylark@lemm.ee
      4 months ago

      Yeah, this is why I added a hardware key to my db. The hardware key is required not just for reading the db, but writing to it as well.

      Another tip: use something like an OnlyKey that has its own locking and self-destruct mechanisms so this method isn’t foiled by simply acquiring the key.

    • partial_accumen@lemmy.world
      4 months ago

      For example, someone with physical access to a PC where the user is already logged in can change the KeePass master password and lock them out of all their accounts.

      It seems an easy fix is available. On Windows, access Shadow Copies and restore the previous version from $DayBeforeLockout. On Linux, certain file systems offer automatic volume-level snapshotting. Or on either: restore the KeePass file from a backup taken before the change.
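      A minimal sketch of that last option (the helper and paths are hypothetical, not anything KeePass ships): pick the most recent backup made before the lockout and copy it over the locked database.

```python
# Hypothetical helper: restore the newest backup that predates the
# lockout. Paths and naming are illustrative, not KeePass conventions.
from pathlib import Path
import shutil

def restore_before(db: Path, backups: list[Path], lockout_mtime: float) -> Path:
    """Copy the newest backup older than `lockout_mtime` over `db`."""
    candidates = [b for b in backups if b.stat().st_mtime < lockout_mtime]
    if not candidates:
        raise FileNotFoundError("no backup predates the lockout")
    best = max(candidates, key=lambda b: b.stat().st_mtime)
    shutil.copy2(best, db)  # copy2 preserves the backup's metadata
    return best
```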

    • uiiiq@lemm.ee
      4 months ago

      They don’t need physical access (to hold the device in their hand), they just need command execution, which is a much lower bar. I expect some defence in depth for an application that holds some of the most private information there is about me.

      • Quantum Cog@sh.itjust.works
        4 months ago

        The argument still holds. If they have remote execution access, they already have your data. Encryption can’t protect your data here, because it is automatically decrypted once the user logs in to the computer: an attacker with remote access can log in to your account and see the data unencrypted.

        • ooterness@lemmy.world
          4 months ago

          No, defense in depth is still important.

          It’s true that full-disk encryption is useless against remote execution attacks, because the attacker is already inside that boundary. (i.e., As you say, the OS will helpfully decrypt the file for the attacker.)

          However, it’s still useful to have finer-grained encryption of specific files. (Preferably in addition to full-disk encryption, which remains useful against other attack vectors.) i.e., Prompt the user for a password when the program starts, decrypt the data, and hold it in RAM that’s only accessible to that running process. This is more secure because the attacker must compromise additional barriers. Physical access is harder than remote execution with root, which is harder than remote execution in general.
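          As a toy sketch of that pattern (not production crypto; a real implementation would use a vetted cipher such as AES-GCM, and the function names here are made up): derive a key from the user’s password, store only ciphertext on disk, and let the plaintext exist only in this process’s memory.

```python
# Toy file-level encryption sketch: password -> key via PBKDF2, then a
# SHA-256 counter-mode keystream stands in for a real cipher.
import hashlib
import secrets

def derive_key(password: bytes, salt: bytes) -> bytes:
    # PBKDF2 makes offline password guessing expensive
    return hashlib.pbkdf2_hmac("sha256", password, salt, 200_000)

def keystream(key: bytes, nonce: bytes, length: int) -> bytes:
    # SHA-256 in counter mode, standing in for a real cipher (e.g. AES-GCM)
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + nonce + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:length]

def encrypt(password: bytes, plaintext: bytes) -> bytes:
    salt, nonce = secrets.token_bytes(16), secrets.token_bytes(16)
    ks = keystream(derive_key(password, salt), nonce, len(plaintext))
    return salt + nonce + bytes(p ^ k for p, k in zip(plaintext, ks))

def decrypt(password: bytes, blob: bytes) -> bytes:
    salt, nonce, ct = blob[:16], blob[16:32], blob[32:]
    ks = keystream(derive_key(password, salt), nonce, len(ct))
    return bytes(c ^ k for c, k in zip(ct, ks))
```

The point of the pattern is that what sits on disk is only `salt + nonce + ciphertext`; the attacker with file read access still has to get the password out of the running process or the user.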

          • sudneo@lemm.ee
            4 months ago

            You don’t need root (which would let you just dump memory). You need the user’s password, or to control the binary. Both are relatively easy if you have user access. For example, change an environment variable to point to a patched binary first, spoof the password prompt, and then continue execution as the normal binary would.
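            To make the “spoof the password prompt” step concrete, a hypothetical sketch (in Python rather than a patched binary; all names are illustrative): a wrapper that shows the same prompt the real program would, keeps a copy of the input, and hands it on unchanged.

```python
# Hypothetical spoofed prompt: looks like the real program's password
# prompt, but records the password before passing it along.
import getpass

captured = []  # attacker's copy

def fake_prompt(prompt="Enter database password: ", read=getpass.getpass):
    # `read` defaults to the real hidden-input prompt; injectable for testing
    password = read(prompt)
    captured.append(password)      # silently exfiltrate
    return password                # caller sees completely normal behavior
```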

            • ooterness@lemmy.world
              4 months ago

              Sure, but there’s still no excuse for “store the password in plaintext lol”. Once you’ve got user access, files at rest are trivial to obtain.

              You’re proposing what amounts to a phishing attack, which is more effort, more time, and more risk. Anything that forces the attacker to do more work and have more chances to get noticed is a step in the right direction. Don’t let perfect be the enemy of good.

              • sudneo@lemm.ee
                4 months ago

                I am not proposing anything actually, I am implying that this change won’t modify the threat model in any substantial way. Your comment implied that it kind of did, by requiring root access, which is a slightly different threat model, though not by much on single-user machines…

                So my point is that “the data is safe as long as your user password is safe” is a very tiny change compared to “your data is safe as long as your device is safe”. There are tons of ways to get the password once you have local access, and what I strongly disagree with is that it requires more work or risk. A fake sudo prompt requires a 10-line bash script, since you control the shell configuration, for example. And you don’t even need to phish: you can simply add a “sudo chmod +s shell” line to any local configuration or script where the user runs a sudo command, and you have a SUID shell, i.e. root; or you dump the keyring, or… etc. Likewise, 99.9% of users don’t run integrity monitoring tools, or monitor and restrict egress access, so these attacks simply won’t be noticed.

                So what I am saying is that encrypted storage is better than plaintext storage for the key, but if this required substantial effort from the devs that could have gone into work that substantially improved the security posture, it is a net negative in terms of security (I don’t know if that is the case), and nobody after this change should feel secure about their Signal data if their device were compromised.

        • douglasg14b@lemmy.world
          4 months ago

          They don’t necessarily need RCE access.

          Also this isn’t how security works. Please refer to the Swiss cheese model.

          Unless you can guarantee that every application ever installed on every computer will always be secure under all circumstances, you’re already breaking your security model.

          An application may expose a vulnerable web server which allows read-only file system access, without exposing any direct control of the computer to the attacker. The security posture of your application (Signal) now shares fate with every other application anyone else built.

          This is just one of many easy examples that are counter to your argument here.

  • Eager Eagle@lemmy.world
    4 months ago

    The whole drama seems to be pushing for Electron’s safeStorage API, which uses a device’s secrets manager. But aren’t secrets stored there still accessible when the machine is unlocked anyway? I’m not sure what this change accomplishes other than encryption at rest with the device turned off - which is redundant if you’re using full disk encryption.

    I don’t think they’re downplaying it, it just doesn’t seem to be this large security concern some people are making it to be.

    This is like the third time in the past two months I’ve seen someone trying to spread FUD around Signal.

    • priapus@sh.itjust.works
      4 months ago

      Yeah they are, this problem is super overblown. Weirdly I’ve seen articles about this coming up for other apps too, like the ChatGPT app for MacOS storing conversation history in plain text on the device. Weird that this is suddenly a problem.

      If someone wants better security, they can use full disk encryption and encrypt their home directory, unlocking it on login.

    • GamingChairModel@lemmy.world
      4 months ago

      But aren’t secrets stored there still accessible when the machine is unlocked anyway?

      I think the OS prevents apps from accessing data in those keychains, right? So there wouldn’t be an automated/scriptable way to extract the key as easily.

      • Eager Eagle@lemmy.world
        4 months ago

        But that’s the thing: I haven’t found anything that indicates it can differentiate a legitimate access from a dubious one; at least not without asking the user to authorize it by providing a password and causing the extra inconvenience.

        If the wallet asked the program itself for a secret - to verify the program was legit and not a malicious script - the program would still have the same problem of storing and retrieving that secret securely; which defeats the use of a secret manager.

        • GamingChairModel@lemmy.world
          4 months ago

          I haven’t found anything that indicates it can differentiate a legitimate access from a dubious one

          It’s not about legitimate access versus illegitimate access. As I understand it, these keychains/wallets can control which specific app can access any given secret.

          It’s a method of sandboxing different apps from accessing the secrets of other apps.

          In function, browser access to an item can be problematic because browsers share data with the sites they visit, but that’s different from a specific app, hardcoded to a specific service, being given exclusive access to a key.

          • Eager Eagle@lemmy.world
            4 months ago

            upon reading a bit about how different wallets work, it seems macOS is able to identify the program requesting keychain access when it’s signed with a certificate - idk if that’s the case for Signal Desktop on Mac, and I don’t know what happens if the program is not signed.

            As for gnome-keyring, they acknowledge that doing this on Linux distros is a much larger endeavor due to the attack surface:

            An active attack is where the attacker can change something in your security context. In the context of gnome-keyring an active attacker would have access to your user session in some way. An active attacker might install an application on your computer, display a window, listen into the X events going to another window, read through your memory, snoop on you from a root account etc.

            While it’d be nice for gnome-keyring to someday be hardened against active attacks originating from the user’s session, the reality is that the free software “desktop” today just isn’t architected with those things in mind. We need completion and integration things like the following. Kudos to the great folks working on parts of this stuff:

            - Trusted X (for prompting)
            - Pervasive use of security contexts for different apps (SELinux, AppArmor)
            - Application signing (for ACLs) 
            

            We’re not against the goal of protecting against active attacks, but without hardening of the desktop in general, such efforts amount to security theater.

            Also

            An example of security theater is giving the illusion that somehow one application running in a security context (such as your user session) can keep information from another application running in the same security context.

            In other words, the problem is beyond the scope of gnome-keyring. Maybe now, with the diffusion of Wayland and more sandboxing options, reducing this shared security context becomes viable.

        • AProfessional@lemmy.world
          4 months ago

          You are absolutely correct. This can help in a world where every app is well sandboxed (thus can be reliably identified and isolated).

    • m-p{3}@lemmy.ca
      4 months ago

      Security comes in layers, still better than storing the keys in plaintext, and FDE is also important.

    • woelkchen@lemmy.world
      4 months ago

      This is like the third time in the past two months I’ve seen someone trying to spread FUD around Signal.

      If any other messenger had the same issue, Moxie Marlinspike and fans would have an outcry of biblical proportions.

    • douglasg14b@lemmy.world
      4 months ago

      Yes, but it pushes the problem to the operating-system level, and that means everyone wins, as the operating-system solution improves while vulnerabilities are found and resolved.

      You also don’t need RCE access to exfiltrate data. If decrypted keys are held only in memory, that mitigates an entire class of vulnerabilities in other applications that could cause your private chats to leak.

      Full disk encryption is not a solution here. Any application that’s already running which can provide read only file system access to an attacker is not going to be affected by your full disk encryption.

      • Eager Eagle@lemmy.world
        4 months ago

        Full disk encryption is not a solution here. Any application that’s already running which can provide read only file system access to an attacker is not going to be affected by your full disk encryption.

        that’s my point

    • Tja@programming.dev
      4 months ago

      It’s an equation. One of those “left for the reader”. Please start solving it.

    • NicoCharrua@lemmy.ca
      4 months ago

      Microsoft was claiming that the data would be inaccessible to hackers (which is not true).

      Signal claimed the exact opposite: that once it’s on your computer, messages can be seen by malicious programs on your computer.

      Signal was caught having less than ideal security. Microsoft was caught lying.

      • OfficerBribe@lemm.ee
        4 months ago

        Could not find much info about that claim, but the context probably was that the data cannot be accessed without compromising the device, e.g., it’s not possible to get the info over the network or by compromising some central location on a remote server, because there is none and all that data is stored locally.

    • Eager Eagle@lemmy.world
      4 months ago

      let me just highlight that if someone has access only to your signal desktop conversations, they have access to your signal desktop conversations.

      if someone has access to your windows recall db, they have access to your signal desktop conversations, the pages you’ve browsed including in private windows, documents you’ve written, games you’ve played, social media posts you’ve seen, and pretty much anything you’ve done using that machine.

      perhaps that does demand a slightly different level of concern.

      • OfficerBribe@lemm.ee
        4 months ago

        True that Recall collects more than Signal, but copying actual files, browser session cookies / passwords, and mailbox content (if a desktop mail client is used) makes more sense if you have access to the device. Recall is also not supposed to collect data from private sessions from popular web browsers. I assume for that it uses some hard-coded list of exceptions, with an option to add your own.

        Both should have protected that kind of data with additional safeguards so that merely copying that data without authentication would have no value.

        • Eager Eagle@lemmy.world
          4 months ago

          Recall is also not supposed to collect data from private sessions from popular web browsers.

          it makes one wonder how well that works; if it’s based on OCR, does it “redact” the bounding box corresponding to the private window? What happens with overlapping windows? How does it handle windows with transparency? I can’t help but think there’s a high probability their solution is flaky.

          • OfficerBribe@lemm.ee
            4 months ago

            Here is a video demonstration. Snapshots contain the window that is in focus, not the whole desktop, and for exclusions I assume it bases them on process name plus additional parameters (private browser windows have the same process name, so it must be something additional). You can also add websites as exclusions. Here is an article that lists other things that are not captured, like DRM-protected content and one-time WhatsApp images.

            Also from support article:

            In two specific scenarios, Recall will capture snapshots that include InPrivate windows, blocked apps, and blocked websites. If Recall gets launched, or the Now option is selected in Recall, then a snapshot is taken even when InPrivate windows, blocked apps, and blocked websites are displayed. However, these snapshots are not saved by Recall. If you choose to send the information from this snaps

      • spiderman@ani.social
        4 months ago

        the point is they could have fixed it when it was first reported, instead of waiting around until the issue blew up.

        • sudneo@lemm.ee
          4 months ago

          A security company should prioritize investments (I.e. development time) depending on a threat model and risk management, not based on what random people think.

            • sudneo@lemm.ee
              4 months ago

              I am saying that based on the existing risks, effort should be put on the most relevant ones for the threat model you intend to assume.

              In fact the “fix” that they are providing is not changing much, simply because on single-user machines there is borderline no difference between compromising your user (via physical access, you unknowingly installing malware, etc.) and compromising the whole box (with root/admin access).

              On Windows it’s not going to have any impact at all (due to how this API is implemented), on Linux/Mac it adds a little complexity to the exploit. Once your user is compromised, your password (which is what protects the keychain) is going to be compromised very easily via internal phishing (i.e., a fake graphical prompt, a fake sudo prompt etc.) or other techniques. Sometimes it might not be necessary at all. For example, if you run signal-desktop yourself and you own the binary, an attacker with local privileges can simply patch/modify/replace the binary. So then you need other controls, like signing the binary and configuring accepted keys (this is possible and somewhat common on Mac), or something that anyway uses external trust (root user, remote server, etc.).

              So my point is: if their threat model assumed that once your client device was compromised your data was not protected, it doesn’t make much sense to reduce the risk of that by 10-20%; better to focus on other work that might be more impactful.

          • sudneo@lemm.ee
            4 months ago

            Privacy is not anonymity though. Privacy simply means that private data is not disclosed or used by parties, or for purposes, that the data owner doesn’t explicitly allow. Often, not collecting data is a way to ensure no misuse (and no compromise, hence security), but it’s not necessarily always the case.

            • Victor@lemmy.world
              4 months ago

              Privacy simply means that private data is not disclosed or used

              Right, and often for that to be the case, the transferring and storing of data should be secure.

              I’m mostly just pointing out the fact that when you do x ≠ y ≠ z, it can still be the case that x = z, e.g. 4 ≠ 3 ≠ 4.

              Just nitpicking, perhaps.

        • wildbus8979@sh.itjust.works
          4 months ago

          It’s better now, but for years and years all they used for contact discovery was simple hashing… the problem is that the input space is very small, and it was easy to generate a rainbow table of all the phone-number hashes in a matter of hours. Then anyone with access to the hosts (either hackers, or the US state via AWS collaboration) had access to the entire social graph.
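          The small-input-space problem is easy to demonstrate (toy Python sketch; the prefix and range are made up): because valid phone numbers are so few, an attacker can precompute every hash once and invert any leaked hash instantly.

```python
# Why plain hashing fails for phone numbers: the input space is tiny,
# so hash -> number can be precomputed for every possible value.
import hashlib

def hash_number(number: str) -> str:
    return hashlib.sha256(number.encode()).hexdigest()

# Toy range; a real attacker enumerates a whole country's numbering plan.
table = {hash_number(f"+1555{n:07d}"): f"+1555{n:07d}" for n in range(100_000)}

leaked = hash_number("+15550042424")  # what a compromised server would hold
recovered = table[leaked]             # instant lookup, no cracking needed
```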

          • 9tr6gyp3@lemmy.world
            4 months ago

            Yeah the way I remember it, they put a lot of effort into masking that social graph. That was a while back too, not recent.

            • wildbus8979@sh.itjust.works
              4 months ago

              What I’m saying though is that for the longest time they didn’t, and when they changed the technique they hardly acknowledged that it had been a problem in the past and that essentially every user’s social graph had been compromised for years.

              • ᗪᗩᗰᑎ@lemmy.ml
                4 months ago

                Signal, originally known as TextSecure, worked entirely over text messages when it first came out. It was born of a different era, and securing communication data was the only immediate goal, because at the time basically everything was viewable by anyone with enough admin rights on basically every platform. Signal helped popularize end-to-end encryption (E2EE) and dragged everyone else along with them. Very few services at the time even advertised E2EE, private metadata, or social-graph privacy.

                As they’ve improved the platform they continue to make incremental changes to enhance security. This is not a flaw, this is how progress is made.