Disk Encryption: How can I mitigate SSD/flash media security risk caused by unreliable physical deletion?



  • Firstly, I do not think my question is answered by the existing question "SSD full disk encryption", as I am not looking for a binary "is FDE secure?" verdict — nothing is perfectly secure; rather, I am looking for possible solutions to the specific SSD/FDE encryption conundrum.

    Theoretical attack model: My computer's root disk (all mounted partitions) is an SSD that has been encrypted using LUKS (let's assume V1) with a 6 character dictionary word as a password.

    I'm aware that LUKS doesn't use the password directly to encrypt or decrypt data; rather, it stores the actual key (the "Underlying Key", i.e. the master key) in a header located at or near the first (logical) sectors of the drive. It encrypts the Underlying Key with a key derived from the password (salted and hashed over many iterations); thus knowledge of the password becomes the only way (assuming no cryptographic breakthroughs, software bugs, or side-channel attacks) in which the Underlying Key, and consequently the disk, can be decrypted.
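    The keyslot mechanism described above can be sketched in miniature. This is a toy Python model, not the real LUKS on-disk format: real LUKS uses PBKDF2 or Argon2 plus anti-forensic splitting, whereas here a PBKDF2-derived key simply XOR-masks the master key to show the principle.

```python
# Toy model of a LUKS-style keyslot -- NOT the real on-disk format.
import hashlib
import os

MASTER_KEY = os.urandom(32)   # the "Underlying Key" that encrypts the disk
SALT = os.urandom(16)

def wrap(master_key: bytes, passphrase: str, salt: bytes) -> bytes:
    """Mask the master key with a key derived from the passphrase."""
    derived = hashlib.pbkdf2_hmac("sha256", passphrase.encode(), salt, 100_000)
    return bytes(m ^ d for m, d in zip(master_key, derived))

def unwrap(keyslot: bytes, passphrase: str, salt: bytes) -> bytes:
    """XOR masking is symmetric, so unwrapping is the same operation."""
    return wrap(keyslot, passphrase, salt)

keyslot = wrap(MASTER_KEY, "correct horse", SALT)
assert unwrap(keyslot, "correct horse", SALT) == MASTER_KEY
assert unwrap(keyslot, "wrong guess", SALT) != MASTER_KEY
print("master key recovered only with the right passphrase")
```

    The point of the indirection is exactly the one raised in the question: changing the password only rewrites the small wrapped keyslot, not the bulk data — so the old keyslot ciphertext is what must be reliably destroyed.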

    The weakness of this first password is supposedly mitigated by LUKS's password-strengthening techniques (the aforementioned iterated, salted hash). I do not have a high degree of trust that these techniques will really hold off a well-funded and determined adversary: it is not completely implausible that well-funded specialists possess classified hardware that can perform some absurdly large number of [insert hash function here] operations per second. That example is perhaps a shade far-fetched, but hopefully it communicates my concern. I do know that a password of sufficient length, derived from a true entropy source, is impractical to brute-force (even post-quantum); I'm not a cryptographer and cannot demonstrate it mathematically, but I know it to be true as surely as I know that I cannot fly.
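    To put rough numbers on that worry, here is a back-of-envelope sketch. The guess rate is a deliberately generous assumption for a well-funded adversary, not a measured capability, and it ignores the per-guess KDF cost, which only makes the attacker's job harder.

```python
# Back-of-envelope brute-force cost: a 6-character lowercase password
# versus a 6-word Diceware passphrase. Rate is an assumption, not data.
GUESSES_PER_SEC = 10**12          # assumed: one trillion guesses/second

six_lowercase = 26**6             # every possible 6-char lowercase string
diceware_6 = 7776**6              # 6 words from a 7776-word Diceware list

def seconds_to_exhaust(keyspace: int, rate: int = GUESSES_PER_SEC) -> float:
    """Worst-case time to try the whole keyspace at the given rate."""
    return keyspace / rate

print(f"6 lowercase chars: {seconds_to_exhaust(six_lowercase):.6f} s")
print(f"6 Diceware words:  "
      f"{seconds_to_exhaust(diceware_6) / (3600 * 24 * 365.25):.0f} years")
```

    Even at this implausibly high rate, the short password falls in under a second while the high-entropy passphrase takes millennia — which is why the question focuses on the lingering keyslot wrapped under the *old*, weak password.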

    Going back to the attack scenario: I decide to upgrade my password to a stronger one. I fire off a cryptsetup command (I'm on Arch), wipe the old keyslot, and write the Underlying Key, now encrypted by a salted hash of the new password, into said keyslot. So, password changed? Maybe. The drive reports that the logical sectors have been set to zero, but the opaque drive controller and its antics mean there is a complete disconnect between logical and physical sectors. To my knowledge there is no reliable way of confirming the erasure using commodity hardware, short of degrading the lifespan of the drive by, for example, repeatedly writing zeroes to the disk like grenades, hoping that one of them will hit the target and actually zero out the Underlying Key ciphertext encrypted under the old password.

    It is known that SSDs and flash drives in general deploy a technique called wear-leveling to increase product lifespan. These products also use other 'optimisation techniques' that create unpredictability and opacity with respect to the state of the physical media: the device is not honest with the OS/BIOS about which physical sectors are actually being altered. In other words, if I tell an SSD to write zeroes to the first 10k sectors, it may not actually overwrite the first 10k by physical location.
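    That remapping behaviour can be illustrated with a toy flash-translation-layer model. This is purely hypothetical Python; real FTLs are proprietary, far more complex, and invisible to the host, which is exactly the problem.

```python
# Minimal sketch of why "overwrite logical sector N" need not destroy old
# data on flash: a toy FTL that, like real SSD firmware, redirects each
# write to a fresh physical block and merely updates a logical->physical map.
class ToyFTL:
    def __init__(self, physical_blocks: int):
        self.physical = [b""] * physical_blocks
        self.mapping = {}            # logical block -> physical block
        self.next_free = 0

    def write(self, logical: int, data: bytes) -> None:
        # Wear-leveling: program a fresh block instead of erasing in
        # place; the old physical block keeps its stale contents.
        self.physical[self.next_free] = data
        self.mapping[logical] = self.next_free
        self.next_free += 1

    def read(self, logical: int) -> bytes:
        return self.physical[self.mapping[logical]]

ftl = ToyFTL(physical_blocks=8)
ftl.write(0, b"OLD-LUKS-KEYSLOT")    # keyslot wrapped with the weak password
ftl.write(0, b"\x00" * 16)           # host "zeroes" logical block 0

print(ftl.read(0))                   # the host sees zeroes...
print(ftl.physical[0])               # ...but the old keyslot survives physically
```

    The host has no interface to inspect `self.physical` on real hardware — it only ever sees reads through the mapping — which is why logical overwrites give no guarantee of physical destruction.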

    Back to LUKS: this creates a (well-known and acknowledged) risk that the Underlying Key, encrypted under an old password, could be physically 'lurking' in the SSD media and readable given a sizeable budget and a serious data-forensics specialist.

    I'm aware that LUKS supports storing the header on a separate disk — perhaps a thumb drive that, given the tiny space requirement, could be cost-effective to physically destroy (thermite, plasma and a sander) and replace upon every password change.

    ACTUAL QUESTION

    What specific methods, steps or tools can I use to mitigate the risk that the Underlying Key (encrypted by an old — let's assume weak or compromised — passphrase), having been wiped/deleted/discarded by LUKS, is actually still lurking on the flash storage because the drive's controller never overwrote the bits at the physical level?



  • Withdrawn in favor of better information

    Edit for additional Research

    I went looking for some more detail and found some interesting results.

    Discard/TRIM support for solid state drives

    Solid state drive users should be aware that, by default, TRIM commands are not enabled by the device-mapper, i.e. block-devices are mounted without the discard option unless you override the default.

    The device-mapper maintainers have made it clear that TRIM support will never be enabled by default on dm-crypt devices because of the potential security implications.

    Minimal data leakage in the form of freed block information, perhaps sufficient to determine the filesystem in use, may occur on devices with TRIM enabled.

    By default the LUKS header is stored at the beginning of the device and using TRIM is useful to protect header modifications. If for example a compromised LUKS password is revoked, without TRIM the old header will in general still be available for reading until overwritten by another operation; if the drive is stolen in the meanwhile, the attackers could in theory find a way to locate the old header and use it to decrypt the content with the compromised password.

    Milan Broz's blog (2014)

    Ciphertext device without using TRIM from Figure 2 would be just black box (all sectors have random data characteristic, disk was wiped before use).

    Conclusion?

    If there is a strong requirement that information about unused sectors must not be available to attacker, TRIM must be always disabled.

    TRIM must not be used if there is a hidden device on the disk. (In this case TRIM would either erase the hidden data or reveal its position.) If TRIM is enabled and executed later (even only once by setting option and calling fstrim), this operation is irreversible. Discarded sectors are still detectable even if TRIM is disabled again.

    In specific cases (depends on data patterns) some information could leak from the ciphertext device. (In example above you can recognize filesystem type for example.) Encrypted disk cannot support functions which rely on returning zeroes of discarded sectors (even if underlying device announces such capability). Recovery of erased data on SSDs (especially using TRIM) requires completely new ways and tools. Using standard recovery tools is usually not successful.

    If you decided to turn on TRIM for your device... You have been warned 😉

    Luks, Issuing & Allowing Discards (2017)

    The first thing to check is that discards are enabled on your encrypted device. If you don't see allow_discards on your encrypted device, fstrim will not work on any volume within that device.

    Example encrypted device missing the discard option:

    dmsetup table --showkeys

    cr_ata-XXX-YYY-partZ: 0 12345 crypt aes-xts-plain64 XXYYZZ 0 8:3 4096

    Assuming you've encrypted root ("/"), we'll check whether the fstrim command works (note that fstrim only prints a summary when given the -v flag):

    fstrim -v /
    /: 185.6 MiB (194564096 bytes) trimmed

    If the output reports some number of bytes trimmed, discards are working.

    My take on this is that the SSD encryption risks differ considerably depending on whether TRIM is used or not.

    Keep in mind that there will be other variables depending upon which SSD manufacturer, model, and interface.

    Personally I'm inclined to stick with an HDD for encryption and avoid the issue entirely.

    Thanks for the question, I learned some new things!


