Passwords stored as obfuscated text, not encrypted



  • An application I admin (but did not develop or implement) claims to be storing passwords in a database using “reversible encryption.”

    I have access to the database table and was able to work out that the passwords are really only obfuscated: I can recover the cleartext values by applying a simple mapping. This seems very bad; am I right to be concerned? What are the recommended next steps?

    Edit 1: The app is developed by Company A and hosted for us by Company B (SaaS). It appears the “reversible encryption” setting is the out-of-the-box configuration. Unclear what changing that setting would affect, but 99% of the passwords in the table are for logging into the app itself and not related to external systems. No requirement for anyone to know anyone else's password.



  • The first thing I'd do is consider whether the system needs "reversibly encrypted" passwords at all (usually yes if it's forwarding them to some other service rather than just verifying them when a user logs in; sometimes yes if an important customer requires it, but there should be an option to properly hash them as well). Second, since you say "a simple mapping", I assume this isn't actually using any modern cryptographic cipher primitive (AES, Blowfish/Twofish, Salsa20, etc.), so that's definitely a security bug you can file.
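    As a rough illustration of that distinction (a minimal sketch in Python, assuming the widely used cryptography package; the fixed-XOR "mapping" is purely hypothetical and not the vendor's actual scheme): a modern cipher mode uses a random IV/nonce, so encrypting the same password twice produces different ciphertexts, while a simple mapping is deterministic and length-preserving, which is typically how such schemes get spotted.

      from cryptography.fernet import Fernet  # pip install cryptography

      # Real encryption: a random IV per call, so identical plaintexts produce
      # different (and longer) tokens every time.
      f = Fernet(Fernet.generate_key())
      print(f.encrypt(b"hunter2"))
      print(f.encrypt(b"hunter2"))  # differs from the line above

      # A "simple mapping" (hypothetical fixed single-byte XOR): deterministic,
      # length-preserving, and reversible by anyone who spots the pattern.
      def obfuscate(data: bytes) -> bytes:
          return bytes(b ^ 0x5A for b in data)

      print(obfuscate(b"hunter2"))
      print(obfuscate(obfuscate(b"hunter2")))  # applying it twice recovers the plaintext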

    Look up a security contact (email address, etc.). There should be one somewhere on the site. If you can't find one, try emailing security@company.domain, or just contact their support line and ask for a security contact.

    Note that any form of reversible encryption, no matter how up-to-date its ciphers or strong its keys, suffers from a key storage problem: the program needs access to the key, which means anybody who can access the program itself can almost certainly decrypt the data. However, there are still improvements to be made from using real encryption:

    • Real encryption, even with a hardcoded key, will prevent anybody who doesn't know the key from reversing the encryption if they get access to the DB. It sounds like they currently don't even meet this - very low - bar.
    • Done correctly, the key should be unique per instance of the app. Getting access to somebody else's DB shouldn't reveal anything, even if you know the encryption key used by your own copy/instance of the software.
    • The key should be stored in a location as hard to access as possible. Ideally, it would be stored somewhere not actually extractable (like an HSM), with the app having the ability to request encryption and decryption of arbitrary strings but no other software allowed to access the HSM. At the very least the key needs to be separate from the DB, such that even an attacker with total, unfettered DB access can't get the key without finding a new vulnerability in some other part of the system. (A sketch of that separation follows after this list.)
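    To make that last point concrete, here's a minimal sketch (Python with the cryptography package; the environment-variable name and helper functions are illustrative, not anything from the app in question). The key is supplied from outside the database - in practice injected by a secrets manager, or replaced entirely by encrypt/decrypt calls proxied to an HSM/KMS - so a dump of the passwords table alone yields nothing.

      import os
      from cryptography.fernet import Fernet  # pip install cryptography

      # Hypothetical: a per-instance key injected via the environment by a secrets
      # manager, never stored in the same database (or config table) as the data.
      key = os.environ["APP_SECRETS_KEY"].encode()
      box = Fernet(key)

      def encrypt_secret(plaintext: str) -> bytes:
          """Encrypt a secret that genuinely must be reversible (e.g. an external API credential)."""
          return box.encrypt(plaintext.encode())

      def decrypt_secret(token: bytes) -> str:
          """Decrypt a stored secret; raises if the token was tampered with or the key is wrong."""
          return box.decrypt(token).decode()

    Even done this way, anyone who can run the app (or read its environment) can decrypt, which is why the next point matters for the login passwords themselves.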

    It sounds like you're already well aware of why they should be using a slow password hashing function, rather than reversible encryption of any sort. Even if they need encryption for some passwords/API keys (stuff used to access external services, not to authenticate local users), they should use encryption for those secrets only, and use secure password hashing algorithms for user passwords.
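    For the login passwords themselves, here's a minimal sketch of the hashing side using only the Python standard library's scrypt (Argon2id or bcrypt via their respective packages would be equally reasonable; the cost parameters are illustrative and should be tuned to your hardware and latency budget):

      import hashlib
      import hmac
      import secrets

      def hash_password(password: str) -> tuple[bytes, bytes]:
          """Return (salt, digest) to store; nothing reversible is kept."""
          salt = secrets.token_bytes(16)
          digest = hashlib.scrypt(password.encode(), salt=salt, n=2**14, r=8, p=1)
          return salt, digest

      def verify_password(password: str, salt: bytes, digest: bytes) -> bool:
          """Recompute the digest with the stored salt and compare in constant time."""
          candidate = hashlib.scrypt(password.encode(), salt=salt, n=2**14, r=8, p=1)
          return hmac.compare_digest(candidate, digest)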

    If the vendor won't budge - says it's not a security bug, doesn't care, or simply refuses to respond - give them some time and then (IMO) it's time to escalate. If you can, try to convince your company to threaten to cancel the contract; that's often the simplest leverage. If you can't, I would move on to name-and-shame. Companies are usually far more likely to respond when something threatens their bottom line, and bad publicity can do that. Sites like https://plaintextoffenders.com/, or just reaching out publicly on social media (especially to, or at least mentioning, well-known security figures), can help get the word out.

    Obviously that last part isn't risk-free. There's probably something in the terms of use about not "reverse engineering" the software, and although I think this level of "cryptanalysis" doesn't count at all, I am not a lawyer. If you had to bypass any attempted safeguards to keep you out of the DB - entering a username/password of admin/admin might count, though copying a DB connection string out of a plain-text config file on a system you control does not - then that increases the risk they'd think it worthwhile to involve lawyers. A smart company wouldn't do this - siccing the law on somebody who is trying to responsibly report a security issue is a good way to get the entire security community mad at you, and some of us hold grudges and make product recommendations at big companies (and others are hacktivists) - but a smart company wouldn't let things get nearly that far to begin with.

    Before you take any steps beyond just reporting the issue to the vendor, especially if you have any notion of involving your company's name, you might want to talk to the legal department. However, I am not a lawyer, and this is NOT legal advice.


