How good/bad is this implementation for securing passwords?



  • I've just started a new project and decided to try something new, so I'm no longer using Entity Framework and the Identity framework. I decided to try MongoDB, so I wanted to design the storage of users in the database from scratch.

    In the database I'm going to store the password hash and salt. My salt generator looks like this:

    internal class SaltGenerator : ISaltGenerator
    {
        private const int SaltLength = 32;
    
        public string GenerateSalt()
        {
            var saltBytes = new byte[SaltLength];
            using (var cryptoService = new RNGCryptoServiceProvider())
            {
                cryptoService.GetNonZeroBytes(saltBytes);
            }
    
            return Convert.ToBase64String(saltBytes);
        }
    }
    

    and Password hash generator:

    internal class PasswordHashGenerator : IPasswordHashGenerator
    {
        public string HashPassword(string password, string salt)
        {
            if (string.IsNullOrEmpty(password))
                throw new ArgumentNullException(nameof(password));
    
            if (string.IsNullOrEmpty(salt))
                throw new ArgumentNullException(nameof(salt));
    
            using(SHA256 sha256 = SHA256.Create())
            {
                var computedPassword = $"{password}{salt}";
                var passwordBytes = sha256.ComputeHash(Encoding.UTF8.GetBytes(computedPassword));
    
                return Convert.ToBase64String(passwordBytes);
            }
        }
    }
    

    And to check that everything works, some very simple unit tests:

    [Fact]
    public void Should_GenerateSalt()
    {
        // Arrange
        ISaltGenerator _saltGenerator = new SaltGenerator();

        // Act
        var salt = _saltGenerator.GenerateSalt();

        // Assert
        Assert.False(string.IsNullOrEmpty(salt));
    }

    [Theory]
    [InlineData("testPassword", "testHash")]
    [InlineData("nextTestPasword", "nextTestHashHash")]
    [InlineData("testPasswooooord", "testHaaaaaash")]
    [InlineData("c98b7acd-19af-45a0-b133-96a43c8d2204", "eafe4fbb-4480-462d-9d3e-6d20a2128e8a")]
    public void Should_GeneratePasswordHash(string password, string salt)
    {
        // Arrange
        IPasswordHashGenerator _passwordHashGenerator = new PasswordHashGenerator();

        // Act
        var hashedPassword = _passwordHashGenerator.HashPassword(password, salt);

        // Assert
        Assert.False(string.IsNullOrEmpty(hashedPassword));

        // The same password and salt must always produce the same hash
        var nextHashedPassword = _passwordHashGenerator.HashPassword(password, salt);
        Assert.Equal(hashedPassword, nextHashedPassword);
    }
    

    My question: how good or bad is this code? What should I change to be sure that passwords are protected well enough? On the internet I also found something like this: How to Use Argon2 for Password Hashing in C#. Is that implementation much better?



  • Password hashing, like most cryptographic operations, is something you should never try creating from scratch, and should avoid even implementing from somebody else's design unless there's some reason you can't use a well-tested library.

    The obvious flaw in your scheme is that SHA256 is a fast hash, which means it's possible to attempt to brute-force it at a rate of literally billions of candidates per second on a single decent GPU. It also takes almost no RAM - it's designed to be viable for low-power embedded processors - which means it can be parallelized extremely efficiently not only in GPUs but also in things like FPGAs and ASICs, which are even more efficient. The use of a salt means that an attacker can't pre-compute the password hash candidates (they can't create a rainbow table), but they can take the hash out of the database and stick it into a cracking tool to try the billion most likely passwords for each user in under a second each, using commodity hardware or relatively cheap time on cloud infrastructure.

    The solution is to use a purpose-made password hashing algorithm. All of these, in addition to using a salt, have a tunable cost, also called a work factor, which controls how computationally expensive it is to hash a password (and hence to check a password candidate). Generally, you want to set this cost as high as possible without overloading your login server. For reasonably sensitive sites, recommendations of 100+ ms per password are common. If it's a local system (such as logging into a local app), you can go much higher than that. Good password hashing algorithms also use significant (and, for the best ones, tunable) amounts of RAM, which makes parallel attacks much more difficult.

    As of April 2021, all the algorithms worth considering have implementations for common platforms. In decreasing order of security features, they are:

    • The argon2 algorithm family, most typically the argon2id variant. Supports tuning CPU and RAM costs. Winner of the Password Hashing Competition (which concluded in 2015), it is the newest, but it was built on lessons from earlier designs and has been extensively reviewed and tested.
    • The scrypt algorithm, which is the first one of note to have a tunable memory cost (in addition to tunable CPU cost). It's been around longer than argon2, if you want to avoid the cutting edge, but honestly I don't see much reason to use it now that argon2 exists.
    • The bcrypt algorithm, which is somewhat showing its age but is very widely used, with library implementations for everything. Its memory cost is fixed and modest by modern standards, though it's still generally inefficient to parallelize too heavily. It's atypical in that it has a maximum password length (72 bytes), though in practice you rarely run into it.
    • The PBKDF2 algorithm family, ideally using one of the stronger hash functions (a member of the SHA-2 or SHA-3 families). It is the oldest on this list (and was technically created for deriving keys from passwords rather than hashing them for verification, though it's very widely used for the latter purpose too), and it lacks any memory hardness at all. Aside from interoperability with legacy code, the only place I would recommend it is if you have a legal requirement (such as FIPS) to use only specific constructions. For all other purposes it is inferior to the options above.
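    If you do end up constrained to PBKDF2 (or just want something with zero extra dependencies while evaluating argon2 libraries), .NET 6+ ships it in the base class library. A minimal sketch of a store/verify pair - the salt size, output size, and iteration count here are illustrative, and you should calibrate the iteration count yourself:

    ```csharp
    using System;
    using System.Security.Cryptography;

    internal static class Pbkdf2Hasher
    {
        // Illustrative parameters - tune Iterations to ~100 ms on your server
        private const int SaltSize = 16;        // 128-bit salt
        private const int HashSize = 32;        // 256-bit output
        private const int Iterations = 600_000;

        public static (string Hash, string Salt) HashPassword(string password)
        {
            var salt = RandomNumberGenerator.GetBytes(SaltSize);
            var hash = Rfc2898DeriveBytes.Pbkdf2(
                password, salt, Iterations, HashAlgorithmName.SHA256, HashSize);
            return (Convert.ToBase64String(hash), Convert.ToBase64String(salt));
        }

        public static bool Verify(string password, string hash, string salt)
        {
            var expected = Convert.FromBase64String(hash);
            var actual = Rfc2898DeriveBytes.Pbkdf2(
                password, Convert.FromBase64String(salt),
                Iterations, HashAlgorithmName.SHA256, expected.Length);
            // Constant-time comparison avoids leaking where the mismatch is
            return CryptographicOperations.FixedTimeEquals(actual, expected);
        }
    }
    ```

    Note there's no manual string concatenation of password and salt: the salt is a separate, typed input to the KDF, which is one less thing to get wrong.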

    Other considerations:

    • There's nothing wrong with using a big salt, but 32 bytes - 256 bits - is possibly overkill and can be reduced if you want to save some DB space. Library implementations of the recommended functions typically use 64-192 bits as their defaults, although some go higher.
    • There's not really any reason to avoid zero bytes in your salt (GetNonZeroBytes slightly biases the output), though - especially with such a large salt - it doesn't hurt anything either.
    • In general, libraries will offer to automatically generate the salt for you, and it's fine to do so.
    • You'll want to adjust your cost(s) over time, keeping up with upgrades to your hardware.
    • One nice thing about most libraries is that they store the salt and costs inside the output (the password verifier, including the hash digest, that is kept in the DB) and provide a simple verify function that takes a password and a verifier, extracts the salt and costs from the verifier, and uses them to re-hash the password and check for a match. This both simplifies your logic and means you can adjust things like costs and salt size over time without explicitly worrying about backward compatibility (though you may still want to re-hash with the new parameters and store the new output for each password that verifies successfully after you change them).
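    To make that last point concrete, here is a sketch using the third-party BCrypt.Net-Next NuGet package (the API names are taken from its documentation as I recall them; argon2 bindings typically expose the same store/verify shape):

    ```csharp
    using System;
    using BCryptNet = BCrypt.Net.BCrypt;

    class VerifierExample
    {
        static void Main()
        {
            // The salt is generated internally and embedded, along with the
            // work factor, in the verifier string - one string to store per user.
            string verifier = BCryptNet.HashPassword("correct horse", 12);

            // Verify() extracts the salt and cost from the verifier and re-hashes.
            bool ok = BCryptNet.Verify("correct horse", verifier);

            // Detect verifiers created under an older, weaker work factor so
            // they can be re-hashed on the next successful login.
            bool stale = BCryptNet.PasswordNeedsRehash(verifier, 12);

            Console.WriteLine($"ok={ok}, stale={stale}");
        }
    }
    ```

    Because the verifier is self-describing, raising the work factor later only affects newly stored hashes; old ones keep verifying with their embedded cost until you choose to upgrade them.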

