A couple of years ago it was easy to mount a DoS attack against .Net web services, because request headers were added to a dictionary. Back then the hash of a key was predictable, so with a bunch of machines in Azure and a few days of compute it was possible to generate enough strings hashing to the same value. A fake request carrying a lot of such headers (a few hundred is typically enough) would then cause the web server to spend 100% of its CPU searching and updating the header dictionary. Since I recently started doing some work in go (aka golang), I immediately wondered how this works in that language.
After a little bit of digging I was happy to find that go uses a 32-bit random number to seed each instance of a map (aka dictionary, for .Net developers). It is a clear and obvious implementation; just search for hash0 in this source. Case closed.
But what about .Net? It took some more time to figure out how it works there. As far as I can read the source, the .Net dictionary starts out without a randomized seed. Only after a lot of collisions are detected (the default threshold is 100) does it swap the comparer used to generate hash codes for a randomized comparer that uses a 64-bit seed.
Even if I'm wrong and the randomized comparer is used from the start, this is a very unintuitive solution in my opinion. First of all, the seeded hash is a property the map needs in order to avoid expensive collisions: the hash of a key must be both unpredictable and well distributed across the hash space. A hash for an object used outside of a dictionary, on the other hand, needs to be predictable, so that the same value always produces the same hash. It looks to me like the .Net fix was made in the wrong place and over-engineered. But an A for effort in using a 64-bit seed rather than 32.