When people talk about running a database in Docker, they do not mean storing the data inside the container; they mean running the DB software from a Docker image and mounting the data as a volume (a bind mount, not the container's writable layer).
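As a concrete illustration of that split, here is a sketch using the official `postgres` image (chosen because its data path is well documented; the host path and password are placeholders):

```shell
# The DB software comes from the image; the data directory lives on the
# host and merely appears inside the container at the path the server
# expects. /srv/pgdata and the password are made-up placeholders.
docker run -d --name mydb \
  -v /srv/pgdata:/var/lib/postgresql/data \
  -e POSTGRES_PASSWORD=changeme \
  postgres:16

# Removing the container leaves /srv/pgdata untouched; a new container
# with the same bind mount picks up the existing database.
docker rm -f mydb
```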
Volumes are an essential part of Docker, not something flaky or just tacked on. Docker is not made only for stateless (micro)services.
Wish as I might, I cannot find a technical reason not to run a database in Docker, so unfortunately I'll take the other side of the argument and hence maybe not give you the answer you are looking for.
(I'm using Oracle as an example because I'm familiar with it, both bare metal and dockerized, and because it's quite a notorious beast for being just a bit non-trivial to operate if you go past default settings.)
Packaging up the DB software itself in a container gives you the usual benefits - having the same version everywhere, avoiding dependency/shared library issues, being able to spin up the exact same DB on developer laptops or wherever you need it.
It is a snap getting it to run anywhere; updating is trivial, and so on. All the Docker benefits apply. There is an Oracle image on Dockerhub which allows you to spin up a working DB in a minute or three (and images for the other major databases as well, of course).
People have run performance tests and found no I/O difference between volumes and bare metal (https://www.percona.com/blog/2016/02/11/measuring-docker-io-overhead/, https://stackoverflow.com/questions/21889053/what-is-the-runtime-performance-cost-of-a-docker-container).
Under the hood, it's not like Docker somehow intercepts all I/O, anyway. It just gets creative with standard Linux tools (bind mounts in this case, plus the namespace and cgroup machinery that makes the Docker-fu possible at all).
Obviously that does not mean you can run two instances of the DB and just have them work on the same files, but nobody is suggesting that. Docker does not give you automatic, magically race-free simultaneous access to volumes, and it never pretended to. The rest of the benefits still apply. If your DB does not itself detect conflicts like this, you had better supply an entrypoint/CMD script in the image which refuses to spin up a second container when the volume is already in use.
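Such a guard can be sketched as a tiny POSIX shell entrypoint. Everything here is illustrative: the lock file name is made up, real RDBMSs usually bring their own locking (e.g. Oracle's lk&lt;SID&gt; files), and a check-then-create like this one is not atomic, so a production version would use `mkdir` or `flock` instead:

```shell
#!/bin/sh
# Illustrative lock guard for a DB container entrypoint. A real image
# would `exec` the database server after acquiring the lock.

acquire_lock() {
    # $1 = data directory shared via the volume
    lock="$1/.container.lock"
    if [ -e "$lock" ]; then
        echo "refusing to start: $1 is already in use" >&2
        return 1
    fi
    echo "$$" > "$lock"           # record the owning process
    trap 'rm -f "$lock"' EXIT     # release the lock on clean shutdown
}

# Simulate two containers pointed at the same volume:
demo_dir=$(mktemp -d)
acquire_lock "$demo_dir" && echo "first start: ok"
acquire_lock "$demo_dir" 2>/dev/null || echo "second start: refused"
```

The first call succeeds and records its PID; the second sees the existing lock and refuses, which is exactly the behavior you want from a second container started against a volume already in use.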
You have to be a little more careful spinning up/shutting down the container (just as you would not simply switch off a bare metal DB server), but that should be quite manageable.
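In practice, "being careful" mostly means giving the DB time for a clean shutdown instead of Docker's default 10-second SIGTERM grace period (the container and image names below are placeholders):

```shell
# `docker stop` sends SIGTERM, waits, then SIGKILLs. The default wait
# is 10 seconds, which a busy RDBMS may well exceed while flushing and
# closing its files; give it more time.
docker stop --time 120 mydb

# Or bake the timeout into the container so a plain `docker stop`
# (and daemon shutdown) also waits:
docker run -d --stop-timeout 120 --name mydb myorg/mydb-image
```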
Now, depending on circumstances, there may be soft reasons not to do it:
Oracle (the company), for example, might not support you if you run their RDBMS in a Docker container on production systems (in 2021, 3 years after writing this answer, it is unclear to me if that is still true). But maybe you are using dockerized Oracle RDBMS images only for your developers and the testing environment, where you would not need their support in any case, reserving it for a bare metal production server. (But don't forget to pay for your licenses...)
If the ops guys are unfamiliar with Docker, it might just be a bit easier to accidentally kill everything, destroy your data files, etc.
If you already have big dedicated bare metal DB machines, with large amounts of very fast dedicated SAN storage, running nothing else anyway, then there is just no point in containerizing them: you will never simply spin up another server when there are hundreds of GB or even TB of data. After all, for production, an RDBMS like Oracle is very, very advanced in all the replication, data integrity, and no-downtime failover aspects. Note that this argument only says "you do not need to containerize your RDBMS". It does not say "you should not do it" - maybe you want to do it because you wish to roll out database software upgrades through containers, or for whatever other reason you can imagine.
So there you go. By all means do dockerize your DB, at the very least for your developers (who will be eternally thankful) and your testing environments. In production, it will come down to taste, and there, at least, I would prefer whatever solution sits best with the specialized DBA/ops people - if they have decades of experience running bare metal DB servers, then by all means trust them to continue doing so. But if you are a startup with all its IT in the cloud anyway, then a Docker container is just one more layer of the onion.