How to avoid stale reads from replication lag when all writes go to the master and all reads go to replicas?
I am stuck on a replication lag problem. I am refactoring my DB infrastructure, and the first step was a read-write split, which I implemented with ProxySQL. Now I have a master DB where all INSERTs and UPDATEs execute, and 2 replicas to which I route all SELECTs.
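For context, my ProxySQL routing is set up with query rules roughly like this (the hostgroup IDs, 10 for the writer and 20 for the readers, are illustrative, not my exact configuration):

```sql
-- ProxySQL admin interface; hostgroups 10 (writer) and 20 (readers) are example IDs.
INSERT INTO mysql_query_rules (rule_id, active, match_digest, destination_hostgroup, apply)
VALUES (1, 1, '^SELECT.*FOR UPDATE', 10, 1);   -- locking reads stay on the master
INSERT INTO mysql_query_rules (rule_id, active, match_digest, destination_hostgroup, apply)
VALUES (2, 1, '^SELECT', 20, 1);               -- all other SELECTs go to the replicas
LOAD MYSQL QUERY RULES TO RUNTIME;
SAVE MYSQL QUERY RULES TO DISK;
```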
But the main problem I have run into is replication lag. Usually when you change something in the DB, you immediately read the data back and expect the new data in the query result; with replication lag, I receive outdated data instead. After googling I found this article https://www.percona.com/blog/2018/11/29/mysql-high-availability-stale-reads-and-how-to-fix-them/ and according to it, the best option in 2018 was "ProxySQL 2.0 GTID consistent reads".
Any updates since 2018? Maybe you know a better solution? Please share!
Replication always has some lag, even if only a very small amount, because your data changes are not even written to the binary log until you commit a transaction. The replica then has to download those binary log events and apply them on the replica instance. This is normally very quick, but by definition it is > 0 lag for every event.
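You can see how far behind a given replica is at any moment; a quick way on MySQL 8.0.22+ (older versions use `SHOW SLAVE STATUS` and the `Seconds_Behind_Master` field instead):

```sql
-- Run on the replica: Seconds_Behind_Source reports the approximate lag,
-- and Retrieved_Gtid_Set vs. Executed_Gtid_Set shows what is still unapplied.
SHOW REPLICA STATUS\G
```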
It is also possible for your application to attempt to read data it just wrote before it commits its own transaction. In that case, there's no way the replica can have the same change, since it hasn't even been committed on the source instance yet.
If your app needs absolutely current data, with no possibility of reading stale data, then the queries must read from the source instance, not a replica.
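With ProxySQL, one common pattern for this is to tag the few queries that must see fully current data and route them to the writer hostgroup. A sketch, where the `force_source` marker comment and hostgroup ID 10 are assumptions of mine, not a built-in ProxySQL feature:

```sql
-- ProxySQL admin interface: match_pattern is applied to the raw query text,
-- so a query carrying the marker comment is sent to the writer hostgroup (10 here).
INSERT INTO mysql_query_rules (rule_id, active, match_pattern, destination_hostgroup, apply)
VALUES (5, 1, 'force_source', 10, 1);
LOAD MYSQL QUERY RULES TO RUNTIME;

-- Application side: this SELECT now reads from the source, not a replica.
-- SELECT /* force_source */ balance FROM accounts WHERE id = 1;
```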
But not every read needs such strictly current data. Each query in your application potentially has a different sensitivity to reading data that lags a little behind the source.
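For queries that do need read-your-writes semantics but can still run on a replica, MySQL's GTID functions offer a middle ground: after committing on the source, capture the executed GTID set, then make the replica wait for it before reading. A sketch, where the GTID value and table name are placeholders:

```sql
-- On the source, after COMMIT: capture the executed GTID set.
SELECT @@GLOBAL.gtid_executed;

-- On the replica, before the dependent read: block (up to 1 second here)
-- until that GTID set has been applied. Returns 0 on success, 1 on timeout.
SELECT WAIT_FOR_EXECUTED_GTID_SET('3E11FA47-71CA-11E1-9E33-C80AA9429562:1-77', 1);
SELECT * FROM orders WHERE id = 123;  -- now at least as fresh as the commit above
```

If the wait times out, the application can fall back to reading from the source.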
I wrote a presentation https://www.percona.com/resources/technical-presentations/readwrite-splitting-mysql-and-php-percona-mysql-webinars for Percona, which describes several different solutions for different levels of replication lag tolerance.