No free buffers in MariaDB 10.6



  • I have recently updated MariaDB from 10.4 to 10.6. One major change in the underlying InnoDB engine is the removal of multiple buffer pool instances (https://mariadb.com/kb/en/innodb-buffer-pool/#innodb_buffer_pool_instances), so there is now always only one.

    I believe this affects some of the math done for flushing and free buffers.

    What worries me is the value of Free buffers reported by SHOW ENGINE INNODB STATUS, which in my case is just 1 (see below).

    Is this OK? Should I worry about having almost no free buffers?

    On the old MariaDB 10.4, with the same load and the same InnoDB config, I had hundreds/thousands of free buffers (about 1024 per pool, and with 64 pools that's a lot).

    I am basing my ideas on https://www.percona.com/blog/2020/05/14/tuning-mysql-innodb-flushing-for-a-write-intensive-workload/ and I have tried adjusting innodb_lru_scan_depth to various values, from the default 1024 all the way down to 128 and up to 4096. This changed NOTHING for the Free buffers in the InnoDB engine status.
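
    For reference, this is roughly how I applied and verified each change (innodb_lru_scan_depth is dynamic, and Innodb_buffer_pool_pages_free is the status counter behind the Free buffers line):

    -- Applied at runtime; no restart needed since innodb_lru_scan_depth is dynamic
    SET GLOBAL innodb_lru_scan_depth = 4096;

    -- Watch the free-page counter (the Free buffers line in the engine status)
    SHOW GLOBAL STATUS LIKE 'Innodb_buffer_pool_pages_free';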

    Here are some variables I use:

    +--------------------------------+-----------+
    | Variable_name                  | Value     |
    +--------------------------------+-----------+
    | innodb_adaptive_flushing       | ON        |
    | innodb_adaptive_flushing_lwm   | 10.000000 |
    | innodb_flush_log_at_timeout    | 1         |
    | innodb_flush_log_at_trx_commit | 2         |
    | innodb_flush_method            | O_DIRECT  |
    | innodb_flush_neighbors         | 0         |
    | innodb_flush_sync              | ON        |
    | innodb_flushing_avg_loops      | 30        |
    | innodb_io_capacity             | 100       |
    | innodb_io_capacity_max         | 2000      |
    | innodb_lru_flush_size          | 32        |
    | innodb_lru_scan_depth          | 1024      |
    | innodb_max_dirty_pages_pct     | 80.000000 |
    | innodb_max_dirty_pages_pct_lwm | 0.000000  |
    | innodb_page_size               | 16384     |
    | innodb_read_io_threads         | 8         |
    | innodb_version                 | 10.6.7    |
    | innodb_write_io_threads        | 8         |
    +--------------------------------+-----------+
    

    And here is the output of SHOW ENGINE INNODB STATUS:

    -------------------------------------
    INSERT BUFFER AND ADAPTIVE HASH INDEX
    -------------------------------------
    Ibuf: size 1, free list len 19584, seg size 19586, 0 merges
    merged operations:
     insert 0, delete mark 0, delete 0
    discarded operations:
     insert 0, delete mark 0, delete 0
    0.00 hash searches/s, 0.00 non-hash searches/s
    ---
    LOG
    ---
    Log sequence number 17113639169452
    Log flushed up to   17113638981154
    Pages flushed up to 17111149633411
    Last checkpoint at  17111149633411
    0 pending log flushes, 0 pending chkp writes
    14645584 log i/o's done, 97.00 log i/o's/second
    ----------------------
    BUFFER POOL AND MEMORY
    ----------------------
    Total large memory allocated 95026151424
    Dictionary memory allocated 562154200
    Buffer pool size   5710848
    Free buffers       1
    Database pages     5710847
    Old database pages 2108085
    Modified db pages  1003052
    Percent of dirty pages(LRU & free pages): 17.564
    Max dirty pages percent: 80.000
    Pending reads 0
    Pending writes: LRU 0, flush list 0
    Pages made young 1978993, not young 8837190
    0.00 youngs/s, 0.00 non-youngs/s
    Pages read 4210165, created 19119199, written 39595529
    0.00 reads/s, 0.00 creates/s, 0.00 writes/s
    Buffer pool hit rate 1000 / 1000, young-making rate 0 / 1000 not 0 / 1000
    Pages read ahead 0.00/s, evicted without access 0.00/s, Random read ahead 0.00/s
    LRU len: 5710847, unzip_LRU len: 0
    I/O sum[145]:cur[34], unzip sum[0]:cur[0]
    


  • The innodb_lru_scan_depth affects the rate of flushing modified (or "dirty") pages in the buffer pool.

    The number of free buffers measures only how many pages of the buffer pool are unoccupied. Zero free buffers indicates that the buffer pool is fully occupied, but it does not necessarily indicate that the pages are dirty. They might be mostly or entirely read-only copies of data on disk, loaded into the buffer pool to satisfy SELECT queries, and therefore they do not require flushing. So the innodb_lru_scan_depth would have no effect in that scenario.

    The number of free buffers alone cannot be used to guide tuning the innodb_lru_scan_depth.
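
    As a rough check of whether the occupied pages are actually dirty, you can compare the buffer pool page counters (these are standard InnoDB status variables, nothing specific to your setup):

    -- Compare total/data/free/dirty page counts in the buffer pool
    SHOW GLOBAL STATUS LIKE 'Innodb_buffer_pool_pages%';

    In your output, Modified db pages is about 1 million out of roughly 5.7 million total (about 18% dirty, matching the "Percent of dirty pages" line), so most of the occupied pages are clean cached copies that can be evicted on demand.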


    Update:

    I thought of a way the flushing could affect the checkpoints. If your rate of writing transactions is very high, you could be filling the log file faster than the background InnoDB threads can flush dirty pages. In that case, you could increase the redo log size to give InnoDB more headroom, or you could tune innodb_lru_scan_depth to try to flush pages faster. But there's a practical limit to the flushing rate on any storage.
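
    You can estimate how close you are to that situation from the checkpoint age, i.e. the gap between the current LSN and the last checkpoint. A rough sketch using the numbers from your posted status (the arithmetic, not the query, is the point here):

    -- From the LOG section above:
    --   Log sequence number 17113639169452
    --   Last checkpoint at  17111149633411
    -- Checkpoint age = 17113639169452 - 17111149633411
    --                = 2489536041 bytes, roughly 2.3 GiB of redo
    -- not yet checkpointed. Compare it to the configured redo log size:
    SELECT @@innodb_log_file_size / 1024 / 1024 / 1024 AS log_size_gib;

    If the checkpoint age regularly approaches the redo log size, InnoDB will throttle and eventually stall writes to protect the log, which is the symptom to watch for.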

    Ultimately, you could be trying to write transactions faster than a single server can keep up with. In that case, you need to split the database writes over multiple servers. Many growing applications face this challenge. It's not easy.



