InnoDB: Cannot close file ./tablefile.ibd because of 1 pending operations
MariaDB Version: 10.5.13-9
I am facing multiple unsuccessful data migration (copy) attempts from my old legacy data environment to my DBaaS container environment. I perform the migration with mysqldump, piped into a restore inside the container, which runs as PID 1 under rootless podman. MariaDB states that this issue was fixed in version 10.5.13-9, and they cannot reproduce the bug. I cannot find the reason why InnoDB keeps logging [Note] InnoDB: Cannot close file ... because of 1 pending operations.
I enabled core dumps to analyze the bug. The error log writes "Writing a core file... Working directory at /var/lib/mysql", but I cannot find the core file and have not been able to determine why.
I even tried creating the container with the podman parameters below, but no luck:

--ulimit core=-1 \
--privileged \
--security-opt seccomp=unconfined \
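One possible reason the core file never appears, even with the ulimit raised: the kernel's core_pattern is a host-wide setting, and if it starts with `|` the core is piped to a helper on the host (e.g. systemd-coredump), so nothing is written to the container's working directory. A quick check, assuming a Linux host:

```shell
# Where will the kernel deliver core dumps? A pattern beginning with '|'
# pipes the core to a host-side helper (e.g. systemd-coredump), so the
# file will never show up under /var/lib/mysql inside the container.
cat /proc/sys/kernel/core_pattern

# Per-process soft limit on core size in the current shell
# (0 disables core files entirely; 'unlimited' is what --ulimit core=-1 sets):
ulimit -c
```

If the pattern points at systemd-coredump, `coredumpctl` on the host is where to look for the dump.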
MariaDB [(none)]> show global status like "%pending%";
+------------------------------+-------+
| Variable_name                | Value |
+------------------------------+-------+
| Innodb_data_pending_fsyncs   | 0     |
| Innodb_data_pending_reads    | 0     |
| Innodb_data_pending_writes   | 0     |
| Innodb_os_log_pending_fsyncs | 0     |
| Innodb_os_log_pending_writes | 0     |
+------------------------------+-------+
5 rows in set (0.001 sec)
2022-01-25 11:19:05 0 [Note] mysqld: ready for connections.
Version: '10.5.13-9-MariaDB-enterprise-log'  socket: '/var/run/mysqld/mysqld.sock'  port: 53805  MariaDB Enterprise Server
2022-01-25 11:27:26 14 [Note] Start binlog_dump to slave_server(52396), pos(mysqld-bin.000001, 369), using_gtid(0), gtid('')
2022-01-25 11:38:06 15 [Note] InnoDB: Cannot close file ./pptdatabase/fallback_stf_stat_availability_sweeper_valuecritanytype#P#p736783.ibd because of 1 pending operations
.
.
.
.
.
2022-01-25 14:11:30 15 [Note] InnoDB: Cannot close file ./pptdatabase/stf_stat_availability_externalcall_avl_hist#P#p738548.ibd because of 1 pending operations
220125 14:11:32 [ERROR] mysqld got signal 11 ;
This could be because you hit a bug. It is also possible that this binary
or one of the libraries it was linked against is corrupt, improperly built,
or misconfigured. This error can also be caused by malfunctioning hardware.
To report this bug, see https://mariadb.com/kb/en/reporting-bugs
We will try our best to scrape up some info that will hopefully help
diagnose the problem, but since we have already crashed,
something is wrong and this may fail.
Server version: 10.5.13-9-MariaDB-enterprise-log
key_buffer_size=134217728
read_buffer_size=2097152
max_used_connections=13
max_threads=1502
thread_count=13
It is possible that mysqld could use up to
key_buffer_size + (read_buffer_size + sort_buffer_size)*max_threads = 9397787 K  bytes of memory
Hope that's ok; if not, decrease some variables in the equation.

Thread pointer: 0x0
Attempting backtrace. You can use the following information to find out
where mysqld died. If you see no messages after this, something went
terribly wrong...
stack_bottom = 0x0 thread_stack 0x49000
addr2line: 'mysqld': No such file
mysqld(my_print_stacktrace+0x2e)[0x563b87eb047e]
Printing to addr2line failed
mysqld(handle_fatal_signal+0x485)[0x563b8792b425]
sigaction.c:0(__restore_rt)[0x7f943ec57b20]
addr2line: 'mysqld': No such file
mysqld(+0xdfc944)[0x563b87dc9944]
mysqld(+0xdfd881)[0x563b87dca881]
mysqld(+0x652bdd)[0x563b8761fbdd]
mysqld(+0xe1086a)[0x563b87ddd86a]
pthread_create.c:0(start_thread)[0x7f943ec4d14a]
:0(__GI___clone)[0x7f943e061dc3]
The manual page at https://mariadb.com/kb/en/how-to-produce-a-full-stack-trace-for-mysqld/
contains information that should help you find out what is causing the crash.
Writing a core file...
Working directory at /var/lib/mysql
Resource Limits:
Another attempt after increasing the buffer pool, log file size, and log buffer reproduces the same behaviour; InnoDB keeps logging these note messages.

default_storage_engine = InnoDB
innodb_buffer_pool_size = 15G
innodb_log_file_size = 10G
innodb_log_buffer_size = 10G
innodb_file_per_table = 1
innodb_open_files = 800
innodb_io_capacity = 15000
innodb_read_io_threads = 10
innodb_write_io_threads = 20
innodb_flush_method = O_DIRECT  # can be fsync O_DSYNC O_DIRECT
innodb_flush_log_at_trx_commit = 1
Version: '10.5.13-9-MariaDB-enterprise-log'  socket: '/var/run/mysqld/mysqld.sock'  port: 53805  MariaDB Enterprise Server
2022-01-26  0:06:27 14 [Note] Start binlog_dump to slave_server(52396), pos(mysqld-bin.000001, 369), using_gtid(0), gtid('')
2022-01-26  0:17:58 16 [Note] InnoDB: Cannot close file ./pptdatabase/stf_stat_availability_officeoffice_bench_aws_mhist.ibd because of 1 pending operations
.
.
The signal 11 (SIGSEGV) looks like a bug. Can you please file it in the MariaDB bug tracker, against the 10.5.13 release, with as many details as possible? If possible, try to produce a full stack trace (https://mariadb.com/kb/en/how-to-produce-a-full-stack-trace-for-mysqld/) by attaching a debugger to the server before it crashes, or by enabling core dumps and invoking the debugger on the core dump.
We have not seen any crash like that in our internal testing, but then again, we typically run stress tests with small amounts of data, and on RAM disk.
The scenario resembles that of https://jira.mariadb.org/browse/MDEV-25215, and the originally reported bug (excessive logging of "Cannot close", several times per second) was fixed.
The scenario is that some files need to be closed so that the maximum number of open files is not exceeded. We are not allowed to close a file handle on which writes or fdatasync() are pending. I think that, at the same time, a log checkpoint must be in progress. A larger innodb_log_file_size should make log checkpoints less frequent.
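The mechanism described above can be illustrated with a deliberately simplified sketch (this is not InnoDB's actual code; the class and method names are invented for illustration): a capped file-handle cache may only evict a handle that has no pending writes or fdatasync calls, so while a checkpoint keeps I/O in flight, eviction fails, a note is logged, and the operation must be retried later.

```python
# Hypothetical model of a capped file-handle cache that refuses to close
# handles with in-flight I/O, analogous to innodb_open_files enforcement.

class Handle:
    def __init__(self, name):
        self.name = name
        self.pending = 0        # in-flight writes / fdatasync calls
        self.closed = False

class HandleCache:
    def __init__(self, max_open):
        self.max_open = max_open
        self.open_handles = []  # LRU order: oldest first
        self.log = []

    def open(self, name):
        # Evict until we are under the open-file cap.
        while len(self.open_handles) >= self.max_open:
            victim = self.open_handles[0]
            if victim.pending:
                # Cannot close a handle with pending operations; InnoDB
                # logs a similar [Note] and retries the eviction later.
                self.log.append(f"Cannot close file {victim.name} "
                                f"because of {victim.pending} pending operations")
                return None     # caller must retry once the I/O completes
            victim.closed = True
            self.open_handles.pop(0)
        handle = Handle(name)
        self.open_handles.append(handle)
        return handle

cache = HandleCache(max_open=1)
a = cache.open("./db/t1.ibd")
a.pending = 1                   # a write is in flight (e.g. during a checkpoint)
assert cache.open("./db/t2.ibd") is None   # eviction blocked, note logged
a.pending = 0                   # "checkpoint" done, fdatasync completed
b = cache.open("./db/t2.ibd")
assert b is not None and a.closed          # eviction now succeeds
print(cache.log[0])
# → Cannot close file ./db/t1.ibd because of 1 pending operations
```

In this toy model, raising `max_open` (as `innodb_open_files` does) or finishing the pending I/O sooner (as less frequent checkpoints would) both reduce how often the "cannot close" path is hit.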