Does the recommended tempdb data file count depend on the total processors on the computer or on the processors in a single NUMA node?



  • The number of secondary data files depends on the number of (logical) processors on the machine. As a general rule, if the number of logical processors is less than or equal to eight, use the same number of data files as logical processors. If the number of logical processors is greater than eight, use eight data files.

    Link: https://docs.microsoft.com/en-us/sql/relational-databases/databases/tempdb-database?view=sql-server-ver15#physical-properties-of-tempdb-in-sql-server

    In Server Properties, under the Processors section, I can see there are 2 NUMA nodes with 12 processors each. In the SQL Server error log I can see: "SQL Server detected 24 sockets with 1 cores per socket and 1 logical processors per socket. 24 total logical processors."

    Automatic soft-NUMA was enabled because SQL server has detected hardware NUMA nodes 
    with greater than 8 physical cores. 
    


    So it looks like SQL Server is presenting 4 NUMA nodes with 6 logical processors each (presumably the two hardware nodes, each having more than 8 physical cores, were split in half by automatic soft-NUMA). In this scenario, what is the recommended number of tempdb data files, and does it depend on the number of NUMA nodes, the total number of logical processors, or the number of logical processors per NUMA node?

    I am asking because the previous DBA set the number of tempdb data files to 6, which is the number of logical processors in a single NUMA node, whereas my reading of the Microsoft documentation linked above is that best practice is 8 data files, increased in increments of 4 (12, 16, 20, 24) only if needed. (The query sketch at the end of this post shows one way to confirm the layout and the current file count.)
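
    For what it's worth, a query along these lines shows the logical CPU count, the soft-NUMA status, the schedulers per node, and the current tempdb data files. This is only a sketch and assumes SQL Server 2016 or later, since sys.dm_os_sys_info only exposes softnuma_configuration_desc from that version:

        -- Total logical CPUs and whether automatic soft-NUMA kicked in
        SELECT cpu_count,
               hyperthread_ratio,
               softnuma_configuration_desc   -- OFF / ON / MANUAL
        FROM   sys.dm_os_sys_info;

        -- Schedulers per (soft-)NUMA node
        SELECT node_id,
               memory_node_id,
               online_scheduler_count
        FROM   sys.dm_os_nodes
        WHERE  node_state_desc = 'ONLINE';

        -- Current tempdb data files
        SELECT name,
               type_desc,
               size * 8 / 1024 AS size_mb    -- size is stored in 8 KB pages
        FROM   tempdb.sys.database_files
        WHERE  type_desc = 'ROWS';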



  • TempDB data file configuration is generally based on the total logical CPU count, regardless of the NUMA node count.

    The typical reason for adding data files is contention on the GAM, SGAM, and PFS allocation pages as objects are created and dropped within a single file. Unlike user databases, TempDB has objects created and dropped constantly. Adding data files works around that bottleneck by spreading those allocations across multiple files. As the CPU count goes up, so does the potential for contention within a single data file (the waiting-tasks sketch at the end of this answer shows one way to spot it).

    I’ve followed the general guidelines in the linked doc for years: set the TempDB data file count equal to the CPU count, up to a max of 8. Even on systems with 16+ CPUs, I’ll initially create 8 files. Only if contention is observed in TempDB would I add more, in increments of 4 (see the ALTER DATABASE sketch at the end of this answer).

    When talking about other configurations, like MAXDOP, the general reason for NUMA-related limits is to keep a single query running against the same memory bank: you don’t want one query spread across CPU cores on different NUMA nodes. With TempDB contention, however, you’re dealing with all of the queries running on the server, not just the scope of a single query going parallel, so NUMA doesn’t factor in.
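
    To check for that contention, a sketch along these lines looks for current PAGELATCH waits on pages in database_id 2 (TempDB). The resource_description column comes back as 'dbid:fileid:pageid', and the PFS, GAM, and SGAM pages sit at predictable page numbers (pages 1, 2, and 3 at the start of each data file, repeating at fixed intervals):

        -- Sessions currently waiting on latches for tempdb pages
        SELECT session_id,
               wait_type,
               wait_duration_ms,
               resource_description              -- format: 'dbid:fileid:pageid'
        FROM   sys.dm_os_waiting_tasks
        WHERE  wait_type LIKE 'PAGELATCH%'
          AND  resource_description LIKE '2:%';  -- database_id 2 = tempdb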
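
    If contention does show up, adding data files is a plain ALTER DATABASE. A rough sketch of going from 8 to 12 files follows; the file names, drive letter, and sizes are placeholders, so match the size and autogrowth of the existing data files to keep proportional fill even:

        ALTER DATABASE tempdb
            ADD FILE (NAME = tempdev9,  FILENAME = 'T:\tempdb\tempdev9.ndf',  SIZE = 8GB, FILEGROWTH = 1GB);
        ALTER DATABASE tempdb
            ADD FILE (NAME = tempdev10, FILENAME = 'T:\tempdb\tempdev10.ndf', SIZE = 8GB, FILEGROWTH = 1GB);
        ALTER DATABASE tempdb
            ADD FILE (NAME = tempdev11, FILENAME = 'T:\tempdb\tempdev11.ndf', SIZE = 8GB, FILEGROWTH = 1GB);
        ALTER DATABASE tempdb
            ADD FILE (NAME = tempdev12, FILENAME = 'T:\tempdb\tempdev12.ndf', SIZE = 8GB, FILEGROWTH = 1GB);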



