Is `SQL Server: Memory Manager: Target Server Memory (KB)` the ideal amount of memory based on recent workload, or the max memory setting?



  • Link: https://docs.microsoft.com/en-us/sql/relational-databases/performance-monitor/monitor-memory-usage?view=sql-server-ver15

    SQL Server: Memory Manager: Target Server Memory (KB) This counter indicates an ideal amount of memory SQL Server could consume, based on recent workload. Compare to Total Server Memory after a period of typical operation to determine whether SQL Server has a desired amount of memory allocated.

    I am getting conflicting definitions on the internet about the SQL Server: Memory Manager: Target Server Memory (KB) counter.

    The MSDN doc definition is as shown in the quote above, whereas other online sources say this is the max server memory that we configure under the SQL Server options.

    Example: https://www.mssqltips.com/sqlservertip/6571/perfmon-counters-for-sql-server-memory/

    Target Server Memory (KB) is the amount of memory that SQL Server can potentially allocate to the buffer pool under its current workload. The Target Memory (KB) counter would reflect SQL Server "max server memory (MB)" when it is set as that would be the amount of memory that SQL Server could and would like to reach.

    Which definition is correct?



  • This answer assumes the system is not configured with LPIM and startup trace flag 834 - such a system plays by its own rules, which are fairly unique.

    If [Max Server Memory] is set to a value SQL Server can achieve given total RAM and other memory commitments on the system, [Target Server Memory] will be equal to [Max Server Memory]. If [Max Server Memory] cannot be achieved or maintained without resulting in low memory notifications from the OS, SQL Server will select a lower achievable [Target Server Memory] value - with some exceptions.
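The selection rule above can be sketched as a toy model. This is purely illustrative - the function and parameter names here are mine, not SQL Server internals:

```python
# Toy model of how [Target Server Memory] relates to [Max Server Memory].
# All names are hypothetical; this illustrates the rule described above,
# not SQL Server's actual algorithm.

def choose_target_kb(max_server_memory_kb: int, achievable_kb: int) -> int:
    """Return the target value SQL Server will aim for.

    achievable_kb: the most memory SQL Server could hold without causing
    low memory notifications from the OS, given total RAM and the other
    memory commitments on the system.
    """
    if achievable_kb >= max_server_memory_kb:
        # MSM is achievable: [Target Server Memory] equals [Max Server Memory].
        return max_server_memory_kb
    # MSM cannot be achieved or maintained: a lower target is selected.
    return achievable_kb

# MSM achievable -> target == MSM
assert choose_target_kb(92_000_000, 95_000_000) == 92_000_000
# MSM not achievable -> lower achievable target chosen
assert choose_target_kb(92_000_000, 90_000_000) == 90_000_000
```

This also models the "wobbly" case: if other memory use on the box keeps changing `achievable_kb` around the MSM value, the selected target will keep moving with it.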

    This graph shows a wobbly [Target Server Memory]. It's wobbly because the highest of the [Target Server Memory] values - which are less than or equal to [Max Server Memory] - are high enough that sometimes other memory use on the system causes low memory notifications and adjustments. The scale of the adjustments is rather small - lowering [Max Server Memory] to the lowest of the [Target Server Memory] values minus 2GB would likely allow this system to have a stable rather than wobbly [Target Server Memory].

    [Image: wobbly Target Server Memory]

    Just so you know I'm not making this up 🙂 Here's what paging space use, [Available], and [Free & Zero Page List] memory look like on that system while [Target Server Memory] is wobbly.

    [Image: paging space use during wobbly target]

    One exception to the lowering of [Target Server Memory] when it cannot be achieved arises from the interaction between SSAS and SQL Server. SSAS uses a default minimum working set size of 20% of server RAM. Immediately after the SSAS service starts, this minimum working set decreases ResAvail pages, even though the current working set may not have had nearly as much of an effect on [Available] or [Free & Zero Page List] memory. The decrease in ResAvail pages may limit the SQL Server [Total Server Memory] that can be achieved, but if LPIM is enabled it will not lower [Target Server Memory] as would normally be expected when memory conditions prevent [Max Server Memory] from being achievable. The end result can be a system where [Target Server Memory] never gets lowered: no low memory condition of [Available] or [Free & Zero Page List] memory is signalled, even though low ResAvail pages prevent [Total Server Memory] from reaching [Target Server Memory]. With sufficient activity in a SQL Server instance in this condition, out-of-memory errors can occur even though no paging space is used, no trouble is evident based on [Available] or [Free & Zero Page List], and [Total Server Memory] hasn't yet reached [Target Server Memory].
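That exception can be sketched the same way, again with hypothetical names rather than anything from SQL Server itself: the ResAvail-pages limit caps [Total Server Memory] below the target, but because no low memory notification is signalled, the target never comes down, and demand beyond the achievable ceiling is where the OOM risk lives.

```python
# Illustrative sketch (hypothetical names) of the SSAS + LPIM exception
# described above. Not SQL Server's actual logic.

def total_ceiling_kb(target_kb: int, resavail_limit_kb: int) -> int:
    # [Total Server Memory] cannot grow past what ResAvail pages allow,
    # no matter how high [Target Server Memory] stays.
    return min(target_kb, resavail_limit_kb)

def oom_risk(target_kb: int, resavail_limit_kb: int, demanded_kb: int) -> bool:
    # With LPIM, the target stays at MSM even though it is unreachable;
    # once workload demand exceeds the achievable ceiling, out-of-memory
    # errors can occur despite [Total] never reaching [Target].
    return demanded_kb > total_ceiling_kb(target_kb, resavail_limit_kb)

# Target stays at 92 GB, but ResAvail pages only permit 80 GB of Total.
assert total_ceiling_kb(92_000_000, 80_000_000) == 80_000_000
# Demand above the ceiling -> OOM risk, even though Total < Target.
assert oom_risk(92_000_000, 80_000_000, 85_000_000) is True
assert oom_risk(92_000_000, 80_000_000, 70_000_000) is False
```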

    Here's a graph showing this situation. Initially, workload activity caused [Total Server Memory] (which is the sum of sqlos free memory, database cache, and stolen memory) to increase toward [Target Server Memory], which remained constant at [Max Server Memory] throughout. But [Total Server Memory] hit a ceiling, and further activity would not allow [Total Server Memory] to continue to grow toward [Target Server Memory]. Shortly after the data for this graph was captured with perfmon, OOMs began to occur in this SQL Server instance.

    [Image: Total Server Memory can't achieve Target Server Memory]

    These blog posts give additional details about this unusual interaction between SQL Server and SSAS. I recommend isolating these services from each other whenever possible.

    https://www.qumio.com/Blog/Lists/Posts/Post.aspx?ID=48

    https://techcommunity.microsoft.com/t5/sql-server-support-blog/os-hang-or-out-of-memory-due-to-sql-ser-no-wait-it-s-sql/ba-p/334014

    Another situation where [Target Server Memory] may remain steady even though low memory conditions would otherwise predict a reduction in [Target Server Memory]: when [Min Server Memory] is set to a nonzero number and [Target Server Memory] has already decreased to [Min Server Memory], but low memory conditions persist.

    Let’s assume, as before, the system is anything other than (LPIM + T834). Further, assume there is no SSAS on the system and SQL Server is the only large memory consumer. Finally, [Min Server Memory] = 0 (unregulated, the default value).

    Immediately after SQL Server startup, [Target Server Memory] will be set. If MSM ([Max Server Memory]) is achievable, that value will be used as the target; otherwise a lower value will be selected. Initially, [Total Server Memory] will be fairly small, but workload pressure will cause it to grow toward [Target Server Memory]. [Target Server Memory] will only be lowered in response to a low memory notification. If [Target Server Memory] is not equal to MSM, the low memory condition has been cleared, and SQL Server can achieve a higher [Target Server Memory] value, a higher value will be selected as target (up to MSM). At any point when [Total Server Memory] differs from [Target Server Memory], activity can result in [Total Server Memory] increasing or decreasing toward [Target Server Memory].
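The dynamics above - the target moving only on low memory notifications, [Min Server Memory] acting as a floor for those reductions, and [Total Server Memory] drifting toward the target - can be sketched as a toy simulation. The names and the fixed reduction step are mine, chosen for illustration only:

```python
# Toy simulation of the dynamics described above. Purely illustrative;
# the reduction step size and function names are hypothetical, not
# SQL Server's actual algorithm.

def adjust_target(target_kb: int, min_server_memory_kb: int,
                  reduction_kb: int, low_memory_notification: bool) -> int:
    """Lower the target only on a low memory notification, never below
    [Min Server Memory]."""
    if not low_memory_notification:
        return target_kb
    return max(min_server_memory_kb, target_kb - reduction_kb)

def grow_total(total_kb: int, target_kb: int, growth_kb: int) -> int:
    """[Total Server Memory] moves toward [Target Server Memory] as
    workload activity allows (batch mode overshoot aside)."""
    if total_kb < target_kb:
        return min(target_kb, total_kb + growth_kb)
    return max(target_kb, total_kb - growth_kb)

# Low memory notification -> target drops by the step size...
assert adjust_target(90_000_000, 4_000_000, 2_000_000, True) == 88_000_000
# ...but never below [Min Server Memory].
assert adjust_target(5_000_000, 4_000_000, 2_000_000, True) == 4_000_000
# No notification -> target is unchanged.
assert adjust_target(90_000_000, 4_000_000, 2_000_000, False) == 90_000_000
# Total grows toward target under workload pressure, and shrinks back
# toward it if it was above.
assert grow_total(10_000_000, 90_000_000, 5_000_000) == 15_000_000
assert grow_total(92_000_000, 90_000_000, 5_000_000) == 90_000_000
```

The [Min Server Memory] floor in `adjust_target` is exactly the earlier-mentioned case where the target stays steady under persistent low memory conditions: once the target has reached the floor, further notifications no longer move it.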

    One final behavior to mention: sometimes [Total Server Memory] may exceed [Target Server Memory] even when MSM is used as the target value. On the systems I work with, this happens most often and most aggressively when multiple concurrent batch mode queries run - whether those queries are batch mode on column store or batch mode on rowstore.

    So in the example of [Max Server Memory] = 92000 MB and [Target Server Memory] = 90000 MB, what can we say about the state of memory on the system? A target of 92000 MB wasn’t achievable by SQL Server, so the lower target of 90000 MB was chosen. When I’m tuning a system, I personally like to make sure MSM is a value such that [Target Server Memory] = MSM - and rarely if ever becomes lower.



