Reducing memory grant waits in development environment



  • I've got a query which waits a long time for memory grants, even though it's the only query running at the time. The query plan is at https://www.brentozar.com/pastetheplan/?id=rkNF4LF6K

    I cannot change the query content, but I can do anything with my server running locally (developer edition, so all features available).

    I've already tried configuring Resource Governor, hoping to force a quicker turnaround on the memory requests:

    ALTER WORKLOAD GROUP "default" WITH (
      REQUEST_MEMORY_GRANT_TIMEOUT_SEC = 1,
      REQUEST_MAX_MEMORY_GRANT_PERCENT = 5);
    

    But this didn't seem to have any impact.
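
    For reference, my understanding is that workload group option changes only take effect once Resource Governor is reconfigured, so the full sequence would look roughly like this (same option values as above; a sketch, not a confirmed fix):

    -- Sketch: apply the workload group options, then reconfigure so the
    -- pending changes (and Resource Governor itself, if not already enabled)
    -- take effect.
    ALTER WORKLOAD GROUP "default" WITH (
      REQUEST_MEMORY_GRANT_TIMEOUT_SEC = 1,
      REQUEST_MAX_MEMORY_GRANT_PERCENT = 5);

    ALTER RESOURCE GOVERNOR RECONFIGURE;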

    I don't believe I can create an index on a function in SQL Server, although that would be great.

    What else can I change from the server side to make a query like that execute in a reasonable time?



  • Since your original question got closed out as a dupe, re-posting my answer here (with additional information) so it's preserved for other people experiencing similar issues:

    You appear to be having a cardinality estimation issue (https://docs.microsoft.com/en-us/sql/relational-databases/performance/cardinality-estimation-sql-server?view=sql-server-ver15) which is causing the excessive memory grant (as Martin pointed out). You should always compare your execution plans even when they perform the same operations, because the runtime statistics can differ between the slow and fast runs, which appears to be the case here.

    You can observe the cardinality estimate issue you're experiencing by looking at the clustered index scan operation on the right side of your execution plan. It shows an Actual Number of Rows of 5, but an Estimated Number of Rows of over 7,000. That is a significant overestimate, and it results in the query requesting, and then waiting for, much more memory than it needs. (Sometimes you'll see the opposite occur, where the cardinality estimate is an underestimate, which under-requests memory and/or generates a less efficient execution plan as a result.)
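
    If you want to watch the grant itself, one option (a minimal sketch using a standard DMV, nothing specific to your plan) is to query sys.dm_exec_query_memory_grants from a second session while the slow query is running:

    -- Shows, per executing request, how much workspace memory was requested,
    -- how much (if any) has been granted, and how long it has been waiting.
    SELECT
        session_id,
        requested_memory_kb,
        granted_memory_kb,   -- NULL while the request is still waiting
        ideal_memory_kb,
        wait_time_ms,
        dop
    FROM sys.dm_exec_query_memory_grants;

    A large requested_memory_kb alongside a long wait_time_ms is the symptom you'd expect from the overestimate described above.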

    Your issue likely results from non-SARGable (https://www.sqlshack.com/how-to-use-sargable-expressions-in-t-sql-queries-performance-advantages-and-examples/) and hard-to-optimize predicates in your WHERE clause, most likely these: WHERE (REPLACE(MEDICARE_NO, ' ', '') LIKE '111111111%') AND (STATUS_CODE NOT IN ('D', 'X', 'I', 'Z')). Using functions in predicates can cause exactly the kind of cardinality estimate issues you're currently seeing, so replacing the REPLACE() function with a different solution would probably help fix your issue. You can either store the data already staged with the spaces removed, or add a computed column (https://docs.microsoft.com/en-us/sql/relational-databases/tables/specify-computed-columns-in-a-table?view=sql-server-ver15) to your table that applies the REPLACE() function. You can even make it a persisted computed column and index it, since the REPLACE() function is deterministic; a sketch follows below.
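
    Roughly, the computed column approach would look like this (dbo.YourTable is a placeholder since I don't know the real table name, and the column and index names are made up):

    -- Sketch only: add a persisted computed column that pre-applies REPLACE(),
    -- then index it. Table, column, and index names are placeholders.
    ALTER TABLE dbo.YourTable
        ADD MEDICARE_NO_CLEAN AS REPLACE(MEDICARE_NO, ' ', '') PERSISTED;

    CREATE INDEX IX_YourTable_MedicareNoClean
        ON dbo.YourTable (MEDICARE_NO_CLEAN);

    The optimizer can sometimes match the REPLACE(MEDICARE_NO, ' ', '') expression in the existing query text to the indexed computed column without the query changing, though I wouldn't count on that without testing it against your actual plan.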

    Your other predicate, STATUS_CODE NOT IN ('D', 'X', 'I', 'Z'), could also be rewritten to maximize performance. NOT IN is an inequality operation and is difficult for the engine to optimize. You're better off using IN and listing the values that are valid, which makes it an equality operation. Additionally, the IN clause is just syntactic sugar for multiple OR clauses, which sometimes also throws off the optimizer; you can try replacing it with a UNION of one query per value from your IN clause instead. Of course that requires repeating your query multiple times, which hurts maintainability and readability a little, but most times you can refactor the core part of your query into a CTE first and then UNION against that CTE once per value in the IN clause (see the sketch after this paragraph).
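
    As an illustration only (the 'A' and 'B' status codes are hypothetical stand-ins for whatever the valid complement of ('D', 'X', 'I', 'Z') actually is, and dbo.YourTable is again a placeholder):

    -- Sketch: put the core of the query in a CTE, then take one branch per
    -- valid status code. UNION ALL is used because the branches can't overlap.
    WITH Base AS (
        SELECT MEDICARE_NO, STATUS_CODE   -- plus whatever other columns the query needs
        FROM dbo.YourTable
    )
    SELECT * FROM Base WHERE STATUS_CODE = 'A'
    UNION ALL
    SELECT * FROM Base WHERE STATUS_CODE = 'B';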

    But I'd recommend focusing on eliminating the REPLACE() function from your WHERE clause first; my instinct says that's your bigger issue here. You can quickly prove it out by selecting your data into a temp table first, with REPLACE(MEDICARE_NO, ' ', '') AS SomeColumn as one of the columns you select into it. Then use that temp table in your main query instead, so your main query's WHERE clause becomes WHERE SomeColumn LIKE '111111111%'. A rough sketch of that test follows below.
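
    Something along these lines (again, dbo.YourTable and the extra columns are placeholders for your actual schema):

    -- Sketch: stage the data once with REPLACE() already applied, then filter
    -- on the plain staged column so the estimate no longer depends on the function.
    SELECT REPLACE(MEDICARE_NO, ' ', '') AS SomeColumn,
           STATUS_CODE                   -- plus any other columns the query needs
    INTO #Staged
    FROM dbo.YourTable;

    SELECT *
    FROM #Staged
    WHERE SomeColumn LIKE '111111111%'
      AND STATUS_CODE NOT IN ('D', 'X', 'I', 'Z');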

    As you've mentioned this is a query from a 3rd-party application, there's not much you can do to fix the application's performance without changing the query yourself (obviously not advisable even if possible). Your best bet is to pass the above information along to the vendor and hope they're responsive about fixing it.

    If you are running the query yourself outside of the application, then you can stage the data from their tables into your own database in a structure more conducive to performance, as described above, and run the improved query against your own database. There are a multitude of ways to automate synchronizing the data from the 3rd-party database into your own, including SQL Agent jobs, Replication, SSIS, or application code. But that's outside the scope of this question, so if you need help with those methodologies, feel free to ask a new question about them.



