Azure Database Performance Throttling - Use Replication or Something Else?



  • We keep a "rather large" Azure database that we load all of our company data into every night.

    It uses temporal tables to keep track of changes to the data.

    Now 3 separate teams want to start reading this data heavily during the day. At least one team writes slow-running SQL queries. Another team needs fast access to at least 3 different tables using simple select/where queries.

    How can I make sure that all 3 teams can keep working without slowing down each other's read performance?

    Should I replicate to a separate DB? Or what else can I do? Should I use a Managed Instance or something else? I am trying to keep cost down without increasing maintenance too much and without complicating the teams' way of working too much.



  • Depending on which flavour of Azure SQL is being used, you may be able to implement Resource Governor (https://docs.microsoft.com/en-us/sql/relational-databases/resource-governor/resource-governor) or an equivalent.

    This allows you to allocate a portion of the hardware to each workload: the large, slow queries cannot flood the buffer pool, and many small queries cannot starve the others of processing quanta. (A minimal configuration sketch follows at the end of this post.)

    Of course the machine is still finite. Each group gets less than the whole machine but is guaranteed to never get none of it. If all groups need more performance, the instance has to be scaled up. If you are at the upper limit of scale-up, the only option is scale-out, with replication or sharding as appropriate.

    Replicating 400TB may prove, umm, interesting; don't know, never tried. I should think at that scale you would command some attention from your local MS sales team. See if they can get you some time with an MS solution architect.
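
    Here is a minimal Resource Governor sketch, assuming a flavour that actually exposes it (SQL Server on a VM or, as I understand it, Azure SQL Managed Instance; single Azure SQL Database does not let you configure it). The pool names, percentages and login names below are placeholders for illustration, not a recommendation:

        -- Resource pools cap how much CPU/memory a workload may take and guarantee it a floor under contention.
        CREATE RESOURCE POOL poolReporting
            WITH (MIN_CPU_PERCENT = 10, MAX_CPU_PERCENT = 40,
                  MIN_MEMORY_PERCENT = 10, MAX_MEMORY_PERCENT = 40);
        CREATE RESOURCE POOL poolLookups
            WITH (MIN_CPU_PERCENT = 30, MAX_CPU_PERCENT = 80);
        GO

        -- Workload groups map sessions onto the pools.
        CREATE WORKLOAD GROUP wgReporting USING poolReporting;
        CREATE WORKLOAD GROUP wgLookups   USING poolLookups;
        GO

        -- Classifier function (created in master) runs at login time and picks a group,
        -- here by login name; APP_NAME() is another common discriminator.
        CREATE FUNCTION dbo.fnClassifier() RETURNS sysname
        WITH SCHEMABINDING
        AS
        BEGIN
            DECLARE @grp sysname = N'default';
            IF SUSER_SNAME() = N'reporting_team_login' SET @grp = N'wgReporting';
            IF SUSER_SNAME() = N'lookup_team_login'    SET @grp = N'wgLookups';
            RETURN @grp;
        END;
        GO

        ALTER RESOURCE GOVERNOR WITH (CLASSIFIER_FUNCTION = dbo.fnClassifier);
        ALTER RESOURCE GOVERNOR RECONFIGURE;

    If you do end up scaling out instead, Azure SQL's read scale-out replicas (Premium/Business Critical and Hyperscale tiers) may be worth a look: the reporting team's reads can be routed to a replica just by adding ApplicationIntent=ReadOnly to their connection string, which keeps the teams' way of working largely unchanged.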



