We maintain a rather large Azure SQL database into which we load all of our company data every night. It uses temporal tables to track changes to the data.
Now three separate teams want to start reading this data heavily during the day. At least one team writes slow-running SQL queries; another team needs fast access to at least three different tables using simple SELECT/WHERE queries.
How can I make sure all three teams can keep working without degrading each other's read performance?
Should I replicate to a separate database? Or what else can I do? Should I use Managed Instance, or something else? I am trying to keep costs down without increasing maintenance too much and without overcomplicating the teams' way of working.
How to solve:
Depending on which flavour of Azure SQL you are using, you may be able to implement Resource Governor or an equivalent.
It lets you allocate a portion of the hardware to each workload: the large, slow queries cannot flood the buffer pool, and the many small queries cannot starve the others of CPU time.
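Where Resource Governor is available (SQL Server, or Azure SQL Managed Instance; it is not user-configurable on single Azure SQL Database), the setup might look roughly like the sketch below. The pool limits, group names, and login names are all hypothetical placeholders you would tune to your actual workloads:

```sql
-- Hypothetical pools: cap the reporting team's CPU, reserve some for fast reads.
CREATE RESOURCE POOL ReportingPool WITH (MIN_CPU_PERCENT = 10, MAX_CPU_PERCENT = 40);
CREATE RESOURCE POOL FastReadPool  WITH (MIN_CPU_PERCENT = 20, MAX_CPU_PERCENT = 60);

CREATE WORKLOAD GROUP ReportingGroup USING ReportingPool;
CREATE WORKLOAD GROUP FastReadGroup  USING FastReadPool;
GO

-- Classifier function (lives in master): route sessions to a group by login.
-- 'reporting_login' and 'fastread_login' are assumed names for illustration.
CREATE FUNCTION dbo.fnWorkloadClassifier() RETURNS sysname
WITH SCHEMABINDING
AS
BEGIN
    DECLARE @grp sysname = N'default';
    IF SUSER_SNAME() = N'reporting_login' SET @grp = N'ReportingGroup';
    IF SUSER_SNAME() = N'fastread_login'  SET @grp = N'FastReadGroup';
    RETURN @grp;
END;
GO

ALTER RESOURCE GOVERNOR WITH (CLASSIFIER_FUNCTION = dbo.fnWorkloadClassifier);
ALTER RESOURCE GOVERNOR RECONFIGURE;
```

The MIN_CPU_PERCENT values are what give each team its guaranteed floor under contention; the MAX values only bite when there is actual competition for CPU.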
Of course, the machine is still finite. Each group gets less than the whole machine but is guaranteed never to get none of it. If all groups need more performance, the instance has to be scaled up. If you are already at the upper limit of scale-up, the only remaining option is scale-out, with replication or sharding as appropriate.
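As one scale-out option: if you land on a tier that supports read scale-out (e.g. Premium / Business Critical in Azure SQL), the fast-read team could be routed to a readable replica just by adding `ApplicationIntent=ReadOnly` to their connection string, with no application changes beyond that. Server and database names below are placeholders:

```
Server=tcp:yourserver.database.windows.net,1433;Database=YourDb;ApplicationIntent=ReadOnly;Encrypt=True;
```

Note that the replica lags the primary slightly, which matters less here since the data is loaded nightly rather than continuously.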
Replicating 400 TB may prove, umm, interesting; I don't know, I've never tried. I should think that at that scale you would command some attention from your local Microsoft sales team. See if they can get you some time with a Microsoft solution architect.