
Updated: Sep 11

This month's T-SQL Tuesday (#tsql2sday) is hosted by the AirborneGeek, who asks us to take a lesson from something pilots do routinely: learning from accidents and mistakes made by others.

As a long-time SQL Server consultant and DBA, I have learned from quite a lot of mistakes made mostly by others, since a significant part of my job description is coming in to fix such mistakes. So today I'll use this opportunity to talk about one such interesting incident.

This customer contacted us with complaints about recurring performance issues. They couldn't quite put their finger on anything specific, but they reported the following symptoms: end-users intermittently getting "a general sense of slowness while using their system".

Users experiencing sudden disconnections from the database. The works. However, we couldn't pinpoint the cause of these disconnections. The servers were hosted in Microsoft Azure, so we used that to our advantage. After contacting Microsoft Azure support, they let us know that the disconnections happened because the servers were being throttled!

Something caused them to reach the disk's maximum throughput limit periodically. Finally, we were making progress. But we needed more detailed performance metrics, and to gather them over a longer period of time. We needed better visibility. This called for the installation of SentryOne, the database monitoring platform that we love using as part of our managed database services solution.
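Even before a full monitoring platform is in place, you can get a first-pass look at disk pressure from the built-in DMVs. A minimal sketch (column choice and ordering are illustrative, not what SentryOne does internally):

```sql
-- Cumulative I/O volume and stall time per database file since instance start.
-- These counters are cumulative, so sample twice and diff to get a rate.
SELECT
    DB_NAME(vfs.database_id)                        AS database_name,
    mf.physical_name,
    vfs.num_of_bytes_read,
    vfs.num_of_bytes_written,
    vfs.io_stall_read_ms,
    vfs.io_stall_write_ms
FROM sys.dm_io_virtual_file_stats(NULL, NULL) AS vfs
JOIN sys.master_files AS mf
    ON  mf.database_id = vfs.database_id
    AND mf.file_id     = vfs.file_id
ORDER BY vfs.io_stall_read_ms + vfs.io_stall_write_ms DESC;
```

Files with disproportionately high stall times relative to their byte counts are a strong hint that you're hitting a throughput cap rather than simply doing a lot of I/O.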

Looking at the time window of a recent disconnection event, we saw the following. Was this the culprit? Could it really be? We widened our search to a larger time window and aggregated the results. The picture started becoming clear: a statistics update job was running for all tables in the database.
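The "aggregate over a larger window" step can also be approximated without a monitoring tool, using the plan cache. A hypothetical sketch of the kind of query we mean:

```sql
-- Top cached statements by total logical reads across all executions.
-- Only covers what is still in the plan cache, so it's a rough approximation.
SELECT TOP (10)
    qs.total_logical_reads,
    qs.execution_count,
    qs.total_elapsed_time / 1000 AS total_elapsed_ms,
    SUBSTRING(st.text, (qs.statement_start_offset / 2) + 1,
        ((CASE qs.statement_end_offset
              WHEN -1 THEN DATALENGTH(st.text)
              ELSE qs.statement_end_offset
          END - qs.statement_start_offset) / 2) + 1) AS statement_text
FROM sys.dm_exec_query_stats AS qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS st
ORDER BY qs.total_logical_reads DESC;
```

A dedicated platform like SentryOne (or Query Store) retains this history far longer, which is exactly why we installed it here.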

It was scheduled to run every single night. Based on its execution history, its average duration was around 32 hours. And every time there was some kind of additional BI process or a heavy query, it brought the server to its knees, causing the IO to reach its max throughput, and subsequently causing the throttling and AG disconnections.
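The execution-history check is easy to reproduce from SQL Server Agent's own tables. A sketch, assuming the maintenance job's name contains "statistics" (adjust the filter to your actual job name):

```sql
-- Average duration of a SQL Agent job from its history.
-- run_duration is stored as an integer in HHMMSS format, hence the conversion.
SELECT
    j.name AS job_name,
    AVG(  (h.run_duration / 10000) * 3600
        + (h.run_duration / 100 % 100) * 60
        +  (h.run_duration % 100) ) / 3600.0 AS avg_duration_hours
FROM msdb.dbo.sysjobs AS j
JOIN msdb.dbo.sysjobhistory AS h
    ON h.job_id = j.job_id
WHERE h.step_id = 0                 -- step 0 is the job-outcome row
  AND j.name LIKE N'%statistics%'   -- hypothetical filter; adjust as needed
GROUP BY j.name;
```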

Once we knew what the cause of the problem was, we knew how to deal with it:

We changed the maintenance job to the one implemented by Ola Hallengren, where you can specify the modification threshold for the Update Statistics part of the job. This should reduce the duration of each execution, and make sure that we're only updating statistics for what actually needs updating.
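A sketch of the kind of Ola Hallengren IndexOptimize call this describes; the parameter values here are illustrative, not the customer's actual configuration:

```sql
-- Update statistics only where modifications have occurred,
-- using Ola Hallengren's maintenance solution.
EXECUTE dbo.IndexOptimize
    @Databases              = 'USER_DATABASES',
    @FragmentationLow       = NULL,   -- skip index maintenance in this job
    @FragmentationMedium    = NULL,
    @FragmentationHigh      = NULL,
    @UpdateStatistics       = 'ALL',  -- both index and column statistics
    @OnlyModifiedStatistics = 'Y',    -- skip statistics with no row modifications
    @LogToTable             = 'Y';    -- keep execution history for review
```

The key parameter for this scenario is `@OnlyModifiedStatistics` (newer versions also offer a finer-grained `@StatisticsModificationLevel` threshold), which is what turns "update everything every night" into "update only what changed".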

We modified the sampling rate of the statistics update. The customer's data is rather uniform, so this was quite enough for our use case. We reduced the frequency of the job: instead of running every single night, we scheduled it to run once every weekend.
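Lowering the sample rate can be done either directly in T-SQL or through the maintenance solution's sampling parameter. A minimal sketch (table, statistic, and percentage are illustrative):

```sql
-- Plain T-SQL: sample 10 percent of rows for one statistic.
UPDATE STATISTICS dbo.SomeLargeTable SomeStatistic WITH SAMPLE 10 PERCENT;
```

In Ola Hallengren's solution the equivalent knob is the `@StatisticsSample` parameter (a percentage). The trade-off is straightforward: a lower sample reads far less data, at the cost of less accurate histograms, which is why it worked well here only because the data distribution was uniform.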

We still had the SentryOne monitoring in place, so if any performance degradation resulted from bad statistics, we'd be able to detect it and adjust our configuration accordingly. After we made these changes, we saw an almost immediate effect. Getting rid of the "background noise" caused by statistics update jobs constantly running significantly reduced the stress on the server and improved overall performance.

The SQL wait stats didn't show much improvement, but the workload stress on memory and disk was reduced significantly, and memory utilization stabilized and improved as well. And, most importantly, the customer no longer experienced disconnections every time a heavy query or BI process was run.

Not even during the weekends, when the statistics job was running again. Everyone was happy, everything was good. To summarize what we've learned from this use case:

First and foremost, always have a proper database monitoring solution recording your SQL Server performance for retroactive investigation and tuning. SentryOne is our personal favorite, but there are many others as well. Second, know the resource limits imposed by your cloud provider. These can get you right when you least expect it, and cause availability issues, which, quite ironically, can be accentuated when using Availability Groups.

See the "additional resources" section below for relevant articles on this topic. Be mindful of the scheduling you set for your database maintenance jobs. Statistics update operations can be just as resource-intensive as index rebuilds and integrity checks, depending on your data volume and hardware performance. Don't run them so frequently that they overlap with end-user activity, but don't neglect them either. Be mindful of the configuration of your database maintenance jobs.
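When deciding how often a statistics job really needs to run, it helps to look at how stale the statistics actually get between runs. A hypothetical helper query for the current database:

```sql
-- How many rows have been modified since each statistic was last updated.
-- High modification counters on large tables are candidates for more
-- frequent updates; the rest can usually wait.
SELECT
    OBJECT_NAME(s.object_id) AS table_name,
    s.name                   AS stats_name,
    sp.last_updated,
    sp.rows,
    sp.modification_counter
FROM sys.stats AS s
CROSS APPLY sys.dm_db_stats_properties(s.object_id, s.stats_id) AS sp
WHERE OBJECTPROPERTY(s.object_id, 'IsUserTable') = 1
  AND sp.modification_counter > 0
ORDER BY sp.modification_counter DESC;
```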

Sticking with default settings may not be ideal for all possible use cases, and neither are overly inclusive settings such as "rebuild all indexes, update all statistics, etc." This is a whole topic on its own, but there are quite a lot of resources available out there for it.

All in all: don't do things blindly. A SQL Server estate is not something you can just play "fire and forget" with.

Eitan Blumin

SQL Server | Performance | tsql2sday | tsqltuesday