I’ve started working with the Docker platform and managing SQL Server containers. This list contains some of the commands I execute most often when using Docker and containers.
Click through for the list.
Jeffrey Hicks shows us a quick way to retrieve information on folder sizes:
It is simple enough to run Get-ChildItem and pipe the results to Measure-Object.
But this can often be time consuming. The other drawback, besides the typing, is that it takes extra work to format the results into something more user friendly. And if I want to include hidden files, I have to remember to use -Force with Get-ChildItem.
Cmdlets are nice and convenient, and I always recommend to beginner or even intermediate scripters: if there is a cmdlet to use instead of the .NET Framework, use the cmdlet. But there are always exceptions, and as you grow in expertise, you’ll realize there are times when going directly to the .NET Framework is a better choice. Which is what I have done.
Click through for the solution.
Tim Radney has a few considerations for you if you want to start using Azure SQL Managed Instances:
Storage is a bit more difficult to plan and make considerations for, due to having to consider multiple factors. For storage, you need to account for the overall storage requirement: both storage size and I/O needs. How many GBs or TBs are needed for the SQL Server instance, and how fast does the storage need to be? How many IOPS and how much throughput is the on-premises instance using? For that, you must baseline your current workload using Perfmon to capture average and max MB/s, and/or take snapshots of sys.dm_io_virtual_file_stats to capture throughput utilization. This will give you an idea of what type of I/O and throughput you need in the new environment. Several customers I’ve worked with have missed this vital part of migration planning and have encountered performance issues due to selecting an instance level that didn’t support their workload.
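A snapshot of sys.dm_io_virtual_file_stats along the lines Tim describes might look like the sketch below. The counters are cumulative since instance startup, so you would capture the output twice, some interval apart, and diff the numbers to estimate IOPS and MB/s per file.

```sql
-- Snapshot of cumulative per-file I/O stats. Run twice, a known interval
-- apart, and subtract the first capture from the second to get a rate.
SELECT
    DB_NAME(vfs.database_id)             AS database_name,
    mf.physical_name,
    vfs.num_of_reads,
    vfs.num_of_writes,
    vfs.num_of_bytes_read    / 1048576.0 AS mb_read,
    vfs.num_of_bytes_written / 1048576.0 AS mb_written,
    GETUTCDATE()                         AS captured_at_utc
FROM sys.dm_io_virtual_file_stats(NULL, NULL) AS vfs
JOIN sys.master_files AS mf
    ON  mf.database_id = vfs.database_id
    AND mf.file_id     = vfs.file_id;
```

Persisting each capture to a table and sampling during peak hours gives a more honest picture than a single ad hoc run.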
Tim has a lot of good advice in here, so read the whole thing.
Gilbert Quevauvilliers shares how to receive notification e-mails when an Azure Function App fails:
Below are the steps to enable error notifications on Azure Function Apps.
This follows on from my previous blog post, How you can store All your Power BI Audit Logs easily and indefinitely in Azure, where every day the Audit logs are extracted into Azure Blob storage. One of the key things when working with any job that runs is that I want to know when the job fails. If I do not have this and I assume that the data is always there, I could fall into a situation where there is missing data that I cannot get back.
Below explains how to create an alert with a notification email if an Azure Function App fails.
Read on for the step-by-step instructions.
Nisarg Upadhyay gives us some of the low-down on monitoring availability groups:
In my previous articles, I have explained the step-by-step process of deploying an AlwaysOn Availability group on SQL Server 2017. In this article, I am going to explain how to monitor AlwaysOn availability groups.
First, let’s review the configuration of the availability group we had deployed previously. To do that, open SQL Server Management Studio, expand the database engine in Object Explorer, expand “AlwaysOn High Availability,” and then expand “Availability Groups.” You can see the availability group named SQLAAG. Under this availability group (SQLAAG), you can see the list of availability replicas, availability databases, and availability group listeners.
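Alongside the SSMS dashboard, the same health information is exposed through DMVs. A minimal sketch, joining the catalog views to the replica-state DMV, might look like this:

```sql
-- One row per replica: its current role, connection state,
-- and synchronization health for each availability group.
SELECT
    ag.name AS ag_name,
    ar.replica_server_name,
    ars.role_desc,
    ars.connected_state_desc,
    ars.synchronization_health_desc
FROM sys.availability_groups AS ag
JOIN sys.availability_replicas AS ar
    ON ar.group_id = ag.group_id
JOIN sys.dm_hadr_availability_replica_states AS ars
    ON ars.replica_id = ar.replica_id;
```

Anything other than HEALTHY in synchronization_health_desc is worth investigating before it turns into a failover problem.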
Click through for some tooling built into SQL Server Management Studio, as well as relevant Perfmon counters.
Kate Smith takes us through some important concepts around Elastic Jobs in Azure SQL Database:
It is very important that the T-SQL scripts being executed by Elastic Jobs be idempotent. This means that if they are run multiple times (by accident or intentionally) they won’t fail and won’t produce unintended results. If an elastic job has some side effects, and gets run more than once, it could fail or cause other unintended consequences (like consuming double the resources needed for a large statistics update). One way to ensure idempotence is to make sure that you check if something already exists before trying to create it.
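The check-before-create pattern Kate describes can be sketched as below. The table and index names here are hypothetical, purely for illustration:

```sql
-- Idempotent index creation: safe to run any number of times.
-- dbo.Sales and IX_Sales_CustomerID are placeholder names.
IF NOT EXISTS (
    SELECT 1
    FROM sys.indexes
    WHERE name      = N'IX_Sales_CustomerID'
      AND object_id = OBJECT_ID(N'dbo.Sales')
)
BEGIN
    CREATE INDEX IX_Sales_CustomerID ON dbo.Sales (CustomerID);
END;
```

On a second run the IF NOT EXISTS guard short-circuits, so a retried job neither fails nor duplicates work.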
This takes some getting used to, but once you’re in the habit, you are much better off. Read on for more details on other key concepts.
Taiob Ali explains what the CPU and memory measures are from the scheduler monitor ring buffer:
Here is a sample of the XML output from sys.dm_os_ring_buffers WHERE ring_buffer_type = N’RING_BUFFER_SCHEDULER_MONITOR’. What do those XML elements mean? In order to monitor CPU usage, you need to understand what each element means so you can use the values. I will explain each one in this blog post.
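A common way to shred that XML into columns is a query along these lines, a sketch of the widely used scheduler-monitor pattern rather than anything specific to Taiob’s post:

```sql
-- Recent CPU utilization snapshots from the scheduler monitor ring buffer.
-- ProcessUtilization is SQL Server's share; SystemIdle is idle CPU;
-- the remainder is other processes on the box.
SELECT TOP (10)
    DATEADD(ms, rb.[timestamp] - si.ms_ticks, GETDATE()) AS event_time,
    x.rec.value('(Record/SchedulerMonitorEvent/SystemHealth/ProcessUtilization)[1]', 'int') AS sql_cpu_pct,
    x.rec.value('(Record/SchedulerMonitorEvent/SystemHealth/SystemIdle)[1]', 'int')         AS idle_pct,
    100
      - x.rec.value('(Record/SchedulerMonitorEvent/SystemHealth/ProcessUtilization)[1]', 'int')
      - x.rec.value('(Record/SchedulerMonitorEvent/SystemHealth/SystemIdle)[1]', 'int')     AS other_cpu_pct
FROM sys.dm_os_ring_buffers AS rb
CROSS JOIN sys.dm_os_sys_info AS si
CROSS APPLY (SELECT CONVERT(xml, rb.record)) AS x(rec)
WHERE rb.ring_buffer_type = N'RING_BUFFER_SCHEDULER_MONITOR'
ORDER BY rb.[timestamp] DESC;
```

The ring buffer records one snapshot per minute, so this gives roughly the last ten minutes of CPU history without any external monitoring.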
Read on for the list and what each means.
I was recently working with a client on a SQL Server Reporting Services (SSRS) issue. Their company has standardized on using Google Chrome for the browser. However, they were running into issues when using Google Chrome with SSRS reports.
The first issue was that they were receiving a log in prompt to the SSRS server when browsing to it. The second issue was the infamous Kerberos Double-Hop issue. If you’re not familiar with the Kerberos Double-Hop architecture, check out this link: https://docs.microsoft.com/en-us/archive/blogs/askds/understanding-kerberos-double-hop.
I still have bad memories of trying to get Mozilla and (much earlier) Chrome working with Reporting Services. Ugh.
John Welch has a script to check if MAXDOP is configured correctly:
There’s a lot of information on the internet about how to set MAXDOP correctly. Microsoft even provides a knowledge base article with their recommendations. However, if you look at it, there’s a fair amount of information to digest. I’m efficient, so I wanted to put this into a script I could easily reuse and not have to remember all the details.
Please note that these are just guidelines, and you should consider carefully whether they fit your workloads and scenarios. As is the case anytime you are evaluating system settings, you should test carefully before and after making changes.
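This is not John’s script, but a rough sketch of the classic guideline it automates: cap MAXDOP at the number of cores in a single NUMA node, and at 8.

```sql
-- Sketch of the classic MAXDOP guideline (cores per NUMA node, max 8).
-- Check Microsoft's current guidance before applying to a real system.
DECLARE @numa_nodes int, @cores_per_numa int, @recommended int;

SELECT @numa_nodes = COUNT(DISTINCT parent_node_id)
FROM sys.dm_os_schedulers
WHERE status = N'VISIBLE ONLINE';

SELECT @cores_per_numa = COUNT(*) / @numa_nodes
FROM sys.dm_os_schedulers
WHERE status = N'VISIBLE ONLINE';

SET @recommended = CASE WHEN @cores_per_numa > 8 THEN 8 ELSE @cores_per_numa END;

SELECT @recommended AS recommended_maxdop,
       (SELECT CONVERT(int, value_in_use)
        FROM sys.configurations
        WHERE name = N'max degree of parallelism') AS current_maxdop;
```

Comparing the two output columns tells you at a glance whether the instance deviates from the baseline recommendation.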
Read on for the explanation as well as a link to the script itself.
Jared Poche investigates a slow record deletion process:
I encountered a curious issue recently, and immediately knew I needed to blog about it. Having already blogged about implicit conversions and how the TOP operator interacts with blocking operators, I found a problem that looked like the combination of the two.
I reviewed a garbage collection process that’s been in place for some time. The procedure populates a temp table with the key values for the table that is central to the GC. We use the temp table to delete from the related tables, then delete from the primary table. However, the query populating our temp table was taking far too long: 84 seconds when I tested it.
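The general shape of that pattern looks something like the sketch below. The table names (dbo.Orders, dbo.OrderLines) and the retention rule are hypothetical stand-ins for the real schema:

```sql
-- Sketch of a temp-table-driven GC pattern: collect a batch of parent
-- keys, delete children first, then the parents.
CREATE TABLE #doomed (OrderID int PRIMARY KEY);

INSERT INTO #doomed (OrderID)
SELECT TOP (10000) OrderID
FROM dbo.Orders
WHERE CreatedDate < DATEADD(DAY, -90, GETUTCDATE())
ORDER BY OrderID;

DELETE ol
FROM dbo.OrderLines AS ol
JOIN #doomed AS d ON d.OrderID = ol.OrderID;

DELETE o
FROM dbo.Orders AS o
JOIN #doomed AS d ON d.OrderID = o.OrderID;

DROP TABLE #doomed;
```

In Jared’s case it was the INSERT populating the temp table, not the deletes, that turned out to be the bottleneck.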
Read on to understand why.