Press "Enter" to skip to content

Month: October 2017

Automating Cache Cleanup

Tracy Boggiano has a process to automate cleaning up different caches in SQL Server:

First, we need to create a table to store our information on the caches we would like to clear on an automated basis and populate it with values.

For example, we clear SQL Plans if 10,000 or more Adhoc or Prepared plans take up 5 GB of memory, if there are more than 10,000 single-use plans, or if the memory used by Adhoc or Prepared plans exceeds 50% of total memory. We clear the Transactions cache if it is more than 2 GB, and Lock Manager : Node 0 if it is more than 2 GB.
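
As a rough sketch of the idea (the table and column names here are assumptions, not Tracy's actual schema), the configuration table and a threshold-driven cleanup might look something like this:

```sql
-- Hypothetical configuration table of caches to clear and their thresholds.
CREATE TABLE dbo.CacheCleanupConfig
(
    CacheName      sysname NOT NULL PRIMARY KEY,  -- e.g. 'SQL Plans'
    MaxSizeMB      int     NOT NULL,              -- clear when the cache exceeds this size
    MaxEntryCount  int     NULL                   -- optional count-based threshold
);

INSERT INTO dbo.CacheCleanupConfig (CacheName, MaxSizeMB, MaxEntryCount)
VALUES ('SQL Plans', 5120, 10000);

-- The scheduled job would compare sys.dm_os_memory_clerks and
-- sys.dm_exec_cached_plans against these thresholds, then clear only the
-- offending cache rather than the whole plan cache, for example:
DBCC FREESYSTEMCACHE ('SQL Plans');
```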

Read on for the script.

Gathering Information On Memory-Optimized Objects

Ned Otter has a diagnostic script for learning about how your servers are using memory-optimized objects:

Instance level evaluates the following:

  • which databases are memory-optimized
  • if running Enterprise, if there are any resource groups defined, and which databases are bound to them
  • version/edition of SQL server
  • ‘max memory’ setting
  • whether or not instance-level collection of execution statistics has been enabled for all natively compiled stored procedures
  • memory clerks for the buffer pool and In-Memory OLTP
  • the value of the committed_target_kb column from sys.dm_os_sys_info
  • display any event notifications (because they conflict with deploying In-Memory OLTP)

Database level evaluates the following:

For each memory-optimized database:

  • database files, including container names, size, and location
  • memory-optimized tables
  • indexes on all memory-optimized tables
  • count of indexes per memory-optimized table
  • natively compiled stored procedures
  • whether or not the collection of execution statistics is enabled for any natively compiled procedures
  • count of natively compiled procedures
  • if using the temporal feature for memory-optimized tables, the amount of memory consumed by hidden temporal internal tables
  • memory structures for LOB columns (off-row)
  • average chain length for HASH indexes

Ned provides the script on his blog, so click through to get that.  This looks great if you’re trying to build up some basic information on how developers in your environment use memory-optimized objects.
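
This isn't Ned's script, but to give a flavor of the instance-level checks he describes, the In-Memory OLTP memory clerk and the committed_target_kb value (column names per SQL Server 2012 and later) can be pulled like so:

```sql
-- Memory clerks for the buffer pool and In-Memory OLTP (XTP).
SELECT [type], name, pages_kb
FROM sys.dm_os_memory_clerks
WHERE [type] IN ('MEMORYCLERK_SQLBUFFERPOOL', 'MEMORYCLERK_XTP');

-- Committed target memory for the instance.
SELECT committed_target_kb
FROM sys.dm_os_sys_info;
```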

Query Store Capture Modes

Arun Sirpal notes an important difference in the default Query Store settings for SQL Server 2017 versus Azure SQL Database:

So just remember the only difference when analyzing settings is the difference in Query Store Capture Mode. For Azure it is set to AUTO, whereas with locally installed SQL Servers it is set to ALL.

What does this mean? ALL means that it is set to capture all queries but AUTO means infrequent queries and queries with insignificant cost are ignored. Thresholds for execution count, compile and runtime duration are internally determined.

Read on to learn more, including how to change these settings.
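
The capture mode is a database-scoped option, so aligning an on-premises database with the Azure default (or vice versa) is a one-liner; a minimal example, with a placeholder database name:

```sql
-- Switch Query Store to AUTO capture mode (the Azure SQL Database default).
ALTER DATABASE [YourDatabase]
SET QUERY_STORE (QUERY_CAPTURE_MODE = AUTO);

-- Verify the current Query Store settings for the database.
SELECT actual_state_desc, query_capture_mode_desc
FROM sys.database_query_store_options;
```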

Finding AD Group Members With Powershell

Amy Herold has a quick script to find which Active Directory users are in particular AD groups:

There was something that popped up today that called for a PowerShell script and the Get-ADGroupMember cmdlet – get a list of users from a list of groups. Some users are in there more than once so this needs to be a distinct list, unless you are into manually cleaning up things like this, and then I will be sad for you. Because that is kinda sad.

I originally wrote a script with two arrays (one for the initial list and one for the de-duped list of users), but even though this is quick and dirty, that was a little too dirty. Enter the Group-Object cmdlet – it takes this list of names and groups them. No black magic this time. Just a cmdlet, that comes baked into PowerShell giving me what I need.

Click through for the script.
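
Amy's script is the one to grab, but the core combination of Get-ADGroupMember and Group-Object looks roughly like this (the group names below are made up, and the ActiveDirectory module from RSAT must be installed):

```powershell
Import-Module ActiveDirectory

# Hypothetical list of AD groups to pull members from.
$groups = 'SQLAdmins', 'SQLDevelopers'

# Collect user members from every group; a user in several groups appears several times.
$members = foreach ($group in $groups) {
    Get-ADGroupMember -Identity $group |
        Where-Object { $_.objectClass -eq 'user' }
}

# Group-Object collapses the duplicates, giving a distinct list of names.
$members |
    Group-Object -Property SamAccountName |
    Select-Object -ExpandProperty Name |
    Sort-Object
```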

Build Versus Buy For Hadoop

Tom Phelan walks through some thoughts on whether to build versus buy when using big data platforms:

This means you absolutely must sweat the details up front. Big Data project failures are more often than not predicated by the statement: “We will do this bit now, and figure the rest out later”. But you need to begin with the end in mind.

You need to know the performance that you’ll be able to deliver and what your requirements are. You need to know how to integrate with your corporate Active Directory, and LDAP, and Kerberos services. You need to know your network topology and security requirements as well as the required user roles and responsibilities breakdown. You need to know how you’ll handle high availability, QoS, and multi-tenancy. You need to know how you’ll manage upgrades to the latest versions of your Hadoop distribution or other big data tools, and how you’ll respond to requests for new big data frameworks and new data science tools. If not, you’re just asking for trouble.

The motif in his post is building your own car, which makes sense as an extended metaphor.

The Correct Way To Load Libraries In R

Gerald Belton opens a can of worms:

When I was an R newbie, I was taught to load packages by using the command library(package). In my Linear Models class, the instructor likes to use require(package). This made me wonder, are the commands interchangeable? What’s the difference, and which command should I use?

Interchangeable commands . . .

The way most users will use these commands, most of the time, they are actually interchangeable. That is, if you are loading a library that has already been installed, and you are using the command outside of a function definition, then it makes no difference if you use “require” or “library.” They do the same thing.

… Well, almost interchangeable

Read on to understand the differences between the two.  I end up doing something very similar to his code snippet for exactly the reason he describes.  H/T R-Bloggers
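
I won't reproduce Gerald's snippet here, but the practical difference is easy to see: library() stops with an error when a package is missing, while require() returns FALSE with a warning, which makes it usable inside a conditional (package name purely illustrative):

```r
# require() returns FALSE (with a warning) if the package is not installed,
# so it can drive install-then-load logic; library() would simply error out.
if (!require("ggplot2", quietly = TRUE)) {
  install.packages("ggplot2")
  library("ggplot2")
}
```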

How To Create Difficult Measures In Power BI

Matt Allington walks through his process of how he creates measures in Power BI:

Killer Tip 1: Create Good Test Data

The first thing I did was to replicate the test data shown above.  As I have mentioned many times, good quality test data is essential to getting a quick correct answer to your problem.  The data from the OP looked pretty good as it had covered the relevant scenarios (1 ID had just blue, 1 ID had blue and red, several IDs had no blue – this is good test data).

Starting out with good test data is vital—it helps you clarify exactly what it is you want, and if you come up with edge cases, you have the makings of a good test workbench to ensure that your code actually works, and not just in the simplest scenario.

Don’t Unit Test Private Methods

Vladimir Khorikov argues that you should not unit test private methods:

When your tests start to know too much about the internals of the system under test (SUT), that leads to false positives during refactoring. Which means they won’t act as a safety net anymore and instead will impede your refactoring efforts because of the necessity to refactor them along with the implementation details they are bound to. Basically, they will stop fulfilling their main objective: providing you with the confidence in code correctness.

When it comes to unit testing, you need to follow this one rule: test only the public API of the SUT, don’t expose its implementation details in order to enable unit testing. Your tests should use the SUT the same way its regular clients do, don’t give them any special privileges. Here you can read more about what an implementation detail is and how it is different from public API: link.

In the database world, this is one reason why I like using stored procedures:  they give the equivalent of a public API for database code, so you can write tests for them.
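
For instance, with a framework such as tSQLt (my example, not Vladimir's), you can exercise a procedure purely through its public behavior; the procedure and table names below are hypothetical:

```sql
EXEC tSQLt.NewTestClass 'CustomerTests';
GO
CREATE PROCEDURE CustomerTests.[test GetActiveCustomers returns only active customers]
AS
BEGIN
    -- Arrange: isolate the table and seed known rows.
    EXEC tSQLt.FakeTable 'dbo.Customer';
    INSERT INTO dbo.Customer (CustomerID, IsActive) VALUES (1, 1), (2, 0);

    CREATE TABLE #Expected (CustomerID int);
    INSERT INTO #Expected (CustomerID) VALUES (1);

    -- Act: call the procedure the same way its clients do.
    CREATE TABLE #Actual (CustomerID int);
    INSERT INTO #Actual (CustomerID) EXEC dbo.GetActiveCustomers;

    -- Assert: compare observable output only, not implementation details.
    EXEC tSQLt.AssertEqualsTable '#Expected', '#Actual';
END;
GO
```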

Azure SQL Database FAQ

Dimitri Furman answers some common questions about Azure SQL Database:

Q7. Can I use Windows Authentication in Azure SQL Database?

The short answer is no. Therefore, if you are migrating an application dependent on Windows Authentication from SQL Server to Azure SQL Database, you may have to either switch to SQL Authentication (i.e. use a separate login and password for database access), or use Azure Active Directory Authentication (AAD Authentication).

The latter is conceptually similar to Windows Authentication in the sense that connections from directory principals are authenticated without the need to provide additional secrets, such as a password. Since Azure Active Directory can be federated with the on-premises Active Directory Domain Services, it can effectively authenticate the same Active Directory principals that could access the database prior to migration. However, the authentication flow for AAD Authentication is significantly different, so the analogy with Windows Authentication only goes so far.
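
For what it's worth, the AAD side of that typically means creating a contained database user from the external provider rather than a SQL login; a minimal sketch, with a made-up principal name:

```sql
-- Run inside the Azure SQL Database while connected as an Azure AD admin.
CREATE USER [jane.doe@contoso.com] FROM EXTERNAL PROVIDER;
ALTER ROLE db_datareader ADD MEMBER [jane.doe@contoso.com];
```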

There are some good questions in here, especially the one about retry logic; that’s good to have in any situation, but becomes vital when working with a cloud service.

How Statistics In SQL Server Have Changed Over The Years

Erin Stellato gives us a version-based timeline of how SQL Server has handled statistics over the years:

SQL Server 2008

This is a very interesting historical look. Most interesting to me was the decrease in the number of steps available.
