Press "Enter" to skip to content

Category: Administration

Capturing SSAS Query Activity

Bill Anton explains why and how he captures query activity by user in SSAS:

In most environments, it is trivial to obtain the name of the user who ran each query… all you have to do is capture the [QueryEnd] event in a profiler/xevent trace and pull the information from the [NTUserName] field. However, in environments involving Power BI and the Enterprise On-Premise Data Gateway, there’s a bit more to it.

The main issue is how authentication is handled in this type of architecture. When working with Power BI reports connected to an on-premise data source via the On-Premise Data Gateway, the account of the user running the report is passed as the “EffectiveUsername”. The implication here is that the value shown in the [NTUserName] field of the xevent/profiler trace is going to be the Data Gateway account – NOT the account of the user who actually generated the activity.

Read on for the full answer.
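
If you save the trace output to a SQL Server table, something along these lines can dig the real user out of the request properties. This is a hypothetical sketch, not code from Bill’s post – the table name is made up, and it assumes the captured events carry the XMLA property list (where the gateway passes EffectiveUserName) in TextData:

-- Hypothetical sketch: assumes the SSAS trace was saved to a table named
-- dbo.SSASTraceEvents and that TextData holds the XMLA PropertyList for the
-- captured events. Table and column handling are illustrative only.
SELECT
    t.NTUserName,    -- shows the gateway service account
    CAST(CAST(t.TextData AS nvarchar(max)) AS xml).value(
        N'(//*[local-name(.)="EffectiveUserName"])[1]',
        N'nvarchar(256)') AS EffectiveUserName    -- the actual report user
FROM dbo.SSASTraceEvents AS t
WHERE t.TextData LIKE N'%EffectiveUserName%';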


Query Store Filegroups

Kendra Little links to a Connect item:

Can you change the filegroup where Query Store keeps its data?

I thought there might be a trick to use a different filegroup for Query Store by using the default filegroup setting in SQL Server before enabling it, but NOPE!

Please vote for this to be improved in this Connect Item.

I concur; Query Store can grow to be pretty large on busy systems, so diligent DBAs who want to keep PRIMARY as small as possible will suddenly find a multi-gigabyte Query Store slowing down those PRIMARY filegroup restores.
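
In the meantime, it’s worth keeping an eye on how much space Query Store is taking up in PRIMARY.  A quick check:

-- Run in the user database: Query Store's current footprint, which lives
-- in the PRIMARY filegroup whether you like it or not.
SELECT actual_state_desc,
       current_storage_size_mb,
       max_storage_size_mb
FROM sys.database_query_store_options;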


Using SQL Server DMA

Arun Sirpal has a series on using the new Data Migration Assistant.  Part 1:

It “enables you to upgrade to a modern data platform by detecting compatibility issues that can impact database functionality on your new version of SQL Server. It recommends performance and reliability improvements for your target environment. It allows you to not only move your schema and data, but also uncontained objects from your source server to your target server”. It can be found at this link: https://www.microsoft.com/en-us/download/details.aspx?id=53595.

Part 2 is all about performing a migration:

By the way, the backup file created via the tool is temporary; after a migration, it is deleted. Also, the compatibility level DOES NOT change – you need to do this yourself.

I haven’t used this tool yet, but it does look like an upgrade to the old Upgrade Advisor.
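
As a reminder on that last point, changing the compatibility level afterward is a one-liner – the database name and target level here are illustrative (130 is SQL Server 2016):

-- DMA migrates the database but leaves the compatibility level alone;
-- bump it yourself once you've validated the workload.
ALTER DATABASE [MyMigratedDb] SET COMPATIBILITY_LEVEL = 130;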


Perfmon And SQL Server Memory

Lonny Niederstadt looks at using Perfmon to understand what’s happening with memory allocations on your SQL Server instance:

Let’s look at stolen memory a bit.  The relationship between memory grants and stolen memory is probably the least intuitive relationship.  Remember – if a query gets a memory grant, the grant happens at the beginning of query execution.  It’s just a promise of sort/hash memory to be made available when the query needs it.  The granted memory isn’t stolen immediately – rather, it’s stolen in small allocations over time as needed by the query.

In the graph immediately below, the outstanding grants are shown over time.  There are no pending grants during the observation period.  Granted memory and reserved memory are both shown as areas, with reserved memory in front of granted memory.  Granted memory is consistently greater than reserved memory (in this case, no resource pools have been added beyond the pre-existing default and internal pools).  This is how we can determine that the reserved memory is granted memory which hasn’t been stolen yet.

This is a great explanation of what stolen memory is and why it’s important.
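
If you want to follow along without setting up a Perfmon collector, the same Memory Manager counters are exposed through a DMV.  A quick sketch:

-- The Memory Manager counters behind graphs like Lonny's. On a named
-- instance, object_name starts with MSSQL$InstanceName rather than
-- SQLServer, hence the LIKE.
SELECT counter_name, cntr_value
FROM sys.dm_os_performance_counters
WHERE object_name LIKE N'%Memory Manager%'
  AND counter_name IN (N'Granted Workspace Memory (KB)',
                       N'Reserved Server Memory (KB)',
                       N'Stolen Server Memory (KB)',
                       N'Memory Grants Outstanding',
                       N'Memory Grants Pending');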


Alert On SQL Jobs Missing Schedules

Brian Hansen wraps up a three-part series on scheduled job alerts:

The first two parts of this series addressed the general approach that I use in an SSIS script task to discover and alert on missed SQL Agent jobs. With apologies for the delay in producing this final post in the series, here I bring these approaches together and present the complete package.

To create the SSIS package, start with an empty package and add a data flow task. In the task, add the following transformations.

Regardless of how you do it, knowing when jobs fail is important enough to build some infrastructure around answering this question.
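
If SSIS isn’t your thing, a rough T-SQL approximation of the missed-job check – emphatically not Brian’s approach, just a sketch against the Agent tables – might look like this:

-- An enabled job whose next scheduled run time is already in the past,
-- and which is not currently executing, has probably missed a run
-- (for example, because the Agent service was down at the time).
SELECT j.name,
       a.next_scheduled_run_date
FROM msdb.dbo.sysjobs AS j
JOIN msdb.dbo.sysjobactivity AS a
    ON a.job_id = j.job_id
WHERE a.session_id = (SELECT MAX(session_id) FROM msdb.dbo.syssessions)
  AND j.enabled = 1
  AND a.next_scheduled_run_date < GETDATE()
  AND (a.start_execution_date IS NULL OR a.stop_execution_date IS NOT NULL);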


UDL Files To Test Connectivity

Marek Masko shows how to test a database connection without having any database tools:

The UDL extension stands for Universal Data Link. These files are used by the Data Link API, which exposes a user interface to create and manage OLE DB connections. This functionality has been part of Windows since at least Windows 95, maybe even earlier. That means you can use it on every Windows machine you work on. You no longer need to worry about additional tools.

Sometimes you need a creative solution to a policy-induced problem.
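
For reference, a saved UDL file is just a small text file: after you walk through the GUI, it holds the resulting connection string, along these lines (server name hypothetical):

[oledb]
; Everything after this line is an OLE DB initstring
Provider=SQLOLEDB.1;Integrated Security=SSPI;Persist Security Info=False;Initial Catalog=master;Data Source=MYSERVER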


Filegroups And RTO

Raul Gonzalez explains the importance of filegroups in minimizing RTO:

So if we don’t create additional filegroup[s] in our databases, not only will all the data go to the same logical container, but in case we have to restore that database from a backup, we will also have to wait until all of it is restored.

Imagine, for instance, that you have a lot of historical data and there is a disaster. If you had different filegroups, one for current data and another for the historical data, you would be able to get your live data back first and quickly (to get you up and running), and then restore all the historical data, which is not critical.

To show you how, I’m going to create a database with different filegroups so you’ll see how we can do it.

Click through for the scripts, as well as more information.
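
The payoff is a piecemeal restore.  A minimal sketch of the idea – database, filegroup, and backup names are all hypothetical, and it assumes the full recovery model:

-- Stage 1: restore PRIMARY plus the current-data filegroup and recover.
RESTORE DATABASE [Sales]
    FILEGROUP = N'PRIMARY', FILEGROUP = N'Current'
    FROM DISK = N'C:\Backup\Sales_full.bak'
    WITH PARTIAL, NORECOVERY;
RESTORE LOG [Sales] FROM DISK = N'C:\Backup\Sales_log.trn' WITH RECOVERY;
-- The database is online now; only queries touching Historical will fail.

-- Stage 2, once you're back up and running: restore the historical data.
RESTORE DATABASE [Sales]
    FILEGROUP = N'Historical'
    FROM DISK = N'C:\Backup\Sales_full.bak'
    WITH NORECOVERY;
RESTORE LOG [Sales] FROM DISK = N'C:\Backup\Sales_log.trn' WITH RECOVERY;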


Accidental DBAs

Charity Majors on the Accidental DBA phenomenon:

(OH RIGHT, WE WROTE A BOOK ABOUT THIS!!!)

My friend Laine and I are writing a book for people on the data side, called “Database Reliability Engineering”, which is aimed at generalist engineers who want to learn how to deal with data responsibly and effectively.

(Actually that’s a good point, I am supposed to be pitching this book! — which is really mostly Laine with a smidgen of me but it’s going to be super awesome.  Consider this your sales pitch.)

So first, as an accidental DBA, you should obviously buy this book  :).  Second: stateful services require a different mindset[*].  It’s cool that you are running your own databases!  But reading post mortems like this where the conclusion is “MongoDB sucks” makes me fucking grind my teeth.

The theme of the story is a Mongo upgrade gone south, but this is a post about principles.  And rainbows.


Deploy SQL Server R Services Without Internet Access

Arvind Shyamsundar shows how to install SQL Server R Services on a machine without internet access:

When deploying SQL Server R Services, it is important to note that the setup components for SQL Server do not include the Microsoft R Open and Microsoft R Server components. Those ‘R Components’ (as we will refer to them later in this post) are provided as separate downloadable components. SQL Server will automatically download these when setup is executed on a computer which is connected to the Internet. But in cases where setup is done on a computer without Internet access (quite typical of many SQL Server deployments), we need to handle things differently. There is a documented process for doing this. But even with the documentation, we still had some customers with questions on the process.

Inspired by those customer engagements, this blog post walks through the process of setting up SQL Server R Services in environments without Internet access. We walk through a number of scenarios, right from the very basic scenario to the more complex ones involving unattended and ‘smart setup’.

This is a nice walkthrough.  I wanted to highlight a link at the end showing how to create a local repository so you can install packages as well.
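
One step that applies whether or not you install offline: after setup, external scripts still have to be enabled at the instance level.

-- Enable R script execution after the R components are installed;
-- a restart of the SQL Server service is needed for this to take effect.
EXEC sp_configure 'external scripts enabled', 1;
RECONFIGURE;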


Flushing Change Tracking Internal Tables

Amit Banerjee mentions a new stored procedure for change tracking cleanup:

In SQL Server 2014 Service Pack 2 and above, we provided a new stored procedure, sp_flush_CT_internal_table_on_demand, to assist with Change Tracking cleanup. KB3173157 has more details. This stored procedure accepts a table name as a parameter and will attempt to clean up records from the corresponding change tracking internal table.  During the course of the deletion, it will print some verbose output in the output window about the progress of the deletion.

If automated change tracking cleanup works well enough for you, there’s no change; but if you’re struggling with that cleanup, this procedure might help.
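
Per KB3173157, the call looks something like this – the table name is hypothetical, and @TableToClean is the parameter name as documented there:

-- Manually flush the change tracking internal (side) table for one
-- tracked user table.
EXEC sp_flush_CT_internal_table_on_demand @TableToClean = N'dbo.MyTrackedTable';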
