Press "Enter" to skip to content

Month: July 2019

SQL Assessment API

Ebru Ersan announces a public preview of the SQL Assessment API:

SQL Assessment API is a new mechanism to evaluate the configuration of your SQL Server for best practices. The API methods are used by means of a SQL Server Management Object (SMO) extension and new cmdlets in the SqlServer PowerShell module. The API is delivered with a ruleset that is highly customizable and extensible. It can be used to assess SQL Server versions 2012 and higher, both on Windows and Linux.

Looks like you can customize rules as well. I wonder if it will work better (or have more support) than Policy-Based Management. I’m also clocking how many minutes before dbatools supports this…
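
For the curious, the basic call pattern looks something like this. A minimal sketch, assuming the preview release of the SqlServer PowerShell module with the new cmdlets is installed; the instance and database names are placeholders:

# Requires the preview release of the SqlServer PowerShell module
Import-Module SqlServer

# Run the default ruleset against an instance and review the recommendations
Get-SqlInstance -ServerInstance 'localhost' | Invoke-SqlAssessment

# The assessment also works at the database level
Get-SqlDatabase -ServerInstance 'localhost' -Name 'WideWorldImporters' | Invoke-SqlAssessment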

Procedure Parameters: Optional and Required

Kenneth Fisher takes us through procedure parameters:

If you are executing a stored procedure with a bunch of parameters it can be a bit of a pain if you have to pass a value in for each of them. Fortunately, it’s pretty easy to make some parameters required and others optional. You simply give them a default value.

Kenneth also points out that functions don’t behave this way, and shows how to handle parameters where you don’t want to accept NULL under any circumstances. This is useful when NULL is just a placeholder for “I don’t really want to use this parameter” but the application doesn’t know how to avoid sending the parameter in the first place.
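
To make that concrete, here’s a minimal sketch of both pieces; the procedure and parameter names are invented, and I’m running it through Invoke-Sqlcmd:

$sql = @'
CREATE OR ALTER PROCEDURE dbo.SearchOrders   -- CREATE OR ALTER needs SQL Server 2016 SP1 or later
    @CustomerId int,                  -- required: no default, so the caller must supply it
    @Status     varchar(20) = NULL    -- optional: defaults to NULL when omitted
AS
BEGIN
    -- Reject NULL for the required parameter even if the application insists on sending it
    IF @CustomerId IS NULL
        THROW 50000, '@CustomerId may not be NULL.', 1;

    SELECT @CustomerId AS CustomerId, @Status AS Status;   -- stand-in for the real query
END
'@
Invoke-Sqlcmd -ServerInstance 'localhost' -Database 'tempdb' -Query $sql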

Building an Azure DevOps Pipeline

Grant Fritchey shows off how to build an Azure DevOps pipeline:

I don’t mean for this to be a complete tutorial on setting up Azure DevOps (see the bottom of this post for my all-day, in-person teaching sessions, where I will). I just want to discuss a set of pipelines I’ve built and why I built them that way, as a method for illustrating how you can begin automating your processes in support of a DevOps implementation.

The most important concepts when we get to building and deploying within Azure DevOps are Builds and Releases. Yes, we can get into all the fun of talking about Deployment Groups (sets of servers for simultaneous deployment) and others; however, we have to first get a successful build and then define how that build will get deployed/released: Builds and Releases.

Read on for the demonstration.

Gaps in Identity Columns

Josh Simar doesn’t like gaps in identity columns brought about by rollbacks:

At the end of this you can see that those records were in the table before the rollback, as evidenced by the now 2000 records in the table. However, we ran our identity checker after the rollback, and we’ll see some interesting results because of that.

Checking identity information: current identity value '2000', current column value '1000'.
DBCC execution completed. If DBCC printed error messages, contact your system administrator.

The rollback did absolutely nothing for our identity and left it as is.

The short answer is that this is expected and reasonable behavior. Don’t expect identity integers to be sequential; the best you’ll get (assuming no resets or overflows) is a guarantee that they’ll be monotonically increasing. I left the long answer as a comment, currently awaiting moderation.
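
If you want to watch the behavior yourself, here’s a minimal repro sketch you can throw at any test instance via Invoke-Sqlcmd (names are made up):

$sql = @'
CREATE TABLE #IdentityTest (Id int IDENTITY(1,1) NOT NULL, Payload char(1) NOT NULL);

INSERT INTO #IdentityTest (Payload) VALUES ('a');  -- takes Id = 1

BEGIN TRAN;
INSERT INTO #IdentityTest (Payload) VALUES ('b');  -- takes Id = 2
ROLLBACK;  -- the row goes away, but the identity counter does not rewind

INSERT INTO #IdentityTest (Payload) VALUES ('c');  -- takes Id = 3, leaving a permanent gap at 2

SELECT Id, Payload FROM #IdentityTest;
'@
Invoke-Sqlcmd -ServerInstance 'localhost' -Query $sql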

Using the Cosmos DB Change Feed

Hasan Savran shows us how we can use Cosmos DB’s change feed to track changes to documents in a container:

This is great but I want to do more than that. How am I going to access the changed data? What should I do if there is more than one change or insert? In my case, I need to access the SensorCode attribute so I can do something about this alert. To answer these questions, you need to know more about Azure Functions. If you can see the number of modified documents by this code, that means you are done with the Change Feed functionality. First, we need some kind of loop so the code can process multiple changes. To do that, I will use a simple foreach loop.

The thing that comes to mind when I hear about Cosmos DB’s change feed is Kafka, with its immutable log of actions you can read through.
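
Hasan’s demo uses C#, but the same foreach idea carries over to any Azure Functions language. Here’s a quick sketch of the equivalent in PowerShell, assuming function.json defines a Cosmos DB trigger binding named Documents and borrowing the SensorCode attribute from his scenario:

# run.ps1 for a PowerShell Azure Function; function.json (not shown) would define
# a cosmosDBTrigger binding named Documents pointing at the monitored container.
param($Documents, $TriggerMetadata)

# The trigger delivers changes in batches, so loop over every changed or inserted document
foreach ($document in $Documents) {
    Write-Host "Change feed delivered document $($document.id); SensorCode = $($document.SensorCode)"
}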

Spark Access Control in Qubole

Achuth Rajagopal and Shridhar Ramachandran show off the Spark Data Access Control Framework on Qubole’s platform:

With these requirements in mind, we decided to implement Hive Authorization as our first Policy Manager. Hive Authorization policies are stored in the Qubole Metastore, which acts as a shared central component and stores metadata related to Hive Resources like Hive Tables. We enhanced Spark to honor the policies stored in the Qubole Metastore when accessing Hive Tables, as well as when adding or modifying those policies.

In summary, we implemented a SQL standard access control layer identical to what is present in Apache Hive or Presto today. The following sections detail the architecture and provide an example that illustrates how it works.

Click through to learn more.

Data Virtualization and Power BI

Gerhard Brueckl explains how Power BI can act as a data virtualization engine:

But let’s examine what we currently have in Power BI:
– a semantic layer and data modelling capabilities
– access to various data sources via Direct Query (remember, we do not want to load any data!)
– ability to combine data from those sources

The last part is the most important one here, and you may wonder what I am talking about. And you are right: by default, a DQ model is only linked to one data source at a time, but you can add other data sources manually in the Power Query editor!

This is where it gets interesting and what this blog post is about.

Read the whole thing.

Filtering in DAX

Matt Allington takes us through the FILTER() function in DAX:

FILTER() is most often used as a filter parameter within the CALCULATE() function. OK, so now it is getting confusing. FILTER() is a function. There is another use of the word filter in DAX, and that is as a parameter in CALCULATE(). Let’s look at the syntax:

FILTER syntax: FILTER(<table>, <true/false test>)
CALCULATE syntax: CALCULATE(<measure expression>, <filter parameter>, <filter parameter>, …)

This is one of the most useful functions in DAX and one you really want to know well.

Data Aggregation with PowerShell

Jess Pomfret shows us how we can aggregate data using PowerShell:

As a SQL Server DBA, aggregating data is first nature.  I can easily throw together some T-SQL if I need to get the average sales per product group, or perhaps the number of employees hired per month. Yesterday I was writing a little PowerShell when I came across a problem. I needed to get the size of my database data and log files for several databases on a server, when I realized I didn’t know how to group and sum this data in PowerShell.

Click through to learn how.
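
If you just want the gist, one way to get there pairs Group-Object (the GROUP BY) with Measure-Object (the SUM). A minimal sketch with made-up file sizes:

# Made-up stand-ins for database file information
$files = @(
    [pscustomobject]@{ Database = 'Sales'; FileType = 'ROWS'; SizeMB = 512 }
    [pscustomobject]@{ Database = 'Sales'; FileType = 'LOG' ; SizeMB = 128 }
    [pscustomobject]@{ Database = 'HR'   ; FileType = 'ROWS'; SizeMB = 256 }
)

# Group-Object plays the role of GROUP BY; Measure-Object -Sum plays SUM()
$files | Group-Object -Property Database | ForEach-Object {
    [pscustomobject]@{
        Database    = $_.Name
        TotalSizeMB = ($_.Group | Measure-Object -Property SizeMB -Sum).Sum
    }
}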
