Press "Enter" to skip to content

September 2017

Effective Permissions In SQL Server

Daniel Hutmacher has a helper procedure to assist you in understanding who has what effective rights on a SQL Server box:

Principals, permissions and securables can all inherit each other. A principal could for instance be a group or a role, and will confer its permissions on to its group/role members. One permission can imply a number of other permissions – SELECT, for instance, requires you to also have VIEW DEFINITION rights to the object. Securables are also arranged in a hierarchy, with the server owning databases, which in turn own schemas that own objects, and so on.

To make things even more complicated, if you have multiple conflicting permissions (DENY and GRANT), the strictest rule applies, meaning that the effective permission is DENY.

Read on to get his procedure. For my money, the best method to get these details is to query sys.fn_my_permissions(), but that requires that you be able to impersonate the user whose permissions you want to see.
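
To illustrate that impersonation approach, here is a minimal sketch in Python using pyodbc (not Daniel's procedure); the connection string, user name, and object name are placeholders you would swap for your own.

```python
import pyodbc

# Placeholder connection string; point it at your own instance and database.
conn = pyodbc.connect(
    "DRIVER={ODBC Driver 17 for SQL Server};"
    "SERVER=localhost;DATABASE=YourDatabase;Trusted_Connection=yes;"
)
cur = conn.cursor()

# Impersonate the user whose effective permissions you want to inspect
# (this requires IMPERSONATE permission on that user).
cur.execute("EXECUTE AS USER = N'some_user';")

# Ask what that user can effectively do against a specific object.
rows = cur.execute(
    "SELECT permission_name "
    "FROM sys.fn_my_permissions(N'dbo.SomeTable', 'OBJECT');"
).fetchall()

# Return to your own security context.
cur.execute("REVERT;")

for row in rows:
    print(row.permission_name)
```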

Listing SQL Server Options

Kenneth Fisher breaks apart the @@OPTIONS bit flags:

There are a fair number of options settings. ANSI_NULLS, ARITHABORT, QUOTED_IDENTIFIER, etc. Each session has its own set of configurations. They are initially set based on the user settings system configuration, then the various connection programs (SSMS for example) can override that, then the various SET commands can override that.

Now personally I prefer to keep my settings to the default to eliminate confusion, but they do get changed occasionally. Just as a for example, when you generate a script from SSMS it typically includes a bunch of SET ON and SET OFF commands. And if you turn on a setting that was already on, then turn it off at the end, well, your setting has changed unexpectedly. My original intent for this post was to create a stored procedure that would let you save the current settings and restore them. Unfortunately, I ran into a scope problem. I can find the current settings: @@options. I can break down the integer value using a script from here (just in case the post should disappear before this one does, here is the code from the article)

Read on for the list, including things like ANSI_NULLS, ANSI_PADDING, and XACT_ABORT.  I probably pay less attention to these than I should and just have a habit of setting the few most important settings for my environment atop every procedure definition.
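
If all you need is to see which flags a particular @@OPTIONS value represents, a simple bitmask decode will do. Here is a rough Python sketch based on the bit positions documented for @@OPTIONS; the sample value of 5496 is just an illustration of the kind of number a typical SSMS session returns.

```python
# Bit positions documented for @@OPTIONS in SQL Server.
OPTION_BITS = {
    1: "DISABLE_DEF_CNST_CHK",
    2: "IMPLICIT_TRANSACTIONS",
    4: "CURSOR_CLOSE_ON_COMMIT",
    8: "ANSI_WARNINGS",
    16: "ANSI_PADDING",
    32: "ANSI_NULLS",
    64: "ARITHABORT",
    128: "ARITHIGNORE",
    256: "QUOTED_IDENTIFIER",
    512: "NOCOUNT",
    1024: "ANSI_NULL_DFLT_ON",
    2048: "ANSI_NULL_DFLT_OFF",
    4096: "CONCAT_NULL_YIELDS_NULL",
    8192: "NUMERIC_ROUNDABORT",
    16384: "XACT_ABORT",
}

def decode_options(value):
    """Return the names of the session options set in an @@OPTIONS value."""
    return [name for bit, name in OPTION_BITS.items() if value & bit]

# Example: decode a value captured with SELECT @@OPTIONS;
print(decode_options(5496))
```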

A Power BI Report To Track Power BI Changes

Julie Smith has found the only way to keep up with Power BI changes—using a Power BI report:

Microsoft makes major improvements via monthly and sometimes weekly releases to Power BI. In my time working on Power BI projects at Innovative Architects, I have found that the only way to stay on top of the frenetic pace of Power BI’s improvements is to closely follow its blog and other social media feeds.

This page is a Power BI report (how meta right?) built off of the content from Microsoft’s Power BI blog. I have set it up to run daily and refresh. Please let me know in the comments if you feel that the data has become stale. As part of building this report, I have added some curated slicers–things like PBI Service, PBI Desktop, Gateways, Connectors– and blogging content tags such as Contest or Webinar. Keep in mind that this is my best effort and there is no guarantee on the accuracy of my tagging. I provide this as my gift to you “as is” with no expressed or implied warranty.

Click through to see the report.

A Cheat Sheet For Purrr

Mara Averick has a list of resources for learning more about purrr:

Purrr royal decree (ok, I’ll stop with the 🐱 puns now), the purrr 📦 now has its very own official RStudio cheat sheet: Apply Functions Cheat Sheet

The purrr package makes it easy to work with lists and functions. This cheatsheet will remind you how to manipulate lists with purrr as well as how to apply functions iteratively to each element of a list or vector. The back of the cheatsheet explains how to work with list-columns. With list columns, you can use a simple data frame to organize any collection of objects in R.

So, I thought we’d celebrate with a bit of a purrr 🐦  tweet roundup:

Purrr is one of those libraries I know I need to learn more about, and this looks like a good starting point.

Using Spark Streaming On Kafka

Ayush Tiwari has an introductory tutorial on using Spark Streaming on top of Kafka:

The Spark Streaming integration for Kafka 0.10 is similar in design to the 0.8 Direct Stream approach. It provides simple parallelism, 1:1 correspondence between Kafka partitions and Spark partitions, and access to offsets and metadata. However, because the newer integration uses the new Kafka consumer API instead of the simple API, there are notable differences in usage. This version of the integration is marked as experimental, so the API is potentially subject to change.

In this blog, I am going to implement the basic example on Spark Structured Streaming & Kafka Integration.

This is a code-heavy tutorial, so check it out.
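
If you want a feel for the shape of the code before clicking through, here is a minimal PySpark sketch of a Structured Streaming consumer (not the author's code). It assumes the matching spark-sql-kafka package is on the classpath, and the broker address and topic name are placeholders.

```python
from pyspark.sql import SparkSession
from pyspark.sql.functions import col

spark = (SparkSession.builder
         .appName("kafka-structured-streaming-demo")
         .getOrCreate())

# Subscribe to a Kafka topic; keys and values arrive as binary columns.
stream = (spark.readStream
          .format("kafka")
          .option("kafka.bootstrap.servers", "localhost:9092")
          .option("subscribe", "demo-topic")
          .option("startingOffsets", "latest")
          .load())

# Cast the raw bytes to strings and dump each micro-batch to the console.
query = (stream
         .select(col("key").cast("string"), col("value").cast("string"))
         .writeStream
         .outputMode("append")
         .format("console")
         .start())

query.awaitTermination()
```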

Fun With The Beta Distribution

John D. Cook shows how the iterates of one chaotic equation just happen to follow a beta distribution:

Indeed the points do bounce all over the unit interval, though they more often bounce near one of the ends.

Does that distribution look familiar? You might recognize it from Bayesian statistics. It’s a beta distribution. It’s symmetric, so the two beta distribution parameters are equal. There’s a vertical asymptote on each end, so the parameters are less than 1. In fact, it’s a beta(1/2, 1/2) distribution. It comes up, for example, as the Jeffreys prior for Bernoulli trials.

The graph below adds the beta(1/2, 1/2) density to the histogram to show how well it fits.

It’s an interesting bit of math and statistics, and John provides some Python demo code at the end.
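
As a rough illustration of the phenomenon (assuming the fully chaotic logistic map, a classic example whose long-run distribution is beta(1/2, 1/2); John's post may use a different recurrence), here is a short Python sketch that iterates the map and overlays the beta density on the histogram.

```python
import numpy as np
import matplotlib.pyplot as plt
from scipy.stats import beta

# Iterate the logistic map x -> 4x(1 - x), starting from an arbitrary seed.
n = 100_000
x = np.empty(n)
x[0] = 0.1
for i in range(1, n):
    x[i] = 4 * x[i - 1] * (1 - x[i - 1])

# Histogram of the iterates (after a short burn-in) with the beta(1/2, 1/2) density overlaid.
plt.hist(x[100:], bins=100, density=True, alpha=0.5)
grid = np.linspace(0.001, 0.999, 500)
plt.plot(grid, beta(0.5, 0.5).pdf(grid))
plt.show()
```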

Building A Disaster Recovery Checklist

Robert Davis has a checklist for you:

If a disaster struck tomorrow, are you ready for it? Are you sure you thought of everything? I cannot count the number of times something happened, and I thought that I should have been able to anticipate that. It can be a big benefit to get another viewpoint on your disaster plan. No matter how well you think you’ve covered yourself, someone who thinks about things differently than you do can often see something you don’t.

Here’s a checklist of things you might not have considered. This can be your other set of eyes.

This is not a to-do list or a process guide when disaster strikes; rather, it’s a checklist of things you should have in place before a disaster occurs.

Minimizing Problem Reproductions

Lonny Niederstadt explains the value of a minimalistic repro:

Often among the hardest of my decisions is whether I should spend more time trying to simplify a given problem by eliminating additional factors, shrinking the data needed for a repro, etc… or just put all that effort into investigation purely aimed at understanding the behavior. I expect that to be a long-term challenge 🙂

I was quite happy with the way this particular one worked out, though. It started as a maze… access violations generated on a SQL Server 2016 instance with dozens of databases. The access violation came from an insert query using a synonym – with the underlying table in another database! (I didn’t know that was possible – or *ever* desirable – until I saw it in this context.) The AV was occurring during a multi-step data transfer process and it was hard to isolate the data flowing into and out of the two databases in question. But after some finagling, I got the problem repro pretty doggone small. Reproduced the AVs on several CUs of SQL Server 2016 and on SQL Server 2017 RC2.

If you think you’re going to enlist the help of someone outside your organization, then you definitely want a minimalistic repro.  That will reduce the risk of red herrings, reduce the burden of assistance, and make it much more likely that some poor sap in support can actually fix your problem if it turns out to be a bug.

Basics Of PowerShell Modules

Tracy Boggiano shows you how to create a PowerShell module:

The first file you need to create is a manifest file. This file tells who wrote the module, which version of PowerShell should be used, and the module version. You will need to open either PowerShell ISE or PowerShell under Run as administrator to run this command. You change the path to match the name you want your module to have, along with the name of the module and the author. You may have to create the folder in your Modules folder (DBAFunctions) first, depending on the security on your machine.

There isn’t too much to the process, but there is a big benefit:  code portability.

Working With Rowversion Data Types

Louis Davidson walks through some of the properties of rowversion data types:

For years, I had thought (and was probably taught in SQL.AlongTimeAgoInAPlaceFarFarAway) that the timestamp column (well before rowversion was a thing) was not guaranteed to be an ever-increasing value. But this is not the case.

In BOL (https://docs.microsoft.com/en-us/sql/t-sql/data-types/rowversion-transact-sql)  it states:

“The rowversion data type is just an incrementing number…”

This makes it useful for determining rows that have changed, because it is automatic and the user cannot override the value in the column. However, there is a major concern when you use rowversions, and that is what happens when you change the structure of the table and expect the consumer to see that change. The problem is that when you change the structure of the table, the rowversion will not be set (except when adding a new rowversion column).

I’m not much of a fan of rowversion and tend not to use it, but my biases don’t have to be yours.
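
If you want to see the behavior for yourself, here is a small Python sketch using pyodbc and a temp table (the connection string is a placeholder). The rowversion column is populated automatically, and only the updated row gets a new value.

```python
import pyodbc

# Placeholder connection string; point it at a test instance.
conn = pyodbc.connect(
    "DRIVER={ODBC Driver 17 for SQL Server};"
    "SERVER=localhost;DATABASE=tempdb;Trusted_Connection=yes;",
    autocommit=True,
)
cur = conn.cursor()

# You never insert or update the rowversion column directly.
cur.execute("CREATE TABLE #rv_demo (id int PRIMARY KEY, val int, rv rowversion);")
cur.execute("INSERT INTO #rv_demo (id, val) VALUES (1, 10), (2, 20);")
print(cur.execute("SELECT id, rv FROM #rv_demo ORDER BY id;").fetchall())

# Updating one row bumps only that row's rowversion.
cur.execute("UPDATE #rv_demo SET val = 11 WHERE id = 1;")
print(cur.execute("SELECT id, rv FROM #rv_demo ORDER BY id;").fetchall())
```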
