Press "Enter" to skip to content

October 2016

Cached Azure Analysis Services Logins

Chris Webb shows how to log into Azure Analysis Services from Management Studio as a different user:

When Azure Analysis Services was announced I had to try it out right away. Of course I didn’t read the instructions properly so when I tried to log in to my Azure Analysis Services instance from SQL Server Management Studio, like an idiot I logged in with the wrong username. The problem is that once you’ve done this, with current versions of SQL Server Management Studio there’s no way of logging out and logging in as a different user. Luckily Igor Uzhviev of Microsoft had a solution for me and I thought I’d share it for anyone else who’s made the same mistake. Here’s what you need to do:

This seems a bit much, but it should only be a temporary workaround.


Benford’s Law

Tomaz Kastrun is starting a series on fraud analysis, beginning with Benford's Law:

One of the samples Microsoft provided with the release of SQL Server 2016 used the simple logic of Benford's law. This law works great with naturally occurring numbers and can be applied across any kind of problem. By naturally occurring, we mean numbers that are not generated generically, such as a page number in a book, an incremented number in your SQL table, or a sequence number of any kind, but numbers that occur independently of each other: in nature (the length or width of trees, mountains, rivers), the length of roads in cities, addresses in your home town, city/country populations, etc. The law calculates the log distribution of the digits 1 to 9 and stipulates that the number one will occur 30% of the time, the number two will occur 17% of the time, the number three 12% of the time, and so on. Randomly generated numbers will most certainly produce a distribution for each digit from 1 to 9 with a probability of 1/9. It might also not work with restrictions; for example, height expressed in inches will surely not produce a Benford distribution. My height is 188 cm, which is 74 inches or 6ft2. All three numbers will not generate the correct distribution, even though height is a natural phenomenon.

Tomaz includes SQL Server R Services code, so check it out.
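
Benford's law itself is compact: the leading digit d should appear with probability log10(1 + 1/d), which works out to roughly 30.1% for 1, 17.6% for 2, and so on down to 4.6% for 9. Tomaz works in R through SQL Server R Services; as a plain T-SQL sketch of the same comparison (dbo.Payments and its Amount column are hypothetical stand-ins for your data), you could put expected and observed frequencies side by side:

```sql
-- A minimal sketch, not Tomaz's code: dbo.Payments and Amount are
-- hypothetical stand-ins for whatever naturally occurring data you have.
WITH expected AS
(
    -- Benford's law: P(d) = LOG10(1 + 1/d) for leading digits 1..9
    SELECT d.digit,
           LOG10(1.0 + 1.0 / d.digit) AS expected_freq
    FROM (VALUES (1),(2),(3),(4),(5),(6),(7),(8),(9)) AS d(digit)
),
observed AS
(
    -- Share of rows by leading digit (values >= 1 keep the digit logic simple)
    SELECT LEFT(CAST(Amount AS varchar(40)), 1) AS leading_digit,
           COUNT(*) * 1.0 / SUM(COUNT(*)) OVER () AS observed_freq
    FROM dbo.Payments
    WHERE Amount >= 1
    GROUP BY LEFT(CAST(Amount AS varchar(40)), 1)
)
SELECT e.digit,
       e.expected_freq,
       ISNULL(o.observed_freq, 0) AS observed_freq
FROM expected AS e
    LEFT JOIN observed AS o
        ON o.leading_digit = CAST(e.digit AS varchar(1))
ORDER BY e.digit;
```

A big gap between the two columns, especially on the low digits, is the classic signal that the data deserves a closer look.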


Finding Packages Which Use Configurations

Bill Fellows explains how to find SSIS packages still using the Configuration option in the classic deployment model:

Create an SSIS package. Add a Variable to your package called FolderSource and assign it the path to your SSIS packages. Add a Script Task to the package and then add @[User::FolderSource] to the ReadOnly parameters.

Double-click the script, assuming C#, and when it opens up, use the following script as your Main:

Bill continues on with the contents of his script task, so click through for more.
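
Bill's script task walks the file system from inside a package. As an alternative for packages deployed to msdb under the classic deployment model, a plain T-SQL query can run a similar check by shredding the stored package XML for DTS:Configuration elements. Here is a sketch of that alternative, not Bill's script:

```sql
-- A sketch, not Bill's script: flags packages stored in msdb (classic
-- deployment) whose package XML contains any DTS:Configuration element.
WITH XMLNAMESPACES ('www.microsoft.com/SqlServer/Dts' AS DTS)
SELECT f.foldername,
       p.[name] AS package_name
FROM msdb.dbo.sysssispackages AS p
    JOIN msdb.dbo.sysssispackagefolders AS f
        ON p.folderid = f.folderid
    -- packagedata is image, so it needs the double cast to get to xml
    CROSS APPLY (SELECT CAST(CAST(p.packagedata AS varbinary(MAX)) AS xml) AS pkg) AS x
WHERE x.pkg.exist('//DTS:Configuration') = 1;
```

Packages that live on the file system still need something like Bill's script task (or a small script outside SSIS) to read each .dtsx file.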


SSIS And SSRS Practices

Chris Seferlis has a list of practices which he’s learned over the years:

Use Source Control

  1. For anyone who was a developer in their past life, or is one now, this is a no-brainer, no-alternative best practice. In my case, because I come from a management and systems background, I've had to learn this the hard way. If this is your first foray into development, get ready, because you're in for some mistakes, and you're going to delete or change some code you really wish you didn't. Whether it be for reference purposes on something you want to change, or something you do by accident, you're going to need that code you just got rid of yesterday, and we both know you didn't back up your Visual Studio jobs… Hence, source control. GitHub and Microsoft offer great solutions for Visual Studio, and Redgate offers a great solution for SSMS. I highly recommend checking them out and using the tools! There are some other options out there that are free, or will save your code to local storage locations, but the cloud is there for a reason, and many of us are on the go, so having it available from all locations is very helpful.

Regarding source control for Integration Services packages, that’s a good reason to learn Biml—it works much better for source control than the native packages (which change every time you open the package and contain a lot of noise).


Filegroups And RTO

Raul Gonzalez explains the importance of filegroups in minimizing RTO:

So if we don't create additional filegroups in our databases, not only will all the data go to the same logical container, but in case we have to restore that database from a backup, we will also have to wait until all of it is restored.

Imagine, for instance, you have a lot of historical data and there is a disaster. If you had different filegroups, one for current data and another for the historical data, you would be able to get your live data back first and quickly (to get you up and running), and then restore all the historical data, which is not critical.

To show you how, I'm going to create a database with different filegroups so you'll see how we can do it.

Click through for the scripts, as well as more information.
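
To make the RTO argument concrete, here is a minimal sketch of the pattern Raul describes, with hypothetical names and paths: historical data on its own filegroup, then a piecemeal restore that brings the primary filegroup online first.

```sql
-- A minimal sketch with hypothetical names and paths, not Raul's script.
CREATE DATABASE Sales
ON PRIMARY
    (NAME = Sales_Current, FILENAME = 'D:\Data\Sales_Current.mdf'),
FILEGROUP Archive
    (NAME = Sales_Archive, FILENAME = 'D:\Data\Sales_Archive.ndf')
LOG ON
    (NAME = Sales_Log, FILENAME = 'L:\Log\Sales_Log.ldf');
GO

-- After a disaster, a piecemeal restore brings PRIMARY (the live data) back
-- first; the database is usable while Archive is still offline.  Simple
-- recovery model shown; under full recovery you restore log backups as well.
RESTORE DATABASE Sales
    FILEGROUP = 'PRIMARY'
    FROM DISK = 'B:\Backup\Sales.bak'
    WITH PARTIAL, RECOVERY;

-- The historical filegroup can follow whenever it is convenient.
RESTORE DATABASE Sales
    FILEGROUP = 'Archive'
    FROM DISK = 'B:\Backup\Sales.bak'
    WITH RECOVERY;
```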


Semantic Layers

Melissa Coates explains the relevance of Analysis Services as a semantic layer:

Part 1: Why a Semantic Layer Like Azure Analysis Services is Relevant {you are here}

Part 2: Where Azure Analysis Services Fits Into BI & Analytics Architecture {coming soon}

Fundamentally, Analysis Services serves as a semantic layer (see below for further discussion of a semantic layer). Because the business intelligence industry now embraces an array of technology choices, sometimes it seems like a semantic layer is no longer valued like it once was. Well, my opinion is that for many businesses, a semantic layer is tremendously important to support the majority of business users who do *not* want to do their own data wrangling, data prep, and data modeling activities.

We (I) spend so much time thinking about the Brave New World of massive blobs of semi-structured data that it’s a good idea to step back every once in a while and remember that yes, there is a need for sanitized, easy-to-consume data which answers known business questions.  The percentage of people at a company willing to create an R or Python notebook or run a MapReduce job is typically well under 5%.


Passing Values To Bash

Steph Locke shows how to send input parameters to Bash scripts:

This is a very quick post on how you can make a bash script that allows you to provide it values via the command line. Values passed to a bash script follow a 1-based array convention inside the script and are referenced by prefixing the position with $.

This means that if I provide ./dummyscript.sh value1 value2, inside dummyscript.sh I can retrieve these by referencing their positions:

Read on for more information, including how to use named parameters.  Given that Bash is now officially supported in Windows 10 (well, in beta form), it might be worth checking that scripting language out.
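
For illustration, a dummyscript.sh along the lines Steph describes might look like this (a minimal sketch, not her exact code):

```bash
#!/bin/bash
# A minimal sketch: $1 and $2 are the first and second positional
# arguments; $0 is the script name itself and $# counts the arguments.
echo "Script name:     $0"
echo "First argument:  $1"
echo "Second argument: $2"
echo "Argument count:  $#"
```

Running ./dummyscript.sh value1 value2 would then echo value1 and value2 back at you.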


Polybase As Ersatz StretchDB

Ginger Grant has a great idea:

PolyBase, which was released with SQL Server 2016, provides another method to access live data either locally or in the cloud, very similar to the SQL Server Stretch Database feature. PolyBase can also provide more cost-effective availability for cold data, streamline on-premises data maintenance, and keep data secure even during migration. PolyBase differs from Stretch Database in a few ways: the SQL must be different, the speed is noticeably slower, and it is a lot less expensive. The cost is significantly less because storing data in an Azure blob store starts at 1 cent a month, while Stretch Database starts at $2.50 an hour. In this post, I will show how to take data which was archived due to its age (it was created in 2012) and store it in an Azure Blob Storage file which will be available via PolyBase when needed.

The ideal scenario for this solution is extremely cold data which is nonetheless required as part of regulatory compliance, where having a query run for 3 hours once every six months or so is acceptable.
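
On the SQL Server 2016 side, the moving parts are a credential for the storage account, an external data source pointing at the blob container, a file format, and an external table over the archived files. A hedged sketch follows; every name, the container URL, and the table shape are hypothetical rather than taken from Ginger's post:

```sql
-- A hedged sketch with hypothetical names; assumes PolyBase is installed
-- and a database master key already exists for the scoped credential.
CREATE DATABASE SCOPED CREDENTIAL AzureBlobCred
WITH IDENTITY = 'archiveuser', SECRET = '<storage account access key>';

CREATE EXTERNAL DATA SOURCE AzureBlobArchive
WITH (
    TYPE = HADOOP,
    LOCATION = 'wasbs://archive@mystorageaccount.blob.core.windows.net',
    CREDENTIAL = AzureBlobCred
);

CREATE EXTERNAL FILE FORMAT CsvFormat
WITH (
    FORMAT_TYPE = DELIMITEDTEXT,
    FORMAT_OPTIONS (FIELD_TERMINATOR = ',')
);

-- Queries against this table read the 2012 archive straight from blob storage.
CREATE EXTERNAL TABLE dbo.SalesArchive2012
(
    SaleID int NOT NULL,
    SaleDate date NOT NULL,
    Amount decimal(18, 2) NOT NULL
)
WITH (
    DATA_SOURCE = AzureBlobArchive,
    FILE_FORMAT = CsvFormat,
    LOCATION = '/sales2012/'
);
```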


Finding Duplicate Indexes With Biml

Ben Weissman shows how to find duplicate indexes using Biml:

This little piece of Biml will check all your tables for indices sharing the same columns. It does not generate any SSIS tasks, etc., but it might be a good starting point to build your own index monitoring or index clinic – because Biml is NOT just for SSIS.

Depending upon your definition of a duplicate index, this might generate false positives.  Regardless, it’s a nice way of showing that Biml is about more than SSIS.
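
If you'd like the same check without Biml, a T-SQL query over the catalog views can flag indexes on the same table whose key columns match exactly, which is one strict definition of a duplicate. A sketch:

```sql
-- A sketch of the same idea in plain T-SQL: pair up indexes on the same
-- table whose key column lists (ignoring includes) are identical.
WITH index_keys AS
(
    SELECT i.[object_id],
           i.index_id,
           i.[name] AS index_name,
           STUFF((SELECT ',' + c.[name]
                  FROM sys.index_columns AS ic
                      JOIN sys.columns AS c
                          ON c.[object_id] = ic.[object_id]
                         AND c.column_id = ic.column_id
                  WHERE ic.[object_id] = i.[object_id]
                    AND ic.index_id = i.index_id
                    AND ic.is_included_column = 0
                  ORDER BY ic.key_ordinal
                  FOR XML PATH('')), 1, 1, '') AS key_columns
    FROM sys.indexes AS i
    WHERE i.index_id > 0   -- skip heaps
)
SELECT OBJECT_SCHEMA_NAME(a.[object_id]) AS [schema],
       OBJECT_NAME(a.[object_id]) AS [table],
       a.index_name,
       b.index_name AS duplicate_of,
       a.key_columns
FROM index_keys AS a
    JOIN index_keys AS b
        ON a.[object_id] = b.[object_id]
       AND a.key_columns = b.key_columns
       AND a.index_id > b.index_id;   -- each pair reported once
```

A looser definition (for example, one index's keys being a leading prefix of another's) would need a LIKE comparison instead of equality, and would generate more candidates to review.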
