Press "Enter" to skip to content

Curated SQL Posts

Orphaned Users in SQL Server

Dave Bland walks us through one way to fix an orphaned user:

In my many years of working as a DBA, I have encountered many disabled logins. However, I have never really encountered what looks to be a disabled database user account. I didn’t even think it was possible to disable a user account in a SQL Server database. I checked the user account properties just to make sure I was correct. Sure enough, no option to disable a user account. This finally turned out to be a simple case of looks being deceiving.

You can also use the sp_change_users_login procedure to fix orphaned users, though it is deprecated; ALTER USER ... WITH LOGIN is the recommended replacement.
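If you want to script both the check and the fix, here is a quick sketch using pyodbc; the connection string and the user and login names are placeholders rather than anything from Dave's post:

```python
import pyodbc

# Placeholder connection string; adjust driver, server, and database for your environment
conn = pyodbc.connect(
    "DRIVER={ODBC Driver 17 for SQL Server};"
    "SERVER=localhost;DATABASE=MyDatabase;Trusted_Connection=yes;",
    autocommit=True,
)
cursor = conn.cursor()

# Report database users whose SIDs no longer map to a server login
cursor.execute("EXEC sp_change_users_login @Action = 'Report';")
for username, user_sid in cursor.fetchall():
    print(f"Orphaned user: {username}")

# Re-map an orphaned user to its login; ALTER USER ... WITH LOGIN is the
# supported replacement for the deprecated sp_change_users_login
cursor.execute("ALTER USER [SomeUser] WITH LOGIN = [SomeLogin];")
```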


Controlling Power BI Visual Visibility

Matt Allington shows how we can take one Power BI visual and use it to control the visibility status of another visual:

I have written a few articles in the past that toy with the ideas of changing visibility and text colour based on selection. I started to wonder if it was possible to make a visual appear (or not) based on a selection from the user. There is no out of the box way to do that today. It is possible to use bookmarks to show and hide an object, but the user must click a specific button to do this. I want the user to be able to interact with a report and see (or not see) a chart based on some valid selection across the report. Microsoft is already working on building expression-based formatting across the breadth of Power BI; however, as of now the only item you can change is the header in a chart.

Hopefully this gets better over time.


Processing Fixed-Width Files with Spark

Subhasish Guha shows how you can read a fixed-width file with Apache Spark:

A fixed width file is a very common flat file format when working with SAP, Mainframe, and Web Logs. Converting the data into a dataframe using metadata is always a challenge for Spark Developers. This particular article talks about all kinds of typical scenarios that a developer might face while working with a fixed width file. This solution is generic to any fixed width file and very easy to implement. This also takes care of the Tail Safe Stack as the RDD gets into the foldLeft operator.

It’s a little more complicated than with R, where stringr can handle fixed-width formats. But it’s not bad.
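For a rough idea of the Spark side, here is a minimal PySpark sketch (not the article's code) that slices fields out of each line by position; the file path and column layout are invented for illustration:

```python
from pyspark.sql import SparkSession
from pyspark.sql.functions import col

spark = SparkSession.builder.appName("fixed-width-demo").getOrCreate()

# Hypothetical layout: (column name, 1-based start position, length)
layout = [("id", 1, 5), ("name", 6, 20), ("amount", 26, 10)]

# Each line of the file arrives as a single string column named "value"
raw = spark.read.text("/tmp/fixed_width_sample.txt")

# Carve each field out of the line by position
parsed = raw.select(
    *[col("value").substr(start, length).alias(name) for name, start, length in layout]
)
parsed.show()
```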


Sentiment Analysis with Spark on Qubole

Jonathan Day, et al., have a tutorial on using Qubole to build a sentiment analysis model:

This post covers the use of Qubole, Zeppelin, PySpark, and H2O PySparkling to develop a sentiment analysis model capable of providing real-time alerts on customer product reviews. In particular, this model allows users to monitor any natural language text (such as social media posts or Amazon reviews) and receive alerts when customers post extremely nice (high sentiment) or extremely negative (low sentiment) comments about their products.

In addition to introducing the frameworks used, we will also discuss the concepts of embedding spaces, sentiment analysis, deep neural networks, grid search, stop words, data visualization, and data preparation.

Click through for the demo.
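To give a flavor of the data preparation piece, here is a small PySpark sketch of tokenization and stop word removal; the sample reviews are invented and this is not the pipeline from the post:

```python
from pyspark.sql import SparkSession
from pyspark.ml.feature import Tokenizer, StopWordsRemover

spark = SparkSession.builder.appName("sentiment-prep-demo").getOrCreate()

# Toy reviews standing in for real customer feedback
reviews = spark.createDataFrame(
    [("This product is absolutely wonderful",),
     ("Terrible quality and it broke within a day",)],
    ["review_text"],
)

# Lowercase and split the text into tokens, then drop common English stop words
tokens = Tokenizer(inputCol="review_text", outputCol="tokens").transform(reviews)
cleaned = StopWordsRemover(inputCol="tokens", outputCol="filtered_tokens").transform(tokens)
cleaned.select("filtered_tokens").show(truncate=False)
```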


Running Spark MLlib to Feed Power BI

Brad Llewellyn shows how you can take Spark MLlib results and feed them into Power BI:

MLlib is one of the primary extensions of Spark, along with Spark SQL, Spark Streaming and GraphX.  It is a machine learning framework built from the ground up to be massively scalable and operate within Spark.  This makes it an excellent choice for machine learning applications that need to crunch extremely large amounts of data.  You can read more about Spark MLlib here.

In order to leverage Spark MLlib, we obviously need a way to execute Spark code.  In our minds, there’s no better tool for this than Azure Databricks.  In the previous post, we covered the creation of an Azure Databricks environment.  We’re going to reuse that environment for this post as well.  We’ll also use the same dataset that we’ve been using, which contains information about individual customers.  This dataset was originally designed to predict Income based on a number of factors.  However, we left the income out of this dataset a few posts back for reasons that were important then.  So, we’re actually going to use this dataset to predict “Hours Per Week” instead.

Check it out. And Brad’s not joking when he says the resulting model is terrible. But that’s okay, because it was never about the model.
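If you have not worked with MLlib's DataFrame API before, the general shape of a regression like Brad's is roughly the following; the tiny dataset and column names here are invented, not his census data:

```python
from pyspark.sql import SparkSession
from pyspark.ml.feature import VectorAssembler
from pyspark.ml.regression import LinearRegression

spark = SparkSession.builder.appName("mllib-regression-demo").getOrCreate()

# Invented stand-in for the customer dataset
df = spark.createDataFrame(
    [(25, 10, 38.0), (42, 16, 45.0), (37, 12, 40.0), (58, 9, 35.0)],
    ["age", "education_years", "hours_per_week"],
)

# MLlib estimators expect a single vector column of features
assembler = VectorAssembler(inputCols=["age", "education_years"], outputCol="features")
train = assembler.transform(df)

# Fit a simple linear regression predicting hours per week
model = LinearRegression(featuresCol="features", labelCol="hours_per_week").fit(train)
model.transform(train).select("hours_per_week", "prediction").show()
```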


Power Query Container Size and Performance

Chris Webb looks into what changing the Power BI Dataflow container size does for us:

Currently there is no way to change this 256MB limit in Power BI Desktop or Excel, although someone has already posted a suggestion on the Ideas site to allow us to change it. How much of an impact does this actually have on refresh performance, though? Without the ability to change this setting it’s hard to say, but I suspect it could be significant and that a lot of Power Query performance problems could be explained by this behaviour.

The situation is different in the Power BI service, where I understand there is a limit on the overall amount of memory that a single Power Query query evaluation can use.

Read on to understand the differences here between running on Power BI Desktop and running in the Power BI service, as well as a bit of testing on Chris’s part.


Time Series Analysis with Forensic Accounting

I have another part up in my series on forensic accounting techniques:

This post will be a pretty short one. In my talk, I don’t have any demos, mostly because much of cohort analysis has secretly been time series analysis at the same time. Instead, I’ll lob out a few points and call it a day.

Time series analysis, at its core, is all about how your data changes over time. The grain for time series analysis is important: as we saw in the last post, we were able to get an excellent result at the yearly level when regressing number of active buses versus number of line items.

Spoilers: it’s not as short as I thought it would be.
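For a very rough sense of what a yearly-grain regression looks like in code, here is a sketch with entirely made-up numbers (not the bus data from the series):

```python
import numpy as np
from sklearn.linear_model import LinearRegression

# Made-up yearly aggregates: one row per year is the grain
active_buses = np.array([140, 155, 170, 182, 198, 210, 225, 240])
line_items = np.array([2800, 3150, 3425, 3690, 4010, 4280, 4560, 4870])

# Regress invoice line items against the number of active buses
X = active_buses.reshape(-1, 1)
model = LinearRegression().fit(X, line_items)
print(f"slope={model.coef_[0]:.2f}, intercept={model.intercept_:.1f}, "
      f"R^2={model.score(X, line_items):.3f}")
```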


Building an AKS Cluster with Azure DevOps

Rob Sewell shows how to use Azure DevOps to build an AKS cluster with Terraform:

In the last few posts I have moved from building an Azure SQL DB with Terraform using VS Code to automating the build process for the Azure SQL DB using Azure DevOps Build Pipelines to using Task Groups in Azure DevOps to reuse the same Build Process and build an Azure Linux SQL VM and Network Security Group. This evolution is fantastic but Task Groups can only be used in the same Azure DevOps repository. It would be brilliant if I could use Configuration as Code for the Azure Build Pipeline and store that in a separate source control repository which can be used from any Azure DevOps Project.

Luckily, you can.

And Rob shows us how it’s done.


Aggregating Only Visible Values in DAX

Alberto Ferrari shows how you can calculate a sum based only on the visible values, ignoring the missing cases:

These measures produce the correct figures month by month. However, at the year level the number of working days might be too big whenever there are incomplete months – this always happens before the end of the year, or when the first month with sales is not January as in the following example. As you can see in the following figure, in this case the yearly value of SalesPerWorkingDays is lower than the value of any month, which does not make much sense.

Click through for the solution.
