Press "Enter" to skip to content

Month: November 2017

Roll Your Own SSAS Performance Monitoring Tool

Shabnam Watson has a great post on building an Analysis Services performance monitoring tool from scratch using Power BI:

In many cases, SSAS works efficiently with default settings right out of the box. However, when you have large databases, a substantial number of concurrent users, insufficient resources on your server, or when best practices are not followed during SSAS database design, you can run into performance bottlenecks and problems. In these scenarios, you need to know what to measure and how to measure it, what’s normal for your environment (a benchmark), and you need some amount of historical measurements to be able to see the events that lead up to a given performance problem or failure. Once you have this data, you can improve your server’s performance by addressing the problem(s).

This is a tour de force of an article, absolutely worth reading if you plan on dealing with Analysis Services at some point.  Even if you don’t build your own tool, you’ll learn a lot about what drives SSAS performance and what indicates that there might be a problem.
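
Even if you never build the full tool, it’s easy to poke at the raw material yourself: Analysis Services exposes Dynamic Management Views you can query over an SSAS connection (for example, in an MDX query window in Management Studio). A minimal, hedged example:

```sql
-- Run against an Analysis Services instance, not the database engine.
-- Lists current sessions; $SYSTEM.DISCOVER_COMMANDS and
-- $SYSTEM.DISCOVER_CONNECTIONS work the same way.
SELECT *
FROM $SYSTEM.DISCOVER_SESSIONS;
```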

Fun With Dynamic SQL: Implicit Casting Can Allow SQL Injection

Remus Rusanu shows an example where implicit casting from NVARCHAR to VARCHAR can introduce a SQL injection vulnerability that you otherwise wouldn’t expect:

In both examples above the SQL executed apparently should have been safe from SQL injection, but it isn’t. Neither REPLACE nor QUOTENAME was able to protect properly, and the injected division by zero was executed. The problem is the Unicode MODIFIER LETTER APOSTROPHE (NCHAR(0x02bc)) character that I used in constructing the NVARCHAR value, which is then implicitly cast to VARCHAR. This cast converts the special ‘modifier letter apostrophe’ character to a plain single quote. This introduces new single quotes into the value after the supposedly safe escaping occurred. The result is plain old SQL injection.

Click through for the script.  The upside of this is that it’s entirely under your control and you should be able to get it right by using NVARCHAR consistently.
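
To see the failure mode in miniature, here is a sketch of the mechanism rather than Remus’s exact repro; it assumes a typical code page 1252 collation, where the best-fit mapping applies:

```sql
DECLARE @input NVARCHAR(100) = N'x' + NCHAR(0x02BC) + N'; SELECT 1/0; --';

-- QUOTENAME sees no plain single quotes in @input, so it has nothing to double.
DECLARE @quoted NVARCHAR(300) = QUOTENAME(@input, N'''');

-- Assigning the NVARCHAR result to a VARCHAR variable forces an implicit cast;
-- best-fit mapping turns U+02BC into a plain single quote, closing the literal early.
DECLARE @sql VARCHAR(500) = 'SELECT ' + @quoted + ' AS val;';

EXEC (@sql); -- the injected division by zero now runs
```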

HDFS Federation

Sangeeta Gulia explains what HDFS Federation is and how it differs from classic HDFS:

HDFS Federation improves the existing HDFS architecture through a clear separation of namespace and storage, enabling a generic block storage layer. It enables support for multiple namespaces in the cluster to improve scalability and isolation. Federation also opens up the architecture, expanding the applicability of the HDFS cluster to new implementations and use cases.

NameNodes are federated; that is, they all work independently and don’t require any coordination with each other.

It’s one way to reduce the number of potential single points of failure in a Hadoop environment.
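
For flavor, federation is configured in hdfs-site.xml by declaring multiple nameservices, each backed by its own NameNode. A hypothetical two-namespace sketch (the hostnames are made up):

```xml
<!-- hdfs-site.xml: two independent namespaces -->
<property>
  <name>dfs.nameservices</name>
  <value>ns1,ns2</value>
</property>
<property>
  <name>dfs.namenode.rpc-address.ns1</name>
  <value>nn1.example.com:8020</value>
</property>
<property>
  <name>dfs.namenode.rpc-address.ns2</name>
  <value>nn2.example.com:8020</value>
</property>
```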

Pandas Basics

Kevin Jacobs has a tutorial on Python’s Pandas library:

There are a few things worth mentioning. Often, Pandas is abbreviated as pd (like NumPy, which is often abbreviated as np). If you look at other code, you will see that DataFrames are often abbreviated as df. Here, the DataFrame is constructed using data from a list of lists. The columns argument specifies the keys of the data.

This is a high-level intro, but helps you get your feet wet if you’ve not played with the library.
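
A tiny illustration of the conventions Kevin describes (the data and column names here are made up):

```python
import pandas as pd  # the conventional abbreviation

# Construct a DataFrame from a list of lists; columns supplies the keys.
data = [
    ['Alice', 34],
    ['Bob', 28],
]
df = pd.DataFrame(data, columns=['name', 'age'])

print(df.head())         # peek at the first rows
print(df['age'].mean())  # 31.0
```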

A Look At Azure SQL Database Prices By Tier

Arun Sirpal puts together a comparison of Azure SQL Database prices (in GBP) by service tier:

Hopefully this paints a picture for you. I will have my say though. The Basic tier is something that you should NOT be using for production workloads; it’s quite obvious with the 2GB limit, but worth reinforcing the point. The Standard tier is more for your common workloads, and Premium is designed for high transactional volume where I/O performance is much more important to you – my hunch is that maybe they are utilizing SSDs? I am not sure, but the Premium costs are much higher.

There is another service tier called Premium RS (in preview). My understanding is that its performance is similar to that of Premium; HOWEVER, it is only useful for workloads that can tolerate data loss of up to 5 minutes due to service failures. I will probably not use this for production, but then again it seems to be nearly half the cost of Premium. Choices, choices, choices.

Two notes with this one.  First, these are prices as of when Arun put together the notes; they will probably fluctuate over time.  Second, there might be differences in prices by data center.  At the very least, though, this gives you an idea of the relative price spread.
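
As a related aside, you can check and change a database’s tier from T-SQL; the database name and target objective below are placeholders:

```sql
-- Check the current service objective (run in the target database).
SELECT DATABASEPROPERTYEX(DB_NAME(), 'ServiceObjective') AS CurrentObjective;

-- Scale to a different performance level; 'S2' is just an example value.
ALTER DATABASE [YourDatabase] MODIFY (SERVICE_OBJECTIVE = 'S2');
```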

Getting Distinct Dimension Count Based On A Filtered Measure

Gogula Aryalingam shows us a neat trick in Power BI:

Enthusiastic as we were, one of the hardest nuts to crack, though it seemed so simple during requirements gathering, was to perform a distinct count of a dimension based on a filtered measure on a couple of the reports. To sketch it out with some context: you have products, several more dimensions, and a whole lot of measures, including one called Fulfillment (which was a calculation based on a couple of measures from two separate tables). The requirement was to get a count of all those products (that were, of course, filtered by other slicers on the Power BI report) wherever Fulfillment was less than 100%, i.e. the number of products that had not reached their targets.

Simple as the requirements seemed, the hardest part in getting it done was our limited knowledge of DAX, specifically, knowing which function to use. We first tried building the data model itself, but our choice of DAX formulae and the number of records we had (50 million+) soon saw us running out of memory in seconds on a 28GB box; not too good, given the rest of the model didn’t even utilize more than half the memory.

Click through for the answer.
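
For a sense of the shape of the solution, one common DAX pattern for “distinct count where a measure misses its target” looks like the following. This is a guess at the general form, not Gogula’s actual measure, and the table and measure names are invented:

```dax
Products Below Target :=
CALCULATE (
    DISTINCTCOUNT ( Product[ProductKey] ),
    FILTER (
        VALUES ( Product[ProductKey] ),
        [Fulfillment] < 1
    )
)
```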

Improving Code Quality With SonarQube

Samir Behara has a quick look at SonarQube, an open source static analysis engine:

In my project, we have also integrated SonarQube with our TFS CI/CD build and have configured the Quality Gates.

For example, if I try to inject a security threat or a known coding issue, the TFS build will fail, the check-in will get rejected, the quality gate fails, and SonarQube points me to the exact issue, which I can rectify and do another check-in. So it will basically stop you from checking in code with potential issues.

Otherwise, the only way to catch such issues is during manual code reviews. SonarQube helps automate that process. You can write your own rules to look for known issues in the code and stop them before the code gets checked in to source control. So overall you can ensure good-quality code going to production and fewer regression defects coming up at a later point in time.

Read on for an example where a SonarQube rule can find a SQL injection vulnerability and thereby fail the build.
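
If you have not used SonarQube before, a scan is typically driven by a small configuration file picked up by the scanner; the values below are placeholders:

```properties
# sonar-project.properties: minimal scanner configuration
sonar.projectKey=my-project
sonar.projectName=My Project
sonar.sources=src
sonar.host.url=http://localhost:9000
```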

Backups Are Faster With SQL Server 2017

Parikshit Savjani explains how the SQL Server team was able to use indirect checkpoints to improve backup performance:

In an RDBMS, whenever tables get larger, one technique to tune and optimize scans on those tables is to partition them. With indirect checkpoints, we do the same.

With indirect checkpoints, for every database which has target_recovery_time set, a dirty page manager and dirty page list are created. The dirty page list is further partitioned by scheduler, allowing the dirty page tracking to scale further. This decouples the dirty page scan for a given database from the size of the buffer pool and allows the scan to scale and be much faster than the automatic checkpoint algorithm.

As Bob Dorr mentions in his blog here, a new database creation process in SQL Server 2016 requires only 250 buffers to scan, as opposed to 500 million buffers with the former algorithm. This is the rationale for making indirect checkpoints the default: the algorithm is much more scalable at tracking dirty pages in the buffer pool than automatic checkpoints.

Read on to see how this technology led to faster backups.
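
Indirect checkpointing is controlled per database via the target recovery time; in SQL Server 2016 and later, new databases default to a 60-second target. A quick sketch (the database name is a placeholder):

```sql
-- Enable indirect checkpoints with a 60-second recovery target.
ALTER DATABASE [YourDatabase] SET TARGET_RECOVERY_TIME = 60 SECONDS;

-- A value of 0 means the database still uses automatic checkpoints.
SELECT name, target_recovery_time_in_seconds
FROM sys.databases;
```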

Generating Task Factory Dynamics CRM Loads With Biml

Meagan Longoria shows how to use Biml to generate SSIS packages which use the Task Factory Dynamics CRM source:

I recently worked on a project where a client wanted to use Biml to create SSIS packages to stage data from Dynamics 365 CRM. My first attempt using a script component had an error, which I think is related to a bug in the Biml engine with how it currently generates script components, so I had to find a different way to accomplish my goal. (If you have run into this issue with Biml, please comment so I know it’s not just me! I have yet to get Varigence to confirm it.) This client owned the Pragmatic Works Task Factory, so we used the Dynamics CRM source to retrieve data.

Meagan has the code as well as some important notes, so read the whole thing.
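
If Biml is new to you, the idea is that a compact XML file (optionally with C# nuggets) expands into full SSIS packages. A bare-bones sketch for flavor; this is not Meagan’s Task Factory code, and the package name is invented:

```xml
<Biml xmlns="http://schemas.varigence.com/biml.xsd">
  <Packages>
    <!-- Each Package element becomes a generated .dtsx package. -->
    <Package Name="StageCrmAccounts" ConstraintMode="Linear" />
  </Packages>
</Biml>
```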

AG Failover From PowerShell

Frank Gill has written a script to perform an Availability Group failover using PowerShell:

The function takes a replica name as input and queries system tables for Availability Groups running as secondaries that are online, healthy, and synchronous.  For each AG found, the function generates an ALTER AVAILABILITY GROUP statement.  If the -noexec parameter is set to 0, the command will be executed.  If -noexec is set to 1, the command will be written out to a file.

When writing the function, I started out trying to use the native PowerShell Availability Group cmdlets.  After several false starts, I found it easier to develop the T-SQL code in Management Studio and use Invoke-Sqlcmd to execute the code.  The code is available below.  I hope you can put it to use.

Click through for the script.
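
The core T-SQL pattern behind a function like this resembles the sketch below; it is not Frank’s actual code, just the general shape of the DMV query:

```sql
-- Build ALTER AVAILABILITY GROUP statements for local, healthy,
-- synchronous-commit secondary replicas.
SELECT 'ALTER AVAILABILITY GROUP ' + QUOTENAME(ag.name) + ' FAILOVER;' AS FailoverCommand
FROM sys.availability_groups AS ag
    INNER JOIN sys.dm_hadr_availability_replica_states AS ars
        ON ag.group_id = ars.group_id
    INNER JOIN sys.availability_replicas AS ar
        ON ars.replica_id = ar.replica_id
WHERE ars.is_local = 1
    AND ars.role_desc = 'SECONDARY'
    AND ars.synchronization_health_desc = 'HEALTHY'
    AND ar.availability_mode_desc = 'SYNCHRONOUS_COMMIT';
```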
