Month: January 2022

Cannot Open User Default Database

David Alcock backs out of a problem:

This error isn’t to do with my login as such; it’s still there with sysadmin role membership, so I don’t have to do anything too drastic like restarting SQL Server with the -m or -f startup parameters and recreating it. The error message is telling me that my login’s default database cannot be opened, which is more than likely because I’ve deleted it.

Click through to see how David got out of this issue. This is a big part of why I strongly prefer not to change the default database from master for logins.
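If you find yourself in the same spot, the usual way out is to connect while naming a database that does exist, then point the login back at master. A minimal sketch, with an illustrative login name:

```sql
-- Connect with an explicit database (for example, sqlcmd -S MyServer -d master)
-- so the missing default database doesn't block the login, then repoint it.
-- [SomeLogin] is an illustrative name.
ALTER LOGIN [SomeLogin] WITH DEFAULT_DATABASE = [master];
```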

Getting IDs of Visuals using the Power BI Embedded Analytics Playground

Chris Webb meets us on the playground:

Log Analytics contains information on the dataset, report and visual that are associated with a DAX query, but that information is in the form of IDs rather than names. Getting the IDs for specific datasets and reports is fairly straightforward – you can get them from URLs in the Power BI Portal – and as I wrote here, it’s possible to get a list of IDs and names for the visuals in a report from the JSON file you get when you export from Performance Analyzer in Power BI Desktop. However, my colleague Rui Romano recently showed me a different way to get the same information using the Power BI Embedded Analytics Playground, which may be an easier option to use in some cases.

Click through to learn more about the playground itself, as well as a way to convert visual names to their component IDs.

Vim as an IDE

Andrew Pruski shares some settings:

Disclaimer – I like VS Code, I won’t be uninstalling it anytime soon, and I’m not recommending anyone else do so.

However, I feel it can be overkill for 90% of the work that I do. So I’ve been playing around with Vim to see if it will give me what I want.

What I really want is a lightweight text editor that allows me to run commands in a terminal…that’s it!

I’ve found that vim-markdown is one of those extensions Andrew mentions not having installed but which is quite good to have.

(Not) Title Casing Graph Titles

Mike Cisneros lays out the argument:

In school, I was taught that you should center-align and capitalize the first letters of words in titles. I’ve noticed, though, that storytelling with data charts only capitalize the first word in the chart title, use ALL CAPS for the axis titles, and don’t center-align anything. Why?

Title casing is a really hard habit for me to break. I understand the “why” behind this, but it’s a change I’m unlikely to make anytime soon.

Sparklines in Power BI

Reza Rad gives us the lowdown on sparklines:

You can, of course, achieve the same thing using a line chart. You would have to multiply it for each of the categories (you can do that in Power BI using small multiples). However, if you have many categories, then a small multiple might not show a nice view. That is why sparklines can be helpful.

Sparklines normally carry minimal information. Their X-axis is a trend based on date (or something similar), but the axis is hidden because of the minimal space. You can use a sparkline to understand the trend, the highest and lowest points, the starting and ending points, etc. Because of their minimal nature, sparklines are not used for very detailed analysis. Instead, they are used to understand the trend of different categories over time in a high-level view.

Click through to see how you can add a sparkline to a table or matrix.

TRY_CAST and TRY_PARSE

Joe Obbish shows the difference between two functions:

There’s a lot of guidance out there that states that TRY_CAST is a faster, more modern version of TRY_PARSE and that TRY_PARSE should only be used if you need to set the optional culture parameter. However, the two functions can return different results in some cases, even without the culture parameter.

That guidance is blatantly wrong. TRY_CAST() and TRY_PARSE() both came out in SQL Server 2012. TRY_PARSE() uses .NET to perform parsing, which is going to have some edge case differences, especially around cultures and localization. TRY_CAST() is CAST() in an error-safe wrapper. If anything, TRY_CAST() is the “old” version and TRY_PARSE() the “new” version, with scare quotes in place because they both came out at the same time.

Both of them are useful, though I do agree with Joe’s advice of avoiding TRY_PARSE(), at least for larger datasets. If you’re parsing a single date or a small table of dates, TRY_PARSE() does an excellent job because TRY_PARSE('13/01/2019' AS DATE USING 'fr-fr') is not something you can easily do with TRY_CAST() in a US locale.
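To see the difference concretely, here is a quick check built on the example above (results assume a US-English session language):

```sql
-- Under us_english, CAST expects mm/dd/yyyy, so '13/01/2019' is not a valid
-- date and TRY_CAST returns NULL. TRY_PARSE hands the string to .NET with
-- the fr-FR culture, which reads dd/mm/yyyy.
SELECT
    TRY_CAST('13/01/2019' AS date) AS try_cast_result,                  -- NULL
    TRY_PARSE('13/01/2019' AS date USING 'fr-FR') AS try_parse_result;  -- 2019-01-13
```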

SSIS Framework File Community Edition

Andy Leonard has an announcement:

The very first data integration / data engineering framework I ever wrote was for Data Transformation Services, or DTS. The DTS framework had one job: manage connections. I don’t recall all the details, but I remember DTS included a task that allowed packages to retrieve settings from INI files. INI files are key-value files, so I simply added entries with identical keys and different values – values that matched connection strings for each lifecycle tier – and placed each version of the INI file in the same location on every server in the lifecycle.

The next framework I wrote was for SSIS. I stored metadata in tables – including connections metadata – and created a concept I called an SSIS Application. An SSIS application is, according to my definition, a “collection of SSIS packages that execute in a pre-determined order.”

The SSIS Framework File Community Edition is very similar to this first framework, except for the connections management.

Click through to learn more about the SSIS Framework File Community Edition and check it out.
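To make the “SSIS application” idea concrete, here is a hedged sketch of metadata-driven package ordering; the tables and names are illustrative, not the framework’s actual schema:

```sql
-- Illustrative metadata tables: an application is a named collection of
-- packages with a pre-determined execution order.
CREATE TABLE dbo.Applications
(
    ApplicationId int IDENTITY(1, 1) PRIMARY KEY,
    ApplicationName nvarchar(128) NOT NULL UNIQUE
);

CREATE TABLE dbo.ApplicationPackages
(
    ApplicationId int NOT NULL REFERENCES dbo.Applications (ApplicationId),
    PackagePath nvarchar(260) NOT NULL,
    ExecutionOrder int NOT NULL,
    PRIMARY KEY (ApplicationId, ExecutionOrder)
);

-- A controller package (or script) reads the list and runs each package in order.
SELECT p.PackagePath
FROM dbo.Applications AS a
    INNER JOIN dbo.ApplicationPackages AS p
        ON p.ApplicationId = a.ApplicationId
WHERE a.ApplicationName = N'LoadWarehouse'
ORDER BY p.ExecutionOrder;
```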

Capturing SQL Server Audit Events with Azure Monitor

Bruno Gabrielli connects Azure Monitor to SQL Server Audit:

Today I am going to cover an interesting aspect of how to capture security audit events from both Azure and non-Azure SQL Server machines. Most of you probably know that SQL Server is capable of auditing security-related information (such as access to a given database, record creation or deletion, configuration change, and so on) according to the Audit configuration applied to a given instance or database.

In this post, we will not dig into SQL Server Audit configuration or capability. We will rather explore the steps and configurations necessary to collect data using Azure Monitor.

Read on for the process. You will need the appropriate agent for this, but that agent doesn’t necessitate that your machine be in Azure.
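On the SQL Server side, the audit events need to land somewhere an agent can read them. A minimal sketch, assuming you write to the Windows Application log (audit names are illustrative):

```sql
-- Create a server audit that writes to the Application log, where an
-- Azure Monitor agent can collect it, then capture login events.
USE master;
GO
CREATE SERVER AUDIT [AuditToAppLog]
TO APPLICATION_LOG
WITH (QUEUE_DELAY = 1000, ON_FAILURE = CONTINUE);
GO
CREATE SERVER AUDIT SPECIFICATION [AuditLogins]
FOR SERVER AUDIT [AuditToAppLog]
    ADD (FAILED_LOGIN_GROUP),
    ADD (SUCCESSFUL_LOGIN_GROUP)
WITH (STATE = ON);
GO
ALTER SERVER AUDIT [AuditToAppLog] WITH (STATE = ON);
```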

Testing Failover Group and TCP Connectivity with Managed Instances

Niko Neugebauer has a pair of connectivity tests for us. First up is failover group connectivity:

When you set up a failover group between primary and secondary SQL Managed Instances in two different regions, each instance is isolated using an independent virtual network. Replication traffic needs to be allowed between these VNets.

To allow this kind of traffic, one of the prerequisites is:

– “You need to set up your Network Security Groups (NSG) such that ports 5022 and the range 11000-11999 are open inbound and outbound for connections from the subnet of the other managed instance. This is to allow replication traffic between the instances.”

Click through for a SQL Agent job script which helps with the test. Meanwhile, you can also test TCP connectivity from a managed instance:

In this post we shall focus on helping you determine the TCP connectivity from SQL Managed Instance against a given endpoint and port of your choice.

If you are interested in other posts on how to discover different aspects of SQL MI, please visit http://aka.ms/sqlmi-howto, which serves as a placeholder for the series.

There are scenarios where it would be nice to be able to test if a SQL Managed Instance can reach some “external” endpoints, such as Azure Storage.

Check out both posts.
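For a sense of the technique without clicking through, here is a hedged sketch (not Niko’s actual script) of a SQL Agent job whose PowerShell step probes an endpoint and port; the target address and job name are illustrative:

```sql
USE msdb;
GO
EXEC dbo.sp_add_job
    @job_name = N'Test TCP connectivity';

-- PowerShell job step: Test-NetConnection reports whether the TCP
-- handshake to the target endpoint and port succeeds.
EXEC dbo.sp_add_jobstep
    @job_name = N'Test TCP connectivity',
    @step_name = N'Probe endpoint',
    @subsystem = N'PowerShell',
    @command = N'Test-NetConnection -ComputerName 10.0.1.4 -Port 5022 | Out-String';

EXEC dbo.sp_add_jobserver
    @job_name = N'Test TCP connectivity';

EXEC dbo.sp_start_job
    @job_name = N'Test TCP connectivity';
```

Check the job step output for TcpTestSucceeded to see whether the port is reachable.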

Remapping Database Columns in Python

John Mount performs mapping en masse:

The tricky part is: data science applications easily have hundreds of string-valued variables, each having hundreds of thousands of tracked values. The possibility of a large number of variable values or levels renders the CASE/WHEN solution undesirable, as the query size is proportional to the number of variables and values. The JOIN solutions build a query size proportional to the number of variables (again undesirable, but tolerable). However, super deeply nested queries are just not what relational databases expect. And a sequence of updates isn’t easy to support as a single query or view.

As an example of remapping, John shows translating “a” in a column to 1, “b” to 2, “d” to 3, etc.—that is, perhaps mapping each unique string to a unique number.
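Here is a minimal sketch of the JOIN-based remapping the excerpt weighs, in plain SQL with illustrative names:

```sql
-- Illustrative source table and mapping table.
CREATE TABLE dbo.SourceData
(
    Id int PRIMARY KEY,
    Col1 varchar(50) NULL
);

CREATE TABLE dbo.CodeMap
(
    RawValue varchar(50) PRIMARY KEY,
    MappedValue int NOT NULL
);

INSERT INTO dbo.SourceData (Id, Col1)
VALUES (1, 'a'), (2, 'b'), (3, 'd'), (4, 'zzz');  -- 'zzz' has no mapping

INSERT INTO dbo.CodeMap (RawValue, MappedValue)
VALUES ('a', 1), ('b', 2), ('d', 3);

-- One LEFT JOIN per remapped column, so query size grows with the number
-- of variables, which is why John calls this approach merely tolerable.
SELECT
    t.Id,
    m.MappedValue AS Col1Mapped  -- NULL where no mapping exists
FROM dbo.SourceData AS t
    LEFT JOIN dbo.CodeMap AS m
        ON m.RawValue = t.Col1;
```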
