Press "Enter" to skip to content

Category: Uncategorized

Auditing Logins and Finding Unused Tables with XESmartTarget

Gianluca Sartori continues a series on XESmartTarget. First up is a process to audit successful logins:

Sometimes you are not interested in the individual events captured by a session, but you want to extract some information from a series of events, by grouping and aggregating them. This is the case with events that happen very often, like logon events, which can help you validate your story.

Imagine that you inherited a big, busy and chaotic SQL Server instance, with lots of databases and logins. One of the things that you probably want to do is track which logins are active and which ones are not and can be safely disabled.

Once you have that in place, how about unused objects?

The previous recipe showed you how to capture data from Extended Events sessions, summarize it in memory and then save it to a table in SQL Server, merging with any existing rows. This comes in extremely handy when the events are not useful individually, but the story is told by the aggregation of all the events.

Another possible problem that can be solved with the same technique is finding unused objects in the database. It looks like a trivial problem, but it’s not.

Read on to see how you can solve both of these issues.
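
For context on the Extended Events side, the session feeding XESmartTarget can be as small as the following sketch, with no target defined, since XESmartTarget consumes the live event stream and handles the grouping and the merge into a table. The session, event, and action names here are illustrative and may differ from Gianluca’s recipe:

CREATE EVENT SESSION [capture_logins] ON SERVER
ADD EVENT sqlserver.login
(
    ACTION
    (
        sqlserver.server_principal_name,   -- which login connected
        sqlserver.client_app_name,         -- from which application
        sqlserver.client_hostname          -- and from which host
    )
)
WITH (STARTUP_STATE = ON);   -- no target needed: XESmartTarget reads the live event stream
GO

ALTER EVENT SESSION [capture_logins] ON SERVER STATE = START;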


Redis Streams for Apache Kafka Users

Paul Brebner gives us an overview of Redis Streams:

The Redis Streams data type is newer than the Redis Pub/Sub data type, and is designed to support “disconnected” distributed streaming applications. The data type itself is essentially an append-only data structure, stored in memory—basically preserved messages!

This differs from Redis Pub/Sub channels, which are focused only on the delivery of messages to currently connected subscribers—Pub/Sub uses a push-based delivery mechanism, and if there are no currently connected subscribers, then messages are simply discarded. And channels don’t remember messages to enable disconnected subscribers to catch up with missed messages, replay messages, or read different ranges of messages, etc. But channels are fast!

I’m still partial toward Kafka, but Redis Streams are pretty nice.


Quantifier Predicates in SQL

Joe Celko takes us through quantifier predicates:

SQL is based on set theory and logic. The only bad news is that many programmers have never had a class on either of those topics. They muddle through using the Boolean operators in their programming language and think that’s all there is to formal logic.

Let’s flash back to the early days of logic and play catch-up. We need to start with syllogisms. Syllogisms are logical forms made up of combinations of two statements about classes of things that lead to a conclusion. They were invented by the Greeks and written up by Aristotle in Prior Analytics. You might have run into them if you had a philosophy class that included lectures on formal fallacies. The three forms of statements allowed are:

Click through to receive a brief primer on formal logic and learn more about how SQL implements these concepts.
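
As a small taste, quantified comparison predicates in T-SQL look like the following (the tables here are invented for illustration):

-- ALL: products priced at or above every competitor price
SELECT p.ProductName
FROM dbo.Products AS p
WHERE p.UnitPrice >= ALL (SELECT c.UnitPrice FROM dbo.CompetitorPrices AS c);

-- ANY (SOME is a synonym): products cheaper than at least one competitor price
SELECT p.ProductName
FROM dbo.Products AS p
WHERE p.UnitPrice < ANY (SELECT c.UnitPrice FROM dbo.CompetitorPrices AS c);

-- Note: against an empty subquery, ALL evaluates to TRUE and ANY to FALSE.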


Data Processing in Data Explorer Pools

Tsuyoshi Matsuzaki shows us how Data Explorer pools work in Azure Synapse Analytics:

At Microsoft Ignite 2021, the new Data Explorer (DX) pool in Azure Synapse Analytics was released in preview. You might wonder which one to choose among the three different analytical pools – Spark pool, Dedicated SQL pool, and DX pool.

In this post, I’ll briefly summarize how data is processed in Data Explorer (Kusto) – both Azure Data Explorer (ADX) and the Azure Synapse Data Explorer (DX) pool. I hope this will give you a hint toward your optimal analytical platform.

Read on for this explanation.


Updating SQL Server Container Memory Limits

Andrew Pruski doesn’t have time to restart containers:

When running multiple SQL Server containers on a Docker host, we should always set CPU and memory limits for each container (see the flags for memory and cpus here). This helps prevent the whole noisy neighbour situation, where one container takes all the host’s resources and starves the other containers.

But what if we forget to set those limits? Well, no worries…we can update them on the fly!

Click through to see how you can change the memory limits on a running container.


AWS RDS for SQL Server Notes and Limitations

Tom Collins summarizes places where AWS Relational Database Services (RDS) for SQL Server differs from the box product:

Some AWS RDS SQL Server limitations:

– Some ports are reserved for Amazon RDS, and you can’t use them when you create a DB instance.

– Amazon RDS for SQL Server doesn’t support importing data into the msdb database.

– You can’t rename databases on a DB instance in a SQL Server Multi-AZ deployment.

– AWS RDS doesn’t support Data Quality Services or Master Data Services on the same RDS instance; you need to spin up an EC2 instance and run those services from another server.

Read on to see more limitations, notes on how security is different, and notes on feature support. Do keep in mind, though, that some of these may change over time—a few years back, the number of limitations was much greater.


Row-Level Security in Serverless SQL Pools

Jovan Popovic has a method of implementing poor man’s row-level security in Azure Synapse Analytics serverless SQL pools:

Serverless Synapse SQL pools enable you to read Parquet/CSV files or Cosmos DB collections and return their content as a set of rows. In some scenarios, you would need to ensure that a reader cannot access some rows in the underlying data source. This way, you are limiting the result set that will be returned to the users based on some security rules. In this scenario, called Row-level security, you would like to return a subset of data depending on the reader’s identity or role.

Row-level security is supported in dedicated SQL pools, but it is not supported in serverless pools (you can propose this feature on the Azure feedback site). In some cases, you can implement your own custom row-level security rules using standard T-SQL code.

Read on to see how.
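
As a rough sketch of the idea (the storage path and the login-to-region mapping below are invented for illustration and are not Jovan’s exact code), you can wrap the OPENROWSET call in a view that filters on the caller’s identity and then grant access to the view rather than to the underlying files:

CREATE VIEW dbo.SalesForCurrentUser
AS
SELECT s.*
FROM OPENROWSET(
         BULK 'https://<storageaccount>.dfs.core.windows.net/data/sales/*.parquet',
         FORMAT = 'PARQUET'
     ) AS s
JOIN (VALUES
         ('reader_emea@contoso.com', 'EMEA'),   -- hypothetical login-to-region mapping
         ('reader_apac@contoso.com', 'APAC')
     ) AS m (login_name, region)
    ON m.region = s.Region
WHERE m.login_name = SUSER_SNAME();   -- only rows mapped to the current login are returned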


Selecting Columns Based on Data Types in Power Query

Sandeep Pawar solves an interesting problem:

It’s not uncommon to have a table with hundreds of columns with different column types, such as numerical, text, date, percentage, etc. You can select columns by using the UI option “Choose Columns” and then selecting which columns to keep. But if you have tens of columns of the same type, you will have to manually go through the list of columns and select the columns to keep. As far as I know, there is no direct way to only keep, for example, numerical columns. In Pandas, you can use the select_dtypes method and pass which columns to choose based on the data types. No such option or function is available in Power Query.

In this example, I will show how I achieved it. You can customize it or turn it into a function based on your use case.

Read on for two classes of technique which do the trick.


Data Paging using Common Table Expressions

Steve Stedman takes us through one method of generating pages of data:

I can remember the first time that I worked on data paging code. I had to page through web site search results 20 at a time on a given web page. My task was to understand how it had been written and to do some bug fixing. After reviewing about 2,000 lines of code, and reviewing the seven different variables that were being used to know the current page, the next page, the previous page, the page size, the row at the top of the page, the row at the bottom of the page, and the number of pages, I finally understood what the code was intended to do. What a mess that was, but beyond the mess, the performance was horrible. The way the page worked was that based on the page you were on, all of the rows would be queried, then a loop would read through all of the rows before the current page, then loop through the rows on the current page displaying them on the page, and finally it would ignore the results after the current page. So page 1 was slow, page 2 was slower than page 1, page 3 was slower than page 2 and on and on.

Since that point I have implemented several different data paging algorithms myself, all better than the original implementation but none as elegant as the CTE way of doing data paging. I used to look at data paging as a painful task, but thanks to the SQL Server implementation of CTEs there is no more pain.

Steve also covers OFFSET and FETCH. This technique won’t be great with enormous data sets, but for moderate-sized data sets which query quickly, it works. This is one area which is quite painful. The best (and wackiest) solution I’ve come up with in SQL Server, when the initial query is quite expensive, is to create tables with random names to store results and row numbers: populate a table the first time a query is run, then query that table on subsequent runs, using the RETURN value in a stored procedure to pass along the name of the table to access. Granted, that solution works best with static data and you’d want to have a method to clean up those tables after they’re no longer in use (like storing a list of those tables and their last access dates and times). So it’s a mess.
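
For reference, the two simpler patterns Steve covers look roughly like this (the table and ordering columns are illustrative):

DECLARE @PageNumber int = 3, @PageSize int = 20;

-- CTE + ROW_NUMBER() paging
WITH NumberedRows AS
(
    SELECT o.OrderID, o.OrderDate,
           ROW_NUMBER() OVER (ORDER BY o.OrderDate, o.OrderID) AS rn
    FROM dbo.Orders AS o
)
SELECT OrderID, OrderDate
FROM NumberedRows
WHERE rn BETWEEN ((@PageNumber - 1) * @PageSize) + 1 AND @PageNumber * @PageSize
ORDER BY rn;

-- OFFSET / FETCH paging (SQL Server 2012 and later)
SELECT o.OrderID, o.OrderDate
FROM dbo.Orders AS o
ORDER BY o.OrderDate, o.OrderID
OFFSET (@PageNumber - 1) * @PageSize ROWS
FETCH NEXT @PageSize ROWS ONLY;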


Memory-Optimized Table Variables and tempdb Contention

Erik Darling notes that memory-optimized table variables can be useful in specific circumstances:

First, yes, they do help relieve tempdb contention if you have code that executes under both high concurrency and frequency. And by high, I mean REALLY HIGH.

Like, Snoop Dogg high.

Because you can’t get rid of in-memory stuff, I’m creating a separate database to test in.

Been there. When tempdb object creation causes massive contention, this certainly alleviates the stress.

As Erik notes, there are some tradeoffs to this, meaning that you have a real decision to make rather than simply using memory-optimized user-defined table types as a starting point.
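
For context, the setup looks something like the following sketch; it assumes the database already has a MEMORY_OPTIMIZED_DATA filegroup, and the type and column names are illustrative:

-- One-time setup: a memory-optimized table type
CREATE TYPE dbo.IdList AS TABLE
(
    Id int NOT NULL,
    INDEX ix_Id NONCLUSTERED HASH (Id) WITH (BUCKET_COUNT = 1024)
)
WITH (MEMORY_OPTIMIZED = ON);
GO

-- In the hot code path: the table variable lives in memory rather than in tempdb
DECLARE @Ids AS dbo.IdList;

INSERT INTO @Ids (Id) VALUES (1), (2), (3);

SELECT Id FROM @Ids;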
