Press "Enter" to skip to content

Curated SQL Posts

Tidygraph

Thomas Lin Pedersen announces tidygraph, a tidyverse library for dealing with graphs and trees in R:

One of the simplest concepts when computing graph-based values is that of centrality, i.e. how central is a node or edge in the graph. As this definition is inherently vague, a lot of different centrality scores exist that all treat the concept of centrality a bit differently. One of the famous ones is the PageRank algorithm that powered Google Search in the beginning. tidygraph currently has 11 different centrality measures, and all of these are prefixed with centrality_* for easy discoverability. All of them return a numeric vector matching the nodes (or edges in the case of centrality_edge_betweenness()).

This is a big project and is definitely worth a look if you’re analyzing graph data.


R’s iGraph + SQL Server Graphs

Dennes Torres has a post which shows how to use R’s iGraph library to visualize graphs created in SQL Server 2017:

The possibility to use both technologies together is very interesting. Using graph objects we can store relationships between elements, for example, relationships between forum members. Using R scripts we can build a cluster graph from the stored graph information, illustrating the relationships in the graph.

The script below creates a database for our example with a subset of the objects used in my article and a few more relationship records between the forum members.

Click through for the script.
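To give a rough feel for the SQL Server side, here is a minimal sketch of the kind of graph objects involved; the table and column names are my own illustration, not the ones from Dennes’s script:

    -- A node table for forum members and an edge table for their relationships
    CREATE TABLE dbo.ForumMember
    (
        MemberID   INT PRIMARY KEY,
        MemberName NVARCHAR(100) NOT NULL
    ) AS NODE;

    CREATE TABLE dbo.Follows AS EDGE;

    -- Who follows whom, via the SQL Server 2017 MATCH clause
    SELECT m1.MemberName AS Follower, m2.MemberName AS Followed
    FROM dbo.ForumMember AS m1, dbo.Follows AS f, dbo.ForumMember AS m2
    WHERE MATCH(m1-(f)->m2);

From R, you would pull that edge list over your connection of choice and hand it to igraph to lay out and plot the clusters.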

Comments closed

Dynamic Unpivoting For Change Detection

Shane O’Neill has a script that dynamically unpivots a pair of rows and compares values column by column, storing the changes in XML:

Overall, the script is longer at nearly double the lines, but where it shines is when adding new columns.
To include new columns, just add them to the table; to exclude them, just add in a filter clause.

So, potentially, if every column in this table is to be tracked and we add columns all the way up to 1,024 columns, this code will not increase.
Old way: at least 6,144 lines.
New way: at least 2,048 lines.
Dynamic: no change.

Read on for that script. Even though the developer he wrote it for ended up not using the solution, Shane has made it available to the rest of the world so that someday, someone else can have the maintenance nightmare of trying to root out a bug in the process.
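As a rough static illustration of the pattern (Shane’s version generates the column list dynamically from the system catalog instead of hard-coding it, and the table and column names below are hypothetical), the column-by-column comparison looks something like this:

    -- dbo.Person holds the current rows, dbo.Person_Staging the incoming ones.
    -- Each VALUES row is one tracked column; a dynamic version would build
    -- these rows from sys.columns rather than hard-coding them.
    SELECT  p.PersonID,
            ca.ColumnName,
            ca.OldValue,
            ca.NewValue
    FROM    dbo.Person AS p
    JOIN    dbo.Person_Staging AS s
        ON  s.PersonID = p.PersonID
    CROSS APPLY (VALUES
            ('FirstName', CONVERT(NVARCHAR(MAX), p.FirstName), CONVERT(NVARCHAR(MAX), s.FirstName)),
            ('LastName',  CONVERT(NVARCHAR(MAX), p.LastName),  CONVERT(NVARCHAR(MAX), s.LastName))
        ) AS ca (ColumnName, OldValue, NewValue)
    WHERE   EXISTS (SELECT ca.OldValue EXCEPT SELECT ca.NewValue)  -- NULL-safe "value changed" test
    FOR XML PATH('Change'), ROOT('Changes');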


System Health XE

Kenneth Fisher describes what is in the system_health Extended Events session:

Per BOL you get the following information:

  • Errors with a severity of >= 20.

  • Memory-related errors (Errors 17803, 701, 802, 8645, 8651, 8657 and 8902).

  • Non-yielding scheduler problems (Error 17883).

  • Deadlocks.

  • Sessions that have waited on locks for > 30 seconds.

  • Sessions waiting for a long time on preemptive waits (waits on external API calls).

Read on to learn more about what this session contains, as well as a couple of ways you can access the data.
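One of those access paths is the session’s ring_buffer target, which you can pull as XML straight from the DMVs (the session also writes to .xel files, readable with sys.fn_xe_file_target_read_file):

    -- Grab the system_health ring buffer contents as XML
    SELECT CAST(t.target_data AS XML) AS SessionData
    FROM sys.dm_xe_session_targets AS t
    JOIN sys.dm_xe_sessions AS s
        ON s.address = t.event_session_address
    WHERE s.name = N'system_health'
      AND t.target_name = N'ring_buffer';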


When You Need To Read Memory-Optimized Data From Disk

Ned Otter enumerates the scenarios in which SQL Server needs to read data from disk for memory-optimized tables:

Those who have studied In-Memory OLTP are aware that in the event of “database restart”, durable memory-optimized data must be streamed from disk to memory. But that’s not the only time data must be streamed, and the complete set of events that cause this is not intuitive. To be clear, if your database had to stream data back to memory, that means all your memory-optimized data was cleared from memory. The amount of time it takes to do this depends on:

  • the amount of data that must be streamed

  • the number of indexes that must be rebuilt (SQL 2017 has a much faster index rebuild process, see below)

  • the number of containers in the memory-optimized database, and how many volumes they’re spread across

  • the number of LOB columns

  • whether the BUCKET count is properly configured for HASH indexes

Read on for the list of scenarios that might cause a standalone SQL Server instance to need to stream data from disk into memory to re-hydrate memory-optimized tables and indexes.
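On the bucket count point: BUCKET_COUNT is fixed when you create the table, so a sketch like the following (my own illustration, not Ned’s) is where that configuration happens:

    -- Illustrative only: a durable memory-optimized table whose hash index
    -- BUCKET_COUNT is sized to the expected number of unique key values
    CREATE TABLE dbo.ShoppingCart
    (
        CartID  INT NOT NULL
            PRIMARY KEY NONCLUSTERED HASH WITH (BUCKET_COUNT = 1000000),
        Payload NVARCHAR(4000) NULL
    )
    WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_AND_DATA);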


TempDB System Table Contention

Alexander Arvidsson diagnoses an interesting problem:

I ran this several times to see if there was a pattern to the madness, and it turned out there was. All waits were concentrated in database ID 2 – TEMPDB. Many people perk up by now and jump to the conclusion that this is your garden-variety SGAM/PFS contention – easily remedied with more TEMPDB files and a trace flag. But, alas, this was further inside TEMPDB. The output from the query above gave me the exact page number, and plugging that into DBCC PAGE gives the metadata object ID.

His conclusion is to reduce temp table usage and/or use memory-optimized tables. We solved this problem by replacing temp tables with memory-optimized table-valued parameters in our most frequently used procedures.
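If you want to follow the same page-to-object diagnosis, the final step looks something like this (the page numbers are placeholders; on SQL Server 2019+, the documented sys.dm_db_page_info replaces the undocumented DBCC PAGE):

    -- Suppose the waits point at tempdb page 2:1:128 (placeholder values)
    SELECT object_id, page_type_desc
    FROM sys.dm_db_page_info(2, 1, 128, 'DETAILED');  -- SQL Server 2019+

    -- Earlier versions: DBCC PAGE, then read the Metadata: ObjectId field
    DBCC TRACEON(3604);
    DBCC PAGE(2, 1, 128, 0);

OBJECT_NAME(object_id, 2) then tells you which system table the contention sits on.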


Avoid Ticks

Michael J. Swart shows you how to convert DATETIME2 values to Ticks:

A .NET tick is a duration of time lasting 0.1 microseconds. When you look at the Ticks property of DateTime, you’ll see that it represents the number of ticks since January 1st, 0001.
But why 0.1 microseconds? According to Stack Overflow user CodesInChaos, “ticks are simply the smallest power-of-ten that doesn’t cause an Int64 to overflow when representing the year 9999”.

Even though it’s an interesting idea, just use one of the datetime data types; that’s what they’re there for. I avoid ticks whenever I can.

I agree with Michael: avoid using Ticks if you can.
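If you ever do have to produce ticks from SQL Server, the arithmetic is straightforward: days since 0001-01-01 times 864 billion ticks per day, plus the ticks within the current day. A minimal sketch (my own, not Michael’s exact code; DATEDIFF_BIG requires SQL Server 2016 or later):

    DECLARE @d     DATETIME2(7) = SYSUTCDATETIME();
    DECLARE @epoch DATETIME2(7) = '00010101';

    -- Whole days at 864,000,000,000 ticks per day, plus sub-day ticks
    -- (one tick = 100 nanoseconds, hence the division by 100)
    SELECT CONVERT(BIGINT, DATEDIFF(DAY, @epoch, @d)) * 864000000000
         + DATEDIFF_BIG(NANOSECOND, CONVERT(DATETIME2(7), CONVERT(DATE, @d)), @d) / 100 AS Ticks;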


DAX’s ALL() Function

Matt Allington explains what the ALL() function is in DAX and when you might want to use it:

The ALL() function seems very simple on the surface; however, it has layers of complexity. In its simplest usage, it is a function that simply returns a table (virtual or materialised). The syntax for ALL() is as follows:

=ALL(TableOrColumn,[Column2],[ColumnN]..)

ALL() will always return a table, not a value. Because it is a table, you cannot put the result directly into a cell in a Pivot Table or a Matrix. Think about it: you can’t put a table with (potentially) multiple columns and (potentially) multiple rows into a single cell in a visual – it won’t “fit”.

There’s a lot to ALL() and Matt does a great job explaining it.


Don’t Fear The Tidyverse

David Robinson explains why he prefers to teach the tidyverse flavor of R first rather than base R:

I’d summarize the two “competing” curricula as follows:

  • Base R first: teach syntax such as $ and [[]], loops and conditionals, data types (numeric, character, data frame, matrix), and built-in functions like ave and tapply. Possibly follow up by introducing dplyr or data.table as alternatives.
  • Tidyverse first: Start from scratch with the dplyr package for manipulating a data frame, and introduce others like ggplot2, tidyr and purrr shortly afterwards. Introduce the %>% operator from magrittr immediately, but skip syntax like [[]] and $ or leave them for late in the course. Keep a single-minded focus on data frames.

I’ve come to strongly prefer the “tidyverse first” educational approach. This isn’t a trivial decision, and this post is my attempt to summarize my opinions and arguments for this position. Overall, they mirror my opinions about ggplot2: packages like dplyr and tidyr are not “advanced”; they’re suitable as a first introduction to R.

I think this is the better position of the two, particularly for people who already have some experience with languages like SQL.
