Author: Kevin Feasel

Visiting Production

Randolph West discusses production access:

During a recent client meeting about a database migration, I realised that I have never logged into a SQL Server on their production environment. My involvement has been strictly dealing with setting up the new environment and log shipping the backups.

I get that I’m not a full-service DBA for this client, but it got me wondering about the many security discussions I’ve seen and participated in, in the past: that not even a junior DBA might need access to production database systems, if it’s not within the scope of his or her work.

Limiting production access is a smart move, but it’s important to realize the downstream consequences:  the people who still have access to production will (at least in the short term) have to perform a lot of the tasks that others were doing previously, including data fixes, research, etc.  It’s important to be prepared for that.


More Power BI Tips

Dustin Ryan has a few more Power BI tips for us:

But what if I want to show a descriptive field as a tooltip? As of now, only measures can be displayed in the tooltip. But have no fear! One way to work around this is to create a custom calculation to display my descriptive field.

If I want to show a description of the classes within the subjects seen in the chart above, I can create a calculation like so:

Classes = CALCULATE(CONCATENATEX(VALUES('Grade data'[Class Name]), 'Grade data'[Class Name], ", "))

So now I can show the classes within each subject as a tooltip.

I really like his synonyms example.  Read the whole thing.


Zeppelin Road Map

Vinay Shukla, et al., discuss recent and forthcoming improvements to Apache Zeppelin:

Over the last year, there have been several key improvements to Apache Zeppelin that have been contributed by a diverse group of developers. Some of the highlights are:

  • Security Features: Authentication, Access Control, LDAP Support
  • Sharing and Collaboration: Notebook import/export
  • Interpreters: Notable R interpreter, and others too numerous to list

The pluggable nature of the Apache Zeppelin interpreter architecture has made it easy to add support for new interpreters. Now there are over 30 interpreters supporting everything from Spark, Hive, MySQL, and R to things like Geode and HBase.

It’s an exciting time to be in the world of data analysis.


Jepsen: Crate

Kyle Kingsbury checks out Crate, a SQL database built on Elasticsearch:

Building a database on Elasticsearch is something of a double-edged sword. Crate has been able to focus on hard problems like query planning, joins, aggregations, and so on–without having to take on the tough work of building a storage layer, cluster membership, replication algorithm, etc. However, Crate is tightly coupled to Elasticsearch, and is dependent on the Elastic team for improvements to that technology. Elasticsearch’s consistency issues have been well-known for years, and the process to fix them is still ongoing. It’s not clear what Crate can do to get out of this situation: a rewrite would be complex and expensive (and introduce new and unknown failure modes), whereas fixing Elasticsearch’s consistency problems could easily consume person-years of engineering time that a small company can ill-afford.

There are good reasons to use Crate: distributed SQL stores, especially with Crate’s capacity for aggregations and joins, are hard to come by. Moreover, Crate introduces several helpful features not present in Elasticsearch. That said, the risk of data loss is real, and is unlikely to be resolved at any point in the near future. I recommend that Crate users avoid using Crate as their system of record–at least, where each record matters. Like Elasticsearch itself, you should use a safer database as your primary store, and continuously backfill data from that primary store into Crate for querying. Crate may also be suitable for cases where occasional data loss or corruption is mostly harmless, e.g. high-volume sensor data, observability, analytics, etc.

Every time the Jepsen series gets updated, I make time to read.


Moving SSRS Reports

Andy Mallon fills a need in the community:

We’re just interacting with the SSRS Web Service. I’m using PowerShell, but you could also write a little .NET app to do something similar.

I’m not using any magical SSRS-specific cmdlets. I’m using PowerShell to interact with a web service…that web service just happens to be SSRS.

If you’re still trying to get along with RSScripter (if you can even find a copy any longer), Andy’s code might help you out.


Hard Problems In Stream Processing

Kartik Paramasivam discusses tough issues within the Lambda architecture:

During a data center failover like the example above, we could have a “late arrival,” i.e. the stream processor might see the AdClickEvent possibly a few minutes after the AdViewEvent. A poorly written stream processor might deduce that the ad was a low-quality ad when instead the ad might have actually been good. Another anomaly is that the stream processor might see the AdClickEvent before it sees the corresponding AdViewEvent. To ensure that the output of the stream processor is correct, there has to be logic to handle this “out of order message arrival.”

In the example above, the geo-distributed nature of the data centers makes it easy to explain the delays. However, delays can exist even within the same data center due to GC issues, Kafka cluster upgrades, partition rebalances, and other naturally occurring distributed system phenomena.

This is a pretty long article and absolutely worth a read if you are looking at streaming data.


Overlapping Ranges Using U-SQL

Michael Rys explains how to merge overlapping ranges of data using U-SQL:

If you look at the problem, you will at first notice that you want to define something like a user-defined aggregation to combine the overlapping time intervals. However, if you look at the input data, you will notice that since the data is not ordered, you will either have to maintain the state for all possible intervals and then merge disjoint intervals as bridging intervals appear, or you need to preorder the intervals for each user name to make the merging of the intervals easier.

The ordered aggregation is simpler to scale out, but U-SQL does not provide ordered user-defined aggregators (UDAGGs) yet. In addition, UDAGGs normally produce one row per group, while in this case, I may have multiple rows per group if the ranges are disjoint.

Luckily, U-SQL provides a scalable user-defined operator called a reducer which gives us the ability to aggregate a set of rows based on a grouping key set using custom code.

There are some good insights here, so read the whole thing.
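
If you want to play with the interval-merging logic in T-SQL before diving into U-SQL, a rough equivalent using window functions might look like the following.  Treat it as a sketch only:  Michael's post uses a custom U-SQL reducer, and the dbo.UserIntervals table and its column names below are invented for illustration.

WITH OrderedIntervals AS
(
	SELECT
		ui.UserName,
		ui.StartTime,
		ui.EndTime,
		-- Highest end time among all earlier intervals for this user.
		MAX(ui.EndTime) OVER
		(
			PARTITION BY ui.UserName
			ORDER BY ui.StartTime, ui.EndTime
			ROWS BETWEEN UNBOUNDED PRECEDING AND 1 PRECEDING
		) AS PriorMaxEnd
	FROM dbo.UserIntervals ui
),
Islands AS
(
	SELECT
		oi.UserName,
		oi.StartTime,
		oi.EndTime,
		-- Start a new island whenever this interval begins after everything
		-- seen so far has already ended.
		SUM(CASE WHEN oi.StartTime > oi.PriorMaxEnd THEN 1 ELSE 0 END) OVER
		(
			PARTITION BY oi.UserName
			ORDER BY oi.StartTime, oi.EndTime
			ROWS UNBOUNDED PRECEDING
		) AS IslandId
	FROM OrderedIntervals oi
)
SELECT
	i.UserName,
	MIN(i.StartTime) AS StartTime,
	MAX(i.EndTime) AS EndTime
FROM Islands i
GROUP BY
	i.UserName,
	i.IslandId;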


Polybase DMVs

I look at the DMVs associated with Polybase and external table creation:

Let’s walk through this one step at a time and understand what the DMV is telling us.  Unfortunately, the DMV documentation is a little sparse, so some of this is guesswork on my part.

  1. A RandomIDOperation appears to create a temporary table.  In this case, the table (whose name is randomly generated) is named TEMP_ID_53.  I’m not sure where that name comes from; the session I ran this from was 54, so it wasn’t a session ID.

  2. After the table gets created, each Compute node gets told to create a table called TMP_ID_53 in tempdb whose structure matches our external table’s structure.  One thing you can’t see from the screenshot is that this table is created with DATA_COMPRESSION = PAGE.  I have to wonder if that’d be the same if my Compute node were on Standard edition.

  3. We add an extended property on the table, flagging it as IS_EXTERNAL_STREAMING_TABLE.

  4. We then update the statistics on that temp table based on expected values.  629 rows are expected here.

  5. Then, we create the dest stat, meaning that the temp table now has exactly the same statistics as our external table.

  6. The next step is that the Head node begins a MultiStreamOperation, which tells the Compute nodes to begin working.  This operator does not show up in the documentation, but we can see that the elapsed time is 58.8 seconds, which is just about as long as my query took.  My guess is that this is where the Head node passes code to the Compute nodes and tells them what to do.

  7. We have a HadoopRoundRobinOperation on DMS, which stands for “Data Movement Step” according to the location_type documentation.  What’s interesting is that according to the DMV, that operation is still going.  Even after I checked it 40 minutes later, it still claimed to be running.  If you check the full query, it’s basically a SELECT * from our external table.

  8. Next is a StreamingReturnOperation, which includes our predicate WHERE dest = 'ORD' in it.  This is a Data Movement Step and includes all of the Compute nodes (all one of them, that is) sending data back to the Head node so that I can see the results.

  9. Finally, we drop TEMP_ID_53 because we’re done with the table.
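
If you want to pull the same step-by-step breakdown on your own instance, a query along these lines against sys.dm_exec_distributed_requests and sys.dm_exec_distributed_request_steps will do the trick.  Consider it a sketch:  it simply grabs the most recently started distributed request, so you may want to filter on a specific execution_id instead.

SELECT
	drs.execution_id,
	drs.step_index,
	drs.operation_type,
	drs.location_type,
	drs.status,
	drs.total_elapsed_time,
	drs.row_count,
	drs.command
FROM sys.dm_exec_distributed_request_steps drs
WHERE drs.execution_id =
(
	-- Most recently started distributed (Polybase) request.
	SELECT TOP(1) dr.execution_id
	FROM sys.dm_exec_distributed_requests dr
	ORDER BY dr.start_time DESC
)
ORDER BY drs.step_index;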

This post was about 70% legwork and 30% guesswork.  That’s a bit higher a percentage than I’d ideally like, but there isn’t that much information readily available yet, so I’m trying (in my own small way) to fix that.


You Should Use Biml

Meagan Longoria explains why you should use Biml if you’re building Integration Services packages:

Biml provides a way to automate SSIS design patterns. This reduces the time required to complete a data integration project, and it helps employ consistent design patterns within and across projects. Re-generating multiple packages after making a change to a design pattern takes just a few minutes, so small changes to several similar packages are no longer a significant effort.

Automating SSIS design patterns allows teams to work more efficiently. Senior developers can stop solving the same problems over and over again. Instead, they can solve them once, automate the solution, and move on to new and interesting challenges. Junior developers still learn good development practices with Biml, but they require less supervision to create quality output in a shorter amount of time. SSIS developers that prefer typing code over the drag-and-drop interface of SQL Server Data Tools now get a better way to work in addition to the automation capabilities.

If there’s one piece of advice I can give ETL developers, it’s “learn Biml.”


Stored Procedure Last Run Times

Richie Lee has a script to see when stored procedures were last executed:

Quick script to get the last time a stored procedure was executed in the database. The reason for the seemingly over-engineered script is that different query plans can be generated, meaning that stored procedures can appear more than once in the list.

The query doesn’t quite work as-is, but making qs.execution_count into an aggregation and removing it from the GROUP BY would work.  I’d probably rewrite it to look a bit more like:

WITH querystats AS
(
	SELECT
		OBJECT_NAME(qt.objectid) AS ProcedureName,
		SUM(qs.execution_count) OVER (PARTITION BY OBJECT_NAME(qt.objectid)) AS ExecutionCount,
		qs.creation_time AS CreationTime,
		ROW_NUMBER() OVER (PARTITION BY OBJECT_NAME(qt.objectid) ORDER BY creation_time DESC) AS rownum
	FROM sys.dm_exec_query_stats AS qs
	CROSS APPLY sys.dm_exec_sql_text(qs.[sql_handle]) AS qt
	WHERE
		qt.[dbid] = DB_ID()
		AND OBJECT_NAME(qt.objectid) LIKE '%%'
)
SELECT
	qs.ProcedureName,
	qs.ExecutionCount,
	qs.CreationTime
FROM querystats qs
WHERE
	qs.rownum = 1;