Author: Kevin Feasel

Using Show-Command with dbatools

Garry Bargsley shows how you can “build” a GUI for any PowerShell cmdlet, using one from dbatools as an example:

The Show-Command cmdlet will display in a graphical interface all the parameter options related to a function, cmdlet or workflow.

I always recommend the book “Learn Windows PowerShell in a Month of Lunches” for people that ask me how to get started with PowerShell.  It is a very thorough book and I believe helps build the foundation you need to use PowerShell to the fullest.  Earlier this month I spent time going through the book for maybe the third time.  I must have blown right by the Show-Command cmdlet which shows up in Chapter 4, as I do not remember ever seeing it before.  Then a couple of weeks ago Kenneth Fisher ( b | t ) posted a blog about this command.  I like how Kenneth related the Show-Command in PowerShell to generating scripts with SSMS.  This is a great comparison for the DBA to understand.  After reading his blog post I thought, wow, this might be a handy tool to combine with my favorite module out there, dbatools.

Click through for a demonstration.

Testing Memory-Optimized tempdb

Erin Stellato wants to see how big a benefit using In-Memory OLTP for tempdb metadata objects really is:

I have SQL Server 2019 CTP 3.2 installed on my virtual machine, which has 8GB of memory (max server memory set to 6 GB) and 4 vCPUs. I created four (4) tempdb data files, each sized to 1GB.

I restored a copy of WideWorldImporters and then created three stored procedures (definitions below). Each stored procedure accepts a date input, and pushes all rows from Sales.Orders and Sales.OrderLines for that date into the temporary object. In Sales.usp_OrderInfoTV the object is a table variable, in Sales.usp_OrderInfoTT the object is a temporary table defined via SELECT … INTO with a nonclustered index added afterwards, and in Sales.usp_OrderInfoTTALT the object is a pre-defined temporary table which is then altered to have an additional column. After the data is added to the temporary object, there is a SELECT statement against the object that joins to the Sales.Customers table.

This isn’t a be-all, end-all performance test, but it does give us an idea of what can work better with memory-optimized tempdb metadata.
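
For reference, the feature under test is enabled at the instance level. A minimal sketch, assuming SQL Server 2019 (the setting only takes effect after an instance restart):

-- Enable memory-optimized tempdb metadata (SQL Server 2019; requires a restart)
ALTER SERVER CONFIGURATION SET MEMORY_OPTIMIZED TEMPDB_METADATA = ON;

-- After the restart, confirm the setting is active
SELECT SERVERPROPERTY('IsTempdbMetadataMemoryOptimized') AS IsTempdbMetadataMemoryOptimized;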

Against Feature Restrictions

Solomon Rutzky is not a fan of Feature Restrictions in SQL Server 2019 CTP 3.2:

However, nobody’s perfect and occasionally the SQL Server team makes a mistake. A new feature introduced in SQL Server 2019 CTP 3.2, “Feature Restrictions”, is an unfortunate example of such a mistake. It’s a tragically misguided attempt at improving security that not only increases the chances of SQL Injection, but also prevents useful changes from being made. “Misguided” because it doesn’t even accomplish its stated goal, and “tragic” because it a) most likely increases the chances of SQL Injection, and b) used up the time that could have been spent on implementing useful changes.

Read on to see why. Some of it is the travails of blacklisting (for example, the product blacklists WAITFOR but there’s more than one way to force a wait), some is offsetting behavior (as you introduce risk-reducing products, people compensate by taking additional risks up to the point of the prior equilibrium), and some of it is additional information leakage.
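
As a quick illustration of the blacklisting point (my example, not Solomon's): blocking WAITFOR does not stop an attacker from forcing a measurable delay, because a busy-wait loop, among other tricks, produces the same effect:

-- The restricted command...
WAITFOR DELAY '00:00:05';

-- ...and one of many substitutes: a loop that burns time until a deadline passes
DECLARE @until datetime2(3) = DATEADD(SECOND, 5, SYSDATETIME());
WHILE SYSDATETIME() < @until
    SET @until = @until;   -- no-op; the loop simply spins for roughly five seconds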

Calculating Memory Consumption by Object

Max Vernon has a script to help you figure out which objects are consuming the most buffer pool space:

SQL Server caches object data in memory in the buffer pool. Understanding memory consumption by object can be crucial for performance. For instance, you may have a large logging table consuming 90% of the buffer pool. Moving older rows out of the logging table might allow you to reduce memory consumption if you really only care about the last 2 weeks of log records. The script below takes a snapshot of sys.dm_os_buffer_descriptors, then links it to the objects in each database on the instance. The output shows which objects are in memory, along with how much memory is being consumed by each object.

Click through for the script.
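
Max's script walks every database on the instance; as a rough idea of the underlying approach, here is a simplified sketch scoped to the current database only (it maps only in-row and row-overflow allocation units, so LOB pages are left out):

-- Approximate buffer pool usage by object in the current database
SELECT o.name AS object_name,
       i.name AS index_name,
       COUNT(*) * 8 / 1024.0 AS buffer_mb
FROM sys.dm_os_buffer_descriptors AS bd
    INNER JOIN sys.allocation_units AS au
        ON au.allocation_unit_id = bd.allocation_unit_id
    INNER JOIN sys.partitions AS p
        ON p.hobt_id = au.container_id
       AND au.type IN (1, 3)          -- in-row and row-overflow data only
    INNER JOIN sys.objects AS o
        ON o.object_id = p.object_id
    INNER JOIN sys.indexes AS i
        ON i.object_id = p.object_id
       AND i.index_id = p.index_id
WHERE bd.database_id = DB_ID()
  AND o.is_ms_shipped = 0
GROUP BY o.name, i.name
ORDER BY buffer_mb DESC;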

Measuring Closeness Centrality in Graphs

Niko Neugebauer explains the concept of Closeness Centrality:

Also known as the real center of the network, or The King of the Network, Closeness Centrality is a measure which represents the relative location of a vertex to the center of the network, or, better to say, its average distance to all other vertices within that network.

A vertex with high closeness centrality can spread information effectively within the given network, because relatively few edges need to be crossed to reach any other connected vertex.

Read on to see why this is useful and how you can calculate it in SQL Server 2019.
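
As a hedged sketch of the shape of such a query (the Person node and friendOf edge tables are hypothetical stand-ins, not Niko's schema), closeness centrality can be derived from shortest-path hop counts in SQL Server 2019 graph tables:

-- Inner query: one row per (source, reachable target) pair with the shortest-path hop count
-- Outer query: closeness = 1 / average shortest distance to everything reachable
SELECT sp.person_name,
       COUNT(*) * 1.0 / SUM(sp.hops) AS closeness_centrality
FROM (
    SELECT Person1.name AS person_name,
           COUNT(Person2.name) WITHIN GROUP (GRAPH PATH) AS hops
    FROM Person AS Person1,
         friendOf FOR PATH AS fo,
         Person FOR PATH AS Person2
    WHERE MATCH(SHORTEST_PATH(Person1(-(fo)->Person2)+))
) AS sp
GROUP BY sp.person_name
ORDER BY closeness_centrality DESC;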

Replicating ACID Tables in Hive

Ashutosh Bapat shows off some of the improvements in Apache Hive replication:

Transactional tables in Hive support ACID properties. Unlike non-transactional tables, data read from transactional tables is transactionally consistent, irrespective of the state of the database. Of course, this imposes specific demands on replication of such tables, which is why Hive replication was designed with the following assumptions:

1. A replicated database may contain more than one transactional table with cross-table integrity constraints.
2. A target may host multiple databases, some replicated and some native to the target. The databases replicated from the same source may have transactional tables with cross-database integrity constraints.
3. A user should be able to run read-only workloads on the target and should be able to read transactionally consistent data.
4. Since in Hive a read-only transaction requires a new transaction-id, the transaction-ids on the source and the target of replication may differ. Thus, transaction-ids cannot be used for reading transactionally consistent data across the source and the replicated target.

Read on to learn why these assumptions are in place and what they mean for replication.
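
For a flavour of the mechanism, Hive replication is driven by dump commands on the source and load commands on the target; a rough HiveQL sketch (the database names and dump path here are made up for illustration):

-- On the source cluster: produce a dump of the replicated database
-- (the first run is a bootstrap dump; later runs are incremental)
REPL DUMP sales_db;

-- On the target cluster: apply the dump written by the source
-- (the directory comes from the output of REPL DUMP)
REPL LOAD sales_db_replica FROM '/user/hive/repl/sales_db/dump_1';

-- Check the last event the target has replicated up to
REPL STATUS sales_db_replica;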

Naive Bayes Predictions with Analysis Services

Dinesh Asanka shows how you can use the Naive Bayes algorithm in an Analysis Services data mining project:

Microsoft Naive Bayes is a supervised classification algorithm. The data set can be bi-class, which means it has only two classes: whether the patient is suffering from dengue or not, or whether your customers are bike buyers or not, are examples of a bi-class data set. There can be multi-class data sets as well.

Let us take the example which we discussed in the previous article, the AdventureWorks bike buyer example. In this example, we will use the vTargetMail view in the AdventureWorksDW database.

In the data mining algorithm wizard, the Microsoft Naive Bayes algorithm should be selected, as shown in the image below.

Of mild interest is that it’s a two-class classifier here, but it’s a multi-class classifier in the (much) later ML.NET.

Distributed Transactions on Linux

Tejas Shah and crew announce distributed transactions with SQL Server on Linux:

With SQL Server 2017, a new era was heralded with SQL Server being available to deploy on Linux (and Linux-based container) systems. While all functionality of the SQL Server engine was brought over as-is to SQL Server on Linux, some of the functionality which depended on Windows system processes, such as distributed transactions (which rely on the MSDTC service), was not brought over immediately.

Well, now your wait is over.
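
Once MSDTC support is configured on the Linux instance, the T-SQL side looks just as it does on Windows. A minimal sketch, assuming a linked server named RemoteServer and hypothetical Orders tables:

-- Roll back everything, local and remote, if any statement fails
SET XACT_ABORT ON;

BEGIN DISTRIBUTED TRANSACTION;

    -- Local write
    INSERT INTO dbo.Orders (OrderID, Amount)
    VALUES (42, 100.00);

    -- Remote write through the linked server; both commit or neither does
    INSERT INTO [RemoteServer].SalesDb.dbo.Orders (OrderID, Amount)
    VALUES (42, 100.00);

COMMIT TRANSACTION;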

Hearing Differences in Power BI

David Eldersveld shows us one technique for adding a sound component to your Power BI dashboard:

Data sonification uses variations in audio to hear differences in data values. From an accessibility standpoint, data sonification offers another potential avenue to enrich your reports beyond methods pertaining to data visualization.

There are a few pieces that need to be assembled to enable data sonification that works in both Power BI Desktop and Service. While audio has been used in the Power BI Service for a while, Power BI Desktop has been silent until now. This post attempts to show how to produce audio tones in Power BI for greater accessibility. It also demonstrates how to blend data with a standard range of audio pitches.

It’s an interesting idea.

Finding Memory-Rich Queries

Matthew McGiffen wants to find the queries which demand the largest memory grants:

I had a server that looked like it had been suffering from memory contention. I wanted to see what queries were being run that had high memory requirements. The problem was that it wasn’t happening right now – I needed to be able to see what had happened over the last 24 hours.

Enter Query Store. The run-time stats captured by Query Store include details relating to memory.

Click through for a script which retrieves this data over a time frame.
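
As a rough idea of what such a script can look like (this is a sketch, not Matthew's version), the query below pulls the top memory-consuming queries from Query Store over the last 24 hours; the memory columns are reported in 8 KB pages, hence the conversion to MB:

SELECT TOP (20)
       qt.query_sql_text,
       rs.avg_query_max_used_memory * 8 / 1024.0 AS avg_max_memory_mb,
       rs.max_query_max_used_memory * 8 / 1024.0 AS max_max_memory_mb,
       rs.count_executions,
       rsi.start_time
FROM sys.query_store_runtime_stats AS rs
    INNER JOIN sys.query_store_runtime_stats_interval AS rsi
        ON rsi.runtime_stats_interval_id = rs.runtime_stats_interval_id
    INNER JOIN sys.query_store_plan AS p
        ON p.plan_id = rs.plan_id
    INNER JOIN sys.query_store_query AS q
        ON q.query_id = p.query_id
    INNER JOIN sys.query_store_query_text AS qt
        ON qt.query_text_id = q.query_text_id
WHERE rsi.start_time >= DATEADD(HOUR, -24, SYSDATETIMEOFFSET())
ORDER BY rs.avg_query_max_used_memory DESC;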
