
Month: July 2017

Certificate Copying

Brian Carrig shows how to create certificates from binary:

Sometimes it is necessary to copy a certificate from one database to another database. The most common method I have seen to do this involves taking a backup of the certificate to disk from one database and then restoring the certificate to the other database.

There is, however, a lesser-known alternative available, provided you are working with SQL Server 2012 and above. Sadly, despite it being 2017, this is not as foregone a conclusion for SQL Server DBAs as it should be. This alternative is known as CREATE CERTIFICATE FROM BINARY. There are a few caveats with this option. Chief among them is that you cannot use a variable for the binary value, so you will likely end up needing to use some dynamic SQL.

One of the nice aspects to this feature from an administration and a security perspective is that you do not need to worry about accidentally leaving a copy of your certificate on a disk somewhere or having to remember to delete it after you have imported it into your user database.
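
Before you click through, here is a rough sketch of the pattern (the certificate name, target database, and password below are placeholders; Brian's post has the complete, tested version):

    -- Run in the source database: pull the certificate's public portion and
    -- its private key (encrypted by a password you choose)
    DECLARE @PublicKey  varbinary(max) = CERTENCODED(CERT_ID('MyCert'));
    DECLARE @PrivateKey varbinary(max) =
        CERTPRIVATEKEY(CERT_ID('MyCert'), N'SomeStrongPassword!');

    -- CREATE CERTIFICATE ... FROM BINARY will not accept variables, so the
    -- statement has to be built and run dynamically in the target database
    DECLARE @sql nvarchar(max) = N'
    USE TargetDB;
    CREATE CERTIFICATE MyCert
    FROM BINARY = ' + CONVERT(nvarchar(max), @PublicKey, 1) + N'
    WITH PRIVATE KEY (
        BINARY = ' + CONVERT(nvarchar(max), @PrivateKey, 1) + N',
        DECRYPTION BY PASSWORD = N''SomeStrongPassword!''
    );';

    EXEC (@sql);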

Read on to see it in action.  Also, it’s about time that Brian started blogging.


Understanding The Latest Intel Xeon Processor Family

Glenn Berry explains what’s going on with Intel Xeon scalable processors:

The Skylake-SP has a different cache architecture that changes from a shared-distributed model used in Broadwell-EP/EX to a private-local model used in Skylake-SP. How this change will affect SQL Server workloads remains to be seen.

In Broadwell-EP/EX, each physical core had a 256KB private L2 cache, while all of the cores shared a larger L3 cache that could be as large as 60MB (typically 2.5MB/core). All of the lines in the L2 cache for each core were also present in the inclusive, shared L3 cache.

In Skylake-SP, each physical core has a 1MB private L2 cache, while all of the cores share a larger L3 cache that can be as large as 38.5MB (typically 1.375MB/core). All of the lines in the L2 cache for each core may not be present in the non-inclusive, shared L3 cache.

A larger L2 cache increases the hit ratio from the L2 cache, resulting in lower effective memory latency and lowered demand on the L3 cache and the mesh interconnect. L2 cache is typically about 4X faster than L3 cache in Skylake-SP. Figure 2 details the new cache architecture changes in Skylake-SP.

Glenn explains what the performance ramifications of these changes are, and also gives a consumer caveat regarding a major price difference based on memory capacity per socket.


Dimensional Modeling

Jen Underwood explains the basics of dimensional modeling:

A dimensional model is also commonly called a star schema. It provides a way to improve report query performance without affecting data integrity. This type of model is popular in data warehousing because it can provide better query performance than transactional, normalized, OLTP data models. It also allows for data history to be stored accurately over time for reporting. Another reason why dimensional models are created is that they are easier for non-technical users to navigate. Creating reports by joining many OLTP database tables together becomes overwhelming quickly.

Dimensional models contain facts surrounded by descriptive data called dimensions. Facts contain numerical values of what you measure, such as sales or user counts, that are additive or semi-additive in nature. Fact tables also contain the keys/links to associated dimension tables. Compared to most dimension tables, fact tables typically have a large number of rows.
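
To make the fact/dimension split concrete, here is a minimal star schema sketch; the table and column names are invented purely for illustration:

    -- Two dimensions full of descriptive attributes
    CREATE TABLE dbo.DimDate (
        DateKey      int      NOT NULL PRIMARY KEY,   -- e.g. 20170731
        CalendarDate date     NOT NULL,
        [Year]       smallint NOT NULL,
        [Month]      tinyint  NOT NULL
    );

    CREATE TABLE dbo.DimProduct (
        ProductKey  int           NOT NULL IDENTITY(1, 1) PRIMARY KEY,
        ProductName nvarchar(100) NOT NULL,
        Category    nvarchar(50)  NOT NULL
    );

    -- The fact table: keys to the dimensions plus additive measures
    CREATE TABLE dbo.FactSales (
        DateKey     int            NOT NULL REFERENCES dbo.DimDate (DateKey),
        ProductKey  int            NOT NULL REFERENCES dbo.DimProduct (ProductKey),
        SalesAmount decimal(18, 2) NOT NULL,
        UnitsSold   int            NOT NULL
    );

    -- A typical report query: join out to the dimensions and aggregate the measures
    SELECT d.[Year], p.Category, SUM(f.SalesAmount) AS TotalSales
    FROM dbo.FactSales AS f
        JOIN dbo.DimDate    AS d ON d.DateKey = f.DateKey
        JOIN dbo.DimProduct AS p ON p.ProductKey = f.ProductKey
    GROUP BY d.[Year], p.Category;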

Jen’s post was built off of an early SQL Saturday presentation.  It’s still quite relevant today.


Deploying Reporting Services Reports With PowerShell

Claudio Silva has a post covering Reporting Services PowerShell cmdlets:

In this post I will share with you the request and how I have automated it, saving a lot of time. Just to keep you interested, I went from 23 and a half minutes to 3 (your mileage may vary depending on the number of objects/actions that you need to do).

The request

  1. Create new folder “FolderB”

  2. We need to deploy a copy of the reports and data source to a new folder (“FolderB”). You should get the existing ones from the folder “FolderA” on the same server.

  3. Then you have to change the data source to point to the database “dbRS” with the login “ReportingUser”.

  4. Finally, we need to change the data source for each report to match the new data source pointing to the database “dbRS” created in the last step.

Click through for the code.  Claudio even has a one-minute video showing his work in action.


An Introduction To Kafka

Prashant Sharma explains the basics of Apache Kafka:

Apache describes Kafka as a distributed streaming platform that lets us:

  1. Publish and subscribe to streams of records.

  2. Store streams of records in a fault-tolerant way.

  3. Process streams of records as they occur.

Kafka is probably the most generally interesting member of the current Hadoop ecosystem, with Spark not too far behind.  By “generally interesting,” I mean in the sense that companies with no vested interest in Hadoop as a whole could still be excited by the prospect of Kafka.


Data Lake Tools For VS Code Updated

Jenny Jiang announces Azure Data Lake Tools for Visual Studio Code’s July update:

Local Debug enables you to debug your C# code behind, step through the code, and validate your script locally before submitting to ADLA.

  • Use the ADL: Start Local Run Service command to start the local run service and set a breakpoint in your code behind, then run the ADL: Local Debug command to start the local debug service. You can debug through the debug console and view parameter, variable, and call stack information.

Click through to see the other improvements.


Cloudera Enterprise 5.12 GA

Fred Koopmans announces that Cloudera Enterprise 5.12 is now generally available:

Data Science & Engineering

  • Cloudera Data Science Workbench enhancements include:

    • GPU Support: Cloudera Data Science Workbench now enables popular deep learning frameworks to run on GPUs, both on-premises and in the cloud.

    • Embedded Web UIs: Users can work with the Apache Spark Web UI for Spark sessions. Other interactive web applications like TensorBoard, Shiny, and Plotly now appear directly in the workbench.

    • Enhanced Job Scheduling: Cloudera Data Science Workbench users can now schedule jobs directly from external schedulers or orchestration systems via the new Jobs API.

Read on for more enhancements.


Using SMO On Linux

Richie Lee explains how to get the SQL Tool Service running on Linux:

So, to briefly sum up, to use SMO on Linux, you need to do the following:

  • Install .NET Core 2.0
  • Install PowerShell beta 2
  • Install SQL Tool Service

You can use PowerShell from the Terminal, but I prefer something like an IDE, so this part is optional:

  • Download Visual Studio Code

  • Install PowerShell plugin

  • Change settings file to point explicitly to PowerShell beta 2.

Read the whole thing.


TDE + AG = Higher CPU Utilization

Ginger Keys has an analysis of CPU load under stress testing when Transparent Data Encryption is on and a database is in an Availability Group:

Microsoft says that turning on TDE (Transparent Data Encryption) for a database will result in a 2-4% performance penalty, which is actually not too bad given the benefits of having your data more secure. There is even more of a performance hit when enabling cell level or column level encryption. When encrypting any of your databases, keep in mind that the tempdb database will also be encrypted. This could have a performance impact on your other non-encrypted databases on the same instance.

In a previous post I demonstrated how to add an encrypted database to an AlwaysOn group in SQL2016. In this article I will demonstrate the performance effects of having an encrypted database in your AlwaysOn Group compared to the same database not encrypted.
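
As a refresher, turning TDE on boils down to a handful of statements like the ones below (the database name, certificate name, and password are placeholders; in an Availability Group, the certificate protecting the database encryption key also has to exist on every replica):

    -- In master: a database master key and the certificate that protects the DEK
    USE master;
    CREATE MASTER KEY ENCRYPTION BY PASSWORD = N'SomeStrongPassword!';
    CREATE CERTIFICATE TDECert WITH SUBJECT = 'TDE certificate';
    GO

    -- In the user database: create the database encryption key and switch encryption on
    USE MyUserDB;
    CREATE DATABASE ENCRYPTION KEY
        WITH ALGORITHM = AES_256
        ENCRYPTION BY SERVER CERTIFICATE TDECert;

    ALTER DATABASE MyUserDB SET ENCRYPTION ON;
    GO

    -- encryption_state = 3 means encrypted; note that tempdb shows up here too
    SELECT DB_NAME(database_id) AS database_name, encryption_state
    FROM sys.dm_database_encryption_keys;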

The results aren’t surprising, though the magnitude of the results might be.


SSMS Default Reports

Kenneth Fisher talks about the default reports built into Management Studio:

As DBAs, our stock in trade is information and there is certainly an impressive amount available. The diagnostic views are the most common place to get the information we need, but every now and again it’s nice to get an organized/pretty view. To that end, you can write your own reports or you can use the default reports that Microsoft makes available through SSMS. There are reports at the Server, Database, and Agent levels.

The Disk Usage by Table report is on my go-to list.
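
If you would rather stay in the diagnostic views, a query along these lines returns roughly the same information that report surfaces (a generic sketch, not the report’s exact query):

    -- Per-table row counts and space, straight from the diagnostic views
    SELECT s.name AS schema_name,
           t.name AS table_name,
           SUM(CASE WHEN ps.index_id IN (0, 1) THEN ps.row_count ELSE 0 END) AS [rows],
           SUM(ps.reserved_page_count) * 8 / 1024.0 AS reserved_mb,
           SUM(ps.used_page_count)     * 8 / 1024.0 AS used_mb
    FROM sys.dm_db_partition_stats AS ps
        JOIN sys.tables  AS t ON t.object_id = ps.object_id
        JOIN sys.schemas AS s ON s.schema_id = t.schema_id
    GROUP BY s.name, t.name
    ORDER BY reserved_mb DESC;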
