Press "Enter" to skip to content

Day: July 17, 2017

Executing Powershell In SSIS

Daniel Calbimonte shows how to execute Powershell via C# in SQL Server Integration Services:

Get started

The article will include the following topics:

  1. Get the list of services using PowerShell in C#.

  2. How to send SSIS Parameters to PowerShell using the script task.

  3. How to use the PowerShell.AddScript function.

  4. How to invoke a PowerShell script in C#.

I think this is a fairly limited scenario—if you’re going to have to write C# code anyhow, you can do this same work in C#.  I suppose that it would be most useful in cases where you have to call common Powershell cmdlets rather than writing your own .NET code.
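Roughly speaking, the Script Task's C# side creates a PowerShell instance with PowerShell.Create(), hands it script text via AddScript(), supplies SSIS parameter values with AddParameter(), and calls Invoke(). A minimal sketch of the kind of script text involved (the service-listing example from the article's first bullet; the $ServiceFilter parameter name is mine, not the article's) might look like this:

    # Script text a Script Task could pass to PowerShell.AddScript().
    # $ServiceFilter stands in for a value supplied from an SSIS parameter.
    param([string]$ServiceFilter = 'MSSQL*')

    # Return the name and state of each matching Windows service.
    Get-Service -Name $ServiceFilter |
        Select-Object -Property Name, Status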

Comments closed

Hadoop Name Node Capacity Planning

Mamta Chawla has some rules of thumb for sizing your Hadoop name node:

Both name node servers should have highly reliable storage for their namespace storage and edit-log journaling. That’s why — contrary to the recommended JBOD for data nodes — RAID is recommended for name nodes.

Master servers should have at least four redundant storage volumes — some local and some networked — but each can be relatively small (typically 1TB).

It is easy to determine the memory needed for both the name node and the secondary name node: the memory the name node needs to manage the HDFS cluster metadata in memory and the memory needed for the OS must be added together. Typically, the memory needed by the secondary name node is identical to that of the name node.

Click through for some specific recommendations.
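As a back-of-the-envelope illustration of that memory sum, assuming the commonly cited rule of thumb of roughly 1 GB of name node heap per million HDFS blocks (that figure is not from Mamta's article, so adjust to taste):

    # Rough name node memory estimate; every number here is an example.
    $blockCountMillions = 40    # expected HDFS blocks in the cluster, in millions
    $heapPerMillionGB   = 1     # rule-of-thumb GB of name node heap per million blocks
    $osMemoryGB         = 8     # memory set aside for the operating system

    $nameNodeMemoryGB = ($blockCountMillions * $heapPerMillionGB) + $osMemoryGB
    "Estimated name node server memory: $nameNodeMemoryGB GB"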

Comments closed

ADO.Net In Powershell Core

Max Trinidad shows how to use Powershell Core on Windows or Linux to run T-SQL queries:

There’s a catch!

Nothing is perfect yet! Using the .NET Core version of System.Data, there's a known issue with the DataRow class. It seems it builds the data results as a list of string values without the column information.

But there's always a way to make things work by adding some extra code to work around this issue and reconstruct the data the way we want.

Max points out a couple of issues that exist today, but they’re getting resolved.
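If you want a feel for the workaround pattern, here is a minimal sketch (not necessarily Max's exact code) that queries SQL Server from PowerShell Core with System.Data.SqlClient and rebuilds each row as an object that keeps its column names; the connection string and query are placeholders:

    # Placeholder connection string; swap in your own server, database, and credentials.
    $connectionString = 'Server=localhost;Database=master;User Id=sa;Password=<YourPassword>;'
    $query            = 'SELECT name, database_id FROM sys.databases;'

    $connection = [System.Data.SqlClient.SqlConnection]::new($connectionString)
    $connection.Open()

    $command = $connection.CreateCommand()
    $command.CommandText = $query
    $reader  = $command.ExecuteReader()

    # Rebuild each row as a PSCustomObject so the column names survive.
    $results = while ($reader.Read()) {
        $row = [ordered]@{}
        for ($i = 0; $i -lt $reader.FieldCount; $i++) {
            $row[$reader.GetName($i)] = $reader.GetValue($i)
        }
        [pscustomobject]$row
    }

    $reader.Close()
    $connection.Close()

    $results | Format-Table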

Comments closed

Copying Databases With dbatools

Mike Robbins has a video showing how to copy databases from several SQL Server instances using dbatools:

The video starts out by checking the default instance of SQL Server on a server named SQL17 to see if any user databases exist. Then the names of five different SQL Servers are piped to ForEach-Object. Within the ForEach-Object loop, $_ is a variable for the current object. It's translated to each individual server name as it iterates through the list of SQL Servers, copying the user databases to SQL17. Only one user database exists on each of the source SQL Servers. The databases are backed up to the specified network share and restored to the destination server. The network share and any sub-folders that are specified must already exist. The account that SQL Server runs as on each of the servers must also have access to the network share.

The names of the SQL Servers used in the demo correspond to the version of SQL Server they're running. The SQL05 server is running Windows Server 2008 (non-R2) and does not have any version of PowerShell installed, which shows just how versatile the Copy-SqlDatabase function is.

Click through to watch the video and see how quickly you can get going.
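The pipeline Mike describes boils down to a one-liner along these lines; the source server names, share path, and exact parameter names are illustrative (and note that newer dbatools releases have since renamed the function Copy-DbaDatabase):

    # Copy user databases from five source servers to SQL17 via backup and restore.
    # Server names and the network share are examples; check dbatools for current syntax.
    'SQL05', 'SQL08', 'SQL12', 'SQL14', 'SQL16' |
        ForEach-Object {
            Copy-SqlDatabase -Source $_ -Destination 'SQL17' -AllDatabases -BackupRestore -NetworkShare '\\FileServer\SqlBackups'
        }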

Comments closed

Certificate Copying

Brian Carrig shows how to create certificates from binary:

Sometimes it is necessary to copy a certificate from one database to another database. The most common method I have seen for doing this involves taking a backup of the certificate to disk from one database and then restoring the certificate to the other database.

There is, however, a lesser-known alternative available, provided you are working with SQL Server 2012 and above. Sadly, despite it being 2017, this is not as foregone a conclusion for SQL Server DBAs as it should be. This alternative is known as CREATE CERTIFICATE FROM BINARY. There are a few caveats with this option. Chief among them is that you cannot use a variable for the binary value, so you will likely end up needing to use some dynamic SQL.

One of the nice aspects to this feature from an administration and a security perspective is that you do not need to worry about accidentally leaving a copy of your certificate on a disk somewhere or having to remember to delete it after you have imported it into your user database.

Read on to see it in action.  Also, it’s about time that Brian started blogging.
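If you just want the broad strokes before reading Brian's post, here's a rough sketch (not his code) of the idea: pull the certificate's public portion out of the source database with CERTENCODED and splice it into a CREATE CERTIFICATE ... FROM BINARY statement for the target. Since the binary value can't be a variable, the statement has to be assembled as a string; here that's done in PowerShell rather than with T-SQL dynamic SQL, and private key handling is omitted entirely:

    # Names, server, and databases are illustrative; the private key is not copied here.
    $server   = 'localhost'
    $certName = 'MyCert'

    # Grab the certificate's public portion from the source database as a 0x... hex string.
    $encoded = Invoke-Sqlcmd -ServerInstance $server -Database 'SourceDb' -Query "
        SELECT CONVERT(varchar(max), CERTENCODED(CERT_ID('$certName')), 1) AS CertBinary;"

    # Build and run the CREATE CERTIFICATE statement in the target database.
    $createSql = "CREATE CERTIFICATE [$certName] FROM BINARY = $($encoded.CertBinary);"
    Invoke-Sqlcmd -ServerInstance $server -Database 'TargetDb' -Query $createSql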

Comments closed

Understanding The Latest Intel Xeon Processor Family

Glenn Berry explains what’s going on with Intel Xeon scalable processors:

The Skylake-SP has a different cache architecture that changes from a shared-distributed model used in Broadwell-EP/EX to a private-local model used in Skylake-SP. How this change will affect SQL Server workloads remains to be seen.

In Broadwell-EP/EX, each physical core had a 256KB private L2 cache, while all of the cores shared a larger L3 cache that could be as large as 60MB (typically 2.5MB/core). All of the lines in the L2 cache for each core were also present in the inclusive, shared L3 cache.

In Skylake-SP, each physical core has a 1MB private L2 cache, while all of the cores share a larger L3 cache that can be as large as 38.5MB (typically 1.375MB/core). All of the lines in the L2 cache for each core may not be present in the non-inclusive, shared L3 cache.

A larger L2 cache increases the hit ratio from the L2 cache, resulting in lower effective memory latency and lowered demand on the L3 cache and the mesh interconnect. L2 cache is typically about 4X faster than L3 cache in Skylake-SP. Figure 2 details the new cache architecture changes in Skylake-SP.

Glenn explains what the performance ramifications of these changes are, and also gives a consumer caveat regarding a major price difference based on memory capacity per socket.
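As a quick sanity check, the per-core numbers in the quote line up with the top core counts in each family (the core counts come from Intel's public specs, not from Glenn's post):

    24 * 2.5      # 24-core Broadwell-EX: 60 MB of shared L3
    28 * 1.375    # 28-core Skylake-SP:   38.5 MB of shared L3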

Comments closed

Dimensional Modeling

Jen Underwood explains the basics of dimensional modeling:

A dimensional model is also commonly called a star schema. It provides a way to improve report query performance without affecting data integrity. This type of model is popular in data warehousing because it can provide better query performance than transactional, normalized, OLTP data models. It also allows for data history to be stored accurately over time for reporting. Another reason why dimensional models are created: they are easier for non-technical users to navigate. Creating reports by joining many OLTP database tables together becomes overwhelming quickly.

Dimensional models contain facts surrounded by descriptive data called dimensions. Facts contain numerical values of what you measure, such as sales or user counts, that are additive or semi-additive in nature. Fact tables also contain the keys/links to associated dimension tables. Compared to most dimension tables, fact tables typically have a large number of rows.
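To make the shape concrete, here is a made-up star-schema query, one fact table joined to a date and a product dimension, wrapped in Invoke-Sqlcmd; every table, column, server, and database name is invented for illustration:

    # A typical star-schema query: aggregate a fact table, sliced by two dimensions.
    $starQuery = '
        SELECT   d.CalendarYear,
                 p.ProductCategory,
                 SUM(f.SalesAmount) AS TotalSales
        FROM     dbo.FactSales AS f
                 JOIN dbo.DimDate    AS d ON d.DateKey    = f.DateKey
                 JOIN dbo.DimProduct AS p ON p.ProductKey = f.ProductKey
        GROUP BY d.CalendarYear, p.ProductCategory;'

    Invoke-Sqlcmd -ServerInstance 'localhost' -Database 'SalesDW' -Query $starQuery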

Jen’s post was built off of an early SQL Saturday presentation.  It’s still quite relevant today.

Comments closed

Deploying Reporting Services Reports With Powershell

Claudio Silva has a post covering Reporting Services Powershell cmdlets:

In this post I will share with you the request and how I have automated it, saving a lot of time. Just to keep you interested, I went from 23 and a half minutes to 3 (your mileage may vary depending on the number of objects/actions that you need to do).

The request

  1. Create new folder “FolderB”

  2. We need to deploy a copy of the reports and data source to a new folder (“FolderB”). You should get the existing ones from the folder “FolderA” on the same server.

  3. Then you have to change the data source to point to the database "dbRS" with the login "ReportingUser".

  4. Finally, we need to change the data source for each report to match the new data source pointing to database "dbRS" created in the previous step.

Click through for the code.  Claudio even has a one-minute video showing his work in action.
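If you just want a feel for the shape of the automation before clicking through, here is a very rough sketch against the ReportingServicesTools module. Treat it as pseudocode: the cmdlet and parameter names below are my best recollection and may not match the module exactly, and the report server URI, folder, and report names are all made up; Claudio's post has the real calls.

    # 1. Create the new folder.
    New-RsFolder -ReportServerUri 'http://localhost/ReportServer' -Path '/' -Name 'FolderB'

    # 2. Pull a report and the data source out of FolderA, then push them into FolderB.
    Out-RsCatalogItem   -ReportServerUri 'http://localhost/ReportServer' -RsItem '/FolderA/SalesReport' -Destination 'C:\Temp'
    Write-RsCatalogItem -ReportServerUri 'http://localhost/ReportServer' -Path 'C:\Temp\SalesReport.rdl' -RsFolder '/FolderB'

    # 3. and 4. Re-point the copied data source at dbRS (login ReportingUser) and
    #           re-link each report to it with the module's data source cmdlets;
    #           see Claudio's post for those exact commands.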

Comments closed