Press "Enter" to skip to content

Author: Kevin Feasel

Migrating to SQL Managed Instances with dbatools

Jovan Popovic shows how we can perform an offline migration from on-prem/IaaS SQL Server to a SQL Managed Instance using dbatools:

Typically, the offline migration process looks like this:

– You need to create an Azure Blob Storage account that will temporarily hold the database backups that will be moved from SQL Server to Managed Instance.
– You need to back up the databases to Azure Blob Storage and restore them from Azure Blob Storage to Managed Instance.
– You need to migrate server-level objects such as logins and agent jobs from the source to the destination instance.

In this article, I will use Azure PowerShell to create and manage the necessary Azure resources, and the dbatools PowerShell library to initiate the migration.

Read on for the process, including the PowerShell scripts and dbatools calls needed.
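
For a feel of the dbatools side, here is a minimal sketch of those three steps (instance names, database name, and storage URL are placeholders, and both instances need a credential for the storage container, which Jovan's article walks through):

    # Sketch only -- names and URLs are placeholders.
    $source = "OnPremSql01"
    $target = "my-mi.public.abcdef.database.windows.net,3342"
    $url    = "https://mystorageacct.blob.core.windows.net/backups"

    # Back up to Azure Blob Storage...
    Backup-DbaDatabase -SqlInstance $source -Database MyDb -AzureBaseUrl $url -Type Full

    # ...restore on the Managed Instance from the same container...
    Restore-DbaDatabase -SqlInstance $target -DatabaseName MyDb -Path "$url/MyDb.bak"

    # ...and bring the server-level objects across.
    Copy-DbaLogin -Source $source -Destination $target -ExcludeSystemLogins
    Copy-DbaAgentJob -Source $source -Destination $target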


Verifying Database Backups

Lori Brown reminds us to perform checksums and verify backups on completion:

I found out that I have been missing something from our regular database backups that I had no idea I should have been using all along. I know about verifying your backup file and have incorporated into our standard maintenance routines one that will periodically test backups using RESTORE VERIFYONLY. However, I totally missed also having CHECKSUM specified when creating backup files. Ugh!! Not sure how that happened, but I am totally onboard with it now. Better late than never!

Lori does explain what the consequences are in terms of time and CPU utilization so that you’re aware of the tradeoffs when enabling these options.
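If your backups run through dbatools rather than hand-written T-SQL, both pieces are a switch away; a minimal sketch, with placeholder instance and path:

    # -Checksum adds WITH CHECKSUM to the backup, and -Verify runs
    # RESTORE VERIFYONLY against the finished file.
    Backup-DbaDatabase -SqlInstance "OnPremSql01" -Database MyDb -Path "D:\Backups" -Checksum -Verify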


Breaking Out PowerShell Functions with PowerShell

Shane O’Neill shows us how we can use PowerShell to break PowerShell functions out into their own files:

The stupid thing that I was doing was that I was manually, visually scanning the script, copying out the function definitions, and pasting them into their own function files.

This was long, this was tedious, and this was not an efficient use of my time.

Especially since the scripts were not laid out as logically as I would have liked.

Click through to see how Shane solved this.
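For the curious, one way to automate this (not necessarily Shane's exact approach) is to lean on PowerShell's own parser; a sketch with placeholder paths:

    # Parse the script into an AST without executing it
    $scriptPath = "C:\Scripts\BigScript.ps1"
    $outFolder  = "C:\Scripts\Functions"
    $ast = [System.Management.Automation.Language.Parser]::ParseFile($scriptPath, [ref]$null, [ref]$null)

    # Find every function definition, including nested ones
    $functions = $ast.FindAll({ $args[0] -is [System.Management.Automation.Language.FunctionDefinitionAst] }, $true)

    # Write each function's verbatim source to its own file
    foreach ($function in $functions) {
        Set-Content -Path (Join-Path $outFolder "$($function.Name).ps1") -Value $function.Extent.Text
    }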


Real-Time Analytics with Divolte, Kafka, Druid, and Superset

Fokko Driesprong gives us a proof-of-concept architecture for real-time analytics in the Hadoop ecosystem:

Divolte Collector is a scalable and performant application for collecting clickstream data and publishing it to a sink, such as Kafka, HDFS or S3. Divolte has been developed by GoDataDriven and made available to the public under the Apache 2.0 open source license.

Divolte can be used as the foundation to build anything from basic web analytics dashboarding to real-time recommender engines or banner optimization systems. By using a JavaScript tag in the browser of the customers, it gathers data about their behaviour on the website or application. You’re in full control of what you do and don’t want to capture.

Click through for the example.


SQL Server 2019 RC 1.1

Amit Banerjee announces a minor numeric change and a big update to SQL Server 2019 RC1:

In continuation with our announcement of SQL Server 2019 release candidate last week, we’re announcing that the release candidate refresh for SQL Server 2019 is now available to download. The release candidate now includes bits for Big Data Clusters in SQL Server 2019 in this refresh.

Back in July, we announced the preview of Big Data Clusters in SQL Server 2019 and since then we’ve seen our customers actively bringing their big data analytical workloads to SQL Server 2019 to operationalize their AI and machine learning projects.

Read on for more.


Combining Neo4j with Kafka

David Allen shows how you can use Neo4j to visualize graph data living in Kafka:

We’re enabling the plugin to work as both a source and a sink. In the NEO4J_streams_sink_topic_cypher_friends item, we’re writing a Cypher query. In this query, we’re MERGE-ing two Person nodes. The plugin gives us a variable named event, which we can use to pull out the properties we need. When we MERGE nodes, it creates them only if they do not already exist. Finally, it creates a relationship between the two nodes (p1) and (p2).

This sink configuration is how we’ll turn a stream of records from Kafka into an ever-growing and changing graph. The rest of the configuration handles our connection to a Confluent Cloud instance, where all of our event streaming will be managed for us. If you’re trying this out for yourself, make sure to replace KAFKA_BOOTSTRAP_SERVERS, API_SECRET, and API_KEY with the values that Confluent Cloud gives you when you generate an API access key.

Click through for the example.
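
To make the shape of that sink setting concrete, here is a hypothetical sketch of passing it to a Neo4j container from PowerShell (the setting name matches the quote, but the event field names, image, and ports are assumptions, and the image would need the Neo4j streams plugin installed):

    # Hypothetical sketch -- event.person1 and event.person2 are made-up field
    # names, and the stock image does not include the streams plugin.
    $cypher = "MERGE (p1:Person {name: event.person1}) MERGE (p2:Person {name: event.person2}) MERGE (p1)-[:FRIENDS]->(p2)"

    docker run -d --name neo4j -p 7474:7474 -p 7687:7687 `
      -e "NEO4J_streams_sink_enabled=true" `
      -e "NEO4J_streams_sink_topic_cypher_friends=$cypher" `
      neo4j:latest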


SQL Injection without Dynamic SQL

Erik Darling has a card trick for us:

I always try to impart on people that SQL injection isn’t necessarily about vandalizing or trashing data in some way.

Often it’s about getting data. One great way to figure out how difficult it might be to get that data is to figure out who you’re logged in as.

There’s a somewhat easy way to figure out if you’re logged in as sa.

Wanna see it?

Of course you do.
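
Erik's actual trick is worth the click; the blunt, non-sneaky way to ask the same question from a query looks something like this (run here through dbatools, with a placeholder instance name):

    # Not Erik's trick -- just the direct way to ask "who am I, and am I sysadmin?"
    Invoke-DbaQuery -SqlInstance "OnPremSql01" -Query "SELECT SUSER_SNAME() AS login_name, IS_SRVROLEMEMBER('sysadmin') AS is_sysadmin;"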


Fixing Windows Power Settings

Jeff Iannucci shows us how to check Windows power settings from within T-SQL:

Well, not exactly, but it’s definitely like that. The default Power Setting is “Balanced” which means during periods of lower activity the clock speeds of your CPUs are reduced to conserve power and save your battery.

Apparently all Windows installations think they are on laptops. SPOILER ALERT: your database servers are probably not laptops.

Jeff has a T-SQL script to fix this. Unfortunately, it won’t fix the other power-based performance killer: power settings in BIOS.
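
If you'd rather handle it from the OS side than through T-SQL, the usual tool is powercfg; a quick sketch (the GUID below is the well-known built-in High performance scheme):

    # Show the active power plan
    powercfg /getactivescheme

    # Switch to the built-in High performance plan
    powercfg /setactive 8c5e7fda-e8bf-4a96-9a85-a6e23a8c635c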


Understanding Power BI Data Virtualization Queries

Gerhard Brueckl walks us through a few examples of queries Power BI makes when virtualizing data:

Even though this query only touches two different data sources, it is a good way to analyze the queries sent to the data sources. To track these queries I used the built-in Performance Analyzer of Power BI Desktop, which can be enabled on the “View” tab. It gives you detailed information about the performance of the report, including the actual SQL queries (under “Direct query”) which were executed on the data sources. The plain text queries can also be copied using the “Copy queries” link at the bottom.

Read on for the queries and for Gerhard’s analysis.
