Press "Enter" to skip to content

Month: January 2019

Capturing SQLCMD Errors

Jack Vamvas shows us how we can capture errors from SQLCMD:

I’m executing code using SQLCMD from a batch file. The code points to a SQL file, and there is also an output file.


SQLCMD -E -S MYSERVER\INST1 -i "setup_job_entry.sql" -o "setup_job_entry.log"


But I noticed that if SQLCMD itself returns an error, for example, if I’m connecting to a server which doesn’t exist, the error message will appear in the output file, but there will not be an ERROR number which would allow me to trap it and return an appropriate message.

There is a way, and Jack shows us how.
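
Jack walks through the details, but the usual trick (sketched here, not necessarily Jack’s exact approach) is SQLCMD’s -b switch, which makes the utility exit with a non-zero code when it hits an error of severity 11 or higher, so the caller can branch on it. A minimal PowerShell sketch reusing the server and file names from above:

# -b makes SQLCMD return a non-zero exit code on errors of severity 11+,
# including a failure to connect to the server at all.
sqlcmd -E -S "MYSERVER\INST1" -b -i "setup_job_entry.sql" -o "setup_job_entry.log"

if ($LASTEXITCODE -ne 0) {
    # Branch on the exit code rather than scraping the log file for error text.
    Write-Error "setup_job_entry.sql failed with exit code $LASTEXITCODE"
}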


Creating Cosmos DB Indexes

Hasan Savran explains indexing in Cosmos DB:

In SQL Server you need to pick which columns you’d like to index; in CosmosDB you need to pick which columns not to index. It’s kind of the same thing in the end. You might ask, “If everything is indexed and working fine, why do you want me to poke the well-running system?” When we compare SQL Server indexes to CosmosDB indexes, one thing works exactly the same: the index file size. CosmosDB holds the indexes in a separate file like SQL Server does, and if we index everything, the index file is going to get large. Since we need to pay for file space in CosmosDB, you might end up paying extra for indexes that you never use. Also, your updates, inserts, and deletes might cost you more Request Units, since CosmosDB needs to maintain all the indexes in the background.

There’s just enough difference to make you pay the price if you assume Cosmos DB works just like SQL Server.
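
The knob for all of this is the container’s indexing policy: rather than listing columns to index, you exclude the JSON paths you will never filter or sort on. Below is a minimal sketch of such a policy in a PowerShell here-string; the /notes path is a made-up example, not anything from Hasan’s post.

$indexingPolicy = @'
{
  "indexingMode": "consistent",
  "includedPaths": [ { "path": "/*" } ],
  "excludedPaths": [ { "path": "/notes/?" } ]
}
'@

# /* keeps every other path indexed, while /notes/? drops the scalar value at
# /notes, shrinking the index file and the RU cost of writes that touch it.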


Migrating Lots Of Databases To SQL Server 2016

Andy Levy has a problem. Well, about 8000 of them. In part 1, he describes the plan:

How do you move eight thousand databases in a reasonable amount of time? I spent about an hour and a half one morning hashing ideas out w/ folks in the dbatools Slack channel, plus several conversations in the office and with our hosting provider.

Then, in part 2, he describes the execution:

We missed the estimated time for our go/no-go decision by five minutes. With the number of moving parts, databases in play, unexpected delays, and amount of testing we had to do, that’s pretty good! My colleague and I had some additional work we needed to take care of after the team declared the migration a success. Agent jobs needed to be enabled, overnight job startups monitored, things like that. We called it a day after about 14 hours in the office.

It was a nice success story, so check it out.


What Happens When Your Secondary DAG Fails Over

Tianyu Wen explains what happens when there is a failover incident in the secondary Availability Group of a Distributed Availability Group:

If the primary replica on the secondary AG (also known as the “forwarder”) is lost and causes automatic failover to happen or manual failover is performed in the secondary AG in a DAG, there will be no data loss if the following conditions are met:
– The primary replica on the primary AG runs with no synchronization issue when the failover happens;
– The secondary AG on the DAG has a functioning secondary replica before the failover happens;
– The primary replica on the primary AG can communicate properly with the secondary replica on the secondary AG over their database mirroring endpoints.
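
Before trusting that first condition, you can check it directly: the availability DMVs expose per-database synchronization state. Here is a hedged sketch using Invoke-Sqlcmd from the SqlServer module, run against the primary replica of the primary AG (the server name is a placeholder):

Invoke-Sqlcmd -ServerInstance 'PRIMARY-REPLICA' -Query @'
SELECT ag.name AS availability_group,
       DB_NAME(drs.database_id) AS database_name,
       drs.synchronization_state_desc
FROM sys.dm_hadr_database_replica_states AS drs
JOIN sys.availability_groups AS ag
    ON ag.group_id = drs.group_id
WHERE drs.is_local = 1;
'@

# Healthy databases report SYNCHRONIZED (synchronous commit) or SYNCHRONIZING
# (asynchronous commit); anything else deserves a look before you fail over.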

There’s a lot of detail here, so if you are supporting Distributed Availability Groups, check it out.


Getting Reporting Services Installation Details With PowerShell

Josh Smith wants to find every installation of SQL Server Reporting Services on a machine:

This is one of those posts I write so I never have to google this again (one hopes). Here is the PowerShell code to pull back a set of details about every SSRS instance installed on a server, including the SSRS instance name, the first URL port it is running on, the service name, and the name of the report server database.

Click through for the PowerShell script.
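
For context, scripts like this usually walk the SSRS WMI provider, which registers one namespace per instance. A hedged sketch of that shape (not Josh’s exact code):

# Each SSRS instance registers a WMI namespace named RS_<InstanceName>
# under root\Microsoft\SqlServer\ReportServer.
Get-CimInstance -Namespace 'root\Microsoft\SqlServer\ReportServer' -ClassName __NAMESPACE |
ForEach-Object {
    $rs = $_.Name
    # Each RS_<InstanceName> namespace has a version child (e.g. v13 for SSRS 2016).
    Get-CimInstance -Namespace "root\Microsoft\SqlServer\ReportServer\$rs" -ClassName __NAMESPACE |
    ForEach-Object {
        # MSReportServer_ConfigurationSetting carries instance, service, and database details.
        Get-CimInstance -Namespace "root\Microsoft\SqlServer\ReportServer\$rs\$($_.Name)\Admin" -ClassName MSReportServer_ConfigurationSetting |
        Select-Object InstanceName, ServiceName, DatabaseServerName, DatabaseName
    }
}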


A Compendium Of Kafka Links

Manas Dash shares some interesting Kafka-related articles, case studies, and books:

Articles
1. Kafka in a Nutshell. Published on September 25, 2015, by Kevin Sookocheff. He says, “Kafka is quickly becoming the backbone of many organization’s data pipelines — and with good reason. By using Kafka as a message bus we achieve a high level of parallelism and decoupling between data producers and data consumers, making our architecture more flexible and adaptable to change.” If you have not read about Kafka yet, start with this one; it reads like an executive summary of the what, where, and why of Kafka.

Read on for several more articles, as well as a few case studies and two books.


The Basics Of Naive Bayes Classifiers

I have the first post in a series up on using the Naive Bayes class of algorithms for classifying inputs:

Why Should We Use Naive Bayes? Is It the Best Classifier Out There?
Probably not, no. In fact, it’s typically a mediocre classifier—it’s the one you strive to beat with your fancy algorithm. So why even care about this one?
Because it’s fast, easy to understand, and it works reasonably well. In other words, this is the classifier you start with to figure out if it’s worth investing your time in a problem. If you need to hit 90% category accuracy and Naive Bayes is giving you 70%, you’re probably in good shape; if it’s giving you 20% accuracy, you might need to take another look at whether you have a viable solution given your data.

Click through to learn how to figure out what day it is based on what some fictional fellow is wearing on his head. Also, learn what I actually mean when I let “update your priors” slip.
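
For reference, the whole family rests on Bayes’ theorem plus one simplifying assumption: the features are conditionally independent given the class. In LaTeX notation, the classifier picks

\hat{c} = \arg\max_{c} \; P(c) \prod_{i=1}^{n} P(x_i \mid c)

where P(c) is the prior for class c and each P(x_i | c) is a per-feature likelihood estimated from training counts. “Updating your priors” is that machinery in action: today’s posterior over the classes becomes tomorrow’s prior.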


Generating SQL With Biml

Cathrine Wilhelmsen shows us that you can do a lot more with Biml than just generate SSIS packages:

This actually happened to me in a previous job. We had a fairly complex ETL solution for the most critical part of our Data Warehouse. Many SSIS packages, views, and stored procedures queried the tables that were replicas of the source system tables. One day, we found out that the source system would be deploying a new version of their database the following day. In every single table, some columns were removed, others added, and many changed data types.
Oh.
There was no way that we could manually update all our SSIS packages, views, and stored procedures in less than a day. Thousands of users depended on our solution. It was too late to pause the source system changes.
Oh.

That story has a happy ending.
