Press "Enter" to skip to content

Curated SQL Posts

Thoughts On Max Memory Settings

Monica Rathbun shares some thoughts on SQL Server’s max memory settings:

Quite often I see database administrators set SQL Server max server memory thinking everything related to SQL Server uses this shared memory pool. This is a mistake. There are many things that rely on memory that are not part of SQL Server. Best practices state that you should leave memory allotted for the operating system. However, did you know that if you are running services like SSIS, SSAS, or SSRS on the same server as the database engine, they do not use the memory you have allocated for SQL Server? If the Max Memory setting is not configured correctly, these other services could incur memory pressure. While the memory consumed by SSAS and SSRS can be configured, SSIS can be a little more challenging. Beyond this, there are even scenarios where the memory SQL Server consumes can exceed the setting, such as with CLR in versions earlier than 2012 and some other bugs in SQL Server.

As a consultant, I have seen memory pressure and memory exhaustion too many times to count because the DBA was unaware of this. I applaud those who take the time to properly configure this setting according to what the database engine requires. Let’s take it a step further and take the time to look at what additional services you are using and allot memory accordingly.
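
For reference, the change itself is a single sp_configure call; here is a minimal sketch, where the 6144 MB value is just a placeholder rather than a recommendation (size it for your own server and whatever else runs on it):

    EXEC sp_configure 'show advanced options', 1;
    RECONFIGURE;
    -- Value is in MB; leave headroom for the OS and any other services on the box
    EXEC sp_configure 'max server memory (MB)', 6144;
    RECONFIGURE;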

Read on for more.


Undeleting A Deleted Azure SQL Database

Arun Sirpal shows us the “undoing a big mistake” button:

Okay, honestly, I have done this once. I deleted an Azure SQL Database and then tried to find the quickest way to recover it. The Azure portal is actually pretty good when it comes to deleting resources; for example, it will usually ask you to re-type the name of the resource to confirm deletion, so you can tell how bad a mistake I made.

Let’s look at how to delete a database then recover it.

I’m curious how long it stays there before dropping off into the abyss.


Calculating Lifetime Value With R

Sergey Bryl shows how to calculate the lifetime value of a subscription service:

Predicting LTV is a common issue for a new, recently launched product/service/application when we don’t have a lot of historical data but want to calculate LTV as soon as possible. Even though we may have a lot of historical data on customer payments for a product that has been active for years, we can’t really trust earlier stats, since the churn curve and LTV can differ significantly between new customers and current ones for a variety of reasons.

Therefore, regardless of whether our product is new or “old”, we attract new subscribers and want to estimate what revenue they will generate during their lifetimes for business decision-making.

This topic is closely connected to Cohort Analysis, and if you are not familiar with the concept, I recommend that you read about it and look at other articles I wrote earlier on this blog.

Read the whole thing.


Writing SQL Against Elasticsearch

Guy Shilo shows how you can write SQL to query Elasticsearch:

The mappings Elastic SQL uses are:

Index = Table

Document = Row

Field = Column

This mapping is quite intuitive. Types are left out because they have been obsolete since Elastic 6.0.

So let’s give it a try. I used the latest Elastic 6.4 for this demonstration and ran the queries from Kibana, although they can be run with curl or just a browser as well. First, we will need some data. I found this article in the Elastic documentation that suggests several data files ready to be loaded. I did not need all of the data, so I only used the JSON file that contains all the works of William Shakespeare, which can be downloaded here.
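
To give a taste of the syntax, a minimal query against that shakespeare index, run from the Kibana console, might look like the following (the field names come from Elastic’s sample data set, so adjust if yours differ):

    POST /_xpack/sql?format=txt
    {
      "query": "SELECT play_name, speaker, text_entry FROM shakespeare LIMIT 5"
    }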

Feasel’s Law continues.


Watch Those PowerShell Shell Versions

Emin Atac reviews a couple of 64- versus 32-bit PowerShell oddities:

    • Context

My colleagues sent a message with a link that points to a script located on a shared drive, to help our users reinstall their software.
Our users just clicked on the link in their Outlook and got a message saying:
\\servername.fqdn\share\softwarename\install.ps1 cannot be loaded because running scripts is disabled on this system. For more information, see about_Execution_Policies at https:/go.microsoft.com/fwlink/?LinkID=135170.

    • Issue

Users use Outlook, which is a 32-bit process. If they click on a link that points to a script, it will spawn a 32-bit console and run a 32-bit powershell.exe child process.
It appears that the ExecutionPolicy isn’t defined in the 32-bit PowerShell and is set to its default value, “Restricted”, although it is defined in the 64-bit PowerShell.

Read on for the solution to this issue as well as a second, similar issue.


Sending SQL Server Notifications To Slack

Alessandro Alpi shows how to integrate SQL Server notifications with Slack:

Now, how can we send notifications from SQL Server in an easier way than using custom code or a Slack incoming webhook? Is there any integration or a Slack app?  Yes. And guess what? I think you’ll like it because you don’t need to write a single line of code, and you don’t need to choose between CLR, PowerShell or any other language. It’s ironic, but the integration is called “Email”.
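
The nice part is that the SQL Server side of that integration is plain Database Mail: Slack’s e-mail integration gives a channel its own inbound address, and you just send to it. A minimal sketch, with a made-up profile name and channel address:

    EXEC msdb.dbo.sp_send_dbmail
        @profile_name = 'DBA Alerts',                  -- an existing Database Mail profile
        @recipients = 'my-channel@example.slack.com',  -- hypothetical Slack channel address
        @subject = 'Job failure on PROD01',
        @body = 'The nightly ETL job failed. Check the job history for details.';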

Speaking of CLR, I’ve had success with the SqlServerSlackAPI in the past.


Running SQL Server On Cluster Shared Volumes

Sreekanth Bandarla continues a series on Cluster Shared Volumes:

In the previous part of this series, we saw what a cluster shared volume is, along with the advantages and other considerations to keep in mind when deploying CSVs for SQL Server workloads. In this article, I will walk through the actual installation of a failover cluster instance leveraging CSVs.

To begin with, I will walk you through my cluster setup from a 20,000-foot view. I created two brand new VMs running Windows Server 2012 R2 and renamed them accordingly. Nothing special with regard to disk drives at this point, just basic VMs with a system drive (C$).

The rest of the story is over at SQLShack.


The DAX Guide

Marco Russo announces a new site:

  • What is DAX Guide? DAX Guide is a website offering a complete reference to the DAX language. Every function is presented with its complete syntax, a short description, and links to related functions and articles.

  • Is DAX Guide a tutorial to learn DAX? No, DAX Guide is not designed as a learning tool. The goal of DAX Guide is to provide a quick reference with accurate information. The only commitment is “quality first”.

  • What are some unique features of DAX Guide? DAX Guide is updated automatically through the monitoring of new versions of Microsoft products. Every DAX function comes with a compatibility matrix describing in which Microsoft products and versions the function may be available. Additional attributes highlight which functions perform a context transition, which arguments are executed within a row context, and which functions are, in our opinion, obsolete or deprecated.

If that sounds interesting to you, check it out.


A Quick Look At Data Visualization Tools

Vincent Wong walks us through data visualization tools on the market today:

Highcharts

When we talk about ECharts, we usually compare it with Highcharts. The relationship between them is a bit like the relationship between WPS and Office.

Highcharts is also a visualization library, but one you have to pay for if you are going to use it. It has many advantages: for example, its documentation, tutorials, JS scripts, and CSS are very detailed, which saves time and allows you to pay more attention to learning and developing. What’s more, it is very stable.

There are some good tools on this list.


It’s All ETL (Or ELT) In The End

Robin Moffatt notes that ETL (and ELT) doesn’t go away in a streaming world:

In the past, we used ETL techniques purely within the data warehousing and analytics space. But if one considers what ETL is doing and why, it is actually a lot more applicable as a broader concept.

  • Extract: Data is available from a source system
  • Transform: We want to filter, cleanse or otherwise enrich this source data
  • Load: Make the data available to another application

There are two key concepts here:

  • Data is created by an application, and we want it to be available to other applications
  • We often want to process the data (for example, cleanse and apply business logic to it) before it is used

Thinking about many applications being built nowadays, particularly in the microservices and event-driven space, we recognize that what they do is take data from one or more systems, manipulate it and then pass it on to another application or system. For example, a fraud detection service will take data from merchant transactions, apply a fraud detection model and write the results to a store such as Elasticsearch for review by an expert. Can you spot the similarity to the above outline? Is this a microservice or ETL process?
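
To make the parallel concrete, here is a rough KSQL-style sketch of that fraud-detection pipeline as a streaming query (the stream, topic, and column names are all invented for illustration):

    -- Extract: expose the merchant transactions topic as a stream
    CREATE STREAM transactions (txn_id VARCHAR, merchant VARCHAR, amount DOUBLE)
      WITH (KAFKA_TOPIC = 'transactions', VALUE_FORMAT = 'JSON');

    -- Transform + Load: filter suspicious rows and write them out as a new stream
    CREATE STREAM possible_fraud AS
      SELECT txn_id, merchant, amount
      FROM transactions
      WHERE amount > 10000;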

Things like this are reason #1 why I expect data platform jobs (administrator and developer) to be around decades from now. The set of tools expands, but the nature of the job remains similar.
