Press "Enter" to skip to content

Author: Kevin Feasel

Ignoring Updates to Some Statistics

Raul Gonzalez gives some tips on optimizing statistics updates:

For now, everything described might not be such a horrible thing. It’s clear that SQL Server will not take full advantage of the stats on the column [Body] if the queries we are running use wildcards (especially leading ones), but why so much fuss? Well, now is when things start making sense (or not).

Running stats maintenance on these kinds of columns every night can become really expensive, and this is what I’ve found more than once when using the Query Store to look for queries with a high number of reads.

Read the whole thing.
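
For a concrete idea of how you might exempt a statistic like the one on [Body] from nightly maintenance, here is a minimal sketch using the NORECOMPUTE option; this is one way to do it, not necessarily Raul’s exact approach, and the table and statistic names are my assumptions:

CREATE STATISTICS st_Posts_Body
    ON dbo.Posts ([Body])
    WITH NORECOMPUTE;

-- For an existing statistic, the flag can be set during a manual update:
UPDATE STATISTICS dbo.Posts st_Posts_Body
    WITH FULLSCAN, NORECOMPUTE;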

Drain Mode in Azure Functions

Rayis Imayev pulls the plug:

As requests to execute Azure Functions increase, the demand for compute resources grows to meet it, but only for as long as it is needed (scale-out). As requests fall, any extra resources and application instances drop off automatically (scale-in).

Recently, Microsoft enabled a new Drain mode in Azure Functions that allows for a graceful shutdown of the Azure Functions host: it completes in-flight invocations and stops listening for new events from triggering sources.

Read on for the set of steps it performs, as well as the benefit it provides.

What SET NOCOUNT ON Does

Brent Ozar takes us through a simple but useful SET command:

When you’re working with T-SQL, you’ll often see SET NOCOUNT ON at the beginning of stored procedures and triggers.

What SET NOCOUNT ON does is prevent the “1 row affected” messages from being returned for every operation.

Read on to see why this is useful. Also check out the comments for a few other reasons to use it, such as applications written in such a way that they get confused and fail when NOCOUNT is off.
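
As a quick illustration of the common pattern (the procedure and table names here are hypothetical):

CREATE OR ALTER PROCEDURE dbo.TouchWidget
    @WidgetId int
AS
BEGIN
    SET NOCOUNT ON;  -- suppress the "1 row affected" message for each statement

    UPDATE dbo.Widgets
    SET LastTouched = SYSDATETIME()
    WHERE WidgetId = @WidgetId;

    -- @@ROWCOUNT still works; only the informational message is suppressed
    SELECT @@ROWCOUNT AS RowsUpdated;
END;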

Embedding Power BI into Jupyter Notebooks

Dennes Torres takes a look at a new Power BI feature:

Microsoft recently announced the ability to include Power BI reports inside Jupyter notebooks. After overcoming the dazzle of this exciting feature, what comes to my mind is: “Why do we need this?”

I’m far from being a Jupyter notebook expert, but as far as I know, they are used for interactive analysis. Why, in the middle of an interactive analysis, would I need to get a Power BI Report?

Even if the Power BI Report is not exactly what I need, I could continue the analysis in Power BI. Why should I move it to Jupyter and make this kind of integration with an existing report?

Read on to see what you can do with it. As far as how you might be able to use it, that remains an open question.

Building a Payoff Diagram in R

Holger von Jouanne-Diedrich builds out payoff diagrams:

Not many people understand the financial alchemy of modern financial investment vehicles, like hedge funds, that often use sophisticated trading strategies. But everybody understands the meaning of rising and falling markets. Why not simply translate one into the other?

If you want to get your hands on a simple R script that creates an easy-to-understand plot (a profit & loss profile or payoff diagram) out of any price series, read on!

Click through for several examples of code and financial instruments.

Understanding Query Execution Time Statistics

Esat Erkec takes us through SET STATISTICS TIME ON:

The SET STATISTICS TIME ON statement returns a text report that includes how long query compilation and execution took. To enable this option for any query, we need to execute the SET STATISTICS TIME ON command before executing the query; the execution time report will then appear in the Messages panel of the query results until we turn the option off. All values in the report are shown in milliseconds, and its syntax is as below:

Read on to see how you can use it, as well as things to keep in mind as you do.
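
As a minimal example of what the report looks like (the query here is arbitrary and the timings in the comments are illustrative):

SET STATISTICS TIME ON;

SELECT COUNT(*) FROM sys.objects;

SET STATISTICS TIME OFF;

-- Typical output in the Messages panel:
--   SQL Server parse and compile time:
--      CPU time = 0 ms, elapsed time = 2 ms.
--   SQL Server Execution Times:
--      CPU time = 0 ms, elapsed time = 1 ms.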

Searching for Key Lookups

Grant Fritchey answers a question:

While teaching about Extended Events and Execution Plans last week, Jason, one of the people in the class, asked: Is there a way in Extended Events to find queries using a Key Lookup operation? Sadly, the answer is no. However, you can query the Execution Plans in cache or in the Query Store to find this. Thanks for the question Jason. Here’s your answer.

Read on to see how.
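
For a rough idea of the plan cache approach (this is my sketch, not necessarily Grant’s exact query), the cached plan XML can be searched for the Key Lookup physical operator:

WITH XMLNAMESPACES (DEFAULT 'http://schemas.microsoft.com/sqlserver/2004/07/showplan')
SELECT TOP (50)
    st.text AS query_text,
    qp.query_plan
FROM sys.dm_exec_cached_plans AS cp
CROSS APPLY sys.dm_exec_query_plan(cp.plan_handle) AS qp
CROSS APPLY sys.dm_exec_sql_text(cp.plan_handle) AS st
WHERE qp.query_plan.exist('//RelOp[@PhysicalOp = "Key Lookup"]') = 1;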

SQL Server 2016 Leaving Mainstream Support July 2021

Glenn Berry reminds us that time flies:

SQL Server 2016 falls out of Mainstream Support on July 13, 2021. What this means is that there won’t be any new Service Packs or Cumulative Updates released for SQL Server 2016 after that date. It is still in Extended Support until July 14, 2026. While in Extended Support, there will still be security and critical functional updates, if any are needed.

Read on for more information about what this means; it’s not a situation that calls for panicking and immediately changing everything.

Using Spark in CDP’s Operational Database Experience

Gokul Kamaraj, et al., take us through using Apache Spark in Cloudera Data Platform’s Operational Database Experience:

Apache Spark is a very popular analytics engine used for large-scale data processing. It is widely used for many big data applications and use cases. CDP Operational Database Experience (COD) is a CDP Public Cloud service that lets you create and manage operational database instances; it is powered by Apache HBase and Apache Phoenix.

To know more about Apache Spark in CDP and CDP Operational Database Experience, see Apache Spark Overview and CDP Operational Database Experience Overview.

Apache Spark enables you to connect directly to databases that support JDBC. When integrating Apache Spark with Apache Phoenix in COD, you can leverage capabilities provided by Apache Phoenix to save and query data across multiple worker nodes, and use SELECT columns and pushdown predicates for filtering. 

In this blog post, let us look at how you can read and write data to COD from Apache Spark. We are going to use an Operational Database COD instance and the Apache Spark available in the Cloudera Data Engineering experience.

Read on for the process.
