
Month: June 2021

Unusual Rounding via DATETIME Math

Eitan Blumin opens Pandora’s Box:

In one of my previous posts, Fun with DATETIME Arithmetics, I introduced a way to use “math” to manipulate datetime values, effectively generating, calculating, and displaying intervals (i.e., the difference between two datetime values). These tricks mostly work with the addition and subtraction operators (+, -).

In one of the paragraphs, I mentioned multiplication and division, and posed the question of why anyone would ever need to do this.

Read on for one not-quite-ordinary reason why you might need this.
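
To make the addition-and-subtraction part concrete, here is a minimal sketch of the general technique (my own example, not Eitan's rounding trick):

    DECLARE @start datetime = '2021-06-01 08:00:00',
            @end   datetime = '2021-06-01 17:30:45';

    -- Subtracting two DATETIME values yields another DATETIME, offset from
    -- the type's base date of 1900-01-01.
    SELECT @end - @start AS raw_interval;              -- 1900-01-01 09:30:45.000

    -- Converting to TIME displays it as an interval; this is only safe for
    -- spans under 24 hours.
    SELECT CONVERT(time(0), @end - @start) AS elapsed; -- 09:30:45

Note that this operator arithmetic works on the legacy DATETIME type; DATETIME2 does not support the + and - operators.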


Storing DATETIMEOFFSETs

Randolph West shows us how the DATETIMEOFFSET type is stored in SQL Server:

Cast your mind back to our discussion on DATETIME2. As you know, DATETIME2 is basically the same as squishing DATE (3 bytes) and TIME (between 3 and 5 bytes depending on the scale) into the same column. You end up with a persisted value that is between 6 and 8 bytes wide.

DATETIMEOFFSET is kinda sorta the same thing, but with more bytes on the end. If you take a look at the Microsoft Docs page, the similar idea of a varied column size is retained. For a scale of 0 fractions of a second you only need 8 bytes to store your value, while the default scale of 7 decimal places for storing seconds requires the full 10 bytes.

Click through to understand the sordid details.
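
A quick way to check those sizes for yourself (a minimal sketch; I would expect DATALENGTH to report the documented storage sizes, 8 bytes at scale 0 and 10 at scale 7):

    DECLARE @d0 datetimeoffset(0) = SYSDATETIMEOFFSET(),
            @d7 datetimeoffset(7) = SYSDATETIMEOFFSET();

    -- Storage size at the minimum and maximum scales.
    SELECT DATALENGTH(@d0) AS bytes_at_scale_0,  -- 8
           DATALENGTH(@d7) AS bytes_at_scale_7;  -- 10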


Building QQ plots in R

The folks at finnstats explain the notion of a Quantile-Quantile plot and show how to create one in R:

To create Q-Q plots in R, we first need to understand the Q-Q plot. The Q-Q plot is a graphical tool to help us examine whether a set of data plausibly came from some theoretical distribution, such as a Normal distribution.

Suppose we are executing a statistical analysis and the test comes under parametric methods, which assume the variable is Normally distributed; we can make use of a Q-Q plot to check that assumption.

It’s just a visual check, not definitive proof, so we can make use of some other statistical test as well. But the Q-Q plot allows us to see at a glance whether our assumption is valid.

Click through to learn more. H/T R-bloggers.
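
For reference, here is the standard construction of a normal Q-Q plot (the textbook definition, not code from the finnstats post). Sort the observations as x_(1) ≤ … ≤ x_(n); then, using one common plotting-position convention, the i-th point drawn is

    \left( \Phi^{-1}\!\left( \frac{i - 0.5}{n} \right),\ x_{(i)} \right), \qquad i = 1, \dots, n,

where \Phi^{-1} is the standard normal quantile function. If the data really are Normally distributed, the points fall close to a straight line.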


Loading Data into Power BI Premium Per User vs Azure Analysis Services

Gilbert Quevauvilliers continues a series on moving from Azure Analysis Services to Power BI Premium Per User:

I have been working with a customer where I have data in both AAS and PPU for the same dataset.

What I have found is that the two are very similar in terms of how long the data takes to load.

With one of my customers as an example, the data was being curated in Asia, whilst the business was running things from Australia. Hosting AAS/PPU where the data was curated meant that data loading was significantly faster. Yes, the reports would have to access the data across the ocean, but this only sends the results, so the performance of the reports was and still is blazingly fast!

Click through for the full story.


Ignoring Updates to Some Statistics

Raul Gonzalez gives some tips on optimizing statistics updates:

For now, everything described might not be such a horrible thing. It’s clear that SQL Server will not take full advantage of the stats on the column [Body] if the queries we are running use wildcards (especially leading ones), but why so much fuss? Well, now is when things start making sense (or not).

Running stats maintenance on these kinds of columns every night can become really expensive, and this is what I’ve found more than once when using the Query Store to look for queries that have a high number of reads.

Read the whole thing.
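
As a hedged sketch of the kind of triage involved (the object and statistic names here are made up for illustration), you can first check how stale and how heavily modified each statistic actually is, and then tell SQL Server to leave a specific one alone:

    -- How recently was each statistic on the table updated, and how many
    -- modifications has it seen since?
    SELECT s.name AS stat_name,
           sp.last_updated,
           sp.modification_counter
    FROM sys.stats AS s
    CROSS APPLY sys.dm_db_stats_properties(s.object_id, s.stats_id) AS sp
    WHERE s.object_id = OBJECT_ID(N'dbo.Posts');

    -- Disable automatic updates for one statistic (the name is hypothetical;
    -- real auto-created stats look like _WA_Sys_00000007_0425A276).
    EXEC sp_autostats N'dbo.Posts', 'OFF', N'_WA_Sys_Body';

Whether you also exclude the statistic from your nightly maintenance job depends on the tooling you use; the point is that not every statistic earns its keep.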


ConvertTo-SQLSelect

Shane O’Neill has a new cmdlet for us:

Don’t get me wrong – I’m aware that you don’t need Excel installed on the computer you’re running these commands from. You still need to save the files somewhere, though. The function doesn’t take data from variables.

I can use dbatools and Write-DbaDbTableData. That function doesn’t require the table to already exist; it will create the table for you if you tell it to. Thank you, -AutoCreateTable, even though I recommend pre-sizing your columns if you want to go with this method.

However, I don’t want to have to create the table beforehand.

Click through to check it out and grab a copy for yourself.
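
For context, the trick that makes a pre-existing table unnecessary is embedding the data in the statement itself. A hand-written example of that general shape (my own illustration, not necessarily the cmdlet’s exact output):

    -- The data travels inside the SELECT, so nothing has to exist on the
    -- server beforehand.
    SELECT v.Name, v.Price
    FROM (VALUES (N'Widget', 19.99),
                 (N'Gadget', 24.50)) AS v (Name, Price);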


Drain Mode in Azure Functions

Rayis Imayev pulls the plug:

As requests to execute Azure Functions increase, more compute resources are provisioned to meet the demand, but only while they are needed (scale-out). As requests fall, any extra resources and application instances drop off automatically (scale-in).

Recently, Microsoft enabled a new Drain mode in Azure Functions that allows for a graceful shutdown of the Azure Functions host by completing in-flight invocations and ceasing to listen for new events from triggering sources.

Read on for the set of steps it performs, as well as the benefit it provides.


What SET NOCOUNT ON Does

Brent Ozar takes us through a simple but useful SET command:

When you’re working with T-SQL, you’ll often see SET NOCOUNT ON at the beginning of stored procedures and triggers.

What SET NOCOUNT ON does is prevent the “1 row affected” messages from being returned for every operation.

Read on to see why this is useful. Also check out the comments for a few other reasons to use it, such as applications written in such a way that they get confused and fail when NOCOUNT is off.
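
A minimal sketch of typical usage (the object names are made up):

    CREATE OR ALTER PROCEDURE dbo.TouchWidgets
    AS
    BEGIN
        -- Suppress the "N rows affected" message for each statement below.
        SET NOCOUNT ON;

        UPDATE dbo.Widgets
        SET ModifiedDate = GETDATE();

        -- @@ROWCOUNT is unaffected; only the informational message goes away.
        SELECT @@ROWCOUNT AS rows_updated;
    END;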


Building a Payoff Diagram in R

Holger von Jouanne-Diedrich builds out payoff diagrams:

Not many people understand the financial alchemy of modern financial investment vehicles, like hedge funds, that often use sophisticated trading strategies. But everybody understands the meaning of rising and falling markets. Why not simply translate one into the other?

If you want to get your hands on a simple R script that creates an easy-to-understand plot (a profit & loss profile or payoff diagram) out of any price series, read on!

Click through for several examples of code and financial instruments.
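
As a reference point for what such a diagram shows, take the textbook option example (not Holger’s empirical construction): a long call with strike K bought for premium c is worth, as a function of the underlying’s terminal price S_T,

    \text{payoff}(S_T) = \max(S_T - K,\ 0), \qquad \text{P\&L}(S_T) = \max(S_T - K,\ 0) - c,

and the payoff diagram is simply that P&L plotted against S_T. Holger’s script recovers an analogous profile empirically from a price series rather than from a formula.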


Embedding Power BI into Jupyter Notebooks

Dennes Torres takes a look at a new Power BI feature:

Microsoft recently announced the ability to include Power BI reports inside Jupyter notebooks. After overcoming the dazzle of this exciting feature, what comes to my mind is: “Why do we need this?”

I’m far from being a Jupyter notebook expert, but as far as I know, they are used for interactive analysis. Why, in the middle of an interactive analysis, would I need to get a Power BI Report?

Even if the Power BI Report is not exactly what I need, I could continue the analysis in Power BI. Why should I move it to Jupyter and make this kind of integration with an existing report?

Read on to see what you can do with it. As far as how you might be able to use it, that remains an open question.
