Let’s point out a few things.
First, the fill color scale has been carefully crafted to optimally show differences between countries.
Second, we are simultaneously using the highlighting technique to highlight the OPEC countries.
Finally, notice that we’re using the title to “tell a story” about the highlighted data.
All told, there is a lot going on in this example.
It’s a very interesting example of building higher-quality visuals in R. I also give them kudos for picking five colors which work for people with every kind of Color Vision Deficiency. H/T R-Bloggers
Customer retention curves are essential to any business looking to understand its clients and will go a long way towards explaining other things like sales figures or the impact of marketing initiatives. They are an easy way to visualize a key interaction between customers and the business, which is to say, whether or not customers return — and at what rate — after the first visit.
The first step to building a customer retention curve is to identify those who visited your business during the reference period, which I will call p1. It is important that the chosen period length is reasonable and reflects the expected frequency of visits.
Different types of businesses are going to expect their customers to return at different rates:
A coffee shop may choose to use an expected frequency of visits of once a week.
A supermarket may choose a longer period, perhaps two weeks or one month.
In this case, I think the motivation portion is better than the queries themselves, but the article definitely works as an inspiration for building out good measures of frequency of occurrence.
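To make the retention-curve idea concrete, here is a minimal Python sketch: take the cohort of customers seen in the reference period p1, then measure what fraction of them return in each subsequent period. The visit records, function name, and weekly period length below are all invented for illustration, not taken from the article.

```python
from datetime import date

def retention_curve(visits, p1_start, period_days, n_periods):
    """Fraction of period-1 (p1) customers who return in each later period.

    visits: iterable of (customer_id, visit_date) tuples.
    period_days: expected visit frequency (e.g. 7 for a coffee shop).
    """
    def period_index(d):
        # Which period a visit falls into, counting from p1_start (p1 is index 0).
        return (d - p1_start).days // period_days

    # The cohort: customers seen during the reference period p1.
    cohort = {c for c, d in visits if period_index(d) == 0}
    if not cohort:
        return []

    curve = []
    for p in range(1, n_periods + 1):
        # Cohort members who came back during period p.
        returned = {c for c, d in visits if c in cohort and period_index(d) == p}
        curve.append(len(returned) / len(cohort))
    return curve

# Tiny made-up example with weekly periods:
visits = [
    ("a", date(2018, 1, 1)), ("b", date(2018, 1, 3)),   # both seen in p1
    ("a", date(2018, 1, 9)),                            # a returns in week 2
    ("a", date(2018, 1, 16)), ("b", date(2018, 1, 17)), # both return in week 3
]
print(retention_curve(visits, date(2018, 1, 1), 7, 2))  # [0.5, 1.0]
```

Swapping `period_days` between 7 and 30 captures the coffee-shop versus supermarket distinction the excerpt describes.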
Additionally, we have taken the first steps to provide metrics and observability in KSQL. This greatly enhances the operability of KSQL in cases such as monitoring KSQL capacity or diagnosing issues. You can now see different metrics for streams, tables, and queries for every KSQL server instance.
For streams and tables, we now have the DESCRIBE EXTENDED <stream/table name> statement to show statistics such as the number of messages processed per second, total messages, and the time when the last message was received, as well as corresponding failure metrics.
Looks like they’re building it out a piece at a time.
All of the options I will show require you to have R installed on your machine. I am using R version 3.4.3, which I got here, as well as RStudio (an IDE: Integrated Development Environment), version 1.1.383, which I obtained here. You can also use Microsoft R Open, which you can get here. All are free. I am choosing base R and RStudio because I want to play with/show the use of non-Microsoft tools in conjunction with Microsoft tools. I am using version 2.53.4954.481 64-bit (December 2017) of Power BI Desktop. Note that things could look/behave differently in other versions of Power BI Desktop.
For this post, I am using the well-known Iris dataset, which you can read about here. I downloaded the zip file from here to obtain a CSV file of the dataset for one of my examples. The Iris dataset is also included in the “datasets” package in R, which I will use as well.
Note: A key R concept to understand is that of a data frame, which is essentially just data in a tabular format. In a data frame, the “columns” are actually called “variables.”
Once you have R and an R IDE installed, Power BI Desktop will detect them. You can see this in the Power BI Desktop Options.
Mark shows you step by step using some snazzy SnagIt imagery.
For some time we have been teaching R users: “when working with wide tables on Spark or on databases, narrow to the columns you really want to work with early in your analysis.”
The idea behind the advice is that working with fewer columns makes for quicker queries.
The issue arises because wide tables (200 to 1000 columns) are quite common in big-data analytics projects. Often these are “denormalized marts” that are used to drive many different projects. For any one project only a small subset of the columns may be relevant in a calculation.
Some wonder whether this is really an issue, or something one can ignore in the hope that the downstream query optimizer will fix the problem. In this note we will show that the effect is real.
This is good advice for more than just dealing with R on Spark.
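As a small illustration of the “narrow early” pattern outside Spark, here is a Python sketch using pandas as a stand-in for a wide table; the column names and CSV contents are invented, and `usecols` plays the role of selecting only the needed columns at read time rather than loading everything and dropping columns later.

```python
import io
import pandas as pd

# A tiny stand-in for a wide "denormalized mart" (hypothetical columns).
wide_csv = "id,x1,x2,x3,x4,x5\n1,10,20,30,40,50\n2,11,21,31,41,51\n"

# Narrow early: materialize only the columns the analysis needs,
# instead of loading all of them and discarding most afterwards.
narrow = pd.read_csv(io.StringIO(wide_csv), usecols=["id", "x1"])

print(list(narrow.columns))  # ['id', 'x1']
```

The same principle applies on Spark (`df.select(...)` before joins and aggregations) or in SQL (listing columns instead of `SELECT *`).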
Do Disabled Indexes Affect Missing Index Recommendations?
I’m so glad you asked! Let’s take a look.
Read on to learn if Betteridge’s law of headlines holds.
Happy holidays, everyone! OK, this isn’t even remotely related to SQL Server, but it sounded fun. I found someone playing music using PowerShell (I lost the link, sorry) and thought: Christmas music!
Not the most efficient way to generate music, but it’ll do.
With what I have learned from Pester, it seems best to test each command on its own. That got me thinking: what if I walk into a large environment and want to see an enterprise summary of best practices? I started working on a few different versions, with the ultimate goal of using the dbatools function Get-DbaRegisteredServer. I have included a few examples of Test functions that include best practices, as well as some default database best practices, all from dbatools.
In my first run, I wanted to make sure that I could pass multiple servers with multiple tests, and came up with the below script. You will notice that in the top section, which uses the Get-Dba functions, I had to supply my own values for the properties I am testing. In the bottom section, you will notice that the dbatools Test-Dba functions already have the IsBestPractice property, and that is all you have to test against.
Read on for a sample of what you can do.
The AlwaysOn Health Events were consistent with the SQL Server logs. I discovered that the Windows Server Failover Cluster had been down (but was back online), and that the AlwaysOn group had failed over and back. But the culprit behind my secondary replica and databases being disconnected was an endpoint issue. I found in my SQL Server logs that the service account running AlwaysOn had, for some reason, lost its permission to connect to the endpoint:
Click through for the entire troubleshooting process as well as the solution.