John Mount has a quick tip for R users:
Here is an R tip. Need to quote a lot of names at once? Use qc().
This function is part of wrapr.
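A quick illustration of the idea (my own toy example, not John's): pass bare names to qc() and it hands back a quoted character vector.

library(wrapr)

# "quoting concatenate": no quotation marks needed around the names
qc(Sepal.Length, Sepal.Width, Petal.Length)
# [1] "Sepal.Length" "Sepal.Width"  "Petal.Length"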
Prashant Shekhar gives us an overview of Principal Component Analysis using R:
PCA changes the axis towards the direction of maximum variance and then takes the projection on this new axis. The direction of maximum variance is represented by the first principal component (PC1). There are multiple principal components, depending on the number of dimensions (features) in the dataset, and they are orthogonal to each other. The maximum number of principal components is the same as the number of dimensions in the data. For example, in the figure above, for two-dimensional data there will be at most two principal components (PC1 & PC2). The first principal component captures most of the variance, followed by the second principal component, the third principal component, and so on. Dimension reduction comes from the fact that it is possible to discard the last few principal components, as they do not capture much variance in the data.
PCA is a useful technique for reducing dimensionality and removing covariance.
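For a quick taste, here is a minimal sketch using base R's prcomp() on the built-in iris measurements (an illustrative example, not from the post):

# Center and scale the features, then run PCA
pca <- prcomp(iris[, 1:4], center = TRUE, scale. = TRUE)

summary(pca)        # proportion of variance explained by each principal component
head(pca$x[, 1:2])  # projections of the data onto the first two principal components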
I emphasize the link between a tidy dataframe and a normalized data structure:
The kicker, as Wickham describes on pages 4-5, is that normalization is a critical part of tidying data. Specifically, Wickham argues that tidy data should achieve third normal form.
Now, in practice, Wickham argues, we tend to need to denormalize data because analytics tools prefer having everything connected together, but the way we denormalize still retains a fairly normal structure: we still treat observations and variables like we would in a normalized data structure, so we don’t try to pack multiple observations in the same row or multiple variables in the same column, reuse a column for multiple purposes, etc.
I had an inkling of this early on and figured I was onto something clever until I picked up Wickham’s vignette and read that yeah, that’s exactly the intent.
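As a small illustration of the point (an assumed toy example using tidyr's older gather() function, not taken from the post), here is what moving from a spread-out layout to a tidy one looks like:

library(tidyr)

# Untidy: one variable (year) is spread across the column headers
untidy <- data.frame(country = c("A", "B"),
                     `2000` = c(100, 200),
                     `2001` = c(110, 210),
                     check.names = FALSE)

# Tidy: one observation per row, one variable per column
gather(untidy, key = "year", value = "value", -country)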
I have wrapped up my ggplot2 series, with the last post being on radar charts:
First, we need to install ggradar and load our relevant libraries. Then, I create a quick standardization function which divides our variable by the max value of that variable in the vector. It doesn’t handle niceties like divide by 0, but we won’t have any zero values in our data frames.
The radar_data data frame starts out simple: build up some stats by continent. Then I call the mutate_each_ function to call standardize for each variable in the vars set. mutate_each_ is deprecated and I should use something different like mutate_at, but this does work in the current version of ggplot2 at least.
Finally, I call the ggradar() function. This function has a large number of parameters, but the only one you absolutely need is plot.data. I decided to change the sizes because by default, it doesn’t display well at all on Windows.
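A minimal sketch of that flow (assumptions: ggradar installs from GitHub rather than CRAN, and the gapminder data and column choices here are illustrative rather than the post's exact code):

library(dplyr)
library(ggradar)
library(gapminder)

# Standardize a variable by dividing by its maximum value
standardize <- function(x) x / max(x)

radar_data <- gapminder %>%
  filter(year == 2007) %>%
  group_by(continent) %>%
  summarize(lifeExp = mean(lifeExp), pop = mean(pop), gdpPercap = mean(gdpPercap)) %>%
  mutate_at(vars(lifeExp, pop, gdpPercap), standardize)  # mutate_at instead of the deprecated mutate_each_

# plot.data is the only required argument
ggradar(radar_data)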
It was a lot of fun putting this series together. I think the most important part of the series was learning just how easy ggplot2 is once you sit down and think about it in a systematic manner.
Dean Attali announces a new shiny package:
shinyalert uses the sweetalert JavaScript library to create simple and elegant modals in Shiny. Modals can contain text, images, OK/Cancel buttons, an input to get a response from the user, and many more customizable options. A modal can also have a timer to close automatically.
Simply call shinyalert() with the desired arguments, such as a title and text, and a modal will show up. In order to be able to call shinyalert() in a Shiny app, you must first call useShinyalert() anywhere in the app’s UI.
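A minimal sketch of that pattern (an assumed toy app, not one of Dean's examples):

library(shiny)
library(shinyalert)

ui <- fluidPage(
  useShinyalert(),  # must be called somewhere in the UI
  actionButton("show", "Show modal")
)

server <- function(input, output, session) {
  observeEvent(input$show, {
    shinyalert(title = "Hello", text = "This modal comes from shinyalert().")
  })
}

shinyApp(ui, server)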
It does look nice. Check out Dean’s GitHub repo for more information. H/T R-Bloggers
From the plots above, I find that regardless of the different levels of diastolic and systolic blood pressure, there is no substantial correlation between cholesterol and blood pressure. However, it is better to build the correlation line with geom_smooth or to calculate the Spearman correlation, although in this post we focus only on the visualization. Let's build the correlation line.
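A minimal sketch of both suggestions (the health data frame and its column names below are hypothetical stand-ins, not the post's data):

library(ggplot2)

# Hypothetical data standing in for the post's dataset
set.seed(1)
health <- data.frame(cholesterol = rnorm(100, mean = 200, sd = 30),
                     blood_pressure = rnorm(100, mean = 120, sd = 15))

# Correlation line via geom_smooth
ggplot(health, aes(x = cholesterol, y = blood_pressure)) +
  geom_point() +
  geom_smooth(method = "lm")

# Spearman rank correlation
cor(health$cholesterol, health$blood_pressure, method = "spearman")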
Click through for several examples of visuals.
David Smith points out a bunch of the ways that Microsoft integrates R into products:
You can call R from within some data-oriented Microsoft products, and apply R functions (from base R, from packages, or R functions you’ve written) to the data they contain.
SQL Server (the database) allows you to call R from SQL, or publish R functions to a SQL Server for database administrators to use from SQL.
Power BI (the reporting and visualization tool) allows you to call R functions to process data, create graphics, or apply statistical models to data.
Visual Studio (the integrated development environment) includes R as a fully-supported language with syntax highlighting, debugging, etc.
R is supported in various cloud-based services in Azure, including the Data Science Virtual Machine and Azure Machine Learning Studio. You can also publish R functions to Azure with the AzureML package, and then call those R functions from applications like Excel or apps you write yourself.
They’re pretty well invested in both R and Python, which is a good thing.
I have a post on extending ggplot2’s functionality with cowplot:
Notice that I used geom_path(). This is a geom I did not cover earlier in the series. It’s not a common geom, though it does show up in charts like this where we want to display data for three variables. The geom_line() geom follows the basic rules for a line: that the variable on the y axis is a function of the variable on the x axis, which means that for each element of the domain, there is one and only one corresponding element of the range (and I have a middle school algebra teacher who would be very happy right now that I still remember the definition she drilled into our heads all those years ago).
But when you have two variables which change over time, there’s no guarantee that this will be the case, and that’s where geom_path() comes in. The geom_path() geom does not plot y based on sequential x values, but instead plots values according to a third variable. The trick is, though, that we don’t define this third variable: it’s implicit in the data set order. In our case, our data frame comes in ordered by year, but we could decide to order by, for example, life expectancy by setting data = arrange(global_avg, m_lifeExp). Note that in a scenario like these global numbers, geom_line() and geom_path() produce the same output because we’ve seen consistent improvements in both GDP per capita and life expectancy over the 55-year data set. So let’s look at a place where that’s not true.
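A minimal sketch of the distinction, assuming the gapminder data set and the m_* column names mentioned above (the exact code in the post may differ):

library(dplyr)
library(ggplot2)
library(gapminder)

global_avg <- gapminder %>%
  group_by(year) %>%
  summarize(m_gdpPercap = mean(gdpPercap), m_lifeExp = mean(lifeExp))

# geom_path() connects points in data frame order (here, by year);
# geom_line() would instead connect them in order of the x-axis values
ggplot(global_avg, aes(x = m_gdpPercap, y = m_lifeExp)) +
  geom_path()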
The cowplot library gives you an easier way of linking together different plots of different sizes in a couple lines of code, which is much easier than using ggplot2 by itself.
Abdul Majed Raja has a few tidyverse-friendly data transformation tips:
Splitting a column into many columns is a clichéd data transformation case that you can hardly avoid while performing data transformation. While it’s straightforward to do this in Microsoft Excel, it’s slightly tricky using data analytics languages. That was true until the separate() function from tidyr came along.
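A quick example of separate() in action (an assumed toy data frame, not the post's data):

library(tidyr)

df <- data.frame(full_name = c("Smith, John", "Jones, Mary"))

# Split one column into two on the ", " delimiter
separate(df, full_name, into = c("last_name", "first_name"), sep = ", ")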
These are small but helpful tips.
Emrah Mete simulates a random walk in R:
Let’s consider a game where a gambler is likely to win $1 with a probability of p and lose $1 with a probability of 1-p. The player starts the game with X dollars in hand. The player plays the game until the money in his hand reaches N (N > X) or he has no money left. What is the probability that the player will reach the target value? (We know that the player will not leave the game until he reaches the N value he wants to win.)
The problem in the story above is known in the literature as Gambler’s Ruin or Random Walk. In this article, I will simulate this problem in R with different settings and examine how the game results change.
Click through for the script and analysis. There’s a reason they call this game the gambler’s ruin.
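If you want a taste before clicking through, here is a minimal simulation sketch (my own version, not Emrah's script):

# Play one game: start with X dollars, stop at 0 or at the target N
simulate_game <- function(X, N, p) {
  money <- X
  while (money > 0 && money < N) {
    money <- money + ifelse(runif(1) < p, 1, -1)
  }
  money == N  # TRUE if the player reached the target
}

set.seed(42)
# Estimated probability of reaching N = 20 from X = 10 in a fair game (p = 0.5)
mean(replicate(10000, simulate_game(X = 10, N = 20, p = 0.5)))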