Whilst down the rabbit hole, I discovered in passing, via a Beanstalk article, that there’s actually a command line interface for PuTTY called plink. D’oh! This changed the whole direction of the solution I present throughout.
With plink.exe as the command line interface for PuTTY, we can connect to our remote network using the key pre-authenticated via Pageant. As a consequence, we can now use the shell() command in R to invoke plink, and then connect to our database using the standard Postgres driver.
PuTTY is a must-have for any Windows box.
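As a minimal sketch of the tunnel-plus-driver approach, assuming Pageant is already running with the key loaded and plink.exe is on the PATH; the host, ports, and database names below are hypothetical placeholders:

```r
# Hypothetical sketch: open an SSH tunnel with plink, then connect over it.
# All names here are placeholders, not real servers.
tunnel_cmd <- paste(
  "plink -ssh dbuser@remote.example.com",
  "-N -L 5433:localhost:5432"  # forward local port 5433 to the remote Postgres
)
# shell(tunnel_cmd, wait = FALSE)  # uncomment on Windows to start the tunnel
# library(RPostgreSQL)
# con <- dbConnect(PostgreSQL(), host = "localhost", port = 5433,
#                  dbname = "mydb", user = "dbuser", password = "...")
```

With the tunnel up, the Postgres driver sees an ordinary local connection on port 5433 and never needs to know about SSH at all.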
Mockaroo is a really impressive service with a wide spread of different data types. They also have simple ways of adding things like within-group differences to data so that you can mock realistic class differences. They use the freemium model, so you can get a thousand rows per download, which is pretty sweet. The big BUT you can feel coming on is this – it’s a GUI! I don’t want to have to spend time hand-cranking a data extract.
Thankfully, they have an API for getting data too, and it’s pretty simple to use, so I’ve started making a package for it.
Steph is working on an R package, so this is pretty exciting.
RTVS is an IDE and as such you can use it with any recent version of R such as 3.2.x. If you install the free Microsoft R Open, you automatically get some turbo options such as threading support on multi-processor machines, providing significant speedup for a variety of analytical functions, as well as package collections check-pointed to a particular date/version. Microsoft R Server provides Big Data support and additional advanced features that can be used with SQL Server.
This is an early release, so expect a few bugs and some missing functionality. It also isn’t RStudio—it’s RStudio several years ago. But what it does nicely is integrate with the rest of your stack: you can tie together the R code, the C#/F# code which helps clean data, the SQL Server project which holds your data, etc. etc.
If you have a database of credit-card transactions with a small percentage tagged as fraudulent, how can you create a process that automatically flags likely fraudulent transactions in the future? That’s the premise behind the latest Data Science Deep Dive on MSDN. This tutorial provides a step-by-step guide to using the R language and the big-data statistical models of the RevoScaleR package of SQL Server 2016 R Services to build and use a predictive model to detect fraud.
This looks to be a follow-up from the fraud detection series.
Operations that are conceptually simple can be difficult to perform using SQL. Consider the common requirements to pivot or transpose a dataset. Each of these actions is conceptually straightforward but complex to implement in SQL. The examples that follow are somewhat verbose, but the details are not significant. The main point is that, by using specialized functions outside of SQL, R makes trivial some operations that would otherwise require complex SQL statements. The contrast in the amount of code required is striking. The simpler approach allows you to focus attention on the scientific or business problem at hand, rather than expending energy reading documentation or laboriously testing complex statements.
I consider this where the second-order value of R comes in. The initial “wow” factor is in how easy you can plot things, and this ease of data cleansing is the next big time-saver.
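To make the contrast concrete, here’s a small sketch using base R’s reshape() to pivot a long data frame to wide form – the kind of thing that needs a pile of CASE expressions and a GROUP BY in SQL. The data is made up for illustration:

```r
# Toy long-format data: one row per student per subject.
df <- data.frame(student = c("A", "A", "B", "B"),
                 subject = c("math", "art", "math", "art"),
                 score   = c(90, 85, 78, 92))

# One call pivots it to wide form: one row per student, a column per subject.
wide <- reshape(df, idvar = "student", timevar = "subject",
                direction = "wide")
```

The result has columns student, score.math, and score.art; the equivalent SQL pivot would be several times the length and brittle to changes in the subject list.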
If you were using CTP 3.0 and later ran an in-place upgrade to CTP 3.2, this will silently break R Services. Uninstalling and reinstalling the R component will not fix the problem, but it can be fixed. There are a few interrelated issues here, so bear with me.
Hopefully you don’t run into this issue, but if you do, at least there’s a fix.
Another mistake I see a lot from beginning R students is forgetting that R is case-sensitive. In other words, the variable “a” is a separate thing from the variable “A”.
NOTE: Package names can be case-sensitive as well.
A lot of this comes down to “learn the syntax.”
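A quick illustration of the point at the console:

```r
a <- 1
A <- 2
identical(a, A)  # FALSE: R treats a and A as distinct variables
```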
However, it seems that there might be two kinks in the line:
The first kink occurs somewhere between the 800m distance and the mile. It seems that the sprinting distances (and the 800m is sometimes called a long sprint) have different dynamics from the events up to the marathon.
The analysis is done in R, and the code is available in the post. Check it out.
But R is also part of an entire ecosystem of open tools that can be linked together. For example, Markdown, Pandoc, and knitr combine to make R an incredible tool for dynamic reporting and reproducible research. If your chosen output format is HTML, you’ve linked into yet another open ecosystem with countless further extensions.
Generating a page from R is one of those good ideas that I probably don’t want to see in a production environment.
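As a sketch of how little that toolchain asks of you, here is a minimal R Markdown document that knitr and Pandoc will turn into a self-contained HTML report; the title and chunk contents are placeholders:

````markdown
---
title: "Minimal dynamic report"
output: html_document
---

Some narrative text, interleaved with live code:

```{r}
summary(cars)  # this chunk runs each time the document is rendered
```
````

Rendering it with `rmarkdown::render("report.Rmd")` executes the chunk and embeds its output in the HTML, which is what makes the report reproducible rather than copy-pasted.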
Not only can we create and download custom visuals from PowerBI.com to extend the capabilities of Power BI, we can also use R to create a ridiculous number of powerful visualizations. If you can get the data into Power BI, you can use R to perform interesting statistical analysis and create some pretty cool, interactive visuals.
Dustin and Jan Mulkens are working on similar posts at the same time, so watch both of them.
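As a sketch of how simple the R side is: inside a Power BI R visual, the fields you drag onto the visual arrive as a data frame named dataset, and the script just has to draw a plot. The fake dataset below stands in for what Power BI would supply:

```r
# Stand-in for the data frame Power BI passes to an R visual;
# the column names and values here are made up for illustration.
dataset <- data.frame(day   = 1:10,
                      sales = c(3, 5, 4, 8, 7, 12, 10, 15, 14, 18))

# Whatever the script plots is what the visual renders on the report page.
plot(dataset$day, dataset$sales, type = "b",
     xlab = "Day", ylab = "Sales", main = "Sales trend")
```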