I had over 140 files to process. That’s not usually a big deal – I normally use SQL Server Integration Services to loop through network folders, connect to hundreds of spreadsheets and extract the source data.
But this relies on the data being in a tabular format (like a dataframe or database table).
A quick glance at the first few sheets confirmed I could not use this approach – the data was not in tabular format. Instead it was laid out in a format suited to viewing on screen, with the required data scattered across different ranges throughout each sheet (over 100 rows and many columns). It wasn’t going to be feasible to point SSIS at different locations within each sheet. (It can be done, but it’s pretty complex and I didn’t have time to experiment.)
The other challenge was that over time, changes to design meant that data moved location e.g. dates that were originally in cell C2 moved to D7, then moved again as requirements evolved. There were 14 different templates in all, each with subtle changes. Each template was going to need a custom solution to extract the data.
This is a good look at how R can be about more than “just” statistical analysis.
The main idea of a support vector machine is to find the optimal hyperplane (a line in 2D, a plane in 3D, and a hyperplane in more than 3 dimensions) which maximizes the margin between two classes. In this case, the two classes are red and blue balls. In layman’s terms, it is finding the optimal separating boundary between two classes (events and non-events).
Deepanshu then goes on to implement this in R.
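A minimal sketch of the idea, using the e1071 package (an assumption – the linked post may use a different library) on two made-up classes of “balls”:

```r
library(e1071)

set.seed(42)
# Two roughly separable classes of points in 2D
red  <- data.frame(x = rnorm(20, mean = 1), y = rnorm(20, mean = 1), class = "red")
blue <- data.frame(x = rnorm(20, mean = 4), y = rnorm(20, mean = 4), class = "blue")
balls <- rbind(red, blue)
balls$class <- factor(balls$class)

# Fit a linear SVM: it finds the maximum-margin separating line
fit <- svm(class ~ x + y, data = balls, kernel = "linear")
table(predict(fit, balls), balls$class)  # confusion matrix on training data
```

With a non-linear boundary you would swap `kernel = "linear"` for, say, `"radial"`.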
Let’s start thinking in a logical way the steps that one should perform once we have the data imported into R.
- The first step would be to discover what’s in the data file that was exported. To do this, we can:
- Use the head function to view the first few rows of the data set. By default, head shows the first 6 rows. Ex:
- Use str to view the structure of the imported data. Ex:
- Use summary to view a summary of the data. Ex:
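The three inspection steps above can be sketched like this, using the built-in mtcars data set as a stand-in for an imported file:

```r
# In practice this would be something like: df <- read.csv("your_export.csv")
df <- mtcars      # built-in data set, used here as a stand-in

head(df)          # first 6 rows by default; head(df, 10) for more
str(df)           # structure: dimensions, column types, sample values
summary(df)       # per-column summaries (min, median, mean, max, ...)
```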
There’s a lot to data analysis, but this is a good start.
a. Load the state.x77 dataset.
b. Convert the state.x77 dataset to a dataframe.
c. Rename the Life Exp variable to Life.Exp, and HS Grad to HS.Grad. (This avoids problems with referring to these variables when specifying a model.)
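A quick sketch of those first steps in base R (the exercises’ own answers may differ in style):

```r
data(state)                      # a. load the built-in state data sets
df <- as.data.frame(state.x77)   # b. state.x77 is a matrix; convert it

# c. rename the variables whose names contain spaces
names(df)[names(df) == "Life Exp"] <- "Life.Exp"
names(df)[names(df) == "HS Grad"]  <- "HS.Grad"
```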
Click through for the rest of the exercises as well as the answers.
In addition to the R code, the ImageMagick program needs to be installed on your machine as well. Speed, quality, and many other parameters can be set when creating the animated GIF.
An animated GIF can also be included in your SSRS report, your SharePoint site, or any other site – like my blog 🙂 – and it will remain animated. In Power BI, unfortunately, importing an animated GIF as a picture will not work.
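One common way to do this is the animation package, which shells out to the installed ImageMagick utility – a minimal sketch, with the file name and interval chosen purely for illustration:

```r
library(animation)

# Build a GIF from ten throwaway plots; interval controls the speed,
# and further quality options can be set via ani.options().
saveGIF({
  for (i in 1:10) plot(rnorm(100), main = paste("Frame", i))
}, movie.name = "demo.gif", interval = 0.5)
```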
Be very careful with this, as not everything supports animated GIFs and you can make some really painful graphs if you try hard enough…
Matt Willis has a two-parter on R Tools for Visual Studio. First, an introduction:
Once all the prerequisites have been installed it is time to move onto the fun stuff! Open up Visual Studio 2015 and add an R project: File > Add > New Project and select R. You will be presented with the screen below; name the project AutomobileRegression and select OK.
Microsoft have done a fantastic job of realising that the settings and toolbars required for R are very different from those required when using Visual Studio, so they have split them out and made it very easy to switch between the two. To switch to the settings designed for R, go to R Tools > Data Science Settings; you’ll be presented with two pop-ups – select Yes on both to proceed. This will now allow you to use all those nifty shortcuts you have learnt in RStudio. Any time you want to go back to the original settings, you can do so by going to Tools > Import/Export Settings.
Whilst you can implement very complex machine learning algorithms in R, for anyone new to machine learning I personally believe Azure Machine Learning is a more suitable tool for an introduction to the concepts.
Please refer to this blog post, where I have described how to create the Azure Machine Learning web service I will be using in the next section. You can either use your own web service or follow that other post, which was written especially so you can follow along with this one.
Coming back to RTVS we want to execute the web service we have created.
RTVS has grown on me. It’s still not R Studio and may never be, but they’ve come a long way in a few months.
Subset and Apply means that I have a dataset where, based on some conditions, all the rows have to be put into buckets and then a function has to be applied to each bucket.
The simple version of the problem can be solved with a GROUP BY in T-SQL; the not-so-simple version requires that all columns and rows of the dataset be retained for further processing, even if those columns are not used to subset or bucket the rows.
One quick example of this is running totals of orders for each customer, which Tom answers using T-SQL, R, and Power BI. Click through for those three solutions.
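As a flavour of the R version, a per-customer running total can be done with base R alone (the column names here are made up for illustration, not taken from Tom’s post):

```r
# Hypothetical orders table
orders <- data.frame(
  customer = c("A", "A", "B", "A", "B"),
  amount   = c(10, 20, 5, 30, 15)
)

# ave() applies cumsum within each customer bucket while
# keeping every row of the original dataset in place
orders$running_total <- ave(orders$amount, orders$customer, FUN = cumsum)
orders
```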
SQL Server has very powerful statements for converting data to and from JSON when storing it into / retrieving it from the engine (FOR JSON, JSON_VALUE, etc.). And since JSON is gaining popularity for data exchange, I was curious to try it in combination with R.
I will simply convert a system table into a JSON array using the FOR JSON clause.
There’s an R library. There’s always an R library.
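In this case a common choice is jsonlite (an assumption – the post may use a different package); the sample JSON below is invented to stand in for the FOR JSON output:

```r
library(jsonlite)

# A stand-in for what FOR JSON might return from a system table
json <- '[{"name":"sysobjects","id":1},{"name":"syscolumns","id":2}]'

df <- fromJSON(json)   # JSON array -> R data frame
toJSON(df)             # and back to JSON again
```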
So the R code said that there are three clusters, and I used medians to generate data around them. In addition, I also tested the result with partitioning around medoids (as opposed to hierarchical clustering), and the results from both techniques yield clean clusters.
Clustering models can be powerful for discovering commonalities, and that might help you find a number of queries which all behave in some sub-optimal way without having to trawl through every procedure’s code.
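For reference, partitioning around medoids is available in the cluster package; a minimal sketch on synthetic data with three groups (my own toy data, not the post’s query metrics):

```r
library(cluster)

set.seed(1)
# Three synthetic groups of 2D points
pts <- rbind(matrix(rnorm(40, mean = 0),  ncol = 2),
             matrix(rnorm(40, mean = 5),  ncol = 2),
             matrix(rnorm(40, mean = 10), ncol = 2))

fit <- pam(pts, k = 3)   # partitioning around medoids with k = 3
table(fit$clustering)    # cluster sizes
```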
This post will demonstrate some examples of what pipelearner can currently do. For example, the figure below plots the results of a model fitted to 10% to 100% (in 10% increments) of the training data in 50 cross-validation pairs. Fitting all of these models takes about four lines of code in pipelearner.
Click through for some very interesting examples.