Press "Enter" to skip to content

Category: R

Using The Map Function In R

Nicolas Attalides on using purrr:

The best place to start when exploring the purrr package is the map function. The reader will notice that these functions are utilised in a very similar way to the apply family of functions. The subtle difference is that the purrr functions are consistent and the user can be assured of the output – as opposed to some cases when using, for example, sapply, as I demonstrate later on.
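
To see the consistency point in action, here is a minimal sketch (not taken from the linked post): sapply() picks its own return type depending on the data, while purrr::map() always returns a list and map_dbl() always returns a double vector or fails loudly.

library(purrr)

# sapply() simplifies when it can, so its return type varies with the input.
sapply(1:3, function(x) x^2)        # a numeric vector
sapply(1:3, function(x) c(x, x^2))  # now a 2x3 matrix

# map() always returns a list; map_dbl() always returns a double vector
# (or errors if an element cannot be reduced to a single double).
map(1:3, ~ .x^2)
map_dbl(1:3, ~ .x^2)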

My considered belief: Always Be Purrring.  H/T R-bloggers

Comments closed

Tuning xgboost Models In R

Gabriel Vasconcelos has a new series on tuning xgboost models:

My favourite Boosting package is the xgboost, which will be used in all examples below. Before going to the data let’s talk about some of the parameters I believe to be the most important. These parameters mostly are used to control how much the model may fit to the data. We would like to have a fit that captures the structure of the data but only the real structure. In other words, we do not want the model to fit noise because this will be translated in a poor out-of-sample performance.

  • eta: Learning (or shrinkage) parameter. It controls how much information from a new tree will be used in the Boosting. This parameter must be bigger than 0 and limited to 1. If it is close to zero we will use only a small piece of information from each new tree. If we set eta to 1 we will use all information from the new tree. Big values of eta result in a faster convergence and more over-fitting problems. Small values may need too many trees to converge.

  • colsample_bylevel: Just like Random Forests, sometimes it is good to look only at a few variables to grow each new node in a tree. If we look at all variables the algorithm needs fewer trees to converge, but looking at, for example, 2/3 of the variables may result in models more robust to over-fitting. There is a similar parameter called colsample_bytree that re-samples the variables in each new tree instead of each new node.
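
As a rough illustration of where these parameters sit in a call (a minimal sketch with made-up toy data, not code from the linked series):

library(xgboost)

# Toy regression data, purely for illustration.
set.seed(123)
X <- matrix(rnorm(500 * 10), ncol = 10)
y <- 2 * X[, 1] + rnorm(500)

fit <- xgboost(
  data    = X,
  label   = y,
  nrounds = 200,                  # a small eta usually needs more trees
  params  = list(
    eta               = 0.05,     # shrinkage: use only 5% of each new tree
    colsample_bylevel = 2 / 3,    # sample 2/3 of the columns at each level
    max_depth         = 4
  ),
  verbose = 0
)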

Read the whole thing.  H/T R-bloggers

Comments closed

Converting Between Time Series Classes In R

Christoph Sax announces a new R library:

tsbox, now freshly on CRAN, provides a set of tools that are agnostic towards existing time series classes. It is built around a set of converters, which convert time series stored as ts, xts, data.frame, data.table, tibble, zoo, tsibble or timeSeries to each other.
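
For a quick feel for the converter idea (a minimal sketch, not from the announcement), the ts_*() functions each convert a series to the named class, whatever class it arrives in:

library(tsbox)

x_ts  <- AirPassengers   # a classic "ts" object
x_xts <- ts_xts(x_ts)    # ts -> xts
x_df  <- ts_df(x_xts)    # xts -> data.frame
x_tbl <- ts_tbl(x_df)    # data.frame -> tibble
ts_ts(x_tbl)             # and back to "ts"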

If you have to work with time series data, this will be a useful library.  H/T R-Bloggers

Comments closed

Power BI Custom Visuals In Excel

David Smith notes that Excel is getting a bit of an upgrade:

This week at the BUILD conference, Microsoft announced that Power BI custom visuals will soon be available as charts with Excel. You’ll be able to choose a range of data within an Excel workbook, and pass those data to one of the built-in Power BI custom visuals, or one you’ve created yourself using the API.

David’s point is that you can bring in R charts, but it extends to more than that.

Comments closed

Building Flow Charts In R

Alan Haynes shows how to build flow charts in R using the grid and Gmisc packages:

Flow charts are an important part of a clinical trial report. Making them can be a pain though. One good way to do it seems to be with the grid and Gmisc packages in R. X and Y coordinates can be designated based on the center of the boxes in normalized device coordinates (proportions of the device space – 0.5 is the middle) which saves a lot of messing around with corners of boxes and arrows.

A very basic flow chart, based very roughly on the CONSORT version, can be generated as follows…

Click through for sample code and a resulting image.  H/T R-bloggers
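
For the general shape of the approach without clicking through, here is a hedged sketch (not Alan's code; the labels, counts and positions are made up) built on Gmisc's boxGrob() and connectGrob():

library(grid)
library(Gmisc)

grid.newpage()

# Boxes are placed by their centres in npc units (0-1 across the device).
assessed <- boxGrob("Assessed for eligibility\n(n = 100)",
                    x = unit(0.5, "npc"), y = unit(0.8, "npc"))
analysed <- boxGrob("Included in analysis\n(n = 80)",
                    x = unit(0.5, "npc"), y = unit(0.3, "npc"))

# Draw the boxes and a vertical arrow between them (printing a Gmisc
# box/connector object draws it on the current grid page).
print(assessed)
print(analysed)
print(connectGrob(assessed, analysed, type = "vertical"))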

Comments closed

Building Palettes From Pictures In R

Andrea Cirillo takes inspiration from the great works to build palettes:

If you see this painting you will find a profusion of colours with a great equilibrium between different hues, the hardy usage of complementary colours and the ability expressed in the “chiaroscuro” technique. While I was looking at the painting I started wondering how we moved from this wisdom to the ugly charts you can easily find within today’s corporate reports (find a great sample on the WTF visualization website).

This is where Paletter comes from: bringing Renaissance wisdom and beauty into the plots we produce every day.

Introducing paletter

PaletteR is a lean R package which lets you draw from any custom image an optimized palette of colours. The package extracts a custom number of representative colours from the image. Let’s try to apply it to the “Vergine con il Bambino, angeli e Santi” before looking into its functional specification.
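
As a rough idea of usage, here is a hedged sketch: the function name create_palette() and its arguments are my assumption about the package’s interface, and the image path is a placeholder, so check the package’s documentation before relying on it.

library(paletter)  # assumption: installed from the author's GitHub rather than CRAN

# Assumed interface: extract 20 representative colours from an image file.
palette_vector <- create_palette(
  image_path       = "vergine_con_bambino.jpg",  # placeholder path
  number_of_colors = 20,
  type_of_variable = "categorical"
)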

It’s an interesting package.  I’ll have to play around with it.

Comments closed

Data Types In R

Ellen Talbot gives us an overview of the different data types in R:

Now here’s something we didn’t cover in the video and is especially helpful if something just WILL NOT work and you’ve spent all morning panic eating biscuits.

You can write checks to see if something is numeric, or an integer, with is.numeric() or is.integer().

The general is.XXXXX() function will take many of the data types we cover here and more, and can be a real time/life saver.

We could also use class() here and inspect the result. (You might recall that class(1) had the result of “numeric” – R was not by default considering 1 as an integer for the purpose of the class() function.)

Special numbers: as well as i to denote imaginary numbers, there are some additional symbols you might encounter or want to use.
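
A quick sketch of those checks at the console:

is.numeric(1)    # TRUE
is.integer(1)    # FALSE - a bare 1 is a double ("numeric"), not an integer
is.integer(1L)   # TRUE  - the L suffix creates an integer
class(1)         # "numeric"
class(1L)        # "integer"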

There’s a video as well as a full blog post.

Comments closed

Building Control Charts With R

Kamal Kumar covers one of my favorite types of charts:

Control charts are used during the Control phase of DMAIC methodology. Control charts, also known as Shewhart charts or process-behavior charts, are a statistical process control tool used to determine if a manufacturing or business process is in a state of control. If analysis of the control chart indicates that the process is currently under control, then no corrections or changes to process control parameters are needed. Moreover, data from the process can be used to predict the future performance of the process. If the control chart indicates that the process is not in control, analysis of the chart can help determine the sources of variation, as this will result in degradation of process performance.

There are many packages in R that can be used for analysis related to Six Sigma. Here, we will go through the qcc package (an R package for statistical quality control charts) and learn how to create a control chart (to know whether the process is in control).
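
A minimal sketch (not the post's example) using the qcc package's built-in pistonrings data to draw an X-bar chart:

library(qcc)

data(pistonrings)
# Reshape the measurements into one row per sample of five diameters.
diameter <- qcc.groups(pistonrings$diameter, pistonrings$sample)

# X-bar chart for the first 25 samples; points outside the control limits
# are flagged as out of control.
qcc(diameter[1:25, ], type = "xbar")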

Control charts are great for telling if a process has changed in some important way—if your machine is boring holes outside of tolerances, if your busy web server is getting closer to the breaking point, etc.

Comments closed

ggplot2 Coordinate Systems

Lea Waniek walks us through coordinate systems in ggplot2:

The coordinate system can be manipulated by adding one of ggplot’s different coordinate systems. When you are imagining a coordinate system, you are most likely thinking of a Cartesian one. The Cartesian coordinate system combines the x and y dimensions orthogonally and is ggplot’s default (coord_cartesian).

There are also several variations of the familiar Cartesian coordinate system in ggplot, namely coord_fixed, coord_flip and coord_trans. For all of them, the displayed section of the data can be specified by defining the maximal values depicted on the x (xlim =) and y (ylim =) axes. This allows us to “zoom in” or “zoom out” of a plot. It is a great advantage that all manipulations of the coordinate system only alter the depiction of the data, not the data itself.
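
A small sketch of swapping coordinate systems on the same plot object (using the built-in mtcars data, not the post's example):

library(ggplot2)

p <- ggplot(mtcars, aes(x = factor(cyl), y = mpg)) +
  geom_boxplot()

p + coord_cartesian(ylim = c(10, 30))  # zoom in without dropping data
p + coord_flip()                       # swap the x and y axes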

I tend to avoid polar coordinates, but that’s mostly because I don’t work in a space which benefits from it.

Comments closed

Limitations Of Mapping In Power BI

David Stelfox points out a limitation in Power BI and tries to circumvent it with R to some limited effect:

This results in a row per ride and visualises pretty well in SSMS. If you are familiar with the geography of London you can make out the river Thames toward the centre of the image and Regents Park towards the top left:

This could be overlaid on a shape file of London or a map from another provider such as Google Maps or Mapbox.

However, when you try to load the dataset into Power BI, you find that Power BI does not natively support Geography data types. There is an idea you can vote on here to get them supported: https://ideas.powerbi.com/forums/265200-power-bi-ideas/suggestions/12257955-support-sql-server-geometry-geography-data-types-i

Hit up that idea link if you want to see geography type support within Power BI.

Comments closed