The importFile() function in H2O is extremely efficient thanks to parallel reading. The benchmark comparison below shows that it is comparable to read.df() in SparkR and significantly faster than the generic read.csv().
I do wonder whether there are cases where the results would vary significantly; regardless, for reading a large data file, parallel processing does tend to be faster.
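For a sense of how such a timing comparison is set up, here is a minimal sketch. The CSV is synthetic so the snippet is self-contained, and the h2o lines are commented out, shown only for shape, since they require a running H2O instance.

```r
# Generate a throwaway CSV so the timing snippet is self-contained.
path <- tempfile(fileext = ".csv")
write.csv(data.frame(x = runif(1e5), y = runif(1e5)), path, row.names = FALSE)

# Time the generic base-R reader.
t_base <- system.time(d <- read.csv(path))[["elapsed"]]

# The H2O equivalent (needs a running H2O instance, so commented out):
# library(h2o)
# h2o.init()
# t_h2o <- system.time(hd <- h2o.importFile(path))[["elapsed"]]
```

On a real benchmark you would use a much larger file; at this size the difference is lost in startup overhead.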
To work with Spark and H2O via rsparkling and sparklyr in R, you must first ensure that both sparklyr and rsparkling are installed.
Once you’ve done that, you can check out the working script, the code for testing the Spark context, and the code for launching H2O Flow. All of this information can be found below.
It’s a short post, but it does show how to kick off a job.
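As a setup sketch, the install-and-connect steps look roughly like this; the package and function names (spark_connect, h2o_context) are real, but the local master is an assumption and your Spark configuration will differ.

```r
# One-time setup: both packages come from CRAN.
install.packages(c("sparklyr", "rsparkling"))

library(sparklyr)
library(rsparkling)

# Connect to a local Spark instance (cluster URLs work here too),
# then start an H2O context on top of it.
sc <- spark_connect(master = "local")
hc <- h2o_context(sc)
```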
The above chart was created with the plotly package, but you can also use htmlwidgets or any other R package that creates interactive graphics. The only restriction is that the output must be HTML, which can then be embedded into the Power BI dashboard or report. You can also publish reports including these interactive charts to the online Power BI service to share with others. (In this case though, you’re restricted to those R packages supported in Power BI online.)
Power BI now provides four custom interactive R charts, available as add-ins:
Time-series forecasting using the forecast package, with interactive range selection, data selection and tooltips (code)
I’d avoided doing too much with R visuals in Power BI because the output was so discordant—Power BI dashboards are often lively things, but the R visual would just sit there, limp and lifeless. I’m glad to see that this has changed.
Maybe you want to explore the best boardgames, but instead you find the top 100 filled with 10/10 scores. Experience many such false positives and you will lose faith in the rating system. Let's be clear: this isn't exactly incidental either, since most games have relatively few votes and suffer from this phenomenon.
The Bayesian average
Fortunately, there are ways to deal with this. BoardGameGeek's solution is to replace the average with a Bayesian average. In Bayesian statistics we start out with a prior that represents our a priori assumptions. When evidence comes in, we can update this prior, computing a so-called posterior that reflects our updated belief.
Applied to boardgames this means: if we have an unrated game we might as well assume it’s average. If not, the ratings will have to convince us otherwise. This certainly removes outliers as we will see below!
This is a rather interesting article and you can easily apply it to other rating systems as well.
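A minimal base-R sketch of the idea: act as if every game starts with C dummy votes at the prior mean m, so real votes have to outweigh the prior. The values of m and C below are illustrative assumptions, not BoardGameGeek's actual parameters.

```r
# Bayesian (damped) average: C dummy votes at prior mean m,
# combined with the observed ratings.
bayes_avg <- function(ratings, m = 5.5, C = 100) {
  (C * m + sum(ratings)) / (C + length(ratings))
}

few  <- bayes_avg(rep(10, 3))     # three perfect votes: barely moves off the prior
many <- bayes_avg(rep(10, 5000))  # thousands of votes: the data dominates
```

A game with three 10/10 votes scores about 5.6 rather than a perfect 10, which is exactly the outlier-removal effect described above.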
This site uses Hugo. Hugo is a "static site generator," which means you write a bunch of Markdown and it generates HTML. This is great for building simple sites like company leafletware or blogs.
Hugo is available across platforms; on Windows it's just an executable you can drop into Program Files. You then work with it from the command line, much like git.
Read on for a step-by-step process to get started. Steph also links to blogdown, which is an interesting R-friendly extension.
So far so good. Let's now remove the intercept term by adding "0+" to the fitting command:

m2 <- lm(y~0+x, data=d)
t(broom::glance(m2))

##                       [,1]
## r.squared     7.524811e-01
## adj.r.squared 7.474297e-01
## sigma         3.028515e-01
## statistic     1.489647e+02
## p.value       1.935559e-30
## df            2.000000e+00
## logLik       -2.143244e+01
## AIC           4.886488e+01
## BIC           5.668039e+01
## deviance      8.988464e+00
## df.residual   9.800000e+01

d$pred2 <- predict(m2, newdata = d)

Uh oh. That appeared to vastly improve the reported R-squared and the significance.
Read on to learn why this happens and how you can prevent this from tricking you in the future.
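The gist, in a self-contained sketch with simulated data (so the exact numbers differ from the post's): dropping the intercept changes the R-squared baseline from the mean of y to zero, which inflates R-squared whenever y sits far from zero, even though the no-intercept model actually fits worse.

```r
set.seed(2017)
d <- data.frame(x = runif(100))
d$y <- 5 + 0.2 * d$x + rnorm(100, sd = 0.3)

m1 <- lm(y ~ x, data = d)      # with intercept: R^2 measured against mean(y)
m2 <- lm(y ~ 0 + x, data = d)  # no intercept: R^2 measured against zero

r2_with    <- summary(m1)$r.squared
r2_without <- summary(m2)$r.squared
# r2_without is far larger, even though m2 has the larger residual sum of squares
```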
For us, the biggest surprise when using an R data.frame is what happens when you try to access a nonexistent column. Suppose we wanted to do something with the prices of our diamonds. price is a valid column of diamonds, but say we forgot the name and thought it was title case. When we ask for diamonds[["Price"]], R returns NULL rather than throwing an error! This is the behavior not just for tibble, but for data.frame as well. For production jobs, we need things to fail loudly, i.e. throw errors, in order to get our attention. We'd like this loud failure to occur when, for example, some upstream data change breaks our script's assumptions. Otherwise, we assume everything ran smoothly and as intended. This highlights the difference between interactive use, where R shines, and production use.
Read on for several good points along these lines.
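A minimal illustration of the silent NULL, plus one defensive pattern. The loudly() helper is hypothetical, not from any package, and the tiny data frame stands in for diamonds.

```r
d <- data.frame(price = c(326, 327, 334))  # stand-in for diamonds

d[["Price"]]   # wrong case: silently returns NULL, no error

# A hypothetical accessor that fails loudly instead:
loudly <- function(data, col) {
  if (!col %in% names(data)) stop("no such column: ", col)
  data[[col]]
}

loudly(d, "price")    # works
# loudly(d, "Price")  # would stop with an error
```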
This post explains how to draw connection lines between several locations on a map using R. The method relies on the gcIntermediate function from the geosphere package. Instead of drawing straight lines, it draws the shortest routes, following great circles. Special care is given to situations where cities are very far apart and the shortest connection therefore passes behind the map.
Now we know how to make pretty-looking global route charts.
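To see what gcIntermediate computes under the hood, here is a base-R sketch of great-circle interpolation (spherical linear interpolation between two lon/lat points). It's a toy version: geosphere also handles edge cases like date-line crossings and antipodal points that this sketch ignores.

```r
# Interpolate n points along the great circle between two lon/lat
# positions (in degrees). Assumes the points are neither identical
# nor antipodal.
gc_points <- function(lon1, lat1, lon2, lat2, n = 50) {
  to_xyz <- function(lon, lat) {
    lon <- lon * pi / 180; lat <- lat * pi / 180
    c(cos(lat) * cos(lon), cos(lat) * sin(lon), sin(lat))
  }
  p1 <- to_xyz(lon1, lat1)
  p2 <- to_xyz(lon2, lat2)
  omega <- acos(sum(p1 * p2))   # central angle between the two points
  fracs <- seq(0, 1, length.out = n)
  pts <- sapply(fracs, function(f) {
    v <- (sin((1 - f) * omega) * p1 + sin(f * omega) * p2) / sin(omega)
    c(lon = atan2(v[2], v[1]) * 180 / pi, lat = asin(v[3]) * 180 / pi)
  })
  t(pts)   # n x 2 matrix of lon/lat coordinates
}

midpoint <- gc_points(0, 0, 90, 0, n = 3)[2, ]   # equatorial midpoint
```

For the two equatorial points above, the midpoint comes out at longitude 45, latitude 0, as expected.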
Syberia also encourages you to break up your process into a series of distinct steps, each of which can be run (and tested) independently. It also has a make-like feature: results from intermediate steps are cached, and do not need to be re-run each time unless their dependencies have been modified.
Syberia can also be used to associate specific R versions with scripts, or even other R engines like Microsoft R. I was extremely impressed when, during a 30-minute break at the R/Finance conference last month, Robert was able to sketch out a Syberia implementation of a modeling process using the RevoScaleR library. In fact, Robert's talk from the conference, embedded below, provides a nice introduction to Syberia.
Interesting stuff. If you’re working with models in R today, this could be up your alley.
dplyr 0.7.0 is a major release including over 100 improvements and bug fixes, as described in the release notes. In this blog post, I want to discuss one big change and a handful of smaller updates. This version of dplyr also saw a major revamp of database connections. That’s a big topic, so it’ll get its own blog post next week.
Read on to learn about tidy evaluation and the Star Wars data set. There’s a lot to wrap your head around in this release.
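As background for why tidy evaluation matters, here is the base-R flavor of the problem it addresses: writing a function that accepts a bare column name. dplyr 0.7.0's quo() and !! give a more principled version of this; the sketch below uses only base substitute() and eval().

```r
# Capture an unquoted column name and evaluate it inside a data frame.
mean_of <- function(data, col) {
  col_expr <- substitute(col)          # capture the bare name, unevaluated
  mean(eval(col_expr, envir = data))   # look it up inside `data`
}

mean_of(mtcars, mpg)   # same as mean(mtcars$mpg)
```

The fragility of substitute()/eval() under composition is exactly what the new quosure-based approach is designed to fix.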