In previous videos you’ve learned that we can use R visualizations in Power BI. In this video you will learn how an R visualization interacts with other elements in a Power BI report. In fact, Power BI treats R charts as regular visualizations, so highlighting and selecting items in other report elements affects them. Here is a quick video about this functionality.
Check out the five-minute video.
The bug itself is triggered by a combination of more than 2,000 lines of data and a lot of fields; I’ve been able to reproduce it with 2189 lines and 130 delimiters in an example which you can download here.
Here’s hoping CU1 will have that fixed.
Dimensionality reduction is a common technique for visualizing the observations in a dataset: all of the features are combined into two components, which can then be used to draw each observation in a scatter plot.
One popular algorithm that implements this technique is PCA (Principal Components Analysis), which is available in R through the prcomp() function.
The algorithm was applied to the observations in the dataset, and ggplot2’s geom_point() function was used to draw the results in a 2D chart.
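The excerpt describes R’s prcomp() and ggplot2. As a rough equivalent, here is a minimal NumPy sketch of the same idea (center the data, take the SVD, keep the first two components); the dataset here is synthetic and purely for illustration.

```python
import numpy as np

def pca_2d(X):
    """Project observations onto the first two principal components.

    A NumPy sketch of what R's prcomp() does: center the data,
    take the SVD, and keep the first two components.
    """
    X_centered = X - X.mean(axis=0)
    # SVD of the centered data; the rows of Vt are the principal axes
    U, S, Vt = np.linalg.svd(X_centered, full_matrices=False)
    scores = X_centered @ Vt[:2].T        # 2D coordinates per observation
    explained = (S ** 2) / np.sum(S ** 2) # fraction of variance per component
    return scores, explained[:2]

# Synthetic example: 100 observations with 5 highly correlated features
rng = np.random.default_rng(42)
base = rng.normal(size=(100, 1))
X = np.hstack([base + 0.1 * rng.normal(size=(100, 1)) for _ in range(5)])

scores, explained = pca_2d(X)
print(scores.shape)  # one (x, y) pair per observation, ready for a scatter plot
```

The `scores` array plays the role of prcomp()’s first two rotated columns; feeding its two columns to any plotting library gives the same kind of 2D chart that geom_point() produces in the original post.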
I would want to see this done for a couple hundred thousand domains, but I do like the idea of taking advantage of statistical modeling tools to find security threats.
Apparently SSIS doesn’t agree with my code. So opening the editor of the raw file connection and changing the access mode to “File name” showed me this:
There are spaces and tabs in front of the path! SSIS doesn’t work well with spaces and that’s one of the reasons why you should not use spaces in file names in the first place.
This is one of the trickier bits of XML-based languages (like Biml): spacing inside tags can matter…sometimes…
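For illustration, here is a tiny Python sketch of the failure mode described above (the path is hypothetical): whitespace that survives from the XML source makes the connection string a different value from the path the Raw File connection actually needs.

```python
# A path as read from the package XML, with the stray leading
# whitespace that the Biml/XML source preserved inside the tag.
raw_path = " \tC:\\Data\\output.raw"

# The path the connection manager actually needs
clean_path = raw_path.strip()

# The two strings are not equal, so file resolution fails on the raw one
print(raw_path == clean_path)  # False
```

The fix in the post is the same idea applied at the source: remove the whitespace inside the tag so the value SSIS reads is already clean.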
With the emergence of Spark as a unified computing engine, developers can perform ETL and advanced analytics in both continuous (streaming) and batch mode either programmatically (using Scala, Java, Python, or R) or with procedural SQL (using Spark SQL or Hive QL).
With the MapR Converged Data Platform, you can now take a Spark-first approach. This differs from the traditional approach of starting with the extended Hadoop tools and then adding Spark to your big data technology stack. As a unified computing engine, Spark can be used for faster batch ETL and analytics (with Spark core instead of MapReduce and Hive), machine learning (with Spark MLlib instead of Mahout), and streaming ETL and analytics (with Spark Streaming instead of Storm).
MapReduce is so 2012…
This tool extends IntelliJ to support the full Spark job life cycle, from creating, authoring, and debugging a job to submitting it to an Azure cluster and viewing the results. The IntelliJ HDInsight tool integrates well with Azure, allowing users to navigate HDInsight Spark clusters and view the associated Azure storage accounts. To further boost developer productivity, it can also display Spark job history, detailed job logs, and job output. A few usability improvements have been implemented based on user preview feedback, including automatically locating artifacts, remembering the assembly location, and caching Spark logs.
It looks like this is specifically designed for Spark-enabled clusters.
If missing values are something that haunts you, then the MICE package is a real friend of yours.
When we face missing values, we generally go ahead with basic imputations such as replacing with 0, replacing with the mean, replacing with the mode, etc., but none of these methods is versatile, and each can introduce discrepancies into the data.
The MICE package helps you impute missing values using multiple techniques, depending on the kind of data you are working with.
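To see the kind of discrepancy the excerpt warns about, here is a small sketch. MICE itself is an R package; this is only a synthetic Python illustration of why naive mean imputation distorts the data, which is exactly the problem multiple imputation is designed to avoid.

```python
import numpy as np

rng = np.random.default_rng(0)
complete = rng.normal(loc=50, scale=10, size=1000)

# Knock out 30% of the values at random to simulate missingness
data = complete.copy()
missing_mask = rng.random(1000) < 0.3
data[missing_mask] = np.nan

# Basic imputation: replace every missing value with the column mean
mean_imputed = np.where(np.isnan(data), np.nanmean(data), data)

# The mean survives, but the spread is artificially deflated, because
# 30% of the values now sit exactly on the mean. This shrunken variance
# is the "data discrepancy" that multiple imputation avoids by drawing
# plausible values instead of a single constant.
print(complete.std(), mean_imputed.std())
```

Running this shows the imputed column’s standard deviation is noticeably smaller than the original’s, even though no individual value looks wrong.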
I’d heard of a couple of these, but most of them are new to me.
I’m at a conference, specifically a security conference, so I looked at the available WiFi connections. Among the conference- and hotel-specific connections and the MiFi and cellphone uplinks, I spotted this one:
My little WiFi hotspot has an SSID of Flowers By Irene.
In this module you will learn how to use the Enlighten Aquarium Power BI custom visual. While it might not be the most practical visualization, it does provide a fun way to show categorical data, and it can display multiple series as well.
From now on, all dashboards must look like screensavers from the 1990s.
SQL Server 2016 went RTM this week and so naturally, we’re going to write about it. Here are a few writing prompts for you:
Check out what’s new. Microsoft has written a lot about their new features, and Thomas Larock has put together a really nice landing page for those posts: SQL Server 2016: It Just Runs Faster. Look through those links. Do you feel optimistic about 2016? Or maybe a bit disappointed? Let us know either way.
Haven’t had time to download the bits, install them, and explore to form your own thoughts on 2016 yet? Have no fear: check out Microsoft’s Virtual Labs. They let you explore features without worrying about all the setup. In minutes you’ll be typing
SELECT 'hello world';