Press "Enter" to skip to content

Category: Visualization

AR and VR in Data

Corrinna Peters differentiates augmented reality from virtual reality:

Virtual Reality (VR) and Augmented Reality (AR) are everywhere, with a broad variety of applications across many industries, and the potential to revolutionise many others. The potential of VR and AR technology is endless and drives digital transformation. Many market research studies project that VR and AR will grow exponentially in the next few years. With this in mind, the questions people are starting to ask themselves are – what does this mean for me? What does this mean for my business? How will this change data and analytics? What are the differences?

In the medium term, I am quite pessimistic on the topic. There are specific use cases where virtual reality can be interesting, such as a virtual house walk-through. But for the most part, the problem with VR is that optical quality is still not good enough, meaning that a lot of people struggle to use a VR headset for more than an hour or so before getting nauseous. There are also problems with the lack of tactile sensation (and haptic feedback can only go so far) and ergonomic challenges when you’re constantly raising your arms to perform actions.

Augmented reality has an easier sell, though, in cases where you’re willing to hold a phone or tablet up against something. For this scenario, think museum pieces, where you hold the phone up and get more information about the piece, artist, and style. Google does have AR for walking directions, at the cost of burning a whole bunch of battery life. But the general failure of HoloLens and wearable AR devices, as well as the inherent privacy concerns of flashing your active camera around crowded areas, dampen the mood a bit for AR.


Visualizing PyTorch Models

Adrian Tam describes a model:

PyTorch is a deep learning library. You can build very sophisticated deep learning models with PyTorch. However, there are times you want to have a graphical representation of your model architecture. In this post, you will learn:

  • How to save your PyTorch model in an exchange format
  • How to use Netron to create a graphical representation

Click through for the article, which is mostly about training the PyTorch model. Visualizing it turns out to be pretty easy with the right tool.
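If you want a feel for the export step, here is a minimal sketch (not code from the article; the toy model, input shape, and file name are placeholder assumptions) that saves a PyTorch model to ONNX, an exchange format Netron can open directly:

```python
import torch
import torch.nn as nn

# Placeholder model standing in for whatever architecture you actually trained.
model = nn.Sequential(
    nn.Linear(8, 16),
    nn.ReLU(),
    nn.Linear(16, 1),
)
model.eval()

# Dummy input with the expected shape; the exporter traces the model with it.
dummy_input = torch.randn(1, 8)

# Write the model to ONNX so a viewer like Netron can render the layer graph.
torch.onnx.export(model, dummy_input, "model.onnx")
```

Opening model.onnx in Netron then gives you the graphical representation of the architecture.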


Thoughts on the New Power BI Accessible Themes

Meagan Longoria is moderately pleased:

Everyone’s vision is a little different. It is rare (impossible?) that a color theme is accessible for everyone. For instance, while many people with color vision deficiency have trouble distinguishing red and green hues, others have trouble distinguishing blue hues. So when we optimize to accommodate one condition, we may make things more difficult for another condition. This happened with the change in accent color in Power BI Desktop from yellow to teal. Changing to teal increased color contrast, which was great for people with low vision, but it caused new issues for some people with color vision deficiency.

While I am very happy to see these new color themes, I hope everyone understands that they aren’t just generally accessible for all uses. As mentioned in the blog post, they specifically have better color contrast to achieve a contrast of at least 3:1, which is the contrast recommended by WCAG for non-text content.

Read the whole thing. There’s a delicate balancing act between having a complete color scheme and satisfying a variety of needs. It sounds like this theme doesn’t quite cut it, though hopefully there will be some improvements in the future.
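For a bit of context on the 3:1 number, WCAG contrast is computed from the relative luminance of the two colors. Here’s a quick illustrative sketch (nothing to do with the Power BI theme files themselves; the sample colors are arbitrary):

```python
def _linearize(channel: float) -> float:
    """Convert an sRGB channel in [0, 1] to linear light, per the WCAG formula."""
    return channel / 12.92 if channel <= 0.03928 else ((channel + 0.055) / 1.055) ** 2.4


def relative_luminance(hex_color: str) -> float:
    """Relative luminance of a color given as a hex string such as '#1F77B4'."""
    hex_color = hex_color.lstrip("#")
    r, g, b = (int(hex_color[i:i + 2], 16) / 255 for i in (0, 2, 4))
    return 0.2126 * _linearize(r) + 0.7152 * _linearize(g) + 0.0722 * _linearize(b)


def contrast_ratio(color_a: str, color_b: str) -> float:
    """WCAG contrast ratio: (lighter luminance + 0.05) / (darker luminance + 0.05)."""
    la, lb = relative_luminance(color_a), relative_luminance(color_b)
    return (max(la, lb) + 0.05) / (min(la, lb) + 0.05)


# Arbitrary example: an accent color against a white background.
print(round(contrast_ratio("#118DFF", "#FFFFFF"), 2))
```

Anything at or above 3:1 clears the WCAG bar for non-text content such as chart marks, which is the target the new themes were built around.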


Calculating Log Likelihood Ratios with jeva

Peter M.B. Cahusac takes us through a jamovi package:

Ever wanted to try doing an evidential analysis? You may have found it difficult to find a statistical platform to do it. Now there is the jamovi module jeva which can provide log likelihood ratios for a range of common statistical tests.

Imagine for a moment that we wish to carry out a statistical test on our sample of data. We do not want to know whether the procedure we routinely use gives us the correct answer with a specified error rate (such as the Type I error) – the frequentist approach. Nor do we want to concern ourselves with possible a priori probabilities of hypotheses being true – the Bayesian approach. We need to know whether a statistic from this particular set of data is consistent with one or more hypothetical values. Also, let’s say that we weren’t happy with how much data we had collected (a familiar problem?), and just added more when convenient. Welcome to the likelihood (or evidential) approach!

Read on for an explanation and how to try jeva out.
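jeva itself runs inside jamovi, but the quantity it reports is simple enough to sketch. The toy example below (not the jeva interface; the data and the two hypothesized means are made up) computes a log likelihood ratio for a sample under a normal model, comparing how well two candidate means explain the same data:

```python
import numpy as np
from scipy.stats import norm

# Made-up sample; in practice this is your observed data.
x = np.array([4.8, 5.1, 5.6, 4.9, 5.3, 5.7, 5.0, 5.4])
sd = x.std(ddof=1)  # plug in the sample standard deviation for simplicity


def log_likelihood(mu: float) -> float:
    """Log likelihood of the sample under N(mu, sd)."""
    return norm.logpdf(x, loc=mu, scale=sd).sum()


# Support (log likelihood ratio) for a hypothesized mean of 5.0 versus 5.5.
support = log_likelihood(5.0) - log_likelihood(5.5)
print(f"log likelihood ratio, 5.0 vs 5.5: {support:.2f}")
```

A positive value favors the first hypothesis, a negative value favors the second, and the magnitude is the strength of the evidence. Notably, nothing in that calculation depends on how or when you decided to stop collecting data, which is the point the quote makes.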


Calibrating and Plotting a Time Series with healthyR.ts

Steven Sanderson builds a plot:

In time series analysis, it is common to split the data into training and testing sets to evaluate the accuracy of a model. However, it is important to ensure that the model is calibrated on the training set before evaluating its performance on the testing set. The {healthyR.ts} library provides a function called calibrate_and_plot() that simplifies this process.

Click through for the function’s input parameters and an example of how to use it.
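The package is R-based, so the snippet below is not calibrate_and_plot() itself; it’s just a rough Python sketch of the workflow the quote describes (fit on a training window, check accuracy on a held-out testing window, plot both), with made-up data and a deliberately simple trend model:

```python
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt

# Made-up monthly series standing in for real data.
idx = pd.date_range("2020-01-01", periods=48, freq="MS")
y = pd.Series(100 + 2 * np.arange(48) + 10 * np.sin(np.arange(48) * np.pi / 6), index=idx)

# Time-ordered split: hold out the last 12 months for testing.
train, test = y.iloc[:-12], y.iloc[-12:]

# "Calibrate" a simple model on the training window only: a linear trend fit.
coef = np.polyfit(np.arange(len(train)), train.to_numpy(), deg=1)
forecast = np.polyval(coef, np.arange(len(train), len(y)))

# Evaluate accuracy on the testing window.
mae = np.mean(np.abs(test.to_numpy() - forecast))
print(f"test MAE: {mae:.2f}")

# Plot training data, testing data, and the forecast together.
plt.plot(train.index, train, label="train")
plt.plot(test.index, test, label="test")
plt.plot(test.index, forecast, "--", label="forecast")
plt.legend()
plt.show()
```

The draw of calibrate_and_plot() is that it handles that calibrate-then-evaluate-and-plot cycle for you.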


ADX Dashboards Now Generally Available

Michal Bar provides an overview of Azure Data Explorer functionality now generally available:

Each ADX dashboard is a collection of tiles, optionally organized in pages, where each tile has an underlying query and a visual representation. Using the web UI, you can natively export Kusto Query Language (KQL) queries to a dashboard as visuals and later modify their underlying queries and visual formatting as needed. In addition to ease of data exploration, this fully integrated Azure Data Explorer dashboard experience provides improved query and visualization performance.

Read on to learn more.


Visualizing Moving Averages in R with healthyR.ts

Steven Sanderson shows off a useful R library:

Are you interested in visualizing time series data in a clear and concise way? The R package {healthyR.ts} provides a variety of tools for time series analysis and visualization, including the ts_ma_plot() function.

The ts_ma_plot() function is designed to help you quickly and easily create moving average plots for time series data. This function takes several arguments, including the data you want to visualize, the date column from your data, the value column from your data, and the frequency of the aggregation.

Read on to learn more about this plot and see an example of it in action.
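Since healthyR.ts is an R package, the snippet below isn’t ts_ma_plot() itself; it’s a generic pandas/matplotlib sketch of the same idea (made-up data, column names, and window lengths): take a date column and a value column, compute moving averages, and plot them over the raw series.

```python
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt

# Made-up daily data with a date column and a value column.
df = pd.DataFrame({
    "date": pd.date_range("2023-01-01", periods=180, freq="D"),
    "value": np.random.default_rng(42).normal(0, 5, 180).cumsum() + 100,
})

# Moving averages at two illustrative window lengths.
df["ma_7"] = df["value"].rolling(window=7).mean()
df["ma_30"] = df["value"].rolling(window=30).mean()

# Plot the raw series and its moving averages together.
plt.plot(df["date"], df["value"], alpha=0.4, label="value")
plt.plot(df["date"], df["ma_7"], label="7-day MA")
plt.plot(df["date"], df["ma_30"], label="30-day MA")
plt.legend()
plt.show()
```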


Making Star Maps in R

Benjamin Smith builds a map:

Continuing my explorations in developing custom map art, I decided to take a detour from developing the mapBliss package to explore another type of map which is very popular in the map-art space: star and constellation maps! This initially started out as an issue opened on the mapBliss GitHub. However, I quickly realized the framework required for making star maps is completely different from making regular maps for custom flight paths and road trips.

Read on to learn more about the problem and what libraries are available to help in R.


Automated Data Visualization in Python

Brendan Tierney saves some time:

Creating data visualizations in Python can be a challenge. For some it can be easy, but for most (and particularly people new to the language) there is always a need to search for the commands in the documentation or via a search engine. Over the past few years we have seen more and more libraries become available to assist with many of the routine and tedious steps in most data science and machine learning projects. I’ve written previously about some data profiling libraries in Python. These are good up to a point, but additional work/code is needed to explore the data to suit your needs. One of these Python libraries, designed to make your initial work on a new data set easier, is called AutoViz. It’s good to see there is continued development work on this library, which can be really helpful for creating initial sets of charts for all the variables in your data set, plus it has some additional features which help to make it very useful and cut down on some of the additional code you might need to write.

This looks like it’s worth a try and could serve well as a first-glance approach to exploratory data analysis.
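If you’d like to give it a spin, the basic entry point is the AutoViz class. A minimal sketch (the CSV file name is a placeholder, and defaults can vary by version):

```python
# pip install autoviz
from autoviz.AutoViz_Class import AutoViz_Class

AV = AutoViz_Class()

# Point AutoViz at a CSV (placeholder file name); it profiles the columns and
# generates an initial set of charts for each variable automatically.
report_df = AV.AutoViz("my_data.csv")
```

From there you can narrow in on the variables that look interesting and build more deliberate visuals by hand.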
