Linear Discriminant Analysis takes a data set of cases (also known as observations) as input. For each case, you need a categorical variable to define the class and several numeric predictor variables. We often visualize this input data as a matrix, such as shown below, with each case being a row and each variable a column. In this example, the categorical variable is called “class” and the numeric predictor variables are the other columns.
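As a minimal sketch of that input shape, here is LDA run on the built-in iris data set (a stand-in for the article's example; iris is not the data set the excerpt describes, but it has exactly the same structure: one categorical class column and several numeric predictors):

```r
# LDA on iris: Species is the categorical class variable, and the four
# measurement columns are the numeric predictors.
library(MASS)  # ships with standard R distributions

fit <- lda(Species ~ ., data = iris)

# Predict classes for the training cases and inspect the confusion matrix
pred <- predict(fit, iris)
table(actual = iris$Species, predicted = pred$class)
```

Each row of the table is an actual class and each column a predicted class, so off-diagonal counts are misclassified cases.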
Following this is a clear example of how to use LDA. This post is also the second time this week somebody has suggested The Elements of Statistical Learning, so I probably should make time to look at the book.
In this post, I am going to share my experiment with file management in ADLS using RStudio.
To do this, you need the following items:
1. An Azure subscription
2. An Azure Data Lake Store account
3. An Azure Active Directory application (for service-to-service authentication)
4. An authorization token from the Azure Active Directory application
It’s pretty easy to do, as Leila shows.
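For a sense of what the R side looks like, here is a sketch of calling Azure Data Lake Store's WebHDFS-compatible REST API. The account name, folder path, and `access_token` are placeholders for the values you obtain in the steps above:

```r
# Build the request URL for an ADLS WebHDFS-style operation.
# "myadlsaccount" and "mydata" are placeholder values.
adls_url <- function(account, path, op) {
  sprintf("https://%s.azuredatalakestore.net/webhdfs/v1/%s?op=%s",
          account, path, op)
}

url <- adls_url("myadlsaccount", "mydata", "LISTSTATUS")

# With an OAuth access token from the Azure Active Directory application,
# the actual request would look like this (using the httr package):
# resp <- httr::GET(url,
#   httr::add_headers(Authorization = paste("Bearer", access_token)))
# httr::content(resp)  # parsed JSON listing of the folder
```

The same URL pattern with a different `op` value (e.g. `MKDIRS`, `DELETE`) covers the other file-management operations.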
The Azure Data Lake Store is an Apache Hadoop file system compatible with HDFS, hosted and managed in the Azure cloud. You can store data and access it directly via the API, by connecting the filesystem directly to Azure HDInsight services, or via HDFS-compatible open-source applications. And for data science applications, you can also access the data directly from R, as this tutorial explains.
To interface with Azure Data Lake, you’ll use U-SQL, a SQL-like language extensible using C#. The R Extensions for U-SQL allow you to reference an R script from a U-SQL statement, and pass data from Data Lake into the R script. There’s a 500 MB limit for the data passed to R, but the basic idea is that you perform the main data munging tasks in U-SQL, and then pass the prepared data to R for analysis. With this data you can use any function from base R or any R package. (Several common R packages are provided in the environment, or you can upload and install other packages directly, or use the checkpoint package to install everything you need.) The R engine used is R 3.2.2.
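On the R side of such a job, the convention in the R Extensions for U-SQL is that the data passed in from the U-SQL statement arrives as a data frame named inputFromUSQL, and whatever you assign to outputToUSQL is returned to U-SQL. A minimal sketch, with the input simulated by a small stand-in data frame:

```r
# Stand-in for the data frame that U-SQL would pass into the R script;
# in a real job, inputFromUSQL is supplied by the U-SQL runtime.
inputFromUSQL <- data.frame(region = c("East", "West", "East"),
                            sales  = c(100, 250, 175))

# Analysis step: aggregate sales by region.
# The data frame assigned to outputToUSQL flows back to the U-SQL statement.
outputToUSQL <- aggregate(sales ~ region, data = inputFromUSQL, FUN = sum)
```

This mirrors the division of labor described above: U-SQL does the heavy data munging, and the R script only sees the prepared (under-500 MB) data frame.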
Click through for the details.
Selecting the variables in the Deducer GUI:
Outcome variable: Y, or the dependent variable, should be put on this list
As numeric: Independent variables that should be treated as covariates should be put in this section. Deducer automatically converts a factor into a numeric variable, so make sure that the order of the factor levels is correct.
As factor: Categorical independent variables (language, ethnicity, etc.).
Weights: This option allows the user to apply sampling weights to the regression model.
Subset: Helps to define if the analysis needs to be done within a subset of the whole dataset.
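Under the hood, these dialog choices correspond roughly to arguments of a standard R model call. A sketch with synthetic, made-up data (the variable names here are illustrative, not from the excerpt):

```r
set.seed(42)
df <- data.frame(
  y        = rnorm(100, 50, 5),                                   # outcome variable
  age      = rnorm(100, 40, 10),                                  # "as numeric" covariate
  language = factor(sample(c("EN", "FR"), 100, replace = TRUE)),  # "as factor"
  w        = runif(100, 0.5, 2)                                   # sampling weights
)

fit <- glm(y ~ age + language,   # outcome and predictors
           data    = df,
           weights = w,          # the "Weights" option
           subset  = age >= 30)  # the "Subset" option
coef(fit)
```

Seeing the GUI options as arguments to glm() makes it easier to move between Deducer and plain R scripts.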
Deducer is open source and looks like a pretty decent way of seeing what’s available to you in R.
Python has been getting some attention recently for its impressive growth in usage. Since both R and Python are used for data science, I sometimes get asked if R is falling by the wayside, or if R developers should switch course and learn Python. My answer to both questions is no.
First, while Python is an excellent general-purpose data science tool, for applications where comparative inference and robust predictions are the main goal, R will continue to be the prime repository of validated statistical functions and cutting-edge research for a long time to come. Secondly, R and Python are both top-10 programming languages, and while Python has a larger userbase, R and Python are both growing rapidly — and at similar rates.
I had a discussion about this last night. I like the language diversity: R is more statistician-oriented, whereas Python is more developer-oriented. They both can solve the same set of problems, but there are certainly cases where one beats the other. I think Python will end up being the more popular language for data science because of the number of application developers moving into the space, but for the data analysts and academicians moving to this field, R will likely remain the more interesting language.
We are excited to announce the release of tibbletime v0.0.2 on CRAN. Loads of new functionality have been added, including:
- Generic period support: Perform time-based calculations by a number of supported periods.
- Creating series: Use create_series() to quickly create a tbl_time object initialized with a regular time series.
- Rolling calculations: Turn any function into a rolling version of itself.
- A number of smaller tweaks and helper functions to make life easier.
As we further develop tibbletime, it is becoming clearer that the package is a tool that should be used in addition to the rest of the tidyverse. The combination of the two makes time series analysis in the tidyverse much easier to do!
Check out their demos comparing New York and San Francisco weather. It looks like it’ll be a useful package. H/T R-bloggers
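As a taste of the rolling-calculation feature, tibbletime's rollify() wraps an ordinary function into a rolling-window version (a minimal sketch, assuming tibbletime v0.0.2 or later is installed):

```r
library(tibbletime)

# Turn mean() into a 3-observation rolling mean
roll_mean_3 <- rollify(mean, window = 3)

# Positions before the first full window come back as NA
roll_mean_3(c(1, 2, 3, 4, 5))
```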
ANOVA, or analysis of variance, is a term given to a set of statistical models used to analyze differences among group means and to determine whether those differences are statistically significant. The models were developed by statistician and evolutionary biologist Ronald Fisher. To give a very simplistic definition, ANOVA is an extension of the two-sample t-test to more than two groups.
ANOVA is an older test and a fairly simple process, but is quite useful to understand.
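In R, a one-way ANOVA is a single call to aov(). For example, on the built-in PlantGrowth data (not the data from the linked post, just a convenient illustration):

```r
# Does mean plant weight differ across the three treatment groups?
fit <- aov(weight ~ group, data = PlantGrowth)
summary(fit)
```

The F test compares between-group variance to within-group variance; a small p-value indicates that at least one group mean differs from the others.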
Imbalanced data refers to classification problems where one class outnumbers the other by a substantial proportion. Imbalanced classification occurs more frequently in binary classification than in multi-class classification. For example, extremely imbalanced data can be seen in banking or financial data, where the vast majority of credit card transactions are legitimate and very few are fraudulent.
With an imbalanced dataset, a standard algorithm sees too few examples of the minority class to learn to predict it accurately. So, it is recommended to rebalance the dataset before training a classifier.
Rathnadevi uses fraudulent transactions for his sample, but medical diagnosis is also a good example: suppose 1 person in 10,000 has a particular disease. You’re 99.99% right if you just say nobody has the disease, but that’s a rather unhelpful model.
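A quick simulation makes the accuracy paradox concrete (the 1-in-10,000 rate and sample size here are illustrative):

```r
set.seed(1)
n <- 100000
disease <- rbinom(n, 1, 1 / 10000)   # roughly 10 true cases

pred <- rep(0, n)                    # the "nobody has the disease" model
accuracy <- mean(pred == disease)
accuracy                             # ~0.9999: looks excellent

true_positives <- sum(pred[disease == 1])
true_positives                       # 0: the model never catches a case
```

This is why metrics like recall, precision, or AUC matter far more than raw accuracy on imbalanced data.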
By the end of this tutorial you will:
- Understand what sentiment analysis is and how it works
- Read text from a dataset & tokenize it
- Use a sentiment lexicon to analyze the sentiment of texts
- Visualize the sentiment of text
If you’re the hands-on type, you might want to head directly to the notebook for this tutorial. You can fork it and have your very own version of the code to run, modify and experiment with as we go along.
Check it out. There’s a lot more to sentiment analysis—cleaning and tokenizing words, getting context right, etc.—but this is a very nice introduction.
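For a flavor of the lexicon approach, here is a minimal sketch using the tidytext package and its Bing sentiment lexicon (the tutorial's notebook may use different data and packages; the two sentences below are made up for illustration):

```r
library(dplyr)
library(tidytext)

texts <- data.frame(id = 1:2,
                    text = c("I love this wonderful product",
                             "This was a terrible, awful experience"),
                    stringsAsFactors = FALSE)

sentiment_counts <- texts %>%
  unnest_tokens(word, text) %>%                        # one word per row
  inner_join(get_sentiments("bing"), by = "word") %>%  # attach sentiment labels
  count(id, sentiment)                                 # positive/negative words per text

sentiment_counts
```

Words absent from the lexicon simply drop out of the join, which is one reason context handling (negation, sarcasm) needs more work than this sketch shows.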
In this article, we continue our discussion on visualizations, but switch the focus to sparklines and other spark graphs. As with many aspects of the R language, there are multiple options for generating spark graphs. For this article, we’ll focus on using the sparkTable package, which allows us to create spark graphs and build tables that incorporate those graphs directly, a common use case when working with spark images.
In the examples to follow, we’ll import the sparkTable package and generate several graphs, based on data retrieved from the AdventureWorks2014 sample database. We’ll also build a table that incorporates the SQL Server data along with the spark graphs. Note, however, that this article focuses specifically on working with the sparkTable package. If you are not familiar with how to build R scripts that incorporate SQL Server data, refer to the previous articles in this series. You should understand how to use the sp_execute_external_script stored procedure to retrieve SQL Server data and run R scripts before diving into this article.
Sparklines and associated visuals have their place in the world. Read on to see how you can build a report displaying them.