SQL Server 2019 big data clusters provide a complete AI platform. Data can be easily ingested via Spark Streaming or traditional SQL inserts and stored in HDFS, relational tables, graph, or JSON/XML. Data can be prepared by using either Spark jobs or Transact-SQL (T-SQL) queries and fed into machine learning model training routines in either Spark or the SQL Server master instance using a variety of programming languages, including Java, Python, R, and Scala. The resulting models can then be operationalized in batch scoring jobs in Spark, in T-SQL stored procedures for real-time scoring, or encapsulated in REST API containers hosted in the big data cluster.
SQL Server big data clusters provide all the tools and systems to ingest, store, and prepare data for analysis as well as to train the machine learning models, store the models, and operationalize them.
Data can be ingested using Spark Streaming, by inserting data directly to HDFS through the HDFS API, or by inserting data into SQL Server through standard T-SQL insert queries. The data can be stored in files in HDFS, or partitioned and stored in data pools, or stored in the SQL Server master instance in tables, graph, or JSON/XML. Either T-SQL or Spark can be used to prepare data by running batch jobs to transform the data, aggregate it, or perform other data wrangling tasks.
Data scientists can choose either to use SQL Server Machine Learning Services in the master instance to run R, Python, or Java model training scripts or to use Spark. In either case, the full library of open-source machine learning libraries, such as TensorFlow or Caffe, can be used to train models.
Lastly, once the models are trained, they can be operationalized using real-time, native scoring via the PREDICT function in a stored procedure in the SQL Server master instance, or by using batch scoring over the data in HDFS with Spark. Alternatively, using tools provided with the big data cluster, data engineers can easily wrap the model in a REST API and provision the API + model as a container on the big data cluster as a scoring microservice for easy integration into any application.
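To make the Spark batch-scoring path concrete, here is a minimal PySpark sketch; the model path, data paths, and column names are assumptions for illustration, not anything defined by the big data cluster itself.

```python
from pyspark.sql import SparkSession
from pyspark.ml import PipelineModel

# Start (or attach to) a Spark session running in the big data cluster.
spark = SparkSession.builder.appName("batch-scoring").getOrCreate()

# Load a previously trained Spark ML pipeline model from HDFS (path assumed).
model = PipelineModel.load("hdfs:///models/churn_pipeline")

# Read the data to score from HDFS (path and format assumed).
df = spark.read.parquet("hdfs:///data/customers")

# Apply the model; Spark appends a prediction column to the DataFrame.
scored = model.transform(df)

# Write the scored results back to HDFS for downstream consumption.
scored.select("customer_id", "prediction") \
    .write.mode("overwrite") \
    .parquet("hdfs:///data/customers_scored")
```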
I’ve wanted Spark integration ever since 2016 and we’re going to get it.
So what exactly does reticulate do? Its goal is to facilitate interoperability between Python and R. It does this by embedding a Python session within the R session, which enables you to call Python functionality from within R. I’m not going to go into the nitty gritty of how the package works here; RStudio have done a great job in providing some excellent documentation and a webinar. Instead I’ll show a few examples of the main functionality.
Just like R, the House of Python was built upon packages. Except in Python you don’t load functionality from a package through a call to `library`; instead, you import a module. reticulate mimics this behaviour and opens up all the goodness from the module that is imported.
This is a good intro to a package which is already useful but I think will be even better over time as R & Python interoperability becomes the norm. H/T R-Bloggers
One of the key points in Deep Learning is to understand the dimensions of the vectors, matrices, and/or arrays that the model needs. I found out which of these types are supported by Keras.
In Python’s words, it is the shape of the array.
To do a binary classification task, we are going to create a one-hot vector. It works the same way for more than 2 classes.
- The value 1 will be the vector [0, 1]
- The value 0 will be the vector [1, 0]

Keras provides the `to_categorical` function to achieve this goal.
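A quick sketch of that encoding, assuming Keras is installed and using made-up labels:

```python
import numpy as np
from keras.utils import to_categorical

# Binary labels: 0 or 1 (an assumed toy example).
labels = np.array([0, 1, 1, 0])

# Convert to one-hot vectors: 0 -> [1, 0], 1 -> [0, 1].
one_hot = to_categorical(labels, num_classes=2)

print(one_hot.shape)  # (4, 2) -- the "shape" the model expects for 2 classes
print(one_hot)
```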
This example doesn’t include using CUDA, but the data sizes are small enough that it doesn’t matter much. H/T R-Bloggers
In my recent post, Installing External Modules into SQL Server’s Python, I had a look at just how simple it is to import external modules into Python so that they can be used within SQL Server.
In this post I’d like to show you a little something to demonstrate how we can integrate one of these modules into SQL Server and just how powerful this can be.
This is really just for fun, and it may not be something that you’d want to put into production. But when I happened to notice that there’s a Python module that interfaces with Google Translate, I wondered to myself if it’d be possible to write a procedure that could take a string and automatically translate it into our native language.
There are a couple of setup steps but once you get past them, it’s easy going.
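As a hedged sketch of the Python side of that idea, assuming the googletrans package (the post doesn’t name the module here, so treat the package and its API as assumptions), the translation call itself is only a few lines; wrapping it in a stored procedure via sp_execute_external_script is the SQL Server part.

```python
# Assumes the googletrans package has been installed into SQL Server's Python.
from googletrans import Translator

translator = Translator()

# Translate an arbitrary string into English; the source language is auto-detected.
result = translator.translate("Je pense, donc je suis", dest="en")
print(result.text)  # e.g. "I think, therefore I am"
```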
The data I used was created to demonstrate this task in Power BI but there are many real-world network datasets to experiment with provided by Stanford Network Analysis Project. This small dummy dataset represents a co-purchasing network of books.
The data I loaded into Power BI consisted of two separate CSVs. One, Books.csv, consisted of metadata pertaining to the top 40 bestselling books according to Wikipedia and their assigned IDs. The other, Relationship.csv, was an edgelist of the book IDs, which is a popular method for storing/delivering network data. The graph I wanted to create was an undirected, unweighted graph which I wanted to be able to cross-filter accurately. Because of this, I duplicated this edgelist and reversed the columns so the ToNodeId and FromNodeId were swapped. Adding this new edgelist onto the end of the original edgelist created a dataset which can be filtered on both columns later down the line. For directed graphs, this step is unnecessary and can be ignored.
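Outside of Power BI, that mirror-and-append step looks something like the following in pandas (the file and column names come from the description above; treat the exact code as an illustrative sketch):

```python
import pandas as pd

# Edgelist with one row per co-purchasing relationship.
edges = pd.read_csv("Relationship.csv")  # columns: FromNodeId, ToNodeId

# Swap the columns so every undirected edge appears in both directions.
reversed_edges = edges.rename(
    columns={"FromNodeId": "ToNodeId", "ToNodeId": "FromNodeId"}
)

# Append the reversed edges; the result can now be filtered on either column.
undirected = pd.concat(
    [edges, reversed_edges[["FromNodeId", "ToNodeId"]]],
    ignore_index=True,
)

undirected.to_csv("Relationship_undirected.csv", index=False)
```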
Once loaded into Power BI, I duplicated the Books table to create the following relationship diagram, as it isn’t possible to model both the FromNodeId-to-Book ID and ToNodeId-to-Book ID relationships with only one Books table.
Read on for an example using this data set.
But how do we go about installing them into SQL Server? Now, I’m a DBA and not a Python whizz, so I had to do a little digging to figure it out, but to be honest, it’s fairly easy.
I don’t know how many other DBAs know that we can install these modules, or even how to do it, so I thought I’d write up a quick post explaining it.
First things first, you’re going to need to do this from your SQL Server.
Read on for the instructions.
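As a quick sanity check (a sketch of my own, not the post’s instructions), you can ask the instance’s Python which interpreter and packages it is actually using by running a few lines through sp_execute_external_script:

```python
# Run this body as the @script parameter of sp_execute_external_script
# (@language = N'Python') to see which interpreter and packages the instance uses.
import sys
import pkg_resources

print(sys.executable)  # path of the Python that Machine Learning Services runs

# List installed packages and versions to confirm a module is available.
for pkg in sorted(pkg_resources.working_set, key=lambda p: p.project_name.lower()):
    print(pkg.project_name, pkg.version)
```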
Here I am sharing the slides for a webinar I gave for SAP about Explaining Keras Image Classification Models with LIME.
Read on for links to additional resources as well.
The visualization tools include applications like Excel, Power BI, and Tableau; languages and libraries including R, Stata, and Python’s matplotlib; and frameworks like D3. The data visualizations range from the standard to the esoteric, and follow the taxonomy of the book Data Visualisation (also by Andy Kirk). The chart categories are color coded by row: categorical (including bar charts and dot plots); hierarchical (donut charts, treemaps); relational (scatterplots, Sankey diagrams); temporal (line charts, stream graphs); and spatial (choropleths, cartograms).
Check out the Chartmaker Directory.
Linear Regression is a very useful statistical tool that helps us understand the relationship between variables and the effects they have on each other. It can be used across many industries in a variety of ways – from spurring value to gaining customer insight – to benefit business.
The Simple Linear Regression model allows us to summarize and examine the relationship between two variables. It uses a single independent variable and a single dependent variable and finds a linear function that predicts the dependent variable’s values as a function of the independent variable.
If you want real linear regression, drop in an R or Python script.
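For example, a minimal Python sketch of simple linear regression on made-up data (everything here is illustrative, not taken from the linked post):

```python
import numpy as np

# Made-up data: a single independent variable x and dependent variable y.
x = np.array([1, 2, 3, 4, 5, 6], dtype=float)
y = np.array([2.1, 3.9, 6.2, 8.1, 9.8, 12.2])

# Fit y = slope * x + intercept with ordinary least squares.
slope, intercept = np.polyfit(x, y, deg=1)
print(f"y = {slope:.2f} * x + {intercept:.2f}")

# Predict the dependent variable for a new value of x.
print(slope * 7 + intercept)
```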
Naive Bayes in the Industry
Now that you have an idea of what exactly Naive Bayes is and how it works, let’s see where it is used in the industry.
Our first industrial use case is news categorization, or, to use a broader term, text classification. News on the web is growing rapidly, and each news site has its own layout and categorization for grouping news. Companies use a web crawler to extract useful text from the HTML pages of news articles to construct a Full Text RSS feed. The contents of each news article are tokenized, and, to achieve better classification results, the less significant words (i.e., stop words) are removed from the document. We then apply the naive Bayes classifier to classify news content based on news code.
It’s a good overview of the topic and a particular implementation in Python. Naive Bayes is a technique which you want in the bag: there are a lot of techniques which tend to be better in specific domains, but Naive Bayes is easy to implement and usually provides acceptable performance.
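For a flavor of how that news-classification pipeline looks in Python, here is a minimal scikit-learn sketch with made-up headlines (not the article’s own code):

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

# Tiny made-up training set: headline text and its news category.
headlines = [
    "Stocks rally as markets close higher",
    "Central bank holds interest rates steady",
    "Local team wins championship in overtime",
    "Star striker signs record transfer deal",
]
categories = ["business", "business", "sports", "sports"]

# Tokenize, drop English stop words, and fit a multinomial naive Bayes classifier.
model = make_pipeline(
    CountVectorizer(stop_words="english"),
    MultinomialNB(),
)
model.fit(headlines, categories)

# Classify a new headline.
print(model.predict(["Shares slide after earnings report"]))  # likely ['business']
```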