Press "Enter" to skip to content

Category: Machine Learning

Machine Learning From Kafka

Kai Waehner has a post covering a recent talk he did on using Kafka as a data source for neural networks:

This talk shows how to build Machine Learning models at extreme scale and how to productionize the built models in mission-critical real time applications by leveraging open source components in the public cloud. The session discusses the relation between TensorFlow and the Apache Kafka ecosystem – and why this is a great fit for machine learning at extreme scale.

The Machine Learning architecture includes: Kafka Connect for continuous high volume data ingestion into the public cloud, TensorFlow leveraging Deep Learning algorithms to build an analytic model on powerful GPUs, Kafka Streams for model deployment and inference in real time, and KSQL for real time analytics of predictions, alerts and model accuracy.

Sensor analytics for predictive alerting in real time is used as a real-world example from Internet of Things scenarios. A live demo shows the out-of-the-box integration and dynamic scalability of these components on Google Cloud.

Check out the slide deck as well for more details.
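
The inference piece is easier to picture with a bit of code. The demo uses Kafka Streams; as a rough Python analogue, here is a minimal consumer that reads sensor events off a topic, scores them with a saved TensorFlow model, and writes predictions to another topic. The topic names, model path, and message schema are all hypothetical.

```python
# Minimal sketch of Kafka-based model inference (a Python analogue of the
# Kafka Streams approach from the talk). Topic names, model path, and the
# message schema are hypothetical.
import json

import numpy as np
import tensorflow as tf
from kafka import KafkaConsumer, KafkaProducer

# Load a model trained earlier (e.g. exported from the TensorFlow training job).
model = tf.keras.models.load_model("models/sensor_anomaly")

consumer = KafkaConsumer(
    "sensor-events",                       # input topic (hypothetical)
    bootstrap_servers="localhost:9092",
    value_deserializer=lambda v: json.loads(v.decode("utf-8")),
)
producer = KafkaProducer(
    bootstrap_servers="localhost:9092",
    value_serializer=lambda v: json.dumps(v).encode("utf-8"),
)

for message in consumer:
    event = message.value                  # e.g. {"device": "a1", "readings": [...]}
    features = np.array([event["readings"]], dtype=np.float32)
    score = float(model.predict(features, verbose=0)[0][0])
    producer.send("sensor-predictions", {"device": event["device"], "score": score})
```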

Comments closed

Open Source ML With Azure

David Smith shares his Build conference slides:

The topic for my talk at the Microsoft Build conference yesterday was “Migrating Existing Open Source Machine Learning to Azure”. The idea behind the talk was to show how you can take the open-source tools and workflows you already use for machine learning and data science, and easily transition them to the Azure cloud to take advantage of its capacity and scale. The theme for the talk was “no surprises”, and other than the Azure-specific elements I tried to stick to standard OSS tools rather than Microsoft-specific things, to make the process as familiar as possible.

Click through for the slides and additional resources.

Comments closed

Toward Interpretable Machine Learning

Christoph Molnar shows off a couple of R packages which help interpret ML models:

Machine learning models repeatedly outperform interpretable, parametric models like the linear regression model. The gains in performance have a price: The models operate as black boxes which are not interpretable.

Fortunately, there are many methods that can make machine learning models interpretable. The R package iml provides tools for analysing any black box machine learning model:

  • Feature importance: Which were the most important features?
  • Feature effects: How does a feature influence the prediction? (Partial dependence plots and individual conditional expectation curves)
  • Explanations for single predictions: How did the feature values of a single data point affect its prediction? (LIME and Shapley value)
  • Surrogate trees: Can we approximate the underlying black box model with a short decision tree?

The iml package works for any classification and regression machine learning model: random forests, linear models, neural networks, xgboost, etc.

This is a must-read if you’re getting into model-building. H/T R-Bloggers
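
The examples in the post are in R; as a very rough Python analogue of the first two bullets (feature importance and feature effects), here is a sketch using scikit-learn's model-agnostic inspection tools on a toy random forest. The dataset and model are placeholders.

```python
# Sketch of model-agnostic interpretation in Python, loosely analogous to
# iml's feature importance and partial dependence plots in R.
from sklearn.datasets import load_diabetes
from sklearn.ensemble import RandomForestRegressor
from sklearn.inspection import PartialDependenceDisplay, permutation_importance
from sklearn.model_selection import train_test_split

X, y = load_diabetes(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

model = RandomForestRegressor(n_estimators=200, random_state=42).fit(X_train, y_train)

# Feature importance: how much does shuffling each feature hurt performance?
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=42)
for name, score in sorted(zip(X.columns, result.importances_mean), key=lambda t: -t[1]):
    print(f"{name}: {score:.3f}")

# Feature effects: partial dependence of the prediction on a single feature.
PartialDependenceDisplay.from_estimator(model, X_test, features=["bmi"])
```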

Comments closed

Natural Language Generation With Markov Chains

Abdul Majed Raja shows off Markovify, a Python package which builds sentences using Markov chains:

Markov chains, named after Andrey Markov, are mathematical systems that hop from one “state” (a situation or set of values) to another. For example, if you made a Markov chain model of a baby’s behavior, you might include “playing,” “eating”, “sleeping,” and “crying” as states, which together with other behaviors could form a ‘state space’: a list of all possible states. In addition, on top of the state space, a Markov chain tells you the probability of hopping, or “transitioning,” from one state to any other state, e.g., the chance that a baby currently playing will fall asleep in the next five minutes without crying first. Read more about how Markov chains work in this interactive article by Victor Powell.

Click through for a fun example of headline generation.
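
Markovify's API is small enough to show in a few lines. A minimal sketch, assuming a corpus file of headlines with one headline per line (the file name is hypothetical):

```python
# Build a Markov chain from a corpus of headlines and generate new ones.
# The corpus file is hypothetical; NewlineText treats each line as its own
# "sentence", which suits one-headline-per-line data.
import markovify

with open("headlines.txt", encoding="utf-8") as f:
    corpus = f.read()

model = markovify.NewlineText(corpus, state_size=2)

for _ in range(5):
    headline = model.make_short_sentence(80)  # cap the length at 80 characters
    if headline:                              # generation can fail and return None
        print(headline)
```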

Comments closed

TensorFlow Lite

Laurence Maroney explains TensorFlow Lite:

TensorFlow Lite is TensorFlow’s lightweight solution for mobile and embedded devices. It enables on-device machine learning inference with low latency and a small binary size. TensorFlow Lite also supports hardware acceleration with the Android Neural Networks API.

It’s designed to be low-latency, with optimized kernels for mobile apps, pre-fused activations and much more. It’s also *really* easy to use, and there’s a great demo app that will get you up and running with image classification from the device camera on both Android and iOS.

It comes in two parts:

  • A set of tools that you can use to prepare your models for use on mobile. These let you freeze your model to make it smaller, and then optimize and convert it in a process also called flattening the model, so that it will run happily on mobile.

  • A mobile runtime with an easy API that lets you pass data to the model and get classifications back.

You don’t build the neural network on a phone, but the fact that you can run one on your phone is pretty crazy.
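
To see the conversion step in code: a minimal sketch, assuming a trained Keras model, that converts it with the TFLite converter and then runs the result through the interpreter the way the mobile runtime would. The tiny model here just stands in for whatever you actually trained.

```python
# Sketch of the TensorFlow Lite workflow: convert a Keras model to a .tflite
# flatbuffer, then run inference with the interpreter.
import numpy as np
import tensorflow as tf

# Placeholder model; in practice you would load your trained model instead.
model = tf.keras.Sequential([
    tf.keras.Input(shape=(4,)),
    tf.keras.layers.Dense(8, activation="relu"),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])

# Convert the model (the "flattening" step described above).
converter = tf.lite.TFLiteConverter.from_keras_model(model)
tflite_model = converter.convert()

# Run inference with the interpreter, as the mobile runtime would.
interpreter = tf.lite.Interpreter(model_content=tflite_model)
interpreter.allocate_tensors()
input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()

interpreter.set_tensor(input_details[0]["index"], np.zeros((1, 4), dtype=np.float32))
interpreter.invoke()
print(interpreter.get_tensor(output_details[0]["index"]))
```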

Comments closed

Learn Machine Learning In Just 7 Years

Rwiddhi Chakraborty explains that machine learning isn’t a topic you pick up overnight:

Of course you could write a Hello World program in C++ in 24 hours, or a program to find the area of a circle in 24 hours, but that’s not the point. Do you grasp object oriented programming as a paradigm? Do you understand the use cases of namespaces and templates? Do you know your way around the famed STL? If you do, you certainly didn’t learn all this in a week, or even a month. It took you a considerable amount of time. And the more you learned, the more you realised that the abyss is deeper than it looks from the cliff.

I’ve found a similar situation in the current atmosphere surrounding Machine Learning, Deep Learning, and Artificial Intelligence as a whole. Feeding the hype, thousands of blogs, articles, and courses have popped up everywhere. Thousands of them have the same kind of headlines — “Machine Learning in 7 lines of code”, “Machine Learning in 10 days”, etc. This has, in turn, led people on Quora to ask questions like “How do I learn Machine Learning in 30 days?”. The short answer is, “You can’t. No one can. And no expert (or even one comfortable with its ins and outs) did.”

This is a good antidote to the “I read a blog post and now I’m an expert” mentality, which is particularly pernicious.

Comments closed

Using The Bot Framework

Jakub Kaczmarek demonstrates using the Microsoft Bot Framework:

Before starting a new bot project, you need to consider if it really is a solution for your business case. It’s not recommended to start bot development just because it’s a hot topic. However, in some cases, this kind of software can save a lot of time, money and resources. The following list of bot example use cases might help in making the decision:

  • Answer for typical questions

    • A bot can make use of Q&A knowledge to receive a user’s question and provide an appropriate answer.
    • Questions can be matched to correct answers using a LUIS (language understanding intelligent service) cognitive service.
    • Reduced time can be spent by help desk staff answering typical questions.
    • Example use cases are help chat, contact pages and web stores.
  • Alternative system interface

    • By integrating a bot with external systems (e.g. Outlook, Jira, CRM, SharePoint) a bot can become an alternative interface to work with these systems.
    • A bot can simply ask some questions and gather the answers given by the user to submit data that normally would be filled in on a form.
    • Example use cases are creating support tickets, uploading SharePoint documents, making calendar appointments, and providing translations.
  • Entertainment & education

    • A bot can also be used to entertain and educate its recipients by sending various kinds of content to the user.
    • It’s a good idea to use media types like videos, audio, images and links to knowledge base articles.
    • Example use cases are a workout coach, a recipe book, and a product adviser.
  • Notification bot

    • A bot can be scheduled to initiate conversations at an appropriate time, notifying the user about some actions or reminding them about things they should do.

    • It’s important to remember that sending proactive messages is not always possible – it depends on the channel used for communication.

    • Example use cases are meeting reminders and timesheet reminders.

I try to avoid the term “intelligent bots” because we’re at least two or three generations away from that.  But it’s definitely worth getting your hands dirty with them today, at least to learn their limitations.
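
To get a feel for the shape of the code, here is a minimal sketch using the Bot Framework SDK for Python (botbuilder-core): an activity handler that simply echoes messages back. A real bot would call LUIS or a Q&A knowledge base inside on_message_activity instead.

```python
# Minimal Bot Framework activity handler: greets new members and echoes
# messages. The echo is a placeholder for real intent handling (e.g. LUIS).
from botbuilder.core import ActivityHandler, MessageFactory, TurnContext


class EchoBot(ActivityHandler):
    async def on_members_added_activity(self, members_added, turn_context: TurnContext):
        # Greet anyone who joins the conversation, other than the bot itself.
        for member in members_added:
            if member.id != turn_context.activity.recipient.id:
                await turn_context.send_activity("Hi! Ask me a question.")

    async def on_message_activity(self, turn_context: TurnContext):
        # This is where intent recognition or a knowledge base lookup would go.
        await turn_context.send_activity(
            MessageFactory.text(f"You said: {turn_context.activity.text}")
        )
```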

Comments closed

Reproducibility And ML Projects

Pete Warden explains some of the difficulties around reproducing ML models:

Why does this all matter? I’ve had several friends contact me about their struggles reproducing published models as baselines for their own papers. If they can’t get the same accuracy that the original authors did, how can they tell if their new approach is an improvement? It’s also clearly concerning to rely on models in production systems if you don’t have a way of rebuilding them to cope with changed requirements or platforms. At that point your model moves from being a high-interest credit card of technical debt to something more like what a loan-shark offers. It’s also stifling for research experimentation; since making changes to code or training data can be hard to roll back it’s a lot more risky to try different variations, just like coding without source control raises the cost of experimenting with changes.

It’s not all doom and gloom, there are some notable efforts around reproducibility happening in the community. One of my favorites is the TensorFlow Benchmarks project Toby Boyd’s leading. He’s made it his team’s mission not only to lay out exactly how to train some of the leading models from scratch with high training speed on a lot of different platforms, but also ensures that the models train to the expected accuracy. I’ve seen him sweat blood trying to get models up to that precision, since variations in any of the steps I listed above can affect the results and there’s no easy way to debug what the underlying cause is, even with help from the authors. It’s also a never-ending job, since changes in TensorFlow, in GPU drivers, or even datasets, can all hurt accuracy in subtle ways. By doing this work, Toby’s team helps us spot and fix bugs caused by changes in TensorFlow in the models they cover, and chase down issues caused by external dependencies, but it’s hard to scale beyond a comparatively small set of platforms and models.

I see two separate problems:  reproducing the process and reproducing the result.  Reproducing the process is why you want to use something like notebooks:  it’s a proof that you (and others!) can generate the same type of model the same way multiple times.  Reproducing the result is harder given the stochastic nature of ML, but if you’re following the same process, you’re at least more likely to end up close to the same result.
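
One small, concrete piece of reproducing the process is pinning down the randomness you control. A sketch of that in Python (it helps, but as Warden notes, library versions, GPU drivers, and changed data can still move the results):

```python
# Fix the seeds you control. This narrows, but does not eliminate, run-to-run
# variation: nondeterministic GPU kernels, dependency upgrades, and changed
# training data can still shift the outcome.
import os
import random

import numpy as np
import tensorflow as tf

SEED = 42
os.environ["PYTHONHASHSEED"] = str(SEED)  # hash-based ordering in Python itself
random.seed(SEED)                         # Python's built-in RNG
np.random.seed(SEED)                      # NumPy (shuffling, sampling, init)
tf.random.set_seed(SEED)                  # TensorFlow 2.x (1.x uses tf.set_random_seed)
```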

Comments closed

XGBoost With Python

Fisseha Berhane looked at Extreme Gradient Boosting with R and now covers it in Python:

In both R and Python, the default base learners are trees (gbtree), but we can also specify gblinear for linear models and dart for both classification and regression problems.
In this post, I will optimize only three of the parameters shown above, and you can try optimizing the other parameters. You can see the list of parameters and their details on the website.

It’s hard to overstate just how valuable XGBoost is as an algorithm.
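
For a flavor of the kind of tuning the post walks through, here is a minimal sketch in Python: an XGBoost classifier with a small grid search over three parameters. The dataset, parameters, and ranges are illustrative rather than the ones from the article.

```python
# Minimal XGBoost tuning sketch: grid search over three parameters.
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import GridSearchCV, train_test_split
from xgboost import XGBClassifier

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

param_grid = {
    "max_depth": [3, 5, 7],
    "learning_rate": [0.05, 0.1, 0.3],
    "n_estimators": [100, 300],
}

search = GridSearchCV(
    XGBClassifier(booster="gbtree"),  # gblinear or dart could be specified here instead
    param_grid,
    cv=5,
    scoring="roc_auc",
)
search.fit(X_train, y_train)

print("Best parameters:", search.best_params_)
print("Test AUC:", search.score(X_test, y_test))
```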

Comments closed

Calling Azure Cognitive Services From SSIS

Rolf Tesmer shows off how easy it is to call Azure Cognitive Services from SQL Server Integration Services:

My SQL SSIS package leverages the Translator Text API service.  For those who want to learn the secret sauce, I suggest checking here – https://azure.microsoft.com/en-us/services/cognitive-services/translator-text-api/

Essentially, this API is pretty simple:

  1. It accepts source text, source language, and target language.  (The API can translate to/from over 60 different languages.)

  2. You call the API with your request parameters + API key.

  3. The API will respond with the language translation of the source text you sent in.

  4. So simple, so fast, so effective!

Click through for the full post.  It really is simple.
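
The post makes the call from an SSIS script task; the same request is easy to see in a few lines of Python against the v3 Translator Text endpoint. The subscription key and region below are placeholders, and the text and target language are just examples.

```python
# Sketch of a Translator Text API (v3) call: send source text plus source and
# target languages, get the translation back. Key and region are placeholders.
import json
import uuid

import requests

endpoint = "https://api.cognitive.microsofttranslator.com/translate"
params = {"api-version": "3.0", "from": "en", "to": "de"}
headers = {
    "Ocp-Apim-Subscription-Key": "<your-key-here>",
    "Ocp-Apim-Subscription-Region": "<your-region>",  # required for regional resources
    "Content-Type": "application/json",
    "X-ClientTraceId": str(uuid.uuid4()),
}
body = [{"Text": "The quick brown fox jumps over the lazy dog."}]

response = requests.post(endpoint, params=params, headers=headers, json=body)
response.raise_for_status()
print(json.dumps(response.json(), indent=2))
# Expected shape: [{"translations": [{"text": "...", "to": "de"}]}]
```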

Comments closed