Press "Enter" to skip to content

Category: Notebooks

Learning the Basics of Kafka via Notebook

Francesco Tisiot shares a way to learn about the basics of Apache Kafka using Jupyter notebooks:

One of the best ways to learn a new technology is to try it within an assisted environment that anybody can replicate and get working within a few minutes. Notebooks excel in this field by allowing people to share and use pre-built content which includes written descriptions, media, and executable code in a single page.

This blog post aims to teach you the basics of Apache Kafka Producers and Consumers through building an interactive notebook in Python. If you want to browse a full ready-made solution instead, check out our dedicated GitHub repository.

The classic tutorials tend to use a couple of command prompts and the built-in producer and consumer shell scripts. I like the notebook approach as a way of being able to review the code and results later as a refresher.
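
For anyone who wants the flavor without spinning up the full notebook, a minimal producer/consumer pair in Python might look something like the following. This is my sketch using the kafka-python package; the broker address and topic name are placeholders, not anything from Francesco's post.

import json
from kafka import KafkaProducer, KafkaConsumer

# Placeholder broker and topic -- point these at your own Kafka service.
producer = KafkaProducer(
    bootstrap_servers="localhost:9092",
    value_serializer=lambda v: json.dumps(v).encode("utf-8"),
)
producer.send("demo-topic", {"id": 1, "message": "hello"})
producer.flush()  # make sure the message actually leaves the client

consumer = KafkaConsumer(
    "demo-topic",
    bootstrap_servers="localhost:9092",
    auto_offset_reset="earliest",   # start from the beginning of the topic
    consumer_timeout_ms=5000,       # stop iterating once no new messages arrive
    value_deserializer=lambda v: json.loads(v.decode("utf-8")),
)
for record in consumer:
    print(record.value)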


T-SQL Tuesday 137 Round-Up

Steve Jones wraps up the latest T-SQL Tuesday:

I hosted the blog party this month, with the invite to write about notebooks. These are a neat technology, and I’ve written about them at SQLServerCentral.

This post is a wrap-up of the various responses to my invitation. First, quite a few people give credit to either Aaron Nelson or Rob Sewell for their writings and work with notebooks, so check out their blogs.

Click through for the list of respondents.


Lessons from using Notebooks

Glenn Berry takes us through some of the past (and sometimes present) challenges of running notebooks in Azure Data Studio:

I have to admit that I do not use Jupyter notebooks or Azure Data Studio (ADS) every day. Last August, I made separate Jupyter notebook versions of my SQL Server Diagnostic Information Queries. There was a separate version for SQL Server 2012 through SQL Server 2019, along with one for Azure SQL Database. This was after a number of requests from people in the community.

Creating these notebooks was a pretty decent amount of work. Luckily, this was right around the time that Azure Data Studio was making it much easier to edit and format markdown for the text blocks. Since then, Azure Data Studio is even easier to use for editing and formatting. Even more fortuitous was the fact that Julie Koesmarno (@MsSQLGirl) volunteered to greatly improve my formatting!

Unfortunately, there has not been as much interest in my Jupyter notebooks as I hoped for. There are probably a number of reasons for this.

Read on for Glenn’s notes.


Using Notebooks in Azure Machine Learning Studio

Lina Kovacheva takes us through the process of working with notebooks in Azure Machine Learning Studio:

I discovered Jupyter notebooks not that long ago, but the more I use them, the more I see how powerful they can be. For those of you who are not familiar with Jupyter Notebook: it is an open-source web application where you can combine code, output, visualizations, and explanatory text all in one document, allowing you to write code that tells a story. Now that you have an idea of what a Jupyter notebook is, I will walk you through how you can use it in Azure Machine Learning Studio.

Click through for the process. One advantage to notebooks in an environment like Azure ML over Azure Data Studio is that you have a much wider variety of languages, although Azure Data Studio does have a SQL Server kernel, which other platforms currently lack.
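
As a toy example of that code-plus-output-plus-visualization combination, a single cell can compute something and render a chart inline right below it. This sketch assumes pandas and matplotlib, which are my picks and not part of Lina's walkthrough:

import pandas as pd
import matplotlib.pyplot as plt

# Invented data, purely for illustration.
df = pd.DataFrame({"month": ["Jan", "Feb", "Mar"], "sales": [120, 135, 160]})
df["growth_pct"] = df["sales"].pct_change() * 100  # computed output

df.plot(x="month", y="sales", kind="bar", legend=False)
plt.title("Sales by Month")
plt.show()  # the chart renders inline, directly beneath the cell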


Running Jupyter Notebooks from Powershell

Rob Farley has a change of heart:

The concept is that if I have a notebook with a bunch of queries in it, I can easily call that using Invoke-SqlNotebook and have the results of the queries stored in an easily viewable file. But I can also just call Invoke-SqlCmd and get the results stored. Or I can create an RDL to build something that will email me based on a subscription. And I wasn't sure I needed another method for running something.

Read on to see what changed Rob’s mind.
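
For comparison, the "run a notebook and keep an executed copy of the results" idea is available from Python as well. Here is a minimal sketch with nbformat and nbclient (my choice of tooling, not Rob's), with placeholder file names:

import nbformat
from nbclient import NotebookClient

# Run every cell in queries.ipynb and save an executed copy, outputs included.
nb = nbformat.read("queries.ipynb", as_version=4)
NotebookClient(nb, timeout=600).execute()
nbformat.write(nb, "queries-executed.ipynb")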


Auto-Generating Relative Links in Azure Data Studio Notebooks

Julie Koesmarno points out a new feature:

As you enrich your collection of notebooks (organized in a Jupyter Book, hopefully), you will likely want to link from one notebook to another notebook in the directory you are working on.

If you are familiar with markdown, you know that this process can be painful, as you'd need to know where the target notebook is located and where it sits in relation to the notebook you are linking from.

Luckily, as of Azure Data Studio v1.27.0, there is a new Insert Link button in the Text Cell that automatically translates a “hard-coded path” into a “relative path” link. Check this out!

Click through for a demo. I like the idea as a way of preventing a common problem when sending artifacts somewhere: all of those hard-coded links to a network share I can’t access or a folder on somebody else’s laptop.
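
Under the covers, the translation the button performs is, roughly speaking, a relative-path computation. In Python terms (with paths invented for illustration), it amounts to something like this:

import os

target = "/notebooks/book/chapter2/cleanup.ipynb"   # the hard-coded link target
source_dir = "/notebooks/book/chapter1"             # where the linking notebook lives

print(os.path.relpath(target, start=source_dir))
# ../chapter2/cleanup.ipynb -- stays valid wherever the book gets copied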


Executing Parameterized Notebooks via Azure Data Studio

Julie Koesmarno takes us through three methods for executing parameterized notebooks in Azure Data Studio:

In the February 2021 release, Azure Data Studio (v1.26.1) added parameterized URI execution. See the “Preview of passing parameters through URI” section and the Parameterization of Notebooks in Azure Data Studio page on Microsoft Docs.

So, in total, there are three ways of executing a parameterized notebook (from another notebook). Check out the demo files here:

Click through for the notebooks.
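
If I understand the docs correctly, Azure Data Studio's notebook parameterization follows the Papermill convention of a cell tagged "parameters" whose values get overridden at run time. A hedged sketch of the notebook-from-notebook style, with placeholder file names and parameter values (these are not Julie's demo files):

import papermill as pm

pm.execute_notebook(
    "input.ipynb",     # contains a cell tagged "parameters" with default values
    "output.ipynb",    # the executed copy, with the injected values recorded
    parameters={"server_name": "localhost", "max_rows": 10},
)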


Survival Analysis Notebooks

Dan Morris, et al, walk us through a survival analysis scenario:

In contrast to other methods that may seem similar on the surface, such as linear regression, survival analysis takes censoring into account. Censoring occurs when the start and/or end of a measured value is unknown. For example, suppose our historical data includes records for the two customers below. In the case of customer A, we know the precise duration of the subscription because the customer churned in December 2020. For customer B, we know that the contract started four months ago and is still active, but we do not know how much longer they will be a customer. This is an example of right censoring because we do not yet know the end date for the measured value. Right censoring is what we most commonly see with this form of analysis.

Click through for an intro as well as a half-dozen notebooks.
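
To make the censoring idea concrete, here is a small sketch with the lifelines package (my choice; the post's notebooks may use something else). The row for a still-active customer gets marked as right-censored instead of being thrown away, and the numbers are invented:

import pandas as pd
from lifelines import KaplanMeierFitter

# Toy subscription data: months observed, and whether the customer churned.
# churned = 0 means still active, i.e. a right-censored observation.
df = pd.DataFrame({
    "months":  [14, 4, 9, 22, 7],
    "churned": [1, 0, 1, 1, 0],
})

kmf = KaplanMeierFitter()
kmf.fit(durations=df["months"], event_observed=df["churned"])
print(kmf.survival_function_)  # estimated P(still subscribed) by duration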


Using Spark Pools in Azure Synapse Analytics

Rahul Mehta shows how to create and use an Apache Spark pool in Azure Synapse Analytics:

In the last part of the Azure Synapse Analytics article series, we learned how to create a dedicated SQL pool. Azure Synapse supports three different types of pools: on-demand SQL pool, dedicated SQL pool, and Spark pool. Spark provides an in-memory distributed processing framework for big data analytics, which suits many big data analytics use cases. Azure Synapse Analytics provides mechanisms to use the SQL on-demand pool to query data as a service, the SQL dedicated pool for data warehousing using a distributed data processing engine, and the Spark pool for analytics using an in-memory big data processing engine. This article shows how to create a Spark pool in Azure Synapse Analytics and how to process data using it.

Click through for a demo on setup and a sample notebook to get started.
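
Once the pool exists, a Synapse notebook attached to it gets a Spark session more or less for free. A first test cell might look like the following sketch; the storage path and the column name are placeholders I made up:

from pyspark.sql import SparkSession

# In a Synapse notebook, getOrCreate() returns the session the pool provides.
spark = SparkSession.builder.getOrCreate()

# Placeholder ADLS Gen2 path -- substitute your own storage account and file.
df = spark.read.option("header", "true").csv(
    "abfss://data@yourstorageaccount.dfs.core.windows.net/sales/sales.csv"
)
df.printSchema()
df.groupBy("region").count().show()  # quick sanity check ("region" is invented)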


Spark Streaming in a Databricks Notebook

Tomaz Kastrun shows off Spark Streaming in a Databricks notebook:

Spark Streaming is the process that can analyse not only batches of data but also streams of data in near real-time. It powers interactive and analytical applications across both hot and cold data (streaming data and historical data). Spark Streaming is a fault-tolerant system, meaning that, thanks to the lineage of operations, Spark always remembers where you stopped, and in case of a worker error another worker can recreate all of the data transformations from the partitioned RDD (assuming that all the RDD transformations are deterministic).

Click through for the demo.
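
If you would rather poke at streaming without wiring up a real source, here is a self-contained sketch using Structured Streaming (the newer API, so not necessarily what Tomaz demonstrates). The built-in rate source generates rows on its own, and the sink and timings are illustrative:

from pyspark.sql import SparkSession
from pyspark.sql.functions import window

spark = SparkSession.builder.appName("streaming-sketch").getOrCreate()

# The rate source emits (timestamp, value) rows; no external stream needed.
stream = spark.readStream.format("rate").option("rowsPerSecond", 5).load()

# Count rows per 10-second window.
counts = stream.groupBy(window(stream.timestamp, "10 seconds")).count()

query = (counts.writeStream
         .outputMode("complete")  # re-emit the full windowed counts each batch
         .format("console")
         .start())
query.awaitTermination(30)        # let it run for about 30 seconds
query.stop()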
