Press "Enter" to skip to content

Category: Synapse Analytics

Accessing ADLS Gen2 Data in Serverless SQL Pools with SAS Tokens

Neel Ball shows how you can use various techniques, including SAS tokens, to access data stored in Azure Data Lake Storage Gen2 from Azure Synapse Analytics serverless SQL pools:

You have a data lake that contains employee and social feed data. You have data residing in an employee folder that is used by HR team members, and in a twitter folder with live social feeds that is usually used by marketing folks. If you use a SAS token or RBAC, you cannot control access at the folder level.

How do you allow users to perform data exploration using Synapse serverless with fine-grained control over the underlying storage?

Read on for one solution.
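
One possible shape of the solution, sketched here with placeholder names, URLs, and token values rather than anything taken from the post: wrap a folder-scoped SAS token in a database scoped credential, then expose only that folder through an external data source in the serverless SQL pool.

    -- A minimal sketch, assuming a storage account named mydatalake and a
    -- SAS token issued for the employee folder only; all names are placeholders.
    CREATE MASTER KEY ENCRYPTION BY PASSWORD = '<strong password>';

    CREATE DATABASE SCOPED CREDENTIAL EmployeeFolderSas
    WITH IDENTITY = 'SHARED ACCESS SIGNATURE',
         SECRET = 'sv=2020-08-04&ss=b&...';  -- the SAS token, minus the leading '?'

    CREATE EXTERNAL DATA SOURCE EmployeeData
    WITH (
        LOCATION = 'https://mydatalake.dfs.core.windows.net/datalake/employee',
        CREDENTIAL = EmployeeFolderSas
    );

    -- Users granted REFERENCES on the credential can read this folder and nothing else.
    SELECT TOP 10 *
    FROM OPENROWSET(
        BULK 'salaries.parquet',
        DATA_SOURCE = 'EmployeeData',
        FORMAT = 'PARQUET'
    ) AS employees;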


Creating a dacpac for a Dedicated SQL Pool

Kevin Chant shows us how to build out a dacpac file for an Azure Synapse Analytics dedicated SQL Pool:

In reality, you can create a dacpac for a database that’s inside an Azure Synapse Analytics dedicated SQL Pool using a lot of the methods that you use to create them for SQL Server databases.

Azure Data Studio can be an appealing alternative to SQL Server Data Tools (SSDT) for tasks like this, for various reasons. For instance, it's a multi-platform solution that is easy to install.

With this in mind, I decided in this post to cover how to create a dacpac for an Azure Synapse Analytics dedicated SQL Pool using Azure Data Studio.

Click through to see how.
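
Both Azure Data Studio and SSDT sit on top of the DacFx engine, which also ships as the SqlPackage command-line utility, so the extraction can be scripted as well; the server and database names below are placeholders:

    SqlPackage /Action:Extract ^
        /TargetFile:"C:\dacpacs\mydedicatedpool.dacpac" ^
        /SourceConnectionString:"Server=myworkspace.sql.azuresynapse.net;Database=mydedicatedpool;Authentication=Active Directory Interactive;"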


Moving Synapse Databases Across Subscriptions

Steve Hughes hits on one of the tricky administrative bits of Azure Synapse Analytics:

So you can copy an Azure SQL Database using the Azure Portal, PowerShell, the Azure CLI, and T-SQL. However, this functionality is limited to Azure SQL Database and does not work for Azure Synapse databases (a.k.a. SQL pools). Since early 2021, the copy functionality has also supported copying databases between subscriptions, but that requires security work to make sure the permissions in the database servers and networking allow it to happen.

There’s a lot involved in the process, leaving me to provide the sage wisdom that it’s easier not to put it in the wrong subscription to begin with if you can avoid it.
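
For reference, the T-SQL form of the copy functionality Steve mentions is a one-liner run against the target logical server's master database; it works for Azure SQL Database but not for dedicated SQL pools, which is the gap the post works around (names below are placeholders):

    -- Copies SalesDb from the server myserver into a new database.
    -- This form is not supported for Azure Synapse dedicated SQL pools.
    CREATE DATABASE SalesDb_Copy
    AS COPY OF myserver.SalesDb;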


Querying Delta Lake Files with T-SQL in Azure Synapse Analytics

Jovan Popovic shows off a new feature in public preview for Azure Synapse Analytics serverless SQL pools:

You can use Azure Synapse and Azure Databricks to prepare and modify your Delta Lake data sets placed in Azure Data Lake Storage. Once your data engineers have prepared the data, your data analysts can create reports using tools such as Power BI.

Using the serverless query endpoint in Azure Synapse, you can create a relational layer on top of your Delta Lake files that directly references the location where Azure Synapse and Azure Databricks are used to modify data. This way, you can get real-time analytics on top of the Delta Lake data set without any need to wait for a pipeline to copy and prepare data.

Read on to see how this works.
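
The core of the feature is a DELTA format option for OPENROWSET. A minimal sketch, with a placeholder storage URL:

    -- Point OPENROWSET at the root folder of the Delta Lake table,
    -- not at the individual Parquet files inside it.
    SELECT TOP 10 *
    FROM OPENROWSET(
        BULK 'https://mydatalake.dfs.core.windows.net/datalake/covid/',
        FORMAT = 'DELTA'
    ) AS covid_data;

    -- Wrapping the query in a view gives analysts the relational layer
    -- described above.
    CREATE VIEW dbo.Covid AS
    SELECT *
    FROM OPENROWSET(
        BULK 'https://mydatalake.dfs.core.windows.net/datalake/covid/',
        FORMAT = 'DELTA'
    ) AS covid_data;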


Azure Synapse Analytics Supports Apache Spark 3.0

Euan Garden has some great news for us:

Starting today, the Apache Spark 3.0 runtime is now available in Azure Synapse. This version builds on top of existing open-source and Microsoft-specific enhancements to include additional unique improvements listed below. The combination of these enhancements results in significantly faster processing than the open-source Spark 3.0.2 and 2.4.

The public preview announced today starts with the foundation based on the open-source Apache Spark 3.0 branch with subsequent updates leading up to a Generally Available version derived from the latest 3.1 branch.

It still won’t be as fast as Databricks, but it should be a good bit faster than the Spark 2 they were running.


Querying Serverless SQL Pools from Spark Notebooks in Scala

Jovan Popovic shows off one integration point between the data services in Azure Synapse Analytics:

Azure Synapse Analytics provides multiple query runtimes that you can use to query in-database or external data. You have the choice to use T-SQL queries in a serverless Synapse SQL pool or notebooks in Apache Spark for Azure Synapse to analyze your data.

You can also connect these runtimes and run the queries from Spark notebooks on a dedicated SQL pool.

In this post, you will see how to create Scala code in a Spark notebook that executes a T-SQL query on a serverless SQL pool.

Read on to see how. You can also query Spark pool and dedicated SQL pool tables from serverless SQL pools.
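
The Spark-to-serverless direction works because the workspace synchronizes metadata: databases and Parquet-backed tables created in a Spark pool appear in the serverless SQL pool automatically. A sketch with hypothetical names:

    -- Assumes a Spark notebook created a database sparkdb containing a
    -- Parquet-backed table named sales; serverless exposes it under dbo.
    SELECT TOP 10 *
    FROM sparkdb.dbo.sales;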


Synapse Studio in 5 Minutes

Kevin Chant wants 4 minutes and 58 seconds of your time:

In this post I want to do a five-minute crash course about Synapse Studio, because I have recently been asked to do this by colleagues.

In addition, I want to clear up some confusion about what you need to do before you can access Synapse Studio.

The aim of this post is for you to have a better overview of Synapse Studio within five minutes, which happens to be the estimated reading time of this post.

Click through and be sure to start the stopwatch.


Loading Azure Synapse Analytics using PolyBase

Gauri Mahajan needs to load some data:

Azure Synapse Analytics is Microsoft's data warehousing offering on Azure Cloud. It supports three types of runtimes: SQL Serverless Pool, SQL Dedicated Pool, and Spark Pools. As there is a variety of data sources on Azure, there can be varying types and volumes of data that have to be loaded into Azure Synapse pools. There are three major types of data ingestion approaches that can be used to load data into Synapse. The COPY command is the most flexible and elaborate mechanism, where someone can execute this command from a SQL pool to load data from supported data repositories. This command is convenient for loading ad-hoc and small to medium-sized data loads into Synapse. The second method of loading data is Bulk Insert, whose name describes its functionality. To ingest data from supported repositories into dedicated SQL pools, PolyBase is as efficient as, and at times even more efficient than, the COPY command. This article will help you understand the process of ingesting data into Azure Synapse Analytics using PolyBase.

Click through for the process.
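
In outline, the PolyBase route is: external data source, external file format, external table, then CTAS into the pool. A minimal sketch, assuming a master key and a database scoped credential for the storage account already exist; every name here (including the ext schema) is a placeholder:

    -- 1. Point at the storage account holding the staged files.
    CREATE EXTERNAL DATA SOURCE SourceLake
    WITH (
        LOCATION = 'abfss://staging@mydatalake.dfs.core.windows.net',
        CREDENTIAL = StorageCredential,  -- assumed to exist already
        TYPE = HADOOP                    -- marks this as a PolyBase data source
    );

    -- 2. Describe the file layout.
    CREATE EXTERNAL FILE FORMAT ParquetFormat
    WITH ( FORMAT_TYPE = PARQUET );

    -- 3. Define an external table over the source folder.
    CREATE EXTERNAL TABLE ext.Sales
    (
        SaleId     INT,
        CustomerId INT,
        Amount     DECIMAL(18, 2)
    )
    WITH (
        LOCATION = '/sales/',
        DATA_SOURCE = SourceLake,
        FILE_FORMAT = ParquetFormat
    );

    -- 4. Load into a distributed internal table with CTAS.
    CREATE TABLE dbo.Sales
    WITH ( DISTRIBUTION = HASH(CustomerId) )
    AS SELECT * FROM ext.Sales;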


Hash Distributions, Shuffling, and Data Types

Reiss McSporran explains an issue in Azure Synapse Analytics dedicated SQL pools:

Oh dear. Shuffle. On the surface, shuffle is not necessarily a bad thing and may even be expected, but not in this scenario. So what is shuffle and why can it be bad?

Shuffle occurs when part of a distributed table is moved to a different node during query execution. To do this, a hash value is computed using the join columns, the node that owns that hash value is found, and the row is sent to that node for processing. If the tables you're joining on have different hash distributions, you'd expect this to happen (hence why it's not always a bad thing!), but in this case I've used the same value to hash both of my tables, so what's gone wrong?

Click through to see what’s gone wrong.
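
A sketch of the failure mode with hypothetical tables: two columns only hash to the same distribution when their data types match exactly, so a mismatch forces movement even though both tables are hash-distributed on the join column.

    -- Both tables are hash-distributed on CustomerId, but INT and BIGINT
    -- hash differently, so a join between them still triggers a shuffle.
    CREATE TABLE dbo.Orders
    (
        OrderId    INT NOT NULL,
        CustomerId INT NOT NULL
    )
    WITH ( DISTRIBUTION = HASH(CustomerId) );

    CREATE TABLE dbo.Customers
    (
        CustomerId BIGINT NOT NULL,  -- the mismatched data type is the culprit
        Name       NVARCHAR(100)
    )
    WITH ( DISTRIBUTION = HASH(CustomerId) );

    -- The fix is to align the two CustomerId types so the tables hash
    -- identically and the join runs without data movement.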
