
Category: Polybase

Polybase Statistics

I dig into a non-trivial Polybase query:

Polybase offers the ability to create statistics on external tables, the same way that you would on normal tables.  There are a few rules about these statistics:

  1. Stats are not auto-created.  You need to create all statistics manually.

  2. Stats are not auto-updated.  You will need to update all statistics manually, and currently, the only way you can do that is to drop and re-create the stats.

  3. When you create statistics, SQL Server pulls the data into a temp table, so if you have a billion-row table, you’d better have the tempdb space to pull that off.  To mitigate this, you can create stats on a sample of the data (see the sketch after this list).
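
As a rough sketch of what that looks like in practice (the table and column names here are hypothetical), you create statistics with a sample and later refresh them by dropping and re-creating:

-- Stats are not auto-created on external tables; create them explicitly.
-- Sampling avoids pulling the entire table into tempdb.
CREATE STATISTICS st_RemoteSales_SaleAmount
    ON dbo.RemoteSales (SaleAmount)
    WITH SAMPLE 10 PERCENT;

-- There is no auto-update; to refresh, drop and re-create the statistics object.
DROP STATISTICS dbo.RemoteSales.st_RemoteSales_SaleAmount;

CREATE STATISTICS st_RemoteSales_SaleAmount
    ON dbo.RemoteSales (SaleAmount)
    WITH SAMPLE 10 PERCENT;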

Round one did not end on a high note, so we’ll see what round two has to offer.


Connecting SQL Server To Hadoop Using Polybase

I have a post up on using Polybase to create an external table which points to Hadoop:

An interesting thing about FIELD_TERMINATOR is that it can be multi-character.  MSDN uses ~|~ as a potential delimiter.  The reason you’d look at a multi-character delimiter is that not all file formats handle quoted identifiers (putting quotation marks around strings so that commas inside the quotes are treated as punctuation rather than field separators) very well.  For example, the default Hive SerDe (Serializer and Deserializer) does not handle quoted identifiers; you can grab a different SerDe which does support them, or you can make your delimiter something which is guaranteed not to show up in the file.

You can also set some defaults such as date format, string format, and data compression codec you’re using, but we don’t need those here.  Read the MSDN doc above if you’re interested in digging into that a bit further.
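
To make the delimiter discussion concrete, here is a rough sketch of an external file format and external table using a multi-character delimiter.  The data source name (HadoopCluster), the table definition, and the HDFS path are all hypothetical; the FIELD_TERMINATOR syntax is the point of interest:

CREATE EXTERNAL FILE FORMAT TextFileFormatTilde
WITH (
    FORMAT_TYPE = DELIMITEDTEXT,
    FORMAT_OPTIONS (
        -- Multi-character delimiter, chosen because it should never appear in the data.
        FIELD_TERMINATOR = '~|~',
        USE_TYPE_DEFAULT = TRUE
    )
);

CREATE EXTERNAL TABLE dbo.Flights
(
    FlightDate DATE,
    Carrier VARCHAR(10),
    ArrivalDelay INT
)
WITH (
    LOCATION = '/data/flights/',
    DATA_SOURCE = HadoopCluster,
    FILE_FORMAT = TextFileFormatTilde
);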

It’s a bit of a read, but the end result is that we can retrieve data from a Hadoop cluster as though it were coming from a standard SQL Server table.  This is easily my favorite feature in SQL Server 2016.


Configuring Polybase

I have a blog post up on configuring Polybase:

Microsoft’s next recommendation is to make sure that predicate pushdown is enabled.  To do that, we’re going to go back to the Hadoop VM and grab our yarn.application.classpath from there: cd to /etc/hadoop/conf/ and vi yarn-site.xml (or use whatever other text editor you want).  Copy the value for yarn.application.classpath, which should be a pretty long string.  Mine looks like:

<value>$HADOOP_CONF_DIR,/usr/hdp/current/hadoop-client/*,/usr/hdp/current/hadoop-client/lib/*,/usr/hdp/current/hadoop-hdfs-client/*,/usr/hdp/current/hadoop-hdfs-client/lib/*,/usr/hdp/current/hadoop-yarn-client/*,/usr/hdp/current/hadoop-yarn-client/lib/*</value>

Now that you have a copy of that value, go to your SQL Server installation directory (by default, C:\Program Files\Microsoft SQL Server\MSSQL13.MSSQLSERVER\MSSQL\Binn\Polybase\Hadoop\conf) and open up yarn-site.xml.  Paste the value into the corresponding yarn.application.classpath setting and you’re good to go.
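
Once the classpath is in place, one quick way to check that pushdown actually works is to force it with a query hint.  The hint itself is real; the table below is hypothetical.  If pushdown is misconfigured, a forced-pushdown query fails rather than silently streaming all of the data over:

-- Forces the predicate to be evaluated as a MapReduce job on the Hadoop side.
SELECT COUNT(*)
FROM dbo.Flights
WHERE ArrivalDelay > 60
OPTION (FORCE EXTERNALPUSHDOWN);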

This is part one of a series on using Polybase.


Polybase Row Size Limits

Manoj Pandey notes that Polybase has a row size limit:

With the error description it’s quite evident that external tables do not support a row size of more than 32,768 bytes. Still, I took a look online and found in the Azure documentation that this is currently a limitation with Polybase. The Azure document mentions:

Wide row support is not available yet: “If you are using Polybase to load your tables, define your tables so that the maximum possible row size, including the full length of variable length columns, does not exceed 32,767 bytes. While you can define a row with variable length data that can exceed this figure, and load rows with BCP, you will not be able to use Polybase to load this data quite yet. Polybase support for wide rows will be added soon. Also, try to limit the size of your variable length columns for even better throughput for running queries.”

You can still use varchar(max) and nvarchar(max) for data types (unlike the Hive provider, which has a strict limit of 8000 characters for a single column) but can’t break that 32K mark.
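
In practice, that means adding up the maximum defined widths of your variable-length columns before creating the external table.  A hypothetical example that stays comfortably under the limit (the data source and file format names assume objects created earlier):

CREATE EXTERNAL TABLE dbo.CustomerStage
(
    CustomerID INT,                -- 4 bytes
    CustomerName NVARCHAR(200),    -- up to 400 bytes
    Notes NVARCHAR(4000)           -- up to 8,000 bytes; total stays well under 32,767
)
WITH (
    LOCATION = '/data/customers/',
    DATA_SOURCE = AzureStorage,        -- assumes an existing external data source
    FILE_FORMAT = TextFileFormatTilde  -- assumes an existing external file format
);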


Polybase

Ayman El-Ghazali talks Polybase:

HDFS is a distributed file system that works differently than what we’re used to on the Windows side of things; the general principle is to use cheap commodity hardware that replicates data in order to account for availability and to prevent loss of data. With that in mind, it makes a great use case for storing a lot of data cheaply for archiving purposes, or for storing large quantities of data that need to be processed in large quantities as well.

For more information please visit: https://msdn.microsoft.com/en-us/library/mt143171.aspx

Now if you want to try it out for yourself, make sure you install the PolyBase Engine (from the SQL Server setup) and feel free to try the modified code sample below.

Polybase is, without a doubt, my favorite SQL Server 2016 feature.  I am excited to put this through its paces in a production environment.


Configuring Polybase

David Benoit walks through how to configure a Polybase cluster:

When you get to the PolyBase configuration screen, you have the option to run this as a standalone instance, or to use the SQL Server as part of a scale-out group. Personally, I don’t think that anyone should ever choose the standalone instance option. You can always run a SQL Server configured for a scale-out group as a standalone instance, BUT you can’t change (at least not today) a SQL Server configured as a standalone PolyBase instance to run as part of a PolyBase cluster once you have completed the install.

This note sounds like the argument for clustering all SQL Server instances.
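
If you do install as part of a scale-out group, you can confirm which head and compute nodes SQL Server sees by querying a PolyBase DMV that ships with the feature:

-- Lists the head node and any compute nodes in the PolyBase scale-out group.
SELECT * FROM sys.dm_exec_compute_nodes;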


Loading Azure SQL Data Warehouse

Grant Fritchey enumerates various pains he went through to load data into Azure SQL Data Warehouse:

Let’s bottom line this. Data is dirty. Any ETL process is likely to expose the trashy data, so what are the real issues here?

First up, Powershell is my bestest buddy. Scripting everything through Posh right out of the gate made a huge difference in my ability to constantly reset. The fact that our most common processes spit out UTF16, while not a problem, is something you need to get on top of (see Posh above). AZCopy works really well and the command line is easy to implement, but you’ll need to go through the process of setting up all the appropriate connections.

Polybase is wicked cool. Yes, I had to hop through hoops to get the data clean, but, what you should note in the above descriptions, at no point did I need to go and recreate my EXTERNAL TABLE from Azure SQL Data Warehouse. I just kept updating the files through my scripts and as soon as they were loaded up to blob storage, I ran a query against them. No additional work required. That’s nice.

You’re going to want to go through your data types on your data before you start the migration. You’re going to have to plan for a lot of time cleaning your data if you haven’t already.

Grant’s method is the right way, especially for early tests.  In practice, for the type of data you’d put into Polybase, you might want to create the external table to allow rejecting a certain number of rows.  Grant didn’t specify the REJECT_TYPE and REJECT_VALUE attributes when creating his external table, so the default of 0 rejected rows was used.  In a warehouse with billions of rows, hand-fixing all of that data is a nasty proposition, and if you’re writing queries whose results likely won’t change if a few (dozen?) records get dropped, rejecting bad data might be a good way to keep some of your sanity.
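
A rough sketch of what that might look like (the table, data source, and file format names are hypothetical; the REJECT options are the point):

CREATE EXTERNAL TABLE dbo.SalesStage
(
    SaleID BIGINT,
    SaleDate DATE,
    Amount DECIMAL(18, 2)
)
WITH (
    LOCATION = '/sales/',
    DATA_SOURCE = AzureBlobStore,   -- assumes an existing external data source
    FILE_FORMAT = SalesFileFormat,  -- assumes an existing external file format
    REJECT_TYPE = VALUE,
    REJECT_VALUE = 100              -- tolerate up to 100 bad rows before the load fails
);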
