Press "Enter" to skip to content

Month: September 2018

String Or Binary Data Would Be Truncated

Denis Gobo shares one of the biggest improvements in SQL Server 2019 CTP 2.0:

And there we go: you get the table name, the column name, as well as the value. Notice that the message ID changed from 8152 to 2628 now.

Msg 2628, Level 16, State 1, Line 20
String or binary data would be truncated in table ‘truncatetest.dbo.TruncateMe’, column ‘somevalue’. Truncated value: ‘33333’.
The statement has been terminated.

So it looks like it only returns the first value that generates the error. Let’s change the first value to fit into the column and execute the insert statement again.

It’s not perfect, as it only shows one column from the first failed row, but that is still a lot more information than we had before, and I’m happy that this is making it into the product.
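For reference, a minimal repro in the spirit of what Denis walks through might look like the sketch below. The database, table, and column names mirror the quoted error message, but the DDL and values are reconstructed rather than copied from his script, and it assumes the CTP 2.0 behavior where the richer message is switched on with trace flag 460.

CREATE DATABASE truncatetest;
GO
USE truncatetest;
GO
CREATE TABLE dbo.TruncateMe (somevalue varchar(5));
GO
DBCC TRACEON(460);  -- assumption: in CTP 2.0 the new Msg 2628 text is gated behind trace flag 460
GO
-- '333333' does not fit in varchar(5), so the insert fails with Msg 2628,
-- which now names the table, the column, and the offending (truncated) value.
INSERT dbo.TruncateMe (somevalue)
VALUES ('11111'), ('333333');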


Estimating Columnstore Index Size

Niko Neugebauer is somewhat impressed with an update to sp_estimate_data_compression_savings:

The final results were pretty much amazing – 2.1 GB, making the estimation of the sys.sp_estimate_data_compression_savings System Stored Procedure much more precise than my own function! This leaves me very happy and makes me want to investigate and learn how this new stored procedure is capable of providing better estimations.

I decided to test the other tables within the TPCH database, and my test on the Orders table showed a different situation, where the 0.7 GB estimation of the sys.sp_estimate_data_compression_savings Stored Procedure was pretty far off compared to the CISL dbo.cstore_sp_estimate_columnstore_compression_savings – which showed 0.92 GB while the end result was 0.89 GB.
I guess the final answer is that it will depend, but the fact that the estimations of the new stored procedure are not totally off the mark is an incredibly good sign, though I would still keep an eye or even two on the provided estimates.

Read the whole thing for more details on these examples.
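For context, the procedure being evaluated is the long-standing sys.sp_estimate_data_compression_savings, which in SQL Server 2019 also accepts columnstore targets. A hedged sketch of the call (the TPC-H object names are assumed, not taken from Niko’s scripts):

EXEC sys.sp_estimate_data_compression_savings
    @schema_name      = N'dbo',
    @object_name      = N'lineitem',       -- assumed TPC-H table name
    @index_id         = NULL,              -- all indexes
    @partition_number = NULL,              -- all partitions
    @data_compression = N'COLUMNSTORE';    -- SQL Server 2019 also accepts N'COLUMNSTORE_ARCHIVE'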


Parsing Numeric Values From Multiple Cultures

Bert Wagner shows us a good way of converting strings to numbers when multiple cultures are in play:

Why are the salaries stored as nvarchar and formatted with commas, spaces, and periods?

Great question!  Someone wanted to make sure these amounts would look good in the UI so storing the formatted values in the database would be the way to go…

Pretty for the UI, but not so great when you need to do analysis on it.

Admittedly, TRY_PARSE is slow, but for a one-time conversion as you try to fix a bad idea, it’s fine.
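To see the shape of the technique Bert describes, here is a minimal TRY_PARSE sketch with explicit cultures; the sample strings are illustrative rather than Bert’s data:

SELECT
    TRY_PARSE(N'$40,000.00'   AS money USING 'en-US') AS us_format,
    TRY_PARSE(N'40.000,00'    AS money USING 'de-DE') AS german_format,
    TRY_PARSE(N'not a salary' AS money USING 'en-US') AS unparseable;  -- returns NULL instead of raising an error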


Hadoop + SQL Server In 2019

Travis Wright shows off a big part of what the SQL Server team has been working on the last couple of years:

SQL Server 2019 big data clusters provide a complete AI platform. Data can be easily ingested via Spark Streaming or traditional SQL inserts and stored in HDFS, relational tables, graph, or JSON/XML. Data can be prepared by using either Spark jobs or Transact-SQL (T-SQL) queries and fed into machine learning model training routines in either Spark or the SQL Server master instance using a variety of programming languages, including Java, Python, R, and Scala. The resulting models can then be operationalized in batch scoring jobs in Spark, in T-SQL stored procedures for real-time scoring, or encapsulated in REST API containers hosted in the big data cluster.

SQL Server big data clusters provide all the tools and systems to ingest, store, and prepare data for analysis as well as to train the machine learning models, store the models, and operationalize them.
Data can be ingested using Spark Streaming, by inserting data directly to HDFS through the HDFS API, or by inserting data into SQL Server through standard T-SQL insert queries. The data can be stored in files in HDFS, or partitioned and stored in data pools, or stored in the SQL Server master instance in tables, graph, or JSON/XML. Either T-SQL or Spark can be used to prepare data by running batch jobs to transform the data, aggregate it, or perform other data wrangling tasks.

Data scientists can choose either to use SQL Server Machine Learning Services in the master instance to run R, Python, or Java model training scripts or to use Spark. In either case, the full library of open-source machine learning libraries, such as TensorFlow or Caffe, can be used to train models.

Lastly, once the models are trained, they can be operationalized in the SQL Server master instance using real-time, native scoring via the PREDICT function in a stored procedure in the SQL Server master instance; or you can use batch scoring over the data in HDFS with Spark. Alternatively, using tools provided with the big data cluster, data engineers can easily wrap the model in a REST API and provision the API + model as a container on the big data cluster as a scoring microservice for easy integration into any application.

I’ve wanted Spark integration ever since 2016 and we’re going to get it.
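As a rough illustration of the native-scoring path the excerpt mentions, a PREDICT call in the master instance looks something like the following; the model store, table, and column names are hypothetical:

DECLARE @model varbinary(max) =
    (SELECT model_object FROM dbo.models WHERE model_name = N'churn_model');  -- hypothetical serialized model

SELECT d.customer_id, p.churn_probability
FROM PREDICT(MODEL = @model, DATA = dbo.new_customers AS d)
WITH (churn_probability float) AS p;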


Combining Parameters And Prompts In PowerShell

Shane O’Neill compromises with a co-worker on building a PowerShell function:

As you may have guessed…

…this did not go down well with my co-worker.

That?! How are we supposed to know to write “!?” there? How does that even mean help anyway? Can we not just get the help message to show up on its own!

To which I said…

We can but it’s going to create a massive bottleneck! If we try and put in the help messages then it’s going to slow things down eventually because we’re not going to be able to run this without someone babysitting it! Do you want to be the person that has to sit there watching this every single time it’s run?

Compromise is a wonderful thing though and we eventually managed to merge the best of both worlds…

Shane gives us a solution and has a couple of updates for simpler solutions to boot.


Formatting Summary Tables In R

Laura Ellis shows us how to create formatted tables using the formattable package in R:

We are going to narrow down the data set to focus on 4 key health metrics. Specifically the prevalence of obesity, tobacco use, cardiovascular disease and obesity. We are then going to select only the indicator name and yearly KPI value columns. Finally we are going to make extra columns to display the 2011 to 2016 yearly average and the 2011 to 2016 metric improvements.

Tables are an area of data visualization that we tend to forget at our own peril.


Tol Color Schemes In R

Jason C. Fisher walks us through a color scheme generator based on Paul Tol’s research:

Choosing colors for a graphic is a bit like taking a trip down the rabbit hole, that is, it can take much longer than expected and be both fun and frustrating at the same time. Striking a balance between colors that look good to you and your audience is important. Keep in mind that color blindness affects many individuals throughout the world and it is incumbent on you to choose a color scheme that works in color-blind vision. Luckily there are a number of excellent R packages that address this very issue, such as the colorspace, RColorBrewer, and viridis packages. And because this is R, where diversity is king, why not offer one more function for creating color blind friendly palettes.

Let me introduce the GetTolColors function in the R-package inlmisc. This function generates a vector of colors from qualitative, diverging, and sequential color schemes by Paul Tol (2018). The original inspiration for developing this function came from Peter Carl’s blog post describing color schemes from an older issue of Paul Tol’s Technical Note (issue 2.2, released Dec. 2012). And the qualitative color schemes described in his blog post found their way into the ptol_pal function in the R-package ggthemes. My intent with this document is to exhibit the latest Tol color schemes (issue 3.0, released May 2018) and show that they are not only visually pleasing but also well thought out.

Read on for step-by-step instructions and to see some of the palettes.  The package authors have taken care in color design, so check it out.


Labeling Line Ends In ggplot2

Simon Jackson shows how you can use the secondary axis to label line endings in ggplot2:

Now we can use scale_y_*, with the argument sec.axis to create a second axis on the right, with numbers to be displayed at breaks, defined by our vector of line ends:

library(ggplot2)
d <- Orange  # assumed example data: the built-in Orange growth measurements (age, circumference, Tree)
d_ends <- unname(sapply(split(d, d$Tree), function(x) x$circumference[which.max(x$age)]))  # final value of each line
ggplot(d, aes(age, circumference, color = Tree)) +
  geom_line() +
  scale_y_continuous(sec.axis = sec_axis(~ ., breaks = d_ends))

This is good.  I’d really prefer to show the labels instead of the value; that way it’d be possible to eliminate the legend altogether.  H/T R-Bloggers.


Always Encrypted With Secure Enclaves In SQL Server 2019

Jakub Szymaszek walks us through Virtualization Based Security memory enclaves in Windows Server 2019 and SQL Server 2019:

Today, we are super excited to announce that you can now try and evaluate Always Encrypted with secure enclaves in the preview of SQL Server 2019.

Always Encrypted with secure enclaves in SQL Server 2019 preview uses an enclave technology called Virtualization Based Security (VBS) memory enclaves in the upcoming version of Windows (Windows Server 2019 and Windows 10, version 1809), which is currently also in preview. A VBS enclave is an isolated region of memory within the address space of a user-mode process. The isolation of VBS enclaves is provided by the Windows hypervisor, which makes VBS enclaves appear as black boxes, not only to the processes containing them, but also to all other processes and the Windows OS on the machine. Even machine administrators are not able to see the memory of the enclave. The below screenshot shows what an admin would get to see when browsing the enclave memory using a debugger (note the question marks, as opposed to the actual memory content).

The compliance regime is shifting toward preventing high-privilege users (DBAs, sysadmins, etc.) from accidentally or maliciously exposing sensitive information, so it makes sense that this is the primary security push.  I think that these changes are starting to make Always Encrypted a better option than a roll-your-own data encryption model.
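To make the payoff concrete, here is a hedged sketch of the kind of query secure enclaves newly allow to run server-side; the table and column are hypothetical, and it assumes a connection with Always Encrypted, enclave attestation, and parameterization enabled:

DECLARE @SSNPattern char(11) = '%1234';  -- typed to match the encrypted column; sent encrypted by the client driver
SELECT PatientName, SSN
FROM dbo.Patients
WHERE SSN LIKE @SSNPattern;  -- pattern matching on the encrypted column is evaluated inside the VBS enclave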


Chicago Parking Ticket Data Set

Bob Pusateri shows us a new data set to mess with:

A few weeks ago I came across this blog post by Matt Chapman. Matt filed FOIA requests with the City of Chicago and, after multiple attempts, was able to get access to over 36 million parking tickets written between 2003 and 2016. Matt goes on to explain Chicago’s parking ticket database, how he processed the data, analyzed it, and in one location got Chicago to put up additional “No Parking” signs to reduce parking tickets in that spot by 50%. That is most definitely using analytics for a great cause!

But let’s get back to that data for a second, that’s what really interests me. Matt shared his raw data for others to analyze, but it was formatted as a PostgreSQL dump. Now PostgreSQL is a great tool with an even greater price, but it’s not always the easiest to use. After spinning up a Linux VM and spending hours setting everything up as best I could, I still couldn’t get the dump to restore properly. Apparently I didn’t have all the exact versions of certain extensions installed, and because of that the tables couldn’t be loaded. Grrrr.

Bob has our backs, though, and has a properly-formatted, normalized parking ticket data set that weighs in at about 500MB.
