
Month: November 2017

Data Type Conversions In 4 Database Systems

Eleni Markou has samples for converting strings to dates, numerals, or currency in SQL Server, Postgres, Redshift, and BigQuery:

The TO_DATE function in PostgreSQL is used to convert strings into dates. Its syntax is TO_DATE(text, text) and the return type is a date.

In contrast with MS SQL Server, which has strictly specified date formats, Redshift can correctly interpret any format constructed using the patterns in the table found in the corresponding documentation.

When using TO_DATE(), one has to pay attention: even if an invalid date is passed in, it will be converted into a nominally valid date without raising any error.

There are a few other tricks in SQL Server for some of these (for example, on 2012 or newer, I’d use TRY_CONVERT rather than CONVERT).  That said, it’s a good overview of how to translate skills in one relational system to another.
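To see the difference, here's a minimal T-SQL sketch (the literal values are made up for illustration):

    -- CONVERT raises a conversion error when handed an impossible date
    SELECT CONVERT(date, '2017-11-29') AS ValidDate;       -- succeeds
    -- SELECT CONVERT(date, '2017-11-31');                 -- would raise an error

    -- TRY_CONVERT (SQL Server 2012+) returns NULL instead, so bad values
    -- can be filtered or flagged rather than aborting the batch
    SELECT TRY_CONVERT(date, '2017-11-29') AS ValidDate,   -- 2017-11-29
           TRY_CONVERT(date, '2017-11-31') AS BadDate;     -- NULL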


Handling Imbalanced Data

Tom Fawcett shows us how to handle a tricky classification problem:

The primary problem is that these classes are imbalanced: the red points are greatly outnumbered by the blue.

Research on imbalanced classes often considers imbalanced to mean a minority class of 10% to 20%. In reality, datasets can get far more imbalanced than this. Here are some examples:

  1. About 2% of credit card accounts are defrauded per year. (Most fraud detection domains are heavily imbalanced.)
  2. Medical screening for a condition is usually performed on a large population of people without the condition, to detect a small minority with it (e.g., HIV prevalence in the USA is ~0.4%).
  3. Disk drive failures are approximately 1% per year.
  4. The conversion rate of online ads has been estimated to lie between 10⁻³ and 10⁻⁶.
  5. Factory production defect rates typically run about 0.1%.

Many of these domains are imbalanced because they are what I call needle in a haystack problems, where machine learning classifiers are used to sort through huge populations of negative (uninteresting) cases to find the small number of positive (interesting, alarm-worthy) cases.

Read on for some good advice on how to handle imbalanced data.


Using DMVs To Plan Out Your Indexes

Eric Blinn explains how to use two particular DMVs to see which index changes you might want to make:

Missing Indexes

This group of DMVs records every scan and large key lookup. When the optimizer determines that there isn't an index to support a query request, it generally performs a scan. When this happens, a row is created in the missing index DMV showing the table and columns that were scanned. If that exact same index is requested a second time, by the same query or another similar query, then the counters are increased by 1. That value will continue to grow if the workload continues to call for the index that doesn't exist. The DMV also records the cost of the query with the table scan and a suspected percentage improvement if only that missing index had existed. The query below combines those values to determine a value number.

Click through for sample scripts for this and the index usage stats DMV. The tricky part is to synthesize the results of these DMVs into the minimum number of viable indexes. Unlike the optimizer, which cares only about making the particular query it is compiling run faster, you have knowledge of all of the queries in play and can find commonalities.
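Eric's post has the full scripts; as a rough sketch of the shape of that missing-index query (the weighting below is an assumption for illustration, not necessarily his exact formula):

    -- Combine the missing-index DMVs into a single "value" number per suggestion
    SELECT d.statement AS table_name,
           d.equality_columns,
           d.inequality_columns,
           d.included_columns,
           s.user_seeks + s.user_scans AS times_requested,
           (s.user_seeks + s.user_scans)
               * s.avg_total_user_cost
               * s.avg_user_impact AS value_number
    FROM sys.dm_db_missing_index_details AS d
        JOIN sys.dm_db_missing_index_groups AS g
            ON g.index_handle = d.index_handle
        JOIN sys.dm_db_missing_index_group_stats AS s
            ON s.group_handle = g.index_group_handle
    ORDER BY value_number DESC;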


What Update Locks Do

Guy Glantser explains the process around updating data in SQL Server, particularly the benefit of having update locks:

In order to update a row, SQL Server first needs to find that row, and only then can it perform the update. So every UPDATE operation is actually split into two phases: first read, then write. During the read phase, the resource is locked for read, and then the lock is converted to a write lock. This is better than locking for write all the way from the beginning, because during the read phase other sessions might also need to read the resource, and there is no reason to block them until we start the write phase. We already know that the SHARED lock is used for read operations (phase 1) and that the EXCLUSIVE lock is used for write operations (phase 2). So what is the UPDATE lock used for?

If we used a SHARED lock for the duration of the read phase, then we might run into a deadlock when multiple sessions run the same UPDATE statement concurrently.

Read on for more details.
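The same conversion deadlock can bite when you split the read and write yourself inside a transaction; asking for the update lock explicitly on the read avoids it. A minimal sketch (dbo.Accounts and its columns are hypothetical):

    BEGIN TRANSACTION;

    -- Take a U lock up front instead of an S lock, so two sessions cannot
    -- both read the row and then deadlock while converting to X.
    SELECT Balance
    FROM dbo.Accounts WITH (UPDLOCK)
    WHERE AccountId = 42;

    UPDATE dbo.Accounts
    SET Balance = Balance - 100
    WHERE AccountId = 42;   -- the U lock converts to an X lock here

    COMMIT TRANSACTION;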


More On Microsoft SQL Operations Studio

Dan Guzman shares some thoughts on Microsoft SQL Operations Studio:

Microsoft made the new cross-platform SQL Operations Studio (SOS) tool available on GitHub this week as a free open-source project. This SOS preview allows one to develop and manage SQL Server and Azure SQL Database from Windows, Linux, and macOS. The current preview can be downloaded from the SOS portal page, which also contains links to impressive quick-start guides, how-tos, and tutorials. I encourage you to try out the preview and improve it by reporting issues and offering suggestions.

If you are a developer, consider contributing to this project on GitHub. SOS is built on the Electron framework, which leverages JavaScript, HTML, and Node.js technologies to build rich cross-platform desktop applications. This is the same stack that the popular VS Code IDE employs, so it's not surprising that SOS has a similar look and feel.

Click through for Dan’s thoughts and also a link to try it yourself.


Using The Restore-DbaDatabase Pipeline

Stuart Moore describes the updated Restore-DbaDatabase cmdlet:

The biggest change is that Restore-DbaDatabase is now a wrapper around five public functions:

  • Get-DbaBackupInformation
  • Select-DbaBackupInformation
  • Format-DbaBackupInformation
  • Test-DbaBackupInformation
  • Invoke-DbaAdvancedRestore

These can be used individually for advanced restore scenarios; I'll go through some examples in a later post.

Stuart then provides additional information at the various steps, explaining at a high level how things work.


What Happens In Deep Neural Networks?

Adrian Colyer has a two-parter summarizing an interesting academic paper regarding deep neural networks.  Part one introduces the theory:

Section 2.4 contains a discussion on the crucial role of noise in making the analysis useful (which sounds kind of odd on first reading!). I don’t fully understand this part, but here’s the gist:

The learning complexity is related to the number of relevant bits required from the input patterns X for a good enough prediction of the output label Y, or the minimal I(X; X̂) under a constraint on I(X̂; Y) given by the IB.

Without some noise (introduced for example by the use of sigmoid activation functions), the mutual information is simply the entropy H(Y), independent of the actual function we're trying to learn, and nothing in the structure of the points p(y|x) gives us any hint as to the learning complexity of the rule. With some noise, the function turns into a stochastic rule, and we can escape this problem. Anyone with a lay-person's explanation of why this works, please do post in the comments!
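For reference, the IB tradeoff mentioned above is the standard information bottleneck objective from the literature (not spelled out in the excerpt itself): find a compressed representation X̂ of X by choosing the encoding p(x̂|x) that minimizes

    \min_{p(\hat{x} \mid x)} \; I(X; \hat{X}) - \beta \, I(\hat{X}; Y)

where larger values of β favor preserving information about the label Y over compressing the input X.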

Part two digs in deeper:

The different colours in the chart represent the different hidden layers (and there are multiple points of each colour because we’re looking at 50 different runs all plotted together). On the x-axis is I(X;T), so as we move to the right on the x-axis, the amount of mutual information between the hidden layer and the input X increases. On the y-axis is I(T;Y), so as we move up on the y-axis, the amount of mutual information between the hidden layer and the output Y increases.

I’m used to thinking of progressing through the network layers from left to right, so it took a few moments for it to sink in that it’s the lowest layer which appears in the top-right of this plot (maintains the most mutual information), and the top-most layer which appears in the bottom-left (has retained almost no mutual information before any training). So the information path being followed goes from the top-right corner to the bottom-left traveling down the slope.

This is worth a careful reading.


Troubleshooting Memory-Optimized Index Performance

Kunal Karoth has a post up on performance troubleshooting with In-Memory OLTP:

In the previous blog post In-Memory OLTP Indexes – Part 1: Recommendations, we gave you an update on the latest features of In-Memory OLTP technology. We also summarized the key characteristics of memory-optimized indexes and shared some guidelines and recommendations on how to best choose and configure an index for your memory-optimized table. At this point, if you haven't read through the previous blog post, we strongly recommend you do so. In this blog post we continue onward, taking the learnings from the previous post (Part 1) and applying them in practice with some sample examples. The learnings from this blog post (Part 2) will be particularly useful if you are experiencing query performance issues with memory-optimized tables, either after migrating from disk-based tables or, in general, with a production workload leveraging memory-optimized tables.

To summarize, this blog post covers the following:

  • Common mistakes and pitfalls to avoid when working with memory-optimized indexes.
  • Best practices to follow when configuring your memory-optimized indexes for optimal performance.
  • Troubleshooting and mitigating your query performance issues with memory-optimized indexes.
  • Monitoring your query performance with memory-optimized indexes.

There's a lot of detail in this post, and tuning these types of indexes isn't quite the same as tuning normal, disk-based indexes.
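As a refresher on what's being configured, here's a minimal sketch of a memory-optimized table with both index types (the table, column names, and bucket count are hypothetical; sizing BUCKET_COUNT to roughly the number of distinct key values is one of the pitfalls the post covers):

    CREATE TABLE dbo.SessionState
    (
        SessionId INT NOT NULL
            PRIMARY KEY NONCLUSTERED HASH WITH (BUCKET_COUNT = 1000000),
        UserName NVARCHAR(100) NOT NULL,
        LastTouch DATETIME2 NOT NULL,
        -- range (nonclustered) index for ordered scans and inequality predicates
        INDEX IX_SessionState_LastTouch NONCLUSTERED (LastTouch)
    )
    WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_AND_DATA);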


Alerting In Azure SQL Database

Arun Sirpal shows how to set up an alert for an Azure SQL Database:

I keep things simple and like to look at certain performance-based metrics, but before talking about what metrics are available, let's step through an example.

For this post I want to set up an alert for CPU percentage utilized: when it is greater than 50% over the last 5 minutes, I would like to know about it. The first step is to navigate to your Azure SQL Database.

Click through for a screenshot-driven guided tour.


Indexing Tradeoffs

Jeff Schwartz continues his series on index tuning, including a section on what happens when you have too many indexes on a table:

Larger numbers of indices create exponentially more query plan possibilities. When too many choices exist, the Optimizer will give up partway through and just pick the best plan thus far. As more indices are added, the problem worsens, and compilation times, i.e., processor times, increase to a point.

This can be illustrated best by reviewing an actual customer example. In this case, one table had 144 indices attached to it and several others had between 20 and 130 indices. The queries were quite complex, with as many as fifteen joins, many of which were outer joins. Query and index tuning were impossible because query performance was often counterintuitive and sometimes nonsensical. Adding an index that addressed a specific query need often made the query run worse one time and better the next. Note: cached query plan issues, e.g., parameter sniffing or plan reuse, were not problems in this case.

The only solution was to tear down the ENTIRE indexing structure and rebuild it with SQL Server's guidance and nine days' worth of production queries. Table 5 summarizes the results of the index restructuring project. The performance of 98 percent of the queries was comparable to or better than it was when the large numbers of indices were present.

Don’t be that company.
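If you want a quick look at whether any of your own tables are trending that way, here's a simple sketch against the system catalog (the threshold of 10 is an arbitrary starting point):

    -- Count indexes per user table; unusually high counts deserve scrutiny
    SELECT OBJECT_SCHEMA_NAME(i.object_id) AS schema_name,
           OBJECT_NAME(i.object_id) AS table_name,
           COUNT(*) AS index_count
    FROM sys.indexes AS i
        JOIN sys.objects AS o
            ON o.object_id = i.object_id
    WHERE o.is_ms_shipped = 0
      AND i.type > 0             -- exclude heaps
    GROUP BY i.object_id
    HAVING COUNT(*) > 10
    ORDER BY index_count DESC;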
