Press "Enter" to skip to content

Category: T-SQL Tuesday

Text Search

Anders Pedersen discusses one method he used to implement fast text search in SQL Server:

Looking into what was needed, I quickly realized there was a LOT of data; I guess 50+ years of news broadcasts will do that.  Considering this was in the early 2000s, some innovation was needed beyond anything I had coded before.  Obviously LIKE searches were out of the question, and full text search was not available.  So what to do?

Basically, I decided to break each broadcast down into individual words stored in a separate table; the entire application fit in two tables: Story and Words.

This is a case in which thinking about the grain of data helps solve an otherwise-intractable problem.
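A minimal sketch of what such a two-table design could look like (the schema and names here are my own guesses, not Anders's actual implementation); the payoff is that a search becomes an indexed join on exact words rather than a LIKE scan:

CREATE TABLE dbo.Story
(
    StoryID int IDENTITY(1,1) NOT NULL PRIMARY KEY,
    BroadcastDate date NOT NULL,
    StoryText nvarchar(max) NOT NULL
);

CREATE TABLE dbo.Words
(
    Word nvarchar(50) NOT NULL,
    StoryID int NOT NULL REFERENCES dbo.Story (StoryID),
    CONSTRAINT PK_Words PRIMARY KEY (Word, StoryID)  -- seeks on Word are cheap
);

-- Find stories containing ALL of the search terms.
SELECT s.StoryID, s.BroadcastDate
FROM dbo.Story AS s
JOIN dbo.Words AS w
    ON w.StoryID = s.StoryID
WHERE w.Word IN (N'election', N'results')
GROUP BY s.StoryID, s.BroadcastDate
HAVING COUNT(DISTINCT w.Word) = 2;  -- matched both terms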


Making Text Search Faster

Rob Farley looks at binary collations, and specifically at how hash aggregation performs under them:

If there’s an index on the column you’re grouping by, then changing the collation is going to hurt a bit. Grouping could take advantage of a Stream Aggregate under our indexed collation, but changing the collation of the column throws away the index order (ORDER BY doesn’t get handled well across a collation change), which means a Hash is required. But comparing two query plans that both use Hash Match (Aggregate), one on a case-insensitive collation and one on a binary collation, I found the latter was slightly faster. Not as drastic a change as with searching, but still 10-30% better. One would run in about 12 seconds, and one in about 10.

Be sure to check out his comments for more details.
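As a rough illustration of the kind of comparison Rob describes (the table and column names are hypothetical), you can apply a binary collation inline and compare the two plans:

-- Aggregation under the column's native, case-insensitive collation.
SELECT SomeText, COUNT(*) AS Cnt
FROM dbo.SomeTable
GROUP BY SomeText;

-- Aggregation under a binary collation: hashing and comparisons get
-- cheaper, but any index ordering on SomeText no longer applies.
SELECT SomeText COLLATE Latin1_General_BIN2 AS SomeText, COUNT(*) AS Cnt
FROM dbo.SomeTable
GROUP BY SomeText COLLATE Latin1_General_BIN2;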


Power BI Roundup

Jorge Segarra has a roundup of this month’s T-SQL Tuesday:

This month I challenged the blogging community to share their own creations in Power BI. We got a ton of great entries this month; thank you to everyone who participated! My overarching goal for this month’s topic was to get folks who may not normally play in the BI space to use this fantastic solution, and maybe get some ideas flowing on how they may be able to apply it in their everyday work.

The part I like most about T-SQL Tuesday is that it introduces you to a whole new set of bloggers and a whole new set of perspectives on any particular topic.


Power BI With SSAS

Jens Vestergaard’s T-SQL Tuesday entry involves Power BI feeding from Analysis Services:

My story with this half-baked product (the dashboard you are about to see) is that I needed some way of tracking performance on a couple of Analysis Services (SSAS) query servers. There are a lot of good posts and talks about how to collect and store performance counters and SSAS logs out there, and I suggest you look into this, this or that, if you need inspiration.

The current data set is about 200K rows, as I am sampling each server every five minutes.

Both of these are valuable tools in a Microsoft BI environment.


Power BI Plus Fitbit

Rob Farley visualizes his Fitbit data with Power BI:

I added a column:

RollinAvgSteps =
AVERAGEX(
    FILTER(
        fitbit_export_20160214,
        EARLIER(fitbit_export_20160214[Date]) >= fitbit_export_20160214[Date]
    ),
    fitbit_export_20160214[Steps]
)

…which takes the average of my steps to date. There are a bunch of ways to achieve this, but this is the way that I chose. And you can see that the average line is (happily) improving! Oh, and because I pulled down the extract on the 14th, there’s a dip at the end. My numbers were much healthier by the end of the day, and despite spending way too long NOT walking, I did about 7244 steps that day.

You can see the result at http://bit.ly/RobFitbit

I like the rolling average that Rob added in.


T-SQL Tuesday Roundup

Robert Davis hosted T-SQL Tuesday, and he has a nice roundup of posts:

Thanks to everybody that participated in this month’s T-SQL Tuesday. A big thanks to everyone who wrote a participating blog post, and a really huge thanks to everyone who read the posts shared by this month’s participants. If you follow one of the links on this round-up page, I will kindly ask that you leave a comment on a blog post that you read if it teaches you something, gives you a new perspective, or makes you think. A blog post is just a bunch of words until somebody reads it, and blog writers love it when they hear that their post resonated with someone on some level.

Thanks to Robert for hosting T-SQL Tuesday #74.


Reducing Ad Hoc Query Risk

Kenneth Fisher has some tips to reduce the risk of running ad hoc queries:

  • Make sure that this is the ONLY code in your window, or that you are protected by a RETURN or SET NOEXEC ON at the top of your screen (see the sketch after this list). I have this put in place by default on new query windows. This protects you from running too much code by accident.

  • Make a habit of checking what instance you are connected to before running any ad-hoc code. Running code meant for a model or test environment in production can be a very scary thing.
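Here is a minimal sketch of that guard pattern; the SELECT is just placeholder scratch work:

RETURN;  -- exits this batch immediately, so nothing below it in the batch runs
GO

SET NOEXEC ON;  -- subsequent batches are compiled but not executed
GO

-- An accidental "execute everything" now runs none of this;
-- highlight and run only the statements you actually intend to.
SELECT name FROM sys.databases;
GO

SET NOEXEC OFF;
GO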

This is good advice.


Tracking Changed Data In Standard Edition

Mickey Stuewe wants to track changed data, but has to use Standard Edition:

I use a pattern that includes four fields on all transactional tables. This (absolutely) includes lookup tables too. The two table types that are an exception to this pattern are audit tables and error tables. I’ll cover why later in this article.

The four fields are CreatedOn, CreatedBy, UpdatedOn, and UpdatedBy. The dates should be DATETIME2. CreatedOn is the easiest to populate: you can create a default constraint on the field to populate it with GetDate().

This is a common pattern and works pretty well.  The trick is making sure that you keep that metadata up to date.
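A minimal sketch of the four-column pattern (this table is illustrative, not from Mickey's post):

CREATE TABLE dbo.Customer
(
    CustomerID int IDENTITY(1,1) NOT NULL PRIMARY KEY,
    CustomerName nvarchar(100) NOT NULL,
    CreatedOn datetime2(3) NOT NULL
        CONSTRAINT DF_Customer_CreatedOn DEFAULT (GETDATE()),
    CreatedBy nvarchar(128) NOT NULL
        CONSTRAINT DF_Customer_CreatedBy DEFAULT (SUSER_SNAME()),
    UpdatedOn datetime2(3) NULL,
    UpdatedBy nvarchar(128) NULL
);

-- The Updated columns stay accurate only if every UPDATE sets them
-- (or a trigger does it), which is the metadata upkeep mentioned above.
UPDATE dbo.Customer
SET CustomerName = N'New name',
    UpdatedOn = GETDATE(),
    UpdatedBy = SUSER_SNAME()
WHERE CustomerID = 1;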


Shredding XML

Tim Peters introduces us to shredding multi-level XML:

The XML below has data nested in different levels, which requires the nodes method to join them together. The nodes method accepts an XQuery expression and returns a rowset. That rowset can then be used with CROSS APPLY to effectively link your way down.

nodes (XQuery) as Table(Column)

The tabular format I need requires data from 3 different levels of this XML gob and I need to wade through 5 “tables” to get there.
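As a small sketch of the technique (this XML shape is invented, not the one from Tim's post), each nodes() call peels off one level, and CROSS APPLY ties each child level back to its parent row:

DECLARE @x xml = N'
<orders>
  <customer name="Contoso">
    <order id="1">
      <line sku="A" qty="2" />
      <line sku="B" qty="1" />
    </order>
  </customer>
</orders>';

SELECT
    c.n.value('@name', 'nvarchar(50)') AS CustomerName,
    o.n.value('@id',   'int')          AS OrderID,
    l.n.value('@sku',  'nvarchar(10)') AS Sku,
    l.n.value('@qty',  'int')          AS Qty
FROM @x.nodes('/orders/customer') AS c(n)
CROSS APPLY c.n.nodes('order') AS o(n)
CROSS APPLY o.n.nodes('line')  AS l(n);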

Shredding XML is something you occasionally need to do.


UPDATE FROM Clause Usage

Rob Farley shows us UPDATE FROM:

1. It can let you access data in the columns of those tables, to use in predicates or expressions.

2. It can let you filter the data in the base table, by only allowing rows which match, such as when using an inner join or right outer join.

3. It can cause rows in the base table to be returned multiple times, if multiple rows in the joined table match a single row in the base table.

4. It can introduce NULL rows, if a full or right outer join is being done (or a left outer join with the base table second) and there are rows in the joined table that don’t match any rows in the base table.
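A minimal sketch of the syntax (table and column names invented for illustration), including the sharp edge from point 3:

-- Filtered update: only Target rows with a matching Source row change.
UPDATE t
SET t.Price = s.Price
FROM dbo.Target AS t
JOIN dbo.Source AS s
    ON s.ProductID = t.ProductID
WHERE s.EffectiveDate = '20160301';

-- The sharp edge: if Source has two rows for the same ProductID,
-- the Target row is updated from an arbitrary one of them, silently.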

This is a useful bit of T-SQL-specific syntax, but it has a sharper edge than most UPDATE statements.  For a look back in history, Hugo Kornelis wanted to deprecate this syntax with the release of SQL Server 2008 (though MERGE has its own bugs and “Won’t Fix” problems, so in retrospect, perhaps it’s best that we still have UPDATE FROM).
