This table stores data for an application that has many different types of Pages. Each Page stores different types of data, but instead of creating a separate table for each type, we store all of the different data in the varchar DataValue column and track each value's original data type in a separate column.
This structure reduces the complexity of maintaining our database (compared to creating potentially hundreds of tables, one for each PageName) and makes querying easier (we only need to query one table). However, this design can also lead to some unexpected query results.
This is your daily reminder that an attribute should be a thing which describes an entity, not one of multiple things.
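The surprise lurking in that design is easy to reproduce. Here is a minimal sketch (using Python's sqlite3; the table and column names are illustrative, not the original post's schema) showing how a numeric-looking comparison against a varchar DataValue column silently goes lexical:

```python
import sqlite3

# Hypothetical recreation of the pattern described above: every value,
# regardless of its real type, lands in a single text DataValue column.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE PageData (
        PageName  TEXT,
        DataName  TEXT,
        DataValue TEXT,   -- everything stored as varchar
        DataType  TEXT    -- the original type, tracked separately
    )
""")
conn.executemany(
    "INSERT INTO PageData VALUES (?, ?, ?, ?)",
    [("Order", "Quantity", "9",   "int"),
     ("Order", "Quantity", "100", "int")],
)

# This looks like a numeric filter, but DataValue is text, so the
# comparison is lexical: '9' > '50' because '9' sorts after '5',
# while '100' < '50' because '1' sorts before '5'.
rows = conn.execute(
    "SELECT DataValue FROM PageData WHERE DataValue > '50'"
).fetchall()
print(rows)  # [('9',)] -- the quantity of 100 is silently excluded
```

The same shape of problem bites in SQL Server, where implicit conversions of the varchar column can additionally cause conversion errors or scans.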
I need to find a relatively efficient way to advance the CSILOCATOR because I need to do it over 2 billion times, if my theory about the maximum allowed value is correct. Both updating all of the rows in a delta rowgroup and deleting and reinserting advance the CSILOCATOR. I expected that small batch sizes would work best, and they did. For my table’s schema, the sweet spot for updates is about 275 rows and the sweet spot for delete/inserts is about 550 rows. Delete/inserts appeared to be faster than updates for the purpose of constantly reloading the same rows over and over.
Great post, Brent!
The other day, I was <sarcasm>blessed / honored / delighted</sarcasm> to waste several hours attempting to install SQL Server 2019 CTP 2.5 over and over again. Each time it would get through the first several steps of the installation process, but then encounter some condition causing it to roll back, finally ending with the <sarcasm>super helpful</sarcasm> error message of:
An error occurred for a dependency of the feature causing the setup process for the feature to fail.
Use the following information to resolve the error, and then try the setup process again.
That might have been ok had there actually been any information that followed. But no, there was none, not even a small piece of unhelpful information.
Solomon takes us through the blow-by-blow accounting as well as a quick rundown of the solution.
When running R scripts in Power BI, I got all sorts of error messages, all of which had one thing in common: they were complaining about one or more packages having been installed by an R version with different internals.
Click through for the solution. I’m not sure I’ve run into this issue before and I’d rather keep it that way.
Time limits: ending the neverending
Here’s one to ponder: how long can a long-running action go on before the customer (even a very patient, very digital customer) loses all interest in the outcome?
Pull up a chair. With no upper bound, we could be here a while.
Sometimes, when working with DAX, you might get the following error:
The expression refers to multiple columns. Multiple columns cannot be converted to a scalar value.
This error occurs whenever the DAX engine was expecting a single value, or scalar, and instead received a table of values. This is an easy error to make because many DAX functions, such as FILTER, SUMMARIZE, and ALL, return table values.
Eugene lays out when each scenario occurs, so check it out.
The column name in the temp table is missing an I, probably just a typo, but it has some rather pronounced effects.
The obvious next question is why the select with the subquery in it didn’t fail; after all, the query asks for ClientID from #TempClients, and there’s no such column. However, there is a ClientID column available in that query: it’s in the Orders table. And that’s a valid column for the subquery because, when we have subqueries, column binding occurs first against tables within the subquery and then, if no match is found, against tables in the outer query.
I know that the first time I experienced this, I thought it was a bug as well. But no, it’s by design and Gail explains why.
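This binding behavior follows standard SQL name resolution, so it reproduces outside SQL Server as well. A small sketch using Python's sqlite3 (table and column names are illustrative):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE Orders      (OrderID INTEGER, ClientID INTEGER);
    CREATE TABLE TempClients (ClientD INTEGER);  -- note the typo: no ClientID
    INSERT INTO Orders      VALUES (1, 10), (2, 20);
    INSERT INTO TempClients VALUES (10);
""")

# TempClients has no ClientID column, yet this query does not fail:
# the unqualified ClientID in the subquery binds outward to
# Orders.ClientID, turning the IN clause into a tautology.
rows = conn.execute("""
    SELECT OrderID
    FROM   Orders
    WHERE  ClientID IN (SELECT ClientID FROM TempClients)
""").fetchall()
print(rows)  # every order comes back, not just the one for client 10
```

Qualifying the column in the subquery (e.g. `SELECT tc.ClientID FROM TempClients tc`) is the standard defense: the typo then fails loudly at bind time instead of silently matching everything.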
Last month I was fortunate to have my first article ever published on Simple-Talk, one of the best-quality websites for SQL Server articles. During the process of writing that article, I ran into several errors related to temporal tables which I had not seen before. Some of these are documented by Microsoft, some are fairly obvious to understand, and others are not. Below, I summarize the list of errors you might run into if you are using this really cool feature.
Click through for the list.
Let me tell you about one of my least favorite things to see in PolyBase:
Error: Could not find or load main class org.apache.hadoop.mapreduce.v2.app.MRAppMaster
This error is not limited to PolyBase but is instead an issue when trying to run MapReduce jobs in Hadoop. There are several potential causes, so let’s cover each of them as they relate to PolyBase and hopefully one of these solves your issue.
Click through for four potential solutions to what ails you.
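For reference, one commonly cited fix for this particular MRAppMaster error on Hadoop 3.x clusters is making sure the MapReduce framework can find its own jars, by setting HADOOP_MAPRED_HOME in mapred-site.xml (this is the shape of the properties from the Hadoop single-node setup docs; whether it applies depends on which of the potential causes you've hit):

```xml
<property>
  <name>yarn.app.mapreduce.am.env</name>
  <value>HADOOP_MAPRED_HOME=${HADOOP_HOME}</value>
</property>
<property>
  <name>mapreduce.map.env</name>
  <value>HADOOP_MAPRED_HOME=${HADOOP_HOME}</value>
</property>
<property>
  <name>mapreduce.reduce.env</name>
  <value>HADOOP_MAPRED_HOME=${HADOOP_HOME}</value>
</property>
```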
Don’t leave this trace flag enabled.
There’s at least one bug with it as of today on SQL Server 2017 CU13: table variables will throw errors saying their contents are being truncated even when no data is going into them. Andrew Pruski reports:
Special shout out to three of my co-workers on finding that issue. I had nothing to do with it but will take credit nonetheless.