With both of these concepts combined, the file size was reduced from the original 264 MB to 238 MB, a reduction of almost 10%. You can see where the space savings have come from by comparing the before and after column sizes in the 2 tables below. The SalesValueExTax column (65MB) was replaced with the Margin column (44MB) and the CostValue column (63MB) was replaced with the CostPerCase column (50MB).
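A minimal sketch of the idea, with hypothetical table and column names (the linked article's actual schema may differ): drop the two large measures, store the smaller derived columns instead, and reconstruct the originals in a view for reporting.

```sql
-- Hypothetical fact table dbo.FactSales; names are assumptions, not the article's.
-- Replace the wide measures with the smaller derived columns:
ALTER TABLE dbo.FactSales DROP COLUMN SalesValueExTax, CostValue;

ALTER TABLE dbo.FactSales
    ADD Margin      decimal(18,4) NULL,  -- was SalesValueExTax - CostValue
        CostPerCase decimal(18,4) NULL;  -- was CostValue / Cases

-- Reconstruct the original measures at query time:
CREATE VIEW dbo.vFactSales
AS
SELECT s.Cases,
       s.Margin,
       s.CostPerCase,
       s.CostPerCase * s.Cases            AS CostValue,
       s.Margin + s.CostPerCase * s.Cases AS SalesValueExTax
FROM dbo.FactSales AS s;
```

The derived columns compress better because they have lower cardinality (margins and per-case costs repeat far more often than extended totals), which is where the column-size reductions come from.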
Check it out, as well as the memory tool.
Jovan Popovic shows how to use data in GeoJSON format.
The "geometry" object holds the type of the spatial data and its coordinates. The "properties" object can hold various custom properties, such as address line, town, postcode, and other information that describes the object. SQL Server stores spatial information as geometry or geography types, and stores the additional properties in standard table columns.
Since GeoJSON is JSON, it can be formatted using the new FOR JSON clause in SQL Server.
In this example, we are going to format the content of the Person.Address table, which has the spatial column SpatialLocation, as GeoJSON using the FOR JSON clause.
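A rough sketch of that approach, assuming the AdventureWorks Person.Address table (where SpatialLocation is a geography column); the dotted aliases in FOR JSON PATH produce the nested "geometry" and "properties" objects:

```sql
-- Emit Person.Address rows as GeoJSON-style Feature objects.
SELECT TOP (2)
    [type]                 = 'Feature',
    [geometry.type]        = 'Point',
    -- JSON_QUERY keeps the coordinate pair as a raw JSON array, not an escaped string:
    [geometry.coordinates] = JSON_QUERY('[' + CONVERT(varchar(30), SpatialLocation.Long)
                             + ',' + CONVERT(varchar(30), SpatialLocation.Lat) + ']'),
    [properties.address]   = AddressLine1,
    [properties.town]      = City,
    [properties.postcode]  = PostalCode
FROM Person.Address
FOR JSON PATH;
```

Note that GeoJSON puts longitude before latitude in the coordinates array, which is why Long is emitted first.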
The new OPENJSON function in SQL Server 2016 enables you to parse GeoJSON text and load it into SQL Server spatial types.
In this example, I will load GeoJSON text that contains a set of bike share locations in Washington, DC. The GeoJSON sample is provided by ESRI and can be found at https://github.com/Esri/geojson-layer-js/blob/master/data/dc-bike-share.json
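A minimal sketch of the reverse direction, with an inline GeoJSON snippet standing in for the ESRI file: OPENJSON shreds the features array, and geography::Point builds the spatial values (latitude first, then longitude — the opposite of GeoJSON's coordinate order):

```sql
-- Inline sample; in practice @json would hold the downloaded GeoJSON file.
DECLARE @json nvarchar(max) = N'{"type":"FeatureCollection","features":[
  {"type":"Feature",
   "geometry":{"type":"Point","coordinates":[-77.03,38.90]},
   "properties":{"name":"Station 1"}}]}';

SELECT
    name     = JSON_VALUE(feature.value, '$.properties.name'),
    location = geography::Point(
        CAST(JSON_VALUE(feature.value, '$.geometry.coordinates[1]') AS float),  -- latitude
        CAST(JSON_VALUE(feature.value, '$.geometry.coordinates[0]') AS float),  -- longitude
        4326)  -- SRID for WGS 84
FROM OPENJSON(@json, '$.features') AS feature;
```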
Check them out.
In the past, there were a number of things that caused entire plans, or sections of plans, to be serial. Scalar UDFs are probably the first one everyone thinks of. They’re bad. Really bad. They’re so bad that if you define a computed column with a scalar UDF, every query that hits the table will run serially even if you don’t select that column. So, like, don’t do that.
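A quick demonstration of the computed-column trap, using a hypothetical dbo.Orders table (names are illustrative): once the column exists, even queries that never touch it get serial plans.

```sql
-- A trivial scalar UDF:
CREATE FUNCTION dbo.fn_Tax (@amount money)
RETURNS money
AS
BEGIN
    RETURN @amount * 0.08;
END;
GO

-- A computed column that references the UDF:
ALTER TABLE dbo.Orders   -- hypothetical table
    ADD TaxAmount AS dbo.fn_Tax(Amount);
GO

-- Even this query, which never selects TaxAmount, now runs serially:
SELECT OrderID FROM dbo.Orders;
```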
What else causes perfectly parallel plan performance plotzing?
Commenting on one of his comments, I can name at least one good reason to use a table variable.
Turns out SQL Server 2008 R2 (where the original script worked) returns different fields than 2012 and 2014 (where it didn’t).
I figured I didn’t want to work out which version of the script to use every time I needed to run it on a server, so I told the script to figure that out by itself and then run the appropriate hunk of code (example below).
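One way to sketch that branching (not necessarily the linked script's exact approach): pull the major version from SERVERPROPERTY and dispatch to the right variant with dynamic SQL.

```sql
-- ProductVersion looks like '10.50.6000.34'; PARSENAME(..., 4) grabs the major number.
DECLARE @major int =
    CAST(PARSENAME(CAST(SERVERPROPERTY('ProductVersion') AS nvarchar(128)), 4) AS int);

IF @major >= 11   -- SQL Server 2012 and later
    EXEC sp_executesql N'SELECT @@VERSION AS VersionInfo;  -- 2012+ variant goes here';
ELSE              -- SQL Server 2008 R2 and earlier
    EXEC sp_executesql N'SELECT @@VERSION AS VersionInfo;  -- 2008 R2 variant goes here';
```

Wrapping each variant in dynamic SQL also keeps the unused branch from failing at compile time when it references fields that don't exist on that version.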
This is a good explanation of how to back out of a complex situation.
You can change the compatibility level of an Azure SQL Database.
It’s true! I know!
OK, so I’m a little excited about this one. See, I’ve been giving this talk on cardinality for the past couple of years now, so this is a hidden gem to me. When I found out this was possible I took out my demo scripts to see if changing the compatibility level would have any effect.
This is interesting, especially given that Management Studio doesn’t give you that option. Know your T-SQL, folks.
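For reference, the T-SQL in question is just ALTER DATABASE; `CURRENT` works in Azure SQL Database, where you can't switch database context with USE:

```sql
-- Drop back to the legacy cardinality estimator:
ALTER DATABASE CURRENT SET COMPATIBILITY_LEVEL = 110;

-- Or move up to the SQL Server 2016-level CE:
ALTER DATABASE CURRENT SET COMPATIBILITY_LEVEL = 130;

-- Verify the current setting:
SELECT name, compatibility_level
FROM sys.databases
WHERE name = DB_NAME();
```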