Jos de Bruijn shares a couple of scenarios in which In-Memory OLTP can improve performance: using memory-optimized table types and replacing certain types of temp tables with schema-only memory-optimized tables:
Tempdb can be a performance bottleneck for many applications. Workloads that intensively use table-valued parameters (TVPs), table variables and temp tables can cause contention on things like metadata and page allocation, and result in a lot of IO activity that you would rather avoid.
What if TVPs and temp tables could live just in memory, in the memory space of the user database? In-Memory OLTP can help! Memory-optimized table types and SCHEMA_ONLY memory-optimized tables can be used to replace traditional table types and traditional temp tables, bypassing tempdb completely, and providing additional performance improvements through memory-optimized data structures and data access methods.
I’ve used both of these techniques to good effect, but the harsh limitations in SQL Server 2014 prevented me from doing as much with them as I wanted.
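For reference, here’s a minimal sketch of both techniques. The type, table, and column names below are made up for illustration, and the database needs a memory-optimized filegroup before either statement will run:

-- A memory-optimized table type, usable for TVPs and table variables
CREATE TYPE dbo.OrderLineType AS TABLE
(
    OrderID   INT NOT NULL,
    ProductID INT NOT NULL,
    Quantity  INT NOT NULL,
    INDEX ix_OrderID NONCLUSTERED HASH (OrderID) WITH (BUCKET_COUNT = 1024)
)
WITH (MEMORY_OPTIMIZED = ON);

-- A SCHEMA_ONLY memory-optimized table as a stand-in for a traditional temp table:
-- the schema is durable, the data is not, and tempdb is never touched
CREATE TABLE dbo.OrderStaging
(
    OrderID   INT NOT NULL PRIMARY KEY NONCLUSTERED,
    ProductID INT NOT NULL,
    Quantity  INT NOT NULL
)
WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_ONLY);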
We are currently working on testing and publishing SQL Server Container Images that could speed up the process of getting started with SQL Server in Windows Containers significantly. Stay tuned for an update!
Windows getting into the Docker world is interesting.
Question: If the log is stamped with 0xC0s instead of 0x00s, how is it a performance gain?
Many of the newer hardware implementations detect patterns of 0x00s. The space is acquired and zeros are written to stable media; then a background, hardware-based garbage collector reclaims the blocks.
This is a very interesting background article which shows an integration pain point between the database platform and the storage platform.
I’ve mapped suburbs to County because that was the lowest level I found among the data categories for geographic information. (Place and Address cannot be used for a Filled Map at the time of writing this post.) And I got nothing! Not even a small area on the map. I then tried removing the district and using a suburb, region, country format with County as the data category, which didn’t help either.
I’ve found that I can map some locations based on Postal Code, as you see below. However, Postal Code is not always a good distinguishing field for a region, as multiple regions might share a postal code.
Filled maps have the potential to be powerful tools, but they aren’t perfect. Check out Reza’s post for the full scoop.
What follows is an overview of my experiments, which I have published to a GitHub repo. The “Examples” folder contains what I would term “simple learnings,” and “Full Scripts” contains scripts that, to a greater or lesser extent, do something “useful.” I’m also not suggesting that anything here is “best practice” or that method A performs better than method B; I simply do not have data of the required size to make that call. My aim was to learn the language.
TLDR: Check out the script MovieLens09-CosineSimilarityFromCSVWithMax.usql for a U-SQL movie recommender.
U-SQL was introduced last year, but word of mouth about the language has been quite limited to date. I’ll be interested in seeing what other examples pop up over the next few months.
Performance Monitor uses incorrect calculation for certain types of counters in Windows 8, Windows Server 2012, Windows 7 SP1, or Windows Server 2008 R2 SP1
This only cost us a week of reviewing results.
Follow up on the link because there’s a fix available through Windows Update.
There is a new line in the properties of the iterator showing the number of locally aggregated rows, and that number equals 619255, which should be exactly the number of rows missing from the arrow connecting the two iterators:
select 12008353 + 619255
Gives us our perfect 12627608 rows.
Is there any more information on this operation?
Indeed, just right-click on the Columnstore Index Scan and select its properties:
This is tied to some columnstore performance improvements in SQL Server 2016.
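If you want to reproduce that counter, a simple aggregate over a clustered columnstore index is enough to make the query a candidate for pushdown; the table below is made up purely for illustration:

-- Made-up fact table with a clustered columnstore index (SQL Server 2016+)
CREATE TABLE dbo.FactSales
(
    SaleDate DATE  NOT NULL,
    StoreID  INT   NOT NULL,
    Quantity INT   NOT NULL,
    Amount   MONEY NOT NULL,
    INDEX cci_FactSales CLUSTERED COLUMNSTORE
);

-- With the actual execution plan turned on, the Columnstore Index Scan properties
-- for a query like this report the rows aggregated inside the storage engine
-- rather than passed along the arrow to the next iterator.
SELECT StoreID, SUM(Amount) AS TotalAmount
FROM dbo.FactSales
GROUP BY StoreID;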
Note that this script requires SQL Server 2016 (or later) because the database engine team made some great changes to columnstore indexes, allowing us to use REORGANIZE to clear out deleted rows and compact row groups together, as well as its previous job of marking open delta stores as available for compression.
The code is available as a Gist for now, at least until I decide what to do with it. Comments are welcome, especially if I’m missing a major reorganize condition.
As mentioned, comments are welcome.
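For reference, the SQL Server 2016 REORGANIZE behavior the script builds on looks like this; the index and table names are placeholders, not objects from the Gist:

-- New in 2016: clears out deleted rows and merges small compressed row groups
ALTER INDEX cci_FactSales ON dbo.FactSales REORGANIZE;

-- The pre-existing option that also forces open delta stores to be compressed
ALTER INDEX cci_FactSales ON dbo.FactSales
REORGANIZE WITH (COMPRESS_ALL_ROW_GROUPS = ON);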
While it’s fairly common to need to load fixed-width files using Power Query (and there’s a nice walkthrough of how to do this here), occasionally you might want to use Power Query and Excel to create a fixed-width output for another system, or maybe to create some test data. You might not want to do it often, but I can imagine that when/if Power Query is integrated into SSIS this will be a slightly less obscure requirement; at the very least, this post should show you how to use a couple of M functions that are under-documented.
I don’t see this being a particularly common request, but I guess I can see some scenario in which we’re loading data into a legacy system.
Specifying WITH CHECK in a statement tells SQL Server that the user wants it to validate the constraint against every single row in the table and then, if successful, enable it.
In contrast, specifying WITH NOCHECK, which is the default for an existing constraint, means that the constraint is enabled but has not been validated. Even though this mode is faster to run, it can lead to severe side effects on performance: SQL Server doesn’t trust the constraint because it has not validated it. We refer to such a foreign key as an “untrusted foreign key.” As a consequence, the query optimizer won’t use the constraint to do its job…
There are benefits to having trusted foreign key constraints. Check out the article for more details as well as how to fix this issue.
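Here’s a minimal sketch of the difference, using placeholder table and constraint names:

-- Re-enable a constraint and validate every existing row, so the optimizer
-- can trust it (note the double CHECK: WITH CHECK + CHECK CONSTRAINT)
ALTER TABLE dbo.OrderLines
    WITH CHECK CHECK CONSTRAINT FK_OrderLines_Orders;

-- Re-enable without validating existing rows: faster, but the constraint
-- stays untrusted and the optimizer ignores it
ALTER TABLE dbo.OrderLines
    WITH NOCHECK CHECK CONSTRAINT FK_OrderLines_Orders;

-- Find untrusted foreign keys in the current database
SELECT name, is_not_trusted
FROM sys.foreign_keys
WHERE is_not_trusted = 1;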