Press "Enter" to skip to content

Month: December 2018

The Kitchen Sink PowerShell Prompt

Jeffery Hicks shares a heavy-duty PowerShell prompt:

Since it is Friday and time for some more PowerShell fun, and I’ve been sharing some of my prompt functions, I thought I’d re-share my kitchen sink prompt. This PowerShell prompt function does *a lot* of things and gives you a snapshot view of your system every time you press Enter. It would work cross-platform, but because the function uses Get-CimInstance to retrieve system information, it needs Windows. The prompt function will not only customize the onscreen prompt but also the title bar.

Click through for a link to the prompt as well as to see it in action.

Working With Files In Azure Cloud Shell

Melissa Coates shows us how to export a Power BI report to Azure using Azure Cloud Shell:

Cloud Shell is a lightweight way to run scripts using either Bash or PowerShell. You can run scripts in a browser using the Azure portal or shell.azure.com, with the Azure mobile app, or using the VS Code Azure Account extension. If you have seen the “Try it now” links in Azure documentation pages, those will direct you to use Cloud Shell.

The rest of this post focuses on using PowerShell with Cloud Shell.

Click through for the demo.

Azure Data Lake Storage Generation 2

James Baker announces updates to Azure Data Lake Storage Gen2:

As we’ve discussed many times, the performance of the storage layer has an outsized impact on the total cost of ownership (TCO) for your complete analytics pipeline. This is due to the fact that every percentage point improvement in storage performance results in that same percentage reduction in the requirement for the very expensive compute layer. Given that the disaggregated storage model allows us to scale compute and storage independently, that percentage reduction in compute requirement results in almost the same (compute typically equates to 90 percent of the TCO) reduction in TCO.
So, when I say that ADLS Gen2 provides performance improvements over existing storage solutions ranging from 10 to 50 percent, depending on the nature of the workload, this equates to VERY significant reductions in the monthly analytics spend. It also has the added benefit of providing your insights sooner!
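To put rough numbers on that chain of reasoning (mine, not the post's): if compute is about 90 percent of TCO and a $p$ percent storage performance gain cuts the compute requirement by the same $p$ percent, then

$$\Delta \text{TCO} \approx 0.9 \times p$$

so the quoted 10 to 50 percent improvements would shave roughly 9 to 45 percent off the total monthly spend.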

Check out all of the changes.

Understanding Analysis Services Memory Behavior

Shabnam Watson walks us through SQL Server Analysis Services memory settings and application behavior under memory stress:


If memory consumption is below the Low limit, everything is fine and whatever is in memory is free to stay there. Once consumption passes the Low limit, a cleaner thread wakes up and tries to clean up memory. At this point, the price of memory is no longer zero. It starts at 2 at the Low limit and rises as high as 1000 when memory consumption reaches the Total limit. The higher the memory pressure, the more aggressive the cleaner gets. Once memory consumption reaches the Hard limit, all connections/sessions are closed and queries are cancelled with an out-of-memory error.
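As a rough mental model (my illustration only; the excerpt gives the endpoints of the price curve but not its shape, so assume a simple linear ramp):

$$\text{price}(m) = 2 + 998 \cdot \frac{m - \text{Low}}{\text{Total} - \text{Low}}, \qquad \text{Low} \le m \le \text{Total}$$

which is 2 right at the Low limit and 1000 as consumption approaches the Total limit, with the cleaner getting correspondingly more aggressive along the way.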

This is a thorough explanation with some good demos and terrible queries.  Give it a read.

Passing Data To Java In SQL Server 2019

Niels Berglund continues looking at the SQL Server 2019 Extensibility Framework’s support for Java:

In Java, there are also helper components (a topic for future posts), but the integration is not as tight, so when we want to pass data into and out of Java, we need to code somewhat more explicitly to make data passing possible.
In our Java code, we need to represent the data passed in and out as class member column arrays. You define, in your classes, one array per column passed in and one array per column returned. These column arrays are some of the “magic” members I mentioned above, and they are the equivalent of InputDataSet and OutputDataSet.
The components that are part of the Java extension need to know about these members as the components either populate them when pushing data into Java or read from them when returning data from Java. The way that the components know about the members is based on a naming standard.
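For context, the SQL Server side of the call looks like a regular external-script invocation (a sketch only; the class and table names are hypothetical, and in the CTPs @script names a compiled Java class rather than holding inline code, a convention that shifted across releases):

EXEC sp_execute_external_script
     @language = N'Java',
     @script = N'MyPackage.MyJavaClass',   -- hypothetical: names the compiled class whose column arrays receive the data
     @input_data_1 = N'SELECT ID, SomeText FROM dbo.SomeTable';   -- hypothetical table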

It’s definitely easier to pass data in and get data back from R and Python, but I suppose part of that is Java being a statically typed, compiled language.

The Intuition Behind Principal Component Analysis

Holger von Jouanne-Diedrich gives us an intuition behind how principal component analysis (PCA) works:


Principal component analysis (PCA) is a dimension-reduction method that can be used to reduce a large set of (often correlated) variables into a smaller set of (uncorrelated) variables, called principal components, which still contain most of the information.
PCA is a concept that is traditionally hard to grasp, so instead of giving you the n’th mathematical derivation, I will provide you with some intuition.
Basically, PCA is nothing else but a projection of some higher-dimensional object into a lower dimension. What sounds complicated is really something we encounter every day: when we watch TV, we see a 2D projection of 3D objects!
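In symbols (standard PCA, nothing specific to the linked post): given centered data $X \in \mathbb{R}^{n \times d}$, take the top $k$ eigenvectors of the covariance matrix $\frac{1}{n-1} X^\top X$ as the columns of $W \in \mathbb{R}^{d \times k}$ and project:

$$Z = XW, \qquad W^\top W = I_k$$

Each row of $Z$ is the shadow of a $d$-dimensional point on a $k$-dimensional screen, which is exactly the TV analogy with $d = 3$ and $k = 2$.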

Click through for the rest of the story.

Thoughts On Exclusive Locks

Louis Davidson shares some thoughts on exclusive locks in SQL Server:


You will find that the SELECT statement executes, ignoring the exclusive lock, because it is not a write lock, and the data on the page has not been changed.
The main reason people try to do this is to force access to a row in a single-threaded manner. For example, building their own sequence number, either in a row they update or by trying to do MAX() on all of the data in a table, to make sure no two readers get the same value.
This is generally a bad idea, since locking an entire table is a generally bad idea, but if you needed to block readers, you can couple the XLOCK with a PAGLOCK. So, change the first reader to:

BEGIN TRANSACTION;
-- PAGLOCK takes the exclusive lock at the page level, which does block readers
SELECT *
FROM   Demo.Test WITH (XLOCK, PAGLOCK);
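To see the blocking behavior (a sketch against the same Demo.Test table from the post), leave the transaction above open and run the plain reader from a second session:

-- Session 2: without PAGLOCK in session 1 this returns immediately;
-- with XLOCK + PAGLOCK held, it waits until session 1 commits or rolls back
SELECT *
FROM   Demo.Test;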

As Louis points out in the summary, locking is complicated.  Having a good understanding of the locking model will serve you well, though.

NoSQL? No! MoSQL

Steve Jones points out a bit of a shift at Google:


Google is doing more SQL, or at least shifting towards relational SQL databases as a way of storing data. At least, some of their engineers see this as a better way to store data for many problems. Since I’m a relational database advocate, I found this to be interesting.
When Google first started to publish information on BigTable and other new ways of dealing with large amounts of data, I felt that these weren’t solutions I’d use or problems that many people had. The idea of Map Reduce is interesting and certainly applicable to the problem space Google had of a global database of sites, but that’s not a problem I’ve ever encountered. Instead, most of the struggles I’ve had with relational systems are still better addressed in a relational system.

Read the whole thing.  Note that this is slightly different than Feasel’s Law, as Steve is focusing more on the consistency side of things rather than the interface.

When Azure SQL DB Tells You You’re Hitting Your DTU Limits

Brent Ozar shows us how to use sp_BlitzFirst to determine if you’re hitting DTU limits:


You’ve got an Azure SQL DB, and your queries are going slow. You’re wondering, “Am I hitting the performance limits? Is Microsoft throttling my queries?”
There’s an easy way to check: run sp_BlitzFirst. sp_BlitzFirst is our free performance health check stored procedure that analyzes a lot of common performance issues and then gives you a prioritized list of reasons why your server might be slow right now.
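If you want to follow along, the call itself is a one-liner (a sketch; @Seconds and @ExpertMode are standard First Responder Kit parameters):

-- sample activity for 30 seconds and return the detailed result sets
EXEC dbo.sp_BlitzFirst @Seconds = 30, @ExpertMode = 1;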

Click through for an example.  The same signal can mean “it’s time to move to a higher level in your tier,” “it’s time to move up to the next tier,” or “it’s time to tune that cursor in a cursor in a cursor which performs several scans of a very large table for each operation.”

Finding The SQL Server Port With T-SQL

Jack Vamvas shows us how to find the port SQL Server is listening on using T-SQL:


Question: Without going into SQL Server Configuration Manager via the GUI, is there a command-oriented method to extract the port number SQL Server is listening on?

Answer: There are a few different methods to extract the port number without going into the Configuration interface.
Method 1 – use xp_readerrorlog
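For reference, that first technique boils down to a one-liner, and a DMV query is a common alternative (a sketch; the search string matches the startup message SQL Server writes to its error log):

-- Method 1: search the current error log (0) of type SQL Server (1) for the listener message
EXEC sys.xp_readerrorlog 0, 1, N'Server is listening on';

-- Alternative: the TCP port of your own connection (TCP connections only)
SELECT local_tcp_port
FROM   sys.dm_exec_connections
WHERE  session_id = @@SPID;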

Read on for an explanation of this technique as well as links to a couple other methods.  I hadn’t thought about using the error log as a source, but it works.
