Method 2: Use xp_cmdshell – although this does mean enabling xp_cmdshell, which in many organisations is a security violation
exec master..xp_cmdshell 'systeminfo'
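For reference, enabling xp_cmdshell in the first place is the standard sp_configure change below (which is exactly the part security teams dislike):

-- Enable xp_cmdshell at the instance level; this is the security-sensitive step.
EXEC sys.sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sys.sp_configure 'xp_cmdshell', 1;
RECONFIGURE;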
Click through for several less controversial methods.
Comments closed
Dave Bland shows how to set up and read an extended event file on Azure SQL Database:
The first step when using T-SQL to read Extended Event files that are stored in an Azure Storage Account is to create a database credential. Of course, the credential will provide essential security information to connect to the Azure Storage Account. The first data point you will need is the URL to a blob storage container in your storage account. If you look below, you can see where you would place your storage account name and the blob storage container name.
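As a rough sketch of that credential step (the storage account, container, and SAS token below are placeholders, not values from Dave's post):

-- The credential is named after the blob container URL that holds the .xel files.
CREATE DATABASE SCOPED CREDENTIAL [https://mystorageaccount.blob.core.windows.net/xe-files]
WITH IDENTITY = 'SHARED ACCESS SIGNATURE',
     SECRET = 'sv=2021-06-08&ss=b&srt=co&sp=rl&sig=...';  -- SAS token, without the leading ?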
Dave gives us the grand tour of the configuration process, including where things differ between on-prem SQL Server and Azure SQL Database (which is quite a bit).
Comments closed
Dmitry Tolpeko shows how you can collate Hadoop metrics from several ElasticMapReduce clusters:
The first step is to dynamically get the list of clusters and their IPs. Hadoop clusters are often reprovisioned, added and terminated, so you cannot use the static list and addresses. In case of Amazon EMR, you can use the following Linux shell command to get the list of active clusters:
aws emr list-clusters --active
From its output you can get the cluster IDs and names. While a cluster ID and IP can change over time, its name is usually permanent (like the DEV or Adhoc-Analytics cluster), so it can be useful for various aggregation reports.
Read on to see what you can do with this list of clusters.
Comments closed
Samir Behara takes us through a few fallacies with distributed computing:
The network is reliable
Service calls made over the network might fail. There can be congestion in the network or a power failure impacting your systems. The request might reach the destination service but it might fail to send the response back to the primary service. The data might get corrupted or lost during transmission over the wire. While architecting distributed cloud applications, you should assume that these types of network failures will happen and design your applications for resiliency.

To handle this scenario, you should implement automatic retries in your code when such a network error occurs. Say one of your services is not able to establish a connection because of a network issue; you can implement retry logic to automatically re-establish the connection.
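The same retry principle is easy to sketch in T-SQL, too; this is my own illustration (dbo.SomeRemoteCall is a hypothetical stand-in for the unreliable call), retrying a transient error a few times before giving up:

-- Retry up to three times on transient errors (deadlock/lock timeout), then rethrow.
DECLARE @attempt int = 1;
WHILE @attempt <= 3
BEGIN
    BEGIN TRY
        EXEC dbo.SomeRemoteCall;   -- hypothetical procedure standing in for the flaky call
        BREAK;                     -- success, stop retrying
    END TRY
    BEGIN CATCH
        IF @attempt = 3 OR ERROR_NUMBER() NOT IN (1205, 1222)
            THROW;                 -- out of retries, or not a transient error
        WAITFOR DELAY '00:00:05';  -- back off before the next attempt
        SET @attempt += 1;
    END CATCH
END;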
There are some very good points in here.
Comments closed
Pamela Mooney shows how you can find three-part or four-part naming on a SQL Server instance:
The script below searches the metadata for views, sprocs and functions for occurrences of 3 and 4 part names. Three-part names consist of databasename.schemaname.objectname, and four-part names consist of servername.databasename.schemaname.objectname. Because the code searches metadata, it isn’t always perfect. If your comments mention a servername followed by a period, for example, it will be caught. Nevertheless, it’s a great place to begin looking, and a real help in getting rid of problems before they really bite you.
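Not Pamela's script, but the general shape of that metadata search looks something like this; the LIKE pattern is deliberately crude and will flag false positives (comments, strings, and so on):

-- Look for likely three-part references to the current database in module definitions.
SELECT OBJECT_SCHEMA_NAME(m.object_id) AS schema_name,
       OBJECT_NAME(m.object_id) AS object_name,
       o.type_desc
FROM sys.sql_modules AS m
JOIN sys.objects AS o
    ON o.object_id = m.object_id
WHERE m.definition LIKE '%' + DB_NAME() + '.%.%';  -- database name followed by schema.object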
Click through for the script.
Comments closed
Randolph West has a proposal for default max server memory on a SQL Server instance:
As noted in the previous post in this series, memory in SQL Server is generally divided between query plans in the plan cache, and data in the buffer pool (other uses for memory in SQL Server are listed later in this post).
The official documentation tells us:
[T]he default setting for max server memory is 2,147,483,647 megabytes (MB).
Look carefully at that number. It’s 2 billion megabytes. In other words, we might think of it as either 2 million gigabytes, 2,048 terabytes, or 2 petabytes.
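For reference, the setting itself is a plain sp_configure change; the 25,600 MB cap below is an arbitrary example, not a recommendation from Randolph's post:

-- Cap max server memory (value is in MB); 25600 MB = 25 GB, an example only.
EXEC sys.sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sys.sp_configure 'max server memory (MB)', 25600;
RECONFIGURE;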
Randolph is writing this like we don’t all have multiple petabytes of RAM on each machine.
Comments closed
The message actually says:
“Several errors occurred during data refresh. Please try again later or contact your administrator.”
SessionID: 1b80301e-3898-417a-af9c-2e77ec490728
[0] -1055784932: Credentials are required to connect to the SQL source. (Source at SQLServerName;DBA_Pro.). The exception was raised by the IDbCommand interface.
[1] -1055784932: The command has been canceled.. The exception was raised by the IDbCommand interface.
[2] -1055784932: The command has been canceled.. The exception was raised by the IDbCommand interface.
In my case, the cause of the problem was a very silly thing. PowerBI Server assigned only one data source connection string to my report, while in my report I had two data sources that differed only in database name capitalization.
This was a weird scenario.
Comments closed
Max Vernon shows how you can automatically expand log files to optimize VLF counts:
SQL Server Database Log file expansion can be fairly tedious if you need to make the log bigger in many reasonably-sized growth increments. It can be tedious because you may need to write and execute a large number of ALTER DATABASE ... MODIFY FILE ... commands.

The following code automatically grows a SQL Server Database log file, using the size and growth increments you configure in the script. If you set the @DebugOnly flag to 1, the script will only print the commands required, instead of executing them. This allows you to see what exactly will be executed ahead of time. Alternately, you could copy-and-paste the commands into a query window and execute them one-by-one.
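The general shape of that approach (not Max's actual script) looks something like this, with the database name, target size, and increment as placeholders:

-- Grow a log file toward a target size in fixed increments, printing the commands
-- (similar in spirit to a @DebugOnly run) instead of executing them.
DECLARE @TargetSizeMB int = 65536,
        @GrowthMB int = 8192,
        @CurrentSizeMB int,
        @sql nvarchar(max);

SELECT @CurrentSizeMB = size / 128           -- size is stored in 8 KB pages
FROM sys.master_files
WHERE database_id = DB_ID(N'MyDatabase')     -- example database name
  AND type_desc = N'LOG';

WHILE @CurrentSizeMB < @TargetSizeMB
BEGIN
    SET @CurrentSizeMB = CASE WHEN @CurrentSizeMB + @GrowthMB > @TargetSizeMB
                              THEN @TargetSizeMB
                              ELSE @CurrentSizeMB + @GrowthMB END;
    SET @sql = N'ALTER DATABASE [MyDatabase] MODIFY FILE (NAME = N''MyDatabase_log'', SIZE = '
             + CAST(@CurrentSizeMB AS nvarchar(20)) + N'MB);';
    PRINT @sql;                              -- print only; swap for sp_executesql to run
END;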
Click through for that code.
Comments closed
Erik Darling looks at one of my favorite sp_WhoIsActive features:
Using sp_WhoIsActive in a slightly different way, we can see what a query has been up to over a duration of our choosing.
The delta columns are what happened over the span of time we pick. The columns that come back normally are cumulative to where the query is at now.
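If you haven't used it, the call is as simple as this: sp_WhoIsActive takes a snapshot, waits the interval you specify, and then returns the *_delta columns for that window.

-- Snapshot, wait 10 seconds, then report what each session did during that window.
EXEC dbo.sp_WhoIsActive @delta_interval = 10;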
Click through to see what it does and how you might benefit from it.
Comments closed
Jeff Mlakar talks about a topic I like—dropping lots and lots of stuff:
Let’s assume that you have lots of tables that need to be dropped according to some criteria. Trying to do them all at once isn’t a good idea. Even with a powerful server it will either take forever or simply never finish.
For example – you may have millions of tables in sys.tables or millions of indexes you need to drop. SQL Server won’t process them well if you try to run it as one big statement.
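To make the batching idea concrete, here is my own rough sketch (not Jeff's technique); the staging_ prefix and the 100-drop batch size are arbitrary examples:

-- Drop tables matching a naming pattern in small batches instead of one huge statement.
DECLARE @name nvarchar(300), @sql nvarchar(max), @dropped int = 0;

DECLARE drop_cur CURSOR LOCAL FAST_FORWARD FOR
    SELECT QUOTENAME(s.name) + N'.' + QUOTENAME(t.name)
    FROM sys.tables AS t
    JOIN sys.schemas AS s ON s.schema_id = t.schema_id
    WHERE t.name LIKE N'staging[_]%';        -- example criterion

OPEN drop_cur;
FETCH NEXT FROM drop_cur INTO @name;
WHILE @@FETCH_STATUS = 0 AND @dropped < 100  -- stop after 100 drops per run
BEGIN
    SET @sql = N'DROP TABLE ' + @name + N';';
    EXEC sys.sp_executesql @sql;
    SET @dropped += 1;
    FETCH NEXT FROM drop_cur INTO @name;
END;
CLOSE drop_cur;
DEALLOCATE drop_cur;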
I’ve never had millions of tables or millions of indexes to drop and now I am jealous. Regardless, Jeff has two techniques for us when you have a lot of work to do. And if you do need to figure out key dependencies, I have a script for that.
Comments closed