Performance improvements for Prepared Statements
Improved performance for Prepared Statements through caching (including prepared statement handle re-use). This behavior can be tuned using new properties to fit your application’s needs.
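As a rough sketch of what that tuning looks like, the snippet below assembles a JDBC URL with the statement-pooling properties. The property names (`disableStatementPooling`, `statementPoolingCacheSize`) come from the driver's documentation, but the server name and values here are hypothetical; check your driver version before relying on them.

```python
# Sketch: building a JDBC URL that turns on prepared statement caching in
# Microsoft's JDBC Driver. Server name and values are hypothetical examples.
base = "jdbc:sqlserver://myserver:1433;databaseName=Sales"
tuning = {
    "disableStatementPooling": "false",  # enable the statement pool
    "statementPoolingCacheSize": "50",   # cache up to 50 prepared handles
}
url = base + ";" + ";".join(f"{k}={v}" for k, v in tuning.items())
print(url)
```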
Azure Active Directory (AAD) support for Linux
Connect your Linux applications to Azure SQL Database using AAD authentication via username/password and access token methods.
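For the username/password method, the connection properties look roughly like the sketch below. The `authentication=ActiveDirectoryPassword` property is documented by the driver; the server, database, and user names are hypothetical placeholders.

```python
# Sketch of an AAD username/password connection string for the JDBC driver.
# All names here (server, database, user) are hypothetical.
props = {
    "database": "Sales",
    "user": "appuser@contoso.com",
    "password": "placeholder-password",
    "authentication": "ActiveDirectoryPassword",
    "encrypt": "true",
    "hostNameInCertificate": "*.database.windows.net",
}
url = ("jdbc:sqlserver://myserver.database.windows.net:1433;"
       + ";".join(f"{k}={v}" for k, v in props.items()))
print(url)
```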
Federal Information Processing Standard (FIPS) enabled Java virtual machines
The JDBC Driver can now be used on Java virtual machines (JVMs) that run in FIPS 140 compliance mode to meet federal standards and compliance.
Click through for more information, including a couple of interesting features, like additional timeouts you can set.
Me: Umm, boss… Does this report allow users to enter in search criteria?
Boss: But of course!
Me: Well, I really hate to tell you this, but we have a SQL Injection problem.
And after a bit of back and forth where the developers were insisting that no way was there a SQL Injection problem, I sat down with the dev team lead and the boss and proved it to them. We created a dummy table in the database, went to the report criteria form, and I dropped the table.
Development Team: -1000
Injection attacks are still the most common form of attack out there. Sadly.
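The fix is the same as it has been for decades: never splice user criteria into the SQL string; bind it as a parameter. A minimal illustration using Python's built-in sqlite3 (the table and values are made up for the demo):

```python
# Demonstrates the difference between string-spliced SQL (injectable) and a
# parameterized query. Table and data are invented for the example.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE reports (id INTEGER, title TEXT)")
conn.execute("INSERT INTO reports VALUES (1, 'Q1 revenue')")

user_input = "Q1 revenue"

# Vulnerable pattern: with input like  x'; DROP TABLE reports; --
# the attacker rewrites the statement itself.
unsafe_sql = f"SELECT id FROM reports WHERE title = '{user_input}'"

# Safe pattern: the value travels separately from the SQL text and can
# never be parsed as part of the statement.
rows = conn.execute(
    "SELECT id FROM reports WHERE title = ?", (user_input,)
).fetchall()
print(rows)  # [(1,)]
```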
Since Apache Spark separates compute from storage, every Spark Job requires a set of credentials to connect to disparate data sources. Storing those credentials in the clear can be a security risk if not stringently administered. To mitigate that risk, Databricks makes it easy and secure to connect to S3 with either Access Keys via DBFS or by using IAM Roles. For all other data sources (Kafka, Cassandra, RDBMS, etc.), the sensitive credentials must be managed by some other means.
This blog post will describe how to leverage an IAM Role to map to any set of credentials. It uses AWS's Key Management Service (KMS) to encrypt and decrypt the credentials so that they are never in the clear at rest or in flight. When a Databricks cluster is created using the IAM Role, it will have privileges to both read the encrypted credentials from an S3 bucket and decrypt the ciphertext with a KMS key.
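The shape of that pattern, reduced to its essentials: credentials live only as ciphertext at rest and are decrypted in memory at startup. The stdlib stand-in below illustrates the flow; a real deployment would have KMS hold the key and do the encrypt/decrypt calls, with the ciphertext stored in S3, and all names here are invented for the sketch.

```python
# Envelope-style credential handling in miniature. In production the key
# lives in AWS KMS (never in your code) and the ciphertext lives in S3;
# this HMAC-counter keystream is only a stand-in to show the flow.
import hashlib, hmac, os

def _keystream(key: bytes, nonce: bytes, length: int) -> bytes:
    out, counter = b"", 0
    while len(out) < length:
        out += hmac.new(key, nonce + counter.to_bytes(4, "big"),
                        hashlib.sha256).digest()
        counter += 1
    return out[:length]

def encrypt(key: bytes, plaintext: bytes) -> bytes:
    nonce = os.urandom(16)
    stream = _keystream(key, nonce, len(plaintext))
    return nonce + bytes(a ^ b for a, b in zip(plaintext, stream))

def decrypt(key: bytes, blob: bytes) -> bytes:
    nonce, ct = blob[:16], blob[16:]
    return bytes(a ^ b for a, b in zip(ct, _keystream(key, nonce, len(ct))))

master_key = os.urandom(32)                             # held by KMS in practice
stored = encrypt(master_key, b"cassandra_user:s3cret")  # what lands in S3
creds = decrypt(master_key, stored)                     # at cluster startup, in memory only
```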
That’s only one data source, but an important one.
With a database restore, the users are within a database and if their SID matches the SQL Login you are ready to go. But with the logins it is a different story!
If you have to reinstall the engine because your master database backup is corrupt, if someone has changed a login password and you want to put it back, or (maybe the most common scenario) if you simply want to keep track of login permissions, you need to have the logins saved somewhere.
Imagine that you have to re-create a login and grant all the permissions it has; imagine that this login has multiple roles on different databases. Did you know that beforehand? Do you keep track of it? How much time would it take to gather all that information from the application owner? How much time will you spend resolving permission issues until everything is running smoothly? How many tickets will need to be raised? How many users will this affect?
Read on for Claudio’s easy solution.
If you query sys.databases, such as:
SELECT is_encrypted, name, user_access_desc FROM sys.databases WHERE database_id = 2 OR database_id = 7;
It “might” throw you off. Would you not expect to see is_encrypted set to 1 for TempDB?
I thought I remembered earlier editions of SQL Server showing is_encrypted = 1 for tempdb, and I definitely remember 2016 showing 0 even when the database is encrypted.
Let us say you have SQLServer1 and you want to set up a linked server to SQLServer2 using “pass-through authentication”; a double hop happens, as explained in the article below. Basically, the first hop is when the user authenticates to SQLServer1, and the second hop is when that identity gets passed on from SQLServer1 to SQLServer2.
The below article is a must-read before you proceed:
The three nodes involved in the double hop, as illustrated in the example, are:
Client – The client PC from which the user is initiating connection to SQLServer1
Middle server – SQLServer1
Second server – SQLServer2
Dealing with the double-hop problem is far trickier than it should be; if you’ve had to deal with this, I recommend Jana’s guide.
Four Letter Words (acronym as 4lw) is a very popular feature of the Apache ZooKeeper project. In a nutshell, 4lw is a set of commands that you can use to interact with a ZooKeeper ensemble through a shell interface. Because it’s simple and easy to use, lots of ZooKeeper monitoring solutions are built on top of 4lw.
The simplicity of 4lw comes at a cost: the design did not originally consider security, and there is no built-in support for authentication or access control. Any user with access to the ZooKeeper client port can send commands to the ensemble. The 4lw commands are read-only: no actions can be performed. However, they can be compute-intensive, and sending too many of them effectively creates a DoS attack that prevents the ensemble’s normal operation.
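The mechanics are as bare as the description suggests: open a TCP connection to the client port, write the four letters, and read until the server closes the connection. The sketch below shows that exchange against a tiny stub server standing in for a ZooKeeper node answering `ruok` with `imok`; the stub is invented here purely so the example is self-contained.

```python
# A 4lw round trip: write four letters, read until close. The stub server
# below is NOT ZooKeeper; it just mimics the "ruok" -> "imok" exchange so
# the client-side mechanics can be shown end to end.
import socket, threading

def stub_zookeeper(server_sock):
    conn, _ = server_sock.accept()
    cmd = conn.recv(4).decode()
    conn.sendall(b"imok" if cmd == "ruok" else b"")
    conn.close()  # server closes; that's how the client knows it's done

def send_4lw(host, port, cmd):
    with socket.create_connection((host, port)) as s:
        s.sendall(cmd.encode())
        chunks = []
        while True:
            data = s.recv(4096)
            if not data:
                break
            chunks.append(data)
    return b"".join(chunks).decode()

srv = socket.socket()
srv.bind(("127.0.0.1", 0))
srv.listen(1)
threading.Thread(target=stub_zookeeper, args=(srv,), daemon=True).start()
reply = send_4lw("127.0.0.1", srv.getsockname()[1], "ruok")
print(reply)  # imok
```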
Read on for details.
There is an awesome set of PowerShell cmdlets out there written by MVP Chrissy LeMaire. This method is my personal choice. It works great and is easy to automate. You can run it with SQL Agent or you can just use Scheduled Tasks in the OS. The Scheduled Tasks method is a little cleaner, but you don’t get to see it in SQL Server. Also, if you are on a cluster running Windows Server 2012, you can cluster Task Scheduler as an added benefit.
Chrissy wrote this with the intent of making migrations easier, and she succeeded. In fact, I made it a point to thank her at MVP Summit last year because it made my life insanely easier. The advantage here is that you can automate a lot more than just logins. In fact, you can migrate and automate pretty much anything at the server level. Here is the link that I guarantee you are going to bookmark, followed by a video demo where I show how to install and automate the syncing of logins using both the SQL Agent method and the Scheduled Tasks method.
DBATools would be my preference in this situation as well, but click through to see four other methods, as well as code.
In the previous blog post we created an Azure cloud service. Now we are going to create a private Azure virtual network. This matters because, when you create a virtual machine in Azure, you will use this virtual network to connect to it.
This is a screenshot-driven, step-by-step post that makes setting these up easy.
First, let’s check that the user does actually exist. We know the server login exists; otherwise, the user would be complaining that they cannot connect to the SQL Server instance.
Sure enough, there is the user ‘SQLUndercoverUser’. Let’s check out the permissions:
No problems there: the user has [db_datareader], [db_datawriter], and [db_owner], so we know there is not a permissions issue. Let’s test this login by connecting to SQL Server with the user’s credentials:
Connected to the server with no issues. Let’s open a new query against SQLUnderCoverDB:
Hmm. So despite having permissions to access the database, we are receiving this error. We know that the password is correct too; otherwise, we wouldn’t be able to access the server at all…
Adrian does a nice job of walking through the troubleshooting process, moving from simple checks (does the user actually exist? does the user have permissions?) to the real cause, which was orphaned SQL-authenticated users. Read the whole thing.