Press "Enter" to skip to content

Category: Security

Checking For Credentials

Denny Cherry uses a try-catch block to figure out if you can authenticate automatically and prompts you otherwise:

Runbooks are very powerful tools which allow you to automate PowerShell commands that need to be run at different times.  One of the problems that I’ve run across when dealing with Azure Runbooks is that there is no way to use the same script on-premises during testing and again when deploying. This is because of the way that authentication has to be handled when setting up a runbook.

The best way to handle authentication within a runbook is to store the authentication within the Azure Automation configuration as a stored credential.  The problem here is that you can’t use this credential while developing your runbook in the normal PowerShell ISE.

This is a clever idea.

Comments closed

Vendors And Privileges

Dave Mason has a good post about onerous third-party software requirements:

If you’re not familiar with SQL Server, the “sysadmin” server role conveys the highest level of authorization available to a login. “db_owner” also conveys a high level of authorization. Both requirements are far more than what is necessary and violate the Principle of Least Privilege. While I strongly disagree with the install-time requirements, I can at least understand the argument: it’s a one-time activity. But elevated permissions at run time are inexcusable.

Most of the time, software companies publish requirements like these because they want to avoid the hassle of support calls when people don’t grant privileges correctly.  I’ve worked with one third-party vendor in the past who sent me the actual permissions requirements after I pestered them a bit, as I wasn’t going to let just anyone have sysadmin on my servers.  But that’s not a scalable approach and does nothing for the next guy who reads the documentation and just gives sysadmin away.
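If you do manage to pry the real requirements out of a vendor, the fix is usually a handful of targeted grants rather than a role membership. Here is a rough T-SQL sketch of that idea; the database, login, and user names are hypothetical, not from any particular vendor:

    -- Hypothetical alternative to db_owner: grant only what the app actually exercises.
    USE VendorAppDB;  -- hypothetical database name
    GO
    CREATE USER VendorAppUser FOR LOGIN VendorAppLogin;  -- hypothetical names
    GRANT SELECT, INSERT, UPDATE, DELETE ON SCHEMA::dbo TO VendorAppUser;
    GRANT EXECUTE ON SCHEMA::dbo TO VendorAppUser;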

Comments closed

Table-Valued Parameters With Always Encrypted

Arvind Shyamsundar wants to use Table-Valued Parameters to load data in batches into an Always Encrypted table:

With this setup on the database side of things, we proceed to develop our client application to work around the TVP limitation. The key to doing this is to use the SqlBulkCopy class in .NET Framework 4.6 or above. This class ‘understands’ Always Encrypted and should need minimal rework on the developer front. The reason for the minimal rework is that this class actually accepts a DataTable as a parameter, which is what was previously passed as the TVP. This is an important point, because it will help minimize the changes to the application.

Let’s get this working! The high-level steps are outlined below; there is a full code listing at the end of this blog post as well.

The upshot is that, at least as of today, Table-Valued Parameters are not supported with Always Encrypted.  Arvind does give an alternative, however, so click through for more information.
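For context, the database-side setup Arvind mentions boils down to a table with Always Encrypted columns; the client then points SqlBulkCopy at that table (with Column Encryption Setting=Enabled in the connection string) instead of passing a TVP. A minimal sketch of such a table, with hypothetical table, column, and key names:

    -- Hypothetical target table; the column encryption key CEK_Auto1 is assumed to exist.
    -- Deterministic encryption on a string column requires a BIN2 collation.
    CREATE TABLE dbo.CustomerStaging
    (
        CustomerID INT NOT NULL,
        SSN CHAR(11) COLLATE Latin1_General_BIN2
            ENCRYPTED WITH (
                COLUMN_ENCRYPTION_KEY = CEK_Auto1,
                ENCRYPTION_TYPE = DETERMINISTIC,
                ALGORITHM = 'AEAD_AES_256_CBC_HMAC_SHA_256'
            ) NOT NULL
    );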

Comments closed

Write-Only Permissions

Kenneth Fisher looks at granting write permissions but no read permissions to a user:

Now wait, why are they getting a read error when trying to UPDATE or DELETE? Because of the WHERE clause. The WHERE requires reading the data to see if a row meets the required conditions.

It turns out that write-only permissions don’t really work the way you’d want, as typically you want to read data even if your final goal is to update or delete rows.
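You can see the behavior for yourself with a quick test: the INSERT works, but an UPDATE or DELETE with a WHERE clause trips over the missing SELECT permission. A minimal sketch, with hypothetical table and user names:

    -- Hypothetical "write-only" user.
    GRANT INSERT, UPDATE, DELETE ON dbo.Orders TO WriteOnlyUser;
    DENY SELECT ON dbo.Orders TO WriteOnlyUser;

    EXECUTE AS USER = 'WriteOnlyUser';
    INSERT dbo.Orders (OrderID, Amount) VALUES (1, 10.00);   -- succeeds
    UPDATE dbo.Orders SET Amount = 20.00
    WHERE OrderID = 1;   -- fails: evaluating the WHERE clause requires SELECT
    REVERT;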

Comments closed

Securing Spark Shuffle

Cheng Xu uses Apache Commons Crypto to secure data when Spark shuffles off to disk:

The basic steps can be described as follows:

  1. When a Spark job starts, it will generate encryption keys and store them in the current user’s credentials, which are shared with all executors.

  2. When shuffle happens, the shuffle writer will first compress the plaintext if compression is enabled. Spark will use the randomly generated Initial Vector (IV) and keys obtained from the credentials to encrypt the plaintext by using CryptoOutputStream from Crypto.

  3. CryptoOutputStream will encrypt the shuffle data and write it to the disk as it arrives. The first 16 bytes of the encrypted output file are preserved to store the initial vector.

  4. For the read path, the first 16 bytes are used to initialize the IV, which is provided to CryptoInputStream along with the user’s credentials. The decrypted data is then provided to Spark’s shuffle mechanism for further processing.

Once you have things optimized, the performance hit is surprisingly small.

Comments closed

Azure SQL Threat Detection

Ron Matchoro discusses use cases for Azure SQL Threat Detection:

Thanks to SQL Threat Detection, we were able to detect and fix code vulnerabilities to SQL injection attacks and prevent potential threats to our database. I was extremely impressed by how simple it was to enable threat detection policy using the Azure portal, which required no modifications to our SQL client applications. A while after enabling SQL Threat Detection, we received an email notification about ‘An application error that may indicate a vulnerability to SQL injection attacks’.  The notification provided details of the suspicious activity and recommended concrete actions to further investigate and remediate the threat.  The alert helped me to track down the source of my error and pointed me to the Microsoft documentation that thoroughly explained how to fix my code.  As the head of IT for an information technology and services company, I now guide my team to turn on SQL Auditing and Threat Detection on all our projects, because it gives us another layer of protection and is like having a free security expert on our team.

Anything which helps kill SQL injection for good makes me happy.

Comments closed

Audit Select Statements

Jason Brimhall shows how to build an extended event session which audits all SELECT statements:

I have to be a little honest here. Prior to somebody asking how they could possibly achieve a statement audit via extended events, I had not considered it as a tool for the job. I would have relied on Audit (which is Extended Event related), or some home grown set of triggers. In this particular request, Audit was not fulfilling the want and custom triggers were not an option. Another option might have included the purchase of third-party software, but there are times when budget does not allow for nice expensive shiny software.

So, with a little prodding, I hopped into the metadata and poked around a bit to see what I could come up with to achieve this low-budget audit solution.

Read the whole thing.
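Jason builds his session out of the metadata, so click through for his version; just to give a feel for the shape of such a session, here is a rough sketch that captures completed statements containing SELECT. The event, actions, and predicate here are my assumptions, not necessarily Jason’s choices:

    -- Rough sketch of an Extended Events session auditing SELECT statements.
    CREATE EVENT SESSION AuditSelects ON SERVER
    ADD EVENT sqlserver.sql_statement_completed
    (
        ACTION (sqlserver.username, sqlserver.client_app_name, sqlserver.sql_text)
        WHERE sqlserver.like_i_sql_unicode_string(statement, N'%SELECT%')
    )
    ADD TARGET package0.event_file (SET filename = N'AuditSelects.xel')
    WITH (MAX_DISPATCH_LATENCY = 5 SECONDS);

    ALTER EVENT SESSION AuditSelects ON SERVER STATE = START;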

Comments closed

Managing Power BI Group Workspace Members

Melissa Coates shows how to manage Power BI groups with larger numbers of members:

Dozens or hundreds of users in a group is what is prompting me to write this post. Manually managing the members within the Power BI workspace is just fine for groups with a very small number of members – for instance, your team of 8 people can be managed easily. However, there are concerns with managing members of a large group for the following reasons:

  • Manual Maintenance. The additional administrative effort of managing a high number of users is a concern.
  • Risk of Error. Let’s say there is an Active Directory (A/D) group that already exists with all salespersons added to the group. System admins are quite accustomed to centrally managing user permissions via A/D groups. Errors and inconsistencies will undoubtedly result when changes in A/D are coordinated with other applications, but not replicated to the Power BI Group. Depending on how sensitive the data is, your auditors will also be unhappy.

To avoid the above two main concerns, I came up with an idea. It didn’t work unfortunately, but I’m sharing what I learned with you anyway to save you some time.

Even though Melissa’s plan didn’t work, it’s a good concept, so I recommend reading.

Comments closed

Failed Logins

Kevin Hill discusses failed logins:

We’ve all seen them.

Login failed for user ‘MyDomain\Bob’ (password issue)
Login failed for user ‘MyDomain\Nancy’ (default database issue)
Login failed for user ‘blah, blah, blah…’

But what about Login failed for user ‘Insert Chinese characters here’, Reason: An attempt to logon using SQL Authentication failed.

Wait…nobody in the company has a username with Chinese characters.   And we don’t have SQL Authentication turned on….

I generally agree with Kevin’s assessment, but have one big point of contention:  he recommends turning off successful login logging.  I think that’s not a great thing to do, particularly for a company with a mature security team.  Think about this scenario:  if you see four or five failed login attempts for sa, and you don’t use sa in your environment, you know somebody’s trying something sneaky.  If you see four or five failed login attempts for sa and then a successful login attempt for sa, you know they succeeded.  If you don’t log successful login attempts, you lose that critical piece of information.
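One way to keep an eye on that pattern is to check the error log itself: with both failure and success auditing enabled, a burst of failures followed by a success for the same login is the red flag. A quick sketch against the current log:

    -- Search the current SQL Server error log for login messages mentioning sa.
    -- Parameters: log number (0 = current), log type (1 = error log),
    -- and two search strings that are combined with AND.
    EXEC sys.xp_readerrorlog 0, 1, N'Login', N'sa';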

Comments closed

Dropping Masking From A Column

Steve Jones shows how to drop Dynamic Data Masking from a column:

This is a quick one. As I experimented with Dynamic Data Masking for the Stairway to Dynamic Data Masking, and writing my Using SQL Compare with Dynamic Data Masking, I needed to remove masking from a column. I didn’t want to rebuild tables, and hoped there was an easy way to ALTER a column.

There is.

The more I’ve seen of DDM, the less I like it.  So I’m more a fan of scripts to remove it than scripts to add it…
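For reference, the documented way to pull a mask off a column is a single ALTER, presumably along these lines (table and column names below are hypothetical):

    -- Remove Dynamic Data Masking from a column without rebuilding the table.
    ALTER TABLE dbo.Customers
        ALTER COLUMN EmailAddress DROP MASKED;

    -- Adding a mask back is the mirror image, for comparison:
    -- ALTER TABLE dbo.Customers
    --     ALTER COLUMN EmailAddress ADD MASKED WITH (FUNCTION = 'email()');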

Comments closed