To help promote the separation of duties, one of the things my company has done is to divide our permissions into two accounts. We have one account for our daily tasks: reading email, searching the internet, basic structure changes in a database, etc. The other account is our admin account. It’s for remoting to servers, security tasks, really anything that requires sysadmin. I’m not going to argue the advisability of this because, honestly, I’m kind of on the fence. That said, I do have to deal with it, so here are a few tips in case you have to deal with it as well.
And if you’re not on the domain, `runas /netonly /user:[domain\username] ssms.exe` will do the job.
In conjunction with the Cassandra authenticator, we have also published an open-source Kerberos authenticator plugin for the Cassandra Java driver.
The plugin supports multiple Kerberos quality of protection (QOP) levels, which may be specified directly when configuring the authenticator. The driver’s QOP level must match the QOP level configured for the server authenticator, and is only used during the authentication exchange. If confidentiality and/or integrity protection is required for all traffic between the client and Cassandra, it is recommended that Cassandra’s built-in SSL/TLS be used (note that TLS also protects the Kerberos authentication exchange, when enabled).
An optional SASL authorization ID is also supported. If provided, it specifies a Cassandra role that will be assumed once the Kerberos client principal has authenticated, provided the Cassandra user represented by the client principal has been granted permission to assume the role. Access to other roles may be granted using the GRANT ROLE CQL statement.
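As a quick illustration of that authorization ID mechanism, the CQL grants might look something like the following sketch. The role names here are hypothetical, not taken from the plugin’s documentation:

```sql
-- Hypothetical roles: 'alice' is the Cassandra role matching the
-- authenticated Kerberos client principal; 'reporting' is the role
-- supplied as the SASL authorization ID.
CREATE ROLE reporting WITH LOGIN = false;

-- Allow alice to assume the reporting role once authenticated.
GRANT reporting TO alice;
```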
Click through for more details and check out the GitHub repo.
SMO connects to SQL Server using the ADO.NET SqlClient library, which has over 13 years’ worth of features that help mask the passwords you pass in for SQL authentication. SMO bypasses some of those features and often leaks the passwords in clear text.
- Even where it would normally be hidden.
- Even where you use `Persist Security Info`, introduced in 2005.
- Even where you use `System.Security.SecureString`, introduced in 2012.
- Though thankfully not where you use `System.Data.SqlClient.SqlCredential`, also introduced in 2012. However, there are some caveats here too.
We’ll prove it through repeatable tests that can be used to track whether or not Microsoft fixes the problem.
Read the whole thing.
Roughly two years ago there were a spate of attacks against the open source database solution MongoDB, as well as Hadoop. These attacks were ransomware: the attacker wiped or encrypted data and then demanded money to restore that data. Just like the recent attacks, the only Hadoop clusters affected were those that were directly connected to the internet and had no security features enabled. Cloudera published a blog post about this threat in January 2017. That blog post laid out how to ensure that your Hadoop cluster is not directly connected to the internet and encouraged the reader to enable Cloudera’s security and governance features.
That blog post has renewed relevance today with the advent of XBash and DemonBot.
The origin story of XBash and DemonBot illustrates how security researchers view the Hadoop ecosystem and the lifecycle of a vulnerability. Back in 2016 at the Hack.lu conference in Luxembourg, two security researchers gave a talk entitled Hadoop Safari: Hunting for Vulnerabilities. They described Hadoop and its security model and then suggested some “attacks” against clusters that had no security features enabled. These attacks are akin to breaking into a house while the front door is wide open.
Their advice is simple, but simple is good here: it means you should be able to implement the advice without much trouble.
Recently, Manish Kumar asked an interesting question: what do you do if your proc accesses multiple, or even all of the, databases on the server?
So, instead of giving him a fuzzy answer in reply, I thought I’d write up exactly how you can deal with that sort of situation.
We’ve got two options, and we’ll have a look at both of them. (I’m not going to go into detail about how signing procs works; please see the post mentioned earlier for a basic overview. Here I’m going to look specifically at procs that access multiple databases.)
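For context, the certificate-signing approach he builds on generally looks something like this minimal T-SQL sketch. All database, certificate, procedure, and user names below are hypothetical placeholders:

```sql
-- In the database that holds the procedure (names are placeholders).
USE SourceDB;

CREATE CERTIFICATE ProcSigningCert
    ENCRYPTION BY PASSWORD = 'Str0ng_P@ssw0rd!'
    WITH SUBJECT = 'Signing cert for cross-database procs';

-- Sign the procedure with the certificate.
ADD SIGNATURE TO dbo.MultiDbProc
    BY CERTIFICATE ProcSigningCert
    WITH PASSWORD = 'Str0ng_P@ssw0rd!';

-- After copying the certificate's public key into a target database
-- (e.g., via BACKUP CERTIFICATE / CREATE CERTIFICATE ... FROM FILE),
-- map it to a user there and grant that user what the proc needs.
USE TargetDB;
CREATE USER ProcSigningUser FROM CERTIFICATE ProcSigningCert;
GRANT SELECT ON dbo.SomeTable TO ProcSigningUser;
```

For procs that need rights in every database (or at the server level), the certificate can instead live in master and be mapped to a login, which is one of the situations the post addresses.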
Click through to see both solutions.
With my use of scripting and Azure Cloud Shell, I’m automating the build of my environment, including SQL Database resources, and then need to access and build the logical objects. This means that I need a firewall rule built for the Azure Cloud Shell I’m working from. The IP address for this cloud shell is unique to the session I’m running at that moment.
The requirements to add this enhancement to my script are:
1. Capture and read the IP address for the Azure Cloud Shell session.
2. Populate the IP address into a firewall rule.
3. Log into the new SQL Server database that was created as part of the bash script and then execute SQL scripts.
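The author scripts all of this with bash and the Azure CLI; purely as a sketch of the firewall-rule step, the same thing can also be expressed in T-SQL with Azure SQL Database’s sp_set_firewall_rule. The rule name and IP address below are placeholders for the Cloud Shell session’s values:

```sql
-- Run in the master database of the logical Azure SQL server.
-- Name and IP address are placeholders for the Cloud Shell session IP.
EXECUTE sp_set_firewall_rule
    @name = N'CloudShellSession',
    @start_ip_address = '203.0.113.25',
    @end_ip_address   = '203.0.113.25';

-- Clean up when the Cloud Shell session ends:
-- EXECUTE sp_delete_firewall_rule @name = N'CloudShellSession';
```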
Click through for instructions.
Transparent data encryption (TDE) helps you to secure your data at rest. This means the data files and related backups are encrypted, securing your data in case your media is stolen.
This technology works by implementing real-time I/O encryption and decryption, so this implementation is transparent for your applications and users.
However, this type of implementation can lead to some performance degradation, since more resources must be allocated in order to perform the encrypt/decrypt operations.
In this post we will compare how much longer some of the most common database operations take, so if you are planning to implement TDE on your database, you can have an idea of what to expect from different operations.
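For reference, enabling TDE follows the standard key hierarchy. A minimal sketch (certificate, database, and password names here are hypothetical) looks like:

```sql
USE master;

-- Database master key and server certificate protect the database encryption key.
CREATE MASTER KEY ENCRYPTION BY PASSWORD = 'Str0ng_P@ssw0rd!';
CREATE CERTIFICATE TdeCert WITH SUBJECT = 'TDE certificate';

-- Back up the certificate immediately; without it, encrypted databases
-- cannot be restored elsewhere.
BACKUP CERTIFICATE TdeCert
    TO FILE = 'C:\Backup\TdeCert.cer'
    WITH PRIVATE KEY (
        FILE = 'C:\Backup\TdeCert.pvk',
        ENCRYPTION BY PASSWORD = 'An0ther_P@ssw0rd!');

USE SalesDb;
CREATE DATABASE ENCRYPTION KEY
    WITH ALGORITHM = AES_256
    ENCRYPTION BY SERVER CERTIFICATE TdeCert;

ALTER DATABASE SalesDb SET ENCRYPTION ON;
```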
These results fit in reasonably well with what I’d heard, but it’s nice to have someone run the numbers.
Over the past several quarters, we have made major security enhancements to Confluent Platform, which have helped many of you safeguard your business-critical applications. With the latest release, we increased the robustness of our security feature set to help with:
- Using standard and central directory services like Active Directory (AD)/Lightweight Directory Access Protocol (LDAP)
- Simplifying the management of access control lists (ACLs)
- Proactively managing and monitoring security configurations to address gaps as soon as possible
The following new security features are available in both Confluent Platform 5.0 and Apache Kafka 2.0:
- Support for ACL-prefixed wildcards to simplify the management of access control
- Kafka Connect password protection with support for externalizing secrets (to “secrets stores,” etc., like Hashicorp Vault)
The following security features are available only in Confluent Platform 5.0:
- AD/LDAP group support
- Feature access controls in Confluent Control Center
- Viewing of broker configurations in Confluent Control Center, including differences in security configurations between brokers
Let’s walk through each of these enhancements in detail.
Read on for examples.
In my blog post Calculating a Security Principal’s Effective Rights, I built a view, named Utility.EffectiveSecurity, that you can query to fetch a security principal’s rights to objects in a database. In that post I tested the code and showed how it works. Now I have taken this to the extreme and expanded the view to include all of the user’s security, finding all of their rights to all of the things they can get rights to.
The list of possible permissions you can fetch can be retrieved from:
```sql
SELECT DISTINCT class_desc
FROM fn_builtin_permissions(DEFAULT)
ORDER BY class_desc;
```
This returns the following 26 types of things that can have permissions assigned and returned by the sys.fn_my_permissions function:
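To check your own effective permissions directly, sys.fn_my_permissions can be queried per securable. The table name in the second query is a hypothetical example:

```sql
-- Server-level permissions for the current login.
SELECT permission_name
FROM sys.fn_my_permissions(NULL, 'SERVER');

-- Permissions on a specific (hypothetical) table, including column-level grants.
SELECT subentity_name, permission_name
FROM sys.fn_my_permissions('dbo.SomeTable', 'OBJECT');
```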
Read on for the code.
Up until now, Azure Container Instances had only one option to allow us to connect: assigning a public IP address that was directly exposed to the internet.
Not really great, as exposing SQL Server on port 1433 to the internet is generally a bad idea.
Now, I know there’s a lot of debate about whether or not you should change the port SQL Server is listening on to prevent this from happening. My personal opinion is that if someone wants to get into your SQL instance, changing the port isn’t going to slow them down much. However, a port change will stop opportunistic hacks (such as the above).
But now we have another option: the ability to deploy an ACI within a virtual network in Azure! So let’s run through how to deploy one.
Click through for those instructions.