Press "Enter" to skip to content

Category: Security

Kafka and SIEM/SOAR Tools

Kai Waehner wraps up a series on Apache Kafka and network security:

SIEM combines security information management (SIM) and security event management (SEM) to provide analysis of security alerts generated by applications and network hardware. Vendors sell SIEM as software, as appliances, or as managed services; these products are also used for logging security data and generating reports for compliance purposes.

SOAR tools automate security incident management investigations via workflow automation playbooks. A cyber intelligence API enables the playbook to automate research related to the ticket (looking up a potential phishing URL, a suspicious hash, etc.). The first responder determines the criticality of the event; at this level, it is classified as either a normal or an escalation event. SOAR includes security incident response platforms (SIRPs), security orchestration and automation (SOA), and threat intelligence platforms (TIPs).

In summary, SIEM and SOAR are key pieces of a modern cybersecurity infrastructure. The capabilities, use cases, and architectures are different for every company.
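To make the SOAR piece concrete, here is a minimal Python sketch of an enrichment step in a playbook: it looks up an indicator against a threat-intel service and classifies the ticket as normal or an escalation. The endpoint, risk-score field, and threshold are all hypothetical placeholders, not any particular vendor's API.

```python
import os
import requests

INTEL_API = "https://intel.example.com/v1/lookup"  # placeholder endpoint

def enrich_indicator(indicator: str) -> dict:
    """Look up a suspicious URL or file hash in a threat-intel feed."""
    resp = requests.get(
        INTEL_API,
        params={"indicator": indicator},
        headers={"Authorization": f"Bearer {os.environ['INTEL_API_KEY']}"},
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json()

def triage(ticket: dict) -> str:
    """First-responder step: classify the event as normal or an escalation."""
    report = enrich_indicator(ticket["indicator"])
    # Placeholder scoring rule; real playbooks encode much richer criteria.
    return "escalation" if report.get("risk_score", 0) >= 70 else "normal"
```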

Click through to see where Kafka can fit in all of this.


Row Level Security in Azure Analysis Services and Power BI PPU

Gilbert Quevauvilliers continues a series on moving from Azure Analysis Services to Power BI Premium Per User:

In this blog post I am going to cover how to implement Row Level Security (RLS) when using AAS and how this can be done on PPU.

In my example below I am going to show creating two simple RLS roles, which will limit data for the users who belong to the two roles.

Despite the simplification, we can see how row-level security applies to both products and how the two differ.
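The real implementation uses DAX filter expressions attached to roles in the model, which Gilbert walks through. Conceptually, though, an RLS role is just a row filter bound to a set of members; this hypothetical Python sketch shows the idea with made-up table and role names.

```python
sales = [
    {"region": "North", "amount": 100},
    {"region": "South", "amount": 250},
]

# Each RLS role is essentially a row-filter predicate bound to its members.
# In AAS or PPU these predicates would be DAX expressions, not lambdas.
roles = {
    "NorthOnly": lambda row: row["region"] == "North",
    "SouthOnly": lambda row: row["region"] == "South",
}

def rows_visible_to(role_name):
    """Return only the rows the role's filter allows: the heart of RLS."""
    predicate = roles[role_name]
    return [row for row in sales if predicate(row)]

print(rows_visible_to("NorthOnly"))  # [{'region': 'North', 'amount': 100}]
```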


Finding and Removing Custom Roles, Schemas, and Users from a Database

Thomas Williams wants to go back to square one:

I’m a fan of the built-in database roles like db_datareader to standardise & simplify permissions (sorry Dr. Greg Low!) and recently I needed to do just that in a database created using SQL Server 2000, and remove old defaults and a lot of custom roles, schemas and users.

I wrote the set of queries below to generate scripts to remove non-built-in roles, schemas and users, when compared to the model database on a new SQL Server 2019 server.

After running the script generated by the queries, I added back users and gave appropriate roles (like db_datareader).
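Thomas's script is T-SQL; as a rough sketch of the same idea, this hypothetical Python snippet compares the target database's roles against model and generates DROP statements to review. Server and database names are placeholders, and it covers roles only, not schemas or users.

```python
import pyodbc  # assumes a SQL Server ODBC driver is installed

conn = pyodbc.connect(
    "DRIVER={ODBC Driver 17 for SQL Server};SERVER=myserver;"
    "DATABASE=OldDatabase;Trusted_Connection=yes"  # placeholder names
)
cursor = conn.cursor()

# Non-fixed roles in the target database that don't exist in model
# were added by someone, so they are candidates for removal.
cursor.execute("""
    SELECT p.name
    FROM sys.database_principals AS p
    WHERE p.type = 'R' AND p.is_fixed_role = 0 AND p.name <> 'public'
      AND p.name NOT IN (SELECT name FROM model.sys.database_principals)
""")
for (role_name,) in cursor.fetchall():
    print(f"DROP ROLE [{role_name}];")  # review before running!
```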

Read on for the script.


Executing T-SQL with a Proxy Account

Tom Collins answers a question:

I have some T-SQL code added to a job step on a SQL Server Agent job. The problem is I need to run the code as RUNAS. I thought of executing the job with a proxy account, so I progressed with the credential & proxy set up. But I still can’t view the Proxy\Credential in the RunAs list. Is there a way around this problem?
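For context, a credential and proxy are typically wired up as below; all names and the password are placeholders. Note the last step: proxies are granted to specific subsystems such as CmdExec or PowerShell, and T-SQL is not among them, which is likely why nothing shows up in the Run As list of a T-SQL job step.

```python
import pyodbc

conn = pyodbc.connect(
    "DRIVER={ODBC Driver 17 for SQL Server};SERVER=myserver;"
    "DATABASE=msdb;Trusted_Connection=yes",  # placeholder server
    autocommit=True,
)
cursor = conn.cursor()

# 1. A credential stores the Windows account the step should run as.
cursor.execute(
    "CREATE CREDENTIAL RunAsCred "
    "WITH IDENTITY = N'DOMAIN\\svc_account', SECRET = N'<password>';"
)

# 2. A proxy exposes that credential to SQL Server Agent.
cursor.execute(
    "EXEC msdb.dbo.sp_add_proxy "
    "@proxy_name = N'RunAsProxy', @credential_name = N'RunAsCred', @enabled = 1;"
)

# 3. Proxies are granted per subsystem, and T-SQL is not a proxy
#    subsystem, hence the empty Run As list on a T-SQL step.
cursor.execute(
    "EXEC msdb.dbo.sp_grant_proxy_to_subsystem "
    "@proxy_name = N'RunAsProxy', @subsystem_name = N'CmdExec';"
)
```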

Read on to learn why and for the answer.


A Warning: VPCs and Distributed Database Platforms

Wade Trimmer takes us through a reason why you might not want to use VPC endpoints to separate applications from distributed database platforms:

AWS PrivateLink (also known as a VPC endpoint) is a technology that allows the user to securely access services using a private IP address. It is not recommended to configure an AWS PrivateLink connection with Apache Kafka or Apache Cassandra mainly due to a single entry point problem. PrivateLink only exposes a single IP to the user and requires a load balancer between the user and the service. Realistically, the user would need to have an individual VPC endpoint per node, which is expensive and may not work. 

Using PrivateLink, it is impossible by design to contact specific IPs within a VPC in the same way you can with VPC peering. VPC peering allows a connection between two VPCs, while PrivateLink publishes an endpoint that others can connect to from their own VPC.
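The single-entry-point problem is visible from any Kafka client: the bootstrap connection only returns metadata, and the client then connects directly to each broker's advertised address. Here is a small sketch using the confluent-kafka Python client; the broker address is a placeholder.

```python
from confluent_kafka.admin import AdminClient

# Placeholder bootstrap address, e.g. the one IP a PrivateLink endpoint exposes.
admin = AdminClient({"bootstrap.servers": "10.0.0.5:9092"})

# The bootstrap connection only hands back cluster metadata; the client
# must then dial every broker at its advertised host:port directly.
metadata = admin.list_topics(timeout=10)
for broker in metadata.brokers.values():
    print(f"broker {broker.id}: {broker.host}:{broker.port}")
# If those advertised addresses aren't reachable through the endpoint,
# requests to partition leaders will fail. Cassandra's per-node
# coordinator connections hit the same wall.
```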

Read on to understand how this affects platforms like Cassandra and Kafka.


Working with App Secrets in .NET Core

Santosh Hari shows us how to use application secrets when building .NET Core applications:

I was writing a sample .NET Core console application for a talk because I felt using a sample ASP.NET Core web app was overkill. The app was connecting to a bunch of Azure cloud and 3rd party services (think Twilio API for SMS or LaunchDarkly API for Feature Flags) and I had to deal with connection strings.

Now I have a nasty habit of “accidentally” checking in connection strings and secrets into public GitHub repositories, so I wanted to do this right from the get-go.

That’s a bad habit to be in, and Santosh shows us how we can avoid doing that via use of application secrets.
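Santosh's post covers the .NET user-secrets tooling specifically; as a language-agnostic sketch of the same principle, secrets can come from the environment rather than from anything that gets committed. The variable name here is hypothetical.

```python
import os

# Secrets live in the environment (or an untracked local store),
# never in source control.
twilio_key = os.environ.get("TWILIO_API_KEY")  # hypothetical variable name

if twilio_key is None:
    raise RuntimeError(
        "TWILIO_API_KEY is not set; configure it in your environment, "
        "not in a file that gets committed."
    )
```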


Handling Content Access Requests in Power BI

Marc Lelijveld walks us through the process of requesting (and granting) access to content in Power BI:

When we look at the Power BI ecosystem, we can identify a bunch of different artifacts. For example, dataflows, datasets, reports, dashboards and many derivatives. As I explained in the previous post, the best practice for sharing content is through a Power BI App, which includes a list of users or an active directory group containing multiple users. With that, the content becomes available after publishing to those who are granted access. Though, it can happen that one of the users shares the link with other users who do not have access to the content. As a property of the Power BI App, you can allow users to share the app and underlying dataset with share permissions. Though, when working with sensitive data this might not be what you are looking for, as you might lose control over who has access.

Read on to see what constitutes a content access request and what you can do about them.


Digital Forensics with Apache Kafka

Kai Waehner continues a series on using Apache Kafka as the backbone for computer security:

Storing data long-term in Kafka has been possible since the beginning. Each Kafka topic gets a retention time. Many use cases use a retention time of a few hours or days, as the data is only processed and stored in another system (like a database or data warehouse). However, more and more projects use a retention time of a few years or even -1 (= forever) for some Kafka topics (e.g., due to compliance reasons or to store transactional data).

The drawback of using Kafka for forensics is the huge volume of historical data and its related high cost and scalability issues. This gets pretty expensive as Kafka uses regular HDDs or SSDs as the disk storage. Additionally, data rebalancing between brokers (e.g., if a new broker is added to a cluster) takes a long time for huge volumes of data. Hence, rebalancing can take hours and can impact scalability and reliability.

But there is a solution to these challenges: Tiered Storage.
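On the retention side mentioned above, per-topic retention is an ordinary topic config. Here is a minimal sketch with the confluent-kafka Python client, setting retention.ms to -1 to keep records forever; the topic name and broker address are placeholders.

```python
from confluent_kafka.admin import AdminClient, ConfigResource

admin = AdminClient({"bootstrap.servers": "localhost:9092"})  # placeholder

# retention.ms = -1 keeps records on this topic indefinitely.
resource = ConfigResource("topic", "security-events",
                          set_config={"retention.ms": "-1"})
for res, future in admin.alter_configs([resource]).items():
    future.result()  # raises if the broker rejected the change
```

The tiered-storage settings themselves are platform-specific (Confluent Platform, for example, configures an object-store backend per cluster), so they are best taken from the vendor's documentation.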

Click through to learn more.


Threat Intelligence and Kafka

Kai Waehner continues a series on using Apache Kafka as the foundation for a security solution:

Threat intelligence, or cyber threat intelligence, reduces harm by improving decision-making before, during, and after cybersecurity incidents, reducing operational mean time to recovery and reducing adversary dwell time for information technology environments.

Threat intelligence is evidence-based knowledge, including context, mechanisms, indicators, implications, and action-oriented advice about an existing or emerging menace or hazard to assets. This intelligence can be used to inform decisions regarding the subject’s response to that menace or hazard.

Threat intelligence solutions gather raw data about emerging or existing threat actors & threats from various sources. This data is then analyzed and filtered to produce threat intel feeds and management reports that contain information that automated security control solutions can use.

Threat intelligence keeps organizations informed of the risks of advanced persistent threats, zero-day threats and exploits, and how to protect against them.
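The threat intel feeds mentioned above map naturally onto Kafka topics: feeds are produced into a topic that downstream security controls consume. A minimal sketch with the confluent-kafka Python client follows; the topic name and the indicator record are made up (real feeds often use formats such as STIX).

```python
import json
from confluent_kafka import Producer

producer = Producer({"bootstrap.servers": "localhost:9092"})  # placeholder

# A made-up indicator record for illustration.
indicator = {
    "type": "url",
    "value": "http://malicious.example.com/login",
    "source": "osint-feed",
    "confidence": 80,
}
producer.produce("threat-intel", json.dumps(indicator).encode("utf-8"))
producer.flush()  # block until the broker has acknowledged delivery
```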

Read the whole thing.


Unlocking a Login the Bad Way

Kenneth Fisher needs bad things done fast:

We recently had an application login (SQL Server authenticated) in one of our training environments start locking out on a regular basis. I won’t go into why other than to say we did resolve it eventually. This was a major problem that escalated rather quickly up our management chain. We needed to solve it ASAP. And because of that we needed not only a long term solution but a short term one as well. The short term solution involved creating a script that unlocked the login (if it’s currently locked) and sticking it in a job that runs every 5 minutes.

Note: This is not something you should be doing in production! This creates a major security hole.

The process itself isn’t, strictly speaking, a bad idea; for example, maybe you’re testing out some integration and accidentally lock the account. The bad idea is having a job run this script every few minutes.
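Kenneth's actual script is in the post; one common way to unlock a SQL-authenticated login without knowing its password is to toggle CHECK_POLICY, sketched here with placeholder names. The same warning applies: don't leave this running unattended in production.

```python
import pyodbc

conn = pyodbc.connect(
    "DRIVER={ODBC Driver 17 for SQL Server};SERVER=myserver;"
    "DATABASE=master;Trusted_Connection=yes",  # placeholder server
    autocommit=True,
)
cursor = conn.cursor()

login = "AppLogin"  # placeholder login name

# LOGINPROPERTY returns 1 when the login is currently locked out.
cursor.execute("SELECT LOGINPROPERTY(?, 'IsLocked')", login)
if cursor.fetchone()[0] == 1:
    # Toggling CHECK_POLICY clears the lockout without touching the
    # password; it also briefly disables the lockout policy, which is
    # part of the security hole Kenneth warns about.
    cursor.execute(f"ALTER LOGIN [{login}] WITH CHECK_POLICY = OFF;")
    cursor.execute(f"ALTER LOGIN [{login}] WITH CHECK_POLICY = ON;")
```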
