Press "Enter" to skip to content

SQL Injection Explained

Hugo Kornelis explains SQL injection and how to fix it:

Translating this back to technical terms, the root cause of actual SQL injection vulnerabilities is a front-end program (often, but not always, a web application) that allows the user to enter data, forms a query by concatenating that data into pre-made query strings, and sends the result to the backend database server.

Once you know the root cause, the fix becomes rather obvious.

When I explain SQL injection, I like to explain it using two concepts: code versus data, and technology boundaries. Ideally, your code is separate from your data, and within a single technology (e.g., within that .NET web app), that’s typically the case. But when you start to traverse boundaries, it’s convenient (and wrong!) to combine code and data and pass everything across as a single stream of information. Parameterization is the way of keeping code and data separate as you cross those boundaries. Your data isn’t code and your code isn’t data; conflating the two is how attackers can inject arbitrary code into your system.

Incidentally, technology boundaries can happen within a single product, too: dynamic SQL is an example of this.
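To make that concrete, here’s a minimal sketch of the difference, assuming a Python front end talking to SQL Server through pyodbc; the table and column names are invented for illustration:

```python
import pyodbc

conn = pyodbc.connect(
    "DRIVER={ODBC Driver 17 for SQL Server};SERVER=.;DATABASE=Sales;Trusted_Connection=yes"
)
cursor = conn.cursor()

user_input = "O'Brien'; DROP TABLE dbo.Customers; --"

# Vulnerable: data is concatenated into code, so both cross the boundary
# as a single string and the server cannot tell them apart.
query = "SELECT * FROM dbo.Customers WHERE LastName = '" + user_input + "';"

# Safe: the code crosses the boundary as a fixed string and the data
# crosses as a separate parameter, so the two are never conflated.
cursor.execute("SELECT * FROM dbo.Customers WHERE LastName = ?;", user_input)
```

The same principle applies on the T-SQL side of that inner boundary: dynamic SQL executed through sp_executesql with a parameter list keeps code and data separate the way the ? placeholder does here.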

SQL Server Permissions Manager Updates

Eric Cobb has updates to the SQL Server Permissions Manager project:

I recently made some updates to my SQL Server Permissions Manager project on GitHub. These updates mainly consisted of bug fixes and documentation updates, although I did also add the ability to snapshot server level permissions such as “sysadmin” or “securityadmin”.

Given how much we want to audit permissions and how difficult it is to do a really good job of that in SQL Server, there’s space for a good auditing tool.
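For a sense of what a server-level snapshot has to capture — this is not the project’s code, just a rough sketch of the underlying catalog views, in Python with pyodbc:

```python
import pyodbc

conn = pyodbc.connect(
    "DRIVER={ODBC Driver 17 for SQL Server};SERVER=.;Trusted_Connection=yes"
)

# Enumerate members of fixed server roles such as sysadmin and securityadmin
# by joining the role membership catalog view to server principals twice.
rows = conn.cursor().execute("""
    SELECT r.name AS role_name, m.name AS member_name
    FROM sys.server_role_members AS srm
    JOIN sys.server_principals AS r ON r.principal_id = srm.role_principal_id
    JOIN sys.server_principals AS m ON m.principal_id = srm.member_principal_id
    WHERE r.name IN ('sysadmin', 'securityadmin');
""").fetchall()

for role_name, member_name in rows:
    print(f"{member_name} is a member of {role_name}")
```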

Apps To Manage SQL Server On Azure VMs

Kevin Chant has a list of tools you can use to manage SQL Server on Azure VMs:

From experience I know it’s important to know what applications you can use locally with Azure to manage SQL Server solutions, so that you have the right tools for the job.

For instance, I was talking with some people at a client’s site the other day about deciding what application to use to future-proof themselves.

In this post I will cover applications for use with Windows, macOS and Linux distributions.

I don’t think I’m spoiling too much in saying that about 80% of these are the same tools you would use for on-prem work.

Where To Store SQL Server DB Files In Azure

James Serra gives us a few options for storing database files (that is, your MDF, NDF, and LDF files) when running SQL Server in an Azure VM:

If using managed disks, the recommendation is to attach several premium SSDs to a VM. From Sizes for Windows virtual machines in Azure you can see that for the size DS5_v2 you can add 64 4TB data disks (P50), yielding a potential total volume size of up to 256TB per VM, and if you use the preview sizes (P80) your application can have up to around 2PB of storage per VM (64 32TB disks). With premium storage disks, your applications can achieve 80,000 I/O operations per second (IOPS) per VM, and a disk throughput of up to 2,000 megabytes per second (MB/s) per VM. Note that the more disks you add, the more storage and IOPS you get, but only up to a certain point due to VM limits: at some point, more disks get you more storage but not more IOPS.

But there are a few other options too, so check them out.
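The capacity figures are straightforward to verify:

```python
# Back-of-the-envelope check of the figures quoted above (DS5_v2: 64 data disks).
disks = 64

p50_tb = 4                          # P50 premium SSD, 4 TB each
print(disks * p50_tb, "TB")         # 256 TB per VM

p80_tb = 32                         # P80 (preview size), 32 TB each
print(disks * p80_tb / 1024, "PB")  # 2.0 PB per VM
```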

Stored Procedure IF Branching and Performance

Erik Darling explains that the IF block in a stored procedure won’t help you with performance:

Making plan choices with IF branches like this plain doesn’t work.
The optimizer compiles a plan for both branches based on the initial compile value.
What you end up with is a stored proc that doesn’t do what it’s supposed to do, and parameter sniffing times two.

Read on to see an example of this. If you really, really want to use an IF block, you could separate the components out into individual stored procedures and call those stored procedures independently.
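A sketch of that pattern, with hypothetical procedure names, where the caller (here Python with pyodbc) picks the procedure up front so each branch compiles and caches its own plan:

```python
import pyodbc

def get_orders(conn, days_back):
    # Hypothetical procedures, one per former IF branch. Because each branch
    # is its own module, each gets its own cached plan rather than both plans
    # being compiled against the first parameter value the wrapper ever saw.
    # Note: `proc` comes from a fixed set here, never from user input.
    proc = "dbo.GetOrders_Recent" if days_back <= 30 else "dbo.GetOrders_All"
    return conn.cursor().execute(f"EXEC {proc} @DaysBack = ?;", days_back).fetchall()
```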

Power BI Palette In Charticulator

David Eldersveld has submitted a pull request for Charticulator:

This morning, I submitted a pull request on GitHub to bring the default Power BI color palette to Charticulator. While Charticulator can export a Power BI custom visual .pbiviz file, there are no color-related Format options in Power BI. Any color choices must occur in Charticulator before exporting (you currently cannot change them later in Power BI). As a result, having the Power BI default palette easily accessible in Charticulator could help bring more consistency between the two tools.

I think this is a good idea. Consistency between products allows people to combine them more effectively.

Using AWS Lambda To Get Into Nice Restaurants

Stephane Maarek gives us the best use of AWS Lambda I’ve seen yet:

An attentive eye would have noticed that the booking platform is not hosted on the restaurant website at http://www.septime-charonne.fr/en/ but instead on https://module.lafourchette.com.

Upon using the Chrome Web Developer Tools to analyze the network calls being made between my browser and the booking service, I stumbled upon an easy-to-use and completely unprotected REST API:

I love the bonus hack at the end.

Enabling Cloudera Manager Debug Mode

Guy Shilo has a quick tip around debugging in Cloudera Manager:

This is a short post but it can save you some wandering and searching.

Sometimes when you try to find and fix issues with Cloudera Manager you will want to increase the log level to debug so you can see what’s wrong.

The procedure cannot be found in the documentation (or at least cannot be found easily), so here’s how it’s done:

As you’d expect, going into debug mode generates a lot of data on a real cluster, so use sparingly.

Things To Know About Databricks UAP

Kara Annanie has five things you should know about the Databricks Unified Analytics Platform:

4. A Spark DataFrame is not the same as a Pandas/R DataFrame
Spark DataFrames are specifically designed to use distributed memory to perform operations across a cluster, whereas Pandas/R DataFrames can only run on one computer. This means that you need to use a Spark DataFrame to realize the benefits of the cluster when coding in Python or R within Databricks.

This is a nice set of tips for people getting started with Spark and Databricks.
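A quick illustration of the difference in Python, with made-up sample data:

```python
import pandas as pd
from pyspark.sql import SparkSession

# In a Databricks notebook `spark` is predefined; getOrCreate() also makes
# this sketch runnable outside Databricks.
spark = SparkSession.builder.getOrCreate()

# A pandas DataFrame lives entirely in the memory of a single machine.
pdf = pd.DataFrame({"user_id": [1, 1, 2], "amount": [10.0, 25.5, 7.25]})

# A Spark DataFrame partitions the rows across the cluster, so operations
# on it run in parallel on the executors.
sdf = spark.createDataFrame(pdf)
sdf.groupBy("user_id").sum("amount").show()

# toPandas() collects everything back to one machine, so only use it once
# the data has been reduced to something that fits on the driver.
result = sdf.groupBy("user_id").sum("amount").toPandas()
```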
