Press "Enter" to skip to content

Curated SQL Posts

High-Throughput REST APIs with Dapper and Azure SQL DB

Davide Mauri builds out an example of a WebAPI project using Dapper to query Azure SQL Database:

I was able to execute 1100 requests per second with a median response time of 20msec. If you can accept a bit higher latency, you can also reach 1500 RPS, but the median response time becomes 40msec and the 95th percentile sits at 95msec. Database usage never goes above 20% in such cases…and in fact the bottleneck is the Web App (or rather, the Web App Plan) and more specifically the CPU. Time to scale up or out the Web App Plan.

By scaling up and out a bit, I was able to reach almost 10,000 requests per second with just an HS_Gen5_4. Quite impressive.

I like Dapper as a micro-ORM. Products like it and FSharp.Data.SqlClient are good examples of how you can remove a lot of middleware goop without taking on the performance burdens of Entity Framework and Hibernate.

Dealing with Large SQL Scripts

Kevin Chant has some advice when you have to deal with a giant SQL script:

If you have been given a script that is thousands of lines long from a developer, the first thing I would ask is if they can split it up.

I say this because a lot of developers who write long scripts tend to have come from various backgrounds. Hence, some of them are used to developing in other programming languages.

So, they do not always appreciate that SQL is a set-based language. In addition, they do not always appreciate that SQL Server is optimised for set-based queries.

Sometimes you can break these scripts down, though there are of course good ways and bad ways to do so.
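
To make the set-based point concrete, here is a purely hypothetical illustration (the table and column names are invented) of the same work done row by row versus as a single statement:

-- Row-by-row processing, the style often carried over from other languages:
DECLARE @OrderID int;
DECLARE order_cursor CURSOR FOR
    SELECT OrderID FROM dbo.Orders WHERE OrderDate < '20190101';
OPEN order_cursor;
FETCH NEXT FROM order_cursor INTO @OrderID;
WHILE @@FETCH_STATUS = 0
BEGIN
    UPDATE dbo.Orders SET Status = 'Archived' WHERE OrderID = @OrderID;
    FETCH NEXT FROM order_cursor INTO @OrderID;
END;
CLOSE order_cursor;
DEALLOCATE order_cursor;

-- The set-based equivalent, which SQL Server's optimizer handles far better:
UPDATE dbo.Orders
SET    Status = 'Archived'
WHERE  OrderDate < '20190101';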

Using Pester with .NET PowerShell Notebooks

Rob Sewell has PowerShell in notebooks, so of course Rob is going to write tests:

Using Pester to validate that an environment is as you expect it is a good resource for incident resolution, potentially enabling you to quickly establish an area to concentrate on for the issue. However, if you try to run Pester in a .NET Notebook you will receive an error.

Click through for the reason why this error appears and a workaround until it’s fixed for real.

Writeback in Power BI Using Power Apps

Shabnam Watson shows how you can use Power Apps to write back to data sources in Power BI:

The Power Apps visual first became available as a custom visual in 2018 and then as one of the default visuals as of the October 2019 release of Power BI Desktop.

The Power Apps visual provides an important piece of functionality: it can refresh a Power BI report page automatically, which eliminates the need for the end user to manually refresh the page by clicking the Refresh option in the Power BI menu to see changes in the data.

In this post, I will show you how to add a simple app to a Power BI report to update the data in the report and have the app automatically refresh the page. All of this can be done with a few lines of code thanks to all the work that has been done in Power Apps to make the app creation experience extremely user friendly and relatively easy to learn.

Click through for the demo.
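
As a bit of context, the writeback target in a setup like this is typically just a table in the source database which the app updates and the report then reads. A minimal, hypothetical example of such a table (the names here are invented, not taken from Shabnam's demo):

-- Hypothetical writeback target: the Power Apps app inserts or updates rows here,
-- and the automatically refreshed report page picks up the changes.
CREATE TABLE dbo.SalesTargets
(
    TargetID     int IDENTITY(1,1) PRIMARY KEY,
    Region       nvarchar(50) NOT NULL,
    TargetAmount decimal(18,2) NOT NULL,
    ModifiedDate datetime2 NOT NULL DEFAULT SYSUTCDATETIME()
);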

SQL Server 2019 and sys.syslogins Changes

Taryn Pratt goes into a change in the sys.syslogins system view in SQL Server 2019:

Sigh ok, something is really broken because this was working before we failed over.

The code for the login replication basically does the following via a cursor (yeah, I know, but it works…normally):

1. Select from the primary via OPENQUERY to query the logins and passwords
2. Using sp_hexadecimal, convert the varbinary password to a string value
3. Create a string to be executed, i.e. dynamic SQL that runs a CREATE LOGIN

Read on for the whole story and how you can protect yourself as you upgrade to SQL Server 2019.
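
For reference, a rough sketch of that cursor-driven pattern (with hypothetical server and login names, and not Taryn's actual code; sp_hexadecimal is the helper procedure from Microsoft's transfer-logins script and has to exist already) looks something like this:

DECLARE @hash varbinary(256), @hash_string varchar(514), @sql nvarchar(max);

-- 1. Pull the password hash for a login from the primary over a linked server.
SELECT @hash = password_hash
FROM OPENQUERY([PrimaryServer],
    'SELECT name, password_hash FROM master.sys.sql_logins')
WHERE name = N'SomeLogin';

-- 2. Convert the varbinary hash to a hex string.
EXEC sp_hexadecimal @hash, @hash_string OUTPUT;

-- 3. Build and run dynamic SQL that recreates the login with the same hash.
SET @sql = N'CREATE LOGIN [SomeLogin] WITH PASSWORD = ' + @hash_string + N' HASHED;';
EXEC (@sql);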

Security Changes in ML Services

Dennes Torres goes over some of the security changes with Machine Learning Services in SQL Server 2019:

I have a confession to make. Why, in my last article about shortest_path in SQL Server 2019, did I use Gephi to illustrate the relationships, instead of using an R script for the same purpose and demonstrating Machine Learning Services as well?

The initial plan was to use an R script; however, the R script which works perfectly in SQL Server 2017 doesn’t work in SQL Server 2019.

The change is a positive one from the standpoint of security, but it also makes life more difficult. I found this particularly tricky when installing TensorFlow and Keras in R via ML Services.
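
For anyone who has not used ML Services, the R scripts in question run through sp_execute_external_script; a minimal, illustrative call looks roughly like this (the 2019 security changes govern what such a script can reach outside the database, such as the file system and network):

EXEC sp_execute_external_script
    @language = N'R',
    @script = N'OutputDataSet <- InputDataSet;',
    @input_data_1 = N'SELECT 1 AS col1;'
WITH RESULT SETS ((col1 int));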

Creating Sources and Sinks with Blink

Seth Wiesman has a tutorial showing how to create sources and sinks using Apache Flink’s SQL interface, Blink:

A lot of work focused on improving runtime performance and progressively extending its coverage of the SQL standard. Flink now supports the full TPC-DS query set for batch queries, reflecting the readiness of its SQL engine to address the needs of modern data warehouse-like workloads. Its streaming SQL supports an almost equal set of features – those that are well defined on a streaming runtime – including complex joins and MATCH_RECOGNIZE.

As important as this work is, the community also strives to make these features generally accessible to the broadest audience possible. That is why the Flink community is excited in 1.10 to offer production-ready DDL syntax (e.g., CREATE TABLE, DROP TABLE) and a refactored catalog interface.

Click through for a demonstration. One of the nicest things about the ANSI SQL standard is that it was intended to be a one-language solution, where the language used for administration is the same as the language used for regular queries. That cuts down on the number of languages you need to know to get your job done.
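
To give a rough idea of what that DDL looks like, a Flink SQL source or sink definition follows this general shape (the connector options below are illustrative only and vary by Flink version and connector):

-- Illustrative Flink SQL DDL for a Kafka-backed table.
CREATE TABLE orders (
    order_id BIGINT,
    amount   DECIMAL(10, 2),
    order_ts TIMESTAMP(3)
) WITH (
    'connector.type'  = 'kafka',
    'connector.topic' = 'orders',
    'format.type'     = 'json'
);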

MR3: Hive on Kubernetes

Alex Woodie reports on a DataMonad production:

MR3 is a software product developed by a team led by Sungwoo Park. The software, which is not open source, is sold by a Delaware-based software company called DataMonad. After the team prototyped a Java-based execution engine called MR2 in the 2013 timeframe, development of the Scala-based MR3 began in 2015. The first release of MR3 was delivered in early 2018, and version 1.0 was released yesterday.

According to DataMonad, MR3 is an execution engine for big data processing, and Hive is the first and main application that’s been configured to run on it (Tez is also supported). The company says MR3 offers comparable performance to the latest release of Hive, dubbed LLAP, but without the technical complexity.

The closed-sourcedness is a bit of a downer, but I like having more competition in the space.
