Press "Enter" to skip to content

Author: Kevin Feasel

SQL Server Updates For macOS

Meet Bhagdev has a couple of macOS-related announcements for SQL Server.  First, the SQL Server team has released command-line tools:

We are delighted to share the production-ready release of the SQL Server Command Line Tools (sqlcmd and bcp) on macOS El Capitan and Sierra.

The sqlcmd utility is a command-line tool that lets you submit T-SQL statements or batches to local and remote instances of SQL Server. The utility is extremely useful for repetitive database tasks such as batch processing or unit testing.

The bulk copy program utility (bcp) bulk copies data between an instance of Microsoft SQL Server and a data file in a user-specified format. The bcp utility can be used to import large numbers of new rows into SQL Server tables or to export data out of tables into data files.
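
If you have not used them from a shell before, the invocations look the same on macOS as they do on Windows. A quick sketch (server, credentials, and table names below are placeholders, not anything from the announcement):

# Run an ad hoc T-SQL batch against a local or remote instance
sqlcmd -S myserver.example.com -d AdventureWorks -U sa -P 'StrongPassword!' -Q "SELECT TOP 5 name FROM sys.tables;"

# Bulk-export a table to a character-format file, then load it back in
bcp AdventureWorks.dbo.SalesOrders out ~/salesorders.dat -c -S myserver.example.com -U sa -P 'StrongPassword!'
bcp AdventureWorks.dbo.SalesOrders in ~/salesorders.dat -c -S myserver.example.com -U sa -P 'StrongPassword!'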

Second, there’s a new ODBC driver available:

  • Azure AD support – You can now use Azure AD authentication (username/password) to centrally manage identities of database users and as an alternative to SQL Server authentication.

  • Always Encrypted support – You can now use Always Encrypted. Always Encrypted lets you transparently encrypt the data in the application, so that SQL Server will only handle the encrypted data and not plaintext values. Even if the SQL instance or the host machine is compromised, an attacker only gets the ciphertext of the sensitive data.

  • Table Valued Parameters (TVP) support – TVP support allows a client application to send parameterized data to the server more efficiently by sending multiple rows to the server with a single call. You can use the ODBC Driver 13.1 to encapsulate rows of data in a client application and send the data to the server in a single parameterized command.
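
In connection-string terms, the first two of those map to a couple of new keywords. A rough sketch (server, database, and user are made up, and line breaks are added for readability):

Driver={ODBC Driver 13 for SQL Server};
Server=tcp:myserver.database.windows.net,1433;
Database=SalesDB;
Authentication=ActiveDirectoryPassword;
UID=someone@contoso.com;PWD=<password>;
ColumnEncryption=Enabled;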

Multi-platform is the catchword of the day.  If you’re a macOS user, this might be a portent of things to come.


Pester For Presentations

Rob Sewell takes Pester to the edge:

If you have PowerShell version 5, then you will have Pester already installed, although you should update it to the latest version. If not, you can get Pester from the PowerShell Gallery; follow the instructions on that page to install it. This is a good post to start learning about Pester.

What can you test? Everything. Well, specifically everything that you can write a PowerShell command to check. So when I am setting up for my presentation, I check the following things. I add new things to my tests as I think of them or as I observe things that may break my presentations. Most recently, that was ensuring that my Visual Studio Code session was running under the correct user. I did that like this.
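
Rob’s actual test is in his post; as a rough sketch of that style of check, a Pester test (v3 syntax) might look something like this, where the process name, expected user, and service name are assumptions:

Describe 'Presentation environment' {
    It 'Visual Studio Code is running as the demo user' {
        $owner = Get-CimInstance Win32_Process -Filter "Name = 'Code.exe'" |
            Select-Object -First 1 |
            Invoke-CimMethod -MethodName GetOwner
        $owner.User | Should Be 'DemoUser'
    }

    It 'The SQL Server service is running' {
        (Get-Service -Name 'MSSQLSERVER').Status | Should Be 'Running'
    }
}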

Rob’s scenario is around giving presentations, but while reading this, think about those services which should be running on your SQL Server instance—the same concept applies.


An Introduction To “Modern Excel”

Rob Collie explains how he talks about “Modern Excel,” by which he means DAX + Power Query/Power Pivot:

So the magic of Power Query is instantly apparent and tangible to basically any Excel Pro.  They can immediately see how PQ will save them oodles of time and anguish.

The benefits of DAX and relationships, by contrast, are less readily apparent at first glance.  Portable/re-usable formulas that enable rapid iteration, the answering of “emergent” questions in near real-time, as well as a “subdivide and segment” capability?  Or how about multi-data-table capabilities that provide an integrated and convenient view across many different sources of formerly-siloed data?  These concepts are simply alien to the longtime Excel user, even though they are MONSTERS in terms of their biz value (as well as time-savers and anguish-reducers).  None of the impact “lands” up front because it can’t adequately be contemplated until you’ve started DOING it.

Rob’s looking at this from the standpoint of an educator helping train people with Excel expertise.


Against Shrinking Database Log Files

Kenneth Fisher is wary of shrinking your database log file:

It’s too big
I find that people who say this frequently don’t have a firm idea of what is too big or even why it might be as big as it is. This also goes in with the “I need to free up disk space” reason, with no good explanation of why the space needs to be freed up.

There are good reasons to shrink the log and they do revolve around space. For example:

  • I had a one-time explosive growth of the log due to a large data load.

  • The usage of the database has changed and we aren’t using as much of the log as we used to.

  • We are billed at 2 am based on space used. We will grow the log back again after the billing period.

  • I need to clean up a large number of VLFs. (Of course, then you are going to grow it back manually, right?)

I quoted the caveats, but Kenneth makes a solid case against shrinking log files without a good counterbalancing reason.
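
To put the “grow it back manually” caveat in concrete terms, here is a rough sketch using Invoke-Sqlcmd from the SqlServer module; the database name, file name, and sizes are placeholders, not recommendations:

$query = @"
USE SalesDB;
-- Shrink the log back down after a one-time blowout
DBCC SHRINKFILE (SalesDB_log, 1024);
-- Then grow it back manually, in one step, to keep the VLF count under control
ALTER DATABASE SalesDB MODIFY FILE (NAME = SalesDB_log, SIZE = 8192MB);
"@
Invoke-Sqlcmd -ServerInstance 'MyServer' -Query $query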


Which Write- Cmdlet Should I Use?

Jana Sattainathan has some thoughts on when to use each of the Write- cmdlets in PowerShell:

PowerShell has matured as the automation tool of choice on the Microsoft platform, be it on Windows or Azure. However, there is no official guidance on best practices and standards around some things. At times, bloggers do things incorrectly in their examples, thereby reinforcing bad practices. Hopefully, this small post will help connect some dots for you! Please comment if I am stating something that is not a generally accepted best practice. Specifically, we are going to glance at the following cmdlets:

  • Write-Host

  • Write-Output

  • Write-Debug

  • Write-Warning

  • Write-Error/Throw

  • Write-Verbose

  • Write-Progress

Write-Host is the easiest and probably most controversial of the set (because its messages are outside the pipeline), but there’s a place for each of these.
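
A minimal sketch of how several of these behave inside an advanced function (the function and what it does are made up for illustration):

function Copy-Widget {
    [CmdletBinding()]
    param([string]$Name)

    Write-Verbose "Starting copy of $Name"           # only shown with -Verbose
    Write-Debug   "Name parameter resolved to $Name" # only shown with -Debug
    Write-Progress -Activity 'Copying widgets' -Status $Name -PercentComplete 50

    if (-not $Name) {
        Write-Error 'No widget name supplied.'       # non-terminating error
        return
    }

    Write-Warning 'This function is a demo; nothing is actually copied.'
    Write-Output  $Name                                      # goes down the pipeline
    Write-Host    "Done with $Name" -ForegroundColor Green   # console only, outside the pipeline
}

Copy-Widget -Name 'Widget1' -Verbose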


Automatic Plan Correction

Jovan Popovic introduces a new tuning feature in SQL Server 2017:

How to identify plans that should be corrected?

SQL Server 2017 provides a new system view called sys.dm_db_tuning_recommendations that shows all identified plan regressions. You can select data from this view, find the difference between the last known good plan and the regressed plan, and get the script that you can execute to manually fix the regression.

Automatic plan correction

As a next step, you can let SQL Server 2017 automatically correct any plan that has regressed.

I like it when the database engine gets smarter, but I get the feeling I’d like there to be some finer-grained options around what gets considered a regression and when a sub-optimal plan gets swapped out.
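
Both halves of that boil down to a little T-SQL. A rough sketch (server and database names are placeholders), again using Invoke-Sqlcmd:

$query = @"
-- Review what the engine has flagged as plan regressions
SELECT reason, score, details
FROM sys.dm_db_tuning_recommendations;

-- Opt in to automatic correction, forcing the last known good plan on regression
ALTER DATABASE SalesDB SET AUTOMATIC_TUNING (FORCE_LAST_GOOD_PLAN = ON);
"@
Invoke-Sqlcmd -ServerInstance 'MyServer' -Database 'SalesDB' -Query $query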


Ship It

James Anderson compiled the latest T-SQL Tuesday results and shipped it:

I’d like to say a huge thank you to everyone who read or published a post for T-SQL Tuesday #90. I had a great time reading through all the posts and I learnt a lot!

I feel that the real takeaway here is that Continuous Integration and DevOps are not just about putting the right tools in place; it’s all about putting the right working practices in place.

Read on for the wrap-up.


R And Python Support In VS 2017

David Smith announces that Visual Studio 2017 now supports R Tools for Visual Studio and Python Tools for Visual Studio:

The new Visual Studio 2017 has built-in support for programming in R and Python. For older versions of Visual Studio, support for these languages has been available via the RTVS and PTVS add-ins, but the new Data Science Workloads in Visual Studio 2017 make them available without a separate add-in. Just choose the “Data Science and analytical applications” option during installation to install everything you need, including Microsoft R Client and the Anaconda Python distribution.

I’m personally going to wait a little bit before jumping onto Visual Studio 2017, but I’m glad that RTVS is now available.


What’s New In Hadoop 3.0?

Shubham Sinha explains some of the changes coming to Hadoop:

Integrating EC with HDFS can maintain the same fault-tolerance with improved storage efficiency. As an example, a 3x replicated file with 6 blocks will consume 6*3 = 18 blocks of disk space. But with EC (6 data, 3 parity) deployment, it will only consume 9 blocks (6 data blocks + 3 parity blocks) of disk space. This keeps the storage overhead to only 50%.

Since erasure coding requires additional overhead to reconstruct the data (because of remote reads), it is generally used for storing less frequently accessed data. Before deploying erasure coding, users should consider all of its overheads: storage, network, and CPU.

Now, to support erasure coding effectively in HDFS, they made some changes to the architecture. Let us take a look at the architectural changes.

There are some nice features coming to Hadoop version 3.
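
If you want to try erasure coding once you are on a 3.0 build, it is exposed through the hdfs ec sub-command. Roughly like this (the path is made up, and policy names can vary by release, so treat it as a sketch):

# List the erasure coding policies the cluster knows about
hdfs ec -listPolicies

# Enable the Reed-Solomon 6+3 policy and apply it to a directory of cold data
hdfs ec -enablePolicy -policy RS-6-3-1024k
hdfs ec -setPolicy -path /data/archive -policy RS-6-3-1024k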


Optimizing Kafka

Yeva Byzek explains different tuning options available within Apache Kafka:

Without needing to make any changes to Kafka configuration parameters, you can set up a development Kafka environment and test basic functionality. Yet the fact that Kafka runs straight off the shelf does not mean you won’t want to do some tuning before you go into production. The reason to tune is that different use cases will have different sets of requirements that will drive different service goals. To optimize for those service goals, there are Kafka configuration parameters that you should change. In fact, the Kafka design itself provides configuration flexibility to users, and to make sure your Kafka deployment is optimized for your service goals, you absolutely should investigate tuning the settings of some configuration parameters and benchmarking in your own environment. Ideally, you should do that before you go to production, or at least before you scale out to a larger cluster size.

We have written a white paper to help you identify those service goals, configure your Kafka deployment to optimize for them, and ensure that you are achieving them through monitoring.

Read the whole thing, especially the part about throughput versus latency.
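
As a taste of the kind of knobs the paper walks through, a producer tuned for throughput over latency tends to touch settings like these (the values are illustrative, not recommendations):

# Batch more records per request and wait briefly for batches to fill
batch.size=100000
linger.ms=20

# Compress batches to trade a little producer CPU for network and disk throughput
compression.type=lz4

# Leader-only acknowledgement favors latency and throughput; acks=all favors durability
acks=1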
