
Curated SQL Posts

Appending Date to File Name in SSIS

Jason Jenkins needs a dynamic filename:

In a recent project we needed to create an SSIS package to export data from a SQL Server table to a CSV file. One of the requirements was to dynamically append the date to the file name each time a file was generated. This post will cover how to create a dynamic file name that includes the date in YYYYMMDD format.

Click through to see how. Hint: it’s expressions. It’s always expressions with SSIS.
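To give a flavor of the expression-based approach (a sketch only, not Jason's exact code; the folder path and file name prefix here are made up), the ConnectionString property of a flat file connection manager can be driven by an expression like this, where the RIGHT("0" + …, 2) pattern zero-pads the month and day:

```
"C:\\Exports\\SalesExport_"
    + (DT_WSTR, 4) YEAR(GETDATE())
    + RIGHT("0" + (DT_WSTR, 2) MONTH(GETDATE()), 2)
    + RIGHT("0" + (DT_WSTR, 2) DAY(GETDATE()), 2)
    + ".csv"
```

Note that backslashes in SSIS expression string literals must be escaped, and the integer outputs of YEAR(), MONTH(), and DAY() need an explicit cast to a string type such as DT_WSTR before concatenation.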


Extending Page Compression Checks

Eitan Blumin builds on a prior Paul Randal post:

So, here is my own expansion to Paul’s script, which adds the following:

* Outputs a remediation command to REBUILD each relevant index with ROW compression.

* The script has parameters for:

– Minimum number of compression attempts

– Maximum success rate percentage

– Index rebuild options

* The script outputs additional details such as:

– Whether the index/table in question is partitioned or not.

– Range-scan percentage

– Leaf-level updates percentage

* It runs the check for ALL accessible and writeable databases.

Click through for the script.
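As a rough illustration of the kind of check involved (a simplified, single-database sketch with illustrative thresholds, not Eitan's script, which also handles partition detail, scan percentages, and iterating all writable databases):

```sql
-- Find page-compressed index partitions where compression attempts rarely
-- succeed, and emit a command to rebuild them with ROW compression instead.
DECLARE @MinAttempts int = 100,       -- minimum compression attempts to consider
        @MaxSuccessRatePct int = 20;  -- below this success rate, suggest ROW

SELECT
    QUOTENAME(OBJECT_SCHEMA_NAME(ios.object_id)) + N'.'
        + QUOTENAME(OBJECT_NAME(ios.object_id))        AS table_name,
    i.name                                             AS index_name,
    ios.partition_number,
    ios.page_compression_attempt_count,
    ios.page_compression_success_count,
    N'ALTER INDEX ' + QUOTENAME(i.name) + N' ON '
        + QUOTENAME(OBJECT_SCHEMA_NAME(ios.object_id)) + N'.'
        + QUOTENAME(OBJECT_NAME(ios.object_id))
        + N' REBUILD PARTITION = '
        + CAST(ios.partition_number AS nvarchar(10))
        + N' WITH (DATA_COMPRESSION = ROW);'           AS remediation_command
FROM sys.dm_db_index_operational_stats(DB_ID(), NULL, NULL, NULL) AS ios
INNER JOIN sys.indexes AS i
    ON i.object_id = ios.object_id
   AND i.index_id = ios.index_id
WHERE ios.page_compression_attempt_count >= @MinAttempts
  AND ios.page_compression_success_count * 100.0
        / ios.page_compression_attempt_count < @MaxSuccessRatePct;
```

The attempt and success counters come from sys.dm_db_index_operational_stats, the same DMV Paul's original script queries, and they reset when the database goes offline, so run this on an instance that has been up long enough to accumulate representative numbers.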


Setting up a Power BI Workspace

Marc Lelijveld has a new two-parter:

Time for a more governance-related blog. This will be a series of two posts: in this first part, I will elaborate on the overall setup of your Power BI workspace and naming. The second part will continue with workspace permissions, sharing, and ownership. I feel this is a wide topic and therefore deserves a separate blog.

In this part, the focus is on the overall setup of workspaces. In my work at various clients, I regularly encounter situations where there is a lot of confusion around workspaces: the scope of a workspace, its audience, and naming. This is a blog not only for Power BI tenant administrators, but also for passionate Power BI content creators, to better understand each other's standpoints.

Click through for part one and stay tuned for part two.


Simple Mapping Data Flows in Synapse

Joshuha Owen announces a new feature:

This week, we are excited to announce the public preview for Map Data, a new feature for Azure Synapse Analytics and Database Templates! The Map Data tool is a guided process to help users create ETL mappings and mapping data flows from their source data to Synapse lake database tables without writing code. This experience will help you get started with transformations into your Synapse Lake database quickly but still give you the power of Mapping Data Flows.

This process starts with the user choosing the destination tables in Synapse lake databases and then mapping their source data into these tables. We will be following up with a demo video shortly.

Click through for more details on how it works.


MLOps on Databricks

Piotr Majer and Michael Shtelma complete a series on MLOps on Databricks:

This is the second part of a two-part series of blog posts showing an end-to-end MLOps framework on Databricks, based on notebooks. In the first post, we presented a complete CI/CD framework on Databricks with notebooks. The approach is based on the Azure DevOps ecosystem for the Continuous Integration (CI) part and the Repos API for the Continuous Delivery (CD) part. This post extends the presented CI/CD framework with machine learning, providing a complete MLOps solution.

Check it out.


Azure ML Deployments and Endpoints

I continue a series on low-code machine learning with Azure ML:

The first thing we need to do is create an inference pipeline. Inference pipelines differ from training pipelines in that they won’t use the training dataset, but they will accept user input and provide a scored response. There are two types of inference pipeline: real-time and batch. Real-time inference pipelines are intended for small-set work. We’ll host a service on some compute resource in Azure and people will make REST API calls to our service, sending in a request with a few items to score, and we’ll send back classification results.

By contrast, a batch pipeline is what you’d use if you have a nightly job with tens of millions of items to score. In that case, the typical pattern is to have a service listening for changes in a storage account and, some time after people drop new files into the proper folder, the batch inference process will pick up these files, score the results, and write those results out to a destination location.

This post is all about real-time inference pipelines. The next post will be all about batch pipelines.


New Azure Synapse Database Templates

Kevin Schofield has some new database templates for us:

The Synapse Database Template for Automotive Industries is a comprehensive data model that addresses the typical data requirements of organizations engaged in manufacturing automobiles, heavy vehicles, tires, and other automotive components.

The Synapse Database Template for Genomics is a comprehensive data model that addresses the typical data requirements of organizations engaged in acquiring and analyzing genomic data about human beings or other species.

Click through for more information on these, as well as two other fields.


Logins and Users in SQL Server

Lee Markum disambiguates two security terms:

You’re a data professional learning about managing SQL Server and you’ve been asked to grant permissions for SQL Server to an individual or a group of individuals. What do you need to understand in order to accomplish this? I’ll be your guide to getting started with handling access to SQL Server.

Click through for the mandatory automobile analogy and a good way of laying out what logins and users are.
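The core distinction comes down to scope: a login is a server-level principal that gets you through the instance's front door, while a user is a database-level principal mapped to that login. A minimal sketch (the login, database, and user names here are all made up):

```sql
-- Server level: the login authenticates to the SQL Server instance.
CREATE LOGIN [AppLogin] WITH PASSWORD = N'S0me$trongPassword!';
GO

-- Database level: the user maps the login into a specific database
-- and is what actually receives permissions there.
USE SalesDb;
GO
CREATE USER [AppUser] FOR LOGIN [AppLogin];
ALTER ROLE db_datareader ADD MEMBER [AppUser];
```

A login with no corresponding user can connect to the instance but can't get into the database; a user whose login has been dropped is orphaned.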


Cannot Open User Default Database

David Alcock backs out of a problem:

This error isn’t to do with my login as such, it’s still there with sysadmin role membership so I don’t have to do anything too drastic like restarting SQL Server with the -m or -f startup parameters and recreate it. The error message is telling me that my logins default database cannot be opened, which is more than likely because I’ve deleted it.

Click through to see how David got out of this issue. This is a big part of why I highly prefer not to change the default database from master for logins.
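The general shape of the escape hatch (a sketch, with a hypothetical login name): connect while explicitly naming a database that still exists, then repoint the login's default database.

```sql
-- First, connect with an explicit database so the broken default is bypassed,
-- e.g. from the command line:
--   sqlcmd -S MyServer -d master
-- Then repoint the login's default database at one that exists:
ALTER LOGIN [SomeLogin] WITH DEFAULT_DATABASE = [master];
```

Leaving logins defaulted to master avoids this failure mode entirely, since master is always available.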
