Day: August 16, 2024

Creating a Role for Procedure Execution

Andy Brownsword brings out the hangman:

We have database roles for reading and writing data but interestingly there’s no role which provides permission to execute procedures.

Most DBAs I’ve worked with – production or development – prefer to use stored procedures for data access rather than an ORM. A role to allow procedure execution would be very handy.

So let’s fix that.

The nice thing about this role is that you can then introduce module signing and allow the stored procedures to do things that you might not want regular users to do, such as truncate tables, start and stop SQL Agent jobs, etc. That way, users don’t have these advanced permissions, but the application (whose account has the stored procedure execution role) can do what it needs to do. You can, of course, also have other roles, like one for the kinds of things I mentioned and another to execute most stored procedures. Maintaining that gets a little trickier, but is doable.
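The core of such a role takes only a few lines of T-SQL. This is a minimal sketch — the role name `db_executor`, the `dbo` schema scope, and the member `AppUser` are illustrative choices, not anything from Andy's post:

```sql
-- Illustrative sketch: a role that can execute any procedure in the dbo schema.
-- Names (db_executor, AppUser) are placeholders.
CREATE ROLE db_executor;

-- Schema-scoped GRANT covers existing and future procedures in dbo.
GRANT EXECUTE ON SCHEMA::dbo TO db_executor;

-- Add the application account to the role.
ALTER ROLE db_executor ADD MEMBER [AppUser];
```

Granting at the schema level, rather than procedure by procedure, means newly created procedures in that schema are covered automatically, which keeps maintenance low.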

Comments closed

Choosing Compute Options for Databricks

Matt Collins shares some recommendations:

You can use Databricks for a vast range of applications these days. From handling streaming datasets and running deep learning models to populating data model fact tables with complex transformations, choosing the correct compute option can seem a lot like a stab in the dark followed by (potentially expensive) trial and error.

You can choose from an incredible range of configurations in the Databricks Compute User Interface.

This variety comprises the Virtual Machine (VM) category, size, availability, and access mode (also referred to as family).

Determining the right compute choice for you could be answered by the classic answer "it depends", but this guide aims to inform the decision-making process — both in terms of cost and performance.

Read on for more information.

Comments closed

Creating a SQL Server Data Tools Project Template from a Database Project

Olivier Van Steenlandt builds a template:

Since I was introducing Unit Testing for databases at work, and we have multiple Database Projects, I wanted to do the initial work once and reuse it in my different Database Unit Testing Projects.

To create an SSDT template based on an existing Database Project, you first open the Database Project in Visual Studio. As soon as your Database Project is loaded, navigate to the Project section in the menu bar at the top left. Choose Export Template to continue.

Click through to see how you can put together a database project template that you could re-use in future projects.

Comments closed

Azure Data Explorer Web UI Updates for July 2024

Michal Bar has an update for us:

Welcome to the July 2024 update. We are excited to announce new features and improvements in ADX web UI.

Continue reading to learn more about:

  1. Copy query with syntax coloring and KQL IntelliSense improvements
  2. Ad-hoc visual data exploration
  3. Dashboards real time refresh rate
  4. Enhanced data interaction for dashboard tiles
  5. Resize and move dashboard tiles using keyboard only

Click through to continue reading.

Comments closed

Changing the Data Type of a PostgreSQL Column Used in a View

Andrea Gnemmi makes a change:

We have all encountered the need to change a column data type in a table. Unfortunately, in PostgreSQL, you cannot change it in a single step if the column is used in a view. In this article, we cover the steps to change the data type for a column in a PostgreSQL table.

Read on for those steps. This is a bit more complex than SQL Server, at least for non-materialized views. In that case, the view is little more than a saved T-SQL query, so you would want to run sp_refreshview on [view_schema].[view_name] to ensure that the view has the latest metadata and doesn’t fail when you run the query.
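The PostgreSQL workaround generally comes down to dropping the dependent view, altering the column, and recreating the view, ideally in one transaction. A hedged sketch, using hypothetical `orders` / `v_orders` names rather than anything from the article:

```sql
-- Sketch only: table and view names are hypothetical.
BEGIN;

-- The view must go first; ALTER TABLE fails while it depends on the column.
DROP VIEW v_orders;

-- Change the column's data type.
ALTER TABLE orders
    ALTER COLUMN amount TYPE numeric(12, 2);

-- Recreate the view with its original definition.
CREATE VIEW v_orders AS
    SELECT order_id, amount
    FROM orders;

COMMIT;
```

Wrapping the steps in a transaction keeps other sessions from seeing the window where the view is missing.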

Comments closed

Microsoft Fabric Capacity Limits

Teo Lachev builds a pair of tables:

Here is a table that is getting more and more difficult to find, as searching for Fabric capacity limits returns results about CU compute units (for the most part meaningless in my opinion). I embed it in a searchable format below before it vanishes from the Internet. The most important column for semantic modeling is the max memory, which denotes the upper limit of memory Fabric will grant a semantic model.

Click through for that table, followed by a table comparing Fabric SKUs to Power BI SKUs.

Comments closed