Press "Enter" to skip to content

Day: November 4, 2022

Removing Backgrounds from Images

Brendan Tierney focuses on the subject at hand:

There are a number of methods available for preparing images for input to a variety of purposes: for example, for input to deep learning, other image processing models/applications/systems, etc. But sometimes you just need a quick tool to perform a certain task. For example, I regularly have to edit images to extract just a certain part, or to filter out all the background colors and/or objects. There are a variety of tools available to help you with this kind of task. For me, I’m a Mac user, so I use the instant alpha feature available in some of the Mac products. But what if you are not a Mac user? What can you use?

I’ve recently come across a very useful Python library that takes all or most of the hard work out of such tasks, and it has proved to be extremely useful for some demos and projects I’ve been working on. The Python library I’m using is rembg (Remove Background). It isn’t perfect, but it does a pretty good job, and only for a small number of images did I need to do some additional processing.

Click through to see how the tool works, as well as some cases it doesn’t quite get correct.
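To give a sense of what that looks like in practice, here is a minimal sketch of typical rembg usage (the file names are mine, and it assumes rembg and Pillow are installed):

```python
# pip install rembg pillow
from rembg import remove
from PIL import Image

# Hypothetical file names, for illustration only.
input_image = Image.open("portrait.jpg")

# remove() returns the image with the background made transparent.
output_image = remove(input_image)

# Save as PNG so the alpha (transparency) channel is preserved.
output_image.save("portrait_no_background.png")
```

The library downloads its segmentation model on first use, so the first run takes a little longer than subsequent ones.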


LATERAL and APPLY

Lukas Eder shows off one of my favorite operators:

The SQL:1999 standard specifies the <lateral derived table>, which is SQL’s way of allowing a derived table (a subquery in the FROM clause) to access all the lexically preceding objects in the FROM clause. It’s a bit weird in terms of syntax; I personally think that Microsoft SQL Server has a much nicer solution for this concept via APPLY. Oracle supports both syntaxes (standard and T-SQL’s). Db2, Firebird, MySQL, and PostgreSQL only have LATERAL.

Click through to see how the operator works.
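To give a rough flavor of the concept (the tables and columns below are invented for illustration, not taken from Lukas’s post), both forms let the derived table reference the row from the table to its left, which a plain derived table cannot do:

```sql
-- Standard / PostgreSQL-style LATERAL: the subquery can reference c.
SELECT c.customer_id, recent.order_id, recent.amount
FROM customer AS c
CROSS JOIN LATERAL (
    SELECT o.order_id, o.amount
    FROM orders AS o
    WHERE o.customer_id = c.customer_id   -- correlation to the outer row
    ORDER BY o.order_date DESC
    FETCH FIRST 3 ROWS ONLY
) AS recent;

-- T-SQL equivalent using APPLY.
SELECT c.customer_id, recent.order_id, recent.amount
FROM customer AS c
CROSS APPLY (
    SELECT TOP (3) o.order_id, o.amount
    FROM orders AS o
    WHERE o.customer_id = c.customer_id   -- correlation to the outer row
    ORDER BY o.order_date DESC
) AS recent;
```

Both return the three most recent orders per customer; OUTER APPLY and LEFT JOIN LATERAL play the analogous role when you also want customers with no orders.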


Non-Parallel Plans from Computed Columns with Scalar Functions

Etienne Lopes tells a tale:

I must say that on principle I’m not a big fan of either computed columns or scalar UDFs. I mean, I find them attractive in the way they (appear to) make “things simpler,” also allowing code reuse, improving query readability, etc. Yes, but they also hide or mask the complexity behind their use, which can often be quite deceiving, making it much harder to troubleshoot and solve performance problems. Furthermore, they have several limitations by design that can hurt performance, and all this combined can sometimes make a “simple” query take many minutes or hours to run instead of just a few seconds! When you see this situation happen again and again while fine-tuning databases, their use becomes much less appealing.

Having said this, sometimes they can of course be useful, but it’s very important to choose carefully where, how, and when to use computed columns and scalar UDFs, so that performance won’t get hurt and their benefits outweigh the drawbacks.

Click through for an example of where the combo really falls short. I do like computed columns, though never with user-defined functions.
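To make the combination concrete, here is a hypothetical sketch of the pattern in question (the object names are invented, not taken from Etienne’s post): a scalar UDF bound to a computed column. Before SQL Server 2019’s scalar UDF inlining, the presence of the UDF can force queries against the table into a serial plan, even when they never reference the computed column.

```sql
-- Hypothetical scalar UDF: runs row by row and inhibits parallelism.
CREATE FUNCTION dbo.CustomerAgeBand (@BirthDate date)
RETURNS varchar(10)
AS
BEGIN
    RETURN CASE
               WHEN DATEDIFF(YEAR, @BirthDate, GETDATE()) < 30 THEN 'Under 30'
               WHEN DATEDIFF(YEAR, @BirthDate, GETDATE()) < 60 THEN '30-59'
               ELSE '60+'
           END;
END;
GO

-- Binding the UDF to a computed column ties that cost to the table itself.
CREATE TABLE dbo.Customer
(
    CustomerID int IDENTITY PRIMARY KEY,
    BirthDate  date NOT NULL,
    AgeBand    AS dbo.CustomerAgeBand(BirthDate)
);
```

The readability win is real, but so is the serial-plan tax, which is exactly the trade-off described above.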


The Importance of the Power BI Service

Reza Rad explains why the Power BI Service is useful:

The Power BI toolset comes in many shapes and forms. There are Power BI Desktop, the Power BI Mobile app, Power BI Report Server, and the Power BI Service (and some other applications and components too). The questions I hear from new users of Power BI are: Do I need to have an account for Power BI? Do I need to use the Power BI website for creating visualizations, etc.? What is the Power BI website or service, and what is its usage? If I can do the reporting using Power BI Desktop for free, then why would I need the service? In this article and video, I will answer all of that.

Click through for a video or for the article explaining the purpose behind the Power BI Service. Having done work with places using Power BI Report Server and places using the Power BI Service, I will say that the latter takes more work to get corporate-compliant but offers a whole lot more.


Incorporating Power BI with Azure Synapse Analytics

Ginger Grant counts the ways:

The first is to connect Power BI to Azure Synapse to explore and visualize data. You can examine the datasets you have loaded into your data lake with Power BI to help with the analysis of the data, either for a data science solution or to determine how you are going to transform the data. For more information on how to do this, check out my previous blog post.

Click through for three additional methods.


MAXDOP Calculation Discrepancy

Brent Ozar does the math:

In this case, the SQL Server has multiple NUMA nodes, with greater than 16 logical processors per node – that’s the last line of the screenshot. In that line, Microsoft says MAXDOP should be half of the number of logical processors with a max of 16 – so 16.

But it’s recommending 8. Hmm.
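Taking the quoted guideline at face value, the arithmetic looks like this (the core counts below are made up for illustration; Brent’s actual numbers are in the screenshot in his post):

```sql
-- Hypothetical server: 2 NUMA nodes, 32 logical processors each (more than 16 per node).
DECLARE @LogicalProcessorsPerNode int = 32;

SELECT CASE
           WHEN @LogicalProcessorsPerNode / 2 > 16 THEN 16
           ELSE @LogicalProcessorsPerNode / 2
       END AS guideline_maxdop;   -- 16 by this reading, while the recommendation Brent saw was 8
```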

Read on for the answer.


STONITH Resources for Pacemaker Clusters

Andrew Pruski picks up Chekhov’s Gun:

Recently I had to create another pacemaker cluster, this time on-premises using VMware virtual machines. The steps to create the pacemaker cluster and deploy an availability group were pretty much the same as in my original post (minus any Azure malarkey), but one step was different: creating the STONITH resource.

A STONITH resource is needed in a pacemaker cluster as this is what prevents the dreaded split-brain scenario: two nodes thinking that they’re the primary node. If the resource detects a failed node in the cluster, it’ll restart that node, hopefully allowing it to come up in the correct state.

Read on to see how Andrew did it.
