We can try it out with a very familiar domain:
(converted <- to_homoglyph("google.com")) ##  "ƍ၀໐|.com"
Now, that’s using all possible homoglyphs, and it might not look like
google.com to you, but imagine whittling the list down to homoglyphs that closely match Latin characters. Or imagine you’re in a hurry and see that version of Google’s URL with a shiny, green lock icon from Let’s Encrypt. You might not give it a second thought if the page looked fine (or if you were on a mobile browser that doesn’t show the location bar).
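The idea can be sketched in Python with a small, hand-picked substitution table (the names and the table here are illustrative, not the linked tool's actual implementation, which draws from a much larger confusables list):

```python
# Hypothetical sketch of homoglyph substitution: swap Latin letters
# for visually similar Unicode characters from other scripts.
HOMOGLYPHS = {
    "a": "\u0430",  # Cyrillic small a
    "e": "\u0435",  # Cyrillic small ie
    "o": "\u043e",  # Cyrillic small o
    "c": "\u0441",  # Cyrillic small es
    "p": "\u0440",  # Cyrillic small er
}

def to_homoglyph(domain: str) -> str:
    """Replace each character with a look-alike where one is known."""
    return "".join(HOMOGLYPHS.get(ch, ch) for ch in domain)

spoofed = to_homoglyph("google.com")
print(spoofed)                  # renders almost identically to google.com
print(spoofed == "google.com")  # but it is a different string: False
```

The rendered result is visually close to the original, yet resolves to a completely different domain name.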
Click through for more details, as well as information on punycode.
An easier way to do it is to use the normal approximation, courtesy of the central limit theorem. My post on the theorem illustrates that a sample mean will follow a normal distribution if the sample size is large enough. We will use that, along with the rules for determining probabilities in a normal distribution, to arrive at the probability in this case.
Problem: I have a group of 100 friends who are smokers. The probability of a random smoker having lung disease is 0.3. What are the chances that at most 35 of them wind up with lung disease?
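As a quick sketch of the arithmetic (this is the standard normal approximation with a continuity correction, not necessarily the exact steps from the linked post):

```python
import math

# Normal approximation to the binomial: n = 100 smokers,
# p = 0.3 probability of lung disease per smoker.
n, p = 100, 0.3
mu = n * p                           # mean = 30
sigma = math.sqrt(n * p * (1 - p))   # standard deviation ~ 4.58

def phi(z):
    """Standard normal CDF, via the error function."""
    return 0.5 * (1 + math.erf(z / math.sqrt(2)))

# P(X <= 35), applying a continuity correction of 0.5
z = (35 + 0.5 - mu) / sigma
prob = phi(z)
print(f"P(at most 35 with lung disease) ~ {prob:.4f}")  # roughly 0.88
```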
Click through for the example.
There are a few tools and add-ins that support the creation of “regions” in T-SQL code. SQL Server Management Studio (SSMS) supports one way to separate sections of long-ish T-SQL scripts natively, by using begin/end:
It’s an interesting SSMS trick.
A probe residual is important because it can indicate key performance problems that might not otherwise be brought to your attention.
What is a probe residual?
Simply put, a probe residual is an extra operation that must be performed to complete the matching process: the work left over after the initial match.
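By way of analogy (a sketch of the general idea, not SQL Server's actual implementation), a hash join matches rows on a hash of the key, and each candidate match must then be rechecked with the full predicate, because a hash match alone does not prove equality. That recheck is the residual:

```python
from collections import defaultdict

# Hypothetical tables: join on key, where the probe side stores the
# key as a string (think implicit conversion).
build_rows = [(1, "alpha"), (2, "beta"), (3, "gamma")]
probe_rows = [("2", "x"), ("3", "y"), ("9", "z")]

# Build phase: hash each build row by its key.
buckets = defaultdict(list)
for key, payload in build_rows:
    buckets[hash(key)].append((key, payload))

# Probe phase: hash the (converted) probe key to find candidates...
matches = []
for raw_key, payload in probe_rows:
    converted = int(raw_key)  # conversion happens per probe row
    for b_key, b_payload in buckets[hash(converted)]:
        # ...then the residual: recheck the real predicate.
        if b_key == converted:
            matches.append((b_key, b_payload, payload))

print(matches)  # [(2, 'beta', 'x'), (3, 'gamma', 'y')]
```

The per-row conversion and recheck are the "extra" work the snippet above describes, and they add up on large inputs.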
Click through for an example brought about by implicit conversion.
Obviously, it’s an important subject, right? And yet we keep seeing comments about how the cost is in seconds.
And to be fair, it is. It’s an estimate of how many seconds a query would take if it were running on a developer’s workstation from back in the ’90s. At least, that’s the story. In fact, Dave Dustin (t) posted this interesting story today:
The best way to think of cost is as a probabilistic, ordinal, unitless value: 3 might be greater than 2; 1000 is almost certainly greater than 2; and “2 what?” is undefined.
Have you ever tried to sort a table on a text field? The result usually comes as a surprise. The M language has a specific implementation of its sort engine for text in which upper-case letters always order before lower-case letters, meaning that Z always comes before a. In the example below, Fishing Rod is sorted before Fishing net.
The classic trick to escape this weird behavior is to create a new column containing the upper-case version of the text, then configure the sort operation on that newly created column. This is a two-step approach (three steps, if you count removing the new column afterward). There’s nothing wrong with this except that it obfuscates the code, and I hate that.
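The same ordinal behavior shows up in any language that compares characters by code point; here's a quick Python illustration of both the surprise and a key-based fix (the M-specific solution is in the linked post):

```python
items = ["Fishing Rod", "Fishing net"]

# Default sort compares code points, so every upper-case letter
# (A-Z, 65-90) sorts before every lower-case letter (a-z, 97-122):
print(sorted(items))                    # ['Fishing Rod', 'Fishing net']

# Case-insensitive sort: fold case in the sort key instead of
# materializing an extra upper-cased column.
print(sorted(items, key=str.casefold))  # ['Fishing net', 'Fishing Rod']
```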
Click through to learn a more elegant way of sorting.
I asked the SQL Server community to write about their experiences and opinions on the changing world we live in and how it impacts their daily jobs. The response was overwhelming: we had 30 participants in this month’s blog party!
Here’s an overview of everyone who participated. Take your time reading their stories, as they are insightful, interesting, or just plain fun to read.
There’s a lot of reading this month, with a well above-average turnout.
In this module you will learn how to use the Line Dot Chart Power BI Custom Visual. The Line Dot Chart gives you the ability to make a more engaging line chart that can actually be animated across time.
This seems more like a fun chart than a useful chart, but I could see it being visually engaging for demonstrating relatively low-frequency events.
But what happens when the VM is configured with fewer vCPUs than the core count of the physical CPU package and CPU Hot-Add is enabled? Will there be a performance impact? The answer is no. The VPD configured for the VM fits inside a NUMA node, so the CPU scheduler and the NUMA scheduler optimize memory operations. It’s all about memory locality. Let’s use an application workload test to determine the behavior of the VMkernel CPU scheduling.
For this test, I’ve installed DVD Store 3.0 and run some test loads on the MS-SQL server. To determine the baseline, I logged into the ESXi host via an SSH session and executed the command sched-stats -t numa-pnode, which shows the CPU and memory configuration of each NUMA node in the system. This screenshot shows that the system is running only the ESXi operating system; hardly any memory is consumed. TotalMem indicates the total amount of physical memory in the NUMA node in KB, and FreeMem indicates the amount of free physical memory in the NUMA node in KB.
Input Variables: These variables are also known as predictors or independent variables.
- Customer Demographics (Gender and Senior citizenship)
- Billing Information (Monthly and Annual charges, Payment method)
- Product Services (Multiple line, Online security, Streaming TV, Streaming Movies, and so on)
- Customer relationship variables (Tenure and Contract period)
Output Variables: These variables are also known as response or dependent variables. Since the output variable (the churn value) takes a binary form, “0” or “1”, this is a classification problem in supervised machine learning.
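As a sketch of that framing (with made-up customers and illustrative column names, not the actual dataset), the variables above amount to building a feature matrix X from the predictors and a binary target y from the churn column:

```python
# Hypothetical customers; column names are illustrative only.
customers = [
    {"gender": "Female", "senior": 1, "monthly_charges": 70.3,
     "contract": "Month-to-month", "churn": "Yes"},
    {"gender": "Male", "senior": 0, "monthly_charges": 29.9,
     "contract": "Two year", "churn": "No"},
]

contract_levels = ["Month-to-month", "One year", "Two year"]

def encode(row):
    """One-hot the categorical predictors, pass numerics through."""
    features = [
        1.0 if row["gender"] == "Female" else 0.0,  # demographic
        float(row["senior"]),                       # demographic
        row["monthly_charges"],                     # billing
    ]
    # contract period, one-hot encoded (customer relationship)
    features += [1.0 if row["contract"] == c else 0.0
                 for c in contract_levels]
    return features

X = [encode(r) for r in customers]
y = [1 if r["churn"] == "Yes" else 0 for r in customers]  # binary target
print(X[0], y)  # numeric features and labels, ready for a classifier
```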
One of the interesting things in this post was the use of missmap, which is part of the Amelia package.