Before you build a dashboard, you have to know your audience. If you don’t know who your viewers are and where their interests lie, you risk building a dashboard that fails to serve their needs. When that happens, people stop looking at it. To increase the likelihood that your dashboard will be useful, I have a few critical questions:
- Who is your intended audience?
- How will your intended audience use your dashboard?
- What actions do you want them to take as a result of what they see?
- Are you showing the right measures in the right way?
- What cultural differences might matter?
The rest of this post will drill into each of these concepts.
These are the types of questions that can make the difference between a dashboard people love and a dashboard people never use.
Using a Raspberry Pi 3 with the Google AIY Voice Kit is an easy way to control data flows in an organization.
The AIY Voice Kit is a good way to prototype voice ingestion. It isn’t perfect, but for low-end, inexpensive hardware it works well for those who speak clearly and are willing to repeat a phrase a few times for activation.
I was able to easily add a word for it to react to. I also have it waiting for someone to press the button to activate it.
It seems like a fun DIY project.
Wayne Sheffield continues his series on SSMS tips and has four new posts for us.
Have you ever had a long script that you are trying to scroll through? Do you wish that you could see a preview of the scroll area to easily see what section you are in? Well, starting in SSMS 2016, you can. Just right-click the vertical scroll bar, and select “Scroll Bar Options…”
Of the four, map mode is my favorite. It’s extremely helpful when going through large files.
One of the new features announced with SQL Server 2017 is support for the Python language. This is big! In SQL Server 2016, Microsoft announced support for the R language – an open source language ideally suited for statistical analysis and machine learning (ML). Recognizing that many data scientists use Python with ML libraries, this easy-to-learn, hard-to-forget language has now been added to the SQL Server ML suite.
There’s a big difference between R and Python though: R is a domain-specific language while Python is general purpose. That means that the full power of Python is available within SQL Server. This article leaves ML aside for the moment and explores a few of the other possibilities.
Gerald has two good cases for using Python with SQL Server. Funny enough, they’re both also easily supported in R, so you could do this in 2016 as well.
Security was controlled by granting EXECUTE permissions only to appropriate stored procedures and by explicitly not granting permission to any tables or views within the database.
One of the procedures was getting a bad query plan and timing out. This is when I was called in. The procedure performed a search using any combination of up to 10 search parameters, and the code was filled with AND/OR combinations to account for every combination the users might supply. I found this procedure to be a prime candidate for dynamic SQL: I would build the SELECT statement so that the WHERE clause included only the search parameters the user actually entered.
Immediately upon testing, users started receiving SELECT permission denied errors. It turns out that when you switch to dynamic SQL and run your statement through sp_executesql, ownership chaining no longer applies, so the procedure’s permissions are lost. Our options were to grant explicit SELECT permission on the objects or to refactor the code so that it no longer uses dynamic SQL.
The best solution here would probably be to sign the procedure with a certificate and grant the certificate-mapped user SELECT rights on the tables referenced in the dynamic SQL.
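The dynamic-search pattern described above – assembling a WHERE clause from only the parameters the user supplied, with values bound as placeholders rather than concatenated in – can be sketched roughly like this. This is a Python sketch with hypothetical table and column names; the original procedure would do the equivalent in T-SQL and execute it via sp_executesql:

```python
def build_search_query(filters):
    """Build a parameterized SELECT containing only the supplied filters.

    `filters` maps column names to values; entries whose value is None are
    skipped, mirroring the optional search parameters in the procedure.
    Column names are checked against a whitelist so user input never
    becomes part of the SQL text itself.
    """
    allowed = {"last_name", "city", "status"}  # hypothetical searchable columns
    clauses, params = [], []
    for column, value in filters.items():
        if value is None:
            continue  # parameter not supplied; leave it out of the WHERE clause
        if column not in allowed:
            raise ValueError(f"unknown search column: {column}")
        clauses.append(f"{column} = ?")  # placeholder, bound at execution time
        params.append(value)
    where = " WHERE " + " AND ".join(clauses) if clauses else ""
    return f"SELECT * FROM dbo.Customers{where}", params

sql, params = build_search_query(
    {"last_name": "Smith", "city": None, "status": "Active"}
)
# sql    -> "SELECT * FROM dbo.Customers WHERE last_name = ? AND status = ?"
# params -> ["Smith", "Active"]
```

The point of the placeholders is that the statement text varies only with *which* filters are present, not with their values, so each shape of the query can get its own well-fitting, reusable plan.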
During the December 2017 delivery of Expert SSIS, I was asked if there is an SSIS Catalog Logging Mode that will display the value of variables. I responded that I wasn’t aware of a logging level that accomplishes this, but then – as sometimes happens – I could not let it go. It is an excellent question. So I dug in…
I second his notion that you should learn how to use script tasks and script components. They’re not very difficult to pick up and even a basic knowledge will benefit you greatly.
When I was putting together the lesson plans for this, I wanted to make my own query for the comparisons, not borrow one from another site or blog. Yes, I borrow plenty, but I wanted this to be mine. When I was presenting my “code tuning” class, I had recently upgraded my instance from 2012 to 2017 and had also put my database into 2017 compatibility mode. I had used this query to show that intensive unions can cause issues with tempdb and spill over into it. To my “joy,” when I ran the query in the class, I did not get the tempdb spillover. Right then I realized that I was not in Kansas (2012 compatibility mode) any longer. But this proved opportune for the statistics/optimizer comparison.
Read on for a discussion of the cardinality estimator as well.
How to use
It takes two arguments: the text/string and the delimiter. For our example you enter a space, but you can use any character there for other use cases as well.
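The behavior described – splitting a string on a caller-chosen delimiter – is the same idea as Python’s built-in `str.split`, shown here purely as an illustration; the function in the linked post takes its text and delimiter arguments in the same way:

```python
def split_text(text, delimiter):
    """Split `text` on `delimiter` - the two arguments described above."""
    return text.split(delimiter)

# A space as the delimiter, as in the example above:
split_text("one two three", " ")   # -> ["one", "two", "three"]
# Any other character works the same way:
split_text("2017-12-31", "-")      # -> ["2017", "12", "31"]
```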
Click through for a quick demo.