Quick tip for a DST refresh date function in the Power BI Service. I’ll put the code up front and explain it below. I’ll also say a bit about how to use it at the end. The United States and other places, like Australia, have a pesky thing called Daylight Saving Time. This means that in US Central Time, the offset from Coordinated Universal Time (UTC) is sometimes -6 and at other times -5. While Power Query can convert time zones, it doesn’t handle DST. And my users like to see when reports were refreshed as a step in evaluating data quality. In 2019, US DST runs from March 10 through November 3 (2 AM local time), so the functions here need to be updated every year.
As promised, here’s the custom function.
Click through for the custom function and a nice explanation of how it works.
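The linked post implements this in Power Query (M). As a minimal sketch of the same idea in Python, the standard-library `zoneinfo` module carries the IANA time zone rules, so the DST cutover is handled automatically and nothing needs a yearly update (the function name here is invented for illustration):

```python
from datetime import datetime, timezone, timedelta
from zoneinfo import ZoneInfo  # Python 3.9+; uses the system IANA tz database

def refresh_date_central(utc_now: datetime) -> datetime:
    """Convert a UTC refresh timestamp to US Central Time, DST-aware."""
    return utc_now.astimezone(ZoneInfo("America/Chicago"))

# During US DST (March 10 - November 3, 2019) the offset is UTC-5;
# outside it, UTC-6. zoneinfo picks the right one per date.
summer = refresh_date_central(datetime(2019, 7, 1, 12, 0, tzinfo=timezone.utc))
winter = refresh_date_central(datetime(2019, 1, 15, 12, 0, tzinfo=timezone.utc))
assert summer.utcoffset() == timedelta(hours=-5)
assert winter.utcoffset() == timedelta(hours=-6)
```

Power Query has no equivalent built-in rules table, which is why the M function in the post hard-codes the year’s DST boundaries.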
Ok, here is the warning. This is not supported by Microsoft. If you do this and it breaks your model, you will not get support from Microsoft (or me for that matter). So, back everything up and keep the backups – don’t delete them. Consider yourself suitably warned 🙂.
This warning just makes me more likely to do it…
Database cloning is a key aspect of the SSRS scale-out architecture, with database clones providing each container a complete set of databases. Two or more VMs operated behind a load balancer deliver a highly available and scalable reporting service. This article focuses on Windows SQL Server containers and Windows Virtual Hard Drive (VHD) based cloning, but the same architecture can support SQL Server Linux containers or conventional instances (Windows or Linux). Redgate SQL Clone, for example, would support SQL Server instances. Other options include the use of storage arrays instead of Windows VHD based clones. The trade-offs between SQL containers and instances, and between VHDs and storage arrays, are covered in separate sections below.
The combination of SSRS containers with database cloning is appealing for simplicity and operational savings. SSRS containers are also drawing interest as part of public cloud strategies, as SSRS containers can be integrated with AWS RDS or SQL Azure databases to provide a horizontally scalable reporting solution.
This is a bit more complex than Reporting Services scale-out with Enterprise Edition, but if you’re on Standard Edition and can’t use scale-out, it’s an interesting alternative.
The purpose of this setting is to limit the amount of memory any single query can take. It is extremely useful when you want to cap per-query memory consumption across the board. Before this setting existed, a single extremely poorly written query could eat up all of a server’s memory and grind all other queries to a halt. You can see an example of such a query, and of SSAS memory settings, in my previous post here.
Read on for details about what it does and what happens when a query reaches the memory limit.
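For context, the setting being described is presumably Analysis Services’ Memory\QueryMemoryLimit property (my assumption; the linked post has the authoritative details). In msmdsrv.ini it would look roughly like this, where, as I understand it, a value of 100 or less is read as a percentage of total memory and larger values as bytes:

```
<Memory>
  <!-- hypothetical value: cap any one query at ~20% of server memory -->
  <QueryMemoryLimit>20</QueryMemoryLimit>
</Memory>
```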
When building indexes for your queries, the order of your index key columns matters. SQL Server can make the most effective use of an index if the data in that index is stored in the same order as what your query requires for a join, where predicate, grouping, or order by clause.
But if your query requires multiple key columns because of multiple predicates (e.g. WHERE Color = 'Red' AND Size = 'Medium'), what order should you define the columns in your index key column definition?
One of my favorite books for query tuning is a bit long in the tooth at this point but remains quite relevant, and a key point there is to look for ways to drop the largest percent of rows as soon as possible. This applies for good indexes as well: they’ll let you ignore as large a percentage of your irrelevant data as you can, as soon as possible.
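That principle of discarding the largest share of rows as early as possible can be sketched outside SQL Server, too. This minimal Python illustration (with made-up data) estimates each predicate’s selectivity as one rough heuristic for picking a leading key column; it is not the whole story, and the linked post covers the nuances:

```python
# Hypothetical product rows: (Color, Size). A real system would use column
# statistics or sampling rather than scanning every row.
rows = ([("Red", "Medium")] * 5 + [("Red", "Large")] * 45 +
        [("Blue", "Medium")] * 200 + [("Blue", "Small")] * 750)

def selectivity(rows, col_index, value):
    """Fraction of rows matching one equality predicate (lower = more selective)."""
    matches = sum(1 for r in rows if r[col_index] == value)
    return matches / len(rows)

# WHERE Color = 'Red' AND Size = 'Medium'
color_sel = selectivity(rows, 0, "Red")     # 50 / 1000  = 0.05
size_sel = selectivity(rows, 1, "Medium")   # 205 / 1000 = 0.205

# Leading with the more selective column discards more rows sooner.
leading_column = "Color" if color_sel < size_sel else "Size"
```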
An error message has started appearing in the SQL Server Error Logs during a nightly full backup.
Could not clear ‘DIFFERENTIAL’ bitmap in database ‘Database1’ because of error 9002. As a result, the differential or bulk-logged bitmap overstates the amount of change that will occur with the next differential or log backup. This discrepancy might slow down later differential or log backup operations and cause the backup sets to be larger than necessary. Typically, the cause of this error is insufficient resources. Investigate the failure and resolve the cause. If the error occurred on a data backup, consider taking a data backup to create a new base for future differential backups.
Click through for the root cause and solution.
That seems to work easily; maybe it really is the easiest approach at the end of the day.
One problem that may arise, besides building on proprietary code that may change without notice, is that Excel may have problems reading a UTF-8 CSV, as explained here.
Read on for more info on what has become the de facto web standard for text.
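As a minimal sketch of the Excel issue, Python’s `utf-8-sig` codec writes a byte-order mark (BOM) that Excel uses to detect UTF-8; plain `utf-8` omits it, and older Excel versions may then fall back to a legacy code page and mangle non-ASCII characters. The file and column names here are invented:

```python
import csv

rows = [["city", "population"], ["Zürich", 434000], ["Kraków", 780000]]

# "utf-8-sig" prepends the BOM bytes EF BB BF before the encoded text.
with open("cities.csv", "w", newline="", encoding="utf-8-sig") as f:
    csv.writer(f).writerows(rows)

with open("cities.csv", "rb") as f:
    first_bytes = f.read(3)  # b'\xef\xbb\xbf', the UTF-8 BOM
```

Reading the file back with `encoding="utf-8-sig"` strips the BOM again, so round-trips stay clean.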
To generate these, you can log into your AWS dashboard, go to the IAM (Identity and Access Management) dashboard, and select the Users tab. On the Users tab, add a user along with the administration rights that you want the user to have. Remember to restart R once you have filled in the access key information in the .Renviron file for it to take effect.
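For reference, an .Renviron entry for AWS credentials typically uses the standard environment variable names that the AWS client libraries read; the values below are placeholders to substitute with your own IAM user's keys:

```
# Placeholder credentials -- replace with your IAM user's access key pair
AWS_ACCESS_KEY_ID=AKIAXXXXXXXXXXXXXXXX
AWS_SECRET_ACCESS_KEY=xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
AWS_DEFAULT_REGION=us-east-1
```

Note that .Renviron does not support inline comments after a value, so keep comments on their own lines.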
At this point, those familiar with the cloudyr suite are probably asking, “This is exactly the same as library(aws.ec2), so why use boto3?” Well, to be honest, I was using aws.ec2 for a while, but I use spot instances, which the current version of aws.ec2 does not support. In addition, I found that boto3 has some other functionality which I prefer. For a full list of boto3 functions to interact with an EC2 instance, have a look at the reference manual.
It’s pretty good stuff; check it out.
That’s right. I defined an expression – a concatenation of two fields – and used the alias of that expression in another expression. Furthermore, I used the alias of the second expression in the WHERE clause. Gasp.
My workaround is to use CROSS APPLY and define calculations in blocks there. This doesn’t work for aggregation operations, but in cases like Koen’s example, it does simplify the WHERE clauses. This is a nicer solution, though.
SQL Server can be configured to provide certain behavior to client sessions, via the SET OPTIONS= command or via the sys.sp_configure 'user options' system stored procedure. The SET OPTIONS= command only affects the current session, whereas the sys.sp_configure 'user options' system stored procedure configures the default values for all future user sessions.
Click through for the script, which even contains a quick description of each option.