Andy Leonard has a recording of his introductory session on BIML. He also has a series of links and resources.
If you use Integration Services at all, ever, then you want to learn BIML. BIML : SSIS :: PowerShell : administration.
And these are all awesome changes. But sometimes it’s the little things that make me smile.
For example, I was toying with SSRS 2016 CTP 3.0 and wanted to insert a new chart. Suddenly I noticed they had updated the little chart icons in the dialog.
Here’s hoping the final product ends up being what we all wanted back in 2008; if so, that’d be pretty awesome.
There’s something quite odd about this session. It has no target! The data isn’t being written to a ring buffer in memory, to a file, or even to a counter.
So I did a little testing. I right-clicked the session and selected ‘Watch Live Data’ to see if I could consume the data flowing through in SQL Server Management Studio even though the session didn’t have a target. Then I ran a test in another session.
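The exact script from the post isn’t reproduced here, but a hypothetical test along these lines would give the session something to catch (the message and severity are mine; severities 19 and up would additionally need WITH LOG and sysadmin rights):

```sql
-- Hypothetical test, not the script from the post: raise an error
-- by hand and see whether it shows up in the Watch Live Data viewer.
RAISERROR (N'Testing the no-target session.', 16, 1);
```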
It looks like this extended event is designed to track serious error messages. I’m liking it and hope it sticks around in RTM.
Richie Lee has a quick script to check which objects are compressed. Given that I ended up needing to use this script within a day of his posting it (hey, I’m as lazy as anybody else…), I figured it was worth linking.
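Richie’s actual script is at the link; as a rough sketch of the idea, a query along these lines lists every partition with row or page compression turned on:

```sql
-- A minimal sketch, not Richie's script: list compressed objects
-- and partitions from the catalog views.
SELECT  s.name AS schema_name,
        o.name AS object_name,
        p.index_id,
        p.partition_number,
        p.data_compression_desc
FROM    sys.partitions AS p
        JOIN sys.objects AS o ON o.object_id = p.object_id
        JOIN sys.schemas AS s ON s.schema_id = o.schema_id
WHERE   p.data_compression <> 0   -- 0 = NONE
ORDER BY schema_name, object_name, p.partition_number;
```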
Jan Mulkens has started an interesting series on data science using the Microsoft stack. His first post is an overview of the products currently available:
But on a more serious note, I’m going to be blunt about Microsoft here.
A long time ago, Power BI started as an over-hyped and underwhelming experience. Everyone saw the potential in this Excel stuff, but I’m guessing most people’s experience was similar to mine: Power BI back then was a disappointment because of what we had been expecting.
The one good thing it did have at one point was PowerPivot.
Skip forward to August 2015.
The Power BI dream had suddenly come true!
Most of the things we had been expecting were suddenly there, in a web service AND a desktop application.
From there, Mulkens shares a number of training materials:
Make Microsoft Virtual Academy your first or last stop when learning; either way, you should always pay it a visit!
It’s filled with incredible information broken down into some great free courses.
It seems that (at least some of) the closed edX.org courses are being placed on here, so you can follow up on them at your own pace.
Do be aware that you can’t receive certificates on Microsoft Virtual Academy.
This is an exciting time to jump into analytics. Most of the material is free, and it’s easy to get VMs to practice on, so the barrier to entry is low.
As I sit here typing this blog post in my home, we are 145 days, or more accurately about 3,480 hours, from April 12, 2016. That is the date when Extended Support for SQL Server 2005 will be done. Over. Dunzo. Kaput. Yes, Microsoft can do Custom Support Agreements for large companies, but it will cost you, and you will still be stuck without all the sweet features that have been released in the last 7 years (since 2008 came out). So let’s face it: it’s time to upgrade SQL Server 2005!
Chaves gives two good options: either upgrade or move your database into Azure. The unfortunate thing is that there are industries (health care, I’m looking at you) whose vendors are so slow to support new versions that some servers will be stuck on 2005 or (ick) 2000 forever. I feel for you guys.
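If you need a starting point for an upgrade inventory, a quick check like this tells you whether an instance is still on 2005 (SQL Server 2005 reports a 9.x product version):

```sql
-- Quick inventory sketch: SQL Server 2005 reports version 9.x.
SELECT  SERVERPROPERTY('ProductVersion') AS product_version, -- e.g. 9.00.5000.00
        SERVERPROPERTY('ProductLevel')   AS product_level,   -- e.g. SP4
        SERVERPROPERTY('Edition')        AS edition;
```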
Organizations looking to take control of this onslaught of information are turning to other solutions to meet their data needs, either in addition to or instead of the traditional EDW. Quite often this means turning to a logical architecture that abstracts the inherent complexities of the big data universe. Such an approach embraces mixed environments through the use of distributed processing, data virtualization, metadata management, and other technologies that help ease the pain of accessing and federating data.
Dubbed the logical data warehouse (LDW), this virtual approach to a BI analytics infrastructure originated with Mark Beyer, while participating in Gartner’s Big Data, Extreme Information and Information Capabilities Framework research in 2011. According to his blog post “Mark Beyer, Father of the Logical Data Warehouse, Guest Post,” Beyer believes that the way to approach analytical data is to focus on the logic of the information, rather than the mechanics.
This feels like something that first-movers are starting to adopt, but won’t be mainstream for another 6-8 years. That should give the idea some time to mature as we see the first round of successes and (more importantly) failures.
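To make the data virtualization piece a bit more concrete, here’s a hedged sketch using PolyBase external tables in SQL Server 2016, one technology that fits the LDW pattern. Every name and location below is hypothetical, and it assumes the PolyBase feature is installed:

```sql
-- Illustrative only: expose Hadoop data as if it were a local table.
CREATE EXTERNAL DATA SOURCE HadoopCluster
WITH (TYPE = HADOOP, LOCATION = 'hdfs://namenode:8020');

CREATE EXTERNAL FILE FORMAT PipeDelimited
WITH (FORMAT_TYPE = DELIMITEDTEXT,
      FORMAT_OPTIONS (FIELD_TERMINATOR = '|'));

CREATE EXTERNAL TABLE dbo.WebClicks
(
    ClickTime datetime2     NOT NULL,
    UserID    int           NOT NULL,
    Url       nvarchar(400) NOT NULL
)
WITH (LOCATION = '/data/clicks/',
      DATA_SOURCE = HadoopCluster,
      FILE_FORMAT = PipeDelimited);
```

The point is that consumers just query dbo.WebClicks like any other table; where the bytes actually live becomes an implementation detail.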
I trust the utilization reported by “Processor Info”. Note that the greatest reported “Resource Pool Stats” utilization (approaching 120%) is when total “Processor Info” utilization is near 100% across all 12 physical/24 logical cores. Nominal rating of the core is 3.06 GHz, top SpeedStep is 3.46 GHz. That would give a maximum ratio of 3.46/3.06 = 113%, which is still under the number reported by SQL Server (for Default pool alone, I’ll add). Even if the numbers made it seem possible that SpeedStep was responsible for more than 100% utilization reported by SQL Server, I don’t think SpeedStep is the culprit. The older Intel processors were by default conservative with SpeedStep, to stay well within power and heat envelope. And no-one’s been souping this server up for overclocking 🙂
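If you want to pull those same per-pool numbers without Perfmon, the counters behind “Resource Pool Stats” also surface in a DMV. Here’s a sketch; note that “CPU usage %” is a ratio counter, so you have to divide by its base value:

```sql
-- A sketch: compute per-pool CPU usage % from
-- sys.dm_os_performance_counters. "CPU usage %" is a ratio counter,
-- so divide it by "CPU usage % base" to get the percentage.
SELECT  v.instance_name AS pool_name,
        100.0 * v.cntr_value / NULLIF(b.cntr_value, 0) AS cpu_usage_pct
FROM    sys.dm_os_performance_counters AS v
        JOIN sys.dm_os_performance_counters AS b
          ON  b.[object_name]  = v.[object_name]
          AND b.instance_name  = v.instance_name
          AND RTRIM(b.counter_name) = N'CPU usage % base'
WHERE   v.[object_name] LIKE N'%Resource Pool Stats%'
  AND   RTRIM(v.counter_name) = N'CPU usage %';
```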
So… if my database engine will give 110% (and sometimes more…) I guess I better, too. 🙂
Math is hard.
With the domain-centric approach, on the other hand, programmers view the domain model as the most important part of the software project. It is usually represented in the application code, using an OO or functional language. Data (as well as other notions such as UI) is considered to be secondary in this case.
Each of the approaches brings its own pros and cons, as well as some differences in the way developers address common design challenges. Let’s elaborate on that.
Khorikov is domain-centric, whereas I am data-centric. My justification is as follows: 20 years from now, the most likely scenario is that your application has been rewritten three or four times, whereas my database is still chugging along. Therefore, we should design in ways that make it easier to maintain correct data.
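To show what I mean by designing for correct data, here’s a hedged sketch (the table and column names are mine, not from either post): push the invariants into the schema, where they survive every application rewrite.

```sql
-- Illustrative only: enforce correctness rules in the database itself
-- so they hold no matter which application writes the data.
-- Assumes a dbo.Customers table with a CustomerID primary key.
CREATE TABLE dbo.Orders
(
    OrderID    int IDENTITY(1, 1) NOT NULL
        CONSTRAINT PK_Orders PRIMARY KEY,
    CustomerID int NOT NULL
        CONSTRAINT FK_Orders_Customers
            REFERENCES dbo.Customers (CustomerID),
    OrderDate  date NOT NULL
        CONSTRAINT DF_Orders_OrderDate DEFAULT (GETDATE()),
    Quantity   int NOT NULL
        CONSTRAINT CK_Orders_Quantity CHECK (Quantity > 0)
);
```

The foreign key and check constraints don’t care whether the caller is version 1 of the app or rewrite number four.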