Personally, I am still struggling to see where Power BI fits in my organization. I am the only BI professional here, so I have to handle every part of the process, from data modeling to building front-end applications. Right now, my organization has a data warehouse with some processes in the warehouse, a Datazen environment, and an SSRS environment. There are no SSAS cubes and no power users analyzing data with PowerPivot.
Data warehouses serve a particular role in an environment: they answer known business questions and give consistent answers across an organization. I see Power BI as a tool with a few separate uses depending upon organizational size and maturity. In shops that lack the size, maturity, or non-IT business intelligence expertise for self-service analytics, I think its best use is BI developers building beautiful dashboards for business data consumers, fed from existing systems (including data warehouses). In that sense, it is a complement to a Kimball-style data warehouse.
Exciting news! Starting today, SQL Server 2014 Developer Edition is now a free download for Visual Studio Dev Essentials members. We are making this change so that all developers can leverage the capabilities that SQL Server 2014 has to offer for their data solution, and this is another step in making SQL Server more accessible. SQL Server Developer Edition is for development and testing only, and not for production environments or for use with production data.
Visual Studio Dev Essentials is Microsoft’s most comprehensive free developer program ever, with everything you need to build and deploy your app on any platform, including state-of-the-art tools, the power of the cloud, training, and support.
SQL Server 2016 will also be covered under this plan. Granted, Developer Edition would not break the bank anyhow, but it does lower (ever so slightly) those barriers to entry, and I think it’ll be a driving point for SQL Server on Linux.
SQL Server 2016 gets a scalability boost from scheduling updates. Testing uncovered issues with the percentile-based scheduling algorithms in SQL Server 2012 and 2014: a worker with a large CPU quantum and a worker with a short CPU quantum can receive unbalanced access to the scheduling resources.
Take the following example. Worker 1 is a large read query using read-ahead and in-memory database pages, and Worker 2 is doing shorter activities. Worker 1 finds its information already in the buffer pool and does not have to yield for I/O operations, so it can consume its full CPU quantum.
On the other hand, Worker 2 is performing operations that require it to yield. For discussion, let's say Worker 2 yields at 1/20th of its CPU quantum target. Taking resource governance and other activities out of the picture, the scheduling pattern looks like the following.
I’m going to have to reserve judgment on this. It’s been in Azure SQL Database for a while, so I’m not expecting bugs, but I wonder if there are any edge cases in which performance gets worse as a result of this.
First off, the word itself. The Cloud. What is The Cloud? It’s a server that you don’t own. You can’t touch it, it’s in someone else’s data center. It may or may not be virtual. Amazon’s Cloud or Microsoft’s or Google’s are several data centers with racks and racks of servers. They are physical, just not at your location. And they’re accessed across the Internet. This is something that we’ve been doing for 30 years, it’s called a Wide-Area Network, just scaled up bigger. We had bi-coastal WANs before the World Wide Web came along.
Four or five years ago, I was absolutely in agreement. Today, I’m 50/50, being near 100% for many types of servers (web servers, etc.) and closer to 25-30% for databases. My expectation is that those numbers will continue to shift upward as time goes on, but there will always be reasons not to migrate certain servers to someone else’s data center.
To understand why we get this performance degradation with SQL Server 2016 RC1, three key parts of a transaction's life cycle need to be understood, along with the serialisation mechanisms that protect them.
Chris digs into call stacks as part of his post. We’ll see if there are some performance improvements between now and RTM on this front.
SAN snapshots, and I don’t care who your vendor is, by definition depend on the production LUN. Well, that’s the production data.
That’s it. That’s all I’ve got. If that production LUN fails for some reason, or becomes corrupt (which happens more often than you might think), then the snapshot is also corrupt. And if the snapshot is corrupt, then your backup is corrupt. Then it’s game over.
SAN snapshots are a good part of an infrastructure-side recovery plan, but databases have their own recovery time and recovery point objectives. Conflating these two can lead to sadness.