– Two Major Licensed Editions: Enterprise and Standard;
– Enterprise Edition can be licensed only “By Core”; Standard Edition is also available on a “Server + CAL” (Client Access License) basis;
– If you have Software Assurance, you can still use your old CAL licenses with SQL Server 2016 Enterprise Edition, but you will be limited to 20 cores on your server;
– Standard Edition is limited to 4 sockets/16 cores and 128 GB of memory.
Licensing is boring, painful, and ultimately necessary to understand.
The lowest performance tier runs $1.25/hr, or just under $1K/month ($1.25 × ~730 hours ≈ $912). It only goes up from there.
“Stretch Database currently does not support stretching to another SQL Server.” Azure only.
Lame/minimal filters…you have to roll your own functions, and they must be deterministic…no “Getdate() – 30”. This GUI is only slightly better than the horrible nightmare that was Notification Services…
I see the negatives overwhelming the positives at this point. You also can’t modify schema while Stretch is active.
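For reference, here is roughly what rolling your own filter function looks like; the function, table, and column names below are hypothetical, and the cutoff date is hard-coded precisely because the function has to be deterministic:

```sql
-- The filter predicate must be an inline table-valued function, schema-bound
-- and deterministic, hence a fixed date instead of GETDATE() - 30.
CREATE FUNCTION dbo.fn_stretchpredicate (@EventDate datetime2)
RETURNS TABLE
WITH SCHEMABINDING
AS
RETURN SELECT 1 AS is_eligible
       WHERE @EventDate < CONVERT(datetime2, '2016-01-01', 126);
GO

-- Hypothetical table; rows satisfying the predicate become eligible to migrate.
ALTER TABLE dbo.EventLog
    SET (REMOTE_DATA_ARCHIVE = ON (
        FILTER_PREDICATE = dbo.fn_stretchpredicate(EventDate),
        MIGRATION_STATE = OUTBOUND));
```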
The expansion of data sets and businesses’ increased expectations for analysis and modeling of data have led developers to create a number of database products to meet those needs. As data professionals, it is incumbent upon us to understand how these tools work and put them to their best use, before somebody else puts them to sub-optimal use. I am joined by Kevin Feasel, who walks us through some of the technologies available and sorts out under what circumstances we want to consider using each one.
In SQL Server 2016, distributed transactions are only supported if the transaction is distributed across multiple instances of SQL Server. They are NOT supported if the transaction is distributed between different databases within the same instance. So in the picture above, if the databases are on separate SQL Server instances it will work, but not if they reside on the same instance, which is the more likely scenario.
This seems like a half-finished job. We’ll see if Microsoft improves on this later.
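To make the supported/unsupported distinction concrete, here is a sketch; the linked server, database, and table names are hypothetical:

```sql
BEGIN DISTRIBUTED TRANSACTION;

-- Local update on this instance.
UPDATE dbo.Orders
SET Status = 'Shipped'
WHERE OrderID = 42;

-- RemoteInstance is a linked server pointing at a SEPARATE SQL Server instance;
-- this cross-instance form is the one SQL Server 2016 supports. The same pattern
-- against a second database on the SAME instance is the unsupported case.
UPDATE RemoteInstance.Sales.dbo.Shipments
SET ShippedDate = SYSDATETIME()
WHERE OrderID = 42;

COMMIT TRANSACTION;
```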
Typically, data warehousing and ETL tool vendors have recommended against writing your own custom components. After all, the target market for ETL tools is a space where the tools are specifically marketed as reducing the need for “error prone and time consuming” manual coding. When I ran across this tutorial on writing your own NiFi processor, it occurred to me that NiFi is the exact opposite: it’s both open source and designed for extensibility from the ground up. I found it quite reasonable to write a custom NiFi processor that leverages our existing code base.
The existing code is a Java program with separate classes for each device vendor, all with the same interface to abstract the nuances of each vendor from the main data export program. This interface follows a traditional paradigm: login, query, query, query, logout. Given that my input to NiFi above takes in simple username, password, and query criteria arguments, it seems trivial to create a NiFi processor class that adapts the existing code into the NiFi API. Here’s a slightly abbreviated version of the actual code. (In reality, it’s all of 70 lines of code.)
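The actual code isn’t reproduced in this excerpt, so here is a minimal sketch of the shape such an adapter takes against NiFi’s AbstractProcessor API; the class, property, and VendorClient names are all hypothetical stand-ins for the existing code base:

```java
import org.apache.nifi.annotation.documentation.Tags;
import org.apache.nifi.components.PropertyDescriptor;
import org.apache.nifi.flowfile.FlowFile;
import org.apache.nifi.processor.AbstractProcessor;
import org.apache.nifi.processor.ProcessContext;
import org.apache.nifi.processor.ProcessSession;
import org.apache.nifi.processor.Relationship;
import org.apache.nifi.processor.exception.ProcessException;
import org.apache.nifi.processor.util.StandardValidators;

import java.util.Arrays;
import java.util.Collections;
import java.util.List;
import java.util.Set;

// Hypothetical stand-in for the existing vendor abstraction described above:
// login, query (possibly several times), logout.
interface VendorClient {
    void login();
    byte[] query(String criteria);
    void logout();
}

@Tags({"vendor", "export"})
public class VendorQueryProcessor extends AbstractProcessor {

    static final PropertyDescriptor USERNAME = new PropertyDescriptor.Builder()
            .name("Username").required(true)
            .addValidator(StandardValidators.NON_EMPTY_VALIDATOR).build();
    static final PropertyDescriptor PASSWORD = new PropertyDescriptor.Builder()
            .name("Password").required(true).sensitive(true)
            .addValidator(StandardValidators.NON_EMPTY_VALIDATOR).build();
    static final PropertyDescriptor QUERY = new PropertyDescriptor.Builder()
            .name("Query Criteria").required(true)
            .addValidator(StandardValidators.NON_EMPTY_VALIDATOR).build();

    static final Relationship SUCCESS = new Relationship.Builder()
            .name("success").description("Query results").build();

    @Override
    protected List<PropertyDescriptor> getSupportedPropertyDescriptors() {
        return Arrays.asList(USERNAME, PASSWORD, QUERY);
    }

    @Override
    public Set<Relationship> getRelationships() {
        return Collections.singleton(SUCCESS);
    }

    // In the real code this would pick the right per-vendor implementation.
    protected VendorClient createClient(String user, String password) {
        throw new UnsupportedOperationException("wire up an existing vendor class here");
    }

    @Override
    public void onTrigger(ProcessContext context, ProcessSession session) throws ProcessException {
        VendorClient client = createClient(
                context.getProperty(USERNAME).getValue(),
                context.getProperty(PASSWORD).getValue());
        client.login();
        try {
            // Run the query and hand the results to NiFi as a new FlowFile.
            byte[] results = client.query(context.getProperty(QUERY).getValue());
            FlowFile flowFile = session.create();
            flowFile = session.write(flowFile, out -> out.write(results));
            session.transfer(flowFile, SUCCESS);
        } finally {
            client.logout();
        }
    }
}
```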
In almost any realistic scenario, you’re not going to have the opportunity to start from scratch. You will always have legacy components, external dependencies, and existing user bases to satisfy. I like this article because it moves forward from that starting point.
I haven’t gotten a ton of comments, but I did get a few (thank you to those who have responded!), and I decided to take one of them, create a Trace and an Extended Events session, and see how long each took. Jonathan has mentioned before that he can create an XE session as fast as a Trace, and I’ve been thinking that I can as well, so I thought I’d test it. It’s a straightforward Trace versus Extended Events test. Want to see which is faster? Watch the video here.
I love the “I would pop up the timer on the screen but I don’t know how to do that” bit; very Friday afternoonish.
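If it has been a while since you wrote one by hand, this really is about all it takes to stand up a quick session; the session name, event, and target here are illustrative:

```sql
CREATE EVENT SESSION [QuickCompare] ON SERVER
    ADD EVENT sqlserver.sql_statement_completed
    ADD TARGET package0.event_file (SET filename = N'QuickCompare');

ALTER EVENT SESSION [QuickCompare] ON SERVER STATE = START;
```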
After loading data into a server-based associative, in-memory database, Qlik customers could explore the data in a variety of ways from an AJAX Web GUI, enabling them to create and publish all sorts of reports and dashboards. The approach is not entirely dissimilar to the one taken by its rival, Tableau Software, which has also benefited from the big data boom and the democratization of BI.
The combination of market forces and a keen eye for product development were propellant for growth at Qlik. In 2009, the Radnor, Pennsylvania-based company had 11,400 customers and $157 million in revenues. By 2010, it had grown to 13,000 customers and had an IPO. By 2015, the company boasted 37,000 customers, $612 million in revenue, and a market cap north of $2.8 billion.
Qlik is definitely one of the big players in the visualization market, a market with Tableau and Power BI/SSRS alongside it in Gartner’s Leaders quadrant and a bunch of competitors nipping at their heels.
Now, we’re getting somewhere. Looking at this graph, we see four high-level problems we are trying to solve:
(Unknown/Unknown) The first step in realizing that we have a problem is accepting that we may not have the answer. We may not have the right mental or computational models; or even the right data to find bad things.
(Known/Unknown) We’ve invested time and energy brainstorming what could happen, sought out and collected the data we believe will help, and created mental and conceptual models that SHOULD detect/visualize these bad things. Now, we need to hunt and seek to see if we’re right.
(Unknown/Known) We’ve been hunting and seeking for some time, tuning and training our analytical models until they can automatically detect this new bad thing. Now we need to spend some time formalizing our response process to this new use case.
(Known/Known) Great, we’ve matured this use case to a point that we can trust our ability to detect; maybe even to the point of efficient rules/signatures. We have mature response playbooks written for our SOC analysts to follow. Now we can feel comfortable enough to design and implement an automated response for this use case.
I think his breakdown is correct, and I would also reiterate that within any organization, all four zones come into play, meaning you have different teams of people working concurrently; you’ll never automate away all the problems.
Common chart clutter items include:
Dark gridlines (use soft gray gridlines or eliminate gridlines when possible)
Overuse of bright, bold colors
Unnecessary use of all uppercase text (uppercase text is only necessary when calling attention to an element)
Basically, remove every visualization “feature” that Excel 97 gave you…
No data source is needed: this is a way of defining a table value in pure M code. The first parameter of the function takes a list of column names as text values; the second parameter is a list of lists, where each inner list contains the values for one row of the table.
In the last example, the columns in the table were of the data type Any (the ABC123 icon in each column header tells you this), which means that they can contain values of any data type, including numbers, text, dates, or even other tables. Here’s an example of this.
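The original example isn’t reproduced in this excerpt, but a minimal sketch of the idea, with made-up values, might look like this:

```m
// Both columns default to type Any, so a single column can hold
// numbers, text, dates, and even nested tables.
#table(
    {"Thing", "Value"},
    {
        {"a number", 1},
        {"a date", #date(2016, 6, 1)},
        {"a nested table", #table({"X"}, {{1}, {2}})}
    }
)
```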
This is a helpful trick.