Commas are the PUNCTUATION WINNER, except in Anne of Green Gables and Ulysses, where periods win out. These two novels are dramatically different from each other in other ways, though, and Ulysses is an outlier overall, with almost no spoken dialogue set off by quotation marks and an unusual ratio of colons to semicolons. Exclamation marks show up relatively more often in Wuthering Heights and Alice in Wonderland!
Exploring text in these kinds of ways is so fun, and tools for this type of text mining are developing so fast. You can incorporate information like this into modeling or statistical analysis; Mike Kearney has a package called textfeatures that lets you directly extract info such as the number of commas or number of exclamation marks from text. Let me know if you have any questions!
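textfeatures is an R package, but the underlying idea is easy to sketch in any language. Here's a minimal Python illustration of extracting punctuation counts as features (the function name and sample sentence are made up for the example, not part of textfeatures):

```python
from collections import Counter

def punctuation_features(text):
    """Raw counts of a few punctuation marks in a text.

    For modeling across documents of different lengths, you'd
    typically normalize these, e.g. per 1,000 characters.
    """
    counts = Counter(text)
    return {mark: counts[mark] for mark in [",", ".", "!", "?", ";", ":"]}

features = punctuation_features("Oh, Marilla! It's the most wonderful day, isn't it?")
```

Columns like these can then feed straight into a model or a summary table, one row per book or chapter.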
Yet more proof that Ulysses was an awful book.
The new R package flatxml provides functions to easily deal with XML files. When parsing an XML document, fxml_importXMLFlat produces a special dataframe that is ‘flat’ by its very nature but contains all necessary information about the hierarchical structure of the underlying XML document (for details on the dataframe, see the reference for the fxml_importXMLFlat function). flatxml offers a set of functions to work with this dataframe.
Apart from representing the XML document in a dataframe structure, there is yet another way in which flatxml relates to dataframes: the fxml_toDataFrame function can be used to extract data from an XML document into a dataframe, e.g. to work on the data with statistical functions. Because in this case there is no need to represent the XML document structure as such (it’s all about the data contained in the document), there is no representation of the hierarchical structure of the document any more; it’s just a normal dataframe.
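The extract-to-dataframe idea translates readily outside of R. Here's a minimal Python sketch using only the standard library (xml_to_rows is a hypothetical helper written for this example, not part of flatxml):

```python
import xml.etree.ElementTree as ET

xml = """
<contacts>
  <contact><name>Ada</name><phone>555-0100</phone></contact>
  <contact><name>Grace</name><phone>555-0199</phone></contact>
</contacts>
"""

def xml_to_rows(xml_text, record_tag):
    """Turn each <record_tag> element into a flat dict of child tag -> text."""
    root = ET.fromstring(xml_text)
    return [{child.tag: child.text for child in rec}
            for rec in root.iter(record_tag)]

rows = xml_to_rows(xml, "contact")
```

Each dict is one "row," so the result drops straight into pandas or similar; the hierarchy is discarded because, as above, it's the data you're after, not the structure.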
Imagine a simple table containing contact information. It has two places for phone numbers (Phone1 and Phone2). Let’s say you wanted to split those off into a related phone numbers table so you could support many more numbers for a single contact. As part of that process, you’d need to create two rows for each existing contact: one for each phone number.
One way to get two phone numbers is to handle each phone number as a separate process, combining a list of all contacts and their first number with those same contacts and their second number. So you’d SELECT everything from the table once for the first phone number and then UNION it with another SELECT. This works, but if you have a large list of columns or a complex set of JOINs, you end up repeating a lot of code.
Instead, what if you could simply, magically, double the rows and then pick and choose the columns you want from each copy of the rows?
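The row-doubling trick works in any SQL dialect. Here's a runnable sketch against SQLite (via Python) with a made-up Contacts table: cross join against a two-row set of numbers, then use the number to decide which phone column each copy of the row contributes.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE Contacts (ContactID INTEGER, Name TEXT, Phone1 TEXT, Phone2 TEXT);
INSERT INTO Contacts VALUES (1, 'Ada', '555-0100', '555-0101');
""")

# Double the rows with a two-row tally, then pick a column per copy.
rows = conn.execute("""
    SELECT c.ContactID,
           CASE t.n WHEN 1 THEN c.Phone1 ELSE c.Phone2 END AS Phone
    FROM Contacts AS c
    CROSS JOIN (SELECT 1 AS n UNION ALL SELECT 2) AS t
    ORDER BY c.ContactID, t.n
""").fetchall()
```

The column list and JOINs appear only once, no matter how many copies of each row you need.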
Tally tables are a great “Get out of a bad design (relatively) cheap” tool. They’re not something I use on a daily basis, but they’ve made life easier for me on dozens of occasions.
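A tally table is just a table of sequential numbers. If you don't keep a permanent one around, you can generate one on the fly; here's a minimal sketch (SQLite via Python, using a recursive CTE, which is one common approach among several):

```python
import sqlite3

conn = sqlite3.connect(":memory:")

# Generate the numbers 1..10 with a recursive CTE -- an ad hoc tally table.
nums = conn.execute("""
    WITH RECURSIVE tally(n) AS (
        SELECT 1
        UNION ALL
        SELECT n + 1 FROM tally WHERE n < 10
    )
    SELECT n FROM tally
""").fetchall()
```

Cross joining against a tally like this is the general form of the two-row trick above: one copy of each source row per number.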
We’ll start off with a simple step-by-step introduction to the sp_execute_external_script stored procedure. This is the glue that enables us to integrate SQL Server with the Python engine by sending the output of a T-SQL query over to Python and getting a result set back. For example, we could develop a stored procedure to be used as a data set in an SSRS report that returns statistical data produced by a Python library such as SciPy. In this post we’ll introduce the procedure and show how to pass a simple data set into and back out of Python; we’ll get into manipulating that data in a future post.
If you’d like to follow along with me, you’ll need to make sure that you’re running SQL Server 2017 or later, and that Machine Learning Services has been installed with the Python option checked. Don’t worry if you installed both R and Python, as they play quite nicely together. The SQL Server Launchpad service should be running, and you should make sure that the ability to execute the sp_execute_external_script procedure is enabled by running the following code:
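The enabling script is a standard sp_configure call; 'external scripts enabled' is the documented setting name (depending on your version, a service restart may also be needed for the run value to update):

```sql
EXEC sp_configure 'external scripts enabled', 1;
RECONFIGURE WITH OVERRIDE;
```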
Chris’s talks on ML Services (either R or Python) were great and I expect this series to be as well.
Today we’ll look at configuring a common, repeatable scenario: take the latest backup of MyDB from ProdServer1 and restore it to DevServer1. There are four basic steps to the setup and execution:
Configure Minion Backup and let it run on ProdServer1. Restoring with MB requires at least one full backup taken by MB. (Note that you don’t need Minion Backup on DevServer1 for this scenario.)
Configure restore settings paths. You know how sometimes a restore requires one or more “WITH MOVE” clauses? Configure this once for your source-target pair, and MB takes care of it from then on.
Configure the restore tuning settings (optional). Oh yes, we tune our backups AND our restores!
Generate and run the restore statements.
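The statements generated in that last step ultimately boil down to standard RESTORE commands. As a rough illustration of what "WITH MOVE" is doing (the logical file names, share, and paths below are invented for the example):

```sql
RESTORE DATABASE MyDB
FROM DISK = N'\\backupshare\ProdServer1\MyDB\MyDB_Full.bak'
WITH MOVE N'MyDB'     TO N'D:\Data\MyDB.mdf',
     MOVE N'MyDB_log' TO N'L:\Logs\MyDB_log.ldf',
     REPLACE,
     STATS = 10;
```

MB's restore settings paths exist so you configure those MOVE targets once per source-target pair instead of writing them by hand every time.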
It’s a good walkthrough if you’re a Minion Backup user. If you’re not and you’re not particularly happy with your backup solution, I recommend giving it a try.
I thought this was an interesting question, but it makes sense to have some concern about it. If you are using Sequences over identity inserts, this typically means you need more control over how those numbers are handled from creation to archive. So what if you need a lot of different Sequences stored on your server to handle your needs? Will you run out of Sequence objects that you can create?
This information is not intuitively simple to find; this is one of those times where you need to read various articles and connect the dots. Let’s start simple.
This is one of those cases where Swart’s 10% Rule comes into play.
If you were to look at sys.indexes, you would see that these two indexes use index_id values of 1 and 3; the value 2 is skipped. That’s not because there used to be an index that was later deleted. It’s because index_id and stats_id share the same values (index_id = stats_id), and a statistic with stats_id = 2 already existed. When the index for the primary key was created, index_id 2 had to be skipped.
Check it out for additional insights.
As a general rule of thumb, in formal SSAS projects built on a relational data mart or data warehouse managed by the same project team as the BI data model, I typically recommend that every table in the model import data from a corresponding view or UDF stored and managed in the relational database. Keep in mind that this is the way we’ve been designing Microsoft BI projects for several years: performing simple tasks like renaming columns in the SSAS data model designer was slow and cumbersome, and performing that part of the data prep in T-SQL was much easier than in SSDT. With the recent advent of Power Query in SQL Server Data Tools, there is a good argument to be made for managing those transformations there, but the tool is still new and, frankly, I’m still testing the water. Again, keep changes in one place for future maintenance.
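The import-from-a-view pattern in miniature: renames and light shaping live in the view, so the model layer just does a plain SELECT. Sketched here against SQLite via Python with made-up table and column names (in a real project this would be a T-SQL view in the warehouse):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE DimCustomer (cust_key INTEGER, cust_fname TEXT, cust_lname TEXT);
INSERT INTO DimCustomer VALUES (1, 'Ada', 'Lovelace');

-- The model imports from this view, so renames and derivations
-- live in one place in the relational database.
CREATE VIEW vDimCustomer AS
SELECT cust_key                        AS CustomerKey,
       cust_fname || ' ' || cust_lname AS CustomerName
FROM DimCustomer;
""")

row = conn.execute("SELECT CustomerKey, CustomerName FROM vDimCustomer").fetchone()
```

When a column needs renaming or a derivation changes, you alter the view once rather than touching the model designer.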
Do your absolute best to avoid writing complex SQL query logic that cannot be traced back to the sources. Complicated queries can become a black box, and a Pandora’s box, if they aren’t documented, annotated, and easy to decipher.
But do read Paul’s closing grafs on the importance of not being hidebound.