Let's look at how teams played on offense depending on where they were on the field (their yardline) and the down. The fields in our dataframe that we care about here are yfog (yards from own goal), type (rush or pass), and dwn (current down number: 1, 2, 3, or 4). We will want a table with each of these columns as well as a sum column. That way, we can see how many times a pass was attempted on fourth down when a team was X yards from its own goal.
To do this, we will use a package called plyr. The Internet says that this package makes it easy for us to split data, mess with it, and then put it back together. I am not convinced the tool is easy, but I haven’t spent too much time with it.
Check it out for some ideas on what you can do with R.
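If it helps to see the target shape in more familiar terms, the table being built is conceptually just a grouped count. Here is a hypothetical T-SQL analogue; the play-by-play table name and layout are assumptions for illustration, not anything from the post:

```sql
-- Count play calls by down, field position, and play type
-- (dbo.PlayByPlay is a hypothetical play-by-play table)
SELECT dwn, yfog, [type], COUNT(*) AS plays
FROM dbo.PlayByPlay
GROUP BY dwn, yfog, [type]
ORDER BY dwn, yfog, [type];
```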
It is important to note that the SQL statements generated in the background are not executed unless explicitly requested by the command as.data.frame. Hence, you can merge, filter and aggregate your dataset on the database side and load only the result set into memory for R.
In general, the design principle behind RDBL is to keep the models as close as possible to the usual data.frame logic, including (as shown later in detail) commands like aggregate, referencing columns with the $ operator, and features like logical indexing using the [] operator.
Check it out. I’m not particularly excited about this for one simple reason: SQL is a better data retrieval and connection DSL than an R-based mapper. I get the value of sticking to one language as much as possible. I also get that not all queries need to be well-optimized—for example, you might be running queries on a local machine or against a slice of data which is not connected to an operational production environment. But I’m a big fan of using the right tool for the job, and the right tool for working with relational databases (and the “relational” part there is perhaps optional) is SQL.
Now that we've defined connections, databases, and schemas, we still need to add our table metadata.
We’re going to do that by looping across all our databases marked as a source in Biml, retrieving the list of required tables from SQL (located in View vMyBimlMeta_Tables) and creating a table tag for each table which will also reference back to the corresponding target system. That also allows us to use the same table names multiple times. Again, we’ll store some additional data in annotations.
This is an interesting concept. Check it out.
So make sure you really need all that junk in your nonclustered index trunk. Er, key.
But even with the expanded size of key columns, sometimes I get asked a question: do columns that “secretly” get added to the key of a nonclustered index count against the maximum allowed nonclustered index key length?
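To make the question concrete, here is a hypothetical setup (the table and column sizes are mine, not from the original post). The nonclustered index below only declares one column, yet the clustering key is silently carried along with it:

```sql
-- OrderNumber (the clustering key) and CustomerName can each be up to
-- 800 bytes. IX_Orders_CustomerName only declares CustomerName, but SQL
-- Server quietly adds OrderNumber to it: to the index key for a non-unique
-- index, or as a leaf-level include for a unique one.
CREATE TABLE dbo.Orders
(
    OrderNumber  NVARCHAR(400) NOT NULL,
    CustomerName NVARCHAR(400) NOT NULL,
    OrderDate    DATETIME2 NOT NULL,
    CONSTRAINT PK_Orders PRIMARY KEY CLUSTERED (OrderNumber)
);

CREATE NONCLUSTERED INDEX IX_Orders_CustomerName
    ON dbo.Orders (CustomerName);
```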
You can read the short answer, but I recommend reading the full post.
This is a very exciting project with great interest from the R and broader data science community. In just the past two months (since we opened registration for the conference):
More than 160 people from 17 countries have signed up and paid for attendance so far (roughly a 50-50 mix of academic and industry tickets, and a 30-70 mix of foreign and Hungarian attendees)
We received almost 40 contributed talk proposals in the few weeks the CfP was open
25 selected (and awesome) speakers have agreed to present at the conference
I’d like to see this take off, similar to SQL Saturdays.
For data scientists, we provide out-of-the-box integration with Jupyter (IPython), the most popular open source notebook in the world. Unlike other managed Spark offerings that might require you to install your own notebooks, we worked with the Jupyter OSS community to enhance the kernel to allow Spark execution through a REST endpoint.
We co-led “Project Livy” with Cloudera and other organizations to create an open source Apache licensed REST web service that makes Spark a more robust back-end for running interactive notebooks. As a result, Jupyter notebooks are now accessible within HDInsight out-of-the-box. In this scenario, we can use all of the services in Azure mentioned above with Spark with a full notebook experience to author compelling narratives and create data science collaborative spaces. Jupyter is a multi-lingual REPL on steroids. Jupyter notebook provides a collection of tools for scientific computing using powerful interactive shells that combine code execution with the creation of a live computational document. These notebook files can contain arbitrary text, mathematical formulas, input code, results, graphics, videos and any other kind of media that a modern web browser is capable of displaying. So, whether you’re absolutely new to R or Python or SQL or do some serious parallel/technical computing, the Jupyter Notebook in Azure is a great choice.
If you could only learn one new thing in 2016, Spark probably should be that thing. Also, I probably should agitate a bit more about wanting Spark support within Polybase…
In my previous series on Stream Analytics, I wrote some U-SQL. That U-SQL didn't look much different from ANSI SQL, which is sort of the point of porting the functionality to a different yet familiar language. Another application which heavily uses U-SQL is Azure Data Lake. Data Lake stores its data in HDInsight, but you don't need to write Hive to query the data, as U-SQL will do it. Like Hive, U-SQL can be used to create a schema on top of some data and then query it.
For example, to write a query on this CSV file stored in a Data Lake, I would need to create the data definition for the data, and then I could easily write a statement to query it.
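As a rough sketch of what that looks like (the file path and columns below are made up for illustration, not the ones from the post): you EXTRACT against the file to define its schema, query the resulting rowset, and OUTPUT the result.

```
// Define a schema over the raw CSV file at query time
@sales =
    EXTRACT ProductId int,
            Category  string,
            Amount    decimal
    FROM "/input/sales.csv"
    USING Extractors.Csv();

// Then query it much like an ordinary table
@summary =
    SELECT Category,
           SUM(Amount) AS TotalAmount
    FROM @sales
    GROUP BY Category;

OUTPUT @summary
    TO "/output/sales_by_category.csv"
    USING Outputters.Csv();
```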
I’m interested in seeing how much adoption we see in this language.
The idea of using key-value pairs to store data isn't new, but with the rapid development of cloud solutions like Azure and the hype around NoSQL databases, using key-value pairs to store data got a big boost. Developers especially (in my experience) love using key-value pairs to store their data, because it's easy for them to consume the data in an application. But it gives the database professional an extra challenge, because we're used to retrieving columns with values instead of a record per value. So how can we turn those key-value pairs into rows?
This is a good example of using PIVOT. I’m not a big fan of storing data in key-value pairs and using pivoting operators because you’re burning CPU on that very expensive SQL Server instance (and you’re not taking advantage of what relational databases do well); if you really need to store data as key-value, I’d recommend doing the pivot in cheaper application servers.
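For reference, here's a minimal sketch of the technique; the table, keys, and values are invented for illustration:

```sql
-- A made-up key-value table: one row per attribute per entity
CREATE TABLE #Settings
(
    EntityId INT           NOT NULL,
    [Key]    NVARCHAR(50)  NOT NULL,
    [Value]  NVARCHAR(100) NOT NULL
);

INSERT INTO #Settings (EntityId, [Key], [Value]) VALUES
    (1, N'FirstName', N'Ada'),
    (1, N'LastName',  N'Lovelace'),
    (2, N'FirstName', N'Charles'),
    (2, N'LastName',  N'Babbage');

-- Pivot the key-value rows into one row per entity
SELECT EntityId, FirstName, LastName
FROM
(
    SELECT EntityId, [Key], [Value]
    FROM #Settings
) AS src
PIVOT
(
    MAX([Value]) FOR [Key] IN (FirstName, LastName)
) AS p;
```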
Very often when I mention testing before an upgrade, I’m told that there is no environment in which to do the testing. I know some of you have a Test environment. Some of you have Test, Dev, QA, UAT and who knows what else. You’re lucky.
For those of you who say you have no test environment at all in which to test, I give you DBCC CLONEDATABASE.
With this command, you have no excuse not to run the most frequently-executed queries and the heavy hitters against a clone of your database. Even if you don't have a test environment, you have your own machine. Back up the clone database from production, drop the clone, restore the backup to your local instance, and then test. The clone database takes up very little space on disk, and you won't incur memory or I/O contention since there's no data. You will be able to validate query plans from the clone against those from your production database. Further, if you restore on SQL Server 2016, you can incorporate Query Store into your testing! Enable Query Store, run through your testing in the original compatibility mode, then upgrade the compatibility mode and test again. You can use Query Store to compare queries side by side! (Can you tell I'm dancing in my chair right now?)
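Here's roughly what that workflow looks like in T-SQL; the database name, paths, and compatibility level below are placeholders, so adjust them for your environment.

```sql
-- On the production instance: create a schema- and statistics-only clone
DBCC CLONEDATABASE (SalesDB, SalesDB_Clone);

-- Back up the clone and get it off the production server
BACKUP DATABASE SalesDB_Clone TO DISK = N'C:\Temp\SalesDB_Clone.bak';
DROP DATABASE SalesDB_Clone;

-- On your local or test instance (add WITH MOVE if file paths differ)
RESTORE DATABASE SalesDB_Clone FROM DISK = N'C:\Temp\SalesDB_Clone.bak';

-- On SQL Server 2016: the clone may come across read-only, so make it
-- writable, turn on Query Store, test, then bump the compatibility level
-- and test again
ALTER DATABASE SalesDB_Clone SET READ_WRITE;
ALTER DATABASE SalesDB_Clone SET QUERY_STORE = ON;
ALTER DATABASE SalesDB_Clone SET COMPATIBILITY_LEVEL = 130;
```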
Erin’s discovery makes CLONEDATABASE go from being an interesting tool to being outright powerful for handling upgrades.
A natural key is one constructed from data that already exists in the table. For example, using latitude and longitude in a table of addresses, or the social security number in a table of employees. (Before you say anything, yes, the social security number is a horrible primary key. Be patient.)
My personal preference is to use surrogate keys most of the time and put unique constraints (or unique indexes) on the natural key. There are some occasions in which I'd deviate, but ceteris paribus I'd pick this strategy.
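A quick sketch of that pattern with a hypothetical employees table: the surrogate key is the primary key, and the natural key is still enforced with a unique constraint.

```sql
CREATE TABLE dbo.Employee
(
    EmployeeID           INT IDENTITY(1,1) NOT NULL,
    SocialSecurityNumber CHAR(11)          NOT NULL,
    FirstName            NVARCHAR(50)      NOT NULL,
    LastName             NVARCHAR(50)      NOT NULL,
    -- Surrogate primary key
    CONSTRAINT PK_Employee PRIMARY KEY CLUSTERED (EmployeeID),
    -- Natural key kept unique
    CONSTRAINT UQ_Employee_SSN UNIQUE (SocialSecurityNumber)
);
```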