Now interact with SparkSQL through the Zeppelin UI, but reuse the table definitions you created in the Hive metastore. You’ll create another table in SparkSQL later in this post to show how that would have been done there.
Connect to the Zeppelin UI and create a new notebook under the Notebook tab. Run a query to show the tables; you can see that the two tables you created in Hive are also available in SparkSQL.
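In a Zeppelin paragraph, that check might look something like the following sketch; the %sql interpreter binding is my assumption about how the notebook is configured:

```sql
%sql
-- List the tables registered in the shared Hive metastore;
-- the tables created earlier in Hive should appear here too.
SHOW TABLES
```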
There are a bunch of tools in here, but for me, the moral of the story is that SQL is a great language for data processing. Spark SQL has gaps, but has filled many of those gaps over the past year or so, and I recommend giving it a shot.
Apache Storm was the original solution to Twitter’s stream-processing problems. It was created by a marketing intelligence company called BackType; Twitter bought the company in 2011 and eventually open-sourced Storm, donating it to the Apache Software Foundation.
There’s no question Storm has a lot of advantages. It’s scalable and fault-tolerant, with a decent ecosystem of “spouts,” or systems for receiving data from established sources. But it was reputedly also hard to work with and hard to get good results from, and despite a recent 1.0 renovation, it’s been challenged by other projects, including Apache Spark and its own revised streaming framework.
It’s good to see this competition in the streaming space.
If you have no idea what a box-and-whisker plot is, please visit the following link: http://www.wellbeingatschool.org.nz/information-sheet/understanding-and-interpreting-box-plots
First, I will show how to do it using the AdventureWorks database in SQL Server 2014.
We will analyze the amounts of individual sales order lines within each month.
The first step is to create a data set to process. That data set will contain the month, the individual line amount, and the row number of that record within its month.
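I don’t have the author’s exact script, but a minimal sketch of that first step might look like this; the AdventureWorks table and column names (Sales.SalesOrderHeader, Sales.SalesOrderDetail, LineTotal, OrderDate) are my assumptions about the schema in play:

```sql
-- Build the data set: month, individual line amount, and the
-- row number of that line within its month.
SELECT
    MONTH(h.OrderDate) AS OrderMonth,
    d.LineTotal        AS LineAmount,
    ROW_NUMBER() OVER (
        PARTITION BY MONTH(h.OrderDate)
        ORDER BY d.LineTotal
    )                  AS RowNumInMonth
FROM Sales.SalesOrderHeader AS h
JOIN Sales.SalesOrderDetail AS d
    ON d.SalesOrderID = h.SalesOrderID;
```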
This is really cool…but I wonder if it wouldn’t be better to do this in R, where it’d take a lot less code. If you can’t reach out to R, though, this is a good way of visualizing results.
Here’s some code that can show you which indexes are unused or empty. An empty index just means there’s no data in that table right now; it could be populated later, so I would not drop an empty index. Besides, how much space would an empty index take?
My personal preference is to order the output by table and then index name; I also put a u.* at the end of the SELECT statement so the more interesting usage-stat columns can be seen.
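I don’t have the author’s exact script, but a hedged sketch of the idea using sys.dm_db_index_usage_stats looks like this; the LEFT JOIN keeps indexes that have never appeared in the usage DMV at all:

```sql
-- Indexes with no recorded reads since the last restart.
-- Note: the DMV resets on restart, so "unused" only means
-- unused since then -- see the caveat about month-end reports.
SELECT
    OBJECT_NAME(i.object_id) AS TableName,
    i.name                   AS IndexName,
    u.*
FROM sys.indexes AS i
LEFT JOIN sys.dm_db_index_usage_stats AS u
    ON  u.object_id   = i.object_id
    AND u.index_id    = i.index_id
    AND u.database_id = DB_ID()
WHERE OBJECTPROPERTY(i.object_id, 'IsUserTable') = 1
  AND (u.index_id IS NULL
       OR u.user_seeks + u.user_scans + u.user_lookups = 0)
ORDER BY TableName, IndexName;
```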
If an index truly is unused, it’s a waste of resources. The problem is, sometimes you’ll think an index is unused but it’s really a vital part of month-end reporting or used for the CEO’s favorite dashboard.
Dear SQL DBA,
What are your thoughts on the early adoption of new SQL Server versions? Specifically, if the business is willing to assume the risk of early adoption just to get one new feature that they can probably live without, should DBAs be happy and willing to assume that risk too? Or, is it our responsibility to “just say no” until it has been tested? I would like to hear about any experience you have with this. Thanks.
Bleeding in Edgeville
Go read Kendra’s answer because it’s a good one. My answer is, I want to be on the edge. I’ve run into V1 bugs and had to spike projects before they made it to production, but if there’s a good benefit to moving, and if your business side is supportive, I’d lean heavily toward fast upgrades.
While one transaction holds an exclusive lock on an object (e.g., an ongoing table update), another transaction executes a parallelized SELECT (…) FROM SourceTable using the NOLOCK hint, under the default SQL Server lock-based isolation level or higher. In this scenario, the SELECT query trying to access SourceTable will be blocked.
Executing a parallelized SELECT (…) INTO Table FROM SourceTable statement, specifically using the NOLOCK hint, under the default SQL Server lock-based isolation level or higher. In this scenario, other queries trying to access SourceTable will be blocked.
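A sketch of the second scenario, with placeholder table names of my own invention:

```sql
-- Session 1: a parallelized SELECT ... INTO with NOLOCK.
-- Under the bug described above, this can hold locks that
-- block other readers of SourceTable, despite the hint.
SELECT *
INTO   dbo.TargetTable
FROM   dbo.SourceTable WITH (NOLOCK)
OPTION (MAXDOP 4);  -- allow a parallel plan

-- Session 2: may be blocked until session 1 completes.
SELECT COUNT(*) FROM dbo.SourceTable;
```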
The current recommendation is not to install CU6 until the issue is fixed.
Extrapolating beyond the range of training data, especially in the case of time series data, is fine provided the data set is large enough.
Strong evidence is the same as a proof! Prediction intervals and confidence intervals are the same thing, just like statistical significance and practical significance.
These are some good things to think about if you’re getting into analytics.
Step 3: Run sp_Blitz @IgnorePrioritiesAbove = 50, @CheckUserDatabaseObjects = 0. This gets you the fast headline news, especially when used with the improved priorities in the latest version in our SQL Server download kit.
It’s important to have one or two high-level systems you can glance at to see top-level health of a system. This could be one of those systems.
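For reference, that step as a script; the procedure and both parameters are taken from the quote above, and sp_Blitz itself comes from the linked download kit:

```sql
-- Fast headline news: only priority 1-50 findings,
-- skipping the slower user-database object checks.
EXEC dbo.sp_Blitz
    @IgnorePrioritiesAbove    = 50,
    @CheckUserDatabaseObjects = 0;
```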
If a database has a high number of Virtual Log Files (VLFs), it can impact the speed of transaction log backups and database recovery. I once had a database with 75,000 VLFs. I didn’t even know what a VLF was at the time (hence having so many). After rebooting the server, a mission-critical database with extremely high SLAs took 45 minutes to complete recovery. We were in the process of opening a support case with Microsoft when the database finally came online. The next day, I contacted a Microsoft engineer and learned about VLFs.
There are a number of helpful tidbits here, so check it out.
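If you want to check your own VLF counts, one sketch of the idea on newer versions (sys.dm_db_log_info is available from SQL Server 2016 SP2 onward; older versions use DBCC LOGINFO instead) is:

```sql
-- Count VLFs per database; counts in the thousands can slow
-- log backups and crash recovery, as described above.
SELECT
    d.name   AS DatabaseName,
    COUNT(*) AS VLFCount
FROM sys.databases AS d
CROSS APPLY sys.dm_db_log_info(d.database_id) AS li
GROUP BY d.name
ORDER BY VLFCount DESC;
```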
Apache Zeppelin, a web-based notebook, enables interactive data analytics, including data ingestion, data discovery, and data visualization, all in one place. The Zeppelin interpreter concept allows any language or data-processing backend to be plugged into Zeppelin. Currently, Zeppelin supports many interpreters, such as Spark (Scala, Python, R, SparkSQL), Hive, JDBC, and others. Zeppelin can be configured against an existing Spark ecosystem and shares the SparkContext across Scala, Python, and R.
This links to a rather long post on how to set up and use all of these pieces. I’m more familiar with Jupyter than Zeppelin, but regardless of the notebook you choose, this is a good exercise to become familiar with the process.