Because today is Thanksgiving, there will be no curation. Curated SQL will return either tomorrow or Monday, depending upon when I wake up from my turkey coma.
Subham Sinha has an introductory-level tutorial on Apache Sqoop:
For Hadoop developers, the actual game starts after the data is loaded into HDFS. They play around with this data in order to gain the various insights hidden in the data stored in HDFS.
So, for this analysis, the data residing in relational database management systems needs to be transferred to HDFS. The task of writing MapReduce code for importing and exporting data from relational databases to HDFS is uninteresting and tedious. This is where Apache Sqoop comes to the rescue and removes their pain. It automates the process of importing and exporting the data.
Sqoop makes the life of developers easy by providing a CLI for importing and exporting data. They just have to provide basic information like database authentication, source, destination, operations, etc. It takes care of the remaining part.
Sqoop internally converts the command into MapReduce tasks, which are then executed over HDFS. It uses the YARN framework to import and export the data, which provides fault tolerance on top of parallelism.
In my experience, Sqoop does two things really well: first, it lets you move data from a relational database into HDFS (or Hive). Second, it lets you move data from HDFS (or Hive) into a staging table on a relational database. That can make Sqoop a useful part of an ETL process.
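If you want a feel for what that CLI looks like, here is a rough sketch of an import; the connection string, credentials, table, and target directory are all placeholders of my own, not taken from the tutorial.

```
# Pull a SQL Server table into HDFS with four parallel mappers (-P prompts for the password).
sqoop import \
  --connect "jdbc:sqlserver://dbserver:1433;databaseName=Sales" \
  --username etl_user -P \
  --table Orders \
  --target-dir /data/raw/orders \
  --num-mappers 4
```

Under the covers, that single command becomes a map-only job, with each mapper pulling its own slice of the table in parallel.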
Samir Behara looks at a few ways to customize SQL Operations Studio:
The Dark Theme has been one of the top requests from the SSMS developer community for a long time now, and the out-of-the-box dark themes in SQL Operations Studio come as a sigh of relief. I have enjoyed playing around with this tool so far and am currently exploring the customization options.
Read on to see how to switch over to a dark theme, or even how to create your own theme.
Based on a real-world scenario I encountered recently, here is the premise for this post. I’m putting it here at the top, so I won’t have to expand my post into a gazillion permutations for all imaginable types of scenarios and situations. However, I think you’ll be able to adapt the workflow to your particular setup.
- SQL Server is running on an Azure VM with a connection to the Internet.
- Stand-alone SQL Server – no clustering, no availability groups.
- SQL Server has its own service account.
- No web server installed on the machine.
- I don’t have an Enterprise CA.
- I can’t/won’t install certificates on my clients’ computers and servers.
Daniel has done yeoman’s work with this. I highly recommend giving it a read.
The biggest difference resides in partition function split and merge behavior. With B-Tree indexes, you can split and merge non-empty partitions. SQL Server would split or merge the data automatically, granted with the schema-modification (Sch-M) table lock held in place. Other sessions would be unable to access the table but at least split and merge would work.
This is not the case with columnstore indexes, where you would get an error when you try to split or merge non-empty partitions. There are several reasons for this limitation. Without diving very deep into columnstore index internals, I could say that the only option for doing a split or merge internally is rebuilding the columnstore index on the affected partitions. Fortunately, you can split and merge empty columnstore partitions, which allows you to work around the limitation and also implement the Sliding Window pattern and use partitioning to purge the data.
With SQL Server 2017, the logic gets a little simpler, as you can directly truncate partitions instead of shuffling them off to a separate table.
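As a quick sketch of that simpler path (the table, partition function, and boundary value here are hypothetical, not from the original post), you can empty the partition with a partition-level TRUNCATE and then merge the now-empty boundary:

```sql
-- Empty the oldest partition in place; no staging table or partition switch required.
TRUNCATE TABLE dbo.OrdersCS
WITH (PARTITIONS (1));

-- With the partition empty, the merge is a metadata operation rather than a columnstore rebuild.
ALTER PARTITION FUNCTION pfOrdersByMonth()
MERGE RANGE ('2017-01-01');
```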
Jason Squires has a series on database migrations using dbatools. The first part covers capacity planning:
Great results: I now know that I cannot move the database from one server to another on any of the drives shown without expanding the drive volume. Even though this was an awesome output, I wanted more. I knew which data and log drives I wanted to focus on, so I decided to dump everything into an array.
The second part is the actual migration:
In this post, I am going to show you the steps that I decided to take to do this particular migration. This migration was going to be a bit different. I am merging two systems together, so I had to really think the process through and ensure I got everything I needed. I also did not want to roll back due to the systems being legacy and unsupported. Some of the hosts were going to be pushing storage limits, so I had to ensure everything went flawlessly so I could decommission the legacy systems quickly.
Both of these have scripts and explanations attached, making it easy to follow along.
Joe Obbish explains a quirk of columnstore index compression:
The insert query now takes 3594 ms of CPU time and 2112 ms of elapsed time on my machine. The size of each rowgroup did not change. It’s still 2098320 bytes. Even though this is a parallel query, there’s no element of randomness in this case. In the query plan we can see that the source table was scanned in a serial zone and round robin distribution is used to distribute exactly half of the rows to each parallel thread.
This seems like a reasonable plan given that TOP forces a serial zone and we need to preserve order. It’s possible to rewrite the query to encourage a parallel scan of the source table, but that would introduce an order-preserving gather streams operator.
Click through for the full story.
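To make the shape of the query concrete, here is a rough sketch of the kind of statement under discussion; the table, columns, and row count are my own stand-ins, not Joe's actual repro:

```sql
-- TOP plus ORDER BY forces a serial zone feeding the parallel insert,
-- and rows are handed to the insert threads round robin.
INSERT INTO dbo.TargetCS WITH (TABLOCK) (ID, Val)
SELECT TOP (2097152) ID, Val
FROM dbo.SourceTable
ORDER BY ID
OPTION (MAXDOP 2);
```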
Arun Sirpal covers DBCC CHECKTABLE:
If you know about DBCC CHECKDB then most likely you will know about DBCC CHECKTABLE. Quite simply, this command performs primitive system-catalog consistency checks, per-table consistency checks on the single table specified, and cross-table consistency checks on indexed views that reference the specified table. (Microsoft SQL Server 2012 Internals, Chapter 14, Page 899, Paul Randal)
There is one concept about this command that I did not know about until recently. Intrigued? Read on.
I was going to say “read on” but Arun already used that line.
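If you have not run it before, the basic form is simple enough. A couple of common variations, against a hypothetical table:

```sql
-- Full logical and physical checks on a single table, suppressing informational messages.
DBCC CHECKTABLE ('dbo.SalesOrderDetail') WITH NO_INFOMSGS;

-- Physical-only checks: faster, but skips the logical consistency checks.
DBCC CHECKTABLE ('dbo.SalesOrderDetail') WITH PHYSICAL_ONLY;
```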
Jason Brimhall has an interesting story around a common error message:
Now is the time to start pulling out your hair. There is no syntax error in the query. Go ahead and look it over 10-15 times. I know I did.
Ok, if there is no syntax error, then what could possibly be the real problem? Is the database corrupt? Maybe a system table is corrupt? Grasping at straws here, but could it possibly even be some sort of royally screwed-up permissions?
Everything seems to be checking out properly. There is no corruption whatsoever. The laptop is soon to be launched at this point, right? Ok, maybe not launched, because this is a simple query. But had this been a production-related query that was rather intense and complicated, there really may be something getting launched as the frustration mounts.
Click through for the answer. Sometimes the error message is technically correct but utterly confounding.
Kathryn Bryant and M. Aaron Owen share their Kaggle experiences. First, Kathryn, et al:
The lifecycle of our project was a typical one. We started with data cleaning and basic exploratory data analysis, then proceeded to feature engineering, individual model training, and ensembling/stacking. Of course, the process in practice was not quite so linear and the results of our individual models alerted us to areas in data cleaning and feature engineering that needed improvement. We used root mean squared error (RMSE) of log Sale Price to evaluate model fit as this was the metric used by Kaggle to evaluate submitted models.
Data cleaning, EDA, feature engineering, and private train/test splitting (and one spline model!) were all done in R but we used Python for individual model training and ensembling/stacking. Using R and Python in these ways worked well, but the decision to split work in this manner was driven more by timing than anything else.
Then, Aaron, et al, share their process and findings:
Some variables had a moderate amount of missingness. For example, about 17% of the houses were missing the continuous variable, Lot Frontage, the linear feet of street connected to the property. Intuitively, attributes related to the size of a house are likely important factors regarding the price of the house. Therefore, dropping these variables seems ill-advised.
Our solution was based on the assumption that houses in the same neighborhood likely have similar features. Thus, we imputed the missing Lot Frontage values based on the median Lot Frontage for the neighborhood in which the house with missing value was located.
This is the major upside to Kaggle: it gives you the ability to work in a controlled environment with real data sets, which include real data problems. Yeah, the data’s much cleaner than you’d experience in production pretty much anywhere, but that lets you practice technique with a relatively low barrier to entry. H/T R-Bloggers (Kathryn | Aaron)
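The original work was in R and Python, but just to show the shape of that imputation step, here is how the neighborhood-median idea might look sketched in T-SQL; the table and column names are hypothetical:

```sql
-- Fill missing Lot Frontage values with the median for the house's neighborhood.
WITH NeighborhoodMedians AS
(
    SELECT DISTINCT
           Neighborhood,
           PERCENTILE_CONT(0.5) WITHIN GROUP (ORDER BY LotFrontage)
               OVER (PARTITION BY Neighborhood) AS MedianLotFrontage
    FROM dbo.AmesHousing
    WHERE LotFrontage IS NOT NULL
)
UPDATE h
SET h.LotFrontage = m.MedianLotFrontage
FROM dbo.AmesHousing AS h
INNER JOIN NeighborhoodMedians AS m
    ON m.Neighborhood = h.Neighborhood
WHERE h.LotFrontage IS NULL;
```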