Press "Enter" to skip to content

Curated SQL Posts

DBCC CHECKDB’s NOINDEX Option

Kendra Little explains the NOINDEX option of DBCC CHECKDB and how it relates to PHYSICAL_ONLY:

PHYSICAL_ONLY looks at page structures and makes sure something isn’t obviously wrong. If you have the database’s page verification option set to CHECKSUM (the default since SQL Server 2005, and a very good thing), it runs a checksum on the page and compares it to a checksum recorded in the header when the page was last modified.

When you use the PHYSICAL_ONLY option, you’re telling SQL Server to skip logical checks. It won’t do things like make sure the data in a clustered and nonclustered index on a table is consistent with one another.

I like the analogy, as well as the how-to guide on messing up your database.  But try not to do that to a production database…
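
For reference, here's a rough sketch of what the different flavors of the check look like when run from Python with pyodbc; the connection string and the database name MyDb are placeholders, not anything from Kendra's post.

```python
# Minimal sketch, assuming a reachable SQL Server instance.
# The connection string and "MyDb" are placeholders.
import pyodbc

conn = pyodbc.connect(
    "DRIVER={ODBC Driver 17 for SQL Server};SERVER=localhost;Trusted_Connection=yes",
    autocommit=True,  # run each DBCC batch outside an explicit transaction
)
cur = conn.cursor()

# Full check: physical plus logical consistency (slowest, most thorough).
cur.execute("DBCC CHECKDB (MyDb) WITH NO_INFOMSGS;")

# Physical-only: page structures and page checksums, logical checks skipped.
cur.execute("DBCC CHECKDB (MyDb) WITH PHYSICAL_ONLY;")

# NOINDEX: skips checks of nonclustered indexes on user tables.
cur.execute("DBCC CHECKDB (MyDb, NOINDEX);")
```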


SparkSession

Jules Damji shows off SparkSession:

Beyond a time-bounded interaction, SparkSession provides a single point of entry to interact with underlying Spark functionality and allows programming Spark with DataFrame and Dataset APIs. Most importantly, it curbs the number of concepts and constructs a developer has to juggle while interacting with Spark.

In this blog and its accompanying Databricks notebook, we will explore SparkSession functionality in Spark 2.0.

This looks to be an easier method for integrating various parts of Spark in one user session.  Read the whole thing.
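
If you want to poke at it yourself, here's a minimal PySpark sketch of the unified entry point; the app name, config setting, and sample data are placeholders of my own, not taken from the Databricks notebook.

```python
# Minimal sketch of the Spark 2.x entry point.
from pyspark.sql import SparkSession

spark = (
    SparkSession.builder
    .appName("SparkSessionDemo")                      # placeholder app name
    .config("spark.sql.shuffle.partitions", "8")      # illustrative config
    .getOrCreate()
)

# One object covers what SparkContext, SQLContext, and HiveContext used to split up.
df = spark.createDataFrame([(1, "a"), (2, "b")], ["id", "label"])
df.createOrReplaceTempView("demo")
spark.sql("SELECT COUNT(*) AS n FROM demo").show()

spark.stop()
```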


Concurrency In Hadoop Using ZooKeeper

Garima Dosi discusses an architecture using ZooKeeper to introduce some limited protections for concurrent access in HDFS:

The ZooKeeper nodes topology as per the design looks like this. ZooKeeper works like a filesystem starting with a root directory followed with several nodes (analogous to folders) and finally the data nodes (analogous to files). The circles in the image represent the name of a property/folder that we are trying to maintain and the rounded boxes are the values/files for those properties/folders.

So, the image above shows that the “global version” is 100 and there are 10 & 20 read requests being executed on versions 98 and 99 respectively and since there is a write request in progress, no other write request would be taken up until it completes.

This feels a little overly complicated to me.
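
To make the moving parts a bit more concrete, here's a rough sketch of the same idea using the kazoo Python client; the znode paths, starting version, and lock name are my own illustration rather than Garima's exact layout.

```python
# Rough sketch only: znode layout and version numbers are illustrative.
from kazoo.client import KazooClient

zk = KazooClient(hosts="zk1:2181")  # placeholder ZooKeeper ensemble
zk.start()

# A znode holding the current "global version" of the dataset.
zk.ensure_path("/dataset")
if not zk.exists("/dataset/global_version"):
    zk.create("/dataset/global_version", b"100")

# Readers register against the version they are reading (ephemeral nodes,
# so the entry vanishes if a reader dies), letting writers see what's in use.
version, _ = zk.get("/dataset/global_version")
zk.create("/dataset/readers/%s/r-" % version.decode(), b"",
          ephemeral=True, sequence=True, makepath=True)

# Writers queue behind a single lock, so only one write proceeds at a time.
with zk.Lock("/dataset/write_lock", "writer-1"):
    zk.set("/dataset/global_version",
           str(int(version.decode()) + 1).encode())

zk.stop()
```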


Analytic Tool Usage

Alex Woodie notes the increased popularity of Python for data analysis:

According to the results of the 2016 survey, R is the preferred tool for 42% of analytics professionals, followed by SAS at 39% and Python at 20%. While Python’s placing may at first appear to relegate the language to Bronze Medal status, it’s the delta here that really matters.

It’s interesting to see the breakdowns of who uses which language, comparing across industry, education, work experience, and geographic lines.


Locality

Kyle Kingsbury explains that sequential, serializable, and strictly serializable consistency models cannot provide locality:

We often speak of locality as a property of subhistories for a particular object x: “H|x is strictly serializable, but H is not”. This is a strange thing to say indeed, because the transactions in H may not meaningfully exist in H|x. What does it mean to run [(A enq y 1) (A enq x 1)] on x alone? If we restrict ourselves to those transactions that do apply to a single object, we find that those transactions still serialize in the full history.

So in a sense, locality is about the scope of legal operations. If we take single enqueue and dequeue operations over two queues x and y, the space of operations on the composite system of x and y is just the union of operations on x and those on y. Linearizability can also encompass transactions, so long as they are restricted to a single system at a time. Our single-key, multi-operation transactions still satisfied strict serializability even in the composite system. However, the space of transactions on a composite system is more than the union of transactions on each system independently. It’s their product.

Here’s the part where I pretend that of course I understand what Kyle wrote…  Seriously, though, this is a very interesting read.
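
My loose way of picturing the union-versus-product point, in a few lines of Python (this is my framing, not Kyle's formalism):

```python
# Toy enumeration of union vs. product; purely illustrative.
ops_x = ["enq x 1", "deq x"]          # single operations on queue x
ops_y = ["enq y 1", "deq y"]          # single operations on queue y

# Single operations on the composite system: just the union of the two sets.
composite_ops = ops_x + ops_y                           # 4 operations

# Two-operation transactions on each object alone, then their union.
txns_x = [(a, b) for a in ops_x for b in ops_x]         # 4 transactions
txns_y = [(a, b) for a in ops_y for b in ops_y]         # 4 transactions
union_of_txns = txns_x + txns_y                         # 8 transactions

# Two-operation transactions on the composite system: pairs may mix objects,
# e.g. ("enq y 1", "enq x 1"), which appears in no single-object history.
composite_txns = [(a, b) for a in composite_ops for b in composite_ops]  # 16

print(len(composite_ops), len(union_of_txns), len(composite_txns))  # 4 8 16
```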


Min And Max Server Memory

Kevin Hill explains the minimum and maximum server memory options in SQL Server:

Min Server Memory seems to get the most bad information spread around.   SQL Server does NOT automatically grab memory up to the Min setting when it starts.  However, once it gets there, it doesn’t give back.

Back to the car analogy…if you start up and head off down the road at 20 mph, you are above the default (0), but not at the max (100+, depending on the car and the tires…).  If you set the cruise control, you can accelerate up and down above 20, but you won’t go below that unless you hit the brakes.

I do like the car analogy in his post.
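
For reference, here's a minimal Python/pyodbc sketch of setting those two options with sp_configure; the connection string and memory values are placeholders, and you'd size them for your own server.

```python
# Minimal sketch, assuming sysadmin rights; values in MB are illustrative only.
import pyodbc

conn = pyodbc.connect(
    "DRIVER={ODBC Driver 17 for SQL Server};SERVER=localhost;Trusted_Connection=yes",
    autocommit=True,
)
cur = conn.cursor()

# min/max server memory are advanced options, so expose them first.
cur.execute("EXEC sp_configure 'show advanced options', 1; RECONFIGURE;")

# Floor: once SQL Server's memory use grows past this, it won't drop back below it.
cur.execute("EXEC sp_configure 'min server memory (MB)', 4096; RECONFIGURE;")

# Ceiling: leave headroom for the OS and anything else running on the box.
cur.execute("EXEC sp_configure 'max server memory (MB)', 28672; RECONFIGURE;")
```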


Vendors And Privileges

Dave Mason has a good post about onerous third-party software requirements:

If you’re not familiar with SQL Server, the “sysadmin” server role conveys the highest level of authorization available to a login. “db_owner” also conveys a high level of authorization. Both requirements are far more than what is necessary and violate the Principle of Least Privilege. While I strongly disagree with the install-time requirements, I can at least understand the argument: it’s a one-time activity. But elevated permissions at run time are inexcusable.

Most of the time, software companies publish that because they want to avoid the hassle of support calls when people don’t grant privileges correctly.  I’ve worked with one third-party vendor in the past who sent me the actual permissions requirements after I pestered them a bit, as I wasn’t going to let just anyone have sysadmin on my servers.  But that’s not a scalable approach and does nothing for the next guy who reads the documentation and just gives sysadmin away.
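
Not from Dave's post, but if you want to see who already holds sysadmin on an instance before a vendor asks for more, a quick query against the server role catalog views does the trick (the connection string is a placeholder):

```python
# Quick audit sketch: list logins that are members of the sysadmin server role.
import pyodbc

conn = pyodbc.connect(
    "DRIVER={ODBC Driver 17 for SQL Server};SERVER=localhost;Trusted_Connection=yes"
)
cur = conn.cursor()
cur.execute("""
    SELECT m.name AS login_name
    FROM sys.server_role_members AS rm
    JOIN sys.server_principals AS r ON r.principal_id = rm.role_principal_id
    JOIN sys.server_principals AS m ON m.principal_id = rm.member_principal_id
    WHERE r.name = 'sysadmin';
""")
for row in cur.fetchall():
    print(row.login_name)
```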
