Press "Enter" to skip to content

Day: March 21, 2018

Kafka Partitioning Strategies

Amy Boyle shares some thoughts on Kafka partitioning strategy:

If you have enough load that you need more than a single instance of your application, you need to partition your data. The producer clients decide which topic partition data ends up in, but it’s what the consumer applications will do with that data that drives the decision logic. If possible, the best partitioning strategy to use is random.

However, you may need to partition on an attribute of the data if:

  • The consumers of the topic need to aggregate by some attribute of the data.

  • The consumers need some sort of ordering guarantee.

  • Another resource is a bottleneck and you need to shard data.

  • You want to concentrate data for the efficiency of storage and/or indexing.

Good advice.
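To make the two strategies concrete, here is a minimal sketch using the standard Java producer client from Scala. The topic name, key, and serializer settings are invented for illustration:

```scala
import java.util.Properties
import org.apache.kafka.clients.producer.{KafkaProducer, ProducerRecord}

val props = new Properties()
props.put("bootstrap.servers", "localhost:9092")
props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer")
props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer")

val producer = new KafkaProducer[String, String](props)

// No key: the default partitioner spreads records across partitions,
// which is the "random" strategy recommended above.
producer.send(new ProducerRecord[String, String]("events", "click"))

// Keyed: records with the same key always hash to the same partition,
// giving per-key ordering and making per-key aggregation possible.
producer.send(new ProducerRecord[String, String]("events", "customer-42", "click"))

producer.close()
```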

Single-Node Hadoop 3 Installation

Mark Litwintschik has a fairly simple guide for installing Hadoop 3 on a single node for testing:

This post is meant to help people explore Hadoop 3 without feeling they need 50+ machines to do so. I’ll be using a fresh installation of Ubuntu 16.04.2 LTS on a single computer. The machine has an Intel Core i5-7300HQ CPU clocked at 2.50GHz, 8 GB of RAM and a 200 GB mechanical disk drive. I intentionally picked a low-end machine to demonstrate that not much is needed to try out Hadoop in a learning exercise.

Please do be mindful that these instructions are aimed at building a test environment that is cut off from the outside world. Beyond the fact that this is a single-machine installation of software meant to run on multiple machines, significant changes would be needed to turn these instructions into production installation notes.

It’s a useful guide if you’re not interested in going with one of the platform vendors like Hortonworks or Cloudera.

Reproducibility And ML Projects

Pete Warden explains some of the difficulties around reproducing ML models:

Why does this all matter? I’ve had several friends contact me about their struggles reproducing published models as baselines for their own papers. If they can’t get the same accuracy that the original authors did, how can they tell if their new approach is an improvement? It’s also clearly concerning to rely on models in production systems if you don’t have a way of rebuilding them to cope with changed requirements or platforms. At that point your model moves from being a high-interest credit card of technical debt to something more like what a loan-shark offers. It’s also stifling for research experimentation; since making changes to code or training data can be hard to roll back it’s a lot more risky to try different variations, just like coding without source control raises the cost of experimenting with changes.

It’s not all doom and gloom; there are some notable efforts around reproducibility happening in the community. One of my favorites is the TensorFlow Benchmarks project Toby Boyd’s leading. He’s made it his team’s mission not only to lay out exactly how to train some of the leading models from scratch with high training speed on a lot of different platforms, but also to ensure that the models train to the expected accuracy. I’ve seen him sweat blood trying to get models up to that precision, since variations in any of the steps I listed above can affect the results and there’s no easy way to debug what the underlying cause is, even with help from the authors. It’s also a never-ending job, since changes in TensorFlow, in GPU drivers, or even datasets, can all hurt accuracy in subtle ways. By doing this work, Toby’s team helps us spot and fix bugs caused by changes in TensorFlow in the models they cover, and chase down issues caused by external dependencies, but it’s hard to scale beyond a comparatively small set of platforms and models.

I see two separate problems:  reproducing the process and reproducing the result.  Reproducing the process is why you want to use something like notebooks:  it’s a proof that you (and others!) can generate the same type of model the same way multiple times.  Reproducing the result is harder given the stochastic nature of ML, but if you’re following the same process, you’re at least more likely to end up close to the same result.

Speeding Up Power BI Aggregations With Primary Keys

Chris Webb has an interesting use case for adding primary keys on lookup tables:

As you can see, the Property Type column from the #"Price Paid" query contains single-letter codes describing the type of property sold in each transaction; the Property Type column from #"Property Types" contains a distinct list of the same codes and acts as a dimension table. Again there’s nothing interesting going on in this query.

The problems start when you try to join data from these two queries using a Merge and then, for each row in #"Property Types", show the sum of the Price Paid column from #"Price Paid".

Although baseline performance is bad, Chris shows a way of improving that performance significantly.
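Chris’s fix is specific to M and the Power Query engine, but the underlying query shape (merge a large fact table to a small lookup table, then aggregate per lookup row) is worth recognizing anywhere. As a rough analogy only, here is that same join-then-aggregate shape in Spark; the column names come from the post, while the file paths and session setup are invented for illustration:

```scala
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions.sum

val spark = SparkSession.builder.appName("PricePaidExample").getOrCreate()

// Large fact table: one row per property transaction (hypothetical path).
val pricePaid = spark.read.option("header", "true")
  .option("inferSchema", "true")
  .csv("price_paid.csv")

// Small lookup table: one row per single-letter property type code.
val propertyTypes = spark.read.option("header", "true")
  .csv("property_types.csv")

// Join each transaction to its property type, then total Price Paid per type.
val totals = pricePaid
  .join(propertyTypes, Seq("Property Type"))
  .groupBy("Property Type")
  .agg(sum("Price Paid").as("Total Price Paid"))

totals.show()
```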

XML In Scala

Mahesh Chand Kandpal shows how to create XPath statements in Scala:

We called the \() method on the XML element and asked it to look for all symbol elements.  It returns an instance of scala.xml.NodeSeq, which represents a collection of XML nodes.

The \() method looks only for elements that are direct descendants of the target element (i.e., symbol).  If we want to search through all the elements in the hierarchy starting from the target element, the \\() method is used.

Check it out, especially if you’re working with Spark, as you never know when a rogue XML file will head your way.
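A minimal, self-contained sketch of the distinction described above (the sample document is invented for illustration):

```scala
import scala.xml.{NodeSeq, XML}

val doc = XML.loadString(
  """<portfolio>
    |  <symbol>AAPL</symbol>
    |  <watchlist>
    |    <symbol>MSFT</symbol>
    |  </watchlist>
    |</portfolio>""".stripMargin)

// \ matches direct children only, so the nested symbol is not returned.
val direct: NodeSeq = doc \ "symbol"

// \\ searches the whole hierarchy beneath the target element.
val all: NodeSeq = doc \\ "symbol"

println(direct.map(_.text))  // List(AAPL)
println(all.map(_.text))     // List(AAPL, MSFT)
```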

Remote DAC And Vulnerability Assessments

Max Vernon points out a SQL Server Management Studio Vulnerability Assessment check which seems somewhat incomplete:

Certainly, you’d want to ensure the port for the DAC is not available to the Internet, but hopefully if you’re reading this blog you already know how silly it would be to open SQL Server to the Internet.

Assuming you don’t have the port open to the Internet, it’s very likely the DAC will not be of any use at all if you disable Remote Admin Connections as advised in the Vulnerability Assessment. My advice is to ignore this warning completely and configure the DAC to allow remote connections. Microsoft TechNet has documentation about using the DAC, and says to configure it for remote connections by logging onto the server locally first, then configuring SQL Server to allow remote DAC connections, which seems a bit like putting the cart before the horse.

Read the whole thing.  I agree with Max’s assessment that if there are some basic controls around your instance (like not letting SQL Server be Internet-accessible, putting SQL Server instances in a protected subnet, etc.), remote DAC is definitely useful enough to keep running.

Query Store Deadlock When Creating Databases

Andy Mallon ran into a weird issue with Query Store:

I tried the configuration a couple of times just to make sure it wasn’t a one-off problem. I installed the latest Cumulative Update (CU). I made sure nothing else was connected to the instance. I rebooted my machine. I restarted services. I banged my head against the wall. I asked a friend if I was insane or stupid. After confirming that I was both, my friend Aaron Bertrand (blog|twitter) confirmed it wasn’t a problem for him.

I discovered I could reproduce the problem simply by running the same simple statement that SSRS used when creating the ReportServer database. SSRS uses a non-standard collation, and specifying that collation seems to be the difference in causing the deadlock. Then I discovered that specifying ANY non-standard collation was causing the deadlock. This had nothing to do with SSRS, and everything to do with non-default collations.

Vote for his UserVoice item too.

Adding Instance Name To The AlwaysOn_health Session

Jonathan Kehayias shows how to add server_instance_name to the AlwaysOn_health event session to make Availability Group troubleshooting easier:

The AlwaysOn_health event session in Extended Events is intended to make analyzing problems with Availability Groups possible after they have occurred.  While this event session goes a long way towards making it possible to piece together the puzzle of what went wrong in a lot of situations, it can still be a difficult task.  One of the things I wish Microsoft had included in the AlwaysOn_health event session definition is the sqlserver.server_instance_name action for every event, and this is something that I usually recommend clients add to each of their AG servers using a script after I work with them the first time.  Why would this be useful, since if the files come from a specific server we should know the events are for that server, right?  Well, when we are working with AG configurations with more than two servers and trying to see the big picture of what is happening across the servers, it can be difficult to follow timelines with multiple files from multiple servers open.  It’s not impossible, but it does make things more difficult.

Click through to see how to do this through the UI or via T-SQL.

Performance Concern: Anti-Join And Top

Paul White explains a scenario in which an innocent-looking execution plan can hide something sinister:

Not every execution plan containing an apply anti join with a Top (1) operator on its inner side will be problematic. Nevertheless, it is a pattern to recognise and one which almost always requires further investigation.

The four main elements to look out for are:

  • A correlated nested loops (apply) anti join

  • A Top (1) operator immediately on the inner side

  • A significant number of rows on the outer input (so the inner side will be run many times)

  • A potentially expensive subtree below the Top

Read the whole thing.  This is a great way to wrap up the series.
