Category: Hadoop

Pulsar

Alex Woodie reports that Yahoo is open sourcing Pulsar:

Pulsar uses Apache BookKeeper (committed by Yahoo to open source in 2011) as its durable storage mechanism. “With BookKeeper, applications can create many independent logs, called ledgers,” Pulsar’s project page on GitHub says. “A ledger is an append-only data structure with a single writer that is assigned to multiple storage nodes (or bookies) and whose entries are replicated to multiple of these nodes.”

Pulsar uses brokers to serve topics. Each topic is assigned to a broker, and an individual broker can serve thousands of topics, Yahoo says. “The broker accepts messages from writers, commits them to a durable store, and dispatches them to readers,” Yahoo says.
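
To make the broker and topic model concrete, here’s a minimal produce-and-consume sketch using the Python client (the pulsar-client package). The broker URL, topic, and subscription names are placeholders assuming a local standalone broker; they aren’t from either article:

    import pulsar

    # Connect to a standalone broker on the default port (placeholder URL).
    client = pulsar.Client('pulsar://localhost:6650')

    # Subscribe first so the broker retains the message for this subscription.
    consumer = client.subscribe('my-topic', subscription_name='my-subscription')

    # The broker accepts the message, commits it to durable storage, and dispatches it.
    producer = client.create_producer('my-topic')
    producer.send(b'Hello, Pulsar')

    msg = consumer.receive(timeout_millis=5000)
    print(msg.data())  # b'Hello, Pulsar'
    consumer.acknowledge(msg)

    client.close()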

My biases would lead me to still go with Kafka over Pulsar, but it’d be interesting to see a good comp between the two.

Power BI And Impala

Justin Kestelyn describes the Impala Connector for Power BI Desktop:

Note that the connector currently only supports Import mode, which requires downloading the query output data to the local data model. In future updates, we will enhance the connector with DirectQuery capabilities, as well as with support for refresh scenarios via the Power BI Gateway. [Ed. Note: As of the August 2016 update, the Impala Connector also supports DirectQuery mode, which means you are always viewing the most up-to-date data. The functionality for both periodic refreshes and DirectQuery mode requires the Power BI Gateway running either on-premises or in Microsoft Azure.]

Enabling Power BI connectivity to Impala has been a very frequently requested capability from our customers. We encourage you to give it a try and share with us any feedback or issues that you encounter via the “Send a Frown” feature in Power BI Desktop.

Good stuff.

Ambari And Active Directory

Jon Morisi documents his efforts in getting Ambari to play nicely with Active Directory over Kerberos:

You then need to trust the certificate on all the Linux hosts. From the IBM article:

  1. Create ‘/etc/pki/ca-trust/source/anchors/activedirectory.pem’ and paste the certificate contents.

  2. Trust the CA cert: sudo update-ca-trust enable; sudo update-ca-trust extract; sudo update-ca-trust check

  3. Trust the CA cert in Java: mycert=/etc/pki/ca-trust/source/anchors/activedirectory.pem; sudo keytool -importcert -noprompt -storepass changeit -file ${mycert} -alias ad -keystore /etc/pki/java/cacerts

  4. Finally, make sure every node in your cluster has access to the AD host.
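
A quick sanity check (not part of Jon’s post) is to list the alias back out of the Java keystore. Here’s a minimal sketch that assumes the default cacerts path, the stock changeit password, and the ad alias from step 3:

    import subprocess

    # List the "ad" alias from the Java keystore to confirm the import worked
    # (assumes keytool is on the PATH and the defaults used in the steps above).
    result = subprocess.run(
        ["keytool", "-list", "-keystore", "/etc/pki/java/cacerts",
         "-storepass", "changeit", "-alias", "ad"],
        capture_output=True, text=True,
    )
    print(result.stdout or result.stderr)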

LDAP support is a key part of setting up a production Hadoop cluster.

Autocompleter For Hue

The Hue team shows off their new SQL editor’s autocomplete capabilities:

We’ve fine-tuned the live autocompletion for a better experience and we’ve introduced some options under the editor settings where you can turn off live autocompletion or disable the autocompleter altogether (if you’re adventurous). To access these settings open the editor and focus on the code area, press CTRL + , (or on Mac CMD + ,) and the settings will appear.

The autocompleter talks to the backend to get data for tables, databases, etc. By default it will time out after 5 seconds, but once the data has been fetched it’s cached for the next time around. The timeout can be adjusted in the Hue server configuration.

I haven’t used Hue in a while, but that’s a nice feature.  Just don’t use ANSI-89 syntax like in that first example…

HBase Performance Tips

Ashish Thapliyal has nine tips for optimizing HBase performance:

Do your row keys look like 1, 2, 3, … or 00000001, 00000002, 00000003, or do you have a row key that starts with a date-time (beginning with the year)? If you answered yes, the bad news is that HBase will not scale for you. You have many options to improve HBase performance, but nothing will compensate for a bad row key design.

When the row key is in sorted order, all the writes go to the same region while the other regions sit idle doing nothing. You will see one of your nodes heavily stressed trying to cope with all the writes, whereas the other nodes are thanking you for not giving them enough work. So always salt your keys by adding random numbers or characters to the row key prefix.

If you are using Phoenix on top of HBase, Phoenix provides a way to transparently salt the row key with a salting byte for a particular table. You need to specify this at table creation time via the table property “SALT_BUCKETS”; typical practice is to set SALT_BUCKETS to the number of region servers.
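
Salting by hand is straightforward. Here’s a rough sketch of the idea; the bucket count, key format, and hash choice are my own illustration rather than anything from HBase or Phoenix:

    import hashlib

    SALT_BUCKETS = 16  # illustrative; often matched to the number of region servers

    def salted_row_key(row_key):
        # Derive a stable bucket from a hash of the key and prefix it, so that
        # sequential keys spread across regions instead of hot-spotting one.
        bucket = int(hashlib.md5(row_key.encode()).hexdigest(), 16) % SALT_BUCKETS
        return "{:02d}|{}".format(bucket, row_key).encode()

    # Sequential, time-based keys now land in different buckets (and regions).
    for key in ("20160812-0001", "20160812-0002", "20160812-0003"):
        print(salted_row_key(key))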

I think the biggest one is to design your data structures correctly.  This is particularly important if you’re coming at it from a relational background and are thinking in terms of what makes relational databases fast.

MapReduce

I talk about Hadoop a good bit on Curated SQL.  Therefore, I think it’s worth mentioning the original MapReduce paper that Jeffrey Dean and Sanjay Ghemawat published in 2004:

MapReduce is a programming model and an associated implementation for processing and generating large data sets. Users specify a map function that processes a key/value pair to generate a set of intermediate key/value pairs, and a reduce function that merges all intermediate values associated with the same intermediate key. Many real world tasks are expressible in this model, as shown in the paper.

Programs written in this functional style are automatically parallelized and executed on a large cluster of commodity machines. The run-time system takes care of the details of partitioning the input data, scheduling the program’s execution across a set of machines, handling machine failures, and managing the required inter-machine communication. This allows programmers without any experience with parallel and distributed systems to easily utilize the resources of a large distributed system.

Our implementation of MapReduce runs on a large cluster of commodity machines and is highly scalable: a typical MapReduce computation processes many terabytes of data on thousands of machines. Programmers find the system easy to use: hundreds of MapReduce programs have been implemented and upwards of one thousand MapReduce jobs are executed on Google’s clusters every day.
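
The canonical example from the paper is word count. Here’s a minimal local sketch of the model in Python; the grouping step stands in for the partition-and-shuffle phase that the real runtime performs across many machines:

    from collections import defaultdict

    def map_fn(doc_id, text):
        # Emit an intermediate (word, 1) pair for every word in the document.
        for word in text.split():
            yield word, 1

    def reduce_fn(word, counts):
        # Merge all intermediate values associated with the same key.
        yield word, sum(counts)

    documents = {"doc1": "the quick brown fox", "doc2": "the lazy dog and the fox"}

    # "Shuffle": group intermediate values by key before reducing.
    intermediate = defaultdict(list)
    for doc_id, text in documents.items():
        for word, count in map_fn(doc_id, text):
            intermediate[word].append(count)

    for word in sorted(intermediate):
        for w, total in reduce_fn(word, intermediate[word]):
            print(w, total)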

If you’ve never read this paper before, today might be a good day to do so.

Ambari 2.4

Jeff Sposetti discusses improvements in Ambari 2.4:

Reduce time to troubleshoot problems. Apache Hadoop components create a lot of log data. Accessing that log data to understand what the component is telling you, especially when issues arise, is critical. Apache Ambari includes a new Log Search service that provides agents for log collection and delivers a custom UI for searching those logs. This is essential to providing a streamlined approach to searching for stack traces and exceptions across all nodes in the cluster.

I have enjoyed watching Ambari mature as a product.

Flink And Kafka Streams

Neha Narkhede and Stephan Ewen compare Apache Flink versus Kafka Streams:

Before Flink, users of stream processing frameworks had to make hard choices and trade off either latency, throughput, or result accuracy. Flink was the first open source framework (and still the only one) that has been demonstrated to deliver (1) throughput on the order of tens of millions of events per second in moderate clusters, (2) sub-second latency that can be as low as a few tens of milliseconds, (3) guaranteed exactly once semantics for application state, as well as exactly once end-to-end delivery with supported sources and sinks (e.g., pipelines from Kafka to Flink to HDFS or Cassandra), and (4) accurate results in the presence of out of order data arrival through its support for event time. Flink is based on a cluster architecture with master and worker nodes. Flink clusters are highly available, and can be deployed standalone or with resource managers such as YARN and Mesos. This architecture is what allows Flink to use a lightweight checkpointing mechanism to guarantee exactly-once results in the case of failures, as well as to allow easy and correct re-processing via savepoints without sacrificing latency or throughput. Finally, Flink is also a full-fledged batch processing framework, and, in addition to its DataStream and DataSet APIs (for stream and batch processing respectively), offers a variety of higher-level APIs and libraries, such as CEP (for Complex Event Processing), SQL and Table (for structured streams and tables), FlinkML (for Machine Learning), and Gelly (for graph processing). Flink has been proven to run very robustly in production at very large scale by several companies, powering applications that are used every day by end customers.

The upshot is that the two products don’t do exactly the same thing, and there might be room in your organization for the two of them.

Hadoop For .NET Developers

Elton Stoneman has a new Pluralsight course out:

My latest Pluralsight course is out now:

Hadoop for .NET Developers

It takes you through running Hadoop on Windows and using .NET to write MapReduce queries – proving that you can do Big Data on the Microsoft stack.

The course has five modules, starting with the architecture of Hadoop and working through a proof-of-concept approach, evaluating different options for running Hadoop and integrating it with .NET.

I’ve liked Elton’s courses, as he’s one of the few trainers who really takes the time to show how you can integrate .NET languages into a Hadoop ecosystem; the general philosophy is “go learn Java and Scala and Python and …”

Self-Paced HDInsight Training

Ashish Thapliyal introduces three EdX courses on HDInsight:

Implementing Real-Time Analysis with Hadoop in Azure HDInsight

In this four week course, you’ll learn how to implement low-latency and streaming Big Data solutions using Hadoop technologies like HBase, Storm, and Spark on Microsoft Azure HDInsight.

Course Syllabus

Use HBase to implement low-latency NoSQL data stores.
Use Storm to implement real-time streaming analytics solutions.
Use Spark for high-performance interactive data analysis.

These are free courses on EdX.  I personally wouldn’t bother getting the certificate, but hey, it’s your money.
