Autocompleter For Hue

Kevin Feasel

2016-09-06

Hadoop

The Hue team shows off their new SQL editor’s autocomplete capabilities:

We’ve fine-tuned the live autocompletion for a better experience and we’ve introduced some options under the editor settings where you can turn off live autocompletion or disable the autocompleter altogether (if you’re adventurous). To access these settings, open the editor and focus on the code area, press CTRL + , (or on Mac CMD + ,), and the settings will appear.

The autocompleter talks to the backend to get data for tables, databases, etc. By default it will time out after 5 seconds, but once the data has been fetched it’s cached for the next time around. The timeout can be adjusted in the Hue server configuration.

I haven’t used Hue in a while, but that’s a nice feature.  Just don’t use ANSI-89 syntax like in that first example…

HBase Performance Tips

Ashish Thapliyal has nine tips for optimizing HBase performance:

Do your row keys look like 1, 2, 3… or 00000001, 00000002, 00000003, or do you have a row key that starts with a date-time (beginning with the year)? If you answered yes, the bad news is that HBase will not scale for you. You have plenty of options to improve HBase performance, but nothing will compensate for bad row key design.

When the row key is in sorted order, all the writes go to the same region and the other regions sit idle doing nothing. You will see one of your nodes heavily stressed trying to cope with all the writes, while the other nodes thank you for not giving them enough work. So always salt your keys by adding random numbers or characters to the row key prefix.

If you are using Phoenix on top of HBase, Phoenix provides a way to transparently salt the row key with a salting byte for a particular table. You specify this at table creation time via the table property “SALT_BUCKETS”; typical practice is to set SALT_BUCKETS to the number of region servers.
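To make the salting idea a bit more concrete, here’s a quick sketch of my own (not Ashish’s) that prefixes a row key with a hash-derived salt bucket before writing through the HBase Java client. The table name, column family, and bucket count are made up for illustration; Phoenix users would instead declare SALT_BUCKETS when creating the table.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.client.Table;
import org.apache.hadoop.hbase.util.Bytes;

public class SaltedWriter {
    // Rule of thumb from the article: match the bucket count to the number of region servers.
    private static final int SALT_BUCKETS = 8;

    // Prepend a deterministic salt so sequential keys spread across regions.
    static byte[] saltedKey(String rowKey) {
        int bucket = Math.abs(rowKey.hashCode()) % SALT_BUCKETS;
        return Bytes.toBytes(String.format("%02d-%s", bucket, rowKey));
    }

    public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create();
        try (Connection conn = ConnectionFactory.createConnection(conf);
             Table table = conn.getTable(TableName.valueOf("events"))) {
            // A monotonically increasing key like this timestamp would otherwise hotspot one region.
            Put put = new Put(saltedKey("20160906-000001"));
            put.addColumn(Bytes.toBytes("d"), Bytes.toBytes("payload"), Bytes.toBytes("..."));
            table.put(put);
        }
    }
}

Using a hash of the key rather than a purely random prefix means a point lookup can recompute the salt, while range scans have to fan out across all of the buckets.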

I think the biggest one is to design your data structures correctly.  This is particularly important if you’re coming at it from a relational background and are thinking in terms of what makes relational databases fast.

MapReduce

Kevin Feasel

2016-09-05

Hadoop

I talk about Hadoop a good bit on Curated SQL.  Therefore, I think it’s worth mentioning the original MapReduce paper that Jeffrey Dean and Sanjay Ghemawat published in 2004:

MapReduce is a programming model and an associated implementation for processing and generating large data sets. Users specify a map function that processes a key/value pair to generate a set of intermediate key/value pairs, and a reduce function that merges all intermediate values associated with the same intermediate key. Many real world tasks are expressible in this model, as shown in the paper.

Programs written in this functional style are automatically parallelized and executed on a large cluster of commodity machines. The run-time system takes care of the details of partitioning the input data, scheduling the program’s execution across a set of machines, handling machine failures, and managing the required inter-machine communication. This allows programmers without any experience with parallel and distributed systems to easily utilize the resources of a large distributed system.

Our implementation of MapReduce runs on a large cluster of commodity machines and is highly scalable: a typical MapReduce computation processes many terabytes of data on thousands of machines. Programmers find the system easy to use: hundreds of MapReduce programs have been implemented and upwards of one thousand MapReduce jobs are executed on Google’s clusters every day.
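The canonical illustration of the model is word count. The paper presents it in pseudocode; here is a rough equivalent against the Hadoop Java API (my sketch, not the paper’s): the map function emits (word, 1) intermediate pairs and the reduce function sums the counts for each word.

import java.io.IOException;
import java.util.StringTokenizer;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class WordCount {
    // Map: for each input line, emit a (word, 1) pair per token.
    public static class TokenizerMapper extends Mapper<Object, Text, Text, IntWritable> {
        private static final IntWritable ONE = new IntWritable(1);
        private final Text word = new Text();

        public void map(Object key, Text value, Context context)
                throws IOException, InterruptedException {
            StringTokenizer itr = new StringTokenizer(value.toString());
            while (itr.hasMoreTokens()) {
                word.set(itr.nextToken());
                context.write(word, ONE);
            }
        }
    }

    // Reduce: sum every count emitted for the same word.
    public static class IntSumReducer extends Reducer<Text, IntWritable, Text, IntWritable> {
        public void reduce(Text key, Iterable<IntWritable> values, Context context)
                throws IOException, InterruptedException {
            int sum = 0;
            for (IntWritable val : values) {
                sum += val.get();
            }
            context.write(key, new IntWritable(sum));
        }
    }

    public static void main(String[] args) throws Exception {
        Job job = Job.getInstance(new Configuration(), "word count");
        job.setJarByClass(WordCount.class);
        job.setMapperClass(TokenizerMapper.class);
        job.setCombinerClass(IntSumReducer.class);
        job.setReducerClass(IntSumReducer.class);
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(IntWritable.class);
        FileInputFormat.addInputPath(job, new Path(args[0]));
        FileOutputFormat.setOutputPath(job, new Path(args[1]));
        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}

The framework handles everything else the abstract describes: partitioning the input, scheduling the map and reduce tasks, and rerunning them on failure.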

If you’ve never read this paper before, today might be a good day to do so.

Ambari 2.4

Jeff Sposetti discusses improvements in Ambari 2.4:

Reduce time to troubleshoot problems. Apache Hadoop components create a lot of log data. Accessing that log data to understand what the component is telling you, especially when issues arise, is critical. Apache Ambari includes a new Log Search service that provides agents for log collection and delivers a custom UI for searching those logs. This is essential to providing a streamlined approach to searching for stack traces and exceptions across all nodes in the cluster.

I have enjoyed watching Ambari mature as a product.

Flink And Kafka Streams

Neha Narkhede and Stephan Ewen compare Apache Flink versus Kafka Streams:

Before Flink, users of stream processing frameworks had to make hard choices and trade off either latency, throughput, or result accuracy. Flink was the first open source framework (and still the only one) that has been demonstrated to deliver (1) throughput on the order of tens of millions of events per second in moderate clusters, (2) sub-second latency that can be as low as a few tens of milliseconds, (3) guaranteed exactly-once semantics for application state, as well as exactly-once end-to-end delivery with supported sources and sinks (e.g., pipelines from Kafka to Flink to HDFS or Cassandra), and (4) accurate results in the presence of out-of-order data arrival through its support for event time.

Flink is based on a cluster architecture with master and worker nodes. Flink clusters are highly available, and can be deployed standalone or with resource managers such as YARN and Mesos. This architecture is what allows Flink to use a lightweight checkpointing mechanism to guarantee exactly-once results in the case of failures, as well as allow easy and correct re-processing via savepoints without sacrificing latency or throughput.

Finally, Flink is also a full-fledged batch processing framework, and, in addition to its DataStream and DataSet APIs (for stream and batch processing respectively), offers a variety of higher-level APIs and libraries, such as CEP (for Complex Event Processing), SQL and Table (for structured streams and tables), FlinkML (for Machine Learning), and Gelly (for graph processing). Flink has been proven to run very robustly in production at very large scale by several companies, powering applications that are used every day by end customers.
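To give a feel for the DataStream API mentioned above, here is a minimal sketch of my own (not from the post): a Flink streaming word count over tumbling five-second windows, with checkpointing enabled for the exactly-once state guarantees described in the quote. The socket source and port are just for illustration.

import org.apache.flink.api.common.functions.FlatMapFunction;
import org.apache.flink.api.java.tuple.Tuple2;
import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.streaming.api.windowing.time.Time;
import org.apache.flink.util.Collector;

public class StreamingWordCount {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
        // Checkpoint operator state every 10 seconds so results stay exactly-once after failures.
        env.enableCheckpointing(10_000);

        // Read lines from a local socket (e.g. nc -lk 9999), purely for illustration.
        DataStream<String> lines = env.socketTextStream("localhost", 9999);

        DataStream<Tuple2<String, Integer>> counts = lines
            .flatMap(new FlatMapFunction<String, Tuple2<String, Integer>>() {
                @Override
                public void flatMap(String line, Collector<Tuple2<String, Integer>> out) {
                    for (String word : line.toLowerCase().split("\\W+")) {
                        if (!word.isEmpty()) {
                            out.collect(new Tuple2<>(word, 1));
                        }
                    }
                }
            })
            .keyBy(0)                        // group by the word
            .timeWindow(Time.seconds(5))     // tumbling five-second windows
            .sum(1);                         // sum the per-word counts

        counts.print();
        env.execute("Streaming word count");
    }
}

A Kafka Streams version of the same job would look structurally similar but runs as a plain library inside your application rather than on a Flink cluster, which is a big part of the difference the article draws out.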

The upshot is that the two products don’t do exactly the same thing, and there might be room in your organization for the two of them.

Hadoop For .NET Developers

Kevin Feasel

2016-08-31

Hadoop

Elton Stoneman has a new Pluralsight course out:

My latest Pluralsight course is out now:

Hadoop for .NET Developers

It takes you through running Hadoop on Windows and using .NET to write MapReduce queries – proving that you can do Big Data on the Microsoft stack.

The course has five modules, starting with the architecture of Hadoop and working through a proof-of-concept approach, evaluating different options for running Hadoop and integrating it with .NET.

I’ve liked Elton’s courses, as he’s one of the few trainers who really takes the time to show how you can integrate .NET languages into a Hadoop ecosystem; the general philosophy is “go learn Java and Scala and Python and …”

Self-Paced HDInsight Training

Ashish Thapliyal introduces three EdX courses on HDInsight:

Implementing Real-Time Analysis with Hadoop in Azure HDInsight

In this four week course, you’ll learn how to implement low-latency and streaming Big Data solutions using Hadoop technologies like HBase, Storm, and Spark on Microsoft Azure HDInsight.

Course Syllabus

Use HBase to implement low-latency NoSQL data stores.
Use Storm to implement real-time streaming analytics solutions.
Use Spark for high-performance interactive data analysis.

These are free courses on EdX.  I personally wouldn’t bother getting the certificate, but hey, it’s your money.

Hortonworks HDP 2.5 Available

Hortonworks has a new version of their data platform, 2.5:

We are very pleased to announce that the Hortonworks Data Platform (HDP) Version 2.5 is now generally available for download. As part of an Open and Connected Data Platforms offering from Hortonworks, HDP 2.5 brings a variety of enhancements across all elements of the platform, spanning data science, data access, security, and governance.

At Hadoop Summit 2016 San Jose on 06/28/2016, we unveiled the latest innovation package within Hortonworks Data Platform 2.5.

The top points of interest:  Spark 2, Kafka 0.10.0, Ambari 2.4, and Storm 1.0.1.  These are four big projects with major improvements.  Looks like I’ve got something to do this weekend…

Spark Usage Scenarios

Rimma Nehme has several usage scenarios for Spark on Azure:

For data scientists, we provide out-of-the-box integration with Jupyter (iPython), the most popular open source notebook in the world. Unlike other managed Spark offerings that might require you to install your own notebooks, we worked with the Jupyter OSS community to enhance the kernel to allow Spark execution through a REST endpoint.

We co-led “Project Livy” with Cloudera and other organizations to create an open source Apache licensed REST web service that makes Spark a more robust back-end for running interactive notebooks.  As a result, Jupyter notebooks are now accessible within HDInsight out-of-the-box. In this scenario, we can use all of the services in Azure mentioned above with Spark with a full notebook experience to author compelling narratives and create data science collaborative spaces. Jupyter is a multi-lingual REPL on steroids. Jupyter notebook provides a collection of tools for scientific computing using powerful interactive shells that combine code execution with the creation of a live computational document. These notebook files can contain arbitrary text, mathematical formulas, input code, results, graphics, videos and any other kind of media that a modern web browser is capable of displaying. So, whether you’re absolutely new to R or Python or SQL or do some serious parallel/technical computing, the Jupyter Notebook in Azure is a great choice.
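The kind of analysis you would type into one of those notebooks translates directly into a standalone Spark 2 job as well. Here is a minimal sketch in Java, with a made-up wasb:// path and schema, that loads JSON data and aggregates it with Spark SQL.

import org.apache.spark.sql.Dataset;
import org.apache.spark.sql.Row;
import org.apache.spark.sql.SparkSession;

public class SalesSummary {
    public static void main(String[] args) {
        SparkSession spark = SparkSession.builder()
            .appName("sales-summary")
            .getOrCreate();

        // Hypothetical input; on HDInsight this would typically live in Azure blob storage (wasb://).
        Dataset<Row> sales = spark.read().json("wasb:///example/data/sales.json");

        // Register a temporary view so the aggregation can be expressed in SQL.
        sales.createOrReplaceTempView("sales");
        Dataset<Row> byRegion = spark.sql(
            "SELECT region, SUM(amount) AS total FROM sales GROUP BY region ORDER BY total DESC");

        byRegion.show();
        spark.stop();
    }
}

In a Jupyter notebook on HDInsight, the same statements run cell by cell against a Livy-managed Spark session instead of a submitted batch job.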

If you could only learn one new thing in 2016, Spark probably should be that thing.  Also, I probably should agitate a bit more about wanting Spark support within Polybase…

Ambari Metrics Collector Error

Kevin Feasel

2016-08-29

Hadoop

Jon Morisi had to troubleshoot an issue with the Ambari metrics collector not starting:

Last week I had a bit of a trial by fire:
“Here’s a 7 node, Hortonworks Hadoop cluster, metrics is broken, fix it, go!”

The initial indication that metrics was broken was apparent in the Services tab for Ambari Metrics.  Here it showed that there was an error and that Metrics Collector was Stopped.  The error however wasn’t very informative:

Connection failed: [Errno 111] Connection refused…

That didn’t tell me much at all, and neither did googling.
(I hope the title of this blog helps someone else find this solution quicker.)

Jon includes the answer and some additional helpful details.  Check it out.
