Build Versus Buy For Hadoop

Kevin Feasel

2017-10-24

Hadoop

Tom Phelan walks through some thoughts on whether to build or buy when adopting a big data platform:

This means you absolutely must sweat the details up front. Big Data project failures are more often than not predicated by the statement: “We will do this bit now, and figure the rest out later”. But you need to begin with the end in mind.

You need to know the performance that you’ll be able to deliver and what your requirements are. You need to know how to integrate with your corporate Active Directory, and LDAP, and Kerberos services. You need to know your network topology and security requirements as well as the required user roles and responsibilities breakdown. You need to know how you’ll handle high availability, QoS, and multi-tenancy. You need to know how you’ll manage upgrades to the latest versions of your Hadoop distribution or other big data tools, and how you’ll respond to requests for new big data frameworks and new data science tools. If not, you’re just asking for trouble.

The motif in his post is building your own car, which makes sense as an extended metaphor.
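Kerberos integration in particular is one of those details that bites early. As a rough illustration of what "know how you'll integrate" means in practice, here is a minimal sketch in Java using Hadoop's UserGroupInformation API to log in from a keytab and run a simple HDFS smoke test. The principal name, keytab path, and NameNode URI are placeholders for this example, not anything from Phelan's post.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.security.UserGroupInformation;

public class KerberizedHdfsSmokeTest {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        // Hypothetical NameNode URI; substitute your cluster's fs.defaultFS.
        conf.set("fs.defaultFS", "hdfs://namenode.example.com:8020");
        conf.set("hadoop.security.authentication", "kerberos");

        // Log in with a service principal and keytab (both placeholders).
        UserGroupInformation.setConfiguration(conf);
        UserGroupInformation.loginUserFromKeytab(
            "svc-etl@EXAMPLE.COM",
            "/etc/security/keytabs/svc-etl.keytab");

        // List the HDFS root to confirm the ticket and cluster connectivity work.
        try (FileSystem fs = FileSystem.get(conf)) {
            for (FileStatus status : fs.listStatus(new Path("/"))) {
                System.out.println(status.getPath());
            }
        }
    }
}

The code itself isn't the point; the point is that questions like which principal, which keytab, and which realm trusts which have to be answered before the platform gets anywhere near production.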

Related Posts

Mounting HDFS As A Local Filesystem

Guy Shilo looks at two techniques for mounting HDFS as a local filesystem: NFS Gateway is an HDFS component that lets you expose HDFS through an NFS3 interface so that Linux machines can mount it and access it just like a local filesystem. The manual installation is quite cumbersome and is covered here. Cloudera Manager […]


How Humio Uses Kafka

Kresten Krab describes ways that Humio uses Apache Kafka for their product: Humio is a log analytics system built to run both on-prem and as a hosted offering. It is designed for “on-prem first” because, in many logging use cases, you need the privacy and security of managing your own logging solution. And because volume […]

