Sams Teach Yourself Hadoop in 24 Hours
Book Details
Full Title: Sams Teach Yourself Hadoop in 24 Hours
Author: Jeffrey Aven
ISBN/URL: 978-0-672-33852-6
Reading Period: 2019.10–2019.11.05
Source: Ad-hoc browsing at the library
General Review
-
Not a great book.
-
There are many unnecessary tables and figures (e.g., showing the full start-up output of the various CLIs).
-
It is also too brief on many aspects (e.g., when describing or giving an overview of the various projects in the Hadoop ecosystem, the book rarely goes further than a mere mention with a one-line description of the project).
-
Specific Takeaways
-
Hadoop, as the term is commonly used, refers to the whole suite of technologies that enables distributed computing on commodity (i.e., commonly available, not highly specialized) hardware. Some of the components include:
-
Hadoop Distributed File System (HDFS).
-
This is the backbone on which the other technologies are built. Basically, HDFS is a schema-less distributed file system: I can put files onto the system, and the files will be replicated across machines (see the sketch below).
-
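A minimal sketch of putting a file onto HDFS programmatically with the standard org.apache.hadoop.fs.FileSystem API; the NameNode address and file paths are assumptions for illustration.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class PutFileExample {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    // Assumed NameNode address; normally picked up from core-site.xml.
    conf.set("fs.defaultFS", "hdfs://namenode:8020");

    try (FileSystem fs = FileSystem.get(conf)) {
      // Copy a local file into HDFS; the cluster handles block placement and replication.
      fs.copyFromLocalFile(new Path("/tmp/sales.csv"), new Path("/data/sales.csv"));
      System.out.println("Replication factor: "
          + fs.getFileStatus(new Path("/data/sales.csv")).getReplication());
    }
  }
}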
-
Technologies that process the files on HDFS.
-
Traditionally, this is the MapReduce project, comprising the Map stage, where local data is processed by each host, and the Reduce stage, on other hosts, to which the intermediate results are sent.
-
Using pure MapReduce requires writing Java code for each kind of operation, and this proved to be unwieldy. As such, technologies such as Pig and Hive, together with their respective query languages, provide an abstraction on top, where users (such as data scientists and business analysts) can use SQL-like languages to query the data.
-
Furthermore, MapReduce writes intermediate output to disk, and this proved to be a bottleneck. Other technologies, such as Spark, were developed that use an in-memory approach to data processing, built around a data structure called the Resilient Distributed Dataset (RDD) (see the sketch after this list).
-
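A minimal Spark word-count sketch using the Java RDD API, to contrast with MapReduce's disk-based intermediate output; the input and output paths are assumed examples.

import java.util.Arrays;

import org.apache.spark.SparkConf;
import org.apache.spark.api.java.JavaPairRDD;
import org.apache.spark.api.java.JavaRDD;
import org.apache.spark.api.java.JavaSparkContext;
import scala.Tuple2;

public class SparkWordCount {
  public static void main(String[] args) throws Exception {
    SparkConf conf = new SparkConf().setAppName("word count");
    try (JavaSparkContext sc = new JavaSparkContext(conf)) {
      JavaRDD<String> lines = sc.textFile("hdfs:///data/input"); // assumed input path
      JavaPairRDD<String, Integer> counts = lines
          .flatMap(line -> Arrays.asList(line.split("\\s+")).iterator())
          .mapToPair(word -> new Tuple2<>(word, 1))
          .reduceByKey(Integer::sum);
      counts.saveAsTextFile("hdfs:///data/output"); // action: triggers the computation
    }
  }
}

The transformations (flatMap, mapToPair, reduceByKey) only build up the RDD lineage in memory; execution is triggered by the final action (saveAsTextFile).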
-
Technologies for loading files onto Hadoop:
-
Sqoop
-
Apache Flume
-
-
NoSQL datastores
-
-
At its most basic (currently), Hadoop consists of the file system (HDFS) and the job scheduler (YARN).
-
The file system consists of various nodes (each node is usually a machine). There are generally two kinds of nodes:
-
One NameNode (or more, for high-availability deployments) keeps track of metadata such as what files are on the system and where they are replicated (see the sketch after this list).
-
Many DataNodes, which actually store the data and regularly send heartbeats to the NameNode to let it know they are still up.
-
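A small sketch (with an assumed file path) that asks the NameNode where a file's blocks live; the hosts returned for each block are the DataNodes holding its replicas.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.BlockLocation;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class BlockLocationsExample {
  public static void main(String[] args) throws Exception {
    try (FileSystem fs = FileSystem.get(new Configuration())) {
      // Assumed example path; the block-to-host mapping is metadata held by the NameNode.
      FileStatus status = fs.getFileStatus(new Path("/data/sales.csv"));
      BlockLocation[] blocks = fs.getFileBlockLocations(status, 0, status.getLen());
      for (BlockLocation block : blocks) {
        // Each block is replicated on several DataNodes.
        System.out.println("offset=" + block.getOffset()
            + " hosts=" + String.join(",", block.getHosts()));
      }
    }
  }
}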
-
The job scheduler is YARN (Yet Another Resource Negotiator), consisting of two kinds of processes (see the sketch after this list):
-
One ResourceManager (typically run on the NameNode host), which does the following:
-
Receives job requests from clients
-
Allocates a node (typically a DataNode) to run the ApplicationMaster
-
Negotiates with all ApplicationMasters and allocates the required processing resources (vCPU and memory) to them
-
Receives updates from ApplicationMasters on the status of jobs
-
-
Numerous ApplicationMasters (one per job, typically running on the DataNodes), which do the following:
-
Negotiate with the ResourceManager for processing resources (vCPU and memory), which may be on other hosts
-
Send the components of the job (called tasks) to the allocated resources, and track progress / manage retries
-
Report back to ResourceManager on job status
-
-
-
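A small sketch (assuming the cluster's yarn-site.xml is on the classpath) that queries the ResourceManager for the applications it is tracking; the state shown for each job is the status the ApplicationMasters report back.

import java.util.List;

import org.apache.hadoop.yarn.api.records.ApplicationReport;
import org.apache.hadoop.yarn.client.api.YarnClient;
import org.apache.hadoop.yarn.conf.YarnConfiguration;

public class ListYarnApps {
  public static void main(String[] args) throws Exception {
    YarnClient yarn = YarnClient.createYarnClient();
    yarn.init(new YarnConfiguration()); // reads yarn-site.xml for the ResourceManager address
    yarn.start();
    try {
      List<ApplicationReport> apps = yarn.getApplications();
      for (ApplicationReport app : apps) {
        // One entry per job; the state reflects what the ApplicationMaster reported.
        System.out.println(app.getApplicationId() + " " + app.getName()
            + " " + app.getYarnApplicationState());
      }
    } finally {
      yarn.stop();
    }
  }
}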
-
A Hadoop MapReduce job consists of several phases (see the word-count sketch after this list):
-
Map: Each host processes its allocated input data and produces key-value pairs. There may also be a Combiner function in the case of aggregation tasks (e.g., for a word count, it may be more efficient to sum up the occurrences on the separate hosts, instead of passing each occurrence to the Reducer). There is also a partitioning function that determines which key is sent to which Reducer, ensuring that each key is sent to only one Reducer (i.e., across the various hosts running the Map step, a particular unique key is always sent to the same Reducer).
-
Sort and Shuffle: Transfers the output from the Map steps to the Reduce steps (across hosts), and presents it in key-sorted order.
-
Reduce: Performs the reducing function on key-value pairs. Note that the "value" in the key-value pair is actually the list of values associated with that particular key, across the outputs generated by all the Map steps.
-
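A word-count sketch in Java showing the Map and Reduce phases described above, with the Reducer reused as the Combiner for map-side pre-aggregation; class names and paths are illustrative.

import java.io.IOException;
import java.util.StringTokenizer;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class WordCount {

  // Map phase: emit (word, 1) for every word in the local input split.
  public static class TokenizerMapper extends Mapper<Object, Text, Text, IntWritable> {
    private static final IntWritable ONE = new IntWritable(1);
    private final Text word = new Text();

    @Override
    protected void map(Object key, Text value, Context context)
        throws IOException, InterruptedException {
      StringTokenizer itr = new StringTokenizer(value.toString());
      while (itr.hasMoreTokens()) {
        word.set(itr.nextToken());
        context.write(word, ONE);
      }
    }
  }

  // Reduce phase: the framework groups values by key, so each call receives
  // one word and the list of counts emitted for it across all mappers.
  public static class IntSumReducer extends Reducer<Text, IntWritable, Text, IntWritable> {
    private final IntWritable result = new IntWritable();

    @Override
    protected void reduce(Text key, Iterable<IntWritable> values, Context context)
        throws IOException, InterruptedException {
      int sum = 0;
      for (IntWritable val : values) {
        sum += val.get();
      }
      result.set(sum);
      context.write(key, result);
    }
  }

  public static void main(String[] args) throws Exception {
    Job job = Job.getInstance(new Configuration(), "word count");
    job.setJarByClass(WordCount.class);
    job.setMapperClass(TokenizerMapper.class);
    job.setCombinerClass(IntSumReducer.class); // combiner: pre-sum counts on the map side
    job.setReducerClass(IntSumReducer.class);
    job.setOutputKeyClass(Text.class);
    job.setOutputValueClass(IntWritable.class);
    FileInputFormat.addInputPath(job, new Path(args[0]));   // e.g., an HDFS input directory
    FileOutputFormat.setOutputPath(job, new Path(args[1])); // must not already exist
    System.exit(job.waitForCompletion(true) ? 0 : 1);
  }
}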
-
DistributedCache can be used to distribute data (e.g., lookup data) and Java libraries that are needed by every task of a job (see the sketch below).
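A minimal sketch, assuming hypothetical HDFS paths, of shipping side data and an extra jar to every task using the newer Job API (the successor to the older DistributedCache class).

import java.net.URI;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.mapreduce.Job;

public class CacheFileExample {
  public static void main(String[] args) throws Exception {
    // Hypothetical paths for illustration.
    Job job = Job.getInstance(new Configuration(), "job with side data");
    job.addCacheFile(new URI("hdfs:///refdata/lookup.txt"));    // small lookup data for every task
    job.addFileToClassPath(new Path("/libs/some-helper.jar"));  // extra Java library for every task
    // Inside a Mapper or Reducer, context.getCacheFiles() returns the cached URIs.
  }
}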
-
A default deployment of Hadoop is far from production ready.
-
By default, Hadoop does not offer high availability (HA). The NameNode is a single point of failure, as is the ResourceManager. ZooKeeper is required for an HA deployment.
-
By default, all the web UIs served by the various components are over plain HTTP and do not require authentication/authorization.
-
-
Some Hadoop ecosystem projects come with in-built analytics capabilities. For example:
-
HiveQL has in-built functions to generate n-grams
-
Spark has a component called MLlib, which allows training of machine learning models
-
To Internalize Now
-
Nothing in particular. Just a general understanding of the Hadoop ecosystem, its purpose, and development trends.
To Learn/Do Soon
-
Get a book on NoSQL technologies (e.g., MongoDB, HBase, Cassandra) and learn about them.
To Revisit When Necessary
-
For the common configuration commands / script names / log directories, refer to Hour 23: Administering, Monitoring, and Troubleshooting Hadoop
-
The book is generally light on details when details are important. As such, it is generally not a good reference book.