Cloud Computing with MapReduce and Hadoop
By Apex Tg India Pvt Ltd

What is Cloud Computing?
• “Cloud” refers to large Internet services such as Google and Yahoo! that run on tens of thousands of machines
• More recently, “cloud computing” refers to services offered by these companies that let external customers rent computing cycles on their clusters
– Amazon EC2: virtual machines at 10¢/hour, billed hourly
– Amazon S3: storage at 15¢/GB/month

• Attractive features:
– Scale: up to hundreds of nodes
– Fine-grained billing: pay only for what you use
– Ease of use: sign up with credit card, get root access

What is MapReduce?
• Simple data-parallel programming model designed for
scalability and fault-tolerance
• Pioneered by Google
– Processes 20 petabytes of data per day

• Popularized by open-source Hadoop project
– Used at Yahoo!, Facebook, Amazon, …

What is MapReduce used for?
• At Google:
– Index construction for Google Search
– Article clustering for Google News
– Statistical machine translation

• At Yahoo!:
– “Web map” powering Yahoo! Search
– Spam detection for Yahoo! Mail

• At Facebook:
– Data mining
– Ad optimization
– Spam detection

What is MapReduce used for?
• In research:
– Astronomical image analysis (Washington)
– Bioinformatics (Maryland)
– Analyzing Wikipedia conflicts (PARC)
– Natural language processing (CMU)
– Particle physics (Nebraska)
– Ocean climate simulation (Washington)
– <Your application here>

Typical Hadoop Cluster
[Diagram: nodes in each rack connect to a rack switch; rack switches connect to an aggregation switch]

• 40 nodes/rack, 1000-4000 nodes in cluster
• 1 Gbps bandwidth within rack, 8 Gbps out of rack
• Node specs (Yahoo terasort): 8 x 2GHz cores, 8 GB RAM, 4 disks (= 4 TB?)
Image from http://wiki.apache.org/hadoop-data/attachments/HadoopPresentations/attachments/YahooHadoopIntro-apachecon-us-2008.pdf

Typical Hadoop Cluster

Image from http://wiki.apache.org/hadoop-data/attachments/HadoopPresentations/attachments/aw-apachecon-eu-2009.pdf

Challenges
1. Cheap nodes fail, especially if you have many
– Mean time between failures for 1 node = 3 years
– Mean time between failures for 1000 nodes = 1 day (3 years ≈ 1,100 days, so a 1000-node cluster sees roughly one failure per day)
– Solution: Build fault-tolerance into the system

2. Commodity network = low bandwidth
– Solution: Push computation to the data

3. Programming distributed systems is hard
– Solution: Data-parallel programming model: users write “map” &
“reduce” functions, system distributes work and handles faults

Hadoop Components
• Distributed file system (HDFS)
– Single namespace for entire cluster
– Replicates data 3x for fault-tolerance

• MapReduce framework
– Executes user jobs specified as “map” and “reduce”
functions
– Manages work distribution & fault-tolerance

Hadoop Distributed File System
• Files split into 128MB blocks
• Blocks replicated across several
datanodes (usually 3)
• Single namenode stores
metadata (file names, block
locations, etc)
• Optimized for large files,
sequential reads
• Files are append-only

[Diagram: the Namenode holds the metadata for File1, which is split into blocks 1-4; each block is replicated on three of the four Datanodes shown]
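
The block layout sketched above can be mimicked with a few lines of Python. This is a minimal illustration, not HDFS code: the 128 MB block size matches the slide, but the replication constant and the round-robin placement policy are simplifying assumptions (real HDFS placement also considers racks, free space, and the writer's location).

import itertools

BLOCK_SIZE = 128 * 1024 * 1024   # 128 MB blocks, as on the slide
REPLICATION = 3                  # each block is stored on 3 datanodes

def place_blocks(file_size, datanodes):
    # Split the file into blocks, then assign each block to 3 datanodes
    # in round-robin order (assumes at least 3 datanodes are available).
    num_blocks = -(-file_size // BLOCK_SIZE)   # ceiling division
    rotation = itertools.cycle(datanodes)
    return {b: [next(rotation) for _ in range(REPLICATION)]
            for b in range(num_blocks)}

# File1 from the diagram: 4 blocks spread over 4 datanodes
for block, nodes in place_blocks(4 * BLOCK_SIZE, ["dn1", "dn2", "dn3", "dn4"]).items():
    print("block", block, "-> replicas on", nodes)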

MapReduce Programming Model
• Data type: key-value records
• Map function:
(K_in, V_in) → list(K_inter, V_inter)
• Reduce function:
(K_inter, list(V_inter)) → list(K_out, V_out)
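
Written out as Python type aliases (purely an illustrative sketch; Hadoop expresses the same contract through the Java Mapper and Reducer interfaces shown later), the two signatures are:

from typing import Callable, Iterable, List, Tuple, TypeVar

KIn, VIn = TypeVar("KIn"), TypeVar("VIn")
KInter, VInter = TypeVar("KInter"), TypeVar("VInter")
KOut, VOut = TypeVar("KOut"), TypeVar("VOut")

# Map: one input record -> a list of intermediate key-value pairs
MapFn = Callable[[KIn, VIn], List[Tuple[KInter, VInter]]]

# Reduce: one intermediate key plus all of its values -> a list of output pairs
ReduceFn = Callable[[KInter, Iterable[VInter]], List[Tuple[KOut, VOut]]]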

Example: Word Count
def mapper(line):
    for word in line.split():
        output(word, 1)

def reducer(key, values):
    output(key, sum(values))

Word Count Execution
[Diagram: word count data flow]
• Input splits: “the quick brown fox”, “the fox ate the mouse”, “how now brown cow”
• Map: each mapper emits (word, 1) for every word in its split, e.g. (the, 1), (quick, 1), (brown, 1), (fox, 1)
• Shuffle & Sort: intermediate pairs are grouped by key, so every (the, 1) record reaches the same reducer
• Reduce: each reducer sums the counts for its keys
• Output: (ate, 1), (brown, 2), (cow, 1), (fox, 2), (how, 1), (mouse, 1), (now, 1), (quick, 1), (the, 3)
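
The same data flow can be reproduced on a single machine. The sketch below is only a local simulation of map, shuffle & sort, and reduce (an in-memory sort stands in for Hadoop's distributed shuffle):

from itertools import groupby
from operator import itemgetter

def mapper(line):
    # Map: emit (word, 1) for every word in the input line
    return [(word, 1) for word in line.split()]

def reducer(key, values):
    # Reduce: sum the counts grouped under one word
    return (key, sum(values))

lines = ["the quick brown fox",
         "the fox ate the mouse",
         "how now brown cow"]

# Map phase over every input split
intermediate = [pair for line in lines for pair in mapper(line)]

# Shuffle & sort: group intermediate pairs by key
intermediate.sort(key=itemgetter(0))

# Reduce phase: one reducer call per distinct key
for word, group in groupby(intermediate, key=itemgetter(0)):
    print(reducer(word, (count for _, count in group)))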

MapReduce Execution Details
• Single master controls job execution on multiple slaves
• Mappers preferentially placed on same node or same
rack as their input block
– Minimizes network usage

• Mappers save outputs to local disk before serving them
to reducers
– Allows recovery if a reducer crashes
– Allows having more reducers than nodes

Fault Tolerance in MapReduce
1. If a task crashes:
– Retry on another node
» OK for a map because it has no dependencies
» OK for reduce because map outputs are on disk
– If the same task fails repeatedly, fail the job or ignore
that input block (user-controlled)

• Note: For these fault tolerance features to work, your map and reduce tasks must be side-effect-free

Fault Tolerance in MapReduce
2. If a node crashes:
– Re-launch its current tasks on other nodes
– Re-run any maps the node previously ran
» Necessary because their output files were lost along
with the crashed node

Fault Tolerance in MapReduce
3. If a task is going slowly (straggler):
– Launch second copy of task on another node
(“speculative execution”)
– Take the output of whichever copy finishes first, and
kill the other

• Surprisingly important in large clusters
– Stragglers occur frequently due to failing hardware,
software bugs, misconfiguration, etc
– Single straggler may noticeably slow down a job

Takeaways
• By providing a data-parallel programming model, MapReduce can control job execution in useful ways:
– Automatic division of the job into tasks
– Automatic placement of computation near data
– Automatic load balancing
– Recovery from failures & stragglers

• User focuses on application, not on complexities of
distributed computing

1. Search
• Input: (lineNumber, line) records
• Output: lines matching a given pattern
• Map:
  if (line matches pattern):
      output(line)

• Reduce: identity function
– Alternative: no reducer (map-only job)
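
A sketch of this job as a Hadoop Streaming-style mapper in Python (the fixed pattern and the stdin/stdout convention here are illustrative assumptions; Streaming only requires reading input lines from stdin and writing output lines to stdout):

import re
import sys

PATTERN = re.compile(r"error")   # example pattern; a real job would pass it in as a parameter

def search_mapper(stream):
    # Emit every input line that matches the pattern; with no reducer,
    # the mapper output is the job output (a map-only job).
    for line in stream:
        if PATTERN.search(line):
            sys.stdout.write(line)

if __name__ == "__main__":
    search_mapper(sys.stdin)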

2. Sort
• Input: (key, value) records
• Output: same records, sorted by key
• Map: identity function
• Reduce: identity function

• Trick: Pick a partitioning function h such that k1 < k2 => h(k1) < h(k2), so each reducer receives a contiguous key range and the concatenated reducer outputs are globally sorted (see the sketch after this slide)

[Diagram: maps pass records through unchanged; the partitioner sends keys A-M to one reducer and keys N-Z to another. Inputs zebra, cow, pig, sheep, yak, ant, bee, aardvark, elephant come out as aardvark, ant, bee, cow, elephant from Reduce [A-M] and pig, sheep, yak, zebra from Reduce [N-Z]]
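
A minimal sketch of such a partitioning function, assuming string keys, two reducers, and a hand-picked split point; a real Hadoop sort would use something like sampled split points (e.g. TotalOrderPartitioner), so this only illustrates the ordering property:

NUM_REDUCERS = 2
SPLIT_POINTS = ["n"]   # hypothetical split point: keys < "n" go to reducer 0

def h(key):
    # Order-preserving partitioner: k1 < k2 implies h(k1) <= h(k2),
    # so concatenating the sorted reducer outputs gives a globally sorted result.
    for i, split in enumerate(SPLIT_POINTS):
        if key < split:
            return i
    return len(SPLIT_POINTS)

words = ["zebra", "cow", "ant", "bee", "aardvark", "elephant", "sheep", "yak", "pig"]
for r in range(NUM_REDUCERS):
    # Each "reducer" sorts only its own partition
    print("reducer", r, ":", sorted(w for w in words if h(w) == r))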

3. Inverted Index
• Input: (filename, text) records
• Output: list of files containing each word
• Map:
  for word in text.split():
      output(word, filename)

• Combine: uniquify filenames for each word
• Reduce:
  def reduce(word, filenames):
      output(word, sorted(filenames))

Inverted Index Example

Input files:
  hamlet.txt: “to be or not to be”
  12th.txt: “be not afraid of greatness”

Map output:
  from hamlet.txt: (to, hamlet.txt), (be, hamlet.txt), (or, hamlet.txt), (not, hamlet.txt)
  from 12th.txt: (be, 12th.txt), (not, 12th.txt), (afraid, 12th.txt), (of, 12th.txt), (greatness, 12th.txt)

Reduce output:
  afraid, (12th.txt)
  be, (12th.txt, hamlet.txt)
  greatness, (12th.txt)
  not, (12th.txt, hamlet.txt)
  of, (12th.txt)
  or, (hamlet.txt)
  to, (hamlet.txt)
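
The whole example above can be reproduced locally with a short sketch (an in-memory illustration; the dictionary of sets stands in for both the shuffle and the combiner's de-duplication of filenames):

from collections import defaultdict

docs = {
    "hamlet.txt": "to be or not to be",
    "12th.txt": "be not afraid of greatness",
}

index = defaultdict(set)           # shuffle + combine: unique filenames per word
for filename, text in docs.items():
    for word in text.split():      # map: emit (word, filename)
        index[word].add(filename)

for word in sorted(index):         # reduce: sorted list of files for each word
    print(word, tuple(sorted(index[word])))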

4. Most Popular Words
• Input: (filename, text) records
• Output: top 100 words occurring in the most files
• Two-stage solution:
– Job 1:
» Create inverted index, giving (word, list(file)) records

– Job 2:
» Map each (word, list(file)) to (count, word)
» Sort these records by count as in sort job

• Optimizations:
– Map to (word, 1) instead of (word, file) in Job 1
– Count files in job 1’s reducer rather than job 2’s mapper
– Estimate count distribution in advance and drop rare words
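
A local sketch of the two-stage solution with the first optimization applied (Job 1's reducer already outputs file counts); this is an in-memory illustration rather than an actual pair of Hadoop jobs:

from collections import defaultdict

docs = {
    "hamlet.txt": "to be or not to be",
    "12th.txt": "be not afraid of greatness",
}

# Job 1: inverted index whose reducer outputs (word, number_of_files)
files_per_word = defaultdict(set)
for filename, text in docs.items():
    for word in text.split():
        files_per_word[word].add(filename)
counts = {word: len(files) for word, files in files_per_word.items()}

# Job 2: map each (word, count) to (count, word), then sort by count as in the sort job
pairs = [(count, word) for word, count in counts.items()]
for count, word in sorted(pairs, reverse=True)[:100]:   # top 100, highest count first
    print(word, count)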

5. Numerical Integration
• Input: (start, end) records for sub-ranges to integrate
– Easy using custom InputFormat

• Output: integral of f(x) dx over entire range
• Map:
  def map(start, end):
      total = 0
      x = start
      while x < end:
          total += f(x) * step
          x += step
      output("", total)

• Reduce:
  def reduce(key, values):
      output(key, sum(values))
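
For a concrete run, the driver below splits a range into (start, end) sub-range records and sums the partial integrals; the integrand, step size, and range are illustrative assumptions (a real job would feed the sub-ranges to mappers through a custom InputFormat):

STEP = 0.001

def f(x):
    return x * x   # example integrand

def integrate_map(start, end):
    # Riemann sum over one sub-range, mirroring the map function above
    total, x = 0.0, start
    while x < end:
        total += f(x) * STEP
        x += STEP
    return total

# Split [0, 1] into 4 (start, end) records, then "reduce" by summing the partial sums
sub_ranges = [(i / 4, (i + 1) / 4) for i in range(4)]
print(sum(integrate_map(s, e) for s, e in sub_ranges))   # ≈ 1/3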

Getting Started with Hadoop
• Download from hadoop.apache.org
• To install locally, unzip and set JAVA_HOME
• Details: hadoop.apache.org/core/docs/current/quickstart.html
• Three ways to write jobs:
– Java API
– Hadoop Streaming (for Python, Perl, etc)
– Pipes API (C++)

Word Count in Java
public class MapClass extends MapReduceBase
    implements Mapper<LongWritable, Text, Text, IntWritable> {

  private final static IntWritable ONE = new IntWritable(1);

  public void map(LongWritable key, Text value,
                  OutputCollector<Text, IntWritable> out,
                  Reporter reporter) throws IOException {
    String line = value.toString();
    StringTokenizer itr = new StringTokenizer(line);
    while (itr.hasMoreTokens()) {
      out.collect(new Text(itr.nextToken()), ONE);   // emit (word, 1)
    }
  }
}

Word Count in Java
public class ReduceClass extends MapReduceBase
    implements Reducer<Text, IntWritable, Text, IntWritable> {

  public void reduce(Text key, Iterator<IntWritable> values,
                     OutputCollector<Text, IntWritable> out,
                     Reporter reporter) throws IOException {
    int sum = 0;
    while (values.hasNext()) {
      sum += values.next().get();   // add up all counts for this word
    }
    out.collect(key, new IntWritable(sum));
  }
}

Word Count in Java
public static void main(String[] args) throws Exception {
  JobConf conf = new JobConf(WordCount.class);
  conf.setJobName("wordcount");
  conf.setMapperClass(MapClass.class);
  conf.setCombinerClass(ReduceClass.class);   // combiner runs the reducer on map output locally
  conf.setReducerClass(ReduceClass.class);
  FileInputFormat.setInputPaths(conf, args[0]);
  FileOutputFormat.setOutputPath(conf, new Path(args[1]));
  conf.setOutputKeyClass(Text.class);           // output keys are words (strings)
  conf.setOutputValueClass(IntWritable.class);  // output values are counts
  JobClient.runJob(conf);
}

Sample Hive Queries
• Find top 5 pages visited by users aged 18-25:
SELECT p.url, COUNT(1) AS clicks
FROM users u JOIN page_views p ON (u.name = p.user)
WHERE u.age >= 18 AND u.age <= 25
GROUP BY p.url
ORDER BY clicks DESC
LIMIT 5;

• Filter page views through Python script:
FROM page_views p
SELECT TRANSFORM(p.user, p.date)
USING 'map_script.py'
AS dt, uid
CLUSTER BY dt;

Amazon Elastic MapReduce
• Provides a web-based interface and command-line
tools for running Hadoop jobs on Amazon EC2
• Data stored in Amazon S3
• Monitors job and shuts down machines after use
• Small extra charge on top of EC2 pricing

• If you want more control over how your Hadoop cluster runs, you can launch a Hadoop cluster on EC2 manually using the scripts in src/contrib/ec2

Elastic MapReduce Workflow

[Screenshots of the Elastic MapReduce web interface]

Conclusions
• MapReduce programming model hides the complexity of
work distribution and fault tolerance
• Principal design philosophies:

– Make it scalable, so you can throw hardware at problems
– Make it cheap, lowering hardware, programming and admin costs

• MapReduce is not suitable for all problems, but when it
works, it may save you quite a bit of time
• Cloud computing makes it straightforward to start
using Hadoop (or other parallel software) at scale

Resources
• Hadoop: http://hadoop.apache.org/core/
• Pig: http://hadoop.apache.org/pig
• Amazon Web Services: http://aws.amazon.com/
• Amazon Elastic MapReduce guide: http://docs.amazonwebservices.com/ElasticMapReduce/latest/GettingStartedGuide/
• My email: [email protected]




