
Yes, you can do it using output committers.

Output Committers

Hadoop makes sure a job either succeeds or fails gracefully. This is done via the OutputCommitter, which is accessible from the OutputFormat via OutputFormat.getOutputCommitter():

public abstract OutputCommitter getOutputCommitter(TaskAttemptContext context)
                                            throws IOException,
                                                   InterruptedException

Once you have access to the OutputCommitter, you can, based on the condition, call the OutputCommitter.commitJob() or OutputCommitter.abortJob() method (cleanupJob() is deprecated):

 void cleanupJob(JobContext jobContext)
          Deprecated. Use commitJob(JobContext) and abortJob(JobContext, JobStatus.State) instead.
 void commitJob(JobContext jobContext)
          For committing job's output after successful job completion.
 void abortJob(JobContext jobContext, JobStatus.State state)
          For aborting an unsuccessful job's output.
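
In practice you usually do not call these by hand; the framework invokes them for you. A minimal sketch of how a custom committer could hook into these calls, assuming you extend FileOutputCommitter (the class name and the comments about extra work are illustrative, not a fixed recipe):

import java.io.IOException;

import org.apache.hadoop.fs.Path;
import org.apache.hadoop.mapreduce.JobContext;
import org.apache.hadoop.mapreduce.JobStatus;
import org.apache.hadoop.mapreduce.TaskAttemptContext;
import org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter;

public class MyOutputCommitter extends FileOutputCommitter {

    public MyOutputCommitter(Path outputPath, TaskAttemptContext context)
            throws IOException {
        super(outputPath, context);
    }

    @Override
    public void commitJob(JobContext jobContext) throws IOException {
        // Invoked once after all tasks succeed; the default implementation
        // promotes task output to the final directory and writes _SUCCESS.
        super.commitJob(jobContext);
        // Any extra success-only work goes here.
    }

    @Override
    public void abortJob(JobContext jobContext, JobStatus.State state)
            throws IOException {
        // Invoked when the job fails or is killed; the default
        // implementation deletes the temporary task output.
        super.abortJob(jobContext, state);
        // Any extra cleanup on failure goes here.
    }
}

You would return such a committer from your OutputFormat's getOutputCommitter(), and the framework then calls commitJob() or abortJob() depending on how the job ends.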

 

Rishi Yadav
05/28/2013 at 19:38

It depends on where your job is failing: if a line is corrupt and an exception is thrown somewhere in your map method, then you should just be able to wrap the body of your map method in a try / catch, as sketched below.
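
A minimal sketch of that approach (the tab-separated parsing and the "BAD_RECORDS" counter name are assumptions for illustration):

import java.io.IOException;

import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;

public class SafeMapper extends Mapper<LongWritable, Text, Text, IntWritable> {

    @Override
    protected void map(LongWritable key, Text value, Context context)
            throws IOException, InterruptedException {
        try {
            // Parsing that may throw on a corrupt line.
            String[] fields = value.toString().split("\t");
            int count = Integer.parseInt(fields[1]);
            context.write(new Text(fields[0]), new IntWritable(count));
        } catch (Exception e) {
            // Skip the corrupt record instead of failing the task; keep a
            // counter so bad records remain visible in the job statistics.
            context.getCounter("SafeMapper", "BAD_RECORDS").increment(1);
        }
    }
}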

Nandish Dave
09/09/2016 at 14:57
