Big Data Hadoop Practice Test 2


Test your caliber

1. In a MapReduce job, you want each of your input files processed by a single map task. How do you configure a MapReduce job so that a single map task processes each input file regardless of how many blocks the input file occupies?
A. Increase the parameter that controls minimum split size in the job configuration.
B. Write a custom MapRunner that iterates over all key-value pairs in the entire file.
C. Set the number of mappers equal to the number of input files you want to process.
D. Write a custom FileInputFormat and override the method isSplitable to always return false.
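Option D is the standard way to achieve this: when isSplitable returns false, FileInputFormat generates exactly one split per file, so each file goes to a single map task no matter how many HDFS blocks it spans. A minimal sketch, assuming the newer org.apache.hadoop.mapreduce API (the class name WholeFileTextInputFormat is illustrative, not part of Hadoop):

```java
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.mapreduce.JobContext;
import org.apache.hadoop.mapreduce.lib.input.TextInputFormat;

// Sketch: a TextInputFormat subclass that refuses to split files.
// With isSplitable returning false, FileInputFormat creates one
// InputSplit per file, so one map task processes each whole file.
public class WholeFileTextInputFormat extends TextInputFormat {
    @Override
    protected boolean isSplitable(JobContext context, Path file) {
        return false; // never split, regardless of block count
    }
}
```

In the driver you would then register it with job.setInputFormatClass(WholeFileTextInputFormat.class). By contrast, raising the minimum split size (option A) only works if you know the largest file in advance, and the framework ignores a mapper count hint when input formats compute their own splits.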
