java.io.IOException: Task process exit with nonzero status of 1.
at org.apache.hadoop.mapred.TaskRunner.run(TaskRunner.java:418)
11/03/15 12:54:09 WARN mapred.JobClient: Error reading task outputhttp:.....
Here is the explanation from an English-language source:
Apparently, it's an OS limit on the number of sub-directories that can be created in another directory. In this case, we had 31998 sub-directories under hadoop/userlogs/, so any new tasks would fail in Job Setup.
From the unix command line, mkdir fails as well:
$ mkdir hadoop/userlogs/testdir
mkdir: cannot create directory `hadoop/userlogs/testdir': Too many links
Difficult to track down because the Hadoop error message gives no hint whatsoever. And normally, you'd look in the userlog itself for more info, but in this case the userlog couldn't be created.
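To confirm you've hit this limit yourself, you can count the entries under userlogs and check the directory's hard-link count; on ext3 a directory's link count is capped at 32000, which leaves room for 31998 sub-directories (each child's ".." entry adds one link, plus "." and the entry in the parent). This is a sketch assuming the same hadoop/userlogs/ path as the example above:

$ ls -1 hadoop/userlogs/ | wc -l
31998
$ stat -c %h hadoop/userlogs/     # hard-link count of the directory itself
32000

Clearing out old task-attempt directories under userlogs frees the directory up again; depending on your Hadoop version, lowering the retention of task logs (e.g. the mapred.userlog.retain.hours setting, if present in your release) can keep the count from creeping back up.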