MapReduce and Spark jobs stuck
On a 5-node Hadoop cluster running on AWS EC2 instances, things were running smoothly until I submitted one query. I tried to create an ORC table using the query below:
create table dummy_orc stored as orc tblproperties ("orc.compress"="lz4") as select * from dummy;
The job reported that it would run 76 mappers and 0 reducers, and it started. After 10-12 minutes, when the map percentage reached 100%, the job aborted. Since the number of records is large, I did not mind the long run time initially, but the DataNode and NodeManager daemons died. The hdfs dfsadmin -report command then showed 0 cluster capacity, 0 live DataNodes, etc.
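The DataNode and NodeManager logs on the worker nodes should show why the daemons died; something along these lines should pull up the last errors (this assumes a default tarball install with logs under $HADOOP_HOME/logs, so adjust the paths to your layout):

# on a worker node: last errors from the DataNode and NodeManager daemons
tail -n 200 $HADOOP_HOME/logs/hadoop-*-datanode-*.log
tail -n 200 $HADOOP_HOME/logs/yarn-*-nodemanager-*.log
# on the master: confirm how many DataNodes HDFS still sees
hdfs dfsadmin -report | grep -i 'live datanodes'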
I restarted the cluster completely: NameNode, ResourceManager, DataNodes, NodeManagers, ZKFC services, QuorumPeerMain, everything. After that, the cluster capacity etc. came back fine, and I am able to fire normal non-MapReduce queries like select *.
But MapReduce jobs are not starting, and Spark jobs are not running now either; MR jobs are stuck in the ACCEPTED state.
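As far as I understand, an application sits in ACCEPTED when the ResourceManager has accepted it but cannot allocate a container for the ApplicationMaster, usually because no healthy NodeManagers are registered or the queue has no free capacity. These commands should show whether the NodeManagers actually re-registered after the restart (this is a guess at the cause, not a confirmed diagnosis):

# how many NodeManagers YARN currently sees, and their state/health
yarn node -list -all
# applications sitting in ACCEPTED
yarn application -list -appStates ACCEPTED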
For example, MR is stuck for select count(1) from dummy at:
Query ID = hadoopuser_20170728093320_b1875223-801e-466b-997f-4b58f0e90041
Total jobs = 1
Launching Job 1 out of 1
Number of reduce tasks determined at compile time: 1
In order to change the average load for a reducer (in bytes):
  set hive.exec.reducers.bytes.per.reducer=<number>
In order to limit the maximum number of reducers:
  set hive.exec.reducers.max=<number>
In order to set a constant number of reducers:
  set mapreduce.job.reduces=<number>
Starting Job = job_1501233326257_0003, Tracking URL = http://dev-bigdatamaster1:8088/proxy/application_1501233326257_0003/
Kill Command = /home/hadoopuser/hadoop//bin/hadoop job -kill job_1501233326257_0003
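For reference, the Tracking URL above points at the ResourceManager web UI (port 8088 on dev-bigdatamaster1), which shows the scheduler and queue state for the stuck application. Once the application id is known, the container logs can also be pulled from the command line, assuming YARN log aggregation is enabled (if it is not, the container logs stay under the NodeManager-local log directories); the Hive client log location below is the default for a plain Hive CLI setup and may differ on your install:

# aggregated container logs for the stuck application
yarn logs -applicationId application_1501233326257_0003
# Hive client-side log (default location under /tmp/<user>)
tail -n 200 /tmp/$USER/hive.log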
Which log will give me a better picture to resolve this error, and what went wrong?