How to recover data after namenode -format command in Hadoop


I am using Hadoop version 1.2.1. For an unknown reason the NameNode went down, and the following log information was obtained:

2017-07-28 15:04:47,422 INFO org.apache.hadoop.hdfs.server.common.Storage: Start loading image file /home/hpcnl/crawler/hadoop-1.2.1/tmp/dfs/name/current/fsimage
2017-07-28 15:04:47,423 ERROR org.apache.hadoop.hdfs.server.namenode.FSNamesystem: FSNamesystem initialization failed.
java.io.EOFException
        at java.io.DataInputStream.readInt(DataInputStream.java:392)
        at org.apache.hadoop.hdfs.server.namenode.FSImage.loadFSImage(FSImage.java:881)
        at org.apache.hadoop.hdfs.server.namenode.FSImage.loadFSImage(FSImage.java:834)
        at org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:378)
        at org.apache.hadoop.hdfs.server.namenode.FSDirectory.loadFSImage(FSDirectory.java:104)
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.initialize(FSNamesystem.java:427)
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.<init>(FSNamesystem.java:395)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:299)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:569)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1479)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1488)
2017-07-28 15:04:47,428 ERROR org.apache.hadoop.hdfs.server.namenode.NameNode: java.io.EOFException
        at java.io.DataInputStream.readInt(DataInputStream.java:392)
        at org.apache.hadoop.hdfs.server.namenode.FSImage.loadFSImage(FSImage.java:881)
        at org.apache.hadoop.hdfs.server.namenode.FSImage.loadFSImage(FSImage.java:834)
        at org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:378)
        at org.apache.hadoop.hdfs.server.namenode.FSDirectory.loadFSImage(FSDirectory.java:104)
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.initialize(FSNamesystem.java:427)
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.<init>(FSNamesystem.java:395)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:299)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:569)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1479)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1488)

I then searched on the internet and found that I should stop the cluster and run the following command:

hadoop namenode -format 

After that, when I restarted the cluster, the data no longer appeared in the respective folders in HDFS. Can I recover the data? How should I handle such situations in the future if the NameNode goes down?

You can back up the metadata using these commands:

hdfs dfsadmin -safemode enter
hdfs dfsadmin -saveNamespace

These commands put the NameNode into safemode and push the edits into the fsimage file. You can then fetch the image:

hdfs dfsadmin -fetchImage /path/somefilename

or

cd /namenode/data/current/
tar -cvf /root/nn_backup_data.tar .
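The tar step above can be sketched end to end. This is a minimal illustration, not your exact cluster layout: the temporary directory and the `fsimage`/`edits` placeholder files below stand in for a real `dfs.name.dir/current` directory so the commands can run anywhere.

```shell
# Stand-in for the NameNode metadata directory (hypothetical path and
# contents); on a real cluster this would be dfs.name.dir/current.
NN_DIR=$(mktemp -d)
echo "fsimage-bytes" > "$NN_DIR/fsimage"
echo "edits-bytes"   > "$NN_DIR/edits"

# Archive the metadata directory contents into a tarball.
BACKUP="$(mktemp -d)/nn_backup_data.tar"
tar -C "$NN_DIR" -cf "$BACKUP" .

# List the archive to confirm the files were captured.
tar -tf "$BACKUP"
```

Using `tar -C` avoids having to `cd` into the directory first, and archiving `.` keeps the paths inside the tarball relative, which makes restoring into a different directory straightforward.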

Now you can place this data back in the NameNode metadata directory and restart the NameNode.
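Restoring is the reverse of the backup: with the NameNode stopped, extract the tarball into the metadata directory and verify the files before restarting. A minimal sketch against stand-in temporary directories (the real target would be your `dfs.name.dir/current`):

```shell
# Build a small tarball to restore from (stand-in for a real backup).
SRC=$(mktemp -d)
echo "fsimage-bytes" > "$SRC/fsimage"
BACKUP="$(mktemp -d)/nn_backup_data.tar"
tar -C "$SRC" -cf "$BACKUP" .

# Stand-in for the (empty) NameNode metadata directory after a failure.
NN_DIR=$(mktemp -d)

# Extract the backup into the metadata directory and check the contents.
tar -C "$NN_DIR" -xf "$BACKUP"
ls "$NN_DIR"
```

On a real cluster you would stop the NameNode before extracting and start it again afterwards; the directory must also be owned by the user that runs the NameNode process.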

Please note that you shouldn't use the command below unless you have no other option, because it wipes the filesystem metadata:

hadoop namenode -format 
