Secondary namenode data loss

Yes, that can happen if your installation is not configured properly. I have received several mails from our customers regarding this problem.
The secondary namenode's hadoop.tmp.dir has to be redefined in core-site.xml to a directory outside of /tmp, because most Linux servers clean up /tmp on reboot. That causes the loss of the latest edit logs and fsimage, which means the namenode cannot be replayed after a server crash. Simply add a new property to core-site.xml:

<property>
  <name>hadoop.tmp.dir</name>
  <value>/path/for/node/${user.name}</value>
</property>
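Before restarting, it can be worth double-checking that the chosen directory really lies outside /tmp. A minimal shell sketch (the path below mirrors the config example and is an assumption; adjust to your installation):

```shell
# Returns success only for directories that survive a reboot,
# i.e. anything not under /tmp.
is_safe_tmp_dir() {
  case "$1" in
    /tmp|/tmp/*) return 1 ;;  # cleaned on reboot on most Linux servers
    *)           return 0 ;;
  esac
}

if is_safe_tmp_dir "/path/for/node/${USER}"; then
  echo "ok: directory survives a reboot"
else
  echo "unsafe: directory is cleaned on reboot"
fi
```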


Restart the secondary namenode and you'll be safe.
You should do the same in your HBase configuration (hbase-site.xml):


<property>
  <name>hbase.tmp.dir</name>
  <value>/path/for/node/${user.name}</value>
</property>
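A quick way to audit both files at once is to grep for values that still point into /tmp. A small sketch, assuming typical config locations (/etc/hadoop/conf and /etc/hbase/conf; adjust to your installation):

```shell
# Warn about any config file whose <value> still points into /tmp.
check_conf() {
  if grep -q '<value>/tmp' "$1" 2>/dev/null; then
    echo "WARNING: $1 still uses a /tmp-based directory"
  else
    echo "ok: $1"
  fi
}

check_conf /etc/hadoop/conf/core-site.xml   # assumed path
check_conf /etc/hbase/conf/hbase-site.xml   # assumed path
```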
