Friday, March 13, 2015

App Timeline Server not starting Or Downgrade Ambari

This can also be used as a downgrade guide from Ambari 1.7.0 to 1.6.1.

If you're using HDP 2.1.2 with Ambari 1.7.0 and your App Timeline Server does not start, you've come to the right place. 
Symptoms: starting the ATS from Ambari throws:
Fail: Execution of 'ls /var/run/hadoop-yarn/yarn/ >/dev/null 2>&1 && ps `cat /var/run/hadoop-yarn/yarn/` >/dev/null 2>&1' returned 1.
All other services work fine, and I had already set the recommended store class for HDP 2.1.2: yarn.timeline-service.store-class = org.apache.hadoop.yarn.server.applicationhistoryservice.timeline.LeveldbTimelineStore

The History Server is running fine. 
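You can reproduce the liveness check Ambari performs by hand. Below is a sketch, not the exact Ambari script: check_proc is my own helper, and the pid file name shown is an assumption based on the usual HDP 2.1 naming, so verify the actual file on your node first.

```shell
# check_proc PIDFILE: succeed if PIDFILE exists and the pid it holds is alive.
# This mirrors the kind of check Ambari's status command runs (sketch only).
check_proc() {
  [ -f "$1" ] && ps -p "$(cat "$1")" >/dev/null 2>&1
}

# Assumed default ATS pid file location on HDP 2.1 -- verify on your nodes.
check_proc /var/run/hadoop-yarn/yarn/yarn-yarn-timelineserver.pid \
  && echo "ATS running" || echo "ATS not running"
```

If this says the ATS is not running while Ambari thinks otherwise (or vice versa), a stale pid file is a likely culprit.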

Not the ideal solution, but it worked for me. Since this seemed related to the Ambari version, I reverted to 1.6.1. Steps:
1. Stopped and removed the Ambari server and all agents
2. Deleted the Ambari repo file and any remaining Ambari directories
3. Downloaded and installed Ambari 1.6.1
4. Re-configured/installed the cluster (the HDP version remained the same)
5. Formatted the NameNode and HBase
6. Set the config: yarn.timeline-service.store-class = org.apache.hadoop.yarn.server.applicationhistoryservice.timeline.LeveldbTimelineStore
7. Started the ATS; it failed. Checked the history server logs and found:

Permission denied on /hadoop/yarn/timeline/leveldb-timeline-store.ldb/LOCK
8. Deleted the leveldb-timeline-store.ldb directory
9. Restarted the ATS; it worked fine!
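The removal steps above can be sketched as a short script. This is a sketch assuming a yum-based OS (CentOS/RHEL) and the standard ambari-server/ambari-agent package names; run() is my own wrapper that only echoes commands while DRY_RUN=1 (the default), so you can preview everything before running it for real with DRY_RUN=0.

```shell
# run CMD...: echo the command in dry-run mode, execute it otherwise.
run() { if [ "${DRY_RUN:-1}" = 1 ]; then echo "+ $*"; else "$@"; fi; }

# Stop and remove the Ambari server and agent (steps 1-2).
run ambari-server stop
run ambari-agent stop
run yum erase -y ambari-server ambari-agent
# Directory paths are the usual defaults -- adjust to your install.
run rm -rf /etc/ambari-server /etc/ambari-agent \
           /var/lib/ambari-server /var/lib/ambari-agent
# Then fetch the Ambari 1.6.1 repo file from Hortonworks (step 3)
# and reinstall:
run yum install -y ambari-server ambari-agent
```

Run it once with the default dry-run, review the echoed commands, then set DRY_RUN=0 on each node.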

I never ran into this issue on other cluster installs using HDP 2.2 with Ambari 1.7.0.

Changing Storage in Hadoop

After installing an HDP 2.1.2 cluster, I noticed that the nodes were not using the drive partition planned for storage. The Linux boxes each had an OS partition and a data partition, assigned during OS install: one set for the OS and the other for data storage.

Somehow the data storage was not available during cluster installation, most probably because it was not mounted. The following steps change the HDFS storage location, along with the drive configuration needed.

First, format and optimize the partition or drive.
mkfs -t ext4 -m 1 -O dir_index,extent,sparse_super /dev/sdb

Create a mount directory
mkdir -p /disk/sdb1

Mount with optimized settings
mount -o noatime,nodiratime /dev/sdb /disk/sdb1

Append to the fstab file so that the partition is mounted on boot (critical)
echo "/dev/sdb /disk/sdb1 ext4 defaults,noatime,nodiratime 1 2" >> /etc/fstab
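A typo in fstab can leave a node unbootable, so it's worth verifying the entry before rebooting. fstab_has below is a small helper of my own (not a standard tool) that checks a given fstab file for a device entry; after it passes, running `mount -a` as root is the real test, since it mounts everything listed in fstab and surfaces bad entries immediately.

```shell
# fstab_has FSTAB DEVICE: succeed if DEVICE has an entry in FSTAB.
fstab_has() { grep -q "^$2[[:space:]]" "$1"; }

fstab_has /etc/fstab /dev/sdb && echo "entry present" || echo "entry missing"
# As root, the definitive check (errors here mean a bad entry):
#   mount -a
# Then confirm the options took effect:
#   mount | grep /disk/sdb1   # should show noatime,nodiratime
```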

Add folder for hdfs data
mkdir -p /disk/sdb1/data

Location to store Namenode data
mkdir -p /disk/sdb1/hdfs/namenode

Location to store Secondary Namenode
mkdir -p /disk/sdb1/hdfs/namesecondary

Set these in hdfs-site.xml or through Ambari:
dfs.namenode.name.dir = /disk/sdb1/hdfs/namenode
dfs.namenode.checkpoint.dir = /disk/sdb1/hdfs/namesecondary
dfs.datanode.data.dir = /disk/sdb1/data
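If you're editing hdfs-site.xml directly rather than through Ambari, the entries look like the fragment below. This assumes the standard HDFS property names for the NameNode, Secondary NameNode, and DataNode directories; Ambari writes the same properties for you.

```xml
<!-- hdfs-site.xml fragment: point HDFS storage at the new mount -->
<property>
  <name>dfs.namenode.name.dir</name>
  <value>/disk/sdb1/hdfs/namenode</value>
</property>
<property>
  <name>dfs.namenode.checkpoint.dir</name>
  <value>/disk/sdb1/hdfs/namesecondary</value>
</property>
<property>
  <name>dfs.datanode.data.dir</name>
  <value>/disk/sdb1/data</value>
</property>
```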

Set permissions (the NameNode directories need hdfs ownership too)
sudo chown -R hdfs:hadoop /disk/sdb1/data /disk/sdb1/hdfs

Format the namenode (as the hdfs user; this wipes existing HDFS metadata)
sudo -u hdfs hadoop namenode -format

Start the namenode through Ambari, or from the CLI:
sudo -u hdfs hadoop-daemon.sh start namenode

Start all nodes and services. The new drive should be listed.
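To confirm HDFS actually sees the new drive's space, check `hdfs dfsadmin -report`. capacity_of below is a tiny helper of my own (not an hdfs tool) that extracts the first "Configured Capacity" line from the report; the capacity figure in the example is made up for illustration.

```shell
# capacity_of: read a dfsadmin report on stdin, print the configured capacity.
capacity_of() { awk -F': ' '/^Configured Capacity/ {print $2; exit}'; }

# On a live cluster (assumes the hdfs client is on PATH):
#   hdfs dfsadmin -report | capacity_of
# Offline illustration with sample report text:
echo "Configured Capacity: 1979120929996 (1.80 TB)" | capacity_of
```

If the reported capacity still reflects only the OS partition, re-check the mount and the dfs.datanode.data.dir setting before digging further.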