Apache Hadoop - How to Install a Three Nodes Cluster (4)
Configure Secondary Namenode:
Configuring the secondary namenode is easy. In this tutorial we choose the hadoop2 server as the secondary namenode, so on hadoop2, open the hdfs-site.xml file:
# vi hdfs-site.xml

Add the following property:

<property>
  <name>fs.checkpoint.dir</name>
  <value>/data/snn1,/data/snn2</value>
  <description>A comma separated list of paths. Use the list of
  directories from $FS_CHECKPOINT_DIR. For example,
  /grid/hadoop/hdfs/snn,/grid1/hadoop/hdfs/snn,/grid2/hadoop/hdfs/snn
  </description>
</property>
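If you maintain configuration on several nodes, the property block above can be generated by a script instead of edited by hand. A minimal sketch, using a hypothetical mk_property helper (not part of Hadoop):

```shell
# Hypothetical helper (not part of Hadoop): emit an hdfs-site.xml
# <property> block for a given property name and value.
mk_property() {
  printf '<property>\n  <name>%s</name>\n  <value>%s</value>\n</property>\n' "$1" "$2"
}

mk_property fs.checkpoint.dir /data/snn1,/data/snn2
```

The output can then be pasted (or redirected) into hdfs-site.xml on each node.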
Then save the hdfs-site.xml file and run the following command:

$HADOOP_PREFIX/sbin/hadoop-daemon.sh --config $HADOOP_CONF_DIR --script hdfs start secondarynamenode

Open your web browser and go to "http://hadoop2:50090" to check the status of the Secondary Namenode.
Troubleshooting:
1. Stack guard warning:
If you see warnings like this:
Java HotSpot(TM) 64-Bit Server VM warning: You have loaded library /opt/hadoop-2.2.0/lib/native/libhadoop.so.1.0.0 which might have disabled stack guard. The VM will try to fix the stack guard now. It's highly recommended that you fix the library with 'execstack -c <libfile>', or link it with '-z noexecstack'.
Run the following command to fix the warning:
# execstack -c /opt/hadoop-2.2.0/lib/native/libhadoop.so.1.0.0
2. URI path warning:
14/02/19 11:22:23 WARN common.Util: Path /data/nn1 should be specified as a URI in configuration files. Please update hdfs configuration.

This warning means that the value of "dfs.namenode.name.dir" should be "file:///data/nn1" and not simply "/data/nn1"; that is, it must be a proper URI. The warning is harmless in its present form, but I recommend changing the value to avoid incompatibility in the future, when direct path support may be removed or cause wrong filesystem assumptions.
To get rid of this warning, change:
<property>
  <name>dfs.namenode.name.dir</name>
  <value>/data/nn1,/data/nn2</value>
</property>

to:

<property>
  <name>dfs.namenode.name.dir</name>
  <value>file:///data/nn1,file:///data/nn2</value>
</property>
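If several directory properties need the same fix, the plain paths can be converted to file:// URIs mechanically. A minimal sketch with a hypothetical to_file_uris helper (not a Hadoop tool):

```shell
# Hypothetical helper: convert a comma-separated list of plain paths
# into the file:// URI form that dfs.namenode.name.dir expects.
to_file_uris() {
  # prepend file:// to the first path, then to each path after a comma
  echo "$1" | sed 's|^|file://|; s|,|,file://|g'
}

to_file_uris "/data/nn1,/data/nn2"
# prints: file:///data/nn1,file:///data/nn2
```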
3. JobHistoryServer can’t create done directory:
13/12/31 11:21:10 INFO hs.JobHistoryServer: registered UNIX signal handlers for [TERM, HUP, INT]
13/12/31 11:21:12 INFO hs.JobHistory: JobHistory Init
13/12/31 11:21:12 ERROR security.UserGroupInformation: PriviledgedActionException as:mapred (auth:SIMPLE) cause:org.apache.hadoop.fs.UnsupportedFileSystemException: No AbstractFileSystem for scheme: hdptest.themarker.com
13/12/31 11:21:12 INFO service.AbstractService: Service org.apache.hadoop.mapreduce.v2.hs.HistoryFileManager failed in state INITED; cause: org.apache.hadoop.yarn.exceptions.YarnRuntimeException: Error creating done directory: [null]
    at org.apache.hadoop.mapreduce.v2.hs.HistoryFileManager.serviceInit(HistoryFileManager.java:503)
It is possible that in your core-site.xml file, the fs.defaultFS value was set to just "NAMENODE" instead of "hdfs://NAMENODE:PORT". Try prepending "hdfs://" and appending ":8020" as the port. You should also check directory permissions in HDFS, and make sure you are running the JobHistoryServer as $MAPRED_USER (in my case, mapred).
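The fs.defaultFS fix can be expressed as a small rule. A sketch with a hypothetical normalize_defaultfs helper (8020 is the default NameNode RPC port in Hadoop 2.x; the helper itself is not part of Hadoop):

```shell
# Hypothetical helper: turn a bare hostname into a proper
# fs.defaultFS value by adding the hdfs:// scheme and, when no
# port is given, the default NameNode RPC port 8020.
normalize_defaultfs() {
  case "$1" in
    hdfs://*) echo "$1" ;;            # already a proper URI
    *:*)      echo "hdfs://$1" ;;     # host:port given, add scheme only
    *)        echo "hdfs://$1:8020" ;; # bare host, add scheme and port
  esac
}

normalize_defaultfs NAMENODE
# prints: hdfs://NAMENODE:8020
```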
Assuming you are using /mr-history/tmp as "mapreduce.jobhistory.intermediate-done-dir" and /mr-history/done as "mapreduce.jobhistory.done-dir", run:
# su $HDFS_USER
$ hadoop fs -mkdir -p /mr-history/tmp
$ hadoop fs -chmod -R 1777 /mr-history/tmp
$ hadoop fs -mkdir -p /mr-history/done
$ hadoop fs -chmod -R 1777 /mr-history/done
$ hadoop fs -chown -R $MAPRED_USER:$HDFS_USER /mr-history
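The commands above can be wrapped in a reusable function; `hadoop fs -mkdir -p` and `-chmod`/`-chown` are idempotent, so it is safe to rerun. A sketch, assuming the HADOOP_FS override and the setup_mr_history name are our own conventions (not part of Hadoop):

```shell
# Hypothetical wrapper around the JobHistoryServer directory setup.
# HADOOP_FS defaults to "hadoop fs" but can be overridden, e.g. for
# a dry run; run as $HDFS_USER against a real cluster.
HADOOP_FS="${HADOOP_FS:-hadoop fs}"

setup_mr_history() {
  for dir in /mr-history/tmp /mr-history/done; do
    $HADOOP_FS -mkdir -p "$dir"       # create if missing
    $HADOOP_FS -chmod -R 1777 "$dir"  # world-writable with sticky bit
  done
  $HADOOP_FS -chown -R "$MAPRED_USER:$HDFS_USER" /mr-history
}
```

For a dry run, set HADOOP_FS=echo before calling setup_mr_history to print the commands instead of executing them.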