
HDFS Lab (4): Cluster Operations

Cluster Setup


http://hadoop.apache.org/docs/current/hadoop-project-dist/hadoop-common/ClusterSetup.html

User Guide

http://hadoop.apache.org/docs/current/hadoop-project-dist/hadoop-hdfs/HdfsUserGuide.html

To configure the Hadoop daemons individually, administrators can set per-daemon JVM options through the following environment variables:

Daemon                          Environment Variable
NameNode                        HDFS_NAMENODE_OPTS
DataNode                        HDFS_DATANODE_OPTS
Secondary NameNode              HDFS_SECONDARYNAMENODE_OPTS
ResourceManager                 YARN_RESOURCEMANAGER_OPTS
NodeManager                     YARN_NODEMANAGER_OPTS
WebAppProxy                     YARN_PROXYSERVER_OPTS
Map Reduce Job History Server   MAPRED_HISTORYSERVER_OPTS
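
These variables are typically set in etc/hadoop/hadoop-env.sh. A minimal sketch, assuming you want a larger NameNode heap and a heap dump on out-of-memory (the heap values here are illustrative, not recommendations):

# In etc/hadoop/hadoop-env.sh -- per-daemon JVM options (example values)
export HDFS_NAMENODE_OPTS="-Xmx4g -XX:+HeapDumpOnOutOfMemoryError"
export HDFS_DATANODE_OPTS="-Xmx2g"

Options set this way apply only to the named daemon, unlike HADOOP_OPTS, which applies to all of them; daemons pick up the new values on their next restart.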

To start a Hadoop cluster you will need to start both the HDFS and YARN daemons.

The first time you bring up HDFS, it must be formatted. Format a new distributed filesystem as hdfs:

[hdfs]$ $HADOOP_HOME/bin/hdfs namenode -format
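
If you want to confirm where the formatted NameNode metadata was written, you can query the configured value of dfs.namenode.name.dir (a quick check; if the property is unset it falls back to a directory under hadoop.tmp.dir):

[hdfs]$ $HADOOP_HOME/bin/hdfs getconf -confKey dfs.namenode.name.dir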

Start the HDFS NameNode with the following command on the designated node as hdfs:

[hdfs]$ $HADOOP_HOME/bin/hdfs --daemon start namenode

Start an HDFS DataNode with the following command on each designated node as hdfs:

[hdfs]$ $HADOOP_HOME/bin/hdfs --daemon start datanode
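
After the DataNodes start, a quick way to verify that they have registered with the NameNode is the dfsadmin report, which lists live and dead DataNodes along with capacity figures:

[hdfs]$ $HADOOP_HOME/bin/hdfs dfsadmin -report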

If etc/hadoop/workers and ssh trusted access is configured (see Single Node Setup), all of the HDFS processes can be started with a utility script. As hdfs:

[hdfs]$ $HADOOP_HOME/sbin/start-dfs.sh
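
As a sketch, etc/hadoop/workers is simply a list of worker hostnames, one per line; the names below are placeholders for your own hosts:

worker1.example.com
worker2.example.com
worker3.example.com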

Start the YARN ResourceManager with the following command, run on the designated ResourceManager host as yarn:

[yarn]$ $HADOOP_HOME/bin/yarn --daemon start resourcemanager

Run a script to start a NodeManager on each designated host as yarn:

[yarn]$ $HADOOP_HOME/bin/yarn --daemon start nodemanager

Start a standalone WebAppProxy server. Run on the WebAppProxy server as yarn. If multiple servers are used with load balancing it should be run on each of them:

[yarn]$ $HADOOP_HOME/bin/yarn --daemon start proxyserver

If etc/hadoop/workers and ssh trusted access is configured (see Single Node Setup), all of the YARN processes can be started with a utility script. As yarn:

[yarn]$ $HADOOP_HOME/sbin/start-yarn.sh
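
Once YARN is up, you can confirm that the NodeManagers have registered with the ResourceManager by listing the cluster's nodes:

[yarn]$ $HADOOP_HOME/bin/yarn node -list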

Start the MapReduce JobHistory Server with the following command, run on the designated server as mapred:

[mapred]$ $HADOOP_HOME/bin/mapred --daemon start historyserver
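
With everything started, a simple sanity check is to run jps on each host and confirm the expected Java processes are present. On a small cluster where one node carries several roles, the output looks roughly like this (PIDs are illustrative):

$ jps
2101 NameNode
2345 DataNode
2567 SecondaryNameNode
2789 ResourceManager
2890 NodeManager
3012 JobHistoryServer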

Hadoop Shutdown

Stop the NameNode with the following command, run on the designated NameNode as hdfs:

[hdfs]$ $HADOOP_HOME/bin/hdfs --daemon stop namenode

Run a script to stop a DataNode as hdfs:

[hdfs]$ $HADOOP_HOME/bin/hdfs --daemon stop datanode

If etc/hadoop/workers and ssh trusted access is configured (see Single Node Setup), all of the HDFS processes may be stopped with a utility script. As hdfs:

[hdfs]$ $HADOOP_HOME/sbin/stop-dfs.sh

Stop the ResourceManager with the following command, run on the designated ResourceManager as yarn:

[yarn]$ $HADOOP_HOME/bin/yarn --daemon stop resourcemanager

Run a script to stop a NodeManager on a worker as yarn:

[yarn]$ $HADOOP_HOME/bin/yarn --daemon stop nodemanager

If etc/hadoop/workers and ssh trusted access is configured (see Single Node Setup), all of the YARN processes can be stopped with a utility script. As yarn:

[yarn]$ $HADOOP_HOME/sbin/stop-yarn.sh

Stop the WebAppProxy server. Run on the WebAppProxy server as yarn. If multiple servers are used with load balancing it should be run on each of them:

[yarn]$ $HADOOP_HOME/bin/yarn --daemon stop proxyserver

Stop the MapReduce JobHistory Server with the following command, run on the designated server as mapred:

[mapred]$ $HADOOP_HOME/bin/mapred --daemon stop historyserver

Web Interfaces

Once the Hadoop cluster is up and running, check the web UIs of the components as described below:

Daemon                        Web Interface           Notes
NameNode                      http://nn_host:port/    Default HTTP port is 9870.
ResourceManager               http://rm_host:port/    Default HTTP port is 8088.
MapReduce JobHistory Server   http://jhs_host:port/   Default HTTP port is 19888.
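
The same endpoints can also be probed from the command line, which is convenient on headless servers; nn_host below stands in for your NameNode's hostname, and a healthy NameNode UI returns HTTP 200:

$ curl -s -o /dev/null -w "%{http_code}\n" http://nn_host:9870/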

