
Notes on Installing Hadoop 3.0.0 on Ubuntu 16.04


Install JDK 8

Install Oracle JDK via PPA

sudo apt-add-repository ppa:webupd8team/java
sudo apt-get update
sudo apt-get install oracle-java8-installer
export JAVA_HOME=/usr/lib/jvm/java-8-oracle

Or install OpenJDK instead

sudo apt-get install default-jdk

Create a hadoop user

sudo useradd -m hadoop -s /bin/bash
sudo passwd hadoop
sudo adduser hadoop sudo

Install OpenSSH Server

sudo apt-get install openssh-server

Set up passwordless SSH login:

cd ~/.ssh/
ssh-keygen -t rsa
cat ./id_rsa.pub >> ./authorized_keys
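
If ~/.ssh does not exist yet, running ssh localhost once will create it. After appending the key, verify that passwordless login works (a minimal check; the chmod is only needed if sshd rejects the file's permissions):

chmod 600 ./authorized_keys
ssh localhost      # should log in without asking for a password
exit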

Download Hadoop


On the Apache Hadoop releases page, choose the binary package of the 3.0 stable release, download it, and extract it (next section).
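
For example, the 3.0.0 binary archive can be fetched from the Apache archive (the URL is an assumption; substitute a current mirror if preferred):

wget https://archive.apache.org/dist/hadoop/common/hadoop-3.0.0/hadoop-3.0.0.tar.gz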

Install Hadoop

tar -xzvf hadoop-3.0.0.tar.gz
sudo mv hadoop-3.0.0 /opt/hadoop
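
If the extraction and move were done with sudo or as another user, it may help to give the hadoop user created earlier ownership of the install tree, so that Hadoop can later create its tmp and log directories there (a suggestion not in the original notes):

sudo chown -R hadoop:hadoop /opt/hadoop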

Add Hadoop to the PATH

export PATH=$PATH:/opt/hadoop/sbin:/opt/hadoop/bin
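
This export only applies to the current shell. A minimal sketch making the PATH (and JAVA_HOME) persistent by appending them to the hadoop user's ~/.bashrc, assuming the paths used above:

echo 'export JAVA_HOME=/usr/lib/jvm/java-8-oracle' >> ~/.bashrc
echo 'export PATH=$PATH:/opt/hadoop/sbin:/opt/hadoop/bin' >> ~/.bashrc
source ~/.bashrc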

Set the JDK environment variable for Hadoop. Find the JDK base path first:

readlink -f /usr/bin/java | sed "s:bin/java::"
/usr/lib/jvm/java-8-oracle/jre/

cd /opt/hadoop                           # the remaining commands are run from the install directory
sudo vi ./etc/hadoop/hadoop-env.sh       # add the following line:
export JAVA_HOME=/usr/lib/jvm/java-8-oracle/jre/

Run Hadoop

./bin/hadoop

mkdir ~/input
cp /opt/hadoop/etc/hadoop/*.xml ~/input

./bin/hadoop jar ./share/hadoop/mapreduce/hadoop-mapreduce-examples-3.0.0.jar grep ~/input ~/grep_example 'principal[.]*'
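
To inspect the result of this standalone run (output directory as given above):

cat ~/grep_example/*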

Pseudo-distributed configuration

vi /opt/hadoop/etc/hadoop/core-site.xml
<configuration>
        <property>
             <name>hadoop.tmp.dir</name>
             <value>file:/opt/hadoop/tmp</value>
             <description>Abase for other temporary directories.</description>
        </property>
        <property>
             <name>fs.defaultFS</name>
             <value>hdfs://localhost:9000</value>
        </property>
</configuration>
vi /opt/hadoop/etc/hadoop/hdfs-site.xml
<configuration>
        <property>
             <name>dfs.replication</name>
             <value>1</value>
        </property>
        <property>
             <name>dfs.namenode.name.dir</name>
             <value>file:/opt/hadoop/tmp/dfs/name</value>
        </property>
        <property>
             <name>dfs.datanode.data.dir</name>
             <value>file:/opt/hadoop/tmp/dfs/data</value>
        </property>
</configuration>

Format the NameNode:
./bin/hdfs namenode -format

Start the NameNode and DataNode daemons:
./sbin/start-dfs.sh
./sbin/stop-dfs.sh      # stop the daemons again when finished

Run jps to check the running processes; NameNode, DataNode, and SecondaryNameNode should be listed.
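
Another quick sanity check, assuming the default ports of Hadoop 3.x (the NameNode web UI moved from 50070 to 9870):

./bin/hdfs dfsadmin -report      # should report one live datanode
# NameNode web UI: http://localhost:9870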

Run a pseudo-distributed Hadoop example

Create a user directory in HDFS:
./bin/hdfs dfs -mkdir -p /user/hadoop

Copy the sample XML files into the distributed filesystem as input:
./bin/hdfs dfs -mkdir input
./bin/hdfs dfs -put /opt/hadoop/etc/hadoop/*.xml input

List the copied files:
./bin/hdfs dfs -ls input

Run the MapReduce job in pseudo-distributed mode:
./bin/hadoop jar /opt/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-examples-*.jar grep input output 'dfs[a-z.]+'

View the results:
./bin/hdfs dfs -cat output/*
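
Hadoop refuses to start a job whose output directory already exists, so remove it before re-running the example:

./bin/hdfs dfs -rm -r output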

Copy the output back to the local filesystem:
./bin/hdfs dfs -get output /opt/hadoop/output

Start YARN

vi /opt/hadoop/etc/hadoop/mapred-site.xml

<configuration>
        <property>
             <name>mapreduce.framework.name</name>
             <value>yarn</value>
        </property>
</configuration>

vi /opt/hadoop/etc/hadoop/yarn-site.xml

<configuration>
        <property>
             <name>yarn.nodemanager.aux-services</name>
             <value>mapreduce_shuffle</value>
        </property>
</configuration>
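
The upstream Hadoop 3.x single-node guide additionally whitelists a few environment variables so that containers inherit them. If YARN jobs fail with classpath errors, adding this property to yarn-site.xml may help (taken from the Apache documentation, not part of the original notes):

        <property>
             <name>yarn.nodemanager.env-whitelist</name>
             <value>JAVA_HOME,HADOOP_COMMON_HOME,HADOOP_HDFS_HOME,HADOOP_CONF_DIR,CLASSPATH_PREPEND_DISTCACHE,HADOOP_YARN_HOME,HADOOP_MAPRED_HOME</value>
        </property>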

Start YARN:

./sbin/start-yarn.sh                                  # start the ResourceManager and NodeManager daemons
./sbin/mr-jobhistory-daemon.sh start historyserver    # start the history server so job status can be viewed in the web UI

Stop YARN:

./sbin/stop-yarn.sh
./sbin/mr-jobhistory-daemon.sh stop historyserver
