
tech 2025-05-01

Hadoop installation

After editing /etc/profile, check the Java version with java -version (after re-editing the file, run source /etc/profile first). Use javac and java -version together to confirm the Java environment; the JDK must be installed before Hadoop.

Extract: tar -zxvf hadoop-2.6.0-cdh5.14.2.tar.gz
Rename the extracted directory: mv hadoop-2.6.0-cdh5.14.2 hadoop

vi /etc/profile and add:

export JAVA_HOME=/opt/java8
export JRE_HOME=/opt/java8/jre
export CLASSPATH=$JAVA_HOME/lib/rt.jar:$JAVA_HOME/lib/tools.jar:$JAVA_HOME/lib/dt.jar
export HADOOP_HOME=/opt/hadoop
export HADOOP_MAPRED_HOME=$HADOOP_HOME
export HADOOP_COMMON_HOME=$HADOOP_HOME
export HADOOP_HDFS_HOME=$HADOOP_HOME
export YARN_HOME=$HADOOP_HOME
export HADOOP_COMMON_LIB_NATIVE_DIR=$HADOOP_HOME/lib/native
export HADOOP_INSTALL=$HADOOP_HOME
export PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/root/bin
export PATH=$JAVA_HOME/bin:$JRE_HOME/bin:$HADOOP_HOME/bin:$HADOOP_HOME/sbin:$PATH

After changing the file, run source /etc/profile, then hadoop version to check the Hadoop version; echo the variables (for example, echo $HADOOP_HOME) to verify them.

In etc/hadoop/hadoop-env.sh, comment out the line #export JAVA_HOME=${JAVA_HOME} and add below it: export JAVA_HOME=/opt/java8

Step 4: mv mapred-site.xml.template mapred-site.xml, then vi mapred-site.xml and set mapreduce.framework.name to yarn.

Step 5: vi yarn-site.xml
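Step 4 above only names the property and its value; a minimal mapred-site.xml matching it could look like the sketch below. It writes to the current directory purely for illustration — the real file belongs in $HADOOP_HOME/etc/hadoop/mapred-site.xml.

```shell
# Minimal mapred-site.xml for step 4: run MapReduce jobs on YARN.
# Written to ./mapred-site.xml for illustration only; copy or redirect
# it into $HADOOP_HOME/etc/hadoop/ on a real install.
cat > mapred-site.xml <<'EOF'
<configuration>
  <property>
    <name>mapreduce.framework.name</name>
    <value>yarn</value>
  </property>
</configuration>
EOF
```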

<configuration>
  <!-- Site specific YARN configuration properties -->
  <property>
    <name>yarn.resourcemanager.hostname</name>
    <value>192.168.5.100</value>
  </property>
  <property>
    <name>yarn.nodemanager.aux-services</name>
    <value>mapreduce_shuffle</value>
  </property>
</configuration>
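The two property names in this file are easy to mistype (e.g. "aux-localhost" instead of "aux-services"). A small helper can catch that before a fruitless restart cycle; the function name check_yarn_site below is my own, not part of Hadoop.

```shell
# Sanity-check a yarn-site.xml: succeeds only when both expected
# property names appear in the given file. Mistyped names (such as
# yarn.nodemanager.aux-localhost) make this fail, flagging the file.
check_yarn_site() {
  grep -q '<name>yarn.resourcemanager.hostname</name>' "$1" &&
  grep -q '<name>yarn.nodemanager.aux-services</name>' "$1"
}
```

Run it as check_yarn_site etc/hadoop/yarn-site.xml from $HADOOP_HOME; a non-zero exit status means at least one property name is wrong or missing.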

Generate a key: ssh-keygen, then cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys. Set the hostname with hostnamectl set-hostname <name>, then ssh hadoop001 and answer yes.

Format the NameNode (two equivalent commands): hdfs namenode -format or hadoop namenode -format. Every time /etc/profile or the configs change, delete tmp and format again.

Start HDFS: start-dfs.sh — jps should now show 3 daemons plus Jps. which jps shows where jps lives; jps shows which processes are running. Start YARN: start-yarn.sh — jps now shows 5 daemons plus Jps. (If jps is "command not found", the environment variables in /etc/profile are wrong.) stop-all.sh stops everything (or stop-yarn.sh / stop-dfs.sh individually); afterwards jps should show only Jps itself, otherwise kill the leftover processes. start-all.sh starts everything again; jps showing five processes besides Jps is normal. If something is wrong: rm -rf tmp, then start-all.sh and check jps again.

Open <ip>:50075 in a browser to view the web UI. hadoop fs -ls / lists HDFS; search online for the other HDFS commands to get familiar with them.

vi etc/hadoop/yarn-site.xml (note: no leading / before etc — the path is relative to $HADOOP_HOME):
yarn.nodemanager.aux-services.mapreduce.shuffle.class = org.apache.hadoop.mapred.ShuffleHandler

Every time a config file changes: rm -rf tmp, format once with hadoop namenode -format, then stop-all.sh, start-all.sh, and jps to check the processes.

hadoop fs -mkdir /test creates a directory, now visible in the web UI. A timeout here does not necessarily mean something is wrong.

vi etc/hadoop/core-site.xml (no leading / before etc):
hadoop.proxyuser.root.hosts
hadoop.proxyuser.root.groups

vi etc/hadoop/hdfs-site.xml:
dfs.namenode.secondary.http-address = 192.168.5.100:50090 (IP address or hostname)
192.168.5.100:8088 also opens a web UI.

vi etc/hadoop/mapred-site.xml:
mapreduce.jobhistory.address = 192.168.5.100:10020 (hostname/IP)
mapreduce.jobhistory.webapp.address = 192.168.5.100:19888 (hostname/IP)

vi etc/hadoop/yarn-site.xml:
yarn.resourcemanager.hostname = hadoop001 (the hostname)
yarn.nodemanager.aux-services = mapreduce_shuffle
yarn.nodemanager.aux-services.mapreduce.shuffle.class = org.apache.hadoop.mapred.ShuffleHandler
yarn.log-aggregation-enable = true
yarn.log-aggregation.retain-seconds = 604800

vi etc/hadoop/slaves:
hadoop001

Then: rm -rf tmp, rm -rf logs, hadoop namenode -format.
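The jps counts above (3+1 after start-dfs.sh, 5+1 after start-all.sh) can be checked mechanically. The sketch below is an illustrative helper of my own (missing_daemons is not a Hadoop command): it reads jps output on stdin and prints whichever of the five expected daemons is absent.

```shell
# After start-all.sh, a healthy single-node install shows these five
# daemons in jps (plus the Jps process itself). This helper takes jps
# output on stdin and prints the name of every missing daemon.
missing_daemons() {
  local out daemons
  daemons="NameNode DataNode SecondaryNameNode ResourceManager NodeManager"
  out=$(cat)
  for d in $daemons; do
    # -w matches whole words, so "SecondaryNameNode" does not
    # accidentally satisfy the check for "NameNode".
    printf '%s\n' "$out" | grep -qw "$d" || echo "$d"
  done
}
```

Use it as jps | missing_daemons: empty output means all five are up; any printed name points at the daemon to investigate (often fixed by the rm -rf tmp and reformat cycle described above).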
