Operating system: macOS (Mac OS X)

I. Preparation

1. JDK 1.8

  Download: http://www.oracle.com/technetwork/java/javase/downloads/jdk8-downloads-2133151.html

2. Hadoop (CDH)

  Download: https://archive.cloudera.com/cdh5/cdh/5/

  Version installed here: hadoop-2.6.0-cdh5.9.2.tar.gz

II. Configure SSH (passwordless login)

1. Open an iTerm2 terminal and run: ssh-keygen -t rsa, pressing Enter through the prompts   -- generates a key pair
2. cd ~/.ssh && cat id_rsa.pub >> authorized_keys         -- authorizes your public key for passwordless login on this machine
3. chmod 600 authorized_keys      -- restrict permissions (sshd rejects looser ones)
4. ssh localhost                              -- passwordless login; if the last-login time is printed, it worked
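The four steps above can be sketched as one script. To keep the demo side-effect free it writes into a scratch directory; point `SSH_DIR` at `"$HOME/.ssh"` to apply it for real (the scratch path is illustrative).

```shell
# Sketch of the SSH-key steps above. Uses a scratch directory so it will
# not touch your real ~/.ssh; set SSH_DIR="$HOME/.ssh" to use it for real.
SSH_DIR="$(mktemp -d)"

# 1. Generate an RSA key pair non-interactively (empty passphrase).
ssh-keygen -q -t rsa -P '' -f "$SSH_DIR/id_rsa"

# 2. Authorize the public key for local passwordless login.
cat "$SSH_DIR/id_rsa.pub" >> "$SSH_DIR/authorized_keys"

# 3. Restrict permissions; sshd refuses group/world-writable files.
chmod 600 "$SSH_DIR/authorized_keys"

ls "$SSH_DIR"   # lists id_rsa, id_rsa.pub, authorized_keys
```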

III. Configure Hadoop & environment variables

1. Create the Hadoop directories & unpack

  mkdir -p work/install/hadoop-cdh5.9.2 -- Hadoop base directory
  mkdir -p work/install/hadoop-cdh5.9.2/current/tmp work/install/hadoop-cdh5.9.2/current/nmnode work/install/hadoop-cdh5.9.2/current/dtnode -- Hadoop temp, NameNode, and DataNode directories

  tar -xvf hadoop-2.6.0-cdh5.9.2.tar.gz    -- unpack the tarball

2. Configure environment variables in .bash_profile

JAVA_HOME="/Library/Java/JavaVirtualMachines/jdk1.8.0_152.jdk/Contents/Home"
HADOOP_HOME="/Users/kimbo/work/install/hadoop-cdh5.9.2/hadoop-2.6.0-cdh5.9.2"

PATH="/usr/local/bin:~/cmd:$JAVA_HOME/bin:$HADOOP_HOME/bin:$HADOOP_HOME/sbin:$PATH"
CLASSPATH=".:$JAVA_HOME/lib/dt.jar:$JAVA_HOME/lib/tools.jar"

export JAVA_HOME PATH CLASSPATH HADOOP_HOME


  source .bash_profile   -- apply the environment variables
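A quick sanity check that the variables took effect — a sketch that only reads the environment and reports what it finds:

```shell
# Report whether the key variables are set and whether hadoop is on PATH.
for v in JAVA_HOME HADOOP_HOME; do
  eval "val=\${$v}"
  if [ -n "$val" ]; then
    echo "$v=$val"
  else
    echo "$v is NOT set -- re-check .bash_profile"
  fi
done
command -v hadoop >/dev/null 2>&1 && hadoop version | head -1 \
  || echo "hadoop not on PATH yet"
```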

3. Edit the configuration files (important)

  cd $HADOOP_HOME/etc/hadoop

  • core-site.xml

<configuration>
  <property>
    <name>hadoop.tmp.dir</name>
    <value>/Users/zhangshaosheng/work/install/hadoop-cdh5.9.2/current/tmp</value>
  </property>
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://localhost:8020</value>
  </property>
  <property>
    <name>fs.trash.interval</name>
    <value>4320</value>
    <description> 3 days = 60min*24h*3day </description>
  </property>
</configuration>


  • hdfs-site.xml

<configuration>
  <property>
    <name>dfs.namenode.name.dir</name>
    <value>/Users/zhangshaosheng/work/install/hadoop-cdh5.9.2/current/nmnode</value>
  </property>
  <property>
    <name>dfs.datanode.data.dir</name>
    <value>/Users/zhangshaosheng/work/install/hadoop-cdh5.9.2/current/dtnode</value>
  </property>
  <property>
    <name>dfs.datanode.http.address</name>
    <value>localhost:50075</value>
  </property>
  <property>
    <name>dfs.replication</name>
    <value>1</value>
  </property>
  <property>
    <name>dfs.permissions.enabled</name>
    <value>false</value>
  </property>
</configuration>


  • yarn-site.xml

<configuration>
  <property>
    <name>yarn.nodemanager.aux-services</name>
    <value>mapreduce_shuffle</value>
  </property>
  <property>
    <name>yarn.log-aggregation-enable</name>
    <value>true</value>
    <description>Whether to enable log aggregation</description>
  </property>
  <property>
    <name>yarn.nodemanager.remote-app-log-dir</name>
    <value>/Users/zhangshaosheng/work/install/hadoop-cdh5.9.2/current/tmp/yarn-logs</value>
    <description>Where to aggregate logs to.</description>
  </property>
  <property>
    <name>yarn.nodemanager.resource.memory-mb</name>
    <value>8192</value>
    <description>Amount of physical memory, in MB, that can be allocated
      for containers.</description>
  </property>
  <property>
    <name>yarn.nodemanager.resource.cpu-vcores</name>
    <value>2</value>
    <description>Number of CPU cores that can be allocated
      for containers.</description>
  </property>
  <property>
    <name>yarn.scheduler.minimum-allocation-mb</name>
    <value>1024</value>
    <description>The minimum allocation for every container request at the RM,
      in MBs. Memory requests lower than this won't take effect,
      and the specified value will get allocated at minimum.</description>
  </property>
  <property>
    <name>yarn.scheduler.maximum-allocation-mb</name>
    <value>2048</value>
    <description>The maximum allocation for every container request at the RM,
      in MBs. Memory requests higher than this won't take effect,
      and will get capped to this value.</description>
  </property>
  <property>
    <name>yarn.scheduler.minimum-allocation-vcores</name>
    <value>1</value>
    <description>The minimum allocation for every container request at the RM,
      in terms of virtual CPU cores. Requests lower than this won't take effect,
      and the specified value will get allocated the minimum.</description>
  </property>
  <property>
    <name>yarn.scheduler.maximum-allocation-vcores</name>
    <value>2</value>
    <description>The maximum allocation for every container request at the RM,
      in terms of virtual CPU cores. Requests higher than this won't take effect,
      and will get capped to this value.</description>
  </property>
</configuration>


  • mapred-site.xml

<configuration>
  <property>
    <name>mapreduce.jobtracker.address</name>
    <value>localhost:8021</value>
  </property>
  <property>
    <name>mapreduce.jobhistory.done-dir</name>
    <value>/Users/zhangshaosheng/work/install/hadoop-cdh5.9.2/current/tmp/job-history/</value>
  </property>
  <property>
    <name>mapreduce.framework.name</name>
    <value>yarn</value>
    <description>The runtime framework for executing MapReduce jobs.
    Can be one of local, classic or yarn.
    </description>
  </property>

  <property>
    <name>mapreduce.map.cpu.vcores</name>
    <value>1</value>
    <description>
        The number of virtual cores required for each map task.
    </description>
  </property>
  <property>
    <name>mapreduce.reduce.cpu.vcores</name>
    <value>1</value>
    <description>
        The number of virtual cores required for each reduce task.
    </description>
  </property>

  <property>
    <name>mapreduce.map.memory.mb</name>
    <value>1024</value>
    <description>Larger resource limit for maps.</description>
  </property>
  <property>
    <name>mapreduce.reduce.memory.mb</name>
    <value>1024</value>
    <description>Larger resource limit for reduces.</description>
  </property>

  <property>
    <name>mapreduce.map.java.opts</name>
    <value>-Xmx768m</value>
    <description>Heap-size for child jvms of maps.</description>
  </property>
  <property>
    <name>mapreduce.reduce.java.opts</name>
    <value>-Xmx768m</value>
    <description>Heap-size for child jvms of reduces.</description>
  </property>

  <property>
    <name>yarn.app.mapreduce.am.resource.mb</name>
    <value>1024</value>
    <description>The amount of memory the MR AppMaster needs.</description>
  </property>
</configuration>


  • hadoop-env.sh

export JAVA_HOME=${JAVA_HOME}    -- add the Java environment variable
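On macOS the JDK path can also be resolved dynamically instead of hard-coded — a sketch; `/usr/libexec/java_home` ships with macOS, and the guard makes the snippet fall through gracefully elsewhere:

```shell
# Resolve JAVA_HOME via the macOS java_home helper when it is available.
if [ -x /usr/libexec/java_home ]; then
  export JAVA_HOME="$(/usr/libexec/java_home -v 1.8)"
  echo "JAVA_HOME=$JAVA_HOME"
else
  echo "java_home helper not found (not macOS?); keeping JAVA_HOME as-is"
fi
```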

IV. Start

  1. Format the NameNode

    hdfs namenode -format

  If the hdfs command is not recognized, check that the environment variables are configured correctly.

  2. Start the daemons

    cd $HADOOP_HOME/sbin

    Run: start-all.sh and enter your password when prompted
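start-all.sh simply delegates to two scripts; starting HDFS and YARN separately makes failures easier to localize. A sketch — the guard keeps it from erroring on machines where Hadoop's sbin directory is not yet on PATH:

```shell
# Start HDFS and YARN in two steps instead of start-all.sh.
if command -v start-dfs.sh >/dev/null 2>&1; then
  start-dfs.sh    # NameNode, DataNode, SecondaryNameNode
  start-yarn.sh   # ResourceManager, NodeManager
else
  echo "Hadoop sbin scripts not on PATH; run them from \$HADOOP_HOME/sbin"
fi
```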

V. Verify

  1. Run jps in the terminal

    If the Hadoop daemon processes (NameNode, DataNode, SecondaryNameNode, ResourceManager, NodeManager) are all listed, everything is OK.
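Beyond checking processes with jps, an end-to-end smoke test confirms HDFS actually accepts reads and writes — a sketch; the file names are illustrative, and the guard skips the HDFS calls when the client is not on PATH:

```shell
# Write a small file into HDFS and read it back.
if command -v hdfs >/dev/null 2>&1; then
  echo "hello hadoop" > /tmp/smoke.txt
  hdfs dfs -mkdir -p /tmp/smoke
  hdfs dfs -put -f /tmp/smoke.txt /tmp/smoke/
  hdfs dfs -cat /tmp/smoke/smoke.txt    # should print back the file contents
else
  echo "hdfs not on PATH; skipping smoke test"
fi
```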

  2. Open the web UIs

    a)HDFS :  http://localhost:50070/dfshealth.html#tab-overview

      

    b)YARN Cluster:  http://localhost:8088/cluster

      

    c)YARN NodeManager: http://localhost:8042/node

    
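The same pages can be probed from the terminal with curl — a sketch; the ports are the defaults used in this article, and a short timeout keeps the loop fast when a daemon is down:

```shell
# Probe each web UI and report whether it responds.
for url in \
  http://localhost:50070/dfshealth.html \
  http://localhost:8088/cluster \
  http://localhost:8042/node
do
  if curl -s -m 2 -o /dev/null "$url"; then
    echo "up:   $url"
  else
    echo "down: $url"
  fi
done
```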

Reposted from: https://www.cnblogs.com/kimbo/p/8724062.html
