
Hadoop 2.x Ports Explained: NameNode and YARN Component Functions and Configuration

This article covers the key port assignments in a Hadoop environment and the main components of Hadoop 2.x.

First, the commonly used ports:

1. NameNode and related ports:
   - 9000: the NameNode service port, typically used for internal communication.
   - 8020: the RPC port through which clients request file-system metadata.
   - 50070: the HTTP port for the HDFS web UI, which provides a file-system view and management tools.
   - 50470: the HTTPS counterpart of 50070, for more secure access.
   - 50090: the SecondaryNameNode port, used for metadata checkpointing and consistency checks.
   - 8030–8033: ResourceManager ports, used by YARN for resource management and scheduling.

Next, the core modules of Hadoop 2.x:

- Hadoop Common: base libraries and services that support the other modules.
- Hadoop HDFS (the distributed file system): a highly reliable, high-throughput storage system composed of a NameNode and DataNodes.
- Hadoop MapReduce: a distributed offline parallel-computing framework responsible for task splitting, resource requests, and fault tolerance.
- Hadoop YARN: the next-generation framework that takes over job scheduling and cluster resource management from MapReduce 1.

The article also describes the HDFS architecture — the NameNode stores metadata, DataNodes store the actual data blocks, and the SecondaryNameNode periodically checkpoints the metadata — and the YARN architecture, in which the ResourceManager handles cluster resources and manages each application's ApplicationMaster, while a NodeManager on every node manages the Containers that run the actual tasks.

Finally, the article walks through a pseudo-distributed installation of Hadoop 2.7.1: disabling the firewall, setting the IP address, configuring the hosts mapping file, installing Java and Hadoop, and editing the core configuration files such as hadoop-env.sh and core-site.xml.

Correctly understanding and configuring these ports and components is essential to a stable Hadoop deployment. With this information, users can effectively set up and manage a Hadoop environment for big-data processing and analysis.
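As a rough illustration, the ports listed above map onto standard Hadoop 2.x configuration properties as sketched below. This is a minimal example, not a complete configuration: the hostname `localhost` and the choice of 9000 over 8020 are assumptions, and the values shown are the conventional defaults rather than requirements.

```xml
<!-- core-site.xml: the NameNode service/RPC address (9000 or 8020 by convention) -->
<property>
  <name>fs.defaultFS</name>
  <value>hdfs://localhost:9000</value>
</property>

<!-- hdfs-site.xml: web UI and SecondaryNameNode ports -->
<property>
  <name>dfs.namenode.http-address</name>
  <value>0.0.0.0:50070</value>
</property>
<property>
  <name>dfs.namenode.https-address</name>
  <value>0.0.0.0:50470</value>
</property>
<property>
  <name>dfs.namenode.secondary.http-address</name>
  <value>0.0.0.0:50090</value>
</property>

<!-- yarn-site.xml: the ResourceManager port block 8030-8033 -->
<property>
  <name>yarn.resourcemanager.scheduler.address</name>
  <value>0.0.0.0:8030</value>
</property>
<property>
  <name>yarn.resourcemanager.resource-tracker.address</name>
  <value>0.0.0.0:8031</value>
</property>
<property>
  <name>yarn.resourcemanager.address</name>
  <value>0.0.0.0:8032</value>
</property>
<property>
  <name>yarn.resourcemanager.admin.address</name>
  <value>0.0.0.0:8033</value>
</property>
```

Leaving these at their defaults is usually fine for a pseudo-distributed setup; they only need to be set explicitly when ports conflict with other services or when binding to a specific interface.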
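For the pseudo-distributed install the article describes, the one edit almost always required in hadoop-env.sh is an explicit JAVA_HOME, because daemons started over SSH do not inherit the login shell's environment. A sketch follows; the JDK path is an assumption and must be adjusted to the actual system.

```shell
# etc/hadoop/hadoop-env.sh -- set Java explicitly for all Hadoop daemons.
export JAVA_HOME=/usr/lib/jvm/java-8-openjdk   # assumption: replace with your JDK path

# Typical first-run sequence after the *-site.xml files are configured:
#   hdfs namenode -format     # initialize the NameNode metadata directory (once)
#   sbin/start-dfs.sh         # start NameNode, DataNode, SecondaryNameNode
#   sbin/start-yarn.sh        # start ResourceManager and NodeManager
#   jps                       # verify the daemons are running
```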
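The port-to-daemon mapping above can also be summarized in a small lookup helper, which is handy when reading `netstat`/`ss` output on a cluster node. This is a purely illustrative script (the function name `port_owner` is made up here); the port list is taken from the article's Hadoop 2.x defaults.

```shell
#!/bin/sh
# port_owner: map a default Hadoop 2.x port number to the daemon that owns it.
# Illustrative only -- ports can be reassigned in *-site.xml.
port_owner() {
  case "$1" in
    9000|8020)             echo "NameNode (service/RPC)" ;;
    50070)                 echo "NameNode (HTTP web UI)" ;;
    50470)                 echo "NameNode (HTTPS web UI)" ;;
    50090)                 echo "SecondaryNameNode" ;;
    8030|8031|8032|8033)   echo "ResourceManager" ;;
    *)                     echo "unknown" ;;
  esac
}

port_owner 50070   # prints: NameNode (HTTP web UI)
```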

