```
[root@hadoop01 apache-hive-3.1.3-bin]# bin/hive
which: no hbase in (/export/servers/hadoop-3.3.5/bin::/export/servers/apache-hive-3.1.3-bin/bin:/export/servers/flume-1.9.0/bin::/export/servers/apache-hive-3.1.3-bin/bin:/export/servers/flume-1.9.0/bin:/export/servers/flume-1.9.0/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/export/servers/jdk1.8.0_161/bin:/export/servers/hadoop-3.3.5/bin:/export/servers/hadoop-3.3.5/sbin:/export/servers/scala-2.12.10/bin:/root/bin:/export/servers/jdk1.8.0_161/bin:/export/servers/hadoop-3.3.5/bin:/export/servers/hadoop-3.3.5/sbin:/export/servers/scala-2.12.10/bin:/export/servers/jdk1.8.0_161/bin:/export/servers/hadoop-3.3.5/bin:/export/servers/hadoop-3.3.5/sbin:/export/servers/scala-2.12.10/bin:/root/bin)
2025-06-17 19:17:36,429 INFO conf.HiveConf: Found configuration file file:/export/servers/apache-hive-3.1.3-bin/conf/hive-site.xml
Hive Session ID = 174edc05-5da2-4b28-a455-d29ac7bdd8fc
2025-06-17 19:17:39,510 INFO SessionState: Hive Session ID = 174edc05-5da2-4b28-a455-d29ac7bdd8fc
Logging initialized using configuration in jar:file:/export/servers/apache-hive-3.1.3-bin/lib/hive-common-3.1.3.jar!/hive-log4j2.properties Async: true
2025-06-17 19:17:39,639 INFO SessionState: Logging initialized using configuration in jar:file:/export/servers/apache-hive-3.1.3-bin/lib/hive-common-3.1.3.jar!/hive-log4j2.properties Async: true
2025-06-17 19:17:42,218 INFO session.SessionState: Created HDFS directory: /tmp/hive/root/174edc05-5da2-4b28-a455-d29ac7bdd8fc
2025-06-17 19:17:42,262 INFO session.SessionState: Created local directory: /tmp/root/174edc05-5da2-4b28-a455-d29ac7bdd8fc
2025-06-17 19:17:42,274 INFO session.SessionState: Created HDFS directory: /tmp/hive/root/174edc05-5da2-4b28-a455-d29ac7bdd8fc/_tmp_space.db
2025-06-17 19:17:42,304 INFO conf.HiveConf: Using the default value passed in for log id: 174edc05-5da2-4b28-a455-d29ac7bdd8fc
2025-06-17 19:17:42,304 INFO session.SessionState: Updating thread name to 174edc05-5da2-4b28-a455-d29ac7bdd8fc main
2025-06-17 19:17:44,440 INFO metastore.HiveMetaStore: 0: Opening raw store with implementation class:org.apache.hadoop.hive.metastore.ObjectStore
2025-06-17 19:17:44,512 WARN metastore.ObjectStore: datanucleus.autoStartMechanismMode is set to unsupported value null . Setting it to value: ignored
2025-06-17 19:17:44,530 INFO metastore.ObjectStore: ObjectStore, initialize called
2025-06-17 19:17:44,532 INFO conf.MetastoreConf: Found configuration file file:/export/servers/apache-hive-3.1.3-bin/conf/hive-site.xml
2025-06-17 19:17:44,534 INFO conf.MetastoreConf: Unable to find config file hivemetastore-site.xml
2025-06-17 19:17:44,534 INFO conf.MetastoreConf: Found configuration file null
2025-06-17 19:17:44,535 INFO conf.MetastoreConf: Unable to find config file metastore-site.xml
2025-06-17 19:17:44,535 INFO conf.MetastoreConf: Found configuration file null
2025-06-17 19:17:44,958 INFO DataNucleus.Persistence: Property datanucleus.cache.level2 unknown - will be ignored
2025-06-17 19:17:45,507 INFO hikari.HikariDataSource: HikariPool-1 - Starting...
2025-06-17 19:17:46,120 INFO hikari.HikariDataSource: HikariPool-1 - Start completed.
2025-06-17 19:17:46,244 INFO hikari.HikariDataSource: HikariPool-2 - Starting...
2025-06-17 19:17:46,262 INFO hikari.HikariDataSource: HikariPool-2 - Start completed.
2025-06-17 19:17:46,944 INFO metastore.ObjectStore: Setting MetaStore object pin classes with hive.metastore.cache.pinobjtypes="Table,StorageDescriptor,SerDeInfo,Partition,Database,Type,FieldSchema,Order"
2025-06-17 19:17:47,177 INFO metastore.MetaStoreDirectSql: Using direct SQL, underlying DB is MYSQL
2025-06-17 19:17:47,179 INFO metastore.ObjectStore: Initialized ObjectStore
2025-06-17 19:17:47,661 WARN DataNucleus.MetaData: Metadata has jdbc-type of null yet this is not valid. Ignored
2025-06-17 19:17:47,662 WARN DataNucleus.MetaData: Metadata has jdbc-type of null yet this is not valid. Ignored
2025-06-17 19:17:47,664 WARN DataNucleus.MetaData: Metadata has jdbc-type of null yet this is not valid. Ignored
2025-06-17 19:17:47,664 WARN DataNucleus.MetaData: Metadata has jdbc-type of null yet this is not valid. Ignored
2025-06-17 19:17:47,665 WARN DataNucleus.MetaData: Metadata has jdbc-type of null yet this is not valid. Ignored
2025-06-17 19:17:47,665 WARN DataNucleus.MetaData: Metadata has jdbc-type of null yet this is not valid. Ignored
2025-06-17 19:17:51,340 WARN DataNucleus.MetaData: Metadata has jdbc-type of null yet this is not valid. Ignored
2025-06-17 19:17:51,341 WARN DataNucleus.MetaData: Metadata has jdbc-type of null yet this is not valid. Ignored
2025-06-17 19:17:51,342 WARN DataNucleus.MetaData: Metadata has jdbc-type of null yet this is not valid. Ignored
2025-06-17 19:17:51,342 WARN DataNucleus.MetaData: Metadata has jdbc-type of null yet this is not valid. Ignored
2025-06-17 19:17:51,342 WARN DataNucleus.MetaData: Metadata has jdbc-type of null yet this is not valid. Ignored
2025-06-17 19:17:51,342 WARN DataNucleus.MetaData: Metadata has jdbc-type of null yet this is not valid. Ignored
2025-06-17 19:17:56,088 WARN metastore.ObjectStore: Version information not found in metastore. metastore.schema.verification is not enabled so recording the schema version 3.1.0
2025-06-17 19:17:56,088 WARN metastore.ObjectStore: setMetaStoreSchemaVersion called but recording version is disabled: version = 3.1.0, comment = Set by MetaStore [email protected]
2025-06-17 19:17:56,489 INFO metastore.HiveMetaStore: Added admin role in metastore
2025-06-17 19:17:56,497 INFO metastore.HiveMetaStore: Added public role in metastore
2025-06-17 19:17:56,607 INFO metastore.HiveMetaStore: No user is added in admin role, since config is empty
2025-06-17 19:17:56,969 INFO metastore.RetryingMetaStoreClient: RetryingMetaStoreClient proxy=class org.apache.hadoop.hive.ql.metadata.SessionHiveMetaStoreClient ugi=root (auth:SIMPLE) retries=1 delay=1 lifetime=0
2025-06-17 19:17:57,003 INFO metastore.HiveMetaStore: 0: get_all_functions
2025-06-17 19:17:57,011 INFO HiveMetaStore.audit: ugi=root ip=unknown-ip-addr cmd=get_all_functions
Hive-on-MR is deprecated in Hive 2 and may not be available in the future versions. Consider using a different execution engine (i.e. spark, tez) or using Hive 1.X releases.
2025-06-17 19:17:57,170 INFO CliDriver: Hive-on-MR is deprecated in Hive 2 and may not be available in the future versions. Consider using a different execution engine (i.e. spark, tez) or using Hive 1.X releases.
Hive Session ID = 2804a115-a38a-441b-9d42-6abad28c99f8
2025-06-17 19:17:57,170 INFO SessionState: Hive Session ID = 2804a115-a38a-441b-9d42-6abad28c99f8
2025-06-17 19:17:57,216 INFO session.SessionState: Created HDFS directory: /tmp/hive/root/2804a115-a38a-441b-9d42-6abad28c99f8
2025-06-17 19:17:57,222 INFO session.SessionState: Created local directory: /tmp/root/2804a115-a38a-441b-9d42-6abad28c99f8
2025-06-17 19:17:57,228 INFO session.SessionState: Created HDFS directory: /tmp/hive/root/2804a115-a38a-441b-9d42-6abad28c99f8/_tmp_space.db
2025-06-17 19:17:57,231 INFO metastore.HiveMetaStore: 1: get_databases: @hive#
2025-06-17 19:17:57,231 INFO HiveMetaStore.audit: ugi=root ip=unknown-ip-addr cmd=get_databases: @hive#
2025-06-17 19:17:57,233 INFO metastore.HiveMetaStore: 1: Opening raw store with implementation class:org.apache.hadoop.hive.metastore.ObjectStore
2025-06-17 19:17:57,239 INFO metastore.ObjectStore: ObjectStore, initialize called
2025-06-17 19:17:57,272 INFO metastore.MetaStoreDirectSql: Using direct SQL, underlying DB is MYSQL
2025-06-17 19:17:57,272 INFO metastore.ObjectStore: Initialized ObjectStore
2025-06-17 19:17:57,288 INFO metastore.HiveMetaStore: 1: get_tables_by_type: db=@hive#db_hive1 pat=.*,type=MATERIALIZED_VIEW
2025-06-17 19:17:57,288 INFO HiveMetaStore.audit: ugi=root ip=unknown-ip-addr cmd=get_tables_by_type: db=@hive#db_hive1 pat=.*,type=MATERIALIZED_VIEW
2025-06-17 19:17:57,314 INFO metastore.HiveMetaStore: 1: get_multi_table : db=db_hive1 tbls=
2025-06-17 19:17:57,314 INFO HiveMetaStore.audit: ugi=root ip=unknown-ip-addr cmd=get_multi_table : db=db_hive1 tbls=
2025-06-17 19:17:57,316 INFO metastore.HiveMetaStore: 1: get_tables_by_type: db=@hive#default pat=.*,type=MATERIALIZED_VIEW
2025-06-17 19:17:57,316 INFO HiveMetaStore.audit: ugi=root ip=unknown-ip-addr cmd=get_tables_by_type: db=@hive#default pat=.*,type=MATERIALIZED_VIEW
2025-06-17 19:17:57,321 INFO metastore.HiveMetaStore: 1: get_multi_table : db=default tbls=
2025-06-17 19:17:57,321 INFO HiveMetaStore.audit: ugi=root ip=unknown-ip-addr cmd=get_multi_table : db=default tbls=
2025-06-17 19:17:57,322 INFO metastore.HiveMetaStore: 1: get_tables_by_type: db=@hive#itcast_ods pat=.*,type=MATERIALIZED_VIEW
2025-06-17 19:17:57,322 INFO HiveMetaStore.audit: ugi=root ip=unknown-ip-addr cmd=get_tables_by_type: db=@hive#itcast_ods pat=.*,type=MATERIALIZED_VIEW
2025-06-17 19:17:57,326 INFO metastore.HiveMetaStore: 1: get_multi_table : db=itcast_ods tbls=
2025-06-17 19:17:57,326 INFO HiveMetaStore.audit: ugi=root ip=unknown-ip-addr cmd=get_multi_table : db=itcast_ods tbls=
2025-06-17 19:17:57,326 INFO metadata.HiveMaterializedViewsRegistry: Materialized views registry has been initialized
hive>
```
Posted: 2025-08-01
### Resolving the Hive 3.1.3 Connection-Refused Error and Startup Warnings
#### 1. Connection refused on HDFS port 9000
**Root cause**: the NameNode is not running, so HDFS is unreachable (`netstat` shows nothing listening on port 9000).
**Fix**:
1. **Start HDFS**:
   ```bash
   # Run on hadoop01
   /export/servers/hadoop-3.3.5/sbin/start-dfs.sh
   ```
2. **Verify that the port is listening**:
   ```bash
   netstat -tuln | grep 9000
   # Expected: tcp 0 0 0.0.0.0:9000 0.0.0.0:* LISTEN
   ```
3. **Check HDFS status**:
   ```bash
   hdfs dfsadmin -report
   # Should report the NameNode and live DataNodes
   ```
4. **Fix the configuration** (`core-site.xml`):
   ```xml
   <configuration>
     <property>
       <name>fs.defaultFS</name>
       <value>hdfs://hadoop01:9000</value> <!-- make sure this hostname resolves, e.g. via /etc/hosts -->
     </property>
   </configuration>
   ```
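The port check in steps 2 and 3 can be scripted so it works even on hosts without `netstat` installed. A minimal sketch using bash's `/dev/tcp` pseudo-device (the hostname and port below are examples; adjust them to your cluster):

```bash
# Probe a TCP port without netstat or nc.
# Relies on bash's /dev/tcp support; a failed connect (or a shell
# without /dev/tcp) falls through to "refused".
check_port() {
  local host=$1 port=$2
  if (echo > "/dev/tcp/${host}/${port}") 2>/dev/null; then
    echo "open"
  else
    echo "refused"
  fi
}

check_port hadoop01 9000   # "open" once the NameNode is listening
```

Until `start-dfs.sh` has brought the NameNode up, this prints `refused`, which is exactly the condition behind the original connection-refused error.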
#### 2. The HBase "not found" warning
**Cause**: the `hive` launcher script probes for an `hbase` executable on the `PATH` (via `which hbase`) and prints this warning when none is found. It is harmless unless you actually need the HBase integration.
**Options**:
1. **If you do not need HBase integration**: ignore the warning, and make sure `hive.aux.jars.path` in `hive-site.xml` does not point at stale HBase jars:
   ```xml
   <property>
     <name>hive.aux.jars.path</name>
     <value></value> <!-- empty path -->
   </property>
   ```
2. **If you do need HBase integration**:
   ```bash
   # Copy the HBase client libraries into Hive's lib directory
   cp $HBASE_HOME/lib/hbase-*.jar $HIVE_HOME/lib/
   cp $HBASE_HOME/lib/guava-*.jar $HIVE_HOME/lib/
   ```
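After copying, it is worth confirming the jars actually landed in the target directory before restarting Hive. A small sketch (the `hbase-*.jar` glob matches the copy above; the directory argument is whatever your deployment uses):

```bash
# Count HBase jars in a Hive lib directory; 0 means the copy did not happen.
count_hbase_jars() {
  local lib_dir=$1
  ls "${lib_dir}"/hbase-*.jar 2>/dev/null | wc -l
}

count_hbase_jars "${HIVE_HOME:-/export/servers/apache-hive-3.1.3-bin}/lib"
```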
#### 3. Metastore warnings
**Common warning scenarios**:
1. **Metastore schema not initialized**:
   ```bash
   # Initialize the metadata store (MySQL example)
   $HIVE_HOME/bin/schematool -initSchema -dbType mysql
   ```
2. **Permission problems**:
   ```bash
   # Create the warehouse directory in HDFS and grant group write access
   hdfs dfs -mkdir -p /user/hive/warehouse
   hdfs dfs -chmod g+w /user/hive/warehouse
   ```
3. **Missing JDBC driver**:
   ```bash
   # Put the MySQL JDBC driver on Hive's classpath
   cp mysql-connector-java-8.0.28.jar $HIVE_HOME/lib/
   ```
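Metastore connection failures usually trace back to the JDBC URL, so it helps to pull the MySQL host and port out of `javax.jdo.option.ConnectionURL` and probe them directly. A helper sketch (the URLs shown are examples):

```bash
# Extract "host port" from a jdbc:mysql://host:port/db?... connection URL.
jdbc_host_port() {
  echo "$1" | sed -E 's#^jdbc:mysql://([^:/]+):([0-9]+)/.*$#\1 \2#'
}

jdbc_host_port "jdbc:mysql://hadoop01:3306/hive?createDatabaseIfNotExist=true"
# -> hadoop01 3306
```

The resulting pair can be fed to any reachability check (ping, `/dev/tcp`, `mysql -h ... -P ...`) to separate "database down" from "URL wrong".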
#### 4. End-to-end verification
```bash
# 1. Start HDFS
start-dfs.sh
# 2. Start the metastore service
hive --service metastore &
# 3. Start the Hive CLI
$HIVE_HOME/bin/hive
# 4. Run a smoke test
hive> SHOW DATABASES;
hive> CREATE TABLE test(id int);
```
> **Key configuration checkpoints**:
> - `hive.metastore.uris` points at the correct metastore address
> - `javax.jdo.option.ConnectionURL` holds the right database connection string
> - Hostnames in HDFS paths match `fs.defaultFS` in `core-site.xml`
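Those checkpoints can be read straight out of `hive-site.xml`. A naive extractor for the usual one-value-per-line Hadoop XML layout (not a general XML parser; the config path in the commented example is this cluster's install location):

```bash
# Print the <value> that follows a given <name> in a Hadoop-style XML config.
# Assumes <name> and <value> each sit on their own line, as generated configs do.
get_hive_prop() {
  local conf=$1 name=$2
  grep -A1 "<name>${name}</name>" "$conf" \
    | sed -n 's#.*<value>\(.*\)</value>.*#\1#p'
}

# get_hive_prop /export/servers/apache-hive-3.1.3-bin/conf/hive-site.xml hive.metastore.uris
```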
#### 5. Where to find the logs
| Component     | Log path                                  |
|---------------|-------------------------------------------|
| **HDFS**      | $HADOOP_HOME/logs/hadoop-*-namenode-*.log |
| **Hive**      | $HIVE_HOME/logs/hive.log                  |
| **Metastore** | /tmp/<user>/hive.log                      |
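With the log paths above in hand, a quick triage helper narrows thousands of INFO lines (like the startup dump at the top of this page) down to the actual failures. The file path in the commented example is hypothetical:

```bash
# Show the last N ERROR/FATAL lines of a log file (default 20).
recent_errors() {
  local log=$1 n=${2:-20}
  grep -E 'ERROR|FATAL' "$log" | tail -n "$n"
}

# recent_errors /tmp/root/hive.log 10
```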