
[root@hadoop01 apache-hive-3.1.3-bin]# bin/hive which: no hbase in (/export/servers/hadoop-3.3.5/bin::/export/servers/apache-hive-3.1.3-bin/bin:/export/servers/flume-1.9.0/bin::/export/servers/apache-hive-3.1.3-bin/bin:/export/servers/flume-1.9.0/bin:/export/servers/flume-1.9.0/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/export/servers/jdk1.8.0_161/bin:/export/servers/hadoop-3.3.5/bin:/export/servers/hadoop-3.3.5/sbin:/export/servers/scala-2.12.10/bin:/root/bin:/export/servers/jdk1.8.0_161/bin:/export/servers/hadoop-3.3.5/bin:/export/servers/hadoop-3.3.5/sbin:/export/servers/scala-2.12.10/bin:/export/servers/jdk1.8.0_161/bin:/export/servers/hadoop-3.3.5/bin:/export/servers/hadoop-3.3.5/sbin:/export/servers/scala-2.12.10/bin:/root/bin) 2025-06-17 19:17:36,429 INFO conf.HiveConf: Found configuration file file:/export/servers/apache-hive-3.1.3-bin/conf/hive-site.xml Hive Session ID = 174edc05-5da2-4b28-a455-d29ac7bdd8fc 2025-06-17 19:17:39,510 INFO SessionState: Hive Session ID = 174edc05-5da2-4b28-a455-d29ac7bdd8fc Logging initialized using configuration in jar:file:/export/servers/apache-hive-3.1.3-bin/lib/hive-common-3.1.3.jar!/hive-log4j2.properties Async: true 2025-06-17 19:17:39,639 INFO SessionState: Logging initialized using configuration in jar:file:/export/servers/apache-hive-3.1.3-bin/lib/hive-common-3.1.3.jar!/hive-log4j2.properties Async: true 2025-06-17 19:17:42,218 INFO session.SessionState: Created HDFS directory: /tmp/hive/root/174edc05-5da2-4b28-a455-d29ac7bdd8fc 2025-06-17 19:17:42,262 INFO session.SessionState: Created local directory: /tmp/root/174edc05-5da2-4b28-a455-d29ac7bdd8fc 2025-06-17 19:17:42,274 INFO session.SessionState: Created HDFS directory: /tmp/hive/root/174edc05-5da2-4b28-a455-d29ac7bdd8fc/_tmp_space.db 2025-06-17 19:17:42,304 INFO conf.HiveConf: Using the default value passed in for log id: 174edc05-5da2-4b28-a455-d29ac7bdd8fc 2025-06-17 19:17:42,304 INFO session.SessionState: Updating thread name to 174edc05-5da2-4b28-a455-d29ac7bdd8fc main 2025-06-17 19:17:44,440 INFO metastore.HiveMetaStore: 0: Opening raw store with implementation class:org.apache.hadoop.hive.metastore.ObjectStore 2025-06-17 19:17:44,512 WARN metastore.ObjectStore: datanucleus.autoStartMechanismMode is set to unsupported value null . Setting it to value: ignored 2025-06-17 19:17:44,530 INFO metastore.ObjectStore: ObjectStore, initialize called 2025-06-17 19:17:44,532 INFO conf.MetastoreConf: Found configuration file file:/export/servers/apache-hive-3.1.3-bin/conf/hive-site.xml 2025-06-17 19:17:44,534 INFO conf.MetastoreConf: Unable to find config file hivemetastore-site.xml 2025-06-17 19:17:44,534 INFO conf.MetastoreConf: Found configuration file null 2025-06-17 19:17:44,535 INFO conf.MetastoreConf: Unable to find config file metastore-site.xml 2025-06-17 19:17:44,535 INFO conf.MetastoreConf: Found configuration file null 2025-06-17 19:17:44,958 INFO DataNucleus.Persistence: Property datanucleus.cache.level2 unknown - will be ignored 2025-06-17 19:17:45,507 INFO hikari.HikariDataSource: HikariPool-1 - Starting... 2025-06-17 19:17:46,120 INFO hikari.HikariDataSource: HikariPool-1 - Start completed. 2025-06-17 19:17:46,244 INFO hikari.HikariDataSource: HikariPool-2 - Starting... 2025-06-17 19:17:46,262 INFO hikari.HikariDataSource: HikariPool-2 - Start completed. 
2025-06-17 19:17:46,944 INFO metastore.ObjectStore: Setting MetaStore object pin classes with hive.metastore.cache.pinobjtypes="Table,StorageDescriptor,SerDeInfo,Partition,Database,Type,FieldSchema,Order" 2025-06-17 19:17:47,177 INFO metastore.MetaStoreDirectSql: Using direct SQL, underlying DB is MYSQL 2025-06-17 19:17:47,179 INFO metastore.ObjectStore: Initialized ObjectStore 2025-06-17 19:17:47,661 WARN DataNucleus.MetaData: Metadata has jdbc-type of null yet this is not valid. Ignored 2025-06-17 19:17:47,662 WARN DataNucleus.MetaData: Metadata has jdbc-type of null yet this is not valid. Ignored 2025-06-17 19:17:47,664 WARN DataNucleus.MetaData: Metadata has jdbc-type of null yet this is not valid. Ignored 2025-06-17 19:17:47,664 WARN DataNucleus.MetaData: Metadata has jdbc-type of null yet this is not valid. Ignored 2025-06-17 19:17:47,665 WARN DataNucleus.MetaData: Metadata has jdbc-type of null yet this is not valid. Ignored 2025-06-17 19:17:47,665 WARN DataNucleus.MetaData: Metadata has jdbc-type of null yet this is not valid. Ignored 2025-06-17 19:17:51,340 WARN DataNucleus.MetaData: Metadata has jdbc-type of null yet this is not valid. Ignored 2025-06-17 19:17:51,341 WARN DataNucleus.MetaData: Metadata has jdbc-type of null yet this is not valid. Ignored 2025-06-17 19:17:51,342 WARN DataNucleus.MetaData: Metadata has jdbc-type of null yet this is not valid. Ignored 2025-06-17 19:17:51,342 WARN DataNucleus.MetaData: Metadata has jdbc-type of null yet this is not valid. Ignored 2025-06-17 19:17:51,342 WARN DataNucleus.MetaData: Metadata has jdbc-type of null yet this is not valid. Ignored 2025-06-17 19:17:51,342 WARN DataNucleus.MetaData: Metadata has jdbc-type of null yet this is not valid. Ignored 2025-06-17 19:17:56,088 WARN metastore.ObjectStore: Version information not found in metastore. metastore.schema.verification is not enabled so recording the schema version 3.1.0 2025-06-17 19:17:56,088 WARN metastore.ObjectStore: setMetaStoreSchemaVersion called but recording version is disabled: version = 3.1.0, comment = Set by MetaStore [email protected] 2025-06-17 19:17:56,489 INFO metastore.HiveMetaStore: Added admin role in metastore 2025-06-17 19:17:56,497 INFO metastore.HiveMetaStore: Added public role in metastore 2025-06-17 19:17:56,607 INFO metastore.HiveMetaStore: No user is added in admin role, since config is empty 2025-06-17 19:17:56,969 INFO metastore.RetryingMetaStoreClient: RetryingMetaStoreClient proxy=class org.apache.hadoop.hive.ql.metadata.SessionHiveMetaStoreClient ugi=root (auth:SIMPLE) retries=1 delay=1 lifetime=0 2025-06-17 19:17:57,003 INFO metastore.HiveMetaStore: 0: get_all_functions 2025-06-17 19:17:57,011 INFO HiveMetaStore.audit: ugi=root ip=unknown-ip-addr cmd=get_all_functions Hive-on-MR is deprecated in Hive 2 and may not be available in the future versions. Consider using a different execution engine (i.e. spark, tez) or using Hive 1.X releases. 2025-06-17 19:17:57,170 INFO CliDriver: Hive-on-MR is deprecated in Hive 2 and may not be available in the future versions. Consider using a different execution engine (i.e. spark, tez) or using Hive 1.X releases. 
Hive Session ID = 2804a115-a38a-441b-9d42-6abad28c99f8 2025-06-17 19:17:57,170 INFO SessionState: Hive Session ID = 2804a115-a38a-441b-9d42-6abad28c99f8 2025-06-17 19:17:57,216 INFO session.SessionState: Created HDFS directory: /tmp/hive/root/2804a115-a38a-441b-9d42-6abad28c99f8 2025-06-17 19:17:57,222 INFO session.SessionState: Created local directory: /tmp/root/2804a115-a38a-441b-9d42-6abad28c99f8 2025-06-17 19:17:57,228 INFO session.SessionState: Created HDFS directory: /tmp/hive/root/2804a115-a38a-441b-9d42-6abad28c99f8/_tmp_space.db 2025-06-17 19:17:57,231 INFO metastore.HiveMetaStore: 1: get_databases: @hive# 2025-06-17 19:17:57,231 INFO HiveMetaStore.audit: ugi=root ip=unknown-ip-addr cmd=get_databases: @hive# 2025-06-17 19:17:57,233 INFO metastore.HiveMetaStore: 1: Opening raw store with implementation class:org.apache.hadoop.hive.metastore.ObjectStore 2025-06-17 19:17:57,239 INFO metastore.ObjectStore: ObjectStore, initialize called 2025-06-17 19:17:57,272 INFO metastore.MetaStoreDirectSql: Using direct SQL, underlying DB is MYSQL 2025-06-17 19:17:57,272 INFO metastore.ObjectStore: Initialized ObjectStore 2025-06-17 19:17:57,288 INFO metastore.HiveMetaStore: 1: get_tables_by_type: db=@hive#db_hive1 pat=.*,type=MATERIALIZED_VIEW 2025-06-17 19:17:57,288 INFO HiveMetaStore.audit: ugi=root ip=unknown-ip-addr cmd=get_tables_by_type: db=@hive#db_hive1 pat=.*,type=MATERIALIZED_VIEW 2025-06-17 19:17:57,314 INFO metastore.HiveMetaStore: 1: get_multi_table : db=db_hive1 tbls= 2025-06-17 19:17:57,314 INFO HiveMetaStore.audit: ugi=root ip=unknown-ip-addr cmd=get_multi_table : db=db_hive1 tbls= 2025-06-17 19:17:57,316 INFO metastore.HiveMetaStore: 1: get_tables_by_type: db=@hive#default pat=.*,type=MATERIALIZED_VIEW 2025-06-17 19:17:57,316 INFO HiveMetaStore.audit: ugi=root ip=unknown-ip-addr cmd=get_tables_by_type: db=@hive#default pat=.*,type=MATERIALIZED_VIEW 2025-06-17 19:17:57,321 INFO metastore.HiveMetaStore: 1: get_multi_table : db=default tbls= 2025-06-17 19:17:57,321 INFO HiveMetaStore.audit: ugi=root ip=unknown-ip-addr cmd=get_multi_table : db=default tbls= 2025-06-17 19:17:57,322 INFO metastore.HiveMetaStore: 1: get_tables_by_type: db=@hive#itcast_ods pat=.*,type=MATERIALIZED_VIEW 2025-06-17 19:17:57,322 INFO HiveMetaStore.audit: ugi=root ip=unknown-ip-addr cmd=get_tables_by_type: db=@hive#itcast_ods pat=.*,type=MATERIALIZED_VIEW 2025-06-17 19:17:57,326 INFO metastore.HiveMetaStore: 1: get_multi_table : db=itcast_ods tbls= 2025-06-17 19:17:57,326 INFO HiveMetaStore.audit: ugi=root ip=unknown-ip-addr cmd=get_multi_table : db=itcast_ods tbls= 2025-06-17 19:17:57,326 INFO metadata.HiveMaterializedViewsRegistry: Materialized views registry has been initialized hive>

### Resolving Hive 3.1.3 connection-refused errors and configuration warnings

#### 1. Connection refused on HDFS port 9000

**Root cause**: the NameNode is not running, so the HDFS service is unavailable and nothing is listening on port 9000.

**Steps**:

1. **Check whether the NameNode process is up**:
   ```bash
   jps | grep NameNode
   # no output means the NameNode is down
   ```
2. **Start HDFS** (here Hadoop is installed under `/export/servers/hadoop-3.3.5`):
   ```bash
   # run on hadoop01
   /export/servers/hadoop-3.3.5/sbin/start-dfs.sh
   ```
3. **Verify the port is listening**:
   ```bash
   netstat -tuln | grep 9000
   # expected: tcp  0  0 0.0.0.0:9000  0.0.0.0:*  LISTEN
   ```
4. **Check HDFS health**:
   ```bash
   hdfs dfsadmin -report
   # should show NameNode and DataNode status
   ```
5. **Fix the configuration** (`core-site.xml`):
   ```xml
   <configuration>
     <property>
       <name>fs.defaultFS</name>
       <value>hdfs://hadoop01:9000</value> <!-- the hostname must resolve, e.g. via /etc/hosts -->
     </property>
   </configuration>
   ```
6. **Open the port if a firewall sits between Hive and HDFS**:
   ```bash
   sudo firewall-cmd --add-port=9000/tcp --permanent
   sudo firewall-cmd --reload
   ```

#### 2. Handling the HBase "not found" warning

**Cause**: Hive probes for an HBase installation at startup (the `which: no hbase in (...)` line) and finds none on the PATH.

**Solutions**:

1. **If HBase integration is not needed**, the warning is harmless and can be ignored; auxiliary jars can also be disabled in `hive-site.xml`:
   ```xml
   <property>
     <name>hive.aux.jars.path</name>
     <value></value> <!-- empty the path -->
   </property>
   ```
2. **If HBase integration is needed**, make the HBase libraries visible to Hive:
   ```bash
   # copy the HBase dependency jars into Hive's lib directory
   cp $HBASE_HOME/lib/hbase-*.jar $HIVE_HOME/lib/
   cp $HBASE_HOME/lib/guava-*.jar $HIVE_HOME/lib/
   ```

#### 3. Handling metastore warnings

Hive's embedded Derby store does not handle concurrent sessions well; an external database such as MySQL is recommended. **Common warning cases**:

1. **Metastore schema not initialized**:
   ```bash
   # initialize the metadata store (MySQL example)
   $HIVE_HOME/bin/schematool -initSchema -dbType mysql
   ```
2. **Permission problems**:
   ```bash
   # create the warehouse directory on HDFS and grant group write access
   hdfs dfs -mkdir -p /user/hive/warehouse
   hdfs dfs -chmod g+w /user/hive/warehouse
   ```
3. **Missing JDBC driver**:
   ```bash
   # put the MySQL JDBC driver on Hive's classpath
   cp mysql-connector-java-8.0.28.jar $HIVE_HOME/lib/
   ```

The MySQL connection itself is configured in `hive-site.xml`:

```xml
<property>
  <name>javax.jdo.option.ConnectionURL</name>
  <value>jdbc:mysql://hadoop01:3306/hive?createDatabaseIfNotExist=true</value>
</property>
<property>
  <name>javax.jdo.option.ConnectionDriverName</name>
  <value>com.mysql.jdbc.Driver</value>
</property>
<property>
  <name>javax.jdo.option.ConnectionUserName</name>
  <value>hive</value>
</property>
<property>
  <name>javax.jdo.option.ConnectionPassword</name>
  <value>hive</value>
</property>
```

Also make sure the warehouse directory points at the same HDFS address as `core-site.xml`:

```xml
<property>
  <name>hive.metastore.warehouse.dir</name>
  <value>hdfs://hadoop01:9000/user/hive/warehouse</value>
</property>
```
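Before moving on to the end-to-end check, it is worth confirming that `schematool` actually wrote the schema into MySQL. A minimal sketch, assuming the `hive` database, user, and password from the sample configuration above (all three are assumptions to adapt):

```bash
# Query the schema version recorded by the Hive metastore.
# Host, database name, and credentials are assumptions taken from the sample config above.
mysql -h hadoop01 -u hive -phive \
  -e 'SELECT SCHEMA_VERSION, VERSION_COMMENT FROM hive.VERSION;'
```

A populated `VERSION` table is also what clears the `Version information not found in metastore` warning seen in the startup log above.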
#### 4. Full verification flow

```bash
# 1. start HDFS
start-dfs.sh
# 2. start the metastore service
hive --service metastore &
# 3. start the Hive CLI
$HIVE_HOME/bin/hive
# 4. run test commands
hive> SHOW DATABASES;
hive> CREATE TABLE test(id int);
```

Restart services in this order (HDFS first, then the metastore, then the CLI), and keep versions compatible: Hive 3.1.3 is built against Hadoop 3.x.

> **Key configuration checkpoints**:
> - `hive.metastore.uris` points at the right address
> - the `javax.jdo.option.ConnectionURL` database connection string is correct
> - the hostname in HDFS paths matches `core-site.xml`

#### 5. Where to look for logs

| Component | Log path |
|-----------|----------|
| **HDFS** | `$HADOOP_HOME/logs/hadoop-*-namenode-*.log` |
| **Hive** | `$HIVE_HOME/logs/hive.log` |
| **Metastore** | `/tmp/<user>/hive.log` |
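If HiveServer2 is part of the deployment (as in several of the related logs below), connectivity can also be smoke-tested from outside the CLI with Beeline. A sketch assuming HiveServer2 runs on hadoop01 with the default port 10000 and simple authentication (host, port, and user are assumptions):

```bash
# Start HiveServer2 in the background (assumes HDFS and the metastore are already up).
hive --service hiveserver2 &

# Connect with Beeline and run a sanity query.
# hadoop01:10000 and the root user are assumptions for this environment.
beeline -u jdbc:hive2://hadoop01:10000 -n root -e 'SHOW DATABASES;'
```

If Beeline connects but queries that launch MapReduce jobs still fail, the problem is more likely in YARN than in the metastore.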

Related recommendations

[root@hadoop1 bin]# hive SLF4J: Class path contains multiple SLF4J bindings. SLF4J: Found binding in [jar:file:/export/servers/hive-3.1.3/lib/log4j-slf4j-impl-2.17.1.jar!/org/slf4j/impl/StaticLoggerBinder.class] SLF4J: Found binding in [jar:file:/export/servers/hadoop-3.3.0/share/hadoop/common/lib/slf4j-log4j12-1.7.25.jar!/org/slf4j/impl/StaticLoggerBinder.class] SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation. SLF4J: Actual binding is of type [org.apache.logging.slf4j.Log4jLoggerFactory] /usr/bin/which: no hbase in (/export/servers/hive-3.1.3/bin:/export/servers/flume-1.9.0/bin:/root/.local/bin:/root/bin:/export/servers/flume-1.9.0/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/export/servers/jdk1.8.0_241/bin:/export/servers/hadoop-3.3.0/bin:/export/servers/hadoop-3.3.0/sbin:/export/servers/jdk1.8.0_241/bin:/export/servers/hadoop-3.3.0/bin:/export/servers/hadoop-3.3.0/sbin) SLF4J: Class path contains multiple SLF4J bindings. SLF4J: Found binding in [jar:file:/export/servers/hive-3.1.3/lib/log4j-slf4j-impl-2.17.1.jar!/org/slf4j/impl/StaticLoggerBinder.class] SLF4J: Found binding in [jar:file:/export/servers/hadoop-3.3.0/share/hadoop/common/lib/slf4j-log4j12-1.7.25.jar!/org/slf4j/impl/StaticLoggerBinder.class] SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation. SLF4J: Actual binding is of type [org.apache.logging.slf4j.Log4jLoggerFactory] Hive Session ID = c684811f-5c32-43ba-b621-c41d4ecf957a Logging initialized using configuration in jar:file:/export/servers/hive-3.1.3/lib/hive-common-3.1.3.jar!/hive-log4j2.properties Async: true Hive-on-MR is deprecated in Hive 2 and may not be available in the future versions. Consider using a different execution engine (i.e. spark, tez) or using Hive 1.X releases. hive (default)>

[hadoop@node1 hive]$ tail -100 logs/hiveserver2.log SLF4J: Actual binding is of type [org.apache.logging.slf4j.Log4jLoggerFactory] which: no hbase in (/usr/local/bin:/bin:/usr/bin:/usr/local/sbin:/usr/sbin:/root/myenv:/export/server/jdk/bin:/export/server/hadoop/bin:/export/server/hadoop/sbin:/home/hadoop/.local/bin:/home/hadoop/bin) 2025-07-20 15:06:00: Starting HiveServer2 Error occurred during initialization of VM java.lang.InternalError: Could not create SecurityManager: allow at sun.misc.Launcher.<init>(Launcher.java:103) at sun.misc.Launcher.<clinit>(Launcher.java:54) at java.lang.ClassLoader.initSystemClassLoader(ClassLoader.java:1444) at java.lang.ClassLoader.getSystemClassLoader(ClassLoader.java:1429) nohup: ignoring input SLF4J: Class path contains multiple SLF4J bindings. SLF4J: Found binding in [jar:file:/export/server/apache-hive-3.1.3-bin/lib/log4j-slf4j-impl-2.17.1.jar!/org/slf4j/impl/StaticLoggerBinder.class] SLF4J: Found binding in [jar:file:/export/server/hadoop-3.4.0/share/hadoop/common/lib/slf4j-reload4j-1.7.36.jar!/org/slf4j/impl/StaticLoggerBinder.class] SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation. SLF4J: Actual binding is of type [org.apache.logging.slf4j.Log4jLoggerFactory] which: no hbase in (/usr/local/bin:/bin:/usr/bin:/usr/local/sbin:/usr/sbin:/root/myenv:/export/server/jdk/bin:/export/server/hadoop/bin:/export/server/hadoop/sbin:/home/hadoop/.local/bin:/home/hadoop/bin) 2025-07-20 15:06:24: Starting HiveServer2 Error occurred during initialization of VM java.lang.InternalError: Could not create SecurityManager: allow at sun.misc.Launcher.<init>(Launcher.java:103) at sun.misc.Launcher.<clinit>(Launcher.java:54) at java.lang.ClassLoader.initSystemClassLoader(ClassLoader.java:1444) at java.lang.ClassLoader.getSystemClassLoader(ClassLoader.java:1429) nohup: ignoring input SLF4J: Class path contains multiple SLF4J bindings. SLF4J: Found binding in [jar:file:/export/server/apache-hive-3.1.3-bin/lib/log4j-slf4j-impl-2.17.1.jar!/org/slf4j/impl/StaticLoggerBinder.class] SLF4J: Found binding in [jar:file:/export/server/hadoop-3.4.0/share/hadoop/common/lib/slf4j-reload4j-1.7.36.jar!/org/slf4j/impl/StaticLoggerBinder.class] SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation. SLF4J: Actual binding is of type [org.apache.logging.slf4j.Log4jLoggerFactory] which: no hbase in (/usr/local/bin:/bin:/usr/bin:/usr/local/sbin:/usr/sbin:/root/myenv:/export/server/jdk/bin:/export/server/hadoop/bin:/export/server/hadoop/sbin:/home/hadoop/.local/bin:/home/hadoop/bin) 2025-07-20 15:08:08: Starting HiveServer2 Error occurred during initialization of VM java.lang.InternalError: Could not create SecurityManager: allow at sun.misc.Launcher.<init>(Launcher.java:103) at sun.misc.Launcher.<clinit>(Launcher.java:54) at java.lang.ClassLoader.initSystemClassLoader(ClassLoader.java:1444) at java.lang.ClassLoader.getSystemClassLoader(ClassLoader.java:1429) nohup: ignoring input SLF4J: Class path contains multiple SLF4J bindings. SLF4J: Found binding in [jar:file:/export/server/apache-hive-3.1.3-bin/lib/log4j-slf4j-impl-2.17.1.jar!/org/slf4j/impl/StaticLoggerBinder.class] SLF4J: Found binding in [jar:file:/export/server/hadoop-3.4.0/share/hadoop/common/lib/slf4j-reload4j-1.7.36.jar!/org/slf4j/impl/StaticLoggerBinder.class] SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation. 
SLF4J: Actual binding is of type [org.apache.logging.slf4j.Log4jLoggerFactory] which: no hbase in (/usr/local/bin:/bin:/usr/bin:/usr/local/sbin:/usr/sbin:/root/myenv:/export/server/jdk/bin:/export/server/hadoop/bin:/export/server/hadoop/sbin:/home/hadoop/.local/bin:/home/hadoop/bin) 2025-07-20 15:18:27: Starting HiveServer2 Error occurred during initialization of VM java.lang.InternalError: Could not create SecurityManager: allow at sun.misc.Launcher.<init>(Launcher.java:103) at sun.misc.Launcher.<clinit>(Launcher.java:54) at java.lang.ClassLoader.initSystemClassLoader(ClassLoader.java:1444) at java.lang.ClassLoader.getSystemClassLoader(ClassLoader.java:1429) nohup: ignoring input SLF4J: Class path contains multiple SLF4J bindings. SLF4J: Found binding in [jar:file:/export/server/apache-hive-3.1.3-bin/lib/log4j-slf4j-impl-2.17.1.jar!/org/slf4j/impl/StaticLoggerBinder.class] SLF4J: Found binding in [jar:file:/export/server/hadoop-3.4.0/share/hadoop/common/lib/slf4j-reload4j-1.7.36.jar!/org/slf4j/impl/StaticLoggerBinder.class] SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation. SLF4J: Actual binding is of type [org.apache.logging.slf4j.Log4jLoggerFactory] which: no hbase in (/usr/local/bin:/bin:/usr/bin:/usr/local/sbin:/usr/sbin:/root/myenv:/export/server/jdk/bin:/export/server/hadoop/bin:/export/server/hadoop/sbin:/home/hadoop/.local/bin:/home/hadoop/bin) 2025-07-20 15:34:00: Starting HiveServer2 Error occurred during initialization of VM java.lang.InternalError: Could not create SecurityManager: allow at sun.misc.Launcher.<init>(Launcher.java:103) at sun.misc.Launcher.<clinit>(Launcher.java:54) at java.lang.ClassLoader.initSystemClassLoader(ClassLoader.java:1444) at java.lang.ClassLoader.getSystemClassLoader(ClassLoader.java:1429) nohup: ignoring input SLF4J: Class path contains multiple SLF4J bindings. SLF4J: Found binding in [jar:file:/export/server/apache-hive-3.1.3-bin/lib/log4j-slf4j-impl-2.17.1.jar!/org/slf4j/impl/StaticLoggerBinder.class] SLF4J: Found binding in [jar:file:/export/server/hadoop-3.4.0/share/hadoop/common/lib/slf4j-reload4j-1.7.36.jar!/org/slf4j/impl/StaticLoggerBinder.class] SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation. SLF4J: Actual binding is of type [org.apache.logging.slf4j.Log4jLoggerFactory] which: no hbase in (/usr/local/bin:/bin:/usr/bin:/usr/local/sbin:/usr/sbin:/root/myenv:/export/server/jdk/bin:/export/server/hadoop/bin:/export/server/hadoop/sbin:/home/hadoop/.local/bin:/home/hadoop/bin) 2025-07-20 15:34:40: Starting HiveServer2 Error occurred during initialization of VM java.lang.InternalError: Could not create SecurityManager: allow at sun.misc.Launcher.<init>(Launcher.java:103) at sun.misc.Launcher.<clinit>(Launcher.java:54) at java.lang.ClassLoader.initSystemClassLoader(ClassLoader.java:1444) at java.lang.ClassLoader.getSystemClassLoader(ClassLoader.java:1429) nohup: ignoring input SLF4J: Class path contains multiple SLF4J bindings. SLF4J: Found binding in [jar:file:/export/server/apache-hive-3.1.3-bin/lib/log4j-slf4j-impl-2.17.1.jar!/org/slf4j/impl/StaticLoggerBinder.class] SLF4J: Found binding in [jar:file:/export/server/hadoop-3.4.0/share/hadoop/common/lib/slf4j-reload4j-1.7.36.jar!/org/slf4j/impl/StaticLoggerBinder.class] SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation. 
SLF4J: Actual binding is of type [org.apache.logging.slf4j.Log4jLoggerFactory] which: no hbase in (/usr/local/bin:/bin:/usr/bin:/usr/local/sbin:/usr/sbin:/root/myenv:/export/server/jdk/bin:/export/server/hadoop/bin:/export/server/hadoop/sbin:/home/hadoop/.local/bin:/home/hadoop/bin) 2025-07-20 15:36:56: Starting HiveServer2 Error occurred during initialization of VM java.lang.InternalError: Could not create SecurityManager: allow at sun.misc.Launcher.<init>(Launcher.java:103) at sun.misc.Launcher.<clinit>(Launcher.java:54) at java.lang.ClassLoader.initSystemClassLoader(ClassLoader.java:1444) at java.lang.ClassLoader.getSystemClassLoader(ClassLoader.java:1429) This is the hiveserver2 log.

[user1@node11 apache-flume-1.9.0-bin]$ bin/flume-ng agent -c ./conf -f ./conf/spool-hdfs.conf -n a1 -Dflume.root.logger=INFO,console Info: Sourcing environment configuration script /home/user1/hadooptools/apache-flume-1.9.0-bin/conf/flume-env.sh Info: Including Hadoop libraries found via (/home/user1/hadooptools/hadoop-3.1.2/bin/hadoop) for HDFS access Info: Including HBASE libraries found via (/home/user1/hadooptools/hbase-2.4.17/bin/hbase) for HBASE access Info: Including Hive libraries found via (/home/user1/hadooptools/hive/apache-hive-3.1.2-bin) for Hive access + exec /home/jdk1.8.0_161/bin/java -Xmx20m -Dflume.root.logger=INFO,console -cp '/home/user1/hadooptools/apache-flume-1.9.0-bin/conf:/home/user1/hadooptools/apache-flume-1.9.0-bin/lib/*:/home/user1/hadooptools/hadoop-3.1.2/etc/hadoop:/home/user1/hadooptools/hadoop-3.1.2/share/hadoop/common/lib/*:/home/user1/hadooptools/hadoop-3.1.2/share/hadoop/common/*:/home/user1/hadooptools/hadoop-3.1.2/share/hadoop/hdfs:/home/user1/hadooptools/hadoop-3.1.2/share/hadoop/hdfs/lib/*:/home/user1/hadooptools/hadoop-3.1.2/share/hadoop/hdfs/*:/home/user1/hadooptools/hadoop-3.1.2/share/hadoop/mapreduce/lib/*:/home/user1/hadooptools/hadoop-3.1.2/share/hadoop/mapreduce/*:/home/user1/hadooptools/hadoop-3.1.2/share/hadoop/yarn:/home/user1/hadooptools/hadoop-3.1.2/share/hadoop/yarn/lib/*:/home/user1/hadooptools/hadoop-3.1.2/share/hadoop/yarn/*:/home/user1/hadooptools/hbase-2.4.17/conf:/home/jdk1.8.0_161/lib/tools.jar:/home/user1/hadooptools/hbase-2.4.17:/home/user1/hadooptools/hbase-2.4.17/lib/shaded-clients/hbase-shaded-client-byo-hadoop-2.4.17.jar:/home/user1/hadooptools/hbase-2.4.17/lib/client-facing-thirdparty/audience-annotations-0.13.0.jar:/home/user1/hadooptools/hbase-2.4.17/lib/client-facing-thirdparty/commons-logging-1.2.jar:/home/user1/hadooptools/hbase-2.4.17/lib/client-facing-thirdparty/htrace-core4-4.2.0-incubating.jar:/home/user1/hadooptools/hbase-2.4.17/lib/client-facing-thirdparty/reload4j-1.2.22.jar:/home/user1/h

[root@hjh01 ~]# yarn node -list 2025-07-02 19:05:04,602 INFO client.DefaultNoHARMFailoverProxyProvider: Connecting to ResourceManager at hjh01/192.168.63.101:8032 Total Nodes:3 Node-Id Node-State Node-Http-Address Number-of-Running-Containers hjh03:41086 RUNNING hjh03:8042 0 hjh02:33452 RUNNING hjh02:8042 0 hjh01:34457 RUNNING hjh01:8042 0 [root@hjh01 ~]# yarn top 2025-07-02 19:05:23,316 INFO client.DefaultNoHARMFailoverProxyProvider: Connecting to ResourceManager at hjh01/192.168.63.101:8032 YARN top - 19:06:19, up 0d, 0:26, 0 active users, queue(s): root NodeManager(s): 3 total, 3 active, 0 unhealthy, 0 decommissioned, 0 lost, 0 reb Queue(s) Applications: 0 running, 2 submitted, 0 pending, 0 completed, 0 killed Queue(s) Mem(GB): 24 available, 0 allocated, 0 pending, 0 reserved Queue(s) VCores: 24 available, 0 allocated, 0 pending, 0 reserved Queue(s) Containers: 0 allocated, 0 pending, 0 reserved q APPLICATIONID USER TYPE QUEUE PRIOR #CONT [root@hjh01 ~]# vim /usr/local/hadoop/etc/hadoop/yarn-site.xml [root@hjh01 ~]# hive which: no hbase in (/usr/local/bin:/usr/local/sbin:/usr/bin:/usr/sbin:/bin:/sbin:/usr/local/jdk/bin:/usr/local/hadoop/bin:/usr/local/hadoop/sbin:/usr/local/hive/bin:/root/bin) SLF4J: Class path contains multiple SLF4J bindings. SLF4J: Found binding in [jar:file:/usr/local/hive/lib/log4j-slf4j-impl-2.17.1.jar!/org/slf4j/impl/StaticLoggerBinder.class] SLF4J: Found binding in [jar:file:/usr/local/hadoop/share/hadoop/common/lib/slf4j-reload4j-1.7.36.jar!/org/slf4j/impl/StaticLoggerBinder.class] SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation. SLF4J: Actual binding is of type [org.apache.logging.slf4j.Log4jLoggerFactory] Hive Session ID = e131a6d9-ab5a-4b4c-92ef-b2ef2c57e699 Logging initialized using configuration in jar:file:/usr/local/hive/lib/hive-common-3.1.3.jar!/hive-log4j2.properties Async: true Hive-on-MR is deprecated in Hive 2 and may not be available in the future versions. Consider using a different execution engine (i.e. spark, tez) or using Hive 1.X releases. Hive Session ID = f58a189b-6fa4-4d22-9937-2b2ffd8bf3f6 hive> SET mapreduce.map.memory.mb=2048; hive> SET mapreduce.reduce.memory.mb=4096; hive> SELECT COUNT(1) FROM hjh_table LIMIT 1; FAILED: SemanticException [Error 10001]: Line 1:21 Table not found 'hjh_table' hive> SELECT > city, > COUNT(*) AS user_count, > ROUND(AVG(age), 1) AS avg_age > FROM users > GROUP BY city > ORDER BY user_count DESC; Query ID = root_20250702192000_765e8e26-cd76-45c6-b01a-5817bcfe7ff9 Total jobs = 2 Launching Job 1 out of 2 Number of reduce tasks not specified. Estimated from input data size: 1 In order to change the average load for a reducer (in bytes): set hive.exec.reducers.bytes.per.reducer=<number> In order to limit the maximum number of reducers: set hive.exec.reducers.max=<number> In order to set a constant number of reducers: set mapreduce.job.reduces=<number> Starting Job = job_1751506776899_0003, Tracking URL = http://hjh01:8088/proxy/application_1751506776899_0003/ Kill Command = /usr/local/hadoop/bin/mapred job -kill job_1751506776899_0003 Hadoop job information for Stage-1: number of mappers: 0; number of reducers: 0 2025-07-02 19:20:53,194 Stage-1 map = 0%, reduce = 0% Ended Job = job_1751506776899_0003 with errors Error during job, obtaining debugging information... 
FAILED: Execution Error, return code 2 from org.apache.hadoop.hive.ql.exec.mr.MapRedTask MapReduce Jobs Launched: Stage-Stage-1: HDFS Read: 0 HDFS Write: 0 FAIL Total MapReduce CPU Time Spent: 0 msec hive>

[root@localhost sqoop-1.4.7.bin__hadoop-2.6.0]# bin/sqoop export --connect jdbc:mysql://localhost:3306/dbtaobao --username root --password 123456 --table user_log --export-dir '/user/hive/warehouse/dbtaobao.db/inner_user_log' --fields-terminated-by ','; Warning: /usr/sqoop/sqoop-1.4.7.bin__hadoop-2.6.0/../hbase does not exist! HBase imports will fail. Please set $HBASE_HOME to the root of your HBase installation. Warning: /usr/sqoop/sqoop-1.4.7.bin__hadoop-2.6.0/../hcatalog does not exist! HCatalog jobs will fail. Please set $HCAT_HOME to the root of your HCatalog installation. Warning: /usr/sqoop/sqoop-1.4.7.bin__hadoop-2.6.0/../accumulo does not exist! Accumulo imports will fail. Please set $ACCUMULO_HOME to the root of your Accumulo installation. Warning: /usr/sqoop/sqoop-1.4.7.bin__hadoop-2.6.0/../zookeeper does not exist! Accumulo imports will fail. Please set $ZOOKEEPER_HOME to the root of your Zookeeper installation. 2025-06-06 00:44:19,902 INFO sqoop.Sqoop: Running Sqoop version: 1.4.7 2025-06-06 00:44:20,015 WARN tool.BaseSqoopTool: Setting your password on the command-line is insecure. Consider using -P instead. 2025-06-06 00:44:20,281 INFO manager.MySQLManager: Preparing to use a MySQL streaming resultset. 2025-06-06 00:44:20,287 INFO tool.CodeGenTool: Beginning code generation Loading class `com.mysql.jdbc.Driver'. This is deprecated. The new driver class is `com.mysql.cj.jdbc.Driver'. The driver is automatically registered via the SPI and manual loading of the driver class is generally unnecessary. 2025-06-06 00:44:22,137 INFO manager.SqlManager: Executing SQL statement: SELECT t.* FROM user_log AS t LIMIT 1 2025-06-06 00:44:22,270 INFO manager.SqlManager: Executing SQL statement: SELECT t.* FROM user_log AS t LIMIT 1 2025-06-06 00:44:22,293 INFO orm.CompilationManager: HADOOP_MAPRED_HOME is /usr/hadoop/hadoop-3.1.3 Note: /tmp/sqoop-root/compile/2647ebe9a3777fcaa95ca65a919294ec/user_log.java uses or overrides a deprecated API. Note: Recompile with -Xlint:deprecation for details. 2025-06-06 00:44:26,610 INFO orm.CompilationManager: Writing jar file: /tmp/sqoop-root/compile/2647ebe9a3777fcaa95ca65a919294ec/user_log.jar 2025-06-06 00:44:26,629 INFO mapreduce.ExportJobBase: Beginning export of user_log 2025-06-06 00:44:26,629 INFO Configuration.deprecation: mapred.job.tracker is deprecated. Instead, use mapreduce.jobtracker.address 2025-06-06 00:44:27,018 INFO Configuration.deprecation: mapred.jar is deprecated. Instead, use mapreduce.job.jar 2025-06-06 00:44:28,689 INFO sasl.SaslDataTransferClient: SASL encryption trust check: localHostTrusted = false, remoteHostTrusted = false 2025-06-06 00:44:28,961 INFO Configuration.deprecation: mapred.reduce.tasks.speculative.execution is deprecated. Instead, use mapreduce.reduce.speculative 2025-06-06 00:44:28,966 INFO Configuration.deprecation: mapred.map.tasks.speculative.execution is deprecated. Instead, use mapreduce.map.speculative 2025-06-06 00:44:28,966 INFO Configuration.deprecation: mapred.map.tasks is deprecated. Instead, use mapreduce.job.maps 2025-06-06 00:44:29,494 INFO impl.MetricsConfig: loaded properties from hadoop-metrics2.properties 2025-06-06 00:44:29,762 INFO impl.MetricsSystemImpl: Scheduled Metric snapshot period at 10 second(s). 
2025-06-06 00:44:29,762 INFO impl.MetricsSystemImpl: JobTracker metrics system started 2025-06-06 00:44:30,031 INFO input.FileInputFormat: Total input files to process : 1 2025-06-06 00:44:30,053 INFO input.FileInputFormat: Total input files to process : 1 2025-06-06 00:44:30,136 INFO mapreduce.JobSubmitter: number of splits:4 2025-06-06 00:44:30,284 INFO Configuration.deprecation: mapred.map.tasks.speculative.execution is deprecated. Instead, use mapreduce.map.speculative 2025-06-06 00:44:30,660 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_local583218486_0001 2025-06-06 00:44:30,660 INFO mapreduce.JobSubmitter: Executing with tokens: [] 2025-06-06 00:44:31,271 INFO mapred.LocalDistributedCacheManager: Creating symlink: /tmp/hadoop-root/mapred/local/1749141870870/libjars <- /usr/sqoop/sqoop-1.4.7.bin__hadoop-2.6.0/libjars/* 2025-06-06 00:44:31,278 WARN fs.FileUtil: Command 'ln -s /tmp/hadoop-root/mapred/local/1749141870870/libjars /usr/sqoop/sqoop-1.4.7.bin__hadoop-2.6.0/libjars/*' failed 1 with: ln: failed to create symbolic link '/usr/sqoop/sqoop-1.4.7.bin__hadoop-2.6.0/libjars/*': No such file or directory 2025-06-06 00:44:31,278 WARN mapred.LocalDistributedCacheManager: Failed to create symlink: /tmp/hadoop-root/mapred/local/1749141870870/libjars <- /usr/sqoop/sqoop-1.4.7.bin__hadoop-2.6.0/libjars/* 2025-06-06 00:44:31,278 INFO mapred.LocalDistributedCacheManager: Localized file:/tmp/hadoop/mapred/staging/root583218486/.staging/job_local583218486_0001/libjars as file:/tmp/hadoop-root/mapred/local/1749141870870/libjars 2025-06-06 00:44:31,569 INFO mapreduce.Job: The url to track the job: http://localhost:8080/ 2025-06-06 00:44:31,571 INFO mapreduce.Job: Running job: job_local583218486_0001 2025-06-06 00:44:31,614 INFO mapred.LocalJobRunner: OutputCommitter set in config null 2025-06-06 00:44:31,631 INFO mapred.LocalJobRunner: OutputCommitter is org.apache.sqoop.mapreduce.NullOutputCommitter 2025-06-06 00:44:31,798 INFO mapred.LocalJobRunner: Waiting for map tasks 2025-06-06 00:44:31,802 INFO mapred.LocalJobRunner: Starting task: attempt_local583218486_0001_m_000000_0 2025-06-06 00:44:32,000 INFO mapred.Task: Using ResourceCalculatorProcessTree : [ ] 2025-06-06 00:44:32,006 INFO mapred.MapTask: Processing split: Paths:/user/hive/warehouse/dbtaobao.db/inner_user_log/000000_0:355065+59179,/user/hive/warehouse/dbtaobao.db/inner_user_log/000000_0:414244+59179 2025-06-06 00:44:32,012 INFO Configuration.deprecation: map.input.file is deprecated. Instead, use mapreduce.map.input.file 2025-06-06 00:44:32,012 INFO Configuration.deprecation: map.input.start is deprecated. Instead, use mapreduce.map.input.start 2025-06-06 00:44:32,012 INFO Configuration.deprecation: map.input.length is deprecated. Instead, use mapreduce.map.input.length 2025-06-06 00:44:32,050 INFO sasl.SaslDataTransferClient: SASL encryption trust check: localHostTrusted = false, remoteHostTrusted = false 2025-06-06 00:44:32,260 INFO sasl.SaslDataTransferClient: SASL encryption trust check: localHostTrusted = false, remoteHostTrusted = false 2025-06-06 00:44:32,299 INFO mapreduce.AutoProgressMapper: Auto-progress thread is finished. 
keepGoing=false 2025-06-06 00:44:32,328 INFO mapred.LocalJobRunner: Starting task: attempt_local583218486_0001_m_000001_0 2025-06-06 00:44:32,357 INFO mapred.Task: Using ResourceCalculatorProcessTree : [ ] 2025-06-06 00:44:32,359 INFO mapred.MapTask: Processing split: Paths:/user/hive/warehouse/dbtaobao.db/inner_user_log/000000_0:0+118355 2025-06-06 00:44:32,387 INFO sasl.SaslDataTransferClient: SASL encryption trust check: localHostTrusted = false, remoteHostTrusted = false 2025-06-06 00:44:32,524 INFO mapreduce.AutoProgressMapper: Auto-progress thread is finished. keepGoing=false 2025-06-06 00:44:32,553 INFO mapred.LocalJobRunner: Starting task: attempt_local583218486_0001_m_000002_0 2025-06-06 00:44:32,566 INFO mapred.Task: Using ResourceCalculatorProcessTree : [ ] 2025-06-06 00:44:32,567 INFO mapred.MapTask: Processing split: Paths:/user/hive/warehouse/dbtaobao.db/inner_user_log/000000_0:118355+118355 2025-06-06 00:44:32,585 INFO mapreduce.Job: Job job_local583218486_0001 running in uber mode : false 2025-06-06 00:44:32,587 INFO mapreduce.Job: map 0% reduce 0% 2025-06-06 00:44:32,616 INFO sasl.SaslDataTransferClient: SASL encryption trust check: localHostTrusted = false, remoteHostTrusted = false 2025-06-06 00:44:32,745 INFO sasl.SaslDataTransferClient: SASL encryption trust check: localHostTrusted = false, remoteHostTrusted = false 2025-06-06 00:44:32,799 INFO mapreduce.AutoProgressMapper: Auto-progress thread is finished. keepGoing=false 2025-06-06 00:44:32,866 INFO mapred.LocalJobRunner: Starting task: attempt_local583218486_0001_m_000003_0 2025-06-06 00:44:32,901 INFO mapred.Task: Using ResourceCalculatorProcessTree : [ ] 2025-06-06 00:44:32,903 INFO mapred.MapTask: Processing split: Paths:/user/hive/warehouse/dbtaobao.db/inner_user_log/000000_0:236710+118355 2025-06-06 00:44:32,930 INFO sasl.SaslDataTransferClient: SASL encryption trust check: localHostTrusted = false, remoteHostTrusted = false 2025-06-06 00:44:33,004 INFO sasl.SaslDataTransferClient: SASL encryption trust check: localHostTrusted = false, remoteHostTrusted = false 2025-06-06 00:44:33,034 INFO mapreduce.AutoProgressMapper: Auto-progress thread is finished. keepGoing=false 2025-06-06 00:44:33,049 INFO mapred.LocalJobRunner: map task executor complete. 
2025-06-06 00:44:33,050 WARN mapred.LocalJobRunner: job_local583218486_0001 java.lang.Exception: java.io.IOException: java.lang.ClassNotFoundException: user_log at org.apache.hadoop.mapred.LocalJobRunner$Job.runTasks(LocalJobRunner.java:492) at org.apache.hadoop.mapred.LocalJobRunner$Job.run(LocalJobRunner.java:552) Caused by: java.io.IOException: java.lang.ClassNotFoundException: user_log at org.apache.sqoop.mapreduce.TextExportMapper.setup(TextExportMapper.java:74) at org.apache.hadoop.mapreduce.Mapper.run(Mapper.java:143) at org.apache.sqoop.mapreduce.AutoProgressMapper.run(AutoProgressMapper.java:64) at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:799) at org.apache.hadoop.mapred.MapTask.run(MapTask.java:347) at org.apache.hadoop.mapred.LocalJobRunner$Job$MapTaskRunnable.run(LocalJobRunner.java:271) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) at java.lang.Thread.run(Thread.java:748) Caused by: java.lang.ClassNotFoundException: user_log at java.net.URLClassLoader.findClass(URLClassLoader.java:382) at java.lang.ClassLoader.loadClass(ClassLoader.java:424) at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:349) at java.lang.ClassLoader.loadClass(ClassLoader.java:357) at java.lang.Class.forName0(Native Method) at java.lang.Class.forName(Class.java:348) at org.apache.sqoop.mapreduce.TextExportMapper.setup(TextExportMapper.java:70) ... 10 more 2025-06-06 00:44:33,590 INFO mapreduce.Job: Job job_local583218486_0001 failed with state FAILED due to: NA 2025-06-06 00:44:33,599 INFO mapreduce.Job: Counters: 0 2025-06-06 00:44:33,623 WARN mapreduce.Counters: Group FileSystemCounters is deprecated. Use org.apache.hadoop.mapreduce.FileSystemCounter instead 2025-06-06 00:44:33,625 INFO mapreduce.ExportJobBase: Transferred 0 bytes in 4.6274 seconds (0 bytes/sec) 2025-06-06 00:44:33,626 WARN mapreduce.Counters: Group org.apache.hadoop.mapred.Task$Counter is deprecated. Use org.apache.hadoop.mapreduce.TaskCounter instead 2025-06-06 00:44:33,626 INFO mapreduce.ExportJobBase: Exported 0 records. 2025-06-06 00:44:33,626 ERROR mapreduce.ExportJobBase: Export job failed! 2025-06-06 00:44:33,626 ERROR tool.ExportTool: Error during export: Export job failed! at org.apache.sqoop.mapreduce.ExportJobBase.runExport(ExportJobBase.java:445) at org.apache.sqoop.manager.SqlManager.exportTable(SqlManager.java:931) at org.apache.sqoop.tool.ExportTool.exportTable(ExportTool.java:80) at org.apache.sqoop.tool.ExportTool.run(ExportTool.java:99) at org.apache.sqoop.Sqoop.run(Sqoop.java:147) at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:76) at org.apache.sqoop.Sqoop.runSqoop(Sqoop.java:183) at org.apache.sqoop.Sqoop.runTool(Sqoop.java:234) at org.apache.sqoop.Sqoop.runTool(Sqoop.java:243) at org.apache.sqoop.Sqoop.main(Sqoop.java:252) [root@localhost sqoop-1.4.7.bin__hadoop-2.6.0]# Give me a solution.
