
```
2025-06-18 19:46:10,831 - INFO - Using Any for unsupported type: typing.Sequence[~T]
2025-06-18 19:46:11,071 - INFO - No module named google.cloud.bigquery_storage_v1. As a result, the ReadFromBigQuery transform *CANNOT* be used with `method=DIRECT_READ`.
2025-06-18 19:46:13,954 - ERROR - Error executing catalog: CREATE CATALOG IF NOT EXISTS hive_catalog WITH (
    'type' = 'hive',
    'hive-conf-dir' = '/opt/hive/conf'
)
An error occurred while calling o100.executeSql.
: org.apache.flink.table.api.SqlParserException: SQL parse failed. Encountered "NOT" at line 2, column 23.
Was expecting one of: <EOF> "WITH" ... ";" ...
	at org.apache.flink.table.planner.parse.CalciteParser.parseSqlList(CalciteParser.java:82)
	at org.apache.flink.table.planner.delegation.ParserImpl.parse(ParserImpl.java:102)
	at org.apache.flink.table.api.internal.TableEnvironmentImpl.executeSql(TableEnvironmentImpl.java:758)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:498)
	at org.apache.flink.api.python.shaded.py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:244)
	at org.apache.flink.api.python.shaded.py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:374)
	at org.apache.flink.api.python.shaded.py4j.Gateway.invoke(Gateway.java:282)
	at org.apache.flink.api.python.shaded.py4j.commands.AbstractCommand.invokeMethod(AbstractCommand.java:132)
	at org.apache.flink.api.python.shaded.py4j.commands.CallCommand.execute(CallCommand.java:79)
	at org.apache.flink.api.python.shaded.py4j.GatewayConnection.run(GatewayConnection.java:238)
	at java.lang.Thread.run(Thread.java:748)
Caused by: org.apache.calcite.sql.parser.SqlParseException: Encountered "NOT" at line 2, column 23.
Was expecting one of: <EOF> "WITH" ... ";" ...
	at org.apache.flink.sql.parser.impl.FlinkSqlParserImpl.convertException(FlinkSqlParserImpl.java:490)
	at org.apache.flink.sql.parser.impl.FlinkSqlParserImpl.normalizeException(FlinkSqlParserImpl.java:254)
	at org.apache.calcite.sql.parser.SqlParser.handleException(SqlParser.java:145)
	at org.apache.calcite.sql.parser.SqlParser.parseStmtList(SqlParser.java:200)
	at org.apache.flink.table.planner.parse.CalciteParser.parseSqlList(CalciteParser.java:77)
	... 13 more
Caused by: org.apache.flink.sql.parser.impl.ParseException: Encountered "NOT" at line 2, column 23.
Was expecting one of: <EOF> "WITH" ... ";" ...
	at org.apache.flink.sql.parser.impl.FlinkSqlParserImpl.generateParseException(FlinkSqlParserImpl.java:46382)
	at org.apache.flink.sql.parser.impl.FlinkSqlParserImpl.jj_consume_token(FlinkSqlParserImpl.java:46190)
	at org.apache.flink.sql.parser.impl.FlinkSqlParserImpl.SqlStmtList(FlinkSqlParserImpl.java:3522)
	at org.apache.flink.sql.parser.impl.FlinkSqlParserImpl.parseSqlStmtList(FlinkSqlParserImpl.java:306)
	at org.apache.calcite.sql.parser.SqlParser.parseStmtList(SqlParser.java:198)
	... 14 more

Traceback (most recent call last):
  File "/home/hadoop/PycharmProjects/SparkProject/src/flinkCDC.py", line 237, in <module>
    main()
  File "/home/hadoop/PycharmProjects/SparkProject/src/flinkCDC.py", line 97, in main
    t_env.execute_sql("CREATE CATALOG IF NOT EXISTS default_catalog")
  File "/home/hadoop/桌面/pyflink/lib/python3.8/site-packages/pyflink/table/table_environment.py", line 837, in execute_sql
    return TableResult(self._j_tenv.executeSql(stmt))
  File "/home/hadoop/桌面/pyflink/lib/python3.8/site-packages/py4j/java_gateway.py", line 1322, in __call__
    return_value = get_return_value(
  File "/home/hadoop/桌面/pyflink/lib/python3.8/site-packages/pyflink/util/exceptions.py", line 146, in deco
    return f(*a, **kw)
  File "/home/hadoop/桌面/pyflink/lib/python3.8/site-packages/py4j/protocol.py", line 326, in get_return_value
    raise Py4JJavaError(
py4j.protocol.Py4JJavaError: An error occurred while calling o100.executeSql.
: org.apache.flink.table.api.SqlParserException: SQL parse failed. Encountered "NOT" at line 1, column 19.
Was expecting one of: <EOF> "WITH" ... ";" ...
	at org.apache.flink.table.planner.parse.CalciteParser.parseSqlList(CalciteParser.java:82)
	at org.apache.flink.table.planner.delegation.ParserImpl.parse(ParserImpl.java:102)
	at org.apache.flink.table.api.internal.TableEnvironmentImpl.executeSql(TableEnvironmentImpl.java:758)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:498)
	at org.apache.flink.api.python.shaded.py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:244)
	at org.apache.flink.api.python.shaded.py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:374)
	at org.apache.flink.api.python.shaded.py4j.Gateway.invoke(Gateway.java:282)
	at org.apache.flink.api.python.shaded.py4j.commands.AbstractCommand.invokeMethod(AbstractCommand.java:132)
	at org.apache.flink.api.python.shaded.py4j.commands.CallCommand.execute(CallCommand.java:79)
	at org.apache.flink.api.python.shaded.py4j.GatewayConnection.run(GatewayConnection.java:238)
	at java.lang.Thread.run(Thread.java:748)
Caused by: org.apache.calcite.sql.parser.SqlParseException: Encountered "NOT" at line 1, column 19.
Was expecting one of: <EOF> "WITH" ... ";" ...
	at org.apache.flink.sql.parser.impl.FlinkSqlParserImpl.convertException(FlinkSqlParserImpl.java:490)
	at org.apache.flink.sql.parser.impl.FlinkSqlParserImpl.normalizeException(FlinkSqlParserImpl.java:254)
	at org.apache.calcite.sql.parser.SqlParser.handleException(SqlParser.java:145)
	at org.apache.calcite.sql.parser.SqlParser.parseStmtList(SqlParser.java:200)
	at org.apache.flink.table.planner.parse.CalciteParser.parseSqlList(CalciteParser.java:77)
	... 13 more
Caused by: org.apache.flink.sql.parser.impl.ParseException: Encountered "NOT" at line 1, column 19.
Was expecting one of: <EOF> "WITH" ... ";" ...
	at org.apache.flink.sql.parser.impl.FlinkSqlParserImpl.generateParseException(FlinkSqlParserImpl.java:46382)
	at org.apache.flink.sql.parser.impl.FlinkSqlParserImpl.jj_consume_token(FlinkSqlParserImpl.java:46190)
	at org.apache.flink.sql.parser.impl.FlinkSqlParserImpl.SqlStmtList(FlinkSqlParserImpl.java:3522)
	at org.apache.flink.sql.parser.impl.FlinkSqlParserImpl.parseSqlStmtList(FlinkSqlParserImpl.java:306)
	at org.apache.calcite.sql.parser.SqlParser.parseStmtList(SqlParser.java:198)
	... 14 more
```

Based on the error log you provided, the problem is in SQL parsing: the Flink SQL parser does not accept the `IF NOT EXISTS` clause in `CREATE CATALOG`. Let's analyze the failure and fix it.

### Problem Analysis

The key part of the error message:

```
org.apache.flink.table.api.SqlParserException: SQL parse failed. Encountered "NOT" at line 2, column 23.
Was expecting one of: <EOF> "WITH" ... ";" ...
```

This tells us:

1. The Flink SQL parser does not recognize the `IF NOT EXISTS` clause
2. The failure occurs in the `CREATE CATALOG` statements
3. After the catalog name, the parser only accepts `WITH` or a statement terminator, yet it encountered `NOT`
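The grammar difference is easy to confirm in isolation. Below is a minimal sketch, not part of the original job: it assumes a local PyFlink installation and uses the built-in `generic_in_memory` catalog type purely so that no Hive metastore is needed.

```python
from pyflink.table import EnvironmentSettings, TableEnvironment

t_env = TableEnvironment.create(EnvironmentSettings.in_streaming_mode())

# Plain CREATE CATALOG ... WITH (...) matches the grammar the parser reports
# (only "WITH" or end-of-statement may follow the catalog name), so it parses.
t_env.execute_sql(
    "CREATE CATALOG demo_catalog WITH ('type' = 'generic_in_memory')"
)
print(t_env.list_catalogs())  # e.g. ['default_catalog', 'demo_catalog']

# The IF NOT EXISTS variant is exactly what the stack trace shows failing:
try:
    t_env.execute_sql(
        "CREATE CATALOG IF NOT EXISTS demo_catalog "
        "WITH ('type' = 'generic_in_memory')"
    )
except Exception as e:
    # Expected on Flink versions whose parser lacks IF NOT EXISTS support
    print(f"Parse failed as in the log: {e}")
```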
### Solution: Handle the Syntax Across Flink Versions

Different Flink releases support different DDL grammar. The helper below tries the modern syntax first and falls back to an explicit existence check; a quick smoke test follows the improvement notes at the end of this section.

```python
from pyflink.table import TableEnvironment
import logging

logger = logging.getLogger(__name__)

def create_catalog_safely(t_env: TableEnvironment, catalog_name: str,
                          catalog_type: str, properties: dict):
    """Create a catalog, tolerating Flink versions without IF NOT EXISTS."""
    # First, try the IF NOT EXISTS syntax
    try:
        ddl = f"""
        CREATE CATALOG IF NOT EXISTS {catalog_name} WITH (
            'type' = '{catalog_type}',
            {format_properties(properties)}
        )
        """
        t_env.execute_sql(ddl)
        logger.info(f"Created catalog {catalog_name} using IF NOT EXISTS syntax")
        return True
    except Exception as e:
        if "Encountered \"NOT\"" in str(e) or "SqlParserException" in str(e):
            # Fall back to an explicit existence check
            return create_catalog_with_check(t_env, catalog_name, catalog_type, properties)
        else:
            logger.error(f"Unexpected error creating catalog: {str(e)}")
            return False

def format_properties(properties: dict) -> str:
    """Format a dict as a SQL property list."""
    return ",\n".join([f"'{k}' = '{v}'" for k, v in properties.items()])

def create_catalog_with_check(t_env: TableEnvironment, catalog_name: str,
                              catalog_type: str, properties: dict) -> bool:
    """Create a catalog only after checking that it does not already exist."""
    # Collect the names of all existing catalogs
    try:
        catalogs = t_env.list_catalogs()
    except Exception:
        # Older versions may not support list_catalogs()
        catalogs = []
        try:
            result = t_env.execute_sql("SHOW CATALOGS")
            for row in result.collect():
                catalogs.append(row[0])
        except Exception as e:
            logger.error(f"Failed to list catalogs: {str(e)}")
            return False

    # Create the catalog only if it does not exist yet
    if catalog_name not in catalogs:
        ddl = f"""
        CREATE CATALOG {catalog_name} WITH (
            'type' = '{catalog_type}',
            {format_properties(properties)}
        )
        """
        try:
            t_env.execute_sql(ddl)
            logger.info(f"Created catalog {catalog_name}")
            return True
        except Exception as e:
            logger.error(f"Failed to create catalog {catalog_name}: {str(e)}")
            return False
    else:
        logger.info(f"Catalog {catalog_name} already exists")
        return True

def main():
    # Initialize the TableEnvironment
    t_env = ...  # your TableEnvironment setup code

    # Create the Hive catalog
    hive_properties = {
        'hive-conf-dir': '/opt/hive/conf'
    }
    create_catalog_safely(t_env, "hive_catalog", "hive", hive_properties)

    # The default catalog is built in, so there is nothing to create:
    # just switch to it
    t_env.use_catalog("default_catalog")

    # Rest of the pipeline...
```

### Key Improvements

1. **Graceful syntax fallback**:
   ```python
   try:
       ...  # attempt the IF NOT EXISTS syntax
   except Exception:
       ...  # fall back to the explicit existence check
   ```
   - First attempts the `IF NOT EXISTS` syntax
   - On the specific parse error, falls back to checking existence manually

2. **Version-tolerant catalog listing**:
   ```python
   try:
       catalogs = t_env.list_catalogs()  # newer API
   except AttributeError:
       # older versions: fall back to the SHOW CATALOGS command
       result = t_env.execute_sql("SHOW CATALOGS")
       catalogs = [row[0] for row in result.collect()]
   ```
   - Prefers the `list_catalogs()` method
   - Falls back to executing `SHOW CATALOGS`

3. **Property formatting helper**:
   ```python
   def format_properties(properties: dict) -> str:
       return ",\n".join([f"'{k}' = '{v}'" for k, v in properties.items()])
   ```
   - Converts a dict into the SQL property-list format
   - Note that values containing single quotes are not escaped here, so keep property values plain or add escaping before interpolating

4. **Default catalog handling**:
   ```python
   # The default catalog is built in; do not try to create it
   t_env.use_catalog("default_catalog")
   ```
   - Uses the built-in `default_catalog` directly
   - Avoids the unnecessary (and, on these versions, unparseable) CREATE statement
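As a quick smoke test of the helper (a hypothetical sketch, not from the original post: `generic_in_memory` with its `default-database` option stands in for Hive so no metastore is required, and the functions above are assumed to be in scope):

```python
from pyflink.table import EnvironmentSettings, TableEnvironment

t_env = TableEnvironment.create(EnvironmentSettings.in_streaming_mode())

props = {"default-database": "default"}
assert create_catalog_safely(t_env, "demo_catalog", "generic_in_memory", props)
assert "demo_catalog" in t_env.list_catalogs()

# A second call must also succeed: either IF NOT EXISTS swallows the duplicate,
# or the fallback existence check skips creation.
assert create_catalog_safely(t_env, "demo_catalog", "generic_in_memory", props)
```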
### Complete Fixed Code

```python
from pyflink.datastream import StreamExecutionEnvironment
from pyflink.table import StreamTableEnvironment, EnvironmentSettings, DataTypes
from pyflink.table.udf import udf
import os
import logging

# Configure logging
logging.basicConfig(
    level=logging.INFO,
    format='%(asctime)s - %(levelname)s - %(message)s',
    datefmt='%Y-%m-%d %H:%M:%S'
)
logger = logging.getLogger(__name__)

# UDF mapping CDC op codes to readable operation names
@udf(result_type=DataTypes.STRING())
def process_cdc_op(op):
    return {
        'I': 'INSERT',
        'U': 'UPDATE',
        'D': 'DELETE'
    }.get(op, 'UNKNOWN')

def format_properties(properties: dict) -> str:
    """Format a dict as a SQL property list."""
    return ",\n".join([f"'{k}' = '{v}'" for k, v in properties.items()])

def create_catalog_safely(t_env, catalog_name: str, catalog_type: str, properties: dict) -> bool:
    """Create a catalog, tolerating Flink versions without IF NOT EXISTS."""
    # Try the IF NOT EXISTS syntax first
    try:
        ddl = f"""
        CREATE CATALOG IF NOT EXISTS {catalog_name} WITH (
            'type' = '{catalog_type}',
            {format_properties(properties)}
        )
        """
        t_env.execute_sql(ddl)
        logger.info(f"Created catalog {catalog_name} using IF NOT EXISTS syntax")
        return True
    except Exception as e:
        if "Encountered \"NOT\"" in str(e) or "SqlParserException" in str(e):
            logger.warning("IF NOT EXISTS syntax not supported, falling back to existence check")
            return create_catalog_with_check(t_env, catalog_name, catalog_type, properties)
        else:
            logger.error(f"Unexpected error creating catalog: {str(e)}")
            return False

def create_catalog_with_check(t_env, catalog_name: str, catalog_type: str, properties: dict) -> bool:
    """Create a catalog only after checking that it does not already exist."""
    # Collect the names of all existing catalogs
    catalogs = []
    try:
        # Newer API
        catalogs = t_env.list_catalogs()
    except AttributeError:
        # Fallback for older versions
        try:
            result = t_env.execute_sql("SHOW CATALOGS")
            catalogs = [row[0] for row in result.collect()]
        except Exception as e:
            logger.error(f"Failed to list catalogs: {str(e)}")
            return False

    # Create the catalog only if it does not exist yet
    if catalog_name not in catalogs:
        ddl = f"""
        CREATE CATALOG {catalog_name} WITH (
            'type' = '{catalog_type}',
            {format_properties(properties)}
        )
        """
        try:
            t_env.execute_sql(ddl)
            logger.info(f"Created catalog {catalog_name}")
            return True
        except Exception as e:
            logger.error(f"Failed to create catalog {catalog_name}: {str(e)}")
            return False
    else:
        logger.info(f"Catalog {catalog_name} already exists")
        return True
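# Caveat: between listing catalogs above and executing CREATE CATALOG there is
# a small race window; a catalog created concurrently by another session would
# surface here as a creation error rather than a clean skip. If that matters
# in your deployment, treat errors containing "already exists" as success, as
# safe_execute_sql() below does for tables and databases.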
def safe_execute_sql(t_env, sql, object_type="statement"):
    """Execute a SQL statement, tolerating objects that already exist."""
    try:
        t_env.execute_sql(sql)
        logger.info(f"Successfully executed: {sql[:60]}...")
        return True
    except Exception as e:
        if "already exists" in str(e).lower():
            logger.warning(f"Object already exists, skipping creation: {e}")
            return True
        else:
            logger.error(f"Error executing {object_type}: {sql}\n{str(e)}")
            return False

def create_database_if_not_exists(t_env, catalog, db_name):
    """Create a database in the given catalog if it does not exist yet."""
    t_env.use_catalog(catalog)

    # List the existing databases
    dbs = []
    try:
        result = t_env.execute_sql("SHOW DATABASES")
        dbs = [row[0] for row in result.collect()]
    except Exception as e:
        logger.error(f"Failed to list databases: {str(e)}")
        return False

    if db_name not in dbs:
        create_db_sql = f"CREATE DATABASE {db_name}"
        if not safe_execute_sql(t_env, create_db_sql, "database"):
            return False
    else:
        logger.info(f"Database {db_name} already exists in catalog {catalog}")

    t_env.execute_sql(f"USE {db_name}")
    return True

def main():
    env = StreamExecutionEnvironment.get_execution_environment()
    env.set_parallelism(1)
    env.enable_checkpointing(5000)

    # Add the required connector JARs
    flink_home = os.getenv('FLINK_HOME', '/opt/flink')
    required_jars = [
        f"file://{flink_home}/lib/flink-connector-kafka-1.17.1.jar",
        f"file://{flink_home}/lib/flink-connector-jdbc-1.17.1.jar",
        f"file://{flink_home}/lib/flink-sql-connector-hive-3.1.2_2.12-1.16.3.jar",
        f"file://{flink_home}/lib/mysql-connector-java-8.0.28.jar",
        f"file://{flink_home}/lib/hive-exec-3.1.2.jar",
    ]
    for jar in required_jars:
        env.add_jars(jar)
        logger.info(f"Added JAR: {jar}")

    settings = EnvironmentSettings.new_instance().in_streaming_mode().build()
    t_env = StreamTableEnvironment.create(env, environment_settings=settings)

    # Register the UDF
    t_env.create_temporary_function("process_cdc_op", process_cdc_op)

    # 1. Create the Hive catalog (using the safe helper)
    hive_properties = {'hive-conf-dir': '/opt/hive/conf'}
    if not create_catalog_safely(t_env, "hive_catalog", "hive", hive_properties):
        logger.error("Failed to create Hive catalog")
        return

    # 2. Use the built-in default catalog (no creation needed)
    t_env.use_catalog("default_catalog")

    # 3. Create the default database if it does not exist
    if not create_database_if_not_exists(t_env, "default_catalog", "default_database"):
        logger.error("Failed to create default database")
        return

    # 4. Create the Kafka source table (safe variant)
    kafka_table_ddl = """
    CREATE TABLE IF NOT EXISTS kafka_user_meal (
        id STRING,
        review STRING,
        rating DOUBLE,
        review_time BIGINT,
        user_id STRING,
        meal_id STRING,
        op STRING,
        ts AS TO_TIMESTAMP(FROM_UNIXTIME(review_time)),
        WATERMARK FOR ts AS ts - INTERVAL '5' SECOND,
        PRIMARY KEY (id) NOT ENFORCED
    ) WITH (
        'connector' = 'kafka',
        'topic' = 'cleaned-user-meal-reviews',
        'properties.bootstrap.servers' = 'master:9092,slave01:9092,slave02:9092',
        'properties.group.id' = 'flink-cdc-group',
        'scan.startup.mode' = 'latest-offset',
        'format' = 'json',
        'json.ignore-parse-errors' = 'true'
    )
    """
    if not safe_execute_sql(t_env, kafka_table_ddl, "table"):
        logger.error("Failed to create Kafka table")
        return
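    # Caveat (version-dependent): some Flink releases reject a PRIMARY KEY
    # declaration on the plain 'kafka' connector and accept it only on
    # 'upsert-kafka'. If the DDL above fails with a primary-key error, drop
    # the PRIMARY KEY clause or switch the connector to 'upsert-kafka'.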
    # 5. Create the Hive database and table
    t_env.use_catalog("hive_catalog")
    if not create_database_if_not_exists(t_env, "hive_catalog", "user_meal"):
        return

    hive_table_ddl = """
    CREATE TABLE IF NOT EXISTS hive_user_meal_cdc (
        id STRING COMMENT 'unique id',
        review STRING COMMENT 'review content',
        rating DOUBLE COMMENT 'rating',
        review_time TIMESTAMP COMMENT 'review time',
        user_id STRING COMMENT 'user id',
        meal_id STRING COMMENT 'meal id',
        operation_type STRING COMMENT 'operation type',
        operation_ts TIMESTAMP COMMENT 'operation timestamp',
        op STRING COMMENT 'raw op code'
    ) PARTITIONED BY (op) STORED AS ORC
    TBLPROPERTIES (
        'sink.partition-commit.policy.kind' = 'metastore,success-file',
        'auto-compaction' = 'true'
    )
    """
    if not safe_execute_sql(t_env, hive_table_ddl, "table"):
        return

    # 6. Create the MySQL sink table (in the default catalog)
    t_env.use_catalog("default_catalog")
    t_env.execute_sql("USE default_database")

    mysql_table_ddl = """
    CREATE TABLE IF NOT EXISTS mysql_user_meal (
        id STRING PRIMARY KEY NOT ENFORCED COMMENT 'unique id',
        review STRING COMMENT 'review content',
        rating DOUBLE COMMENT 'rating',
        review_time TIMESTAMP(3) COMMENT 'review time',
        user_id STRING COMMENT 'user id',
        meal_id STRING COMMENT 'meal id',
        last_operation STRING COMMENT 'last operation type',
        update_time TIMESTAMP(3) COMMENT 'update time'
    ) WITH (
        'connector' = 'jdbc',
        'url' = 'jdbc:mysql://mysql-host:3306/user_meal',
        'table-name' = 'user_meal_reviews',
        'username' = 'root',
        'password' = '5266',
        'driver' = 'com.mysql.cj.jdbc.Driver',
        'sink.buffer-flush.max-rows' = '100',
        'sink.buffer-flush.interval' = '5s'
    )
    """
    if not safe_execute_sql(t_env, mysql_table_ddl, "table"):
        return

    # 7. Stream changes into Hive
    hive_insert_sql = """
    INSERT INTO hive_catalog.user_meal.hive_user_meal_cdc
    SELECT
        id,
        review,
        rating,
        TO_TIMESTAMP(FROM_UNIXTIME(review_time)) AS review_time,
        user_id,
        meal_id,
        process_cdc_op(op) AS operation_type,
        CURRENT_TIMESTAMP AS operation_ts,
        op
    FROM default_catalog.default_database.kafka_user_meal
    """
    if not safe_execute_sql(t_env, hive_insert_sql, "insert"):
        return

    # 8. Upsert the latest state per id into MySQL
    # (the deduplication subquery keeps only the newest row per id,
    # so the columns can be selected directly)
    mysql_insert_sql = """
    INSERT INTO mysql_user_meal
    SELECT
        id,
        review,
        rating,
        ts AS review_time,
        user_id,
        meal_id,
        process_cdc_op(op) AS last_operation,
        CURRENT_TIMESTAMP AS update_time
    FROM (
        SELECT *,
            ROW_NUMBER() OVER (PARTITION BY id ORDER BY ts DESC) AS row_num
        FROM default_catalog.default_database.kafka_user_meal
        WHERE op <> 'D'
    )
    WHERE row_num = 1
    """
    if not safe_execute_sql(t_env, mysql_insert_sql, "insert"):
        return

    # Each execute_sql() INSERT above submits its own Flink job;
    # env.execute() below only matters if DataStream operators were defined too.
    logger.info("Starting pipeline execution...")
    try:
        env.execute("Flink CDC to Hive and MySQL Pipeline")
        logger.info("Pipeline execution completed successfully")
    except Exception as e:
        logger.error(f"Pipeline execution failed: {str(e)}")

if __name__ == '__main__':
    main()
```

### Version Compatibility Strategy

1. **Detect the Flink version**:
   ```python
   try:
       from pyflink.version import __version__ as flink_version
   except ImportError:
       # The attribute location varies across releases; fall back to metadata
       from importlib.metadata import version
       flink_version = version("apache-flink")
   logger.info(f"Using PyFlink version: {flink_version}")
   ```

2. **Version-specific logic** (the cutoffs are approximate; verify them against the release notes of your exact version):
   ```python
   if flink_version.startswith("1.13"):
       # 1.13.x specific handling
       create_catalog_method = "legacy"
   elif flink_version.startswith("1.14") or flink_version.startswith("1.15"):
       # 1.14-1.15 specific handling
       create_catalog_method = "if_not_exists"
   else:
       # 1.16+ uses the newest syntax
       create_catalog_method = "if_not_exists"
   ```

3. **Automatic syntax selection**:
   ```python
   def get_create_catalog_ddl(catalog_name, catalog_type, properties, method="auto"):
       if method == "legacy" or (method == "auto" and flink_version.startswith("1.13")):
           return f"CREATE CATALOG {catalog_name} WITH ({format_properties(properties)})"
       else:
           return f"CREATE CATALOG IF NOT EXISTS {catalog_name} WITH ({format_properties(properties)})"
   ```

### Flink Versions and Syntax Support

| Flink version | `IF NOT EXISTS` support | `list_catalogs()` support | Recommended approach |
|---------------|-------------------------|---------------------------|----------------------|
| 1.13.x | ❌ not supported | ❌ not supported | use the `SHOW CATALOGS` command |
| 1.14.x | ⚠️ partial | ✅ supported | prefer `list_catalogs()` |
| 1.15.x | ✅ supported | ✅ supported | use `IF NOT EXISTS` |
| 1.16+ | ✅ supported | ✅ supported | use `IF NOT EXISTS` |

Treat this table as a rough guide only: the stack trace at the top of this post came from a 1.17-era deployment (note the 1.17.1 connector JARs) that still rejected the clause, so the runtime fallback in `create_catalog_safely` is the reliable mechanism regardless of version.

### Debugging Tips

1. **Verify the Flink version** using the detection snippet from the strategy section above, or `pip show apache-flink`.

2. **List all catalogs**:
   ```python
   # Newer API
   catalogs = t_env.list_catalogs()
   print("Catalogs (list_catalogs):", catalogs)

   # Older-version fallback
   result = t_env.execute_sql("SHOW CATALOGS")
   catalogs = [row[0] for row in result.collect()]
   print("Catalogs (SHOW CATALOGS):", catalogs)
   ```

3. **Test a simple DDL statement**:
   ```python
   # Simple table-creation test
   test_ddl = "CREATE TABLE IF NOT EXISTS test_table (id INT) WITH ('connector' = 'datagen')"
   t_env.execute_sql(test_ddl)
   ```

### Notes

1. **Default catalog**:
   - Flink always provides a built-in catalog named `default_catalog`
   - There is no need to create it, and attempting to do so fails

2. **Hive integration**:
   - Make sure the Hive Metastore service is running
   - Verify that `/opt/hive/conf` contains the correct configuration

3. **Permissions**:
   ```bash
   # Check Flink's access to the Hive metastore
   hive --service metastore -v

   # Verify network connectivity
   telnet <hive-metastore-host> 9083
   ```
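For the connectivity check in the notes above, a small self-contained probe can replace `telnet` (a sketch only: the host name is a placeholder, and 9083 is the metastore's usual Thrift port):

```python
import socket

def metastore_reachable(host: str = "hive-metastore-host",  # placeholder host
                        port: int = 9083, timeout: float = 3.0) -> bool:
    """Return True if a TCP connection to the Hive Metastore endpoint succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

if __name__ == "__main__":
    print("metastore reachable:", metastore_reachable())
```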
阅读全文

相关推荐

pyspark启动初始化2025-06-18 15:43:14,147 INFO conf.HiveConf: Found configuration file file:/D:/pyspark/hive/hive-3.1.1/conf/hive-site.xml Hive Session ID = ae15e233-5595-4035-9a63-90e6fef3164c 2025-06-18 15:43:15,369 INFO SessionState: Hive Session ID = ae15e233-5595-4035-9a63-90e6fef3164c Logging initialized using configuration in jar:file:/D:/pyspark/hive/hive-3.1.1/lib/hive-common-3.1.1.jar!/hive-log4j2.properties Async: true 2025-06-18 15:43:15,415 INFO SessionState: Logging initialized using configuration in jar:file:/D:/pyspark/hive/hive-3.1.1/lib/hive-common-3.1.1.jar!/hive-log4j2.properties Async: true 2025-06-18 15:43:16,270 INFO session.SessionState: Created HDFS directory: /tmp/hive/aaa/ae15e233-5595-4035-9a63-90e6fef3164c 2025-06-18 15:43:16,274 INFO session.SessionState: Created local directory: D:/pyspark/hive/hive-3.1.1/data/scratch/ae15e233-5595-4035-9a63-90e6fef3164c 2025-06-18 15:43:16,277 INFO session.SessionState: Created HDFS directory: /tmp/hive/aaa/ae15e233-5595-4035-9a63-90e6fef3164c/_tmp_space.db 2025-06-18 15:43:16,287 INFO conf.HiveConf: Using the default value passed in for log id: ae15e233-5595-4035-9a63-90e6fef3164c 2025-06-18 15:43:16,287 INFO session.SessionState: Updating thread name to ae15e233-5595-4035-9a63-90e6fef3164c main 2025-06-18 15:43:17,092 INFO metastore.HiveMetaStore: 0: Opening raw store with implementation class:org.apache.hadoop.hive.metastore.ObjectStore 2025-06-18 15:43:17,111 WARN metastore.ObjectStore: datanucleus.autoStartMechanismMode is set to unsupported value null . Setting it to value: ignored 2025-06-18 15:43:17,114 INFO metastore.ObjectStore: ObjectStore, initialize called 2025-06-18 15:43:17,116 INFO conf.MetastoreConf: Found configuration file file:/D:/pyspark/hive/hive-3.1.1/conf/hive-site.xml 2025-06-18 15:43:17,117 INFO conf.MetastoreConf: Unable to find config file hivemetastore-site.xml 2025-06-18 15:43:17,118 INFO conf.MetastoreConf: Found configuration file null 2025-06-18 15:43:17,119 INFO conf.MetastoreConf: Unable to find config file metastore-site.xml 2025-06-18 15:43:17,119 INFO conf.MetastoreConf: Found configuration file null 2025-06-18 15:43:17,256 INFO DataNucleus.Persistence: Property datanucleus.cache.level2 unknown - will be ignored 2025-06-18 15:43:17,450 INFO hikari.HikariDataSource: HikariPool-1 - Starting... 2025-06-18 15:43:17,626 INFO hikari.HikariDataSource: HikariPool-1 - Start completed. 2025-06-18 15:43:17,679 INFO hikari.HikariDataSource: HikariPool-2 - Starting... 2025-06-18 15:43:17,682 INFO hikari.HikariDataSource: HikariPool-2 - Start completed. 2025-06-18 15:43:17,799 INFO metastore.ObjectStore: Setting MetaStore object pin classes with hive.metastore.cache.pinobjtypes="Table,StorageDescriptor,SerDeInfo,Partition,Database,Type,FieldSchema,Order" 2025-06-18 15:43:17,898 INFO metastore.MetaStoreDirectSql: Using direct SQL, underlying DB is MYSQL 2025-06-18 15:43:17,899 INFO metastore.ObjectStore: Initialized ObjectStore 2025-06-18 15:43:18,059 WARN DataNucleus.MetaData: Metadata has jdbc-type of null yet this is not valid. Ignored 2025-06-18 15:43:18,060 WARN DataNucleus.MetaData: Metadata has jdbc-type of null yet this is not valid. Ignored 2025-06-18 15:43:18,061 WARN DataNucleus.MetaData: Metadata has jdbc-type of null yet this is not valid. Ignored 2025-06-18 15:43:18,061 WARN DataNucleus.MetaData: Metadata has jdbc-type of null yet this is not valid. Ignored 2025-06-18 15:43:18,061 WARN DataNucleus.MetaData: Metadata has jdbc-type of null yet this is not valid. 
Ignored 2025-06-18 15:43:18,062 WARN DataNucleus.MetaData: Metadata has jdbc-type of null yet this is not valid. Ignored 2025-06-18 15:43:19,914 WARN DataNucleus.MetaData: Metadata has jdbc-type of null yet this is not valid. Ignored 2025-06-18 15:43:19,914 WARN DataNucleus.MetaData: Metadata has jdbc-type of null yet this is not valid. Ignored 2025-06-18 15:43:19,915 WARN DataNucleus.MetaData: Metadata has jdbc-type of null yet this is not valid. Ignored 2025-06-18 15:43:19,915 WARN DataNucleus.MetaData: Metadata has jdbc-type of null yet this is not valid. Ignored 2025-06-18 15:43:19,916 WARN DataNucleus.MetaData: Metadata has jdbc-type of null yet this is not valid. Ignored 2025-06-18 15:43:19,917 WARN DataNucleus.MetaData: Metadata has jdbc-type of null yet this is not valid. Ignored 2025-06-18 15:43:23,624 INFO metastore.HiveMetaStore: Added admin role in metastore 2025-06-18 15:43:23,626 INFO metastore.HiveMetaStore: Added public role in metastore 2025-06-18 15:43:24,075 INFO metastore.HiveMetaStore: No user is added in admin role, since config is empty 2025-06-18 15:43:24,231 INFO metastore.RetryingMetaStoreClient: RetryingMetaStoreClient proxy=class org.apache.hadoop.hive.ql.metadata.SessionHiveMetaStoreClient ugi=aaa (auth:SIMPLE) retries=1 delay=1 lifetime=0 2025-06-18 15:43:24,248 INFO metastore.HiveMetaStore: 0: get_all_functions 2025-06-18 15:43:24,250 INFO HiveMetaStore.audit: ugi=aaa ip=unknown-ip-addr cmd=get_all_functions Hive-on-MR is deprecated in Hive 2 and may not be available in the future versions. Consider using a different execution engine (i.e. spark, tez) or using Hive 1.X releases.Hive Session ID = 69d15420-0a4b-4d2a-934e-e5662db4697f 2025-06-18 15:43:24,766 INFO SessionState: Hive Session ID = 69d15420-0a4b-4d2a-934e-e5662db4697f 2025-06-18 15:43:24,767 INFO CliDriver: Hive-on-MR is deprecated in Hive 2 and may not be available in the future versions. Consider using a different execution engine (i.e. spark, tez) or using Hive 1.X releases. 
2025-06-18 15:43:24,775 INFO session.SessionState: Created HDFS directory: /tmp/hive/aaa/69d15420-0a4b-4d2a-934e-e5662db4697f 2025-06-18 15:43:24,777 INFO session.SessionState: Created local directory: D:/pyspark/hive/hive-3.1.1/data/scratch/69d15420-0a4b-4d2a-934e-e5662db4697f 2025-06-18 15:43:24,779 INFO session.SessionState: Created HDFS directory: /tmp/hive/aaa/69d15420-0a4b-4d2a-934e-e5662db4697f/_tmp_space.db 2025-06-18 15:43:24,780 INFO metastore.HiveMetaStore: 1: get_databases: @hive# 2025-06-18 15:43:24,780 INFO HiveMetaStore.audit: ugi=aaa ip=unknown-ip-addr cmd=get_databases: @hive# 2025-06-18 15:43:24,781 INFO metastore.HiveMetaStore: 1: Opening raw store with implementation class:org.apache.hadoop.hive.metastore.ObjectStore 2025-06-18 15:43:24,781 INFO metastore.ObjectStore: ObjectStore, initialize called 2025-06-18 15:43:24,786 INFO metastore.MetaStoreDirectSql: Using direct SQL, underlying DB is MYSQL 2025-06-18 15:43:24,786 INFO metastore.ObjectStore: Initialized ObjectStore 2025-06-18 15:43:24,791 INFO metastore.HiveMetaStore: 1: get_tables_by_type: db=@hive#bclcredits pat=.*,type=MATERIALIZED_VIEW 2025-06-18 15:43:24,791 INFO HiveMetaStore.audit: ugi=aaa ip=unknown-ip-addr cmd=get_tables_by_type: db=@hive#bclcredits pat=.*,type=MATERIALIZED_VIEW 2025-06-18 15:43:24,796 INFO metastore.HiveMetaStore: 1: get_multi_table : db=bclcredits tbls= 2025-06-18 15:43:24,796 INFO HiveMetaStore.audit: ugi=aaa ip=unknown-ip-addr cmd=get_multi_table : db=bclcredits tbls= 2025-06-18 15:43:24,798 INFO metastore.HiveMetaStore: 1: get_tables_by_type: db=@hive#default pat=.*,type=MATERIALIZED_VIEW 2025-06-18 15:43:24,798 INFO HiveMetaStore.audit: ugi=aaa ip=unknown-ip-addr cmd=get_tables_by_type: db=@hive#default pat=.*,type=MATERIALIZED_VIEW 2025-06-18 15:43:24,799 INFO metastore.HiveMetaStore: 1: get_multi_table : db=default tbls= 2025-06-18 15:43:24,799 INFO HiveMetaStore.audit: ugi=aaa ip=unknown-ip-addr cmd=get_multi_table : db=default tbls= 2025-06-18 15:43:24,800 INFO metadata.HiveMaterializedViewsRegistry: Materialized views registry has been initialized

[root@hadoop01 apache-hive-3.1.3-bin]# $HIVE_HOME/bin/hive which: no hbase in (/export/servers/hadoop-3.3.5/bin::/export/servers/apache-hive-3.1.3-bin/bin:/export/servers/flume-1.9.0/bin::/export/servers/apache-hive-3.1.3-bin/bin:/export/servers/flume-1.9.0/bin:/export/servers/flume-1.9.0/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/export/servers/jdk1.8.0_161/bin:/export/servers/hadoop-3.3.5/bin:/export/servers/hadoop-3.3.5/sbin:/export/servers/scala-2.12.10/bin:/root/bin:/export/servers/jdk1.8.0_161/bin:/export/servers/hadoop-3.3.5/bin:/export/servers/hadoop-3.3.5/sbin:/export/servers/scala-2.12.10/bin:/export/servers/jdk1.8.0_161/bin:/export/servers/hadoop-3.3.5/bin:/export/servers/hadoop-3.3.5/sbin:/export/servers/scala-2.12.10/bin:/root/bin) 2025-06-17 19:30:31,773 INFO conf.HiveConf: Found configuration file file:/export/servers/apache-hive-3.1.3-bin/conf/hive-site.xml Hive Session ID = b6ee71f0-5d43-4149-99c7-808d6c553bb8 2025-06-17 19:30:36,006 INFO SessionState: Hive Session ID = b6ee71f0-5d43-4149-99c7-808d6c553bb8 Logging initialized using configuration in jar:file:/export/servers/apache-hive-3.1.3-bin/lib/hive-common-3.1.3.jar!/hive-log4j2.properties Async: true 2025-06-17 19:30:36,195 INFO SessionState: Logging initialized using configuration in jar:file:/export/servers/apache-hive-3.1.3-bin/lib/hive-common-3.1.3.jar!/hive-log4j2.properties Async: true 2025-06-17 19:30:40,759 INFO session.SessionState: Created HDFS directory: /tmp/hive/root/b6ee71f0-5d43-4149-99c7-808d6c553bb8 2025-06-17 19:30:40,863 INFO session.SessionState: Created local directory: /tmp/root/b6ee71f0-5d43-4149-99c7-808d6c553bb8 2025-06-17 19:30:40,874 INFO session.SessionState: Created HDFS directory: /tmp/hive/root/b6ee71f0-5d43-4149-99c7-808d6c553bb8/_tmp_space.db 2025-06-17 19:30:40,916 INFO conf.HiveConf: Using the default value passed in for log id: b6ee71f0-5d43-4149-99c7-808d6c553bb8 2025-06-17 19:30:40,916 INFO session.SessionState: Updating thread name to b6ee71f0-5d43-4149-99c7-808d6c553bb8 main 2025-06-17 19:30:43,385 INFO metastore.HiveMetaStore: 0: Opening raw store with implementation class:org.apache.hadoop.hive.metastore.ObjectStore 2025-06-17 19:30:43,505 WARN metastore.ObjectStore: datanucleus.autoStartMechanismMode is set to unsupported value null . Setting it to value: ignored 2025-06-17 19:30:43,521 INFO metastore.ObjectStore: ObjectStore, initialize called 2025-06-17 19:30:43,523 INFO conf.MetastoreConf: Found configuration file file:/export/servers/apache-hive-3.1.3-bin/conf/hive-site.xml 2025-06-17 19:30:43,526 INFO conf.MetastoreConf: Unable to find config file hivemetastore-site.xml 2025-06-17 19:30:43,526 INFO conf.MetastoreConf: Found configuration file null 2025-06-17 19:30:43,528 INFO conf.MetastoreConf: Unable to find config file metastore-site.xml 2025-06-17 19:30:43,528 INFO conf.MetastoreConf: Found configuration file null 2025-06-17 19:30:44,060 INFO DataNucleus.Persistence: Property datanucleus.cache.level2 unknown - will be ignored 2025-06-17 19:30:44,701 INFO hikari.HikariDataSource: HikariPool-1 - Starting... 2025-06-17 19:30:45,564 INFO hikari.HikariDataSource: HikariPool-1 - Start completed. 2025-06-17 19:30:45,707 INFO hikari.HikariDataSource: HikariPool-2 - Starting... 2025-06-17 19:30:45,741 INFO hikari.HikariDataSource: HikariPool-2 - Start completed. 
2025-06-17 19:30:46,209 INFO metastore.ObjectStore: Setting MetaStore object pin classes with hive.metastore.cache.pinobjtypes="Table,StorageDescriptor,SerDeInfo,Partition,Database,Type,FieldSchema,Order" 2025-06-17 19:30:46,656 INFO metastore.MetaStoreDirectSql: Using direct SQL, underlying DB is MYSQL 2025-06-17 19:30:46,662 INFO metastore.ObjectStore: Initialized ObjectStore 2025-06-17 19:30:47,806 WARN DataNucleus.MetaData: Metadata has jdbc-type of null yet this is not valid. Ignored 2025-06-17 19:30:47,807 WARN DataNucleus.MetaData: Metadata has jdbc-type of null yet this is not valid. Ignored 2025-06-17 19:30:47,808 WARN DataNucleus.MetaData: Metadata has jdbc-type of null yet this is not valid. Ignored 2025-06-17 19:30:47,809 WARN DataNucleus.MetaData: Metadata has jdbc-type of null yet this is not valid. Ignored 2025-06-17 19:30:47,809 WARN DataNucleus.MetaData: Metadata has jdbc-type of null yet this is not valid. Ignored 2025-06-17 19:30:47,810 WARN DataNucleus.MetaData: Metadata has jdbc-type of null yet this is not valid. Ignored 2025-06-17 19:30:51,421 WARN DataNucleus.MetaData: Metadata has jdbc-type of null yet this is not valid. Ignored 2025-06-17 19:30:51,422 WARN DataNucleus.MetaData: Metadata has jdbc-type of null yet this is not valid. Ignored 2025-06-17 19:30:51,422 WARN DataNucleus.MetaData: Metadata has jdbc-type of null yet this is not valid. Ignored 2025-06-17 19:30:51,422 WARN DataNucleus.MetaData: Metadata has jdbc-type of null yet this is not valid. Ignored 2025-06-17 19:30:51,423 WARN DataNucleus.MetaData: Metadata has jdbc-type of null yet this is not valid. Ignored 2025-06-17 19:30:51,423 WARN DataNucleus.MetaData: Metadata has jdbc-type of null yet this is not valid. Ignored 2025-06-17 19:30:56,388 WARN metastore.ObjectStore: Version information not found in metastore. metastore.schema.verification is not enabled so recording the schema version 3.1.0 2025-06-17 19:30:56,388 WARN metastore.ObjectStore: setMetaStoreSchemaVersion called but recording version is disabled: version = 3.1.0, comment = Set by MetaStore [email protected] 2025-06-17 19:30:57,110 INFO metastore.HiveMetaStore: Added admin role in metastore 2025-06-17 19:30:57,116 INFO metastore.HiveMetaStore: Added public role in metastore 2025-06-17 19:30:57,279 INFO metastore.HiveMetaStore: No user is added in admin role, since config is empty 2025-06-17 19:30:57,779 INFO metastore.RetryingMetaStoreClient: RetryingMetaStoreClient proxy=class org.apache.hadoop.hive.ql.metadata.SessionHiveMetaStoreClient ugi=root (auth:SIMPLE) retries=1 delay=1 lifetime=0 2025-06-17 19:30:57,891 INFO metastore.HiveMetaStore: 0: get_all_functions 2025-06-17 19:30:57,923 INFO HiveMetaStore.audit: ugi=root ip=unknown-ip-addr cmd=get_all_functions Hive-on-MR is deprecated in Hive 2 and may not be available in the future versions. Consider using a different execution engine (i.e. spark, tez) or using Hive 1.X releases. 2025-06-17 19:30:58,126 INFO CliDriver: Hive-on-MR is deprecated in Hive 2 and may not be available in the future versions. Consider using a different execution engine (i.e. spark, tez) or using Hive 1.X releases. 
Hive Session ID = 06c85488-8547-4e38-a0d1-5e386cd373f1 2025-06-17 19:30:58,131 INFO SessionState: Hive Session ID = 06c85488-8547-4e38-a0d1-5e386cd373f1 2025-06-17 19:30:58,173 INFO session.SessionState: Created HDFS directory: /tmp/hive/root/06c85488-8547-4e38-a0d1-5e386cd373f1 2025-06-17 19:30:58,195 INFO session.SessionState: Created local directory: /tmp/root/06c85488-8547-4e38-a0d1-5e386cd373f1 2025-06-17 19:30:58,203 INFO session.SessionState: Created HDFS directory: /tmp/hive/root/06c85488-8547-4e38-a0d1-5e386cd373f1/_tmp_space.db 2025-06-17 19:30:58,210 INFO metastore.HiveMetaStore: 1: get_databases: @hive# 2025-06-17 19:30:58,211 INFO HiveMetaStore.audit: ugi=root ip=unknown-ip-addr cmd=get_databases: @hive# 2025-06-17 19:30:58,213 INFO metastore.HiveMetaStore: 1: Opening raw store with implementation class:org.apache.hadoop.hive.metastore.ObjectStore 2025-06-17 19:30:58,217 INFO metastore.ObjectStore: ObjectStore, initialize called 2025-06-17 19:30:58,253 INFO metastore.MetaStoreDirectSql: Using direct SQL, underlying DB is MYSQL 2025-06-17 19:30:58,255 INFO metastore.ObjectStore: Initialized ObjectStore 2025-06-17 19:30:58,272 INFO metastore.HiveMetaStore: 1: get_tables_by_type: db=@hive#db_hive1 pat=.*,type=MATERIALIZED_VIEW 2025-06-17 19:30:58,272 INFO HiveMetaStore.audit: ugi=root ip=unknown-ip-addr cmd=get_tables_by_type: db=@hive#db_hive1 pat=.*,type=MATERIALIZED_VIEW 2025-06-17 19:30:58,305 INFO metastore.HiveMetaStore: 1: get_multi_table : db=db_hive1 tbls= 2025-06-17 19:30:58,305 INFO HiveMetaStore.audit: ugi=root ip=unknown-ip-addr cmd=get_multi_table : db=db_hive1 tbls= 2025-06-17 19:30:58,310 INFO metastore.HiveMetaStore: 1: get_tables_by_type: db=@hive#default pat=.*,type=MATERIALIZED_VIEW 2025-06-17 19:30:58,310 INFO HiveMetaStore.audit: ugi=root ip=unknown-ip-addr cmd=get_tables_by_type: db=@hive#default pat=.*,type=MATERIALIZED_VIEW 2025-06-17 19:30:58,327 INFO metastore.HiveMetaStore: 1: get_multi_table : db=default tbls= 2025-06-17 19:30:58,327 INFO HiveMetaStore.audit: ugi=root ip=unknown-ip-addr cmd=get_multi_table : db=default tbls= 2025-06-17 19:30:58,327 INFO metastore.HiveMetaStore: 1: get_tables_by_type: db=@hive#itcast_ods pat=.*,type=MATERIALIZED_VIEW 2025-06-17 19:30:58,327 INFO HiveMetaStore.audit: ugi=root ip=unknown-ip-addr cmd=get_tables_by_type: db=@hive#itcast_ods pat=.*,type=MATERIALIZED_VIEW 2025-06-17 19:30:58,332 INFO metastore.HiveMetaStore: 1: get_multi_table : db=itcast_ods tbls= 2025-06-17 19:30:58,332 INFO HiveMetaStore.audit: ugi=root ip=unknown-ip-addr cmd=get_multi_table : db=itcast_ods tbls= 2025-06-17 19:30:58,332 INFO metadata.HiveMaterializedViewsRegistry: Materialized views registry has been initialized hive> SHOW DATABASES; 2025-06-17 19:31:11,344 INFO conf.HiveConf: Using the default value passed in for log id: b6ee71f0-5d43-4149-99c7-808d6c553bb8 2025-06-17 19:31:11,624 INFO ql.Driver: Compiling command(queryId=root_20250617193111_702231fb-8543-45f5-b003-86a49e9c4298): SHOW DATABASES 2025-06-17 19:31:12,626 INFO ql.Driver: Concurrency mode is disabled, not creating a lock manager 2025-06-17 19:31:12,673 INFO ql.Driver: Semantic Analysis Completed (retrial = false) 2025-06-17 19:31:12,819 INFO ql.Driver: Returning Hive schema: Schema(fieldSchemas:[FieldSchema(name:database_name, type:string, comment:from deserializer)], properties:null) 2025-06-17 19:31:12,982 INFO exec.ListSinkOperator: Initializing operator LIST_SINK[0] 2025-06-17 19:31:13,009 INFO ql.Driver: Completed compiling 
command(queryId=root_20250617193111_702231fb-8543-45f5-b003-86a49e9c4298); Time taken: 1.484 seconds 2025-06-17 19:31:13,011 INFO reexec.ReExecDriver: Execution #1 of query 2025-06-17 19:31:13,011 INFO ql.Driver: Concurrency mode is disabled, not creating a lock manager 2025-06-17 19:31:13,012 INFO ql.Driver: Executing command(queryId=root_20250617193111_702231fb-8543-45f5-b003-86a49e9c4298): SHOW DATABASES 2025-06-17 19:31:13,034 INFO ql.Driver: Starting task [Stage-0:DDL] in serial mode 2025-06-17 19:31:13,041 INFO metastore.HiveMetaStore: 0: get_databases: @hive# 2025-06-17 19:31:13,041 INFO HiveMetaStore.audit: ugi=root ip=unknown-ip-addr cmd=get_databases: @hive# 2025-06-17 19:31:13,057 INFO exec.DDLTask: results : 3 2025-06-17 19:31:13,244 INFO ql.Driver: Completed executing command(queryId=root_20250617193111_702231fb-8543-45f5-b003-86a49e9c4298); Time taken: 0.233 seconds OK 2025-06-17 19:31:13,245 INFO ql.Driver: OK 2025-06-17 19:31:13,245 INFO ql.Driver: Concurrency mode is disabled, not creating a lock manager 2025-06-17 19:31:13,276 INFO Configuration.deprecation: mapred.input.dir is deprecated. Instead, use mapreduce.input.fileinputformat.inputdir 2025-06-17 19:31:13,381 INFO mapred.FileInputFormat: Total input files to process : 1 2025-06-17 19:31:13,458 INFO exec.ListSinkOperator: RECORDS_OUT_INTERMEDIATE:0, RECORDS_OUT_OPERATOR_LIST_SINK_0:3, db_hive1 default itcast_ods Time taken: 1.725 seconds, Fetched: 3 row(s) 2025-06-17 19:31:13,479 INFO CliDriver: Time taken: 1.725 seconds, Fetched: 3 row(s) 2025-06-17 19:31:13,480 INFO conf.HiveConf: Using the default value passed in for log id: b6ee71f0-5d43-4149-99c7-808d6c553bb8 2025-06-17 19:31:13,480 INFO session.SessionState: Resetting thread name to main hive>

[root@hadoop01 apache-hive-3.1.3-bin]# bin/hive which: no hbase in (/export/servers/hadoop-3.3.5/bin::/export/servers/apache-hive-3.1.3-bin/bin:/export/servers/flume-1.9.0/bin::/export/servers/apache-hive-3.1.3-bin/bin:/export/servers/flume-1.9.0/bin:/export/servers/flume-1.9.0/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/export/servers/jdk1.8.0_161/bin:/export/servers/hadoop-3.3.5/bin:/export/servers/hadoop-3.3.5/sbin:/export/servers/scala-2.12.10/bin:/root/bin:/export/servers/jdk1.8.0_161/bin:/export/servers/hadoop-3.3.5/bin:/export/servers/hadoop-3.3.5/sbin:/export/servers/scala-2.12.10/bin:/export/servers/jdk1.8.0_161/bin:/export/servers/hadoop-3.3.5/bin:/export/servers/hadoop-3.3.5/sbin:/export/servers/scala-2.12.10/bin:/root/bin) 2025-06-17 19:17:36,429 INFO conf.HiveConf: Found configuration file file:/export/servers/apache-hive-3.1.3-bin/conf/hive-site.xml Hive Session ID = 174edc05-5da2-4b28-a455-d29ac7bdd8fc 2025-06-17 19:17:39,510 INFO SessionState: Hive Session ID = 174edc05-5da2-4b28-a455-d29ac7bdd8fc Logging initialized using configuration in jar:file:/export/servers/apache-hive-3.1.3-bin/lib/hive-common-3.1.3.jar!/hive-log4j2.properties Async: true 2025-06-17 19:17:39,639 INFO SessionState: Logging initialized using configuration in jar:file:/export/servers/apache-hive-3.1.3-bin/lib/hive-common-3.1.3.jar!/hive-log4j2.properties Async: true 2025-06-17 19:17:42,218 INFO session.SessionState: Created HDFS directory: /tmp/hive/root/174edc05-5da2-4b28-a455-d29ac7bdd8fc 2025-06-17 19:17:42,262 INFO session.SessionState: Created local directory: /tmp/root/174edc05-5da2-4b28-a455-d29ac7bdd8fc 2025-06-17 19:17:42,274 INFO session.SessionState: Created HDFS directory: /tmp/hive/root/174edc05-5da2-4b28-a455-d29ac7bdd8fc/_tmp_space.db 2025-06-17 19:17:42,304 INFO conf.HiveConf: Using the default value passed in for log id: 174edc05-5da2-4b28-a455-d29ac7bdd8fc 2025-06-17 19:17:42,304 INFO session.SessionState: Updating thread name to 174edc05-5da2-4b28-a455-d29ac7bdd8fc main 2025-06-17 19:17:44,440 INFO metastore.HiveMetaStore: 0: Opening raw store with implementation class:org.apache.hadoop.hive.metastore.ObjectStore 2025-06-17 19:17:44,512 WARN metastore.ObjectStore: datanucleus.autoStartMechanismMode is set to unsupported value null . Setting it to value: ignored 2025-06-17 19:17:44,530 INFO metastore.ObjectStore: ObjectStore, initialize called 2025-06-17 19:17:44,532 INFO conf.MetastoreConf: Found configuration file file:/export/servers/apache-hive-3.1.3-bin/conf/hive-site.xml 2025-06-17 19:17:44,534 INFO conf.MetastoreConf: Unable to find config file hivemetastore-site.xml 2025-06-17 19:17:44,534 INFO conf.MetastoreConf: Found configuration file null 2025-06-17 19:17:44,535 INFO conf.MetastoreConf: Unable to find config file metastore-site.xml 2025-06-17 19:17:44,535 INFO conf.MetastoreConf: Found configuration file null 2025-06-17 19:17:44,958 INFO DataNucleus.Persistence: Property datanucleus.cache.level2 unknown - will be ignored 2025-06-17 19:17:45,507 INFO hikari.HikariDataSource: HikariPool-1 - Starting... 2025-06-17 19:17:46,120 INFO hikari.HikariDataSource: HikariPool-1 - Start completed. 2025-06-17 19:17:46,244 INFO hikari.HikariDataSource: HikariPool-2 - Starting... 2025-06-17 19:17:46,262 INFO hikari.HikariDataSource: HikariPool-2 - Start completed. 
2025-06-17 19:17:46,944 INFO metastore.ObjectStore: Setting MetaStore object pin classes with hive.metastore.cache.pinobjtypes="Table,StorageDescriptor,SerDeInfo,Partition,Database,Type,FieldSchema,Order" 2025-06-17 19:17:47,177 INFO metastore.MetaStoreDirectSql: Using direct SQL, underlying DB is MYSQL 2025-06-17 19:17:47,179 INFO metastore.ObjectStore: Initialized ObjectStore 2025-06-17 19:17:47,661 WARN DataNucleus.MetaData: Metadata has jdbc-type of null yet this is not valid. Ignored 2025-06-17 19:17:47,662 WARN DataNucleus.MetaData: Metadata has jdbc-type of null yet this is not valid. Ignored 2025-06-17 19:17:47,664 WARN DataNucleus.MetaData: Metadata has jdbc-type of null yet this is not valid. Ignored 2025-06-17 19:17:47,664 WARN DataNucleus.MetaData: Metadata has jdbc-type of null yet this is not valid. Ignored 2025-06-17 19:17:47,665 WARN DataNucleus.MetaData: Metadata has jdbc-type of null yet this is not valid. Ignored 2025-06-17 19:17:47,665 WARN DataNucleus.MetaData: Metadata has jdbc-type of null yet this is not valid. Ignored 2025-06-17 19:17:51,340 WARN DataNucleus.MetaData: Metadata has jdbc-type of null yet this is not valid. Ignored 2025-06-17 19:17:51,341 WARN DataNucleus.MetaData: Metadata has jdbc-type of null yet this is not valid. Ignored 2025-06-17 19:17:51,342 WARN DataNucleus.MetaData: Metadata has jdbc-type of null yet this is not valid. Ignored 2025-06-17 19:17:51,342 WARN DataNucleus.MetaData: Metadata has jdbc-type of null yet this is not valid. Ignored 2025-06-17 19:17:51,342 WARN DataNucleus.MetaData: Metadata has jdbc-type of null yet this is not valid. Ignored 2025-06-17 19:17:51,342 WARN DataNucleus.MetaData: Metadata has jdbc-type of null yet this is not valid. Ignored 2025-06-17 19:17:56,088 WARN metastore.ObjectStore: Version information not found in metastore. metastore.schema.verification is not enabled so recording the schema version 3.1.0 2025-06-17 19:17:56,088 WARN metastore.ObjectStore: setMetaStoreSchemaVersion called but recording version is disabled: version = 3.1.0, comment = Set by MetaStore [email protected] 2025-06-17 19:17:56,489 INFO metastore.HiveMetaStore: Added admin role in metastore 2025-06-17 19:17:56,497 INFO metastore.HiveMetaStore: Added public role in metastore 2025-06-17 19:17:56,607 INFO metastore.HiveMetaStore: No user is added in admin role, since config is empty 2025-06-17 19:17:56,969 INFO metastore.RetryingMetaStoreClient: RetryingMetaStoreClient proxy=class org.apache.hadoop.hive.ql.metadata.SessionHiveMetaStoreClient ugi=root (auth:SIMPLE) retries=1 delay=1 lifetime=0 2025-06-17 19:17:57,003 INFO metastore.HiveMetaStore: 0: get_all_functions 2025-06-17 19:17:57,011 INFO HiveMetaStore.audit: ugi=root ip=unknown-ip-addr cmd=get_all_functions Hive-on-MR is deprecated in Hive 2 and may not be available in the future versions. Consider using a different execution engine (i.e. spark, tez) or using Hive 1.X releases. 2025-06-17 19:17:57,170 INFO CliDriver: Hive-on-MR is deprecated in Hive 2 and may not be available in the future versions. Consider using a different execution engine (i.e. spark, tez) or using Hive 1.X releases. 
Hive Session ID = 2804a115-a38a-441b-9d42-6abad28c99f8 2025-06-17 19:17:57,170 INFO SessionState: Hive Session ID = 2804a115-a38a-441b-9d42-6abad28c99f8 2025-06-17 19:17:57,216 INFO session.SessionState: Created HDFS directory: /tmp/hive/root/2804a115-a38a-441b-9d42-6abad28c99f8 2025-06-17 19:17:57,222 INFO session.SessionState: Created local directory: /tmp/root/2804a115-a38a-441b-9d42-6abad28c99f8 2025-06-17 19:17:57,228 INFO session.SessionState: Created HDFS directory: /tmp/hive/root/2804a115-a38a-441b-9d42-6abad28c99f8/_tmp_space.db 2025-06-17 19:17:57,231 INFO metastore.HiveMetaStore: 1: get_databases: @hive# 2025-06-17 19:17:57,231 INFO HiveMetaStore.audit: ugi=root ip=unknown-ip-addr cmd=get_databases: @hive# 2025-06-17 19:17:57,233 INFO metastore.HiveMetaStore: 1: Opening raw store with implementation class:org.apache.hadoop.hive.metastore.ObjectStore 2025-06-17 19:17:57,239 INFO metastore.ObjectStore: ObjectStore, initialize called 2025-06-17 19:17:57,272 INFO metastore.MetaStoreDirectSql: Using direct SQL, underlying DB is MYSQL 2025-06-17 19:17:57,272 INFO metastore.ObjectStore: Initialized ObjectStore 2025-06-17 19:17:57,288 INFO metastore.HiveMetaStore: 1: get_tables_by_type: db=@hive#db_hive1 pat=.*,type=MATERIALIZED_VIEW 2025-06-17 19:17:57,288 INFO HiveMetaStore.audit: ugi=root ip=unknown-ip-addr cmd=get_tables_by_type: db=@hive#db_hive1 pat=.*,type=MATERIALIZED_VIEW 2025-06-17 19:17:57,314 INFO metastore.HiveMetaStore: 1: get_multi_table : db=db_hive1 tbls= 2025-06-17 19:17:57,314 INFO HiveMetaStore.audit: ugi=root ip=unknown-ip-addr cmd=get_multi_table : db=db_hive1 tbls= 2025-06-17 19:17:57,316 INFO metastore.HiveMetaStore: 1: get_tables_by_type: db=@hive#default pat=.*,type=MATERIALIZED_VIEW 2025-06-17 19:17:57,316 INFO HiveMetaStore.audit: ugi=root ip=unknown-ip-addr cmd=get_tables_by_type: db=@hive#default pat=.*,type=MATERIALIZED_VIEW 2025-06-17 19:17:57,321 INFO metastore.HiveMetaStore: 1: get_multi_table : db=default tbls= 2025-06-17 19:17:57,321 INFO HiveMetaStore.audit: ugi=root ip=unknown-ip-addr cmd=get_multi_table : db=default tbls= 2025-06-17 19:17:57,322 INFO metastore.HiveMetaStore: 1: get_tables_by_type: db=@hive#itcast_ods pat=.*,type=MATERIALIZED_VIEW 2025-06-17 19:17:57,322 INFO HiveMetaStore.audit: ugi=root ip=unknown-ip-addr cmd=get_tables_by_type: db=@hive#itcast_ods pat=.*,type=MATERIALIZED_VIEW 2025-06-17 19:17:57,326 INFO metastore.HiveMetaStore: 1: get_multi_table : db=itcast_ods tbls= 2025-06-17 19:17:57,326 INFO HiveMetaStore.audit: ugi=root ip=unknown-ip-addr cmd=get_multi_table : db=itcast_ods tbls= 2025-06-17 19:17:57,326 INFO metadata.HiveMaterializedViewsRegistry: Materialized views registry has been initialized hive>

[root@hadoop01 apache-hive-3.1.3-bin]# bin/hive
which: no hbase in (:/export/servers/apache-hive-3.1.3-bin/bin:/export/servers/flume-1.9.0/bin:/export/servers/hadoop-3.3.5/bin::/export/servers/apache-hive-3.1.3-bin/bin:/export/servers/flume-1.9.0/bin::/export/servers/apache-hive-3.1.3-bin/bin:/export/servers/flume-1.9.0/bin:/export/servers/flume-1.9.0/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/export/servers/jdk1.8.0_161/bin:/export/servers/hadoop-3.3.5/bin:/export/servers/hadoop-3.3.5/sbin:/export/servers/scala-2.12.10/bin:/root/bin:/export/servers/jdk1.8.0_161/bin:/export/servers/hadoop-3.3.5/bin:/export/servers/hadoop-3.3.5/sbin:/export/servers/scala-2.12.10/bin:/export/servers/jdk1.8.0_161/bin:/export/servers/hadoop-3.3.5/bin:/export/servers/hadoop-3.3.5/sbin:/export/servers/scala-2.12.10/bin:/root/bin:/export/servers/jdk1.8.0_161/bin:/export/servers/hadoop-3.3.5/bin:/export/servers/hadoop-3.3.5/sbin:/export/servers/scala-2.12.10/bin:/export/servers/sqoop/bin:)
2025-06-17 18:31:23,734 INFO conf.HiveConf: Found configuration file file:/export/servers/apache-hive-3.1.3-bin/conf/hive-site.xml
Hive Session ID = 1bb412dc-6394-489e-80ca-943bacb068f6
2025-06-17 18:31:29,776 INFO SessionState: Hive Session ID = 1bb412dc-6394-489e-80ca-943bacb068f6
Logging initialized using configuration in jar:file:/export/servers/apache-hive-3.1.3-bin/lib/hive-common-3.1.3.jar!/hive-log4j2.properties Async: true
2025-06-17 18:31:29,920 INFO SessionState: Logging initialized using configuration in jar:file:/export/servers/apache-hive-3.1.3-bin/lib/hive-common-3.1.3.jar!/hive-log4j2.properties Async: true
2025-06-17 18:31:33,217 INFO session.SessionState: Created HDFS directory: /tmp/hive/root/1bb412dc-6394-489e-80ca-943bacb068f6
2025-06-17 18:31:33,267 INFO session.SessionState: Created local directory: /tmp/root/1bb412dc-6394-489e-80ca-943bacb068f6
2025-06-17 18:31:33,280 INFO session.SessionState: Created HDFS directory: /tmp/hive/root/1bb412dc-6394-489e-80ca-943bacb068f6/_tmp_space.db
2025-06-17 18:31:33,323 INFO conf.HiveConf: Using the default value passed in for log id: 1bb412dc-6394-489e-80ca-943bacb068f6
2025-06-17 18:31:33,323 INFO session.SessionState: Updating thread name to 1bb412dc-6394-489e-80ca-943bacb068f6 main
2025-06-17 18:31:35,971 INFO metastore.HiveMetaStore: 0: Opening raw store with implementation class:org.apache.hadoop.hive.metastore.ObjectStore
2025-06-17 18:31:36,007 WARN metastore.ObjectStore: datanucleus.autoStartMechanismMode is set to unsupported value null . Setting it to value: ignored
2025-06-17 18:31:36,017 INFO metastore.ObjectStore: ObjectStore, initialize called
2025-06-17 18:31:36,019 INFO conf.MetastoreConf: Found configuration file file:/export/servers/apache-hive-3.1.3-bin/conf/hive-site.xml
2025-06-17 18:31:36,021 INFO conf.MetastoreConf: Unable to find config file hivemetastore-site.xml
2025-06-17 18:31:36,021 INFO conf.MetastoreConf: Found configuration file null
2025-06-17 18:31:36,023 INFO conf.MetastoreConf: Unable to find config file metastore-site.xml
2025-06-17 18:31:36,023 INFO conf.MetastoreConf: Found configuration file null
2025-06-17 18:31:36,357 INFO DataNucleus.Persistence: Property datanucleus.cache.level2 unknown - will be ignored
2025-06-17 18:31:36,899 INFO hikari.HikariDataSource: HikariPool-1 - Starting...
2025-06-17 18:31:37,442 INFO hikari.HikariDataSource: HikariPool-1 - Start completed.
2025-06-17 18:31:37,532 INFO hikari.HikariDataSource: HikariPool-2 - Starting...
2025-06-17 18:31:37,674 INFO hikari.HikariDataSource: HikariPool-2 - Start completed.
2025-06-17 18:31:38,661 INFO metastore.ObjectStore: Setting MetaStore object pin classes with hive.metastore.cache.pinobjtypes="Table,StorageDescriptor,SerDeInfo,Partition,Database,Type,FieldSchema,Order"
2025-06-17 18:31:39,025 INFO metastore.MetaStoreDirectSql: Using direct SQL, underlying DB is MYSQL
2025-06-17 18:31:39,031 INFO metastore.ObjectStore: Initialized ObjectStore
2025-06-17 18:31:39,705 WARN DataNucleus.MetaData: Metadata has jdbc-type of null yet this is not valid. Ignored
2025-06-17 18:31:39,706 WARN DataNucleus.MetaData: Metadata has jdbc-type of null yet this is not valid. Ignored
2025-06-17 18:31:39,707 WARN DataNucleus.MetaData: Metadata has jdbc-type of null yet this is not valid. Ignored
2025-06-17 18:31:39,707 WARN DataNucleus.MetaData: Metadata has jdbc-type of null yet this is not valid. Ignored
2025-06-17 18:31:39,708 WARN DataNucleus.MetaData: Metadata has jdbc-type of null yet this is not valid. Ignored
2025-06-17 18:31:39,709 WARN DataNucleus.MetaData: Metadata has jdbc-type of null yet this is not valid. Ignored
2025-06-17 18:31:43,370 WARN DataNucleus.MetaData: Metadata has jdbc-type of null yet this is not valid. Ignored
2025-06-17 18:31:43,371 WARN DataNucleus.MetaData: Metadata has jdbc-type of null yet this is not valid. Ignored
2025-06-17 18:31:43,372 WARN DataNucleus.MetaData: Metadata has jdbc-type of null yet this is not valid. Ignored
2025-06-17 18:31:43,372 WARN DataNucleus.MetaData: Metadata has jdbc-type of null yet this is not valid. Ignored
2025-06-17 18:31:43,372 WARN DataNucleus.MetaData: Metadata has jdbc-type of null yet this is not valid. Ignored
2025-06-17 18:31:43,372 WARN DataNucleus.MetaData: Metadata has jdbc-type of null yet this is not valid. Ignored
2025-06-17 18:31:48,380 WARN metastore.ObjectStore: Version information not found in metastore. metastore.schema.verification is not enabled so recording the schema version 3.1.0
2025-06-17 18:31:48,380 WARN metastore.ObjectStore: setMetaStoreSchemaVersion called but recording version is disabled: version = 3.1.0, comment = Set by MetaStore [email protected]
2025-06-17 18:31:48,917 INFO metastore.HiveMetaStore: Added admin role in metastore
2025-06-17 18:31:48,922 INFO metastore.HiveMetaStore: Added public role in metastore
2025-06-17 18:31:49,028 INFO metastore.HiveMetaStore: No user is added in admin role, since config is empty
2025-06-17 18:31:49,426 INFO metastore.RetryingMetaStoreClient: RetryingMetaStoreClient proxy=class org.apache.hadoop.hive.ql.metadata.SessionHiveMetaStoreClient ugi=root (auth:SIMPLE) retries=1 delay=1 lifetime=0
2025-06-17 18:31:49,471 INFO metastore.HiveMetaStore: 0: get_all_functions
2025-06-17 18:31:49,474 INFO HiveMetaStore.audit: ugi=root ip=unknown-ip-addr cmd=get_all_functions
Hive-on-MR is deprecated in Hive 2 and may not be available in the future versions. Consider using a different execution engine (i.e. spark, tez) or using Hive 1.X releases.
2025-06-17 18:31:49,645 INFO CliDriver: Hive-on-MR is deprecated in Hive 2 and may not be available in the future versions. Consider using a different execution engine (i.e. spark, tez) or using Hive 1.X releases.
Hive Session ID = 4bd9c9da-8eb4-44a2-948e-faa5053d60f3
2025-06-17 18:31:49,645 INFO SessionState: Hive Session ID = 4bd9c9da-8eb4-44a2-948e-faa5053d60f3
2025-06-17 18:31:49,697 INFO session.SessionState: Created HDFS directory: /tmp/hive/root/4bd9c9da-8eb4-44a2-948e-faa5053d60f3
2025-06-17 18:31:49,703 INFO session.SessionState: Created local directory: /tmp/root/4bd9c9da-8eb4-44a2-948e-faa5053d60f3
2025-06-17 18:31:49,710 INFO session.SessionState: Created HDFS directory: /tmp/hive/root/4bd9c9da-8eb4-44a2-948e-faa5053d60f3/_tmp_space.db
2025-06-17 18:31:49,713 INFO metastore.HiveMetaStore: 1: get_databases: @hive#
2025-06-17 18:31:49,714 INFO HiveMetaStore.audit: ugi=root ip=unknown-ip-addr cmd=get_databases: @hive#
2025-06-17 18:31:49,716 INFO metastore.HiveMetaStore: 1: Opening raw store with implementation class:org.apache.hadoop.hive.metastore.ObjectStore
2025-06-17 18:31:49,720 INFO metastore.ObjectStore: ObjectStore, initialize called
2025-06-17 18:31:49,755 INFO metastore.MetaStoreDirectSql: Using direct SQL, underlying DB is MYSQL
2025-06-17 18:31:49,756 INFO metastore.ObjectStore: Initialized ObjectStore
2025-06-17 18:31:49,768 INFO metastore.HiveMetaStore: 1: get_tables_by_type: db=@hive#db_hive1 pat=.*,type=MATERIALIZED_VIEW
2025-06-17 18:31:49,769 INFO HiveMetaStore.audit: ugi=root ip=unknown-ip-addr cmd=get_tables_by_type: db=@hive#db_hive1 pat=.*,type=MATERIALIZED_VIEW
2025-06-17 18:31:49,789 INFO metastore.HiveMetaStore: 1: get_multi_table : db=db_hive1 tbls=
2025-06-17 18:31:49,793 INFO HiveMetaStore.audit: ugi=root ip=unknown-ip-addr cmd=get_multi_table : db=db_hive1 tbls=
2025-06-17 18:31:49,795 INFO metastore.HiveMetaStore: 1: get_tables_by_type: db=@hive#default pat=.*,type=MATERIALIZED_VIEW
2025-06-17 18:31:49,796 INFO HiveMetaStore.audit: ugi=root ip=unknown-ip-addr cmd=get_tables_by_type: db=@hive#default pat=.*,type=MATERIALIZED_VIEW
2025-06-17 18:31:49,803 INFO metastore.HiveMetaStore: 1: get_multi_table : db=default tbls=
2025-06-17 18:31:49,803 INFO HiveMetaStore.audit: ugi=root ip=unknown-ip-addr cmd=get_multi_table : db=default tbls=
2025-06-17 18:31:49,803 INFO metadata.HiveMaterializedViewsRegistry: Materialized views registry has been initialized
hive>
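A detail worth noticing before the next two launches: the "which: no hbase" line is only the Hive launcher probing for an optional HBase client and is harmless when HBase is not installed, but the PATH it echoes repeats the same /export/servers entries several times and even contains an empty "::" entry. That pattern usually means the profile script exporting PATH is being sourced on top of an already-extended PATH; deduplicating those exports removes this noise from every launch.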

[root@hadoop01 apache-hive-3.1.3-bin]# hive
which: no hbase in (/export/servers/hadoop-3.3.5/bin::/export/servers/apache-hive-3.1.3-bin/bin:/export/servers/flume-1.9.0/bin::/export/servers/apache-hive-3.1.3-bin/bin:/export/servers/flume-1.9.0/bin:/export/servers/flume-1.9.0/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/export/servers/jdk1.8.0_161/bin:/export/servers/hadoop-3.3.5/bin:/export/servers/hadoop-3.3.5/sbin:/export/servers/scala-2.12.10/bin:/root/bin:/export/servers/jdk1.8.0_161/bin:/export/servers/hadoop-3.3.5/bin:/export/servers/hadoop-3.3.5/sbin:/export/servers/scala-2.12.10/bin:/export/servers/jdk1.8.0_161/bin:/export/servers/hadoop-3.3.5/bin:/export/servers/hadoop-3.3.5/sbin:/export/servers/scala-2.12.10/bin:/root/bin)
2025-06-17 20:45:39,133 INFO conf.HiveConf: Found configuration file file:/export/servers/apache-hive-3.1.3-bin/conf/hive-site.xml
Hive Session ID = 7a465677-eec4-40ee-b6a2-5c7b638725a7
2025-06-17 20:45:43,011 INFO SessionState: Hive Session ID = 7a465677-eec4-40ee-b6a2-5c7b638725a7
Logging initialized using configuration in jar:file:/export/servers/apache-hive-3.1.3-bin/lib/hive-common-3.1.3.jar!/hive-log4j2.properties Async: true
2025-06-17 20:45:43,144 INFO SessionState: Logging initialized using configuration in jar:file:/export/servers/apache-hive-3.1.3-bin/lib/hive-common-3.1.3.jar!/hive-log4j2.properties Async: true
2025-06-17 20:45:45,757 INFO session.SessionState: Created HDFS directory: /tmp/hive/root/7a465677-eec4-40ee-b6a2-5c7b638725a7
2025-06-17 20:45:45,806 INFO session.SessionState: Created local directory: /tmp/root/7a465677-eec4-40ee-b6a2-5c7b638725a7
2025-06-17 20:45:45,820 INFO session.SessionState: Created HDFS directory: /tmp/hive/root/7a465677-eec4-40ee-b6a2-5c7b638725a7/_tmp_space.db
2025-06-17 20:45:45,850 INFO conf.HiveConf: Using the default value passed in for log id: 7a465677-eec4-40ee-b6a2-5c7b638725a7
2025-06-17 20:45:45,850 INFO session.SessionState: Updating thread name to 7a465677-eec4-40ee-b6a2-5c7b638725a7 main
2025-06-17 20:45:47,956 INFO metastore.HiveMetaStore: 0: Opening raw store with implementation class:org.apache.hadoop.hive.metastore.ObjectStore
2025-06-17 20:45:48,005 WARN metastore.ObjectStore: datanucleus.autoStartMechanismMode is set to unsupported value null . Setting it to value: ignored
2025-06-17 20:45:48,017 INFO metastore.ObjectStore: ObjectStore, initialize called
2025-06-17 20:45:48,020 INFO conf.MetastoreConf: Found configuration file file:/export/servers/apache-hive-3.1.3-bin/conf/hive-site.xml
2025-06-17 20:45:48,023 INFO conf.MetastoreConf: Unable to find config file hivemetastore-site.xml
2025-06-17 20:45:48,023 INFO conf.MetastoreConf: Found configuration file null
2025-06-17 20:45:48,024 INFO conf.MetastoreConf: Unable to find config file metastore-site.xml
2025-06-17 20:45:48,024 INFO conf.MetastoreConf: Found configuration file null
2025-06-17 20:45:48,400 INFO DataNucleus.Persistence: Property datanucleus.cache.level2 unknown - will be ignored
2025-06-17 20:45:48,953 INFO hikari.HikariDataSource: HikariPool-1 - Starting...
2025-06-17 20:45:49,485 INFO hikari.HikariDataSource: HikariPool-1 - Start completed.
2025-06-17 20:45:49,573 INFO hikari.HikariDataSource: HikariPool-2 - Starting...
2025-06-17 20:45:49,644 INFO hikari.HikariDataSource: HikariPool-2 - Start completed.
2025-06-17 20:45:50,533 INFO metastore.ObjectStore: Setting MetaStore object pin classes with hive.metastore.cache.pinobjtypes="Table,StorageDescriptor,SerDeInfo,Partition,Database,Type,FieldSchema,Order"
2025-06-17 20:45:50,824 INFO metastore.MetaStoreDirectSql: Using direct SQL, underlying DB is MYSQL
2025-06-17 20:45:50,827 INFO metastore.ObjectStore: Initialized ObjectStore
2025-06-17 20:45:51,223 WARN DataNucleus.MetaData: Metadata has jdbc-type of null yet this is not valid. Ignored
2025-06-17 20:45:51,224 WARN DataNucleus.MetaData: Metadata has jdbc-type of null yet this is not valid. Ignored
2025-06-17 20:45:51,225 WARN DataNucleus.MetaData: Metadata has jdbc-type of null yet this is not valid. Ignored
2025-06-17 20:45:51,225 WARN DataNucleus.MetaData: Metadata has jdbc-type of null yet this is not valid. Ignored
2025-06-17 20:45:51,225 WARN DataNucleus.MetaData: Metadata has jdbc-type of null yet this is not valid. Ignored
2025-06-17 20:45:51,225 WARN DataNucleus.MetaData: Metadata has jdbc-type of null yet this is not valid. Ignored
2025-06-17 20:45:54,606 WARN DataNucleus.MetaData: Metadata has jdbc-type of null yet this is not valid. Ignored
2025-06-17 20:45:54,607 WARN DataNucleus.MetaData: Metadata has jdbc-type of null yet this is not valid. Ignored
2025-06-17 20:45:54,608 WARN DataNucleus.MetaData: Metadata has jdbc-type of null yet this is not valid. Ignored
2025-06-17 20:45:54,608 WARN DataNucleus.MetaData: Metadata has jdbc-type of null yet this is not valid. Ignored
2025-06-17 20:45:54,609 WARN DataNucleus.MetaData: Metadata has jdbc-type of null yet this is not valid. Ignored
2025-06-17 20:45:54,609 WARN DataNucleus.MetaData: Metadata has jdbc-type of null yet this is not valid. Ignored
2025-06-17 20:45:59,277 WARN metastore.ObjectStore: Version information not found in metastore. metastore.schema.verification is not enabled so recording the schema version 3.1.0
2025-06-17 20:45:59,278 WARN metastore.ObjectStore: setMetaStoreSchemaVersion called but recording version is disabled: version = 3.1.0, comment = Set by MetaStore [email protected]
2025-06-17 20:45:59,555 INFO metastore.HiveMetaStore: Added admin role in metastore
2025-06-17 20:45:59,559 INFO metastore.HiveMetaStore: Added public role in metastore
2025-06-17 20:45:59,663 INFO metastore.HiveMetaStore: No user is added in admin role, since config is empty
2025-06-17 20:46:00,035 INFO metastore.RetryingMetaStoreClient: RetryingMetaStoreClient proxy=class org.apache.hadoop.hive.ql.metadata.SessionHiveMetaStoreClient ugi=root (auth:SIMPLE) retries=1 delay=1 lifetime=0
2025-06-17 20:46:00,086 INFO metastore.HiveMetaStore: 0: get_all_functions
2025-06-17 20:46:00,097 INFO HiveMetaStore.audit: ugi=root ip=unknown-ip-addr cmd=get_all_functions
Hive-on-MR is deprecated in Hive 2 and may not be available in the future versions. Consider using a different execution engine (i.e. spark, tez) or using Hive 1.X releases.
Hive Session ID = b7eae93e-640d-4628-b883-4e088aafa6e6
2025-06-17 20:46:00,287 INFO CliDriver: Hive-on-MR is deprecated in Hive 2 and may not be available in the future versions. Consider using a different execution engine (i.e. spark, tez) or using Hive 1.X releases.
2025-06-17 20:46:00,287 INFO SessionState: Hive Session ID = b7eae93e-640d-4628-b883-4e088aafa6e6
2025-06-17 20:46:00,331 INFO session.SessionState: Created HDFS directory: /tmp/hive/root/b7eae93e-640d-4628-b883-4e088aafa6e6
2025-06-17 20:46:00,336 INFO session.SessionState: Created local directory: /tmp/root/b7eae93e-640d-4628-b883-4e088aafa6e6
2025-06-17 20:46:00,346 INFO session.SessionState: Created HDFS directory: /tmp/hive/root/b7eae93e-640d-4628-b883-4e088aafa6e6/_tmp_space.db
2025-06-17 20:46:00,354 INFO metastore.HiveMetaStore: 1: get_databases: @hive#
2025-06-17 20:46:00,355 INFO HiveMetaStore.audit: ugi=root ip=unknown-ip-addr cmd=get_databases: @hive#
2025-06-17 20:46:00,360 INFO metastore.HiveMetaStore: 1: Opening raw store with implementation class:org.apache.hadoop.hive.metastore.ObjectStore
2025-06-17 20:46:00,364 INFO metastore.ObjectStore: ObjectStore, initialize called
2025-06-17 20:46:00,404 INFO metastore.MetaStoreDirectSql: Using direct SQL, underlying DB is MYSQL
2025-06-17 20:46:00,406 INFO metastore.ObjectStore: Initialized ObjectStore
2025-06-17 20:46:00,446 INFO metastore.HiveMetaStore: 1: get_tables_by_type: db=@hive#db_hive1 pat=.*,type=MATERIALIZED_VIEW
2025-06-17 20:46:00,447 INFO HiveMetaStore.audit: ugi=root ip=unknown-ip-addr cmd=get_tables_by_type: db=@hive#db_hive1 pat=.*,type=MATERIALIZED_VIEW
2025-06-17 20:46:00,470 INFO metastore.HiveMetaStore: 1: get_multi_table : db=db_hive1 tbls=
2025-06-17 20:46:00,471 INFO HiveMetaStore.audit: ugi=root ip=unknown-ip-addr cmd=get_multi_table : db=db_hive1 tbls=
2025-06-17 20:46:00,486 INFO metastore.HiveMetaStore: 1: get_tables_by_type: db=@hive#default pat=.*,type=MATERIALIZED_VIEW
2025-06-17 20:46:00,486 INFO HiveMetaStore.audit: ugi=root ip=unknown-ip-addr cmd=get_tables_by_type: db=@hive#default pat=.*,type=MATERIALIZED_VIEW
2025-06-17 20:46:00,495 INFO metastore.HiveMetaStore: 1: get_multi_table : db=default tbls=
2025-06-17 20:46:00,495 INFO HiveMetaStore.audit: ugi=root ip=unknown-ip-addr cmd=get_multi_table : db=default tbls=
2025-06-17 20:46:00,495 INFO metastore.HiveMetaStore: 1: get_tables_by_type: db=@hive#itcast_ods pat=.*,type=MATERIALIZED_VIEW
2025-06-17 20:46:00,496 INFO HiveMetaStore.audit: ugi=root ip=unknown-ip-addr cmd=get_tables_by_type: db=@hive#itcast_ods pat=.*,type=MATERIALIZED_VIEW
2025-06-17 20:46:00,503 INFO metastore.HiveMetaStore: 1: get_multi_table : db=itcast_ods tbls=
2025-06-17 20:46:00,503 INFO HiveMetaStore.audit: ugi=root ip=unknown-ip-addr cmd=get_multi_table : db=itcast_ods tbls=
2025-06-17 20:46:00,503 INFO metadata.HiveMaterializedViewsRegistry: Materialized views registry has been initialized
hive>

[root@hadoop01 apache-hive-3.1.3-bin]# hive
which: no hbase in (/export/servers/hadoop-3.3.5/bin::/export/servers/apache-hive-3.1.3-bin/bin:/export/servers/flume-1.9.0/bin::/export/servers/apache-hive-3.1.3-bin/bin:/export/servers/flume-1.9.0/bin:/export/servers/flume-1.9.0/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/export/servers/jdk1.8.0_161/bin:/export/servers/hadoop-3.3.5/bin:/export/servers/hadoop-3.3.5/sbin:/export/servers/scala-2.12.10/bin:/root/bin:/export/servers/jdk1.8.0_161/bin:/export/servers/hadoop-3.3.5/bin:/export/servers/hadoop-3.3.5/sbin:/export/servers/scala-2.12.10/bin:/export/servers/jdk1.8.0_161/bin:/export/servers/hadoop-3.3.5/bin:/export/servers/hadoop-3.3.5/sbin:/export/servers/scala-2.12.10/bin:/root/bin)
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/export/servers/hadoop-3.3.5/share/hadoop/common/lib/slf4j-reload4j-1.7.36.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/export/servers/apache-hive-3.1.3-bin/lib/log4j-slf4j-impl-2.17.1.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.slf4j.impl.Reload4jLoggerFactory]
2025-06-16 17:53:36,956 INFO conf.HiveConf: Found configuration file file:/export/servers/apache-hive-3.1.3-bin/conf/hive-site.xml
Hive Session ID = 30846036-47a7-480e-81e3-48f09d764412
2025-06-16 17:53:40,291 INFO SessionState: Hive Session ID = 30846036-47a7-480e-81e3-48f09d764412
Logging initialized using configuration in jar:file:/export/servers/apache-hive-3.1.3-bin/lib/hive-common-3.1.3.jar!/hive-log4j2.properties Async: true
2025-06-16 17:53:40,414 INFO SessionState: Logging initialized using configuration in jar:file:/export/servers/apache-hive-3.1.3-bin/lib/hive-common-3.1.3.jar!/hive-log4j2.properties Async: true
2025-06-16 17:53:43,041 INFO session.SessionState: Created HDFS directory: /tmp/hive/root/30846036-47a7-480e-81e3-48f09d764412
2025-06-16 17:53:43,099 INFO session.SessionState: Created local directory: /tmp/root/30846036-47a7-480e-81e3-48f09d764412
2025-06-16 17:53:43,119 INFO session.SessionState: Created HDFS directory: /tmp/hive/root/30846036-47a7-480e-81e3-48f09d764412/_tmp_space.db
2025-06-16 17:53:43,154 INFO conf.HiveConf: Using the default value passed in for log id: 30846036-47a7-480e-81e3-48f09d764412
2025-06-16 17:53:43,154 INFO session.SessionState: Updating thread name to 30846036-47a7-480e-81e3-48f09d764412 main
2025-06-16 17:53:45,040 INFO metastore.HiveMetaStore: 0: Opening raw store with implementation class:org.apache.hadoop.hive.metastore.ObjectStore
2025-06-16 17:53:45,120 WARN metastore.ObjectStore: datanucleus.autoStartMechanismMode is set to unsupported value null . Setting it to value: ignored
2025-06-16 17:53:45,133 INFO metastore.ObjectStore: ObjectStore, initialize called
2025-06-16 17:53:45,139 INFO conf.MetastoreConf: Found configuration file file:/export/servers/apache-hive-3.1.3-bin/conf/hive-site.xml
2025-06-16 17:53:45,141 INFO conf.MetastoreConf: Unable to find config file hivemetastore-site.xml
2025-06-16 17:53:45,141 INFO conf.MetastoreConf: Found configuration file null
2025-06-16 17:53:45,143 INFO conf.MetastoreConf: Unable to find config file metastore-site.xml
2025-06-16 17:53:45,143 INFO conf.MetastoreConf: Found configuration file null
2025-06-16 17:53:45,603 INFO DataNucleus.Persistence: Property datanucleus.cache.level2 unknown - will be ignored
2025-06-16 17:53:46,052 INFO hikari.HikariDataSource: HikariPool-1 - Starting...
2025-06-16 17:53:46,556 INFO hikari.HikariDataSource: HikariPool-1 - Start completed.
2025-06-16 17:53:46,645 INFO hikari.HikariDataSource: HikariPool-2 - Starting...
2025-06-16 17:53:46,677 INFO hikari.HikariDataSource: HikariPool-2 - Start completed.
2025-06-16 17:53:47,494 INFO metastore.ObjectStore: Setting MetaStore object pin classes with hive.metastore.cache.pinobjtypes="Table,StorageDescriptor,SerDeInfo,Partition,Database,Type,FieldSchema,Order"
2025-06-16 17:53:47,815 INFO metastore.MetaStoreDirectSql: Using direct SQL, underlying DB is MYSQL
2025-06-16 17:53:47,820 INFO metastore.ObjectStore: Initialized ObjectStore
2025-06-16 17:53:48,285 WARN DataNucleus.MetaData: Metadata has jdbc-type of null yet this is not valid. Ignored
2025-06-16 17:53:48,286 WARN DataNucleus.MetaData: Metadata has jdbc-type of null yet this is not valid. Ignored
2025-06-16 17:53:48,287 WARN DataNucleus.MetaData: Metadata has jdbc-type of null yet this is not valid. Ignored
2025-06-16 17:53:48,287 WARN DataNucleus.MetaData: Metadata has jdbc-type of null yet this is not valid. Ignored
2025-06-16 17:53:48,288 WARN DataNucleus.MetaData: Metadata has jdbc-type of null yet this is not valid. Ignored
2025-06-16 17:53:48,288 WARN DataNucleus.MetaData: Metadata has jdbc-type of null yet this is not valid. Ignored
2025-06-16 17:53:51,987 WARN DataNucleus.MetaData: Metadata has jdbc-type of null yet this is not valid. Ignored
2025-06-16 17:53:51,988 WARN DataNucleus.MetaData: Metadata has jdbc-type of null yet this is not valid. Ignored
2025-06-16 17:53:51,988 WARN DataNucleus.MetaData: Metadata has jdbc-type of null yet this is not valid. Ignored
2025-06-16 17:53:51,988 WARN DataNucleus.MetaData: Metadata has jdbc-type of null yet this is not valid. Ignored
2025-06-16 17:53:51,989 WARN DataNucleus.MetaData: Metadata has jdbc-type of null yet this is not valid. Ignored
2025-06-16 17:53:51,989 WARN DataNucleus.MetaData: Metadata has jdbc-type of null yet this is not valid. Ignored
2025-06-16 17:53:57,252 WARN metastore.ObjectStore: Version information not found in metastore. metastore.schema.verification is not enabled so recording the schema version 3.1.0
2025-06-16 17:53:57,253 WARN metastore.ObjectStore: setMetaStoreSchemaVersion called but recording version is disabled: version = 3.1.0, comment = Set by MetaStore [email protected]
2025-06-16 17:53:57,550 INFO metastore.HiveMetaStore: Added admin role in metastore
2025-06-16 17:53:57,560 INFO metastore.HiveMetaStore: Added public role in metastore
2025-06-16 17:53:57,673 INFO metastore.HiveMetaStore: No user is added in admin role, since config is empty
2025-06-16 17:53:58,030 INFO metastore.RetryingMetaStoreClient: RetryingMetaStoreClient proxy=class org.apache.hadoop.hive.ql.metadata.SessionHiveMetaStoreClient ugi=root (auth:SIMPLE) retries=1 delay=1 lifetime=0
2025-06-16 17:53:58,085 INFO metastore.HiveMetaStore: 0: get_all_functions
2025-06-16 17:53:58,089 INFO HiveMetaStore.audit: ugi=root ip=unknown-ip-addr cmd=get_all_functions
Hive-on-MR is deprecated in Hive 2 and may not be available in the future versions. Consider using a different execution engine (i.e. spark, tez) or using Hive 1.X releases.
2025-06-16 17:53:58,263 INFO CliDriver: Hive-on-MR is deprecated in Hive 2 and may not be available in the future versions. Consider using a different execution engine (i.e. spark, tez) or using Hive 1.X releases.
Hive Session ID = c998401c-9255-4863-8e6f-4932f9f591fa
2025-06-16 17:53:58,266 INFO SessionState: Hive Session ID = c998401c-9255-4863-8e6f-4932f9f591fa
2025-06-16 17:53:58,341 INFO session.SessionState: Created HDFS directory: /tmp/hive/root/c998401c-9255-4863-8e6f-4932f9f591fa
2025-06-16 17:53:58,350 INFO session.SessionState: Created local directory: /tmp/root/c998401c-9255-4863-8e6f-4932f9f591fa
2025-06-16 17:53:58,365 INFO session.SessionState: Created HDFS directory: /tmp/hive/root/c998401c-9255-4863-8e6f-4932f9f591fa/_tmp_space.db
2025-06-16 17:53:58,372 INFO metastore.HiveMetaStore: 1: get_databases: @hive#
2025-06-16 17:53:58,373 INFO HiveMetaStore.audit: ugi=root ip=unknown-ip-addr cmd=get_databases: @hive#
2025-06-16 17:53:58,377 INFO metastore.HiveMetaStore: 1: Opening raw store with implementation class:org.apache.hadoop.hive.metastore.ObjectStore
2025-06-16 17:53:58,383 INFO metastore.ObjectStore: ObjectStore, initialize called
2025-06-16 17:53:58,462 INFO metastore.MetaStoreDirectSql: Using direct SQL, underlying DB is MYSQL
2025-06-16 17:53:58,466 INFO metastore.ObjectStore: Initialized ObjectStore
2025-06-16 17:53:58,494 INFO metastore.HiveMetaStore: 1: get_tables_by_type: db=@hive#db_hive1 pat=.*,type=MATERIALIZED_VIEW
2025-06-16 17:53:58,495 INFO HiveMetaStore.audit: ugi=root ip=unknown-ip-addr cmd=get_tables_by_type: db=@hive#db_hive1 pat=.*,type=MATERIALIZED_VIEW
2025-06-16 17:53:58,525 INFO metastore.HiveMetaStore: 1: get_multi_table : db=db_hive1 tbls=
2025-06-16 17:53:58,526 INFO HiveMetaStore.audit: ugi=root ip=unknown-ip-addr cmd=get_multi_table : db=db_hive1 tbls=
2025-06-16 17:53:58,530 INFO metastore.HiveMetaStore: 1: get_tables_by_type: db=@hive#default pat=.*,type=MATERIALIZED_VIEW
2025-06-16 17:53:58,530 INFO HiveMetaStore.audit: ugi=root ip=unknown-ip-addr cmd=get_tables_by_type: db=@hive#default pat=.*,type=MATERIALIZED_VIEW
2025-06-16 17:53:58,539 INFO metastore.HiveMetaStore: 1: get_multi_table : db=default tbls=
2025-06-16 17:53:58,539 INFO HiveMetaStore.audit: ugi=root ip=unknown-ip-addr cmd=get_multi_table : db=default tbls=
2025-06-16 17:53:58,539 INFO metadata.HiveMaterializedViewsRegistry: Materialized views registry has been initialized

WARNING! You are using an unsupported Java runtime.
* Please use Java(TM) 21 to run Neo4j.
* Please see https://neo4j.com/docs/ for Neo4j installation instructions.
2025-03-19 03:12:04.510+0000 INFO Logging config in use: File 'D:\neo4j community\neo4j-community-2025.02.0-windows\neo4j-community-2025.02.0\conf\user-logs.xml'
2025-03-19 03:12:04.526+0000 INFO Starting...
2025-03-19 03:12:06.349+0000 INFO This instance is ServerId{f31b9cbe} (f31b9cbe-9c30-40d8-9dd1-a8c08979c293)
2025-03-19 03:12:08.179+0000 INFO ======== Neo4j 2025.02.0 ========
2025-03-19 03:12:19.405+0000 ERROR Failed to start Neo4j on localhost:7474.
java.lang.RuntimeException: Error starting Neo4j database server at D:\neo4j community\neo4j-community-2025.02.0-windows\neo4j-community-2025.02.0\data\databases
    at org.neo4j.graphdb.facade.DatabaseManagementServiceFactory.startDatabaseServer(DatabaseManagementServiceFactory.java:293) ~[neo4j-2025.02.0.jar:2025.02.0]
    at org.neo4j.graphdb.facade.DatabaseManagementServiceFactory.build(DatabaseManagementServiceFactory.java:228) ~[neo4j-2025.02.0.jar:2025.02.0]
    at org.neo4j.server.CommunityBootstrapper.createNeo(CommunityBootstrapper.java:38) ~[neo4j-2025.02.0.jar:2025.02.0]
    at org.neo4j.server.NeoBootstrapper.start(NeoBootstrapper.java:190) [neo4j-2025.02.0.jar:2025.02.0]
    at org.neo4j.server.NeoBootstrapper.start(NeoBootstrapper.java:100) [neo4j-2025.02.0.jar:2025.02.0]
    at org.neo4j.server.Neo4jCommunity.main(Neo4jCommunity.java:30) [neo4j-2025.02.0.jar:2025.02.0]
Caused by: org.neo4j.kernel.lifecycle.LifecycleException: Component 'org.neo4j.kernel.internal.locker.LockerLifecycleAdapter@7fae4d4a' was successfully initialized, but failed to start. Please see the attached cause exception "Lock file has been locked by another process: D:\neo4j community\neo4j-community-2025.02.0-windows\neo4j-community-2025.02.0\data\databases\store_lock. Please ensure no other process is using this database, and that the d
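Two independent problems are captured above. The banner warning is self-explanatory: Neo4j 2025.02 requires Java 21, and an unsupported JDK is in use. The startup failure itself is the cause exception at the bottom: data\databases\store_lock is held by another process, meaning a second Neo4j instance (or an earlier one that never shut down) is still attached to the store. Stopping that process and relaunching under JDK 21 addresses both issues; deleting a leftover store_lock file is only safe once no Neo4j process is running, which is an inference from the truncated message rather than something shown in the capture.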

"C:\Program Files\Java\jdk17\bin\java.exe" -XX:TieredStopAtLevel=1 -Dspring.output.ansi.enabled=always -Dcom.sun.management.jmxremote -Dspring.jmx.enabled=true -Dspring.liveBeansView.mbeanDomain -Dspring.application.admin.enabled=true "-Dmanagement.endpoints.jmx.exposure.include=*" "-javaagent:C:\Program Files\IDEA\lib\idea_rt.jar=17853:C:\Program Files\IDEA\bin" -Dfile.encoding=UTF-8 -classpath D:\demo\target\classes;C:\Users\Administrator\.m2\repository\org\springframework\boot\spring-boot-starter-data-jpa\3.4.7\spring-boot-starter-data-jpa-3.4.7.jar;C:\Users\Administrator\.m2\repository\org\springframework\boot\spring-boot-starter\3.4.7\spring-boot-starter-3.4.7.jar;C:\Users\Administrator\.m2\repository\org\springframework\boot\spring-boot\3.4.7\spring-boot-3.4.7.jar;C:\Users\Administrator\.m2\repository\org\springframework\boot\spring-boot-autoconfigure\3.4.7\spring-boot-autoconfigure-3.4.7.jar;C:\Users\Administrator\.m2\repository\org\springframework\boot\spring-boot-starter-logging\3.4.7\spring-boot-starter-logging-3.4.7.jar;C:\Users\Administrator\.m2\repository\ch\qos\logback\logback-classic\1.5.18\logback-classic-1.5.18.jar;C:\Users\Administrator\.m2\repository\ch\qos\logback\logback-core\1.5.18\logback-core-1.5.18.jar;C:\Users\Administrator\.m2\repository\org\apache\logging\log4j\log4j-to-slf4j\2.24.3\log4j-to-slf4j-2.24.3.jar;C:\Users\Administrator\.m2\repository\org\apache\logging\log4j\log4j-api\2.24.3\log4j-api-2.24.3.jar;C:\Users\Administrator\.m2\repository\org\slf4j\jul-to-slf4j\2.0.17\jul-to-slf4j-2.0.17.jar;C:\Users\Administrator\.m2\repository\jakarta\annotation\jakarta.annotation-api\2.1.1\jakarta.annotation-api-2.1.1.jar;C:\Users\Administrator\.m2\repository\org\yaml\snakeyaml\2.3\snakeyaml-2.3.jar;C:\Users\Administrator\.m2\repository\org\springframework\boot\spring-boot-starter-jdbc\3.4.7\spring-boot-starter-jdbc-3.4.7.jar;C:\Users\Administrator\.m2\repository\com\zaxxer\HikariCP\5.1.0\HikariCP-5.1.0.jar;C:\Users\Administrator\.m2\repository\org\springframework\spring-jdbc\6.2.8\spring-jdbc-6.2.8.jar;C:\Users\Administrator\.m2\repository\org\hibernate\orm\hibernate-core\6.6.18.Final\hibernate-core-6.6.18.Final.jar;C:\Users\Administrator\.m2\repository\jakarta\persistence\jakarta.persistence-api\3.1.0\jakarta.persistence-api-3.1.0.jar;C:\Users\Administrator\.m2\repository\jakarta\transaction\jakarta.transaction-api\2.0.1\jakarta.transaction-api-2.0.1.jar;C:\Users\Administrator\.m2\repository\org\jboss\logging\jboss-logging\3.6.1.Final\jboss-logging-3.6.1.Final.jar;C:\Users\Administrator\.m2\repository\org\hibernate\common\hibernate-commons-annotations\7.0.3.Final\hibernate-commons-annotations-7.0.3.Final.jar;C:\Users\Administrator\.m2\repository\io\smallrye\jandex\3.2.0\jandex-3.2.0.jar;C:\Users\Administrator\.m2\repository\com\fasterxml\classmate\1.7.0\classmate-1.7.0.jar;C:\Users\Administrator\.m2\repository\net\bytebuddy\byte-buddy\1.15.11\byte-buddy-1.15.11.jar;C:\Users\Administrator\.m2\repository\org\glassfish\jaxb\jaxb-runtime\4.0.5\jaxb-runtime-4.0.5.jar;C:\Users\Administrator\.m2\repository\org\glassfish\jaxb\jaxb-core\4.0.5\jaxb-core-4.0.5.jar;C:\Users\Administrator\.m2\repository\org\eclipse\angus\angus-activation\2.0.2\angus-activation-2.0.2.jar;C:\Users\Administrator\.m2\repository\org\glassfish\jaxb\txw2\4.0.5\txw2-4.0.5.jar;C:\Users\Administrator\.m2\repository\com\sun\istack\istack-commons-runtime\4.1.2\istack-commons-runtime-4.1.2.jar;C:\Users\Administrator\.m2\repository\jakarta\inject\jakarta.inject-api\2.0.1\jakarta.inject-api-2.0.1.jar;C:\Users\Adm
inistrator\.m2\repository\org\antlr\antlr4-runtime\4.13.0\antlr4-runtime-4.13.0.jar;C:\Users\Administrator\.m2\repository\org\springframework\data\spring-data-jpa\3.4.7\spring-data-jpa-3.4.7.jar;C:\Users\Administrator\.m2\repository\org\springframework\data\spring-data-commons\3.4.7\spring-data-commons-3.4.7.jar;C:\Users\Administrator\.m2\repository\org\springframework\spring-orm\6.2.8\spring-orm-6.2.8.jar;C:\Users\Administrator\.m2\repository\org\springframework\spring-context\6.2.8\spring-context-6.2.8.jar;C:\Users\Administrator\.m2\repository\org\springframework\spring-aop\6.2.8\spring-aop-6.2.8.jar;C:\Users\Administrator\.m2\repository\org\springframework\spring-tx\6.2.8\spring-tx-6.2.8.jar;C:\Users\Administrator\.m2\repository\org\springframework\spring-beans\6.2.8\spring-beans-6.2.8.jar;C:\Users\Administrator\.m2\repository\org\slf4j\slf4j-api\2.0.17\slf4j-api-2.0.17.jar;C:\Users\Administrator\.m2\repository\org\springframework\spring-aspects\6.2.8\spring-aspects-6.2.8.jar;C:\Users\Administrator\.m2\repository\org\aspectj\aspectjweaver\1.9.24\aspectjweaver-1.9.24.jar;C:\Users\Administrator\.m2\repository\org\springframework\boot\spring-boot-starter-thymeleaf\3.4.7\spring-boot-starter-thymeleaf-3.4.7.jar;C:\Users\Administrator\.m2\repository\org\thymeleaf\thymeleaf-spring6\3.1.3.RELEASE\thymeleaf-spring6-3.1.3.RELEASE.jar;C:\Users\Administrator\.m2\repository\org\thymeleaf\thymeleaf\3.1.3.RELEASE\thymeleaf-3.1.3.RELEASE.jar;C:\Users\Administrator\.m2\repository\org\attoparser\attoparser\2.0.7.RELEASE\attoparser-2.0.7.RELEASE.jar;C:\Users\Administrator\.m2\repository\org\unbescape\unbescape\1.1.6.RELEASE\unbescape-1.1.6.RELEASE.jar;C:\Users\Administrator\.m2\repository\org\springframework\boot\spring-boot-starter-web\3.4.7\spring-boot-starter-web-3.4.7.jar;C:\Users\Administrator\.m2\repository\org\springframework\boot\spring-boot-starter-json\3.4.7\spring-boot-starter-json-3.4.7.jar;C:\Users\Administrator\.m2\repository\com\fasterxml\jackson\core\jackson-databind\2.18.4\jackson-databind-2.18.4.jar;C:\Users\Administrator\.m2\repository\com\fasterxml\jackson\core\jackson-annotations\2.18.4\jackson-annotations-2.18.4.jar;C:\Users\Administrator\.m2\repository\com\fasterxml\jackson\core\jackson-core\2.18.4.1\jackson-core-2.18.4.1.jar;C:\Users\Administrator\.m2\repository\com\fasterxml\jackson\datatype\jackson-datatype-jdk8\2.18.4\jackson-datatype-jdk8-2.18.4.jar;C:\Users\Administrator\.m2\repository\com\fasterxml\jackson\datatype\jackson-datatype-jsr310\2.18.4\jackson-datatype-jsr310-2.18.4.jar;C:\Users\Administrator\.m2\repository\com\fasterxml\jackson\module\jackson-module-parameter-names\2.18.4\jackson-module-parameter-names-2.18.4.jar;C:\Users\Administrator\.m2\repository\org\springframework\boot\spring-boot-starter-tomcat\3.4.7\spring-boot-starter-tomcat-3.4.7.jar;C:\Users\Administrator\.m2\repository\org\apache\tomcat\embed\tomcat-embed-core\10.1.42\tomcat-embed-core-10.1.42.jar;C:\Users\Administrator\.m2\repository\org\apache\tomcat\embed\tomcat-embed-el\10.1.42\tomcat-embed-el-10.1.42.jar;C:\Users\Administrator\.m2\repository\org\apache\tomcat\embed\tomcat-embed-websocket\10.1.42\tomcat-embed-websocket-10.1.42.jar;C:\Users\Administrator\.m2\repository\org\springframework\spring-web\6.2.8\spring-web-6.2.8.jar;C:\Users\Administrator\.m2\repository\org\springframework\spring-webmvc\6.2.8\spring-webmvc-6.2.8.jar;C:\Users\Administrator\.m2\repository\org\springframework\spring-expression\6.2.8\spring-expression-6.2.8.jar;C:\Users\Administrator\.m2\repository\com\mysql\mysql-connector-j\9.
1.0\mysql-connector-j-9.1.0.jar;C:\Users\Administrator\.m2\repository\jakarta\xml\bind\jakarta.xml.bind-api\4.0.2\jakarta.xml.bind-api-4.0.2.jar;C:\Users\Administrator\.m2\repository\jakarta\activation\jakarta.activation-api\2.1.3\jakarta.activation-api-2.1.3.jar;C:\Users\Administrator\.m2\repository\org\springframework\spring-core\6.2.8\spring-core-6.2.8.jar;C:\Users\Administrator\.m2\repository\org\springframework\spring-jcl\6.2.8\spring-jcl-6.2.8.jar;C:\Users\Administrator\.m2\repository\org\springframework\boot\spring-boot-starter-actuator\3.4.7\spring-boot-starter-actuator-3.4.7.jar;C:\Users\Administrator\.m2\repository\org\springframework\boot\spring-boot-actuator-autoconfigure\3.4.7\spring-boot-actuator-autoconfigure-3.4.7.jar;C:\Users\Administrator\.m2\repository\org\springframework\boot\spring-boot-actuator\3.4.7\spring-boot-actuator-3.4.7.jar;C:\Users\Administrator\.m2\repository\io\micrometer\micrometer-observation\1.14.8\micrometer-observation-1.14.8.jar;C:\Users\Administrator\.m2\repository\io\micrometer\micrometer-commons\1.14.8\micrometer-commons-1.14.8.jar;C:\Users\Administrator\.m2\repository\io\micrometer\micrometer-jakarta9\1.14.8\micrometer-jakarta9-1.14.8.jar;C:\Users\Administrator\.m2\repository\io\micrometer\micrometer-core\1.14.8\micrometer-core-1.14.8.jar;C:\Users\Administrator\.m2\repository\org\hdrhistogram\HdrHistogram\2.2.2\HdrHistogram-2.2.2.jar;C:\Users\Administrator\.m2\repository\org\latencyutils\LatencyUtils\2.0.3\LatencyUtils-2.0.3.jar com.example.demo.DemoApplication

  .   ____          _            __ _ _
 /\\ / ___'_ __ _ _(_)_ __  __ _ \ \ \ \
( ( )\___ | '_ | '_| | '_ \/ _` | \ \ \ \
 \\/  ___)| |_)| | | | | || (_| |  ) ) ) )
  '  |____| .__|_| |_|_| |_\__, | / / / /
 =========|_|==============|___/=/_/_/_/
 :: Spring Boot ::                (v3.4.7)

2025-06-27T10:30:14.443+08:00 INFO 9072 --- [demo] [ main] com.example.demo.DemoApplication : Starting DemoApplication using Java 17.0.1 with PID 9072 (D:\demo\target\classes started by Administrator in D:\demo)
2025-06-27T10:30:14.450+08:00 INFO 9072 --- [demo] [ main] com.example.demo.DemoApplication : No active profile set, falling back to 1 default profile: "default"
2025-06-27T10:30:15.740+08:00 INFO 9072 --- [demo] [ main] .s.d.r.c.RepositoryConfigurationDelegate : Bootstrapping Spring Data JPA repositories in DEFAULT mode.
2025-06-27T10:30:15.807+08:00 INFO 9072 --- [demo] [ main] .s.d.r.c.RepositoryConfigurationDelegate : Finished Spring Data repository scanning in 52 ms. Found 1 JPA repository interface.
2025-06-27T10:30:16.374+08:00 INFO 9072 --- [demo] [ main] o.s.b.w.embedded.tomcat.TomcatWebServer : Tomcat initialized with port 8080 (http)
2025-06-27T10:30:16.390+08:00 INFO 9072 --- [demo] [ main] o.apache.catalina.core.StandardService : Starting service [Tomcat]
2025-06-27T10:30:16.390+08:00 INFO 9072 --- [demo] [ main] o.apache.catalina.core.StandardEngine : Starting Servlet engine: [Apache Tomcat/10.1.42]
2025-06-27T10:30:16.461+08:00 INFO 9072 --- [demo] [ main] o.a.c.c.C.[Tomcat].[localhost].[/] : Initializing Spring embedded WebApplicationContext
2025-06-27T10:30:16.462+08:00 INFO 9072 --- [demo] [ main] w.s.c.ServletWebServerApplicationContext : Root WebApplicationContext: initialization completed in 1934 ms
2025-06-27T10:30:16.746+08:00 INFO 9072 --- [demo] [ main] com.zaxxer.hikari.HikariDataSource : HikariPool-1 - Starting...
2025-06-27T10:30:17.060+08:00 INFO 9072 --- [demo] [ main] com.zaxxer.hikari.pool.HikariPool : HikariPool-1 - Added connection com.mysql.cj.jdbc.ConnectionImpl@7ad889be
2025-06-27T10:30:17.061+08:00 INFO 9072 --- [demo] [ main] com.zaxxer.hikari.HikariDataSource : HikariPool-1 - Start completed.
2025-06-27T10:30:17.099+08:00 INFO 9072 --- [demo] [ main] o.hibernate.jpa.internal.util.LogHelper : HHH000204: Processing PersistenceUnitInfo [name: default]
2025-06-27T10:30:17.150+08:00 INFO 9072 --- [demo] [ main] org.hibernate.Version : HHH000412: Hibernate ORM core version 6.6.18.Final
2025-06-27T10:30:17.182+08:00 INFO 9072 --- [demo] [ main] o.h.c.internal.RegionFactoryInitiator : HHH000026: Second-level cache disabled
2025-06-27T10:30:17.485+08:00 INFO 9072 --- [demo] [ main] o.s.o.j.p.SpringPersistenceUnitInfo : No LoadTimeWeaver setup: ignoring JPA class transformer
2025-06-27T10:30:17.574+08:00 INFO 9072 --- [demo] [ main] org.hibernate.orm.connections.pooling : HHH10001005: Database info: Database JDBC URL [Connecting through datasource 'HikariDataSource (HikariPool-1)'] Database driver: undefined/unknown Database version: 8.0.12 Autocommit mode: undefined/unknown Isolation level: undefined/unknown Minimum pool size: undefined/unknown Maximum pool size: undefined/unknown
2025-06-27T10:30:18.349+08:00 INFO 9072 --- [demo] [ main] o.h.e.t.j.p.i.JtaPlatformInitiator : HHH000489: No JTA platform available (set 'hibernate.transaction.jta.platform' to enable JTA platform integration)
2025-06-27T10:30:18.352+08:00 INFO 9072 --- [demo] [ main] j.LocalContainerEntityManagerFactoryBean : Initialized JPA EntityManagerFactory for persistence unit 'default'
2025-06-27T10:30:18.653+08:00 WARN 9072 --- [demo] [ main] JpaBaseConfiguration$JpaWebConfiguration : spring.jpa.open-in-view is enabled by default. Therefore, database queries may be performed during view rendering. Explicitly configure spring.jpa.open-in-view to disable this warning
2025-06-27T10:30:19.324+08:00 INFO 9072 --- [demo] [ main] o.s.b.a.e.web.EndpointLinksResolver : Exposing 1 endpoint beneath base path '/actuator'
2025-06-27T10:30:19.464+08:00 INFO 9072 --- [demo] [ main] o.s.b.w.embedded.tomcat.TomcatWebServer : Tomcat started on port 8080 (http) with context path '/'
2025-06-27T10:30:19.492+08:00 INFO 9072 --- [demo] [ main] com.example.demo.DemoApplication : Started DemoApplication in 5.68 seconds (process running for 6.546)
2025-06-27T10:30:19.973+08:00 INFO 9072 --- [demo] [)-10.128.10.155] o.a.c.c.C.[Tomcat].[localhost].[/] : Initializing Spring DispatcherServlet 'dispatcherServlet'
2025-06-27T10:30:19.974+08:00 INFO 9072 --- [demo] [)-10.128.10.155] o.s.web.servlet.DispatcherServlet : Initializing Servlet 'dispatcherServlet'
2025-06-27T10:30:19.977+08:00 INFO 9072 --- [demo] [)-10.128.10.155] o.s.web.servlet.DispatcherServlet : Completed initialization in 3 ms
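The only warning in this otherwise clean startup is the spring.jpa.open-in-view notice: Spring Boot keeps the JPA session open during view rendering by default, so repository queries can silently run while templates render. Setting spring.jpa.open-in-view=false in application.properties (assuming the default property source is in use) silences the warning and surfaces lazy-loading problems in the service layer, where they are easier to reason about.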

2025-03-27T18:00:11.729Z[RC-E][WARN]: L-imsdk_ver-O | {"commitId":"6d681dbc4c78eb23bb4ea2400eeadea4e5901f92","version":"5.8.2-enterprise.4"} | undefined
2025-03-27T18:00:11.737Z[RCRTC][WARN]: L-index_install_rtc_plugin-O | {"RCRTC Version":"5.6.13-enterprise.1","Commit":"ba2b09b5a7e45d09a1b4d4964843d407c879f553","browserInfo.browser":"chrome","browserInfo.supportsUnifiedPlan":true,"browserInfo.version":134} | undefined
2025-03-27T18:00:11.784Z[RCCall][WARN]: _ | RCCall Version: 5.1.2-enterprise.4, Commit: 36a0f3a722bbc85328120e8db4bd13bea4d7f71a | undefined
2025-03-27T18:00:11.786Z[RCCall][WARN]: _ | RCCallEngine Version: 5.1.2-enterprise.4 CommitId: ace301360bc38bcf6ae38c7f5443b01c36be6be5 | undefined
2025-03-27T18:00:11.791Z[RC-E][WARN]: L-connect-S | status: 1, code: undefined | undefined
2025-03-27T18:00:11.792Z[RC-E][WARN]: A-connect-S | status: 1, code: undefined | undefined
2025-03-27T18:00:12.297Z[RC-E][WARN]: A-connect-S | status: 0, code: undefined | undefined
2025-03-27T18:00:19.605Z[RCCall][WARN]: _ | CallMessageHandler] sendCallMesage sendInvite | {"roomType":0,"channelId":"","conversationType":3,"targetId":"GROUPBJ2025032800107785","callId":"CMa4kInGEMkMX8L9iG311x_1743098419604_360","extra":"","pushTitle":"","pushContent":"","mediaType":2,"inviteUserIds":["wxw"]}
2025-03-27T18:00:19.686Z[RCCall][WARN]: _ | [RCCallStateMachine] notifyUserStateChange -> info: {"user":{"userId":"cs_renning","state":1,"isCaller":true,"isRemote":false}} | undefined
2025-03-27T18:00:19.687Z[RCCall][WARN]: _ | [RCCallStateMachine] notifyUserStateChange -> info: {"user":{"userId":"wxw","state":1,"isCaller":false,"isRemote":true}} | undefined
2025-03-27T18:00:19.687Z[RCCall][WARN]: _ | _getTimeout -> timeout: 59796 | undefined
2025-03-27T18:00:19.688Z[RCCall][WARN]: _ | [RCCallStateMachine] notifyStateChange -> info: {"state":0} | undefined
2025-03-27T18:00:20.080Z[RCCall][WARN]: _ | onMessage -> msgBufferList: 1 | undefined
2025-03-27T18:00:20.321Z[RCCall][WARN]: _ | _handleBu

hive> create database bclcredits;
2025-06-18 16:01:34,708 INFO conf.HiveConf: Using the default value passed in for log id: 949847db-5ab3-4891-b83b-d3071c4b727b
2025-06-18 16:01:34,709 INFO session.SessionState: Updating thread name to 949847db-5ab3-4891-b83b-d3071c4b727b main
2025-06-18 16:01:34,710 INFO ql.Driver: Compiling command(queryId=aaa_20250618160134_462642e2-55ea-4fa2-bd3d-7f4f5dbee217): create database bclcredits
2025-06-18 16:01:34,717 INFO ql.Driver: Concurrency mode is disabled, not creating a lock manager
2025-06-18 16:01:34,722 INFO ql.Driver: Semantic Analysis Completed (retrial = false)
2025-06-18 16:01:34,722 INFO ql.Driver: Returning Hive schema: Schema(fieldSchemas:null, properties:null)
2025-06-18 16:01:34,722 INFO ql.Driver: Completed compiling command(queryId=aaa_20250618160134_462642e2-55ea-4fa2-bd3d-7f4f5dbee217); Time taken: 0.012 seconds
2025-06-18 16:01:34,722 INFO reexec.ReExecDriver: Execution #1 of query
2025-06-18 16:01:34,723 INFO ql.Driver: Concurrency mode is disabled, not creating a lock manager
2025-06-18 16:01:34,723 INFO ql.Driver: Executing command(queryId=aaa_20250618160134_462642e2-55ea-4fa2-bd3d-7f4f5dbee217): create database bclcredits
2025-06-18 16:01:34,723 INFO ql.Driver: Starting task [Stage-0:DDL] in serial mode
2025-06-18 16:01:34,742 INFO sqlstd.SQLStdHiveAccessController: Created SQLStdHiveAccessController for session context : HiveAuthzSessionContext [sessionString=949847db-5ab3-4891-b83b-d3071c4b727b, clientType=HIVECLI]
2025-06-18 16:01:34,744 WARN session.SessionState: METASTORE_FILTER_HOOK will be ignored, since hive.security.authorization.manager is set to instance of HiveAuthorizerFactory.
2025-06-18 16:01:34,745 INFO metastore.HiveMetaStoreClient: Mestastore configuration metastore.filter.hook changed from org.apache.hadoop.hive.metastore.DefaultMetaStoreFilterHookImpl to org.apache.hadoop.hive.ql.security.authorization.plugin.AuthorizationMetaStoreFilterHook
2025-06-18 16:01:34,746 INFO metastore.HiveMetaStore: 0: Cleaning up thread local RawStore...
2025-06-18 16:01:34,746 INFO HiveMetaStore.audit: ugi=aaa ip=unknown-ip-addr cmd=Cleaning up thread local RawStore...
2025-06-18 16:01:34,746 INFO metastore.HiveMetaStore: 0: Done cleaning up thread local RawStore
2025-06-18 16:01:34,747 INFO HiveMetaStore.audit: ugi=aaa ip=unknown-ip-addr cmd=Done cleaning up thread local RawStore
2025-06-18 16:01:34,748 INFO metastore.HiveMetaStore: 0: Opening raw store with implementation class:org.apache.hadoop.hive.metastore.ObjectStore
2025-06-18 16:01:34,748 WARN metastore.ObjectStore: datanucleus.autoStartMechanismMode is set to unsupported value null . Setting it to value: ignored
2025-06-18 16:01:34,748 INFO metastore.ObjectStore: ObjectStore, initialize called
2025-06-18 16:01:34,751 INFO metastore.MetaStoreDirectSql: Using direct SQL, underlying DB is MYSQL
2025-06-18 16:01:34,752 INFO metastore.ObjectStore: Initialized ObjectStore
2025-06-18 16:01:34,753 INFO metastore.RetryingMetaStoreClient: RetryingMetaStoreClient proxy=class org.apache.hadoop.hive.ql.metadata.SessionHiveMetaStoreClient ugi=aaa (auth:SIMPLE) retries=1 delay=1 lifetime=0
2025-06-18 16:01:34,755 WARN metastore.ObjectStore: datanucleus.autoStartMechanismMode is set to unsupported value null . Setting it to value: ignored
2025-06-18 16:01:34,756 INFO metastore.ObjectStore: ObjectStore, initialize called
2025-06-18 16:01:34,758 INFO metastore.MetaStoreDirectSql: Using direct SQL, underlying DB is MYSQL
2025-06-18 16:01:34,758 INFO metastore.ObjectStore: Initialized ObjectStore
2025-06-18 16:01:34,759 INFO metastore.RetryingMetaStoreClient: RetryingMetaStoreClient proxy=class org.apache.hadoop.hive.ql.metadata.SessionHiveMetaStoreClient ugi=aaa (auth:SIMPLE) retries=1 delay=1 lifetime=0
2025-06-18 16:01:34,759 INFO metastore.HiveMetaStore: 0: create_database: Database(name:bclcredits, description:null, locationUri:hdfs://localhost:9000/user/hive/warehouse/bclcredits.db, parameters:null, ownerName:aaa, ownerType:USER, catalogName:hive)
2025-06-18 16:01:34,759 INFO HiveMetaStore.audit: ugi=aaa ip=unknown-ip-addr cmd=create_database: Database(name:bclcredits, description:null, locationUri:hdfs://localhost:9000/user/hive/warehouse/bclcredits.db, parameters:null, ownerName:aaa, ownerType:USER, catalogName:hive)
2025-06-18 16:01:34,764 ERROR metastore.RetryingHMSHandler: AlreadyExistsException(message:Database bclcredits already exists)
    at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.create_database(HiveMetaStore.java:1311)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at org.apache.hadoop.hive.metastore.RetryingHMSHandler.invokeInternal(RetryingHMSHandler.java:147)
    at org.apache.hadoop.hive.metastore.RetryingHMSHandler.invoke(RetryingHMSHandler.java:108)
    at com.sun.proxy.$Proxy38.create_database(Unknown Source)
    at org.apache.hadoop.hive.metastore.HiveMetaStoreClient.createDatabase(HiveMetaStoreClient.java:809)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.invoke(RetryingMetaStoreClient.java:212)
    at com.sun.proxy.$Proxy39.createDatabase(Unknown Source)
    at org.apache.hadoop.hive.ql.metadata.Hive.createDatabase(Hive.java:477)
    at org.apache.hadoop.hive.ql.exec.DDLTask.createDatabase(DDLTask.java:4828)
    at org.apache.hadoop.hive.ql.exec.DDLTask.execute(DDLTask.java:393)
    at org.apache.hadoop.hive.ql.exec.Task.executeTask(Task.java:205)
    at org.apache.hadoop.hive.ql.exec.TaskRunner.runSequential(TaskRunner.java:97)
    at org.apache.hadoop.hive.ql.Driver.launchTask(Driver.java:2664)
    at org.apache.hadoop.hive.ql.Driver.execute(Driver.java:2335)
    at org.apache.hadoop.hive.ql.Driver.runInternal(Driver.java:2011)
    at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1709)
    at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1703)
    at org.apache.hadoop.hive.ql.reexec.ReExecDriver.run(ReExecDriver.java:157)
    at org.apache.hadoop.hive.ql.reexec.ReExecDriver.run(ReExecDriver.java:218)
    at org.apache.hadoop.hive.cli.CliDriver.processLocalCmd(CliDriver.java:239)
    at org.apache.hadoop.hive.cli.CliDriver.processCmd(CliDriver.java:188)
    at org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:402)
    at org.apache.hadoop.hive.cli.CliDriver.executeDriver(CliDriver.java:821)
    at org.apache.hadoop.hive.cli.CliDriver.run(CliDriver.java:759)
    at org.apache.hadoop.hive.cli.CliDriver.main(CliDriver.java:683)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at org.apache.hadoop.util.RunJar.run(RunJar.java:323)
    at org.apache.hadoop.util.RunJar.main(RunJar.java:236)
2025-06-18 16:01:34,764 ERROR exec.DDLTask: Failed
org.apache.hadoop.hive.ql.metadata.HiveException: Database bclcredits already exists
    at org.apache.hadoop.hive.ql.exec.DDLTask.createDatabase(DDLTask.java:4832)
    at org.apache.hadoop.hive.ql.exec.DDLTask.execute(DDLTask.java:393)
    at org.apache.hadoop.hive.ql.exec.Task.executeTask(Task.java:205)
    at org.apache.hadoop.hive.ql.exec.TaskRunner.runSequential(TaskRunner.java:97)
    at org.apache.hadoop.hive.ql.Driver.launchTask(Driver.java:2664)
    at org.apache.hadoop.hive.ql.Driver.execute(Driver.java:2335)
    at org.apache.hadoop.hive.ql.Driver.runInternal(Driver.java:2011)
    at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1709)
    at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1703)
    at org.apache.hadoop.hive.ql.reexec.ReExecDriver.run(ReExecDriver.java:157)
    at org.apache.hadoop.hive.ql.reexec.ReExecDriver.run(ReExecDriver.java:218)
    at org.apache.hadoop.hive.cli.CliDriver.processLocalCmd(CliDriver.java:239)
    at org.apache.hadoop.hive.cli.CliDriver.processCmd(CliDriver.java:188)
    at org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:402)
    at org.apache.hadoop.hive.cli.CliDriver.executeDriver(CliDriver.java:821)
    at org.apache.hadoop.hive.cli.CliDriver.run(CliDriver.java:759)
    at org.apache.hadoop.hive.cli.CliDriver.main(CliDriver.java:683)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at org.apache.hadoop.util.RunJar.run(RunJar.java:323)
    at org.apache.hadoop.util.RunJar.main(RunJar.java:236)
Caused by: AlreadyExistsException(message:Database bclcredits already exists)
    at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.create_database(HiveMetaStore.java:1311)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at org.apache.hadoop.hive.metastore.RetryingHMSHandler.invokeInternal(RetryingHMSHandler.java:147)
    at org.apache.hadoop.hive.metastore.RetryingHMSHandler.invoke(RetryingHMSHandler.java:108)
    at com.sun.proxy.$Proxy38.create_database(Unknown Source)
    at org.apache.hadoop.hive.metastore.HiveMetaStoreClient.createDatabase(HiveMetaStoreClient.java:809)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.invoke(RetryingMetaStoreClient.java:212)
    at com.sun.proxy.$Proxy39.createDatabase(Unknown Source)
    at org.apache.hadoop.hive.ql.metadata.Hive.createDatabase(Hive.java:477)
    at org.apache.hadoop.hive.ql.exec.DDLTask.createDatabase(DDLTask.java:4828)
    ... 22 more
2025-06-18 16:01:34,768 INFO reexec.ReOptimizePlugin: ReOptimization: retryPossible: false
FAILED: Execution Error, return code 1 from org.apache.hadoop.hive.ql.exec.DDLTask. Database bclcredits already exists
2025-06-18 16:01:34,769 ERROR ql.Driver: FAILED: Execution Error, return code 1 from org.apache.hadoop.hive.ql.exec.DDLTask. Database bclcredits already exists
2025-06-18 16:01:34,769 INFO ql.Driver: Completed executing command(queryId=aaa_20250618160134_462642e2-55ea-4fa2-bd3d-7f4f5dbee217); Time taken: 0.046 seconds
2025-06-18 16:01:34,769 INFO ql.Driver: Concurrency mode is disabled, not creating a lock manager
2025-06-18 16:01:34,772 INFO conf.HiveConf: Using the default value passed in for log id: 949847db-5ab3-4891-b83b-d3071c4b727b
2025-06-18 16:01:34,772 INFO session.SessionState: Resetting thread name to main
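The failure is exactly what the message says: a previous run already created bclcredits, and a plain CREATE DATABASE is not idempotent. A minimal sketch of the usual guard, assuming the same Hive CLI session:

CREATE DATABASE IF NOT EXISTS bclcredits;
-- verify the database is visible either way
SHOW DATABASES LIKE 'bclcredits';

With IF NOT EXISTS the DDL task becomes a no-op when the database is already present, so the statement can be re-run safely from scripts.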

hive> create database bclcredits; 2025-06-23 10:54:32,928 INFO conf.HiveConf: Using the default value passed in for log id: a0cb540e-63b3-4c46-b13f-f2a5b6ab7ce2 2025-06-23 10:54:32,929 INFO session.SessionState: Updating thread name to a0cb540e-63b3-4c46-b13f-f2a5b6ab7ce2 main 2025-06-23 10:54:32,930 INFO ql.Driver: Compiling command(queryId=aaa_20250623105432_133779d8-5bc0-4f88-b570-00c704c1efd7): create database bclcredits 2025-06-23 10:54:32,970 INFO ql.Driver: Concurrency mode is disabled, not creating a lock manager 2025-06-23 10:54:32,983 INFO ql.Driver: Semantic Analysis Completed (retrial = false) 2025-06-23 10:54:32,986 INFO ql.Driver: Returning Hive schema: Schema(fieldSchemas:null, properties:null) 2025-06-23 10:54:32,991 INFO ql.Driver: Completed compiling command(queryId=aaa_20250623105432_133779d8-5bc0-4f88-b570-00c704c1efd7); Time taken: 0.061 seconds 2025-06-23 10:54:32,991 INFO reexec.ReExecDriver: Execution #1 of query 2025-06-23 10:54:32,992 INFO ql.Driver: Concurrency mode is disabled, not creating a lock manager 2025-06-23 10:54:32,992 INFO ql.Driver: Executing command(queryId=aaa_20250623105432_133779d8-5bc0-4f88-b570-00c704c1efd7): create database bclcredits 2025-06-23 10:54:33,003 INFO ql.Driver: Starting task [Stage-0:DDL] in serial mode 2025-06-23 10:54:33,023 INFO sqlstd.SQLStdHiveAccessController: Created SQLStdHiveAccessController for session context : HiveAuthzSessionContext [sessionString=a0cb540e-63b3-4c46-b13f-f2a5b6ab7ce2, clientType=HIVECLI] 2025-06-23 10:54:33,025 WARN session.SessionState: METASTORE_FILTER_HOOK will be ignored, since hive.security.authorization.manager is set to instance of HiveAuthorizerFactory. 2025-06-23 10:54:33,026 INFO metastore.HiveMetaStoreClient: Mestastore configuration metastore.filter.hook changed from org.apache.hadoop.hive.metastore.DefaultMetaStoreFilterHookImpl to org.apache.hadoop.hive.ql.security.authorization.plugin.AuthorizationMetaStoreFilterHook 2025-06-23 10:54:33,027 INFO metastore.HiveMetaStore: 0: Cleaning up thread local RawStore... 2025-06-23 10:54:33,027 INFO HiveMetaStore.audit: ugi=aaa ip=unknown-ip-addr cmd=Cleaning up thread local RawStore... 2025-06-23 10:54:33,027 INFO metastore.HiveMetaStore: 0: Done cleaning up thread local RawStore 2025-06-23 10:54:33,027 INFO HiveMetaStore.audit: ugi=aaa ip=unknown-ip-addr cmd=Done cleaning up thread local RawStore 2025-06-23 10:54:33,029 INFO metastore.HiveMetaStore: 0: Opening raw store with implementation class:org.apache.hadoop.hive.metastore.ObjectStore 2025-06-23 10:54:33,030 WARN metastore.ObjectStore: datanucleus.autoStartMechanismMode is set to unsupported value null . Setting it to value: ignored 2025-06-23 10:54:33,030 INFO metastore.ObjectStore: ObjectStore, initialize called 2025-06-23 10:54:33,033 INFO metastore.MetaStoreDirectSql: Using direct SQL, underlying DB is MYSQL 2025-06-23 10:54:33,033 INFO metastore.ObjectStore: Initialized ObjectStore 2025-06-23 10:54:33,033 INFO metastore.RetryingMetaStoreClient: RetryingMetaStoreClient proxy=class org.apache.hadoop.hive.ql.metadata.SessionHiveMetaStoreClient ugi=aaa (auth:SIMPLE) retries=1 delay=1 lifetime=0 2025-06-23 10:54:33,037 WARN metastore.ObjectStore: datanucleus.autoStartMechanismMode is set to unsupported value null . 
Setting it to value: ignored
2025-06-23 10:54:33,037 INFO metastore.ObjectStore: ObjectStore, initialize called
2025-06-23 10:54:33,039 INFO metastore.MetaStoreDirectSql: Using direct SQL, underlying DB is MYSQL
2025-06-23 10:54:33,039 INFO metastore.ObjectStore: Initialized ObjectStore
2025-06-23 10:54:33,040 INFO metastore.RetryingMetaStoreClient: RetryingMetaStoreClient proxy=class org.apache.hadoop.hive.ql.metadata.SessionHiveMetaStoreClient ugi=aaa (auth:SIMPLE) retries=1 delay=1 lifetime=0
2025-06-23 10:54:33,040 INFO metastore.HiveMetaStore: 0: create_database: Database(name:bclcredits, description:null, locationUri:hdfs://localhost:9000/user/hive/warehouse/bclcredits.db, parameters:null, ownerName:aaa, ownerType:USER, catalogName:hive)
2025-06-23 10:54:33,040 INFO HiveMetaStore.audit: ugi=aaa ip=unknown-ip-addr cmd=create_database: Database(name:bclcredits, description:null, locationUri:hdfs://localhost:9000/user/hive/warehouse/bclcredits.db, parameters:null, ownerName:aaa, ownerType:USER, catalogName:hive)
2025-06-23 10:54:33,045 ERROR metastore.RetryingHMSHandler: AlreadyExistsException(message:Database bclcredits already exists)
    at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.create_database(HiveMetaStore.java:1311)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at org.apache.hadoop.hive.metastore.RetryingHMSHandler.invokeInternal(RetryingHMSHandler.java:147)
    at org.apache.hadoop.hive.metastore.RetryingHMSHandler.invoke(RetryingHMSHandler.java:108)
    at com.sun.proxy.$Proxy38.create_database(Unknown Source)
    at org.apache.hadoop.hive.metastore.HiveMetaStoreClient.createDatabase(HiveMetaStoreClient.java:809)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.invoke(RetryingMetaStoreClient.java:212)
    at com.sun.proxy.$Proxy39.createDatabase(Unknown Source)
    at org.apache.hadoop.hive.ql.metadata.Hive.createDatabase(Hive.java:477)
    at org.apache.hadoop.hive.ql.exec.DDLTask.createDatabase(DDLTask.java:4828)
    at org.apache.hadoop.hive.ql.exec.DDLTask.execute(DDLTask.java:393)
    at org.apache.hadoop.hive.ql.exec.Task.executeTask(Task.java:205)
    at org.apache.hadoop.hive.ql.exec.TaskRunner.runSequential(TaskRunner.java:97)
    at org.apache.hadoop.hive.ql.Driver.launchTask(Driver.java:2664)
    at org.apache.hadoop.hive.ql.Driver.execute(Driver.java:2335)
    at org.apache.hadoop.hive.ql.Driver.runInternal(Driver.java:2011)
    at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1709)
    at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1703)
    at org.apache.hadoop.hive.ql.reexec.ReExecDriver.run(ReExecDriver.java:157)
    at org.apache.hadoop.hive.ql.reexec.ReExecDriver.run(ReExecDriver.java:218)
    at org.apache.hadoop.hive.cli.CliDriver.processLocalCmd(CliDriver.java:239)
    at org.apache.hadoop.hive.cli.CliDriver.processCmd(CliDriver.java:188)
    at org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:402)
    at org.apache.hadoop.hive.cli.CliDriver.executeDriver(CliDriver.java:821)
    at org.apache.hadoop.hive.cli.CliDriver.run(CliDriver.java:759)
    at org.apache.hadoop.hive.cli.CliDriver.main(CliDriver.java:683)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at org.apache.hadoop.util.RunJar.run(RunJar.java:323)
    at org.apache.hadoop.util.RunJar.main(RunJar.java:236)
2025-06-23 10:54:33,047 ERROR exec.DDLTask: Failed
org.apache.hadoop.hive.ql.metadata.HiveException: Database bclcredits already exists
    at org.apache.hadoop.hive.ql.exec.DDLTask.createDatabase(DDLTask.java:4832)
    at org.apache.hadoop.hive.ql.exec.DDLTask.execute(DDLTask.java:393)
    at org.apache.hadoop.hive.ql.exec.Task.executeTask(Task.java:205)
    at org.apache.hadoop.hive.ql.exec.TaskRunner.runSequential(TaskRunner.java:97)
    at org.apache.hadoop.hive.ql.Driver.launchTask(Driver.java:2664)
    at org.apache.hadoop.hive.ql.Driver.execute(Driver.java:2335)
    at org.apache.hadoop.hive.ql.Driver.runInternal(Driver.java:2011)
    at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1709)
    at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1703)
    at org.apache.hadoop.hive.ql.reexec.ReExecDriver.run(ReExecDriver.java:157)
    at org.apache.hadoop.hive.ql.reexec.ReExecDriver.run(ReExecDriver.java:218)
    at org.apache.hadoop.hive.cli.CliDriver.processLocalCmd(CliDriver.java:239)
    at org.apache.hadoop.hive.cli.CliDriver.processCmd(CliDriver.java:188)
    at org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:402)
    at org.apache.hadoop.hive.cli.CliDriver.executeDriver(CliDriver.java:821)
    at org.apache.hadoop.hive.cli.CliDriver.run(CliDriver.java:759)
    at org.apache.hadoop.hive.cli.CliDriver.main(CliDriver.java:683)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at org.apache.hadoop.util.RunJar.run(RunJar.java:323)
    at org.apache.hadoop.util.RunJar.main(RunJar.java:236)
Caused by: AlreadyExistsException(message:Database bclcredits already exists)
    at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.create_database(HiveMetaStore.java:1311)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at org.apache.hadoop.hive.metastore.RetryingHMSHandler.invokeInternal(RetryingHMSHandler.java:147)
    at org.apache.hadoop.hive.metastore.RetryingHMSHandler.invoke(RetryingHMSHandler.java:108)
    at com.sun.proxy.$Proxy38.create_database(Unknown Source)
    at org.apache.hadoop.hive.metastore.HiveMetaStoreClient.createDatabase(HiveMetaStoreClient.java:809)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.invoke(RetryingMetaStoreClient.java:212)
    at com.sun.proxy.$Proxy39.createDatabase(Unknown Source)
    at org.apache.hadoop.hive.ql.metadata.Hive.createDatabase(Hive.java:477)
    at org.apache.hadoop.hive.ql.exec.DDLTask.createDatabase(DDLTask.java:4828)
    ... 22 more
2025-06-23 10:54:33,048 INFO reexec.ReOptimizePlugin: ReOptimization: retryPossible: false
FAILED: Execution Error, return code 1 from org.apache.hadoop.hive.ql.exec.DDLTask. Database bclcredits already exists
2025-06-23 10:54:33,049 ERROR ql.Driver: FAILED: Execution Error, return code 1 from org.apache.hadoop.hive.ql.exec.DDLTask. Database bclcredits already exists
2025-06-23 10:54:33,049 INFO ql.Driver: Completed executing command(queryId=aaa_20250623105432_133779d8-5bc0-4f88-b570-00c704c1efd7); Time taken: 0.057 seconds
2025-06-23 10:54:33,049 INFO ql.Driver: Concurrency mode is disabled, not creating a lock manager
2025-06-23 10:54:33,050 INFO conf.HiveConf: Using the default value passed in for log id: a0cb540e-63b3-4c46-b13f-f2a5b6ab7ce2
2025-06-23 10:54:33,050 INFO session.SessionState: Resetting thread name to main
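Read as a whole, this trace is one failure reported at three layers: the metastore throws AlreadyExistsException, DDLTask wraps it in a HiveException, and the Driver exits with return code 1. The statement submitted was evidently a bare CREATE DATABASE bclcredits with no existence guard. Hive DDL does accept an IF NOT EXISTS clause, so a minimal sketch of the usual fix is the statement below (the LOCATION clause is an assumption lifted from the locationUri in the log and can be omitted to use the warehouse default):

-- HiveQL: idempotent database creation; a no-op if bclcredits already exists
CREATE DATABASE IF NOT EXISTS bclcredits
LOCATION 'hdfs://localhost:9000/user/hive/warehouse/bclcredits.db';

With the guard in place the script can be re-run safely: an existing database no longer aborts the whole job with a DDLTask execution error.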

time=2025-03-15T06:11:09.007Z level=INFO source=runner.go:931 msg="starting go runner"
2025-03-15 14:11:09 ggml_cuda_init: GGML_CUDA_FORCE_MMQ: no
2025-03-15 14:11:09 ggml_cuda_init: GGML_CUDA_FORCE_CUBLAS: no
2025-03-15 14:11:09 ggml_cuda_init: found 1 CUDA devices:
2025-03-15 14:11:09   Device 0: NVIDIA GeForce RTX 4080 Laptop GPU, compute capability 8.9, VMM: yes
2025-03-15 14:11:09 load_backend: loaded CUDA backend from /usr/lib/ollama/cuda_v12/libggml-cuda.so
2025-03-15 14:11:09 load_backend: loaded CPU backend from /usr/lib/ollama/libggml-cpu-alderlake.so
2025-03-15 14:11:09 time=2025-03-15T06:11:09.093Z level=INFO source=ggml.go:109 msg=system CPU.0.SSE3=1 CPU.0.SSSE3=1 CPU.0.AVX=1 CPU.0.AVX_VNNI=1 CPU.0.AVX2=1 CPU.0.F16C=1 CPU.0.FMA=1 CPU.0.LLAMAFILE=1 CPU.1.LLAMAFILE=1 CUDA.0.ARCHS=500,600,610,700,750,800,860,870,890,900,1200 CUDA.0.USE_GRAPHS=1 CUDA.0.PEER_MAX_BATCH_SIZE=128 compiler=cgo(gcc)
2025-03-15 14:11:09 time=2025-03-15T06:11:09.106Z level=INFO source=runner.go:991 msg="Server listening on 127.0.0.1:41495"
2025-03-15 14:11:09 llama_model_load_from_file_impl: using device CUDA0 (NVIDIA GeForce RTX 4080 Laptop GPU) - 11047 MiB free
2025-03-15 14:11:09 time=2025-03-15T06:11:09.248Z level=INFO source=server.go:619 msg="waiting for server to become available" status="llm server loading model"
2025-03-15 14:11:11 llama_model_loader: loaded meta data with 26 key-value pairs and 579 tensors from /root/.ollama/models/blobs/sha256-6e9f90f02bb3b39b59e81916e8cfce9deb45aeaeb9a54a5be4414486b907dc1e (version GGUF V3 (latest))

Error starting ApplicationContext. To display the condition evaluation report re-run your application with 'debug' enabled.
2025-07-22T15:20:38.331+08:00 ERROR 12636 --- [demo1] [           main] o.s.boot.SpringApplication               : Application run failed

org.springframework.beans.factory.BeanDefinitionStoreException: Invalid bean definition with name 'productMapper' defined in file [D:\weimeng\新建文件夹\demo1\target\classes\com\example\demo\DAO\ProductMapper.class]: Invalid value type for attribute 'factoryBeanObjectType': java.lang.String
    at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.getTypeForFactoryBean(AbstractAutowireCapableBeanFactory.java:864) ~[spring-beans-6.2.8.jar:6.2.8]
    at org.springframework.beans.factory.support.AbstractBeanFactory.getType(AbstractBeanFactory.java:745) ~[spring-beans-6.2.8.jar:6.2.8]
    at org.springframework.beans.factory.support.DefaultListableBeanFactory.findAnnotationOnBean(DefaultListableBeanFactory.java:817) ~[spring-beans-6.2.8.jar:6.2.8]
    at org.springframework.boot.sql.init.dependency.AnnotationDependsOnDatabaseInitializationDetector.detect(AnnotationDependsOnDatabaseInitializationDetector.java:36) ~[spring-boot-3.5.3.jar:3.5.3]
    at org.springframework.boot.sql.init.dependency.DatabaseInitializationDependencyConfigurer$DependsOnDatabaseInitializationPostProcessor.detectDependsOnInitializationBeanNames(DatabaseInitializationDependencyConfigurer.java:152) ~[spring-boot-3.5.3.jar:3.5.3]
    at org.springframework.boot.sql.init.dependency.DatabaseInitializationDependencyConfigurer$DependsOnDatabaseInitializationPostProcessor.postProcessBeanFactory(DatabaseInitializationDependencyConfigurer.java:115) ~[spring-boot-3.5.3.jar:3.5.3]
    at org.springframework.context.support.PostProcessorRegistrationDelegate.invokeBeanFactoryPostProcessors(PostProcessorRegistrationDelegate.java:363) ~[spring-context-6.2.8.jar:6.2.8]
    at org.springframework.context.support.PostProcessorRegistrationDelegate.invokeBeanFactoryPostProcessors(PostProcessorRegistrationDelegate.java:197) ~[spring-context-6.2.8.jar:6.2.8]
    at org.springframework.context.support.AbstractApplicationContext.invokeBeanFactoryPostProcessors(AbstractApplicationContext.java:791) ~[spring-context-6.2.8.jar:6.2.8]
    at org.springframework.context.support.AbstractApplicationContext.refresh(AbstractApplicationContext.java:609) ~[spring-context-6.2.8.jar:6.2.8]
    at org.springframework.boot.web.servlet.context.ServletWebServerApplicationContext.refresh(ServletWebServerApplicationContext.java:146) ~[spring-boot-3.5.3.jar:3.5.3]
    at org.springframework.boot.SpringApplication.refresh(SpringApplication.java:752) ~[spring-boot-3.5.3.jar:3.5.3]
    at org.springframework.boot.SpringApplication.refreshContext(SpringApplication.java:439) ~[spring-boot-3.5.3.jar:3.5.3]
    at org.springframework.boot.SpringApplication.run(SpringApplication.java:318) ~[spring-boot-3.5.3.jar:3.5.3]
    at org.springframework.boot.SpringApplication.run(SpringApplication.java:1361) ~[spring-boot-3.5.3.jar:3.5.3]
    at org.springframework.boot.SpringApplication.run(SpringApplication.java:1350) ~[spring-boot-3.5.3.jar:3.5.3]
    at com.example.demo.Demo1Application.main(Demo1Application.java:11) ~[classes/:na]
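This 'factoryBeanObjectType' failure is a well-known version-mismatch symptom rather than a bug in the mapper itself: mybatis-spring releases older than 3.0.3 record the factoryBeanObjectType attribute of each MapperFactoryBean definition as a String, which Spring Framework 6.1+ (bundled with Spring Boot 3.2 and later, including the 3.5.3 visible in the trace) rejects. A minimal sketch of the usual fix, assuming the project builds with Maven and pulls in mybatis-spring-boot-starter (the exact version below is an assumption; any release from 3.0.3 onward should carry the fix, so prefer the latest):

<!-- pom.xml: align the MyBatis starter with Spring Boot 3.x -->
<!-- assumption: 3.0.4 shown as an example of a 3.0.3+ release -->
<dependency>
    <groupId>org.mybatis.spring.boot</groupId>
    <artifactId>mybatis-spring-boot-starter</artifactId>
    <version>3.0.4</version>
</dependency>

After the upgrade, the 'productMapper' bean definition carries a Class-valued factoryBeanObjectType and the application context should refresh normally.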
