Summary of common HBase errors

1. Error message

2014-02-24 12:15:48,507 WARN  [Thread-2] util.DynamicClassLoader (DynamicClassLoader.java:<init>(106)) - Failed to identify the fs of dir hdfs://fulonghadoop/hbase/lib, ignored
java.io.IOException: No FileSystem for scheme: hdfs

Solution

Add the following dependency to the project's build configuration (e.g. pom.xml):
<dependency>
    <groupId>org.apache.hadoop</groupId>
    <artifactId>hadoop-hdfs</artifactId>
    <version>2.7.2</version>
</dependency>

Then add the following to hdfs-site.xml or core-site.xml:
<property>
    <name>fs.hdfs.impl</name>
    <value>org.apache.hadoop.hdfs.DistributedFileSystem</value>
</property>
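Hadoop resolves a FileSystem implementation from the URI scheme via the configuration key fs.&lt;scheme&gt;.impl; when that lookup comes back empty (common with shaded "fat" jars that drop the hadoop-hdfs service entries), you get exactly this "No FileSystem for scheme: hdfs" error. A minimal Python sketch of the lookup mechanism, purely illustrative (not Hadoop code):

```python
# Illustrative sketch of Hadoop's scheme -> FileSystem-class resolution:
# the implementation class name is read from the key "fs.<scheme>.impl".
from urllib.parse import urlparse

def resolve_fs_class(uri, conf):
    scheme = urlparse(uri).scheme          # e.g. "hdfs"
    key = f"fs.{scheme}.impl"              # e.g. "fs.hdfs.impl"
    impl = conf.get(key)
    if impl is None:
        # This missing mapping is the situation behind the error above
        raise IOError(f"No FileSystem for scheme: {scheme}")
    return impl

conf = {"fs.hdfs.impl": "org.apache.hadoop.hdfs.DistributedFileSystem"}
print(resolve_fs_class("hdfs://fulonghadoop/hbase/lib", conf))
```

Setting fs.hdfs.impl explicitly, as above, bypasses the service-loader lookup that shaded jars tend to break.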

2. Error message

ERROR [ClientFinalizer-shutdown-hook] hdfs.DFSClient: Failed to close inode 148879
org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.hdfs.server.namenode.LeaseExpiredException): No lease on /hbase/oldWALs/hadoop-4%2C16020%2C1544498293590.default.1545167908560 (inode 148879): File is not open for writing. Holder DFSClient_NONMAPREDUCE_328068851_1 does not have any open files

Solution

(1) Adjust the HDFS configuration parameters

# Maximum number of transfer threads per DataNode. This limit acts like the
# Linux open-file-handle limit: once the number of connections on a DataNode
# exceeds it, the DataNode refuses new connections. The default of 4096 is
# too small and can bring HBase down; it is usually raised to around 40000.
# Adjust to your actual workload.
dfs.datanode.max.transfer.threads

# dfs.datanode.max.xcievers is the older (deprecated) name for the same
# setting; legacy configurations still use it.
dfs.datanode.max.xcievers

(2) Change dfs.datanode.max.xcievers to 8192 (previously 4096); adjust to your actual situation.

(3) Raise the maximum number of open files

ulimit -a
ulimit -n 65535
vim /etc/security/limits.conf
# Append:
* soft nofile 65535
* hard nofile 65535

Then modify the HDFS configuration (hdfs-site.xml):
<property>
    <name>dfs.datanode.max.transfer.threads</name>
    <value>40000</value>
</property>
<property>
    <name>dfs.datanode.max.xcievers</name>
    <value>65535</value>
</property>

Note: dfs.datanode.max.xcievers caps the number of files a DataNode can have open at any one time. It must not exceed the system open-file limit, i.e. the nofile value in /etc/security/limits.conf.
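The relationship between the two limits can be stated as a simple invariant: the DataNode's xcievers value must be less than or equal to the OS nofile limit. A small illustrative check, using the values from this section:

```python
# Illustrative sanity check: the DataNode transfer/xcievers limit must not
# exceed the OS open-file limit (nofile in /etc/security/limits.conf),
# otherwise the DataNode can be told to open more files than the OS allows.
def check_limits(max_xcievers, nofile):
    if max_xcievers > nofile:
        raise ValueError(
            f"dfs.datanode.max.xcievers ({max_xcievers}) exceeds nofile ({nofile})")
    return True

# Values used in this section: xcievers 65535, nofile 65535 -> consistent.
print(check_limits(65535, 65535))
```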

3. Error message

INFO  [regionserver/hadoop-4/192.168.168.86:16020-SendThread(hadoop-6:2181)] zookeeper.ClientCnxn: Client session timed out, have not heard from server in 26667ms for sessionid 0x3682c0f03c60033, closing socket connection and attempting reconnect
2019-01-09 21:54:13,016 INFO  [main-SendThread(hadoop-6:2181)] zookeeper.ClientCnxn: Client session timed out, have not heard from server in 26669ms for sessionid 0x3682c0f03c60032, closing socket connection and attempting reconnect
2019-01-09 21:54:13,018 INFO  [LeaseRenewer:work@cluster1] retry.RetryInvocationHandler: Exception while invoking renewLease of class ClientNamenodeProtocolTranslatorPB over hadoop-1/192.168.168.83:9000. Trying to fail over immediately.
org.apache.hadoop.net.ConnectTimeoutException: Call From hadoop-4/192.168.168.86 to hadoop-1:9000 failed on socket timeout exception: org.apache.hadoop.net.ConnectTimeoutException: 20000 millis timeout while waiting for channel to be ready for connect. ch : java.nio.channels.SocketChannel[connection-pending remote=hadoop-1/192.168.168.83:9000]; For more details see:  http://wiki.apache.org/hadoop/SocketTimeout

Solution (the timeouts are suspected to be caused by HBase compacting HFiles)

Add to hbase-site.xml:
<property>
    <name>hbase.rpc.timeout</name>
    <value>3600000</value>
</property>


Add to hdfs-site.xml:
<property>
    <name>dfs.datanode.socket.write.timeout</name>
    <value>3600000</value>
</property>

<property>
    <name>dfs.socket.timeout</name>
    <value>3600000</value>
</property>
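All three timeout values above are in milliseconds (3600000 ms = 1 hour). A small illustrative helper for rendering such Hadoop-style property blocks consistently, so the millisecond arithmetic is done in one place:

```python
# Illustrative helper: render a Hadoop/HBase-style <property> element.
# The duration is given in hours and converted to the milliseconds that
# these timeout settings expect.
def render_timeout_property(name, hours):
    millis = hours * 60 * 60 * 1000        # e.g. 1 hour -> 3600000 ms
    return (f"<property>\n"
            f"    <name>{name}</name>\n"
            f"    <value>{millis}</value>\n"
            f"</property>")

print(render_timeout_property("hbase.rpc.timeout", 1))
```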