When I started Logstash normally to write data into HDFS, it reported the following error:
[WARN ][logstash.outputs.webhdfs ] Failed to flush outgoing items {:outgoing_count=>1, :exception=>"LogStash::Error",
:backtrace=>["org/logstash/ext/JrubyEventExtLibrary.java:205:in `sprintf'",
"/export/servers/logstash-5.6.9/vendor/bundle/jruby/1.9/gems/logstash-output-webhdfs-3.0.6/lib/logstash/outputs/webhdfs.rb:194:in `flush'",
"org/jruby/RubyArray.java:2409:in `collect'",
"/export/servers/logstash-5.6.9/vendor/bundle/jruby/1.9/gems/logstash-output-webhdfs-3.0.6/lib/logstash/outputs/webhdfs.rb:189:in `flush'",
"/export/servers/logstash-5.6.9/vendor/bundle/jruby/1.9/gems/stud-0.0.23/lib/stud/buffer.rb:219:in `buffer_flush'",
"org/jruby/RubyHash.java:1342:in `each'",
"/export/servers/logstash-5.6.9/vendor/bundle/jruby/1.9/gems/stud-0.0.23/lib/stud/buffer.rb:216:in `buffer_flush'",
"/export/servers/logstash-5.6.9/vendor/bundle/jruby/1.9/gems/stud-0.0.23/lib/stud/buffer.rb:159:in `buffer_receive'",
"/export/servers/logstash-5.6.9/vendor/bundle/jruby/1.9/gems/logstash-output-webhdfs-3.0.6/lib/logstash/outputs/webhdfs.rb:182:in `receive'",
"/export/servers/logstash-5.6.9/logstash-core/lib/logstash/outputs/base.rb:92:in `multi_receive'",
"org/jruby/RubyArray.java:1613:in `each'",
"/export/servers/logstash-5.6.9/logstash-core/lib/logstash/outputs/base.rb:92:in `multi_receive'",
"/export/servers/logstash-5.6.9/logstash-core/lib/logstash/output_delegator_strategies/legacy.rb:22:in `multi_receive'",
"/export/servers/logstash-5.6.9/logstash-core/lib/logstash/output_delegator.rb:49:in `multi_receive'",
"/export/servers/logstash-5.6.9/logstash-core/lib/logstash/pipeline.rb:434:in `output_batch'",
"org/jruby/RubyHash.java:1342:in `each'",
"/export/servers/logstash-5.6.9/logstash-core/lib/logstash/pipeline.rb:433:in `output_batch'",
"/export/servers/logstash-5.6.9/logstash-core/lib/logstash/pipeline.rb:381:in `worker_loop'",
"/export/servers/logstash-5.6.9/logstash-core/lib/logstash/pipeline.rb:342:in `start_workers'"]}
I am posting the full error here so others can find it easily. After some investigation, it turned out to be a missing write-permission problem on HDFS.
Here is how to fix it:
1. First, check the permissions, owner and group of the files on HDFS:
hdfs dfs -ls /
2. The directory belongs to the hadoop group, so switch to the hadoop user and open up the permissions (a less invasive alternative, using the output plugin's user setting, is sketched after this list):
sudo su hadoop
hdfs dfs -chmod -R 777 /hive
Change the directory's owner and group:
hdfs dfs -chown root:root /hive
hdfs dfs -chgrp -R supergroup /
3. Check again; the changes have been applied successfully.
hdfs dfs -ls /
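
Besides relaxing the HDFS permissions, the webhdfs output plugin also has a user setting that controls which user the writes are performed as. A minimal sketch, assuming a user named hadoop that already has write access; the host and path values are placeholders for your environment:

output {
  webhdfs {
    host => "namenode.example.com"   # placeholder NameNode host
    port => 50070                    # WebHDFS HTTP port
    user => "hadoop"                 # write as a user that can already write to /hive
    path => "/hive/logstash-%{+YYYY-MM-dd}.log"
  }
}

With a suitable user configured, the blanket chmod -R 777 step can often be avoided or at least narrowed.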
Follow-up: while continuing to work on this problem, I found it was not purely a permission issue. In my configuration file, the HDFS output path was built entirely from an event variable. That is actually incorrect: the target directory should not be named solely by a variable. Keep this in mind when you write your own configuration.
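
The path option of the webhdfs output is run through Logstash's sprintf formatting, so it accepts %{fieldname} references and %{+YYYY-MM-dd}-style date patterns. If an event lacks what those patterns need (for example a valid @timestamp for %{+...} patterns), the sprintf call in flush can fail with the kind of error shown in the backtrace above. A safer pattern is to anchor the path with a fixed prefix and use date patterns for the variable parts. A sketch, with /hive/logs as an illustrative base directory:

output {
  webhdfs {
    host => "namenode.example.com"        # placeholder NameNode host
    port => 50070
    user => "hadoop"
    # fixed prefix plus date patterns instead of a bare %{some_field} directory
    path => "/hive/logs/dt=%{+YYYY-MM-dd}/logstash-%{+HH}.log"
  }
}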