After a recent upgrade of Logstash and ilk (ELK) to 5.x, I began to experience some issues with the S3 input plugin. Running Logstash in debug mode:

/usr/share/logstash/bin/logstash --path.settings /etc/logstash --log.level debug

Revealed this:

[2016-12-19T14:34:05,926][DEBUG][logstash.inputs.s3       ] S3 input processing {:bucket=>"bucket-of-logs", :key=>"2016/12/19/somehost-0001_2016-12-19T14:13:49,084847108+00:00.gz"}
[2016-12-19T14:34:05,926][DEBUG][logstash.inputs.s3       ] S3 input: Download remote file {:remote_key=>"2016/12/19/somehost-0001_2016-12-19T14:13:49,084847108+00:00.gz", :local_filename=>"/tmp/logstash/somehost-0001_2016-12-19T14:13:49,084847108+00:00.gz"}
[2016-12-19T14:34:05,937][DEBUG][logstash.inputs.s3       ] Processing file {:filename=>"/tmp/logstash/somehost-0001_2016-12-19T14:13:49,084847108+00:00.gz"}
[2016-12-19T14:34:05,937][DEBUG][logstash.util.decorators ] inputs/LogStash::Inputs::S3: adding tag {"tag"=>"s3"}
[2016-12-19T14:34:05,938][DEBUG][logstash.util.decorators ] inputs/LogStash::Inputs::S3: adding tag {"tag"=>"s3"}
[2016-12-19T14:34:05,938][DEBUG][logstash.util.decorators ] inputs/LogStash::Inputs::S3: adding tag {"tag"=>"s3"}
[2016-12-19T14:34:05,961][ERROR][logstash.pipeline        ] A plugin had an unrecoverable error. Will restart this plugin.
  Plugin: "bucket-of-logs", backup_to_bucket=>"bucket-of-logs-archive", delete=>true, type=>"syslog", tags=>["s3"], codec=>"json_96f4bf7d-b85e-419c-84fe-7b446f51ea22", enable_metric=>true, charset=>"UTF-8">, sincedb_path=>"/dev/null", id=>"1096dc26c3361b2210ba0cb77f3fe840385cfdb4-1", enable_metric=>true, region=>"us-east-1", interval=>60, temporary_directory=>"/tmp/logstash">
  Error: uninitialized constant Aws::Client::Errors
  Exception: NameError
  Stack: org/jruby/RubyModule.java:2719:in `const_missing'
org/jruby/RubyModule.java:2638:in `const_get'
/usr/share/logstash/vendor/bundle/jruby/1.9/gems/aws-sdk-core-2.3.22/lib/aws-sdk-core/xml/error_handler.rb:25:in `error'
/usr/share/logstash/vendor/bundle/jruby/1.9/gems/aws-sdk-core-2.3.22/lib/aws-sdk-core/xml/error_handler.rb:9:in `call'
...
/usr/share/logstash/vendor/bundle/jruby/1.9/gems/logstash-input-s3-3.1.1/lib/logstash/inputs/s3.rb:89:in `run'
/usr/share/logstash/logstash-core/lib/logstash/pipeline.rb:331:in `inputworker'
/usr/share/logstash/logstash-core/lib/logstash/pipeline.rb:325:in `start_input'

Running with the following configuration:

input {
  s3 {
    bucket => "bucket-of-logs"
    backup_to_bucket => "bucket-of-logs-archive"
    delete => true
    type => "syslog"
    tags => [ "s3" ]
    codec => "json"
    sincedb_path => "/dev/null"
  }
}

I had temporarily solved this by commenting out backup_to_bucket, which eliminated one of the desired outcomes of a “log-to-s3” setup: immediate log archival. But that at least pointed to the issue at hand: writing to S3.
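For reference, the stop-gap was just the input block above with the backup line commented out:

input {
  s3 {
    bucket => "bucket-of-logs"
    # backup_to_bucket => "bucket-of-logs-archive"
    delete => true
    type => "syslog"
    tags => [ "s3" ]
    codec => "json"
    sincedb_path => "/dev/null"
  }
}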

After falling into various rabbit holes, chasing sources for assorted errors and poring over S3 policies, the ultimate cause turned out to be simply atrocious file naming:

2016/12/19/somehost-0001_2016-12-19T14:13:49,084847108+00:00.gz

Without the commas, colons, and plus signs, things work just fine:

2016/12/19/somehost-0001_20161219T141349.084847108Z.gz
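Where the timestamps come from will vary, but as a hypothetical sketch: if they are minted with GNU date's ISO-8601 output (which produces exactly that comma/colon/plus flavour), an explicit format string avoids the problem characters:

# Hypothetical sketch, assuming GNU date is what stamps the uploads:
date -Ins                        # 2016-12-19T14:13:49,084847108+00:00  <- comma, colons, plus
date -u +%Y%m%dT%H%M%S.%NZ       # 20161219T141349.084847108Z           <- S3- and Logstash-friendly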

I am fairly certain this was not an issue pre-5.x. Beware S3 filenames and cryptic errors.
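As a postscript: any badly named objects already sitting in the bucket can be copied to safe keys before re-enabling backup_to_bucket. A one-off rename with the AWS CLI, using the two names above purely as an illustration (and quoting the keys defensively), looks like this:

aws s3 mv \
  's3://bucket-of-logs/2016/12/19/somehost-0001_2016-12-19T14:13:49,084847108+00:00.gz' \
  's3://bucket-of-logs/2016/12/19/somehost-0001_20161219T141349.084847108Z.gz'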