Description
I am running Logstash 6.2.0 in Docker with the following config:
input {
  beats {
    port => 5044
  }
}
filter {
}
output {
  redis {
    host => "redis"
    key => "logstash"
    data_type => "list"
    batch => true
    batch_events => 125
  }
}
and a 6 GB heap, configured via:
LS_JAVA_OPTS: -Xmx6g
PIPELINE_WORKERS: 4
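For reference, the direct-memory limit reported in the error below (max: 6425018368, roughly 6 GB) appears to be derived from -Xmx6g, since -XX:MaxDirectMemorySize is not set explicitly. One possible way to bound Netty's direct memory independently of the heap (not something I have verified for this setup, and the value is only a placeholder) would be to pass the flag in LS_JAVA_OPTS:
LS_JAVA_OPTS: -Xmx6g -XX:MaxDirectMemorySize=2g   # hypothetical cap, value would need tuning
PIPELINE_WORKERS: 4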
When I overload Logstash with a large number of events using logstash-input-beats (5.0.6), I see the same pattern as in elastic/logstash#9195: used heap hits max heap and event processing stops. Beats stops processing the incoming events and crashes most of the time.
After finding #286, I updated to logstash-input-beats (5.0.10). Used heap now stays low, but I get a bunch of these:
[2018-03-01T16:28:02,678][WARN ][io.netty.channel.DefaultChannelPipeline] An exceptionCaught() event was fired, and it reached at the tail of the pipeline. It usually means the last handler in the pipeline did not handle the exception.
io.netty.util.internal.OutOfDirectMemoryError: failed to allocate 16777216 byte(s) of direct memory (used: 6419419652, max: 6425018368)
at io.netty.util.internal.PlatformDependent.incrementMemoryCounter(PlatformDependent.java:640) ~[netty-all-4.1.18.Final.jar:4.1.18.Final]
at io.netty.util.internal.PlatformDependent.allocateDirectNoCleaner(PlatformDependent.java:594) ~[netty-all-4.1.18.Final.jar:4.1.18.Final]
at io.netty.buffer.PoolArena$DirectArena.allocateDirect(PoolArena.java:764) ~[netty-all-4.1.18.Final.jar:4.1.18.Final]
at io.netty.buffer.PoolArena$DirectArena.newChunk(PoolArena.java:740) ~[netty-all-4.1.18.Final.jar:4.1.18.Final]
at io.netty.buffer.PoolArena.allocateNormal(PoolArena.java:244) ~[netty-all-4.1.18.Final.jar:4.1.18.Final]
at io.netty.buffer.PoolArena.allocate(PoolArena.java:214) ~[netty-all-4.1.18.Final.jar:4.1.18.Final]
at io.netty.buffer.PoolArena.allocate(PoolArena.java:146) ~[netty-all-4.1.18.Final.jar:4.1.18.Final]
at io.netty.buffer.PooledByteBufAllocator.newDirectBuffer(PooledByteBufAllocator.java:324) ~[netty-all-4.1.18.Final.jar:4.1.18.Final]
at io.netty.buffer.AbstractByteBufAllocator.directBuffer(AbstractByteBufAllocator.java:185) ~[netty-all-4.1.18.Final.jar:4.1.18.Final]
at io.netty.buffer.AbstractByteBufAllocator.directBuffer(AbstractByteBufAllocator.java:176) ~[netty-all-4.1.18.Final.jar:4.1.18.Final]
at io.netty.buffer.AbstractByteBufAllocator.ioBuffer(AbstractByteBufAllocator.java:137) ~[netty-all-4.1.18.Final.jar:4.1.18.Final]
at io.netty.channel.DefaultMaxMessagesRecvByteBufAllocator$MaxMessageHandle.allocate(DefaultMaxMessagesRecvByteBufAllocator.java:114) ~[netty-all-4.1.18.Final.jar:4.1.18.Final]
at io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:125) ~[netty-all-4.1.18.Final.jar:4.1.18.Final]
at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:645) ~[netty-all-4.1.18.Final.jar:4.1.18.Final]
at io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:580) ~[netty-all-4.1.18.Final.jar:4.1.18.Final]
at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:497) ~[netty-all-4.1.18.Final.jar:4.1.18.Final]
at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:459) [netty-all-4.1.18.Final.jar:4.1.18.Final]
at io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:858) [netty-all-4.1.18.Final.jar:4.1.18.Final]
at io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) [netty-all-4.1.18.Final.jar:4.1.18.Final]
at java.lang.Thread.run(Thread.java:748) [?:1.8.0_161]
[2018-03-01T16:28:02,678][INFO ][org.logstash.beats.BeatsHandler] [local: x.x.x.x:5044, remote: x.x.x.x:62324] Handling exception: failed to allocate 16777216 byte(s) of direct memory (used: 6419419652, max: 6425018368)
I would prefer Logstash to crash reliably (so it can be restarted automatically) instead of hanging around doing nothing.
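As a stopgap sketch (not part of this report, and assuming a docker-compose setup where nc is available in the image), a container health check on the Beats port could at least surface the stuck state; note that plain Docker only marks the container unhealthy and an external supervisor would still have to perform the restart:
healthcheck:
  test: ["CMD-SHELL", "nc -z localhost 5044 || exit 1"]   # fail if the beats port stops accepting connections
  interval: 30s
  timeout: 5s
  retries: 3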