Elasticsearch crashing, Dashboards missing


Elasticsearch crashing, Dashboards missing

xiro86

Hi,

I've set up an ELK stack to monitor our firewall logging. It works fine: we have two dashboards rotating on a screen and are very happy with it. Unfortunately, it has now crashed for the second time with the same error. That wouldn't be so dramatic, since everything works again after a restart of the services, but the critical part is that the saved dashboards disappear as well.

Since I can't figure out where the dashboards are saved, I can't even back them up.

Is it even possible to back up just the dashboards, and where are they stored?

Regarding the error:

[2015-05-02 02:00:09,800][INFO ][cluster.metadata         ] [abc] [logstash-2015.05.02] creating index, cause [auto(bulk api)], shards [5]/[1], mappings [_default_]
[2015-05-02 02:00:29,890][INFO ][cluster.metadata         ] [abc] [logstash-2015.05.02] update_mapping [syslog] (dynamic)
[2015-05-02 02:00:31,581][INFO ][cluster.metadata         ] [abc] [logstash-2015.05.02] update_mapping [syslog] (dynamic)
[2015-05-02 03:01:03,038][WARN ][index.engine.internal    ] [abc] [logstash-2015.05.01][4] failed engine [out of memory]
[2015-05-02 03:01:04,085][DEBUG][action.search.type       ] [abc] [2183065] Failed to execute fetch phase
org.elasticsearch.ElasticsearchException: Java heap space
    at org.elasticsearch.ExceptionsHelper.convertToRuntime(ExceptionsHelper.java:40)
    at org.elasticsearch.search.SearchService.executeFetchPhase(SearchService.java:467)
    at org.elasticsearch.search.action.SearchServiceTransportAction$17.call(SearchServiceTransportAction.java:410)
    at org.elasticsearch.search.action.SearchServiceTransportAction$17.call(SearchServiceTransportAction.java:407)
    at org.elasticsearch.search.action.SearchServiceTransportAction$23.run(SearchServiceTransportAction.java:517)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
    at java.lang.Thread.run(Thread.java:745)
Caused by: java.lang.OutOfMemoryError: Java heap space
    at org.apache.lucene.codecs.compressing.CompressionMode$4.decompress(CompressionMode.java:133)
    at org.apache.lucene.codecs.compressing.CompressingStoredFieldsReader.visitDocument(CompressingStoredFieldsReader.java:347)
    at org.apache.lucene.index.SegmentReader.document(SegmentReader.java:288)
    at org.apache.lucene.index.BaseCompositeReader.document(BaseCompositeReader.java:110)
    at org.apache.lucene.search.IndexSearcher.doc(IndexSearcher.java:196)
    at org.elasticsearch.search.fetch.FetchPhase.loadStoredFields(FetchPhase.java:228)
    at org.elasticsearch.search.fetch.FetchPhase.execute(FetchPhase.java:156)
    at org.elasticsearch.search.SearchService.executeFetchPhase(SearchService.java:455)
    ... 6 more
[2015-05-02 03:01:35,204][WARN ][index.engine.internal    ] [abc] [logstash-2015.05.02][2] failed engine [out of memory]
[2015-05-02 03:01:07,133][WARN ][index.engine.internal    ] [abc] [logstash-2015.05.01][4] failed to flush after setting shard to inactive
org.elasticsearch.index.engine.FlushFailedEngineException: [logstash-2015.05.01][4] Flush failed
    at org.elasticsearch.index.engine.internal.InternalEngine.flush(InternalEngine.java:781)
    at org.elasticsearch.index.engine.internal.InternalEngine.updateIndexingBufferSize(InternalEngine.java:233)
    at org.elasticsearch.indices.memory.IndexingMemoryController$ShardsIndicesStatusChecker.run(IndexingMemoryController.java:201)
    at org.elasticsearch.threadpool.ThreadPool$LoggingRunnable.run(ThreadPool.java:454)
    at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
    at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:304)
    at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:178)
    at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
    at java.lang.Thread.run(Thread.java:745)
Caused by: java.lang.OutOfMemoryError: Java heap space


Thanks in advance!

BR

Xiro

--
You received this message because you are subscribed to the Google Groups "elasticsearch" group.
To unsubscribe from this group and stop receiving emails from it, send an email to [hidden email].
To view this discussion on the web visit https://groups.google.com/d/msgid/elasticsearch/c8af2bf3-35f3-4fa5-88f3-2e8bc922932b%40googlegroups.com.
For more options, visit https://groups.google.com/d/optout.

Re: Elasticsearch crashing, Dashboards missing

Mark Walkom-2
Kibana dashboards are saved in the kibana-int index for Kibana 3 and the .kibana index for Kibana 4.
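So to protect just the dashboards, you can snapshot that index with the standard snapshot API. A minimal sketch, assuming Kibana 4 (the .kibana index) and a repository path of /mnt/backups/kibana, which is my placeholder (on recent 1.x releases the path must also be whitelisted via path.repo in elasticsearch.yml):

```shell
# Register a filesystem snapshot repository.
curl -XPUT 'localhost:9200/_snapshot/kibana_backup' -d '{
  "type": "fs",
  "settings": { "location": "/mnt/backups/kibana" }
}'

# Snapshot only the dashboard index (use kibana-int for Kibana 3).
curl -XPUT 'localhost:9200/_snapshot/kibana_backup/dashboards_1?wait_for_completion=true' -d '{
  "indices": ".kibana"
}'

# Restore it if the dashboards disappear again; the index must be
# closed or deleted before the restore will succeed.
curl -XPOST 'localhost:9200/_snapshot/kibana_backup/dashboards_1/_restore'
```

Run the snapshot on a cron schedule and you always have a recent copy of the dashboards, independent of the much larger logstash-* indices.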

It looks like you are running out of heap, i.e. an OOM. How much memory have you assigned to ES, and can you increase it?
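For the heap itself, the usual knob on a package install is ES_HEAP_SIZE. A sketch, assuming a Debian/RPM package install and 4g as an example size (the common guidance is roughly half the machine's RAM, staying below ~32 GB so compressed object pointers still apply):

```shell
# Debian/Ubuntu package: edit /etc/default/elasticsearch
# RPM package:           edit /etc/sysconfig/elasticsearch
ES_HEAP_SIZE=4g

# Restart the service so the new heap takes effect, then confirm it.
sudo service elasticsearch restart
curl 'localhost:9200/_nodes/stats/jvm?pretty' | grep heap_max
```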

On 4 May 2015 at 18:12, xiro86 <[hidden email]> wrote:

