
How can I handle the OutOfMemoryError?


hoong
Hi, ES users!

We're using Elasticsearch to build a web log search engine. Our machine has 16 GB of memory, and both ES_MIN_MEM and ES_MAX_MEM are set to 8g, with bootstrap.mlockall=true.

The index holds almost 150 million documents and is about 360 GB on disk. We are currently running a single node with a single shard.

When we run facet searches on an index field with 5-7 filters, ES sometimes just stops and gives us exception messages like the ones below.

For the stability of our search engine, we want to restart the ES node automatically, or at least be notified when the node stops. We tried to catch the OutOfMemoryError, but it is never caught on our API side.

Is there any way to catch an OutOfMemoryError like the one below? Or is there any ES API or method to prevent out-of-memory failures?

Please help us.

Best regards.

P.S. Sorry for my English. ;(



Exception in thread "elasticsearch[node1][search][T#1]" java.lang.OutOfMemoryError: loading field [s_ip] caused out of memory failure
at org.elasticsearch.index.cache.field.data.support.AbstractConcurrentMapFieldDataCache.cache(AbstractConcurrentMapFieldDataCache.java:138)
at org.elasticsearch.search.facet.terms.strings.TermsStringOrdinalsFacetCollector.doSetNextReader(TermsStringOrdinalsFacetCollector.java:128)
at org.elasticsearch.search.facet.AbstractFacetCollector.setNextReader(AbstractFacetCollector.java:81)
at org.elasticsearch.common.lucene.MultiCollector.setNextReader(MultiCollector.java:67)
at org.apache.lucene.search.IndexSearcher.search(IndexSearcher.java:576)
at org.elasticsearch.search.internal.ContextIndexSearcher.search(ContextIndexSearcher.java:195)
at org.apache.lucene.search.IndexSearcher.search(IndexSearcher.java:445)
at org.apache.lucene.search.IndexSearcher.search(IndexSearcher.java:426)
at org.apache.lucene.search.IndexSearcher.search(IndexSearcher.java:342)
at org.apache.lucene.search.IndexSearcher.search(IndexSearcher.java:330)
at org.elasticsearch.search.query.QueryPhase.execute(QueryPhase.java:178)
at org.elasticsearch.search.SearchService.executeFetchPhase(SearchService.java:316)
at org.elasticsearch.search.action.SearchServiceTransportAction.sendExecuteFetch(SearchServiceTransportAction.java:243)
at org.elasticsearch.action.search.type.TransportSearchQueryAndFetchAction$AsyncAction.sendExecuteFirstPhase(TransportSearchQueryAndFetchAction.java:75)
at org.elasticsearch.action.search.type.TransportSearchTypeAction$BaseAsyncAction.performFirstPhase(TransportSearchTypeAction.java:205)
at org.elasticsearch.action.search.type.TransportSearchTypeAction$BaseAsyncAction.performFirstPhase(TransportSearchTypeAction.java:192)
at org.elasticsearch.action.search.type.TransportSearchTypeAction$BaseAsyncAction$2.run(TransportSearchTypeAction.java:178)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1110)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:603)
at java.lang.Thread.run(Thread.java:722)
Caused by: java.lang.OutOfMemoryError: Java heap space
at org.elasticsearch.index.field.data.support.FieldDataLoader.load(FieldDataLoader.java:44)
at org.elasticsearch.index.field.data.strings.StringFieldData.load(StringFieldData.java:90)
at org.elasticsearch.index.field.data.strings.StringFieldDataType.load(StringFieldDataType.java:56)
at org.elasticsearch.index.field.data.strings.StringFieldDataType.load(StringFieldDataType.java:34)
at org.elasticsearch.index.field.data.FieldData.load(FieldData.java:111)
at org.elasticsearch.index.cache.field.data.support.AbstractConcurrentMapFieldDataCache.cache(AbstractConcurrentMapFieldDataCache.java:130)
... 19 more
Exception in thread "elasticsearch[node1][search][T#5]" java.lang.OutOfMemoryError: loading field [d_ip] caused out of memory failure
at org.elasticsearch.index.cache.field.data.support.AbstractConcurrentMapFieldDataCache.cache(AbstractConcurrentMapFieldDataCache.java:138)
at org.elasticsearch.search.facet.terms.strings.TermsStringOrdinalsFacetCollector.doSetNextReader(TermsStringOrdinalsFacetCollector.java:128)
at org.elasticsearch.search.facet.AbstractFacetCollector.setNextReader(AbstractFacetCollector.java:81)
at org.elasticsearch.common.lucene.MultiCollector.setNextReader(MultiCollector.java:67)
at org.apache.lucene.search.IndexSearcher.search(IndexSearcher.java:576)
at org.elasticsearch.search.internal.ContextIndexSearcher.search(ContextIndexSearcher.java:195)
at org.apache.lucene.search.IndexSearcher.search(IndexSearcher.java:445)
at org.apache.lucene.search.IndexSearcher.search(IndexSearcher.java:426)
at org.apache.lucene.search.IndexSearcher.search(IndexSearcher.java:342)
at org.apache.lucene.search.IndexSearcher.search(IndexSearcher.java:330)
at org.elasticsearch.search.query.QueryPhase.execute(QueryPhase.java:178)
at org.elasticsearch.search.SearchService.executeFetchPhase(SearchService.java:316)
at org.elasticsearch.search.action.SearchServiceTransportAction.sendExecuteFetch(SearchServiceTransportAction.java:243)
at org.elasticsearch.action.search.type.TransportSearchQueryAndFetchAction$AsyncAction.sendExecuteFirstPhase(TransportSearchQueryAndFetchAction.java:75)
at org.elasticsearch.action.search.type.TransportSearchTypeAction$BaseAsyncAction.performFirstPhase(TransportSearchTypeAction.java:205)
at org.elasticsearch.action.search.type.TransportSearchTypeAction$BaseAsyncAction.performFirstPhase(TransportSearchTypeAction.java:192)
at org.elasticsearch.action.search.type.TransportSearchTypeAction$BaseAsyncAction$2.run(TransportSearchTypeAction.java:178)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1110)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:603)
at java.lang.Thread.run(Thread.java:722)
Caused by: java.lang.OutOfMemoryError: GC overhead limit exceeded
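An OutOfMemoryError thrown inside the Elasticsearch server JVM never reaches client code, so it cannot be caught on the API side at all. One external workaround, not mentioned in this thread, is HotSpot's ability to run a command when an OOM occurs, so the process dies cleanly and a supervisor can restart it. A config sketch, assuming the option is passed through the node's start script (the exact variable name and restart mechanism depend on your setup):

```sh
# JVM option for the Elasticsearch process (adapt to your start script):
# kill the JVM on the first OutOfMemoryError so a supervisor can restart it.
# %p is expanded by HotSpot to the JVM's pid.
JAVA_OPTS="$JAVA_OPTS -XX:OnOutOfMemoryError=\"kill -9 %p\""
```

A process supervisor (monit, runit, or even a cron check) then notices the exit and restarts the node.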

--
You received this message because you are subscribed to the Google Groups "elasticsearch" group.
To unsubscribe from this group and stop receiving emails from it, send an email to [hidden email].
For more options, visit https://groups.google.com/groups/opt_out.

Re: How can I handle the OutOfMemoryError?

hoong
In addition, our ES version is 0.20.1.


Re: How can I handle the OutOfMemoryError?

Rafał Kuć
In reply to this post by hoong
Hello!

An OutOfMemoryError means that your heap is not large enough for Elasticsearch to perform the given operation, such as running a query. There is not much you can do about it apart from adding more nodes and spreading your index across them. However, since you currently have a single shard, that means creating a new index with more shards and re-indexing your data.
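A sketch of that re-indexing route, with a placeholder index name and shard count (the settings body is standard Elasticsearch; adjust the numbers to your cluster plans):

```json
{
  "settings": {
    "number_of_shards": 4,
    "number_of_replicas": 0
  }
}
```

PUT this body to http://localhost:9200/weblogs_v2 to create the new index, then re-index the documents from the old index into it; with four shards the index can later be spread across up to four nodes.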

On the other hand, you can try the soft cache type, although it is not advisable from a performance point of view (have a look at this blog post: http://blog.sematext.com/2012/05/17/elasticsearch-cache-usage/). Please remember that your facet performance will degrade when using anything other than the default cache type.

The third option is to try 0.21, whose beta should be available shortly. You can clone it directly from GitHub and compile it easily. But I don't know whether that is a valid approach if this is your production environment.
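Regarding the original auto-restart question: since the error cannot be caught client-side, an external watchdog that polls the cluster health endpoint is a common pattern. A minimal sketch (class and method names are hypothetical; it assumes the default REST port 9200 and the standard /_cluster/health endpoint):

```java
import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.HttpURLConnection;
import java.net.URL;

public class EsHealthWatchdog {

    // Decide whether the node looks dead: no response at all, or a health
    // status that is neither "green" nor "yellow".
    static boolean shouldRestart(String healthJson) {
        if (healthJson == null) {
            return true; // node did not answer at all
        }
        return !(healthJson.contains("\"status\":\"green\"")
              || healthJson.contains("\"status\":\"yellow\""));
    }

    // Fetch /_cluster/health; return null when the node is unreachable.
    static String fetchHealth(String baseUrl) {
        try {
            HttpURLConnection conn =
                (HttpURLConnection) new URL(baseUrl + "/_cluster/health").openConnection();
            conn.setConnectTimeout(5000);
            conn.setReadTimeout(5000);
            BufferedReader in = new BufferedReader(
                new InputStreamReader(conn.getInputStream(), "UTF-8"));
            StringBuilder body = new StringBuilder();
            String line;
            while ((line = in.readLine()) != null) {
                body.append(line);
            }
            in.close();
            return body.toString();
        } catch (Exception e) {
            return null;
        }
    }

    public static void main(String[] args) {
        // Demo on canned responses instead of a live cluster:
        System.out.println(shouldRestart("{\"status\":\"green\"}"));  // false
        System.out.println(shouldRestart("{\"status\":\"red\"}"));    // true
        System.out.println(shouldRestart(null));                      // true
        // In a real deployment you would poll in a loop:
        //   if (shouldRestart(fetchHealth("http://localhost:9200"))) { /* restart or alert */ }
    }
}
```

Run it from cron or a loop; when shouldRestart returns true, trigger your restart script or send an alert.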

--
Regards,
Rafał Kuć
Sematext :: http://sematext.com/ :: Solr - Lucene - Nutch - ElasticSearch


Hi, ES users!!

We're using Elasticsearch to develop a web log search engine.
Our system has 16 GB of memory, and both ES_MIN_MEM and ES_MAX_MEM are set to 8g, with bootstrap.mlockall=true.

The index holds almost 150 million documents, and its volume is 360 GB.
We are using just one node, with one shard, for now.

When we run facet searches on an index field with 5~7 filters, ES sometimes just stops and gives us exception messages like the ones below.

For the stability of our search engine, we want to automatically restart the ES node, or at least be notified when the node stops.
So we tried to catch the OutOfMemoryError, but it is not caught on our API side.

Is there any way to catch an OutOfMemoryError like the one below?
Or is there any ES API or method to prevent OutOfMemory failures?

Please help us.

Best regards.

P.S. Sorry for my imperfect English ;(



Exception in thread "elasticsearch[node1][search][T#1]" java.lang.OutOfMemoryError: loading field [s_ip] caused out of memory failure
 at org.elasticsearch.index.cache.field.data.support.AbstractConcurrentMapFieldDataCache.cache(AbstractConcurrentMapFieldDataCache.java:138)
 at org.elasticsearch.search.facet.terms.strings.TermsStringOrdinalsFacetCollector.doSetNextReader(TermsStringOrdinalsFacetCollector.java:128)
 at org.elasticsearch.search.facet.AbstractFacetCollector.setNextReader(AbstractFacetCollector.java:81)
 at org.elasticsearch.common.lucene.MultiCollector.setNextReader(MultiCollector.java:67)
 at org.apache.lucene.search.IndexSearcher.search(IndexSearcher.java:576)
 at org.elasticsearch.search.internal.ContextIndexSearcher.search(ContextIndexSearcher.java:195)
 at org.apache.lucene.search.IndexSearcher.search(IndexSearcher.java:445)
 at org.apache.lucene.search.IndexSearcher.search(IndexSearcher.java:426)
 at org.apache.lucene.search.IndexSearcher.search(IndexSearcher.java:342)
 at org.apache.lucene.search.IndexSearcher.search(IndexSearcher.java:330)
 at org.elasticsearch.search.query.QueryPhase.execute(QueryPhase.java:178)
 at org.elasticsearch.search.SearchService.executeFetchPhase(SearchService.java:316)
 at org.elasticsearch.search.action.SearchServiceTransportAction.sendExecuteFetch(SearchServiceTransportAction.java:243)
 at org.elasticsearch.action.search.type.TransportSearchQueryAndFetchAction$AsyncAction.sendExecuteFirstPhase(TransportSearchQueryAndFetchAction.java:75)
 at org.elasticsearch.action.search.type.TransportSearchTypeAction$BaseAsyncAction.performFirstPhase(TransportSearchTypeAction.java:205)
 at org.elasticsearch.action.search.type.TransportSearchTypeAction$BaseAsyncAction.performFirstPhase(TransportSearchTypeAction.java:192)
 at org.elasticsearch.action.search.type.TransportSearchTypeAction$BaseAsyncAction$2.run(TransportSearchTypeAction.java:178)
 at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1110)
 at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:603)
 at java.lang.Thread.run(Thread.java:722)
Caused by: java.lang.OutOfMemoryError: Java heap space
 at org.elasticsearch.index.field.data.support.FieldDataLoader.load(FieldDataLoader.java:44)
 at org.elasticsearch.index.field.data.strings.StringFieldData.load(StringFieldData.java:90)
 at org.elasticsearch.index.field.data.strings.StringFieldDataType.load(StringFieldDataType.java:56)
 at org.elasticsearch.index.field.data.strings.StringFieldDataType.load(StringFieldDataType.java:34)
 at org.elasticsearch.index.field.data.FieldData.load(FieldData.java:111)
 at org.elasticsearch.index.cache.field.data.support.AbstractConcurrentMapFieldDataCache.cache(AbstractConcurrentMapFieldDataCache.java:130)
 ... 19 more
Exception in thread "elasticsearch[node1][search][T#5]" java.lang.OutOfMemoryError: loading field [d_ip] caused out of memory failure
 at org.elasticsearch.index.cache.field.data.support.AbstractConcurrentMapFieldDataCache.cache(AbstractConcurrentMapFieldDataCache.java:138)
 at org.elasticsearch.search.facet.terms.strings.TermsStringOrdinalsFacetCollector.doSetNextReader(TermsStringOrdinalsFacetCollector.java:128)
 at org.elasticsearch.search.facet.AbstractFacetCollector.setNextReader(AbstractFacetCollector.java:81)
 at org.elasticsearch.common.lucene.MultiCollector.setNextReader(MultiCollector.java:67)
 at org.apache.lucene.search.IndexSearcher.search(IndexSearcher.java:576)
 at org.elasticsearch.search.internal.ContextIndexSearcher.search(ContextIndexSearcher.java:195)
 at org.apache.lucene.search.IndexSearcher.search(IndexSearcher.java:445)
 at org.apache.lucene.search.IndexSearcher.search(IndexSearcher.java:426)
 at org.apache.lucene.search.IndexSearcher.search(IndexSearcher.java:342)
 at org.apache.lucene.search.IndexSearcher.search(IndexSearcher.java:330)
 at org.elasticsearch.search.query.QueryPhase.execute(QueryPhase.java:178)
 at org.elasticsearch.search.SearchService.executeFetchPhase(SearchService.java:316)
 at org.elasticsearch.search.action.SearchServiceTransportAction.sendExecuteFetch(SearchServiceTransportAction.java:243)
 at org.elasticsearch.action.search.type.TransportSearchQueryAndFetchAction$AsyncAction.sendExecuteFirstPhase(TransportSearchQueryAndFetchAction.java:75)
 at org.elasticsearch.action.search.type.TransportSearchTypeAction$BaseAsyncAction.performFirstPhase(TransportSearchTypeAction.java:205)
 at org.elasticsearch.action.search.type.TransportSearchTypeAction$BaseAsyncAction.performFirstPhase(TransportSearchTypeAction.java:192)
 at org.elasticsearch.action.search.type.TransportSearchTypeAction$BaseAsyncAction$2.run(TransportSearchTypeAction.java:178)
 at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1110)
 at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:603)
 at java.lang.Thread.run(Thread.java:722)
Caused by: java.lang.OutOfMemoryError: GC overhead limit exceeded