java.lang.OutOfMemoryError: GC overhead limit exceeded

java.lang.OutOfMemoryError: GC overhead limit exceeded

Ankit Jain
Hi All,
 
I am getting the below error while indexing records into the ES cluster:
Exception in thread "elasticsearch[Ghost][transport_client_worker][T#8]{New I/O worker #93}" java.lang.OutOfMemoryError: GC overhead limit exceeded.

We have a 2-node cluster, and each ES node has 10 GB of data.

The total number of indices is 500, and the number of shards on each node is 500.

The total number of documents in the cluster is around 6 million.

We are using Java 6.
 
Regards,
Ankit Jain

--
You received this message because you are subscribed to the Google Groups "elasticsearch" group.
To unsubscribe from this group and stop receiving emails from it, send an email to [hidden email].
For more options, visit https://groups.google.com/groups/opt_out.
Re: java.lang.OutOfMemoryError: GC overhead limit exceeded

mohit.Kumar
Hi Ankit,

What are the sizes of the following?
 1. ES_MIN_MEM = ?? (minimum heap size)
 2. ES_MAX_MEM = ?? (maximum heap size)

By default they are 256m and 1g.
Just change the sizes according to your requirements.
I hope this will resolve your problem.
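For installs of this era, these variables are read from the environment by the Elasticsearch startup script and translated into JVM heap flags. A minimal sketch (the 2g values are examples, not a recommendation for this cluster):

```shell
#!/bin/sh
# Sketch: set the JVM heap bounds Elasticsearch reads at startup.
# ES_MIN_MEM maps to -Xms (initial heap), ES_MAX_MEM to -Xmx (maximum heap).
export ES_MIN_MEM=2g
export ES_MAX_MEM=2g

# bin/elasticsearch picks these up and passes them to the JVM.
echo "JVM will start with -Xms${ES_MIN_MEM} -Xmx${ES_MAX_MEM}"
```

The heap must fit comfortably in physical RAM alongside the OS file cache, so don't set ES_MAX_MEM close to the machine's total memory.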


Re: java.lang.OutOfMemoryError: GC overhead limit exceeded

Otis Gospodnetic
In reply to this post by Ankit Jain
Hi Ankit,

Maybe your JVM heap is just too small?

Coincidentally: http://blog.sematext.com/2013/10/21/jvm-memory-pool-monitoring/

This won't completely solve your problem, as it won't point out the exact OOM culprit, but it will at least tell you what's going on with your JVM memory pools; if you correlate that with your other Elasticsearch and/or JVM metrics, you may be able to track this down more easily.

Otis
--
Performance Monitoring * Log Analytics * Search Analytics
Solr & Elasticsearch Support * http://sematext.com/



Re: java.lang.OutOfMemoryError: GC overhead limit exceeded

Ankit Jain
In reply to this post by mohit.Kumar
Hi Mohit,

Please find the min and max memory sizes below:
ES_MIN_MEM = 256m
ES_MAX_MEM = 10g

Thanks for your response.




Re: java.lang.OutOfMemoryError: GC overhead limit exceeded

dadoonet
Set ES_MIN_MEM to 10g, so it matches ES_MAX_MEM.

--
David ;-)
Twitter : @dadoonet / @elasticsearchfr / @scrutmydocs
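In other words, pin the minimum and maximum heap to the same value so the JVM never has to grow the heap at runtime. A sketch of the fix, using the 10g figure from this thread:

```shell
#!/bin/sh
# Sketch: pin -Xms and -Xmx to the same size, per the advice above.
export ES_MIN_MEM=10g
export ES_MAX_MEM=10g

if [ "$ES_MIN_MEM" = "$ES_MAX_MEM" ]; then
  echo "heap pinned at $ES_MAX_MEM"
else
  echo "WARNING: heap will resize between $ES_MIN_MEM and $ES_MAX_MEM"
fi
```

Note that a 10g heap only makes sense if each node has well over 10 GB of physical RAM; otherwise lower both values together.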

