Shards getting closed on Indexing Data


narinder.izap
Hi,

    I am using Elasticsearch version 0.90.3 and trying to index data. I am not using the bulk API; I push documents into the index one at a time. At some point one of my index's shards gets closed. I found the following log:


[2013-08-09 09:07:55,395][INFO ][node                     ] [Goldbug] version[0.90.3], pid[4299], build[5c38d60/2013-08-06T13:18:31Z]
[2013-08-09 09:07:55,396][INFO ][node                     ] [Goldbug] initializing ...
[2013-08-09 09:07:55,415][INFO ][plugins                  ] [Goldbug] loaded [], sites []
[2013-08-09 09:08:07,234][INFO ][node                     ] [Goldbug] initialized
[2013-08-09 09:08:07,234][INFO ][node                     ] [Goldbug] starting ...
[2013-08-09 09:08:07,656][INFO ][transport                ] [Goldbug] bound_address {inet[/0:0:0:0:0:0:0:0:9300]}, publish_address {inet[/192.168.1.105:9300]}
[2013-08-09 09:08:11,122][INFO ][cluster.service          ] [Goldbug] new_master [Goldbug][MQ3oITPKRPWPM4pIeRzHpQ][inet[/192.168.1.105:9300]], reason: zen-disco-join (elected_as_master)
[2013-08-09 09:08:11,178][INFO ][discovery                ] [Goldbug] elasticsearch/MQ3oITPKRPWPM4pIeRzHpQ
[2013-08-09 09:08:11,278][INFO ][http                     ] [Goldbug] bound_address {inet[/0:0:0:0:0:0:0:0:9200]}, publish_address {inet[/192.168.1.105:9200]}
[2013-08-09 09:08:11,278][INFO ][node                     ] [Goldbug] started
[2013-08-09 09:08:16,320][INFO ][gateway                  ] [Goldbug] recovered [4] indices into cluster_state
[2013-08-09 10:45:28,746][WARN ][index.merge.scheduler    ] [Goldbug] [es_requests_2][0] failed to merge
java.lang.ArrayIndexOutOfBoundsException: 612
at org.apache.lucene.codecs.lucene40.BitVector.get(BitVector.java:146)
at org.apache.lucene.index.MergeState$DocMap$1.get(MergeState.java:86)
at org.apache.lucene.codecs.MappingMultiDocsAndPositionsEnum.nextDoc(MappingMultiDocsAndPositionsEnum.java:107)
at org.apache.lucene.codecs.PostingsConsumer.merge(PostingsConsumer.java:109)
at org.apache.lucene.codecs.TermsConsumer.merge(TermsConsumer.java:164)
at org.apache.lucene.codecs.FieldsConsumer.merge(FieldsConsumer.java:72)
at org.apache.lucene.index.SegmentMerger.mergeTerms(SegmentMerger.java:365)
at org.apache.lucene.index.SegmentMerger.merge(SegmentMerger.java:98)
at org.apache.lucene.index.IndexWriter.mergeMiddle(IndexWriter.java:3772)
at org.apache.lucene.index.IndexWriter.merge(IndexWriter.java:3376)
at org.apache.lucene.index.ConcurrentMergeScheduler.doMerge(ConcurrentMergeScheduler.java:405)
at org.apache.lucene.index.TrackingConcurrentMergeScheduler.doMerge(TrackingConcurrentMergeScheduler.java:91)
at org.apache.lucene.index.ConcurrentMergeScheduler$MergeThread.run(ConcurrentMergeScheduler.java:482)
[2013-08-09 10:45:28,807][WARN ][index.engine.robin       ] [Goldbug] [es_requests_2][0] failed engine
org.apache.lucene.index.MergePolicy$MergeException: java.lang.ArrayIndexOutOfBoundsException: 612
at org.elasticsearch.index.merge.scheduler.ConcurrentMergeSchedulerProvider$CustomConcurrentMergeScheduler.handleMergeException(ConcurrentMergeSchedulerProvider.java:99)
at org.apache.lucene.index.ConcurrentMergeScheduler$MergeThread.run(ConcurrentMergeScheduler.java:518)
Caused by: java.lang.ArrayIndexOutOfBoundsException: 612
at org.apache.lucene.codecs.lucene40.BitVector.get(BitVector.java:146)
at org.apache.lucene.index.MergeState$DocMap$1.get(MergeState.java:86)
at org.apache.lucene.codecs.MappingMultiDocsAndPositionsEnum.nextDoc(MappingMultiDocsAndPositionsEnum.java:107)
at org.apache.lucene.codecs.PostingsConsumer.merge(PostingsConsumer.java:109)
at org.apache.lucene.codecs.TermsConsumer.merge(TermsConsumer.java:164)
at org.apache.lucene.codecs.FieldsConsumer.merge(FieldsConsumer.java:72)
at org.apache.lucene.index.SegmentMerger.mergeTerms(SegmentMerger.java:365)
at org.apache.lucene.index.SegmentMerger.merge(SegmentMerger.java:98)
at org.apache.lucene.index.IndexWriter.mergeMiddle(IndexWriter.java:3772)
at org.apache.lucene.index.IndexWriter.merge(IndexWriter.java:3376)
at org.apache.lucene.index.ConcurrentMergeScheduler.doMerge(ConcurrentMergeScheduler.java:405)
at org.apache.lucene.index.TrackingConcurrentMergeScheduler.doMerge(TrackingConcurrentMergeScheduler.java:91)
at org.apache.lucene.index.ConcurrentMergeScheduler$MergeThread.run(ConcurrentMergeScheduler.java:482)
[2013-08-09 10:45:28,918][WARN ][cluster.action.shard     ] [Goldbug] sending failed shard for [es_requests_2][0], node[MQ3oITPKRPWPM4pIeRzHpQ], [P], s[STARTED], reason [engine failure, message [MergeException[java.lang.ArrayIndexOutOfBoundsException: 612]; nested: ArrayIndexOutOfBoundsException[612]; ]]
[2013-08-09 10:45:28,918][WARN ][cluster.action.shard     ] [Goldbug] received shard failed for [es_requests_2][0], node[MQ3oITPKRPWPM4pIeRzHpQ], [P], s[STARTED], reason [engine failure, message [MergeException[java.lang.ArrayIndexOutOfBoundsException: 612]; nested: ArrayIndexOutOfBoundsException[612]; ]]

I have been facing this situation since I moved to version 0.90. I push data the same way I did on 0.20.*, but I never saw a shard close there. This is causing a lot of problems. I have been trying to find out whether I am sending data in a wrong format, but I cannot find anything. Is this an Elasticsearch bug or mine? Please help me understand it.
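
In case it helps, this is roughly how each document is pushed (a minimal sketch; the type name and fields are placeholders, the index is the one from the logs, and I assume the node on localhost:9200):

import json
import urllib.request

ES = "http://localhost:9200"

def index_doc(index, doc_type, doc):
    # One HTTP request per document; the document id is auto-generated.
    req = urllib.request.Request(
        "%s/%s/%s" % (ES, index, doc_type),
        data=json.dumps(doc).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

# Example call; "request" and the fields are made-up placeholders.
index_doc("es_requests_2", "request", {"url": "/search", "took_ms": 12})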

Thanks 
Narinder Kaur




Re: Shards getting closed on Indexing Data

Ivan Brusic
How are you indexing data? Via REST or the Java API? If you are using the Java API, have you updated the client as well? What are your refresh settings? If you provide more information besides the error itself, perhaps someone on the list can help you.

-- 
Ivan


On Thu, Aug 8, 2013 at 10:26 PM, Narinder Kaur <[hidden email]> wrote:




Re: Shards getting closed on Indexing Data

narinder.izap
I am using the REST API, and the refresh settings are the defaults, but I am calling the refresh API after every new document I index.
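
Concretely, after each indexing request I call something like this (a sketch against the index from the logs, assuming the node on localhost):

import urllib.request

# Explicit refresh after each document so it is searchable immediately,
# instead of waiting for the periodic refresh (1s by default).
req = urllib.request.Request(
    "http://localhost:9200/es_requests_2/_refresh",
    data=b"",
    method="POST",
)
urllib.request.urlopen(req).close()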

On Saturday, 10 August 2013 05:06:17 UTC+5:30, Ivan Brusic wrote:

Re: Shards getting closed on Indexing Data

simonw-2
Hey Narinder,

this exception seems pretty serious, and I suppose it's a problem in Lucene. I have never been able to reproduce it, so I'd be very happy to get hold of your index, mappings, etc., and if possible the entire ES data you have sitting on the machine. Would that be possible, or failing that, do you have a way to reproduce the issue? You can contact me directly via [hidden email] if you don't want to pass any related info via the mailing list.
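
If shipping the whole data directory is not an option, the mappings and settings of the affected index alone would already help. A minimal sketch to dump them, assuming the node from your logs runs on localhost:

import json
import urllib.request

ES = "http://localhost:9200"

# Dump the mapping and the settings of the failing index so they can be
# attached to a reply or a bug report.
for endpoint in ("_mapping", "_settings"):
    with urllib.request.urlopen("%s/es_requests_2/%s" % (ES, endpoint)) as resp:
        print(json.dumps(json.load(resp), indent=2))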

simon

On Saturday, August 10, 2013 10:19:40 AM UTC+2, Narinder Kaur wrote:

Re: Shards getting closed on Indexing Data

joergprante@gmail.com
I guess the index got corrupted, probably after running out of file descriptors.
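
A quick way to rule that out is to check the descriptor limit the ES process runs under; a minimal sketch (Unix only):

import resource

# Soft and hard limits on open file descriptors for the current process.
# Run this under the same user (and limits) that the Elasticsearch JVM
# inherits; a soft limit in the low thousands is easily exhausted by
# segment merges on a busy index.
soft, hard = resource.getrlimit(resource.RLIMIT_NOFILE)
print("open files: soft=%s hard=%s" % (soft, hard))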

Jörg

Re: Shards getting closed on Indexing Data

simonw-2

On Tuesday, August 13, 2013 10:31:44 AM UTC+2, Jörg Prante wrote:
I guess the index got corrupted, probably after running out of file descriptors.

Very unlikely; in that case we'd see something like "read past EOF" or similar errors. This is really odd: it's basically opening a healthy index, but the number of docs is inconsistent. This rather seems like a bug to me, but I want to make sure.

simon
 
