

Hi all,
I'm considering using elasticsearch as a repository for a PoC I'm currently developing.
This PoC models an application that needs durability but not isolation, so I'm fine with reads being eventually consistent with the most recent writes.
As durability is paramount (we can't afford to lose the data unless 100% of the nodes die) I've been exploring the option of setting every shard to have N replicas, where N is the number of nodes in the cluster.
From what I've read so far it is possible to dynamically set the number of replicas, which triggers a throttled replication process.
I would like some help with the following points (I'm running ES in embedded mode in a Java application):
1. How can I set the number of replicas using the native Java client?
2. What happens if a node dies and the number of replicas is lowered to the number of surviving ones?
3. Is it possible, from a participating node, to access the list of nodes in the cluster so I can use their count to set the number of replicas (step 1)?
4. Is it possible to hook a callback to the event of a node joining or leaving the cluster?
I'm envisioning the following mechanism:
a) Start with one node, a given number of shards and 1 replica.
b) Each time a node joins, I adjust the number of replicas to match the new node count. In this case, there would be 2 replicas.
c) An arbitrary number of nodes might be added and I'd execute step b) accordingly.
d) At any time a node might leave the cluster, and thus I need to lower the number of replicas to the new node count (I assume the cluster would go ahead and compensate for the lost replica by asking an existing node to hold 2 replicas instead of one; is this stopped by lowering the number of replicas?).
The ultimate goal is to make sure no data is lost unless 100% of the nodes die before a new one can acquire a full replica.
Is this doable? Does this make sense at all?
For the time being, I'm not worried about lack of disk space or bandwidth as I'm still in the very early days of the PoC.
Thank you very much for all your work and help.
Gonçalo

You received this message because you are subscribed to the Google Groups "elasticsearch" group.
To unsubscribe from this group and stop receiving emails from it, send an email to [hidden email].
To view this discussion on the web visit https://groups.google.com/d/msgid/elasticsearch/276418fa812f4af594a07362f5ba7931%40googlegroups.com.
For more options, visit https://groups.google.com/d/optout.


1. You can set the replica number at index creation time or via the cluster update settings action org.elasticsearch.action.admin.cluster.settings.ClusterUpdateSettingsAction
2. You will get an index with lower replica number :)
3. Yes. Quick code example:

ClusterState clusterState = clusterService.state();

// find the number of data nodes
int numberOfDataNodes = 0;
for (DiscoveryNode node : clusterState.getNodes()) {
    if (node.isDataNode()) {
        numberOfDataNodes++;
    }
}
4. Yes. Use org.elasticsearch.cluster.ClusterStateListener
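[Editor's note] Pulling answers 1, 3 and 4 together, here is a minimal, self-contained sketch of the flow using hypothetical stand-in types (the real classes are org.elasticsearch.cluster.ClusterStateListener, ClusterService and the update-settings actions, which need a running node):

```java
import java.util.ArrayList;
import java.util.List;

public class AutoReplicaSketch {

    /** Stand-in for DiscoveryNode: only the flag needed here. */
    public record Node(String id, boolean dataNode) {}

    /** Stand-in for ClusterStateListener: invoked on every membership change. */
    public interface MembershipListener {
        void clusterChanged(List<Node> nodes);
    }

    /** Recomputes the replica count so that every data node can hold a copy. */
    public static class ReplicaAdjuster implements MembershipListener {
        public int lastApplied = -1; // in ES this would become an update-settings call

        @Override
        public void clusterChanged(List<Node> nodes) {
            int dataNodes = 0;
            for (Node n : nodes) {
                if (n.dataNode()) dataNodes++;
            }
            // one primary plus (dataNodes - 1) replicas puts a copy on every data node
            lastApplied = Math.max(0, dataNodes - 1);
        }
    }

    public static void main(String[] args) {
        ReplicaAdjuster adjuster = new ReplicaAdjuster();
        List<Node> nodes = new ArrayList<>();
        nodes.add(new Node("a", true));
        adjuster.clusterChanged(nodes);           // one data node -> 0 replicas
        nodes.add(new Node("b", true));
        nodes.add(new Node("c", false));          // client-only node is ignored
        adjuster.clusterChanged(nodes);           // two data nodes -> 1 replica
        System.out.println(adjuster.lastApplied); // prints 1
    }
}
```

With the real API, the listener would be registered through ClusterService and the computed value applied via an index update-settings request for index.number_of_replicas.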
From my view your idea of better fault tolerance does not make much sense. The replica number is a statistical entity that is related to the probability of faults. The higher the replica, the higher the probability of surviving faults. There is no correlation to the total number of nodes in a cluster to ensure better fault tolerance. The fault tolerance depends on the probability of a node failure.
From the viewpoint of balancing load, it makes much sense. When setting replica number to the number of nodes, the cluster can balance search requests to all nodes which is optimal.
Jörg

You received this message because you are subscribed to the Google Groups "elasticsearch" group.
To unsubscribe from this group and stop receiving emails from it, send an email to [hidden email].
To view this discussion on the web visit https://groups.google.com/d/msgid/elasticsearch/CAKdsXoGuUAJQYkkRtTU1H90MKpRg46dM1T2PzcP2Mfk1X8mbfA%40mail.gmail.com.
For more options, visit https://groups.google.com/d/optout.


"I've been exploring the option of setting every shard to have N replicas where N is the number of nodes in the cluster."
You would *always* have an unallocated shard. Do you mean N-1 replicas? That's not much better an idea, since the loss of a single node would then leave you with an unallocated shard.


In reply to this post by joergprante@gmail.com
Hi Joe,
Thanks for your reply.
On this thought:

"From my view your idea of better fault tolerance does not make much sense. The replica number is a statistical entity that is related to the probability of faults. The higher the replica, the higher the probability of surviving faults. There is no correlation to the total number of nodes in a cluster to ensure better fault tolerance. The fault tolerance depends on the probability of a node failure."
I'm not getting it. If we have 4 nodes with 2 replicas, it means that 3 of the nodes will have data of a given index (assuming a single shard to ease the discussion), right? If those three nodes fail simultaneously, the 4th will have no way of grabbing a copy and the data will be lost forever. However, if the number of replicas is 3, the 4th would be able to keep serving the requests and eventually hand over a copy to a new node joining the cluster.
How does this not help fault tolerance? Am I missing something?
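[Editor's note] The 4-node scenario above can be checked by brute force. A small sketch (names mine): one primary shard plus its replicas occupy distinct nodes, and data is lost exactly when every node holding a copy fails.

```java
public class LossCheck {

    /** Count failure sets of size k (out of n nodes) that wipe out all copies. */
    public static long fatalFailureSets(int nodes, int copies, int k) {
        long fatal = 0;
        for (int mask = 0; mask < (1 << nodes); mask++) {
            if (Integer.bitCount(mask) != k) continue;
            // copies sit on nodes 0..copies-1; fatal if all of them are in the failure set
            boolean allCopiesDown = true;
            for (int node = 0; node < copies; node++) {
                if ((mask & (1 << node)) == 0) { allCopiesDown = false; break; }
            }
            if (allCopiesDown) fatal++;
        }
        return fatal;
    }

    public static void main(String[] args) {
        // 4 nodes, 2 replicas (3 copies): exactly one 3-node failure set loses the data
        System.out.println(fatalFailureSets(4, 3, 3)); // prints 1
        // 4 nodes, 3 replicas (4 copies): no 3-node failure can lose it
        System.out.println(fatalFailureSets(4, 4, 3)); // prints 0
    }
}
```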
Thanks,
G.


All I am saying is that it depends on the probability of three nodes failing simultaneously, not on the total number of nodes holding a replica. You could just as well have 5 nodes and consider the probability of 4 nodes failing simultaneously, and so on.
As an illustration, suppose you have a data center with two independent electric circuits and the dominant failure mode is a power outage; then it is enough to distribute nodes equally over servers on the two independent power lines in the racks. If one electric circuit (plus UPS) fails, half of the nodes go down. With replica level 1, the ES cluster will keep all the data. There is no need to set the replica level equal to the node number.
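[Editor's note] A rough illustration of this point: assuming independent node failures with probability p, a shard with r replicas has r+1 copies and is lost only when all of them fail, so the loss probability is about p^(r+1), independent of how many additional nodes the cluster has.

```java
public class ShardLossProbability {

    /** Probability that all r+1 copies of a shard fail, given independent node failures. */
    public static double lossProbability(double p, int replicas) {
        return Math.pow(p, replicas + 1);
    }

    public static void main(String[] args) {
        // adding replicas shrinks the loss probability geometrically...
        System.out.println(lossProbability(0.01, 1)); // roughly 1e-4
        System.out.println(lossProbability(0.01, 2)); // roughly 1e-6
        // ...while adding empty nodes without raising the replica count changes nothing
    }
}
```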
Jörg



In reply to this post by joergprante@gmail.com
I get it now.
I agree that setting the number of replicas is tied to the deployment reality in each case and its derived variables, and thus there is no one formula to fit all cases (it wouldn't be a setting otherwise).
What I was trying to cover was the theoretical / extreme case where any node may fail at any time, and what is the best way to minimize the chance of losing data. Also, scaling down the installation (potentially to one node) without having to worry about selecting nodes that hold different replicated shards is an example that can benefit from such a configuration.
I'm however not clear yet on what happens when a node goes down (triggering extra replication amongst the survivors) and then comes up again. Is the ongoing replication cancelled and the returning node brought up to date?
Thanks for your valuable input.
G.


Hi Ivan,
Does this mean that if a node comes back while a replication is underway, we'll end up with one node holding 2 replicas and one node holding none?
Scenario:
Node A - Replica 2
Node B - Replica 3
Node C - Replica 1
If node A dies and node B gets Replica 2, then as soon as node A (or a replacement) is brought up, is the final configuration likely to be
Node A (or replacement) - no replicas
Node B - Replicas 3 and 2
Node C - Replica 1
or does a rebalance take place?

Thanks,
Gonçalo



Hi Goncalo,
I think it's important that you understand: multiple copies of a shard will never be located on the same node. Not two replicas, and not the primary and one replica. To see this, run a server on your local machine and create an index with the defaults (5 shards, one replica). You will see that your cluster is "yellow" and has 5 unallocated shards.
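[Editor's note] The arithmetic behind that example, as a small sketch (helper names mine): since a replica copy can only go to a node that holds no other copy of the same shard, on n nodes at most n copies of any shard can be allocated.

```java
public class UnassignedShards {

    /** Copies of each shard that cannot be placed, summed over all shards. */
    public static int unassigned(int nodes, int shards, int replicas) {
        int copiesPerShard = replicas + 1;                     // primary + replicas
        int unplacedPerShard = Math.max(0, copiesPerShard - nodes);
        return shards * unplacedPerShard;
    }

    public static void main(String[] args) {
        System.out.println(unassigned(1, 5, 1)); // prints 5: a default index is yellow on one node
        System.out.println(unassigned(2, 5, 1)); // prints 0: a second node makes it green
    }
}
```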
Hope that helps create a better mental picture of shard allocation.
On 10 July 2014 22:11, Ivan Brusic wrote:
It's only been around for 3.5 years: https://github.com/elasticsearch/elasticsearch/issues/623 :)
I should clarify part of my previous statement.
"By default, the ongoing recovery is not cancelled when the missing node rejoins the cluster. You can change the gateway settings [2] to control when recovery kicks in."
What I meant to say is that an ongoing recovery is never cancelled once it has commenced, no matter what settings. By default, recovery happens immediately, but can be changed with the gateway settings.
 Ivan
On Thu, Jul 10, 2014 at 1:48 PM, [hidden email] wrote:
Indeed, auto_expand_replicas "all" triggers an update cluster settings action each time a node is added.
Still blown by the many settings Elasticsearch provides. Feeling small. Homework: collecting a gist textfile of all ES 1.2 settings.
Jörg
On Thu, Jul 10, 2014 at 9:57 PM, Ivan Brusic wrote:
Sticking to your use case, you might want to set the auto_expand_replicas setting to "all" [1]. Never used it, but it sounds like what you are looking for.
By default, the ongoing recovery is not cancelled when the missing node rejoins the cluster. You can change the gateway settings [2] to control when recovery kicks in.
[1] http://www.elasticsearch.org/guide/en/elasticsearch/reference/current/indices-update-settings.html
[2] http://www.elasticsearch.org/guide/en/elasticsearch/reference/current/modules-gateway.html
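[Editor's note] A hedged sketch of such a settings body (untested fragment; note the setting takes a "min-max" range, so "0-all" lets the replica count track the data-node count):

```json
{
  "index": {
    "auto_expand_replicas": "0-all"
  }
}
```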
Cheers,
Ivan
For more options, visit <a href="https://groups.google.com/d/optout" target="_blank" onmousedown="this.href='https://groups.google.com/d/optout';return true;" onclick="this.href='https://groups.google.com/d/optout';return true;">https://groups.google.com/d/optout.

You received this message because you are subscribed to the Google Groups "elasticsearch" group.
To unsubscribe from this group and stop receiving emails from it, send an email to <a href="javascript:" target="_blank" gdfobfuscatedmailto="pEFViIyxX3MJ" onmousedown="this.href='javascript:';return true;" onclick="this.href='javascript:';return true;">elasticsearc...@ googlegroups.com.
To view this discussion on the web visit <a href="https://groups.google.com/d/msgid/elasticsearch/CALY%3DcQBhiw5DqC_W3mwCBKBOBjt95tRwj31J9Zq5XP_ROLsSTg%40mail.gmail.com?utm_medium=email&utm_source=footer" target="_blank" onmousedown="this.href='https://groups.google.com/d/msgid/elasticsearch/CALY%3DcQBhiw5DqC_W3mwCBKBOBjt95tRwj31J9Zq5XP_ROLsSTg%40mail.gmail.com?utm_medium\75email\46utm_source\75footer';return true;" onclick="this.href='https://groups.google.com/d/msgid/elasticsearch/CALY%3DcQBhiw5DqC_W3mwCBKBOBjt95tRwj31J9Zq5XP_ROLsSTg%40mail.gmail.com?utm_medium\75email\46utm_source\75footer';return true;">https://groups.google.com/d/msgid/elasticsearch/CALY%3DcQBhiw5DqC_W3mwCBKBOBjt95tRwj31J9Zq5XP_ROLsSTg%40mail.gmail.com.
For more options, visit <a href="https://groups.google.com/d/optout" target="_blank" onmousedown="this.href='https://groups.google.com/d/optout';return true;" onclick="this.href='https://groups.google.com/d/optout';return true;">https://groups.google.com/d/optout.

You received this message because you are subscribed to the Google Groups "elasticsearch" group.
To unsubscribe from this group and stop receiving emails from it, send an email to <a href="javascript:" target="_blank" gdfobfuscatedmailto="pEFViIyxX3MJ" onmousedown="this.href='javascript:';return true;" onclick="this.href='javascript:';return true;">elasticsearc...@ googlegroups.com.
To view this discussion on the web visit <a href="https://groups.google.com/d/msgid/elasticsearch/CAKdsXoFcroqUqU7an5B8bHQd_GFns4QsGCML5eS5LT6yiEDf6Q%40mail.gmail.com?utm_medium=email&utm_source=footer" target="_blank" onmousedown="this.href='https://groups.google.com/d/msgid/elasticsearch/CAKdsXoFcroqUqU7an5B8bHQd_GFns4QsGCML5eS5LT6yiEDf6Q%40mail.gmail.com?utm_medium\75email\46utm_source\75footer';return true;" onclick="this.href='https://groups.google.com/d/msgid/elasticsearch/CAKdsXoFcroqUqU7an5B8bHQd_GFns4QsGCML5eS5LT6yiEDf6Q%40mail.gmail.com?utm_medium\75email\46utm_source\75footer';return true;">https://groups.google.com/d/ msgid/elasticsearch/CAKdsXoFcroqUqU7an5B8bHQd_GFns4QsGCML5eS5LT6yiEDf6Q%40mail.gmail.com.
For more options, visit <a href="https://groups.google.com/d/optout" target="_blank" onmousedown="this.href='https://groups.google.com/d/optout';return true;" onclick="this.href='https://groups.google.com/d/optout';return true;">https://groups.google.com/d/optout.

You received this message because you are subscribed to a topic in the Google Groups "elasticsearch" group.
To unsubscribe from this topic, visit <a href="https://groups.google.com/d/topic/elasticsearch/hPvVz20v6YY/unsubscribe" target="_blank" onmousedown="this.href='https://groups.google.com/d/topic/elasticsearch/hPvVz20v6YY/unsubscribe';return true;" onclick="this.href='https://groups.google.com/d/topic/elasticsearch/hPvVz20v6YY/unsubscribe';return true;">https://groups.google.com/d/ topic/elasticsearch/hPvVz20v6YY/unsubscribe.
To unsubscribe from this group and all its topics, send an email to <a href="javascript:" target="_blank" gdfobfuscatedmailto="pEFViIyxX3MJ" onmousedown="this.href='javascript:';return true;" onclick="this.href='javascript:';return true;">elasticsearc...@googlegroups.com.
To view this discussion on the web visit <a href="https://groups.google.com/d/msgid/elasticsearch/CALY%3DcQA_LtOdS6Ht_DR57P%2BXkLWp5%3DV5Dz%2BVh2_cMkgy6kDSw%40mail.gmail.com?utm_medium=email&utm_source=footer" target="_blank" onmousedown="this.href='https://groups.google.com/d/msgid/elasticsearch/CALY%3DcQA_LtOdS6Ht_DR57P%2BXkLWp5%3DV5Dz%2BVh2_cMkgy6kDSw%40mail.gmail.com?utm_medium\75email\46utm_source\75footer';return true;" onclick="this.href='https://groups.google.com/d/msgid/elasticsearch/CALY%3DcQA_LtOdS6Ht_DR57P%2BXkLWp5%3DV5Dz%2BVh2_cMkgy6kDSw%40mail.gmail.com?utm_medium\75email\46utm_source\75footer';return true;">https://groups.google.com/d/msgid/elasticsearch/CALY%3DcQA_LtOdS6Ht_DR57P%2BXkLWp5%3DV5Dz%2BVh2_cMkgy6kDSw%40mail.gmail.com.
For more options, visit <a href="https://groups.google.com/d/optout" target="_blank" onmousedown="this.href='https://groups.google.com/d/optout';return true;" onclick="this.href='https://groups.google.com/d/optout';return true;">https://groups.google.com/d/optout.

You received this message because you are subscribed to the Google Groups "elasticsearch" group.
To unsubscribe from this group and stop receiving emails from it, send an email to [hidden email].
To view this discussion on the web visit https://groups.google.com/d/msgid/elasticsearch/7a3b23e67e634ca39a224519353a5d2e%40googlegroups.com.
For more options, visit https://groups.google.com/d/optout.


Thanks for the clear and simple explanation.
However, will the cluster (with auto-expand replicas) ever go green if it has been grown from 2 to 3 nodes (triggering the replica count to grow to two) and then downsized to two nodes again? In other words, does the auto-expand replicas setting work both ways, or only upwards?
Thanks again.
G.
On 11 Jul 2014 12:11, "Glen Smith" <[hidden email]> wrote:
Hi Goncalo,
I think it's important that you understand: multiple copies of a shard will never be located on the same node. Not two replicas, and not the primary and one replica.
To witness this, run a server on your local machine and create an index with the defaults: 5 shards, 1 replica. You will see that your cluster is "yellow" and has 5 unallocated shards.
Hope that helps create a better mental picture of shard allocation.
On Friday, July 11, 2014 2:00:47 AM UTC-4, Gonçalo Luiz wrote:
Hi Ivan,
Does this mean that if a node comes back while a replication is underway, we'll end up with one node holding 2 replicas and one node holding none?
Scenario:
Node A - Replica 2
Node B - Replica 3
Node C - Replica 1
If node A dies and node B gets Replica 2, then as soon as node A (or a replacement) is brought up, is the final configuration likely to be
Node A (or replacement) - no replicas
Node B - Replicas 3 and 2
Node C - Replica 1
or is there a rebalance that takes place?
Thanks,
Gonçalo
Gonçalo Luiz
On 10 July 2014 22:11, Ivan Brusic <[hidden email]> wrote:
It's only been around for 3.5 years: https://github.com/elasticsearch/elasticsearch/issues/623 :)
I should clarify part of my previous statement.
"By default, the ongoing recovery is not cancelled when the missing node rejoins the cluster. You can change the gateway settings [2] to control when recovery kicks in."
What I meant to say is that an ongoing recovery is never cancelled once it has commenced, no matter what settings. By default, recovery happens immediately, but can be changed with the gateway settings.
 Ivan
Indeed, auto_expand_replicas "all" triggers an update cluster settings action each time a node is added.
Still blown away by the many settings Elasticsearch provides. Feeling small. Homework: collecting a gist textfile of all ES 1.2 settings.
Jörg
Sticking to your use case, you might want to set auto_expand_replicas to "all" [1]. I've never used it, but it sounds like what you are looking for.
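For reference, a minimal sketch of what applying that setting could look like over the REST API. The value is a lower-upper range, where the upper bound can be "all" so the replica count tracks the node count; the index name my_index is a placeholder:

```
curl -XPUT 'http://localhost:9200/my_index/_settings' -d '{
  "index": { "auto_expand_replicas": "0-all" }
}'
```

With this in place, there is no need to adjust the replica number by hand as nodes join or leave.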
By default, the ongoing recovery is not cancelled when the missing node rejoins the cluster. You can change the gateway settings [2] to control when recovery kicks in.
Cheers,
Ivan
On Thu, Jul 10, 2014 at 12:39 PM, Gonçalo Luiz <[hidden email]> wrote:
I get it now.
I agree that setting the number of replicas is tied to the deployment reality in each case and its derived variables, and thus there is no one formula to fit all cases (it wouldn't be a setting otherwise).
What I was trying to cover was the theoretical/extreme case where any node may fail at any time, and the best way to minimize the chance of losing data. Also, scaling down the installation (potentially down to one node) without having to worry about selecting nodes that hold different replicated shards is an example that can benefit from such a configuration.
I'm however not clear yet on what happens when a node goes down (triggering extra replication amongst the survivors) and then comes up again. Is the ongoing replication cancelled and the returning node brought up to date?
Thanks for your valuable input.
G.
All I am saying is that it depends on the probability of three nodes failing simultaneously, not on the total number of nodes holding a replica. You could just as well have 5 nodes and consider the probability of 4 nodes failing simultaneously, and so on.
As an illustration, suppose you have a data center with two independent electric circuits and the probability of failure corresponds with power outage, then it is enough to distribute nodes equally over servers using the two independent power lines in the racks. If one electric circuit (plus UPS) fails, half of the nodes go down. With replica level 1, ES cluster will keep all the data. There is no need to set replica level equal to node number.
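Jörg's point that "the replica number is a statistical entity" can be made concrete with a little arithmetic. Assuming independent node failures, each with probability p (a simplification; correlated failures like the shared power circuit above behave differently), all data for a shard is lost only when every one of its r + 1 copies sits on a failed node:

```java
// Idealized model, not an Elasticsearch API: with r replicas there are
// r + 1 copies of each shard, and data is lost only if every node
// holding a copy fails independently with probability p.
public class ReplicaLoss {
    static double lossProbability(double p, int replicas) {
        return Math.pow(p, replicas + 1);
    }

    public static void main(String[] args) {
        // e.g. p = 0.01 per node: 1 replica -> roughly 1e-4,
        // 3 replicas -> roughly 1e-8
        System.out.println(lossProbability(0.01, 1));
        System.out.println(lossProbability(0.01, 3));
    }
}
```

Each extra replica multiplies the loss probability by p again, which is why the marginal benefit of "replicas on every node" shrinks quickly once correlated failures dominate.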
Jörg
On Thu, Jul 10, 2014 at 8:55 AM, Gonçalo Luiz <[hidden email]> wrote:
Hi Joe,
Thanks for your reply.
On this thought:
"
From my view your idea of better fault tolerance does not make much sense. The replica number is a statistical entity that is related to the probability of faults. The higher the replica, the higher the probability of surviving faults. There is no correlation to the total number of nodes in a cluster to ensure better fault tolerance. The fault tolerance depends on the probability of a node failure."
I'm not getting it. If we have 4 nodes and 2 replicas, it means that 3 of the nodes will have data of a given index (assuming a single shard to ease the discussion), right? If those three nodes fail simultaneously, the 4th will have no way of grabbing a copy and the data will be lost forever. However, if the number of replicas is 3, the 4th node would be able to keep serving requests and eventually hand a copy over to a new node joining the cluster.
How does this not help fault tolerance? Am I missing something?
Thanks,
G.
1. You can set the replica number at index creation time or by the cluster update settings action org.elasticsearch.action.admin.cluster.settings.ClusterUpdateSettingsAction
2. You will get an index with a lower replica number :)
3. Yes. Quick code example:

    ClusterState clusterState = clusterService.state();
    // find number of data nodes
    int numberOfDataNodes = 0;
    for (DiscoveryNode node : clusterState.getNodes()) {
        if (node.isDataNode()) {
            numberOfDataNodes++;
        }
    }

4. Yes. Use org.elasticsearch.cluster.ClusterStateListener
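Points 1 and 4 could be combined roughly as follows against the ES 1.x Java API. This is a sketch only, not a tested implementation: the index name "myindex", the class name, and the decision to target (dataNodes - 1) replicas are all assumptions, and the code needs a running embedded node to do anything:

```java
// Sketch: assumes an embedded node's Client and an existing index "myindex".
import org.elasticsearch.client.Client;
import org.elasticsearch.cluster.ClusterChangedEvent;
import org.elasticsearch.cluster.ClusterStateListener;
import org.elasticsearch.common.settings.ImmutableSettings;

public class ReplicaAdjuster implements ClusterStateListener {
    private final Client client;

    ReplicaAdjuster(Client client) {
        this.client = client;
    }

    // 1. Set the replica count via an index settings update.
    void setReplicas(int replicas) {
        client.admin().indices().prepareUpdateSettings("myindex")
              .setSettings(ImmutableSettings.settingsBuilder()
                      .put("index.number_of_replicas", replicas))
              .get();
    }

    // 4. Invoked on every cluster state change, including nodes joining/leaving.
    @Override
    public void clusterChanged(ClusterChangedEvent event) {
        if (event.nodesChanged()) {
            // one copy per data node: replicas = data nodes - 1 (primary)
            setReplicas(event.state().nodes().dataNodes().size() - 1);
        }
    }
}
```

Note that the auto_expand_replicas setting discussed elsewhere in this thread achieves the same effect without any custom listener code.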
From my view your idea of better fault tolerance does not make much sense. The replica number is a statistical entity that is related to the probability of faults. The higher the replica, the higher the probability of surviving faults. There is no correlation to the total number of nodes in a cluster to ensure better fault tolerance. The fault tolerance depends on the probability of a node failure.
From the viewpoint of balancing load, it makes much sense. When setting replica number to the number of nodes, the cluster can balance search requests to all nodes which is optimal.
Jörg
On Wed, Jul 9, 2014 at 11:57 PM, <[hidden email]> wrote:
Hi all,
I'm considering using elasticsearch as a repository for a PoC I'm currently developing.
This PoC models an application that needs durability but not isolability, so I'm fine with the eventual consistency of reads against the most recent writes.
As durability is paramount (we can't afford to lose the data unless 100% of the nodes die), I've been exploring the option of setting every shard to have N replicas, where N is the number of nodes in the cluster.
From what I've read so far, it is possible to dynamically set the number of replicas, which triggers a throttled replication process.
I would like to have some help on the following steps (I'm running ES in embedded mode in a Java application):
1 - How can I set the number of replicas using the native Java client?
2 - What happens if a node dies and the number of replicas is lowered to the number of surviving ones?
3 - Is it possible, from a participating node, to access the list of nodes in the cluster so I can use their count to set the number of replicas (step 1)?
4 - Is it possible to hook a callback to the event of a node joining or leaving the cluster?
I'm envisioning the following mechanism:
a) Start with one node, a given number of shards and 1 replica
b) Each time a node joins, I adjust the number of replicas to match the new node count. In this case, there would be 2 replicas
c) An arbitrary number of nodes might be added, and I'd execute step b) accordingly
d) At any time a node might leave the cluster, and thus I need to lower the number of replicas to the new node count (I assume that the cluster would otherwise proceed to compensate for the lost replica by asking an existing node to hold 2 replicas instead of one; is this stopped by lowering the number of replicas?)
The ultimate goal is to make sure no data is lost unless 100% of the nodes die before a new one can acquire a full replica.
Is this doable? Does this make sense at all?
For the time being, I'm not worried about lack of disk space or bandwidth as I'm still in the very early days of the PoC.
Thank you very much for all your work and help.
Gonçalo

Hi Jörg,
Can you please give a server-side or client-side example of using ClusterStateListener? Do I have to use a plugin? If so, which module do I register/override? If not, do I have to use a node client (not a TransportClient), retrieve the ClusterService somehow, and then register?
Thanks,
Sandeep
On Thursday, 10 July 2014 22:25:51 UTC+5:30, Jörg Prante wrote:
On the client side, you can't use a cluster state listener; it is for nodes that have access to a local copy of the master cluster state. Clients must execute an action to ask for the cluster state, and with the current transport request/response cycle, they must poll for new events ...
Jörg
On Thu, Jul 10, 2014 at 6:38 PM, Ivan Brusic <iv...@...> wrote:
Jörg, have you actually implemented your own ClusterStateListener? I never had much success. I tried using that interface or even PublishClusterStateAction.NewClusterStateListener, but either I could not successfully configure the module (the former) or I received no events (the latter). This was implemented on the client side, not as a plugin.
Cheers,
Ivan
On Wed, Jul 9, 2014 at 4:21 PM, joerg...@... wrote:
4. Yes. Use org.elasticsearch.cluster.ClusterStateListener
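Since clients cannot register a listener, Jörg's polling suggestion could be sketched like this against the ES 1.x Java API. This is illustrative only: the class and method names are made up, and it does nothing without a reachable cluster:

```java
// Sketch: a client (e.g. TransportClient) polling the master's cluster
// state, since ClusterStateListener is unavailable on the client side.
import org.elasticsearch.client.Client;
import org.elasticsearch.cluster.ClusterState;

public class NodeCountPoller {
    static int countDataNodes(Client client) {
        ClusterState state = client.admin().cluster()
                .prepareState().get().getState();
        return state.nodes().dataNodes().size();
    }
    // Call countDataNodes() on a timer and react when the value changes.
}
```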


