java.lang.OutOfMemoryError when starting second node in 0.11.0 cluster

John Chang
I've just started trying out Elasticsearch 0.11.0.  I start a node with 'bin/elasticsearch -f' (with all the defaults in bin/elasticsearch.in.sh) and it comes up fine.  Then, when I start my Java application, which tries to connect, the first node I started from the command line throws a 'java.lang.OutOfMemoryError: Java heap space' as soon as my application tries to join the cluster.

I can start a second node and have it join the first successfully if the second node is also just an unzipped 0.11.0 distribution started with 'bin/elasticsearch -f', but the node inside my Java application can't join without causing problems for the node it's trying to join.  I am using 0.11.0 for both the first node and the Java application.

Below is how I try to start up my node in Java.  All the config settings reflect those in the elasticsearch.yml of the node that starts from the command line.

    private void initClient() {
            NodeBuilder nb = NodeBuilder.nodeBuilder().client(true);
            nb.settings().put("gateway.type", "fs");
            nb.settings().put("gateway.fs.location", "/tmp/elasticsearch/gateway");
            nb.settings().put("index.store.type", "mmapfs");
            nb.settings().put("index.store.fs.mmapfs.enabled", "true");
            nb.settings().put("index.merge.policy.merge_factor", "20");
            nb.settings().put("path.work", "/tmp/elasticsearch2/work");
            nb.settings().put("path.logs", "/tmp/elasticsearch2/logs");
            // Note: this key was originally set twice (first to "localhost:9300");
            // the second put overwrites the first, so only this value takes effect.
            nb.settings().put("discovery.zen.ping.unicast.hosts", elasticSearchHostsList);

            this.indexClient = nb.node().client();   // when this line executes, I get the error in the other node
    }

Thanks for any help you can offer.

Re: java.lang.OutOfMemoryError when starting second node in 0.11.0 cluster

John Chang
I should note that when these nodes come up, their work, gateway, and log directories are empty.  There is nothing indexed at all yet.

Re: java.lang.OutOfMemoryError when starting second node in 0.11.0 cluster

kimchy
Administrator
Are you connecting with a Java client of the same version?



Re: java.lang.OutOfMemoryError when starting second node in 0.11.0 cluster

John Chang
Yes, both are 0.11 - the Java app as well as the command-line-started data node.

Re: java.lang.OutOfMemoryError when starting second node in 0.11.0 cluster

kimchy
Administrator
This is really strange...  First, a note regarding the configuration: you don't need to set index-level configuration on the client side.  Since you set client(true), no shards will be allocated to it, so it won't need those settings (like the store type or merge factor).
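
For illustration, here is the earlier snippet trimmed down to just the cluster-level settings a client node actually needs - a sketch only, reusing the field names from the original post:

    private void initClient() {
            // client(true): no shards will be allocated to this node, so
            // index-level settings (store type, merge factor, ...) are unnecessary.
            NodeBuilder nb = NodeBuilder.nodeBuilder().client(true);
            nb.settings().put("gateway.type", "fs");
            nb.settings().put("gateway.fs.location", "/tmp/elasticsearch/gateway");
            nb.settings().put("discovery.zen.ping.unicast.hosts", elasticSearchHostsList);
            this.indexClient = nb.node().client();
    }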

If there are no indices, my best guess is different versions.  Just to be on the safe side, can you add this right at the start of your client:

System.out.println(org.elasticsearch.Version.full());

and see what it prints...

If it prints 0.11, is there a way for me to get the heap dump of the server node? It should be under ES_HOME/work/heap.
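
If no dump shows up there, the standard HotSpot flags (generic JVM options, not Elasticsearch-specific; the path below is just an example) can be added to JAVA_OPTS in bin/elasticsearch.in.sh to force a dump on the next OOM:

    JAVA_OPTS="$JAVA_OPTS -XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=/tmp/elasticsearch/heap"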

-shay.banon



Re: java.lang.OutOfMemoryError when starting second node in 0.11.0 cluster

John Chang
Looks like your suspicion is right.  Even though I have changed my mvn dependencies to the Elasticsearch 0.11.0 jar, for some reason the old version must be sneaking in there.  The line you suggested prints out:

elasticsearch/0.9.0

Thanks.  I'll try to figure out where the 0.9 jar is sneaking into my project.
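
In case it helps anyone else tracking down the same thing: Maven's dependency report should reveal which artifact is pulling in the old jar transitively - scan its output for the elasticsearch entries:

    mvn dependency:tree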

Re: java.lang.OutOfMemoryError when starting second node in 0.11.0 cluster

kimchy
Administrator
Phew :)
