multicast packets


multicast packets

Szymon Gwóźdź
Hi!

When I launch ElasticSearch I get log messages like these:

[13:48:24,418][WARN ][jgroups.UDP              ] send buffer of socket java.net.DatagramSocket@46b29c9d was set to 640KB, but the OS only allocated 167.77KB. This might lead to performance problems. Please set your max send buffer in the OS correctly (e.g. net.core.wmem_max on Linux)
[13:48:24,418][WARN ][jgroups.UDP              ] receive buffer of socket java.net.DatagramSocket@46b29c9d was set to 20MB, but the OS only allocated 524.29KB. This might lead to performance problems. Please set your max receive buffer in the OS correctly (e.g. net.core.rmem_max on Linux)
[13:48:24,418][WARN ][jgroups.UDP              ] send buffer of socket java.net.MulticastSocket@7846a55e was set to 640KB, but the OS only allocated 167.77KB. This might lead to performance problems. Please set your max send buffer in the OS correctly (e.g. net.core.wmem_max on Linux)
[13:48:24,418][WARN ][jgroups.UDP              ] receive buffer of socket java.net.MulticastSocket@7846a55e was set to 25MB, but the OS only allocated 524.29KB. This might lead to performance problems. Please set your max receive buffer in the OS correctly (e.g. net.core.rmem_max on Linux)

How can I change these buffers in Java (not in the OS; I don't want to change system values)? I've tried something like this:

discovery:
    jgroups:
        config:    udp
        mcast_send_buf_size: 10000
        mcast_recv_buf_size: 10000
        ucast_send_buf_size: 10000
        ucast_recv_buf_size: 10000

but it didn't work (the logs showed the same values as before) :(

I have to get rid of these warnings, because the network in my company rejects multicast messages that are fragmented into too many packets - the admins sent me logs like this:

Discard IP fragment set with more than 24 elements: src = 10.1.112.214, dest = 228.8.8.8, proto = udp, id = 18373

And probably because of this, after a few minutes of feeding ElasticSearch with data I get logs like this:

[12:28:16,346][WARN ][jgroups.pbcast.NAKACK    ] test-4-16470: dropped message from test-5-22451 (not in xmit_table), keys are [test-4-16470, test-3-24681, test-2-34614, test-1-48736], view=[test-1-48736|9] [test-1-48736, test-3-24681, test-4-16470, test-2-34614]

Re: multicast packets

kimchy
Administrator
These are the recommended values for JGroups when using multicast. You can either switch to unicast if you don't want to use multicast, or change the values at the OS level.
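
For reference, the OS-level change the warnings point at would look something like this on Linux (a sketch only, run as root; the sizes just need to be at least as large as what JGroups requests, i.e. 640KB send and 25MB receive in the logs above):

    # raise the kernel's maximum socket buffer sizes named in the warnings
    sysctl -w net.core.wmem_max=655360      # max send buffer, >= the 640KB JGroups asks for
    sysctl -w net.core.rmem_max=26214400    # max receive buffer, >= the 25MB JGroups asks for

To make this survive a reboot, put the equivalent net.core.wmem_max / net.core.rmem_max entries in /etc/sysctl.conf.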

cheers,
shay.banon


Re: multicast packets

Szymon Gwóźdź
Yes, but could you explain why my discovery.jgroups config isn't working?

Regards
Szymon Gwóźdź

Re: multicast packets

kimchy
Administrator
Yeah, that's because those settings are (sadly) not wired up to be injected as system properties (which is what I do with the values you pass in the elasticsearch.yml configuration). You can create your own copy of the udp.xml file, place it under the config/jgroups location, and change the values there.
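
For example, the relevant fragment would look something like this (a sketch only, assuming the stock JGroups udp.xml layout; the attribute names mirror the keys you tried in elasticsearch.yml, but start from the udp.xml that ships with your version rather than copying this):

    <!-- config/jgroups/udp.xml (fragment): lower the requested buffer sizes so
         they fit within what the OS will actually allocate; keep every other
         attribute of the shipped <UDP> element unchanged -->
    <UDP mcast_addr="${jgroups.udp.mcast_addr:228.8.8.8}"
         ucast_send_buf_size="150000"
         ucast_recv_buf_size="500000"
         mcast_send_buf_size="150000"
         mcast_recv_buf_size="500000" />

The buffer values here are only examples, chosen to stay under the ~167KB send / ~524KB receive limits the OS reported in your warnings.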

cheers,
shay.banon
