[db-wg] NRTM replication inefficiencies
Anand Buddhdev
anandb at ripe.net
Fri Dec 8 15:23:56 CET 2017
On 08/12/2017 15:05, Edward Shryane via db-wg wrote:

> Yes, this is already possible, you can set the SO_KEEPALIVE option
> on the socket.
>
> However, at least on CentOS7 (Linux), the default is to wait 2 hours
> before sending a keepalive probe, then 9 probes have to fail (each 75s
> apart) before declaring the connection is broken. Changing this default
> behaviour is applied system-wide.

A RIPE NCC internal Python application that uses NRTM does this:

    conn = socket.create_connection((self.nrtm_host, self.nrtm_port))

    # enable TCP keepalive (but only on Linux - the TCP_KEEPIDLE
    # option is not available for other OSes)
    # send keepalives after 60s of inactivity
    # try 3 probes at 15s intervals before closing the connection
    if platform.system() == 'Linux':
        logger.info('enabling TCP keepalive')
        conn.setsockopt(socket.SOL_SOCKET, socket.SO_KEEPALIVE, 1)
        conn.setsockopt(socket.SOL_TCP, socket.TCP_KEEPIDLE, 60)
        conn.setsockopt(socket.SOL_TCP, socket.TCP_KEEPINTVL, 15)
        conn.setsockopt(socket.SOL_TCP, socket.TCP_KEEPCNT, 3)

Regards,
Anand Buddhdev
RIPE NCC
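A self-contained sketch of the same idea (not the NCC application itself; the loopback server is only there to make it runnable): set the keepalive options per socket as above, then read them back with getsockopt to confirm that the override applies to that socket only, leaving the system-wide defaults untouched.

    import platform
    import socket

    # throwaway loopback listener so create_connection has a peer
    srv = socket.socket()
    srv.bind(('127.0.0.1', 0))
    srv.listen(1)

    conn = socket.create_connection(srv.getsockname())
    conn.setsockopt(socket.SOL_SOCKET, socket.SO_KEEPALIVE, 1)

    # the finer-grained knobs are Linux-specific, as noted above
    if platform.system() == 'Linux':
        conn.setsockopt(socket.SOL_TCP, socket.TCP_KEEPIDLE, 60)
        conn.setsockopt(socket.SOL_TCP, socket.TCP_KEEPINTVL, 15)
        conn.setsockopt(socket.SOL_TCP, socket.TCP_KEEPCNT, 3)
        # read back: this socket now probes after 60s, not the
        # system default of 7200s (tcp_keepalive_time)
        assert conn.getsockopt(socket.SOL_TCP, socket.TCP_KEEPIDLE) == 60

    print(conn.getsockopt(socket.SOL_SOCKET, socket.SO_KEEPALIVE) != 0)

    conn.close()
    srv.close()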