Status: Resolved (View Workflow)
Affects Version/s: 3.6.5.Final, 3.7.0.Final
java version "1.8.0_181"
Java(TM) SE Runtime Environment (build 1.8.0_181-b13)
Java HotSpot(TM) 64-Bit Server VM (build 25.181-b13, mixed mode)
Linux machine-10 4.1.12-112.16.4.el7uek.x86_64 #2 SMP Mon Mar 12 23:57:12 PDT 2018 x86_64 x86_64 x86_64 GNU/Linux
XNIO I/O threads start consuming more CPU about an hour after deployment. Can you please help us with this issue:
1. We are using Spring Boot + Undertow (version details are mentioned below)
2. App server is behind NGnix load balancer. We are using keep alive connections from NGnix to app server
3. When we deploy the app server, everything is OK, CPU usage is low, app is able to close TCP connections.
4. After an hour or so, the app server process starts taking up 25% of the CPU. Upon some inspection, I see that app is not able to close the CLOSE_WAIT connections (which may be the root cause )
5. When we leave the app server up and running for few days, it takes up more than 50% of CPU and the number of CLOSE_WAIT connections grows.
6. We are consistently able to reproduce with following versions:
- spring-boot-starter-undertow: 2.1.1.RELEASE and 2.1.3.RELEASE
- xnio-nio: 3.6.5.Final and 3.7.0.Final
7. I noticed one 502 (Bad Gateway) from the app server right around the time it starts taking up high CPU, and at that moment I see the CLOSE_WAIT connections (although this app server doesn't have much traffic). Maybe when the XNIO I/O thread rejects the incoming request, it gets into this state?
8. Please see attached images.
We are using the following server options:
- builder.setServerOption(UndertowOptions.NO_REQUEST_TIMEOUT, 30 * 1000);
- builder.setServerOption(UndertowOptions.REQUEST_PARSE_TIMEOUT, 30 * 1000);
- builder.setServerOption(UndertowOptions.IDLE_TIMEOUT, 60 * 1000);
- server.connection-timeout=50000 (Spring property)
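For context, here is a minimal sketch of how we apply the builder options above in Spring Boot 2.x, via an Undertow builder customizer on the servlet web server factory (the class name `UndertowTimeoutConfig` is just illustrative):

```java
import io.undertow.UndertowOptions;
import org.springframework.boot.web.embedded.undertow.UndertowServletWebServerFactory;
import org.springframework.boot.web.server.WebServerFactoryCustomizer;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class UndertowTimeoutConfig {

    @Bean
    public WebServerFactoryCustomizer<UndertowServletWebServerFactory> undertowCustomizer() {
        return factory -> factory.addBuilderCustomizers(builder -> {
            // Close idle keep-alive connections that send no new request within 30s
            builder.setServerOption(UndertowOptions.NO_REQUEST_TIMEOUT, 30 * 1000);
            // Abort requests whose headers take longer than 30s to parse
            builder.setServerOption(UndertowOptions.REQUEST_PARSE_TIMEOUT, 30 * 1000);
            // Close connections with no read/write activity for 60s
            builder.setServerOption(UndertowOptions.IDLE_TIMEOUT, 60 * 1000);
        });
    }
}
```

The `server.connection-timeout=50000` property is set separately in application.properties.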
Please let me know if you need additional details.