Among the qualities that help someone working intensively with computers, one that I’m definitely not short of is impatience. So when, while investigating an issue with the ELK stack, I started hitting seemingly random long freezes of Kibana, I couldn’t simply live with it…
I had a simple local setup to try to reproduce my problem: out-of-the-zip Elasticsearch and Kibana servers, and nginx in front, proxying /kibana to localhost:5601. While changing the configuration of Kibana to account for nginx, I set server.basePath (to /kibana), but also server.host to 0.0.0.0, matching the real environment (where nginx may be on another machine).
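For reference, here is a minimal sketch of that setup. The directive and option names (location, proxy_pass, server.basePath, server.host) are real, but the proxying details (prefix handling, headers) are simplified compared to a production config:

```nginx
# nginx: the upstream is referred to by name, which on a dual-stack
# machine resolves to both 127.0.0.1 and ::1
location /kibana/ {
    proxy_pass http://localhost:5601/;
}
```

```yaml
# kibana.yml: account for the /kibana prefix and listen on all IPv4 addresses
server.basePath: "/kibana"
server.host: "0.0.0.0"
```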
When checking the network tab in Chrome’s DevTools, I was seeing more or less random calls taking almost exactly one minute to complete, very far from the usual few tens of milliseconds. The nginx logs gave it away: sometimes, nginx would try both the IPv4 and IPv6 loopback addresses, starting with the IPv6 one. I would have guessed that since Kibana was only listening on IPv4 addresses, that attempt would fail immediately, but as it happens, it only times out, after (of course!) exactly one minute. Then nginx tries the 127.0.0.1 address, which works normally, and keeps using it for a while, until it deems it a good idea to check again whether someone is now listening on [::1]:5601, and bam, another (very) long call. The fix is either to configure nginx to send traffic to 127.0.0.1:5601 instead of localhost:5601, or to set server.host to ::0 in Kibana’s config.
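Concretely, either of these two changes avoids the doomed IPv6 attempt (same caveats as above, this is just a sketch):

```nginx
# Option 1 - nginx: point at the IPv4 loopback explicitly,
# so no name resolution (and no ::1 attempt) happens
location /kibana/ {
    proxy_pass http://127.0.0.1:5601/;
}
```

```yaml
# Option 2 - kibana.yml: have Kibana listen on the IPv6 side as well,
# so that connections to [::1]:5601 are actually answered
server.host: "::0"
```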
And the issue I had in the first place? It turned out to be an actual bug in Kibana, which was much easier to find without constantly losing focus because of the one-minute pauses.