Lsof hangs when running nginx_show

Having some trouble here after updating OOD 1.2.18 → 1.2.20. nginx_show will hang for some users on the lsof process. Looking into this by running it against a specific userid, I find that every other hour there is a 'hung' lsof:
[mrd20@wmaster ~]$ ps -ef --forest | grep 'lsof|xxj34|stage' | grep -v grep
root 24608 16521 0 16:28 pts/3 00:00:00 | _ ruby -I/opt/ood/nginx_stage/lib -rnginx_stage -e NginxStage::Application.start -- nginx_show --user=xxj34
root 24632 24608 0 16:28 pts/3 00:00:00 | _ lsof -F piu /var/run/ondemand-nginx/xxj34/passenger.sock
root 24633 24632 0 16:28 pts/3 00:00:00 | _ lsof -F piu /var/run/ondemand-nginx/xxj34/passenger.sock
root 25783 25779 0 10:00 ? 00:00:00 | _ /bin/sh -c [ -f /opt/ood/nginx_stage/sbin/nginx_stage ] && /opt/ood/nginx_stage/sbin/nginx_stage nginx_clean 2>&1 | logger -t nginx_clean
root 25786 25783 0 10:00 ? 00:00:00 | _ ruby -I/opt/ood/nginx_stage/lib -rnginx_stage -e NginxStage::Application.start -- nginx_clean
root 25833 25786 0 10:00 ? 00:00:00 | | _ lsof -F piu /var/run/ondemand-nginx/axd497/passenger.sock
root 25834 25833 0 10:00 ? 00:00:00 | | _ lsof -F piu /var/run/ondemand-nginx/axd497/passenger.sock
root 40854 40848 0 12:00 ? 00:00:00 | _ /bin/sh -c [ -f /opt/ood/nginx_stage/sbin/nginx_stage ] && /opt/ood/nginx_stage/sbin/nginx_stage nginx_clean 2>&1 | logger -t nginx_clean
root 40855 40854 0 12:00 ? 00:00:00 | _ ruby -I/opt/ood/nginx_stage/lib -rnginx_stage -e NginxStage::Application.start -- nginx_clean
root 40914 40855 0 12:00 ? 00:00:00 | | _ lsof -F piu /var/run/ondemand-nginx/klm193/passenger.sock
root 40915 40914 0 12:00 ? 00:00:00 | | _ lsof -F piu /var/run/ondemand-nginx/klm193/passenger.sock
root 46740 46735 0 14:00 ? 00:00:00 | _ /bin/sh -c [ -f /opt/ood/nginx_stage/sbin/nginx_stage ] && /opt/ood/nginx_stage/sbin/nginx_stage nginx_clean 2>&1 | logger -t nginx_clean
root 46742 46740 0 14:00 ? 00:00:00 | _ ruby -I/opt/ood/nginx_stage/lib -rnginx_stage -e NginxStage::Application.start -- nginx_clean
root 46803 46742 0 14:00 ? 00:00:00 | | _ lsof -F piu /var/run/ondemand-nginx/ssp105/passenger.sock
root 46804 46803 0 14:00 ? 00:00:00 | | _ lsof -F piu /var/run/ondemand-nginx/ssp105/passenger.sock
root 12130 12126 0 16:00 ? 00:00:00 _ /bin/sh -c [ -f /opt/ood/nginx_stage/sbin/nginx_stage ] && /opt/ood/nginx_stage/sbin/nginx_stage nginx_clean 2>&1 | logger -t nginx_clean
root 12133 12130 0 16:00 ? 00:00:00 _ ruby -I/opt/ood/nginx_stage/lib -rnginx_stage -e NginxStage::Application.start -- nginx_clean
root 12196 12133 0 16:00 ? 00:00:00 | _ lsof -F piu /var/run/ondemand-nginx/ssp105/passenger.sock
root 12197 12196 0 16:00 ? 00:00:00 | _ lsof -F piu /var/run/ondemand-nginx/ssp105/passenger.sock
xxj34 16039 1 0 11:10 ? 00:00:00 Passenger watchdog
xxj34 16042 16039 0 11:10 ? 00:00:08 _ Passenger core
root 16068 1 0 11:10 ? 00:00:00 nginx: master process (xxj34) -c /var/lib/ondemand-nginx/config/puns/xxj34.conf
xxj34 16082 16068 0 11:10 ? 00:00:00 _ nginx: worker process

Perhaps this has nothing to do with the upgrade; I'm going to evaluate on Monday. I'm wondering whether the every-two-hours process is by design, or whether it's related to the situation with nginx_show that I'm currently experiencing.
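
For reference, one way I can check whether lsof itself hangs against a given PUN socket is to run the same command nginx_stage does, wrapped in a timeout (just my own diagnostic sketch, assuming coreutils timeout is available):

[mrd20@wmaster ~]$ sudo timeout 15 lsof -F piu /var/run/ondemand-nginx/xxj34/passenger.sock
[mrd20@wmaster ~]$ echo $?   # 124 means lsof did not finish within 15 seconds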

Thanks,
~ Em

You may have run into the bug linked below. We're not quite sure why those lsof calls hang. As for the timing, the every-two-hours processes are likely from the cron job that cleans them up (or is supposed to). I don't believe it's due to the upgrade, but I can't quite rule that out yet.
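
For what it's worth, the command in your ps output looks like the nginx_clean cron job. A guess at what that entry might look like, pieced together from the command visible in your listing (the path and */2 schedule below are inferred from your 10:00/12:00/14:00/16:00 timestamps, not pulled from your system):

# e.g. in /etc/cron.d/ood (location is a guess; check your installation)
0 */2 * * * root [ -f /opt/ood/nginx_stage/sbin/nginx_stage ] && /opt/ood/nginx_stage/sbin/nginx_stage nginx_clean 2>&1 | logger -t nginx_clean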

Looks like we were supposed to clean those processes up, but we never did because the lsof command itself hung, which is a bit worrisome.
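
In the meantime, a possible workaround (not an official fix, just a sketch) is to find and kill the stuck lsof processes so nginx_show and nginx_clean can make progress again:

# list lsof processes stuck on the per-user PUN sockets
pgrep -af 'lsof -F piu /var/run/ondemand-nginx'
# if they never exit on their own, kill them (as root)
pkill -f 'lsof -F piu /var/run/ondemand-nginx'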