bash script to collect new network sockets in a given period of time

  bess       2021/09/06       241



2 answers
241 votes



The following bash code is meant to check, every second, the number of NEW (relative to the previous second) network socket files. At the end of the run it sums every 60 entries (which should correspond to 60 seconds) and writes a file called verdict.csv that tells me how many new network sockets were opened in each minute. (I run under the assumption that those sockets live for more than 1 second, so I don't miss new ones.)

The problem starts when I run it on a busy server where a lot of new network sockets are being opened: the lsof_func iterations take much more than 1 second (sometimes even more than a minute), and then I cannot trust the output of this script.

TIMETORUN=84600           # Time for the script to run in seconds

# collect number of new socket files in the last second
lsof_func () {
    TIME=0
    : > /tmp/lsof_test                # start from an empty snapshot
    while [[ $TIME -lt $TIMETORUN ]]; do
        lsof -i -t > /tmp/lsof_test2
        # comm -23 acts as a set-subtraction operator (lsof_test2 minus lsof_test):
        # it keeps only the entries that appeared since the previous iteration
        echo "$(date +"%Y-%m-%d %H:%M:%S"),$(comm -23 <(sort /tmp/lsof_test2) <(sort /tmp/lsof_test) | wc -l)" >> /tmp/results.csv
        mv /tmp/lsof_test2 /tmp/lsof_test
        sleep 0.9
        TIME=$((TIME + 1))
    done
}

# Calculate the number of new connections per minute
verdict () {
    uniq /tmp/results.csv > /tmp/results_for_verdict.csv
    echo "Timestamp,New Procs" > /tmp/verdict.csv
    while [[ $(wc -l < /tmp/results_for_verdict.csv) -gt 60 ]]; do
        echo -n "$(head -n 1 /tmp/results_for_verdict.csv | awk -F, '{print $1}')," >> /tmp/verdict.csv
        head -n 60 /tmp/results_for_verdict.csv | awk -F, '{s+=$2} END {print s}' >> /tmp/verdict.csv
        # drop the 60 entries just summarized
        sed -n '61,$p' < /tmp/results_for_verdict.csv > /tmp/tmp_results_for_verdict.csv
        mv /tmp/tmp_results_for_verdict.csv /tmp/results_for_verdict.csv
    done
    # summarize the remaining (possibly partial) minute
    echo -n "$(head -n 1 /tmp/results_for_verdict.csv | awk -F, '{print $1}')," >> /tmp/verdict.csv
    head -n 60 /tmp/results_for_verdict.csv | awk -F, '{s+=$2} END {print s}' >> /tmp/verdict.csv
}


lsof_func
verdict

rm /tmp/lsof_test
#rm /tmp/lsof_test2          # already renamed to /tmp/lsof_test by the loop
rm /tmp/results.csv
rm /tmp/results_for_verdict.csv

How can I make the iterations of the lsof_func function more consistent / faster, so that it really collects this data every second?


We have a simple bug: using lsof -t causes it to print one line per process rather than one line per socket. If we want to observe changes to the open sockets, as claimed in the question, then we'll want something like lsof -i -b -n -F 'n' | grep '^n'.
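The snapshot-diffing itself can be sketched as a small helper around comm. The snapshot files below are canned examples (the socket strings are made up for illustration); on a real system each snapshot would come from lsof -i -b -n -F 'n' | grep '^n':

```shell
# count_new: hypothetical helper that counts lines present in the current
# snapshot ($2) but absent from the previous one ($1).  comm -13 suppresses
# lines unique to the first file and lines common to both, leaving only
# the entries that are new since the last snapshot.
count_new() {
    # tr strips the leading padding some wc implementations emit
    comm -13 <(sort "$1") <(sort "$2") | wc -l | tr -d ' '
}

# Canned snapshots for illustration; in the real script each would be
# produced by:  lsof -i -b -n -F 'n' | grep '^n'
printf '%s\n' 'n1.2.3.4:22->5.6.7.8:51514' 'n1.2.3.4:443->9.9.9.9:40232' > /tmp/prev_snap
printf '%s\n' 'n1.2.3.4:22->5.6.7.8:51514' 'n1.2.3.4:443->9.9.9.9:40232' 'n1.2.3.4:80->7.7.7.7:33044' > /tmp/curr_snap

count_new /tmp/prev_snap /tmp/curr_snap   # prints 1
```

Note that comm -13 (new lines only) is equivalent to the question's comm -23 with the argument order reversed; either way, both inputs must be sorted.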

Instead of using lsof, it may be more efficient to use netstat; on my lightly-loaded system it's about 10-20 times as fast, but you should benchmark the two on your target system.

So instead of comparing subsequent runs of lsof -i -t | sort, we could compare runs of

netstat -tn | awk '{print $4,$5}' | sort

Some things to note here:

  • netstat -t examines TCP connections over IPv4 and IPv6. I believe that's what's wanted.
  • netstat -n, like lsof -n, saves a vast amount of time by not doing DNS reverse lookup.
  • awk is more suitable than cut for selecting columns, since netstat uses a variable number of spaces to separate fields.
  • Netstat includes a couple of header lines, but because these are the same in every invocation, they will disappear in the comparison. We could remove them if we really want: awk 'FNR>2 {print $4,$5}'.
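To illustrate the last two points, here is the column extraction run over a canned sample of netstat -tn output (the sample's exact format is an assumption based on typical Linux netstat):

```shell
# Simulated `netstat -tn` output; the first two lines are the header
# lines mentioned above.
netstat_sample() {
cat <<'EOF'
Active Internet connections (w/o servers)
Proto Recv-Q Send-Q Local Address           Foreign Address         State
tcp        0      0 10.0.0.5:22             10.0.0.9:51514          ESTABLISHED
tcp        0      0 10.0.0.5:443            10.0.0.7:40232          TIME_WAIT
EOF
}

# FNR>2 skips the two header lines; $4 and $5 are the local and foreign
# address columns, regardless of how many spaces separate the fields.
netstat_sample | awk 'FNR>2 {print $4, $5}'
# prints:
# 10.0.0.5:22 10.0.0.9:51514
# 10.0.0.5:443 10.0.0.7:40232
```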

