Network Latency

On a passive capture device there is no "network delay" as such; what you actually want to measure is the capture delay. The only way to do that is to synchronize the clocks on the tap and capture devices precisely, then compare the timestamps on the packets as they arrive at each point. That requires a lot of time-consuming, finicky setup or very specialized network equipment, and outside of network R&D I'm not sure anyone measures it to better than one-second precision.
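
The comparison step itself is simple once the clocks agree. Here is a minimal sketch, assuming you have one pcap recorded at the tap and one at the capture device, both hosts synchronized (for example via PTP), and scapy available; the file names are placeholders.

```python
# Sketch: estimate capture delay by matching identical packets in two pcaps,
# one recorded at the tap and one at the capture device. Assumes both hosts'
# clocks are tightly synchronized (e.g. via PTP) and that scapy is installed.
# File names below are placeholders.
from scapy.all import rdpcap

tap_pkts = rdpcap("tap_side.pcap")        # hypothetical capture at the tap
mon_pkts = rdpcap("monitor_side.pcap")    # hypothetical capture at the monitor

# Index the monitor-side packets by their raw bytes so identical frames
# can be matched regardless of arrival order.
mon_index = {}
for pkt in mon_pkts:
    mon_index.setdefault(bytes(pkt), []).append(float(pkt.time))

deltas = []
for pkt in tap_pkts:
    times = mon_index.get(bytes(pkt))
    if times:
        # pkt.time is the pcap timestamp in seconds since the epoch.
        deltas.append(times.pop(0) - float(pkt.time))

if deltas:
    print(f"matched {len(deltas)} packets, "
          f"mean capture delay = {sum(deltas) / len(deltas) * 1000:.3f} ms")
else:
    print("no identical packets found in both captures")
```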

Of course, that leaves the question of why you would want to do this in the first place. If you expect to take timely action on network traffic using a passive capture device, you are building your system incorrectly: any such system has a built-in race condition, because you are hoping your intended action takes effect before the packets it is supposed to act on arrive at their destination. Systems like that need active traffic handling instead, for example using the ICAP protocol to hold a message for inspection and only release it once the monitoring device returns its verdict.
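
To make the contrast concrete, here is a minimal sketch of the hold-and-release control flow that inline inspection relies on. It is not the ICAP wire protocol, just the pattern ICAP enables between a proxy and an inspection service: nothing is forwarded until a verdict comes back, so there is no race with delivery. The inspect() function and its verdicts are hypothetical placeholders.

```python
# Conceptual sketch of inline, hold-and-release inspection (the pattern ICAP
# enables between a proxy and an inspection service). Not the ICAP wire
# protocol itself; inspect() and its verdicts are placeholders.
from dataclasses import dataclass

@dataclass
class Verdict:
    allow: bool
    reason: str = ""

def inspect(payload: bytes) -> Verdict:
    # Placeholder for the call out to the inspection service. In a real
    # deployment this would be an ICAP REQMOD/RESPMOD exchange.
    return Verdict(allow=b"blocked-test-string" not in payload)

def forward(payload: bytes) -> None:
    print(f"forwarded {len(payload)} bytes")

def handle_message(payload: bytes) -> None:
    # The message is HELD here; nothing is forwarded until a verdict exists.
    # Contrast with passive capture, where the traffic has already left.
    verdict = inspect(payload)
    if verdict.allow:
        forward(payload)
    else:
        print("blocked:", verdict.reason or "policy match")

handle_message(b"GET / HTTP/1.1\r\nHost: example.com\r\n\r\n")
```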

If the passive capture is only for logging, a delay of anywhere from a few milliseconds to several minutes (depending on how you spool data for the processing step) is largely irrelevant. Regardless of what you do, the traffic will have moved on before you have any chance to react to it; all the passive capture device does is ensure you have a record of it for investigative purposes. My own passive monitoring devices write raw packets to disk close enough to real time that it doesn't matter (though not fast enough to, say, prevent those packets from being forwarded to the next device), and then process and write those packets to logs within tens of seconds to minutes, depending on load.

Because of the volume of raw packets, this is a two-stage procedure; if you were working with a tiny collection of known data, you could certainly collapse the capture and write phases into one step and run it in near real time.
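
For illustration, here is a sketch of what the second stage of such a pipeline can look like, assuming the first stage (e.g. dumpcap or tcpdump writing a ring buffer of pcap files) is already running and dropping completed files into a spool directory. The paths, polling interval, and per-packet log format are illustrative only.

```python
# Sketch of the second stage of a two-stage pipeline: the capture stage
# (e.g. dumpcap/tcpdump writing a ring buffer of pcap files) is assumed to
# already be running. This script drains completed files from a spool
# directory and writes one log line per packet. Paths are placeholders,
# and a real system would skip the file the capture tool is still writing.
import glob
import os
import time
from scapy.all import rdpcap, IP

SPOOL_DIR = "/var/spool/capture"          # hypothetical spool written by stage 1
LOG_PATH = "/var/log/capture/flows.log"   # hypothetical output log

def process_file(path: str, log) -> None:
    for pkt in rdpcap(path):
        if IP in pkt:
            log.write(f"{float(pkt.time):.6f} {pkt[IP].src} -> {pkt[IP].dst} "
                      f"len={len(pkt)}\n")

def main() -> None:
    with open(LOG_PATH, "a") as log:
        while True:
            for path in sorted(glob.glob(os.path.join(SPOOL_DIR, "*.pcap"))):
                process_file(path, log)
                os.remove(path)           # done with this spool file
            log.flush()
            time.sleep(10)                # seconds of processing lag is fine here

if __name__ == "__main__":
    main()
```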

But what about measuring the delay to the nearest millisecond, the way active measurements do? That is not a simple process. I'll note that latency is a measure of round-trip time, so all of the timing data come from the same device and the same clock. Measuring one-way delay across separate devices is seldom done to that precision; it is normally done by comparing the local clock against the packet's timestamp, which at best carries an error as large as the offset between the sending and receiving clocks. There are few real-world reasons to do it.
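
For contrast, here is a minimal sketch of what millisecond-precision latency measurement usually looks like: a round trip timed on a single clock, so no cross-device synchronization is involved. The host and port are placeholders.

```python
# Sketch: round-trip latency measured from a single device, so both timestamps
# come from the same clock and no cross-device synchronization is needed.
# Host and port are placeholders.
import socket
import time

def tcp_connect_rtt(host: str, port: int = 443) -> float:
    start = time.perf_counter()                 # one clock for both timestamps
    with socket.create_connection((host, port), timeout=5):
        pass                                    # connect returns after SYN/SYN-ACK
    return (time.perf_counter() - start) * 1000.0

print(f"RTT = {tcp_connect_rtt('example.com'):.1f} ms")
```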
