NetFlow Data Artifacts
Revision as of 18:35, 25 January 2013

In the paper 'Measurement Artifacts in NetFlow Data', we have analyzed the presence and impact of measurement artifacts in NetFlow data from six flow exporters. Several Python scripts were used to generate traffic such that the artifacts, if present at all, would be easily identifiable:

* [[File:m1_tcpflags.py|Script 1]]: Measurement script for analyzing TCP FIN/RST flag flow record expiration behavior.
* [[File:m2_bytecounters.py|Script 2]]: Measurement script for analyzing invalid byte counters in flow records.
* [[File:NF_Artifacts_m3_flowrecordexpiration.py|Script 3]]: Measurement script for analyzing flow record expiration behavior (based on active timeout, idle timeout and TCP flags).

All scripts rely on the Python library [http://www.secdev.org/projects/scapy/ Scapy].
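The expiration behavior probed by Scripts 1 and 3 can be sketched in plain Python. The timeout values and the precedence of the checks below are illustrative assumptions about a simplified flow cache, not the behavior of any particular exporter:

```python
# Hypothetical timeout values; real exporters use configurable defaults.
ACTIVE_TIMEOUT = 120.0   # seconds a long-lived flow may stay cached
IDLE_TIMEOUT = 15.0      # seconds without packets before expiration
FIN, RST = 0x01, 0x04    # TCP flag bits

def expire_reason(first_seen, last_seen, now, tcp_flags):
    """Return why a cached flow record would be expired, or None."""
    if tcp_flags & (FIN | RST):
        return "tcp-flags"       # FIN/RST observed: connection ended
    if now - last_seen > IDLE_TIMEOUT:
        return "idle-timeout"    # no packets seen for too long
    if now - first_seen > ACTIVE_TIMEOUT:
        return "active-timeout"  # long-lived flow split into records
    return None
```

A measurement script can then send crafted traffic (e.g. a flow ending in a FIN, or a flow with carefully spaced packets) and compare the records an exporter actually emits against such a model to reveal deviating implementations.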

Reference to the paper:

Rick Hofstede, Idilio Drago, Anna Sperotto, Ramin Sadre, Aiko Pras. Measurement Artifacts in NetFlow Data. In: Proceedings of the Passive and Active Measurement Conference (PAM 2013), 18-20 May 2013, Hong Kong, China (to appear).

Abstract of the paper:

Flows provide an aggregated view of network traffic by grouping streams of packets. The resulting scalability gain usually excuses the coarser data granularity, as long as the flow data reflects the actual network traffic faithfully. However, it is known that the flow export process may introduce artifacts in the exported data. This paper extends the set of known artifacts by explaining which implementation decisions are causing them. In addition, we verify the artifacts' presence in data from a set of widely-used devices. Our results show that the revealed artifacts are widely spread among different devices from various vendors. We believe that these results provide researchers and operators with important insights for developing robust analysis applications.