Cloud benchmarks
Revision as of 23:29, 12 January 2014

This page contains the software and data presented in the following paper:

  • "Benchmarking Personal Cloud Storage" by Idilio Drago, Enrico Bocchi, Marco Mellia, Herman Slatman and Aiko Pras. In Proceedings of the 13th ACM Internet Measurement Conference. IMC 2013.

This paper is a continuation of our work on personal cloud storage. Previous results can be found on this page and on this page.

The slides of the presentation can be downloaded from here.

Benchmark Scripts

The scripts are written in Python. All scripts require the netifaces and pcapy packages.
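Before running the benchmarks, it can help to verify that both dependencies are importable. A minimal stdlib-only sketch (the package names are the two listed above; the helper function is illustrative, not part of the released scripts):

```python
import importlib.util

def check_dependencies(packages):
    """Return the subset of packages that cannot be imported."""
    return [p for p in packages if importlib.util.find_spec(p) is None]

# The two packages the benchmark scripts require.
missing = check_dependencies(["netifaces", "pcapy"])
if missing:
    print("Missing packages:", ", ".join(missing))
```

If anything is reported missing, install it with pip before proceeding.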

How to execute the benchmarks

 ./delta_encoding.py -i wlan0 --seed 123134 --bytes 10000 --test 3 -o /tmp/output/ --ftp 1.1.1.1 --port 2121 --user "user_name" --passwd "password" --folder="."
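The option handling inside delta_encoding.py is not reproduced on this page; a hypothetical argparse sketch covering the flags in the invocation above (attribute names and defaults are assumptions, not taken from the actual script) might look like:

```python
import argparse

def build_parser():
    # Flags mirror the documented invocation; defaults are illustrative only.
    p = argparse.ArgumentParser(description="Cloud storage benchmark (sketch)")
    p.add_argument("-i", "--iface", required=True, help="capture interface, e.g. wlan0")
    p.add_argument("--seed", type=int, help="random seed for workload generation")
    p.add_argument("--bytes", type=int, dest="nbytes", help="file size in bytes")
    p.add_argument("--test", type=int, help="benchmark test number")
    p.add_argument("-o", "--output", help="directory for the pcap output")
    p.add_argument("--ftp", help="FTP server address")
    p.add_argument("--port", type=int, default=21)
    p.add_argument("--user")
    p.add_argument("--passwd")
    p.add_argument("--folder", default=".")
    return p

# Parse the same arguments as the example command line.
args = build_parser().parse_args([
    "-i", "wlan0", "--seed", "123134", "--bytes", "10000",
    "--test", "3", "-o", "/tmp/output/", "--ftp", "1.1.1.1",
    "--port", "2121", "--user", "user_name", "--passwd", "password",
    "--folder", ".",
])
print(args.iface, args.nbytes, args.port)  # prints: wlan0 10000 2121
```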

Important remarks:

1 - The folder ftp://user:pass@server/folder/ must be in a synchronized folder of the storage tool.

2 - The file delta_encoding.py must not be in a synchronized folder, otherwise the .pyc files created at run-time will disturb the experiment.

3 - The folder /tmp/output/ must not be in a synchronized folder, for the same reasons as above.

4 - Disable as many processes as possible on the benchmarking machine. This will minimize external interference with the test.

5 - If the storage system is running in a virtual machine, make sure the host machine is powerful enough to support the load. Also check whether the virtual machine limits or shapes the network traffic.

Post-processing the data

The previous steps generate one pcap file per benchmark step in /tmp/output/. To produce the figures presented in the paper (Section 5), the pcap files need to be post-processed. The following scripts were developed by manually evaluating the traffic of each cloud storage tool. The typical flows of each tool are isolated by means of lists of server IP addresses, and statistics are calculated according to heuristics that determine the start and end of synchronization steps during the benchmarks.
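The released post-processing scripts are not reproduced here, but the approach they describe — isolating flows via server IP lists, then detecting synchronization-step boundaries — can be sketched in pure Python. The Packet tuple, the SERVER_IPS set, and the 1-second idle gap are illustrative assumptions, not values from the paper:

```python
from collections import namedtuple

# Simplified view of a captured packet: timestamp, endpoints, payload size.
Packet = namedtuple("Packet", "ts src dst size")

# Hypothetical list of known server addresses for one storage tool.
SERVER_IPS = {"108.160.162.1", "108.160.162.2"}

def isolate_flows(packets, server_ips):
    """Keep only packets exchanged with known storage servers."""
    return [p for p in packets if p.src in server_ips or p.dst in server_ips]

def sync_steps(packets, idle_gap=1.0):
    """Split traffic into synchronization steps: a silence longer than
    idle_gap seconds ends one step and starts the next.
    Returns a list of (start, end, total_bytes) per step."""
    steps = []
    start = prev = None
    total = 0
    for p in sorted(packets, key=lambda p: p.ts):
        if prev is not None and p.ts - prev > idle_gap:
            steps.append((start, prev, total))
            start, total = p.ts, 0
        if start is None:
            start = p.ts
        total += p.size
        prev = p.ts
    if start is not None:
        steps.append((start, prev, total))
    return steps

packets = [
    Packet(0.0, "10.0.0.2", "108.160.162.1", 1500),
    Packet(0.1, "10.0.0.2", "8.8.8.8", 80),        # unrelated traffic, dropped
    Packet(2.0, "108.160.162.1", "10.0.0.2", 600),
]
storage = isolate_flows(packets, SERVER_IPS)
print(sync_steps(storage))  # [(0.0, 0.0, 1500), (2.0, 2.0, 600)]
```

The real scripts work on pcap files and use per-tool heuristics; this sketch only illustrates the idle-gap idea on toy data.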

Traces

The traffic traces that generated the results in the paper can be downloaded from this link. These traces, together with the previous scripts, produce the results in Figure 7 of the paper. More details about this dataset can be obtained in Chapter 7 of the following thesis:

  • Drago, I. (2013) "Understanding and Monitoring Cloud Services" (https://sites.google.com/site/idiliod/publications/2013_drago_thesis.pdf). PhD thesis, University of Twente. CTIT Ph.D. Thesis Series No. 13-279. ISBN 978-90-365-3577-9.

Acceptable Use Policy

  • When writing a paper using software or data from this page, please cite:
 @inproceedings{drago2013_imc,
   author        = {Idilio Drago and Enrico Bocchi and Marco Mellia and Herman Slatman and Aiko Pras},
   title         = {Benchmarking Personal Cloud Storage},
   booktitle     = {Proceedings of the 13th ACM Internet Measurement Conference},
   series        = {IMC'13},
   year          = {2013}
 }

Paper abstract

Personal cloud storage services are data-intensive applications already producing a significant share of Internet traffic. Several solutions offered by different companies attract more and more people. However, little is known about each service's capabilities, architecture and, most of all, the performance implications of its design choices. This paper presents a methodology to study cloud storage services. We apply our methodology to compare 5 popular offers, revealing different system architectures and capabilities. The implications on performance of different designs are assessed by executing a series of benchmarks. Our results show no clear winner, with all services suffering from some limitations or having potential for improvement. In some scenarios, the upload of the same file set can take seven times longer and waste twice as much capacity. Our methodology and results are thus useful as both a benchmark and a guideline for system design.

[mPlane and Flamingo project logos]