In this part of the exercise, the students will independently set up their own TCP congestion control experiment from scratch. They will have the opportunity to use the FIRE tools and facilities in order to perform the following tasks.
Reserve the necessary resources for the exercise (hosts, links, routers, etc.).
Create the required topology.
Connect to the hosts of this topology and ensure that everything works as requested.
Download and install all the necessary software. No scripts are allowed!
Configure all the necessary software and prepare the hosts for the exercise (e.g. start monitoring processes).
Run the experiment.
Collect the results.
"Clean everything" - release the resources, remove unnecessary software, files etc.
This part uses a simple TCP congestion control exercise to train the students in how to create and manage live testbeds. Such training includes actions like reserving FIRE resources, installing operating systems and tools, configuring software, creating specific network topologies, evaluating the tools' functionality, etc. During this part the users will:
familiarize themselves with the tools and infrastructure provided by FIRE for this assignment.
set up the required network topologies described in the assignment.
generate the required network traffic among the nodes described in the assignment. Students must make sure that they can create the necessary network conditions for the exercise (e.g. create bottlenecks, introduce large delays, open or close flows, etc.).
identify, download and install all the tools and software necessary for creating this exercise.
Tools - infrastructure - software - technologies
For this part of the exercise students will use resources from the Virtual Wall, w-iLab.t (https://www.wall2.ilabt.iminds.be) of iMinds (http://www.iminds.be/en). Each test (an experiment created by one user) will require the use of 3 nodes. The topology creation and management will be performed using the iMinds w-iLab.t website and its web tools, e.g. the GUI editor for creating topologies, the traffic shaping tool for handling the traffic between the nodes of the experiment, etc. Thus the students will be able to set up the topology of their experiment using a web browser. All the nodes of the experiment will run Ubuntu. Once the experiment is live (nodes are reserved and running), the students must use ssh (secure shell) to connect to all of the Virtual Wall nodes and perform an initial check (e.g. whether all network interfaces are correctly configured, whether the links are working, etc.). Finally, students must have software for the graphical representation of the results installed on their own computers; gnuplot* is one such tool. Once the students connect to the nodes, they will need to install the iperf tool in order to run the TCP congestion control experiment.
iperf: Iperf is a tool to measure maximum TCP bandwidth, allowing the tuning of various parameters and UDP characteristics. Iperf reports bandwidth, delay jitter, and datagram loss. It must be installed on both end hosts*.
*Students can download and install software on Ubuntu using the following command.
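A sketch of that command, using apt-get (Ubuntu's standard package manager); iperf is used as the example package since it is the tool installed later in the exercise:

```shell
# Refresh the package index, then install the package (here: iperf).
sudo apt-get update
sudo apt-get install -y iperf
```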
In the menu "Experimentation" (page header) click the link "Begin an
Experiment". You will be redirected to a new page with various options.
Click the "New Gui Editor". Using the graphical interface create the
topology you see in figure 1. Name one of the nodes "client" and the
other "server". Choose for both the nodes (hosts P1, P2) the operating
system Ubuntu. The IP addresses and link capacity should be left as
default. (Make sure that the two hosts are connected with a link). Save
the experiment using any name you wish e.g. myfirstexperiment. Figure 1
From your experiments list ("My Emulab" > tab "Experiments"), click on the name of your experiment to go to the experiment management page. From the left menu, click the link "Swap experiment in" to activate it.
If your computer supports IPv6, use ssh to connect to nodes P1 and P2 (run a ping between them to make sure that the link is working correctly). If your computer does not support IPv6, set up a tunnel using OpenVPN for IPv4 access. Using the "Modify traffic shaping" tool, try to modify the packet delay between the client and the server.
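As an illustration, the connectivity check could look like the following; the hostname and IP address below are hypothetical, so substitute the node names and addresses shown on your experiment page:

```shell
# Hypothetical node hostname - substitute the one listed on your
# experiment page.
ssh myuser@client.myfirstexperiment.wall2.ilabt.iminds.be
# On the client node, verify the link to the server (replace the
# address with the server node's actual link IP).
ping -c 4 10.1.1.3
```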
Capture the tcp_probe output (on the sender) and run the capture in the background:
cat /proc/net/tcpprobe > tcpout &
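Note that /proc/net/tcpprobe only exists once the tcp_probe kernel module is loaded. A sketch of the full sender-side sequence, assuming the module's standard port and full parameters and iperf's default port of 5001:

```shell
# Load the tcp_probe module, instrumenting iperf's default TCP port;
# full=1 logs a line on every ACK rather than only on cwnd changes.
sudo modprobe tcp_probe port=5001 full=1
# Capture the probe output in the background (reading the proc file
# typically requires root).
sudo cat /proc/net/tcpprobe > tcpout &
```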
Start the iperf server on the receiver_host (server).
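Assuming the standard iperf command-line interface, the server side is simply:

```shell
# Run iperf in server mode on the server node (listens on TCP port
# 5001 by default).
iperf -s
```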
Run the iperf test on the sender_host (client) for 300 seconds with a report interval of 1 second. As receiver_host use the IP address of the server node.
iperf -i 1 -t 300 -c <receiver_host>
After the completion of the experiment, a file named "tcpout" should exist on the client node. Download this file to your local computer (e.g. using sftp). Use gnuplot to create a graphical representation of the results for cwnd and ssthresh. The command you should run is the following.
$ gnuplot -persist <<"EOF"
set style data linespoints
set xlabel "time (seconds)"
set ylabel "Segments (cwnd, ssthresh)"
plot "tcpout" using 1:7 title "snd_cwnd", \
     "tcpout" using 1:($8>=2147483647 ? 0 : $8) title "snd_ssthresh"
EOF
Optional*: Remove the tcp_probe module from the kernel and kill the monitoring process.
kill <pid> (pid: process id of the monitoring process, i.e. of "cat /proc/net/tcpprobe > tcpout")
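A sketch of this cleanup on the sender node; pgrep is used here as one way to look up the capture's pid, and unloading the module requires root:

```shell
# Kill the background capture started earlier.
kill "$(pgrep -f 'cat /proc/net/tcpprobe')"
# Unload the tcp_probe kernel module (requires root).
sudo rmmod tcp_probe
```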
After completing the experiment, go to the experiment management page (the page you are redirected to when you click on the name of your experiment in your experiments list, "My Emulab" > tab "Experiments"). In the left menu, click the link "Swap experiment out". This action will release all the resources that were reserved in the iMinds Virtual Wall for your experiment.
From the same menu in the experiment management page, click on the link "Terminate Experiment" to delete your experiment.
*This is an optional step, because when we swap an experiment out, the Virtual Wall does not save the modifications made to the Operating System of the experiment's nodes. Each time an experiment is swapped in, the Virtual Wall loads the predefined Operating System (in our case, Ubuntu) onto its nodes. This also means that iperf will not be installed on the client and server nodes if we swap this experiment in again in the future.