Mininet: A Simple Virtual Testbed for OpenFlow
aka
How to Squeeze a 1024-node OpenFlow Network onto your Laptop

(Extremely Experimental Development Version 0.1, December 2009)

---

Mininet creates simple OpenFlow test networks by using process-based
virtualization and network namespaces.

Simulated hosts (as well as switches and controllers with the user
datapath) are created as processes in separate network namespaces. This
allows a complete OpenFlow network to be simulated on top of a single
Linux kernel.

Mininet provides a set of Python classes and functions which enable
creation of OpenFlow networks of varying sizes and topologies.

In order to run Mininet, you must have:

* A Linux 2.6.26 or greater kernel compiled with network namespace
  support enabled. (Debian 5.0 or greater should work.)

* The OpenFlow reference implementation (either the user or kernel
  datapath may be used, and the tun or ofdatapath kernel modules must
  be loaded, respectively)

* Python, Bash, Ping, iPerf, etc.

* Root privileges (required for network device access)

* The netns program (included as netns.c), or an equivalent program of
  the same name, installed in an appropriate path location.

* mininet.py installed in an appropriate Python path location.

Currently mininet includes:

- A simple node infrastructure (Host, Switch, Controller classes) for
  creating virtual OpenFlow networks.

- A simple network infrastructure (class Network and its descendants
  TreeNet, GridNet and LinearNet) for creating scalable topologies and
  running experiments (e.g. TreeNet(2,3).run(pingTest); see the usage
  sketch at the end of this file).

- Some simple tests which can be run using someNetwork.run( test ).

- A simple command-line interface which may be invoked on a network
  using .run( Cli ).

- A 'cleanup' script to get rid of junk (interfaces, processes, etc.)
  which might be left around by mininet. Try this if things stop
  working.

- Examples (in examples/ directory) to help you get started.

Batteries are not included (yet!) However, some preliminary
installation notes are included in the INSTALL file.

Good luck!

---

Bob Lantz
rlantz@cs.stanford.edu
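
---

Example usage

The following is a minimal sketch of a Mininet session. It assumes
that mininet.py is on your Python path and that TreeNet, Cli and
pingTest are importable under exactly those names (the import line is
an assumption; only the calls themselves appear above). Remember to
run as root:

    # Assumed import; adjust to wherever mininet.py is installed.
    from mininet import TreeNet, Cli, pingTest

    network = TreeNet( 2, 3 )  # tree network, as in the example above
    network.run( pingTest )    # run the simple ping test
    network.run( Cli )         # drop into the command-line interface

If things stop working afterwards, run the cleanup script mentioned
above to remove leftover interfaces and processes.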