Commit 55dd9368 authored by Bob Lantz's avatar Bob Lantz

Cleanup of doc files.

Fixed xterm.py (and cleanup) to clean up screen sessions.
Cleaned up sshd.py (though interface is still in flux.)
Added 1024-node network example (treenet1024.py).
Added example showing multiple tests on a single network (multitest.py).
Renamed examples to make them easier to type!
parent 08cef003
Preliminary Mininet Installation/Configuration Notes
---
- Mininet is not currently 'installed.' If you want to install it,
so that you can 'import mininet', place it somewhere in your
@@ -17,7 +18,10 @@ Preliminary Mininet Installation/Configuration Notes
does; Ubuntu doesn't. If your kernel doesn't support it, you will need
to build and install a kernel that does!
- Mininet should probably be run either on a machine with
  no other important processes, or on a virtual machine
- To run the iperf test, you need to install iperf:
      sudo aptitude/yum install iperf
@@ -30,7 +34,7 @@ Preliminary Mininet Installation/Configuration Notes
Consult the appropriate example file for details.
- To switch to the most recent OpenFlow 0.8.9 release branch (the most
  recent one with full NOX support):
      git checkout -b release/0.8.9 remotes/origin/release/0.8.9
@@ -41,11 +45,11 @@ Preliminary Mininet Installation/Configuration Notes
insmod /home/openflow/openflow/datapath/linux-2.6/ofdatapath.ko
modprobe tun
- The reference OpenFlow controller (controller(8)) only supports 16
  switches by default! If you wish to run a network with more than 16
  switches, please recompile controller(8) with larger limits, or use a
  different controller such as nox. (At the moment, unfortunately, it's
  not easy to do so without modifying mininet.py. This will be improved
  upon, and an example provided, in the future.)
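For a rough sense of when that 16-switch limit bites, switch counts for full tree topologies can be computed directly. A python 3 sketch (the formula assumes one switch per interior tree node, which is how the tree examples are structured; it is not taken from mininet.py):

```python
# Switches in a full tree: one per interior node (an assumption for
# illustration, not read from mininet.py).
def tree_switches( depth, fanout ):
    "Count interior nodes of a full tree with given depth and fanout."
    return sum( fanout ** i for i in range( depth ) )

print( tree_switches( 2, 4 ) )   # → 5: within the default limit
print( tree_switches( 2, 32 ) )  # → 33: needs a recompiled controller
```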
- For scalable configurations, you might need to increase some of your
@@ -75,5 +79,6 @@ Preliminary Mininet Installation/Configuration Notes
# Mininet: increase routing table size
net.ipv4.route.max_size=32768
---
@@ -21,25 +21,25 @@ creation of OpenFlow networks of varying sizes and topologies.
In order to run Mininet, you must have:
* A Linux 2.6.26 or greater kernel compiled with network namespace support
  enabled. (Debian 5.0 or greater should work)
* The OpenFlow reference implementation (either the user or kernel
  datapath may be used, and the tun or ofdatapath kernel modules must be
  loaded, respectively)
* Python, bash, ping, iperf, etc.
* Root privileges (required for network device access)
* The netns program (included as netns.c), or an equivalent program
  of the same name, installed in an appropriate path location
* mininet.py installed in an appropriate Python path location
Currently mininet includes:
- A simple node infrastructure (Host, Switch, Controller classes) for
  creating virtual OpenFlow networks
- A simple network infrastructure (class Network and its descendants
TreeNet, GridNet and LinearNet) for creating scalable topologies and
@@ -52,7 +52,7 @@ Currently mininet includes:
- A 'cleanup' script to get rid of junk (interfaces, processes, etc.)
which might be left around by mininet. Try this if things stop
  working!
- Examples (in examples/ directory) to help you get started.
---
#!/bin/bash
# Unfortunately, Mininet and OpenFlow don't always clean up
# properly after themselves. Until they do (or until cleanup
# functionality is integrated into the python code), this
# script may be used to get rid of unwanted garbage. It may
# also get rid of "false positives", but hopefully nothing
# irreplaceable!
echo "Removing all links of the pattern foo-ethX"
for f in `ip link show | egrep -o '(\w+-eth\w+)' ` ; do
@@ -14,7 +21,7 @@ killall -9 controller ofprotocol ofdatapath ping 2> /dev/null
echo "Removing excess kernel datapath processes"
ps ax | egrep -o 'dp[0-9]+' | sed 's/dp/nl:/' | xargs -l1 echo dpctl deldp
echo "Removing junk in /tmp"
rm -f /tmp/vconn* /tmp/vlogs* /tmp/*.out /tmp/*.log
echo "Removing old screen sessions"
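The interface pattern the script greps for can be checked in isolation. A python 3 sketch (the sample `ip link show` lines are invented):

```python
import re

# The 'foo-ethX' pattern from the cleanup script, tried on sample lines.
pat = re.compile( r'(\w+-eth\w+)' )
sample = [ '1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536',
           '2: h1-eth0: <BROADCAST,MULTICAST,UP> mtu 1500',
           '3: s1-eth1@if2: <BROADCAST,MULTICAST,UP> mtu 1500' ]
links = [ m.group( 1 ) for line in sample
          for m in [ pat.search( line ) ] if m ]
print( links )  # → ['h1-eth0', 's1-eth1']
```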
---
File moved
#!/usr/bin/python
"""
Test bandwidth (using iperf) on linear networks of varying size,
using both kernel and user datapaths.
Each network looks like:
@@ -11,7 +11,6 @@
Note: by default, the reference controller only supports 16
switches, so this test WILL NOT WORK unless you have recompiled
your controller to support 100 switches (or more).
"""
from mininet import init, LinearNet, iperfTest
@@ -36,7 +35,7 @@ def linearBandwidthTest():
    print "*** Linear network results for", datapath, "datapath:"
    print
    result = results[ datapath ]
    print "SwitchCount\tiperf Results"
    for switchCount, bandwidth in result:
        print switchCount, '\t\t',
        print bandwidth[ 0 ], 'server, ', bandwidth[ 1 ], 'client'
---
#!/usr/bin/python
"Run multiple tests on a network."
from mininet import init, TreeNet, pingTestVerbose, iperfTest, Cli
if __name__ == '__main__':
    init()
    network = TreeNet( depth=2, fanout=2, kernel=True )
    network.start()
    network.runTest( pingTestVerbose )
    network.runTest( iperfTest )
    network.runTest( Cli )
    network.stop()
#!/usr/bin/python
"""
Create a network and start sshd(8) on the hosts.
While something like rshd(8) would be lighter and faster,
(and perfectly adequate on an in-machine network)
the advantage of running sshd is that scripts can work
unchanged on mininet and hardware.
"""
import sys ; readline = sys.stdin.readline
from mininet import init, Node, createLink, TreeNet, Cli
def nets( hosts ):
@@ -17,42 +21,35 @@ def nets( hosts ):
        nets[ net ] = True
    return nets.keys()
def connectToRootNS( network, switch ):
    "Connect hosts to root namespace via switch. Starts network."
    # Create a node in root namespace and link to switch 0
    root = Node( 'root', inNamespace=False )
    createLink( root, switch )
    ip = '10.0.123.1'
    root.setIP( root.intfs[ 0 ], ip, '/24' )
    # Start network that now includes link to root namespace
    network.start()
    # Add routes
    routes = nets( network.hosts )
    intf = root.intfs[ 0 ]
    for net in routes:
        root.cmdPrint( 'route add -net ' + net + ' dev ' + intf )

def startServers( network, server ):
    "Start network, and servers on each host."
    connectToRootNS( network, network.switches[ 0 ] )
    for host in network.hosts: host.cmdPrint( server )

if __name__ == '__main__':
    init()
    network = TreeNet( depth=1, fanout=4, kernel=True )
    startServers( network, '/usr/sbin/sshd' )
    print
    print "*** Hosts are running sshd at the following addresses:"
    for host in network.hosts: print host.name, host.IP()
    print
    print "*** Press return to shut down network: ",
    readline()
    network.stop()
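The /24 derivation done by the nets() helper above can be sketched standalone. A python 3 re-implementation for illustration (the original examples are python 2; only the helper's name and idea are taken from the source):

```python
def nets( ips ):
    "Return the distinct /24 networks ('a.b.c.0/24') covering ips."
    seen = {}
    for ip in ips:
        # Drop the last octet and append .0/24; dict keys dedupe.
        net = '.'.join( ip.split( '.' )[ : -1 ] ) + '.0/24'
        seen[ net ] = True
    return list( seen )

print( nets( [ '10.0.0.2', '10.0.0.3', '10.0.1.2' ] ) )
# → ['10.0.0.0/24', '10.0.1.0/24']
```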
#!/usr/bin/python
"""
Create a 1024-host network, and run the CLI on it.
If this fails because of kernel limits, you may have
to adjust them, e.g. by adding entries to /etc/sysctl.conf
and running sysctl -p.
"""
from mininet import init, TreeNet, Cli

if __name__ == '__main__':
    init()
    network = TreeNet( depth=2, fanout=32, kernel=True )
    network.run( Cli )
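Host count in these trees is just fanout**depth, which is where the 1024 in this example's name comes from. A python 3 sketch (the helper name is ours):

```python
def tree_hosts( depth, fanout ):
    "Hosts (leaves) in a full tree of the given depth and fanout."
    return fanout ** depth

print( tree_hosts( 2, 32 ) )  # → 1024, as in this example
print( tree_hosts( 2, 2 ) )   # → 4, as in the smaller examples
```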
File moved
#!/usr/bin/python
"""
Create a network and run an xterm (connected via screen(1) ) on each
host. Requires xterm(1) and GNU screen(1).
"""
import os, re
from subprocess import Popen
from mininet import init, TreeNet, Cli, quietRun
@@ -13,18 +13,19 @@ def makeXterm( node, title ):
    "Run screen on a node, and hook up an xterm."
    node.cmdPrint( 'screen -dmS ' + node.name )
    title += ': ' + node.name
    if not node.inNamespace: title += ' (root)'
    cmd = [ 'xterm', '-title', title ]
    cmd += [ '-e', 'screen', '-D', '-RR', '-S', node.name ]
    return Popen( cmd )
def cleanUpScreens():
    "Remove moldy old screen sessions."
    r = r'(\d+\.[hsc]\d+)'
    output = quietRun( 'screen -ls' ).split( '\n' )
    for line in output:
        m = re.search( r, line )
        if m is not None:
            quietRun( 'screen -S ' + m.group( 1 ) + ' -X kill' )
def makeXterms( nodes, title ):
    terms = []
@@ -34,18 +35,18 @@ def makeXterms( nodes, title ):
    return terms
def xterms( controllers, switches, hosts ):
    cleanUpScreens()
    terms = []
    terms += makeXterms( controllers, 'controller' )
    terms += makeXterms( switches, 'switch' )
    terms += makeXterms( hosts, 'host' )
    # Wait for xterms to exit
    for term in terms:
        os.waitpid( term.pid, 0 )
    cleanUpScreens()
if __name__ == '__main__':
    init()
    print "Running xterms on", os.environ[ 'DISPLAY' ]
    cleanUpScreens()
    network = TreeNet( depth=2, fanout=2, kernel=True )
    network.run( xterms )
    cleanUpScreens()
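The session-name pattern used by cleanUpScreens() can be exercised against canned `screen -ls` output. A python 3 sketch (the sample listing is invented; the dot in the pattern is escaped here):

```python
import re

# Matches '<pid>.<node name>' screen sessions such as 1234.h1 or 1235.s1.
pattern = r'(\d+\.[hsc]\d+)'
sample = ( 'There are screens on:\n'
           '\t1234.h1\t(Detached)\n'
           '\t1235.s1\t(Detached)\n'
           '2 Sockets in /var/run/screen/S-root.' )
sessions = [ m.group( 1 ) for line in sample.split( '\n' )
             for m in [ re.search( pattern, line ) ] if m ]
print( sessions )  # → ['1234.h1', '1235.s1']
```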
@@ -18,15 +18,15 @@
Hosts have a network interface which is configured via ifconfig/ip
link/etc. with data network IP addresses (e.g. 192.168.123.2 )
This version supports both the kernel and user space datapaths
from the OpenFlow reference implementation.
In kernel datapath mode, the controller and switches are simply
processes in the root namespace.
Kernel OpenFlow datapaths are instantiated using dpctl(8), and are
attached to one side of a veth pair; the other side resides in the
host namespace. In this mode, switch processes can simply connect to the
controller via the loopback interface.
In user datapath mode, the controller and switches are full-service
@@ -35,8 +35,8 @@
currently routed although it could be bridged.)
In addition to a management interface, user mode switches also have
several switch interfaces, halves of veth pairs whose other halves
reside in the host nodes that the switches are connected to.
Naming:
Host nodes are named h1-hN
@@ -61,6 +61,7 @@
History:
11/19/09 Initial revision (user datapath only)
11/19/09 Mininet demo at OpenFlow SWAI meeting
12/08/09 Kernel datapath support complete
12/09/09 Moved controller and switch routines into classes
12/12/09 Added subdivided network driver workflow
@@ -788,24 +789,21 @@ def fixLimits():
def init():
    "Initialize Mininet."
    # Note: this script must be run as root
    # Perhaps we should do so automatically!
    if os.getuid() != 0:
        print "*** Mininet must run as root."; exit( 1 )
    fixLimits()

if __name__ == '__main__':
    init()
    results = {}
    print "*** Welcome to Mininet!"
    print "*** Look in examples/ for more examples\n"
    print "*** Testing Mininet with kernel and user datapath"
    for datapath in [ 'kernel', 'user' ]:
        k = datapath == 'kernel'
        network = TreeNet( depth=2, fanout=4, kernel=k )
        result = network.run( pingTestVerbose )
        results[ datapath ] = result
    print "*** Test results:", results