Info
Consider using onos.py

A possibly simpler alternative to this approach can be found in Mininet and onos.py workflow - you can pick whichever option seems best for your purposes.


Running multiple VMs that each run an ONOS instance is one way of running a multi-instance ONOS deployment, but it is not practical on my resource-constrained laptop. Using Linux Containers (LXC) is a great alternative that achieves the same thing with far less CPU and memory. I also didn't care much for Docker's approach: Docker is very convenient for distributing ONOS as a self-contained application, but I had trouble integrating it into my development workflow, which requires fast code/deploy/test cycles.

Getting Started

I'm using VirtualBox as the virtualization environment, but I see no reason why this shouldn't work on VMware or other hypervisors.

The first step is to complete the first three steps of the ONOS from Scratch tutorial: prepare the VM, install the required software, and set up the build environment. Don't bother creating a cell definition, nor packaging and deploying ONOS; this document has new instructions for those steps.

This VM will eventually host the ONOS instance containers. In the following, we'll first create a single container that is fully configured, and then clone the original as many times as needed.

 

Using Vagrant boxes to prepare your development environment

This section is an alternative way of bootstrapping your environment and replaces the remainder of this document. Using Vagrant, we provision two VMs:

  1. onosdev - Runs three LXC containers preinstalled with Java 8.
  2. mn - A VM preinstalled with Mininet.

 

To make use of this, you simply need to do the following, assuming you have Vagrant and VirtualBox installed:

Code Block
$ cd $ONOS_ROOT/tools/dev/vagrant
$ vagrant up

 

At this point you will have to wait for all the Ansible scripts to finish before continuing. This may take some time on the first run, because a few Ubuntu images have to be downloaded.

Once the provisioning is done you should see the following:

Code Block
==> mn: PLAY RECAP *********************************************************************
==> mn: localhost                  : ok=4    changed=3    unreachable=0    failed=0

 

At this point, use the vagrant cell to set up your environment:

Code Block
$ cell vagrant
ONOS_CELL=vagrant
OCI=10.100.198.201
OC1=10.100.198.201
OC2=10.100.198.202
OC3=10.100.198.203
OCN=10.100.198.100
ONOS_APPS=drivers,openflow,proxyarp
ONOS_GROUP=ubuntu
ONOS_NIC=10.100.198.*
ONOS_SCENARIOS=$ONOS_ROOT/onos-next/tools/test/scenarios
ONOS_USER=ubuntu
ONOS_WEB_PASS=rocks
ONOS_WEB_USER=onos
Note

In older versions the cell name may be lxc instead of vagrant.

Finally, just push your ssh keys to the lxc nodes:

Code Block
$ onos-push-keys $OC1
$ onos-push-keys $OC2
$ onos-push-keys $OC3
Info

This will prompt you for a password, which is ubuntu.

 

At this point, you can use STC to deploy your cluster as shown below.
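
The deployment is driven by the stc tool; assuming the ONOS developer environment is sourced so that stc is on your PATH, the invocation is simply:

Code Block
$ stc setup

The tail end of a successful run looks like this: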

Code Block
2016-05-27 10:13:17  Check-Apps-2 started -- onos-check-apps 10.100.198.202 drivers,openflow,proxyarp includes
2016-05-27 10:13:17  Check-Logs-2 completed
2016-05-27 10:13:17  Check-Apps-2 completed
2016-05-27 10:13:17  Check-Components-3 completed
2016-05-27 10:13:17  Check-Apps-3 started -- onos-check-apps 10.100.198.203 drivers,openflow,proxyarp includes
2016-05-27 10:13:17  Check-Logs-3 started -- onos-check-logs 10.100.198.203
2016-05-27 10:13:18  Check-Apps-3 completed
2016-05-27 10:13:18  Check-Logs-3 completed
2016-05-27 10:13:18  Setup completed
2:44 Passed! 31 steps succeeded

And you can enter your mininet environment with the following command:

Code Block
$ cd $ONOS_ROOT/tools/dev/vagrant
$ vagrant ssh mn
mn$ sudo mn --topo linear,5 --controller remote,ip=10.100.198.201 --controller remote,ip=10.100.198.202 --controller remote,ip=10.100.198.203
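
Once at the Mininet prompt, the standard pingall command is a quick way to verify end-to-end connectivity through the cluster:

Code Block
mininet> pingall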

 

Enjoy and feel free to report any issues.

Creating Your First Container

Installing LXC is as simple as running

...
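
For reference, the elided steps boil down to something like the following on Ubuntu (the container name onos1 is just an example):

Code Block
languagebash
sudo apt-get install lxc               # install the LXC tools
sudo lxc-create -t ubuntu -n onos1     # create an Ubuntu container
sudo lxc-start -n onos1 -d             # start it in the background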

Take a look at the output of lxc-ls again: note the container is now started and, if all has gone well, has received an IP address.

Cell Configuration

Containers are very convenient since we can automatically obtain their IPs. This is what my cell definition looks like; you want to put this in a file in onos/tools/test/cells/. You definitely want to verify that the ONOS_NIC variable corresponds to your system settings, and also OCN, which points to the machine where Mininet is running. Feel free to customize ONOS_APPS to your liking. Finally, you have to reload the cell whenever you create additional containers, so the new IPs get picked up.

Code Block
languagebash
# LXC-based multi ONOS instance & LINC-OE mininet box
export ONOS_NIC=10.0.3.*
I=1
for CONTAINER in $( sudo lxc-ls ); do
 IP=`sudo lxc-info -n $CONTAINER -iH`
 export OC${I}=${IP}
 let I=I+1
done
export OCI=$OC1
export OCN="192.168.56.9"
export ONOS_APPS="drivers,openflow,proxyarp,optical,bgprouter"
export ONOS_USER=ubuntu
export ONOS_GROUP=ubuntu
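
Assuming you saved the definition as onos/tools/test/cells/lxc, you load (and re-load) it with the same cell command used earlier:

Code Block
$ cell lxc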

Customizing Your Container

You want to do three things: enable passwordless login, enable passwordless sudo, and install Java 8.

...
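
For reference, a minimal sketch of those three steps inside the container (assuming an Ubuntu container with the default ubuntu user; the exact Java packaging may differ from the elided original):

Code Block
languagebash
# enable passwordless sudo for the ubuntu user
echo "ubuntu ALL=(ALL) NOPASSWD:ALL" | sudo tee /etc/sudoers.d/ubuntu

# install Java 8 (OpenJDK packaging shown here)
sudo apt-get update && sudo apt-get install -y openjdk-8-jdk

# passwordless login is handled later from the host via onos-push-keys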

Finally, log out of the container by pressing Ctrl-d or typing exit.

Cloning Your Container

...

We're now ready to make as many clones of the original container as we want ONOS instances. Repeat the following steps as many times as needed, giving each new container a unique name.

...
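
For reference, with the stock LXC tools a clone looks roughly like this (container names are examples; newer LXC releases replace lxc-clone with lxc-copy):

Code Block
languagebash
sudo lxc-stop -n onos1            # the source container must be stopped
sudo lxc-clone -o onos1 -n onos2  # clone it
sudo lxc-start -n onos2 -d        # start the clone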

Be sure to reload your cell configuration, which will automatically create and assign new $OCx variables.

Development Cycle

From now on, it becomes very easy to develop and test code in a multi-instance environment. The process is as follows (a condensed shell version appears after the list):

  1. Write some code
  2. Compile ONOS by running obs or mci
  3. Package ONOS by running op
  4. Install and run the latest package: onos-group install -f $OC1 $OC2 $OC3 etc.
  5. Go back to step 1 (smile)
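
Condensed into shell form, one iteration of the loop looks like this (obs, mci, op, and onos-group are the standard ONOS developer aliases and tools mentioned above):

Code Block
languagebash
obs                                    # compile ONOS (or: mci)
op                                     # package ONOS
onos-group install -f $OC1 $OC2 $OC3   # (re)install on all instances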

Additional Steps

Linux Bridge Configuration

I ran into a weird problem where the ONOS instances weren't discovering each other. This process is driven by Hazelcast and relies on IP multicast. It turned out that LXC uses a Linux bridge for connectivity among the containers, and by default the bridge does not forward multicast traffic! This is easily solved by consulting the Linux bridge documentation; here's an excerpt

...

You may want to add this command to your startup scripts so you don't lose these settings when you reboot.

Code Block
languagebash
echo 2 | sudo tee /sys/devices/virtual/net/lxcbr0/bridge/multicast_router
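
(Note that sudo echo 2 > ... would not work, since the redirection runs in your unprivileged shell; hence the tee form above.) One option for persisting it, assuming your distribution still runs /etc/rc.local at boot, is:

Code Block
languagebash
# /etc/rc.local (runs as root at boot, so no sudo/tee needed)
echo 2 > /sys/devices/virtual/net/lxcbr0/bridge/multicast_router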

Start Containers on Boot

If you would like your containers to be automatically started on boot, you'll need to add the following line to /var/lib/lxc/NAME/config, where NAME is your container's name. By the way, on my system the /var/lib/lxc directory is only accessible by root.

Code Block
lxc.start.auto = 1

Check using lxc-ls --fancy to verify that your containers are in fact started on boot.

Port Forwarding

If you want to access the ONOS GUI from a browser on the host machine, you need to set up port forwarding on the VM. The following command forwards traffic on port 8181 to the first ONOS container ($OC1).

Code Block
languagebash
sudo iptables -t nat -A PREROUTING -i eth1 -p tcp --dport 8181 -j DNAT --to $OC1:8181
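
With the rule in place, the GUI should be reachable from the host at http://<vm-ip>:8181/onos/ui, using the ONOS_WEB_USER/ONOS_WEB_PASS credentials from the cell definition.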