

Running multiple VMs that each run an ONOS instance is one way of running a multi-instance ONOS deployment. However, it is not very practical on my resource-constrained laptop. Using Linux Containers (LXC) is a great alternative that achieves the same thing while using far less CPU and memory.

Environment

I'm using VirtualBox as the virtualization environment, but I see no reason why this shouldn't work on VMware or other hypervisors. The first step is to create a fresh install of Ubuntu 14.04.2 LTS server edition. I installed

LXC

Installing LXC is as simple as running:

sudo apt-get install lxc

You should run lxc-checkconfig to verify that your system properly supports this technology.
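You'll need a handful of containers for the ONOS instances. As a rough sketch, assuming the ubuntu template that ships with LXC on 14.04 (the container name onos1 is just an example), creating and starting one looks like this:

sudo lxc-create -t ubuntu -n onos1    # create a container from the Ubuntu template
sudo lxc-start -n onos1 -d            # start it in the background (daemon mode)
sudo lxc-ls --fancy                   # list containers with state and IP addresses

Repeat for as many ONOS instances as you want.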


If you would like your containers to be automatically started on boot, you'll need to add the following line to /var/lib/lxc/NAME/config, where NAME is your container's name. By the way, on my system, the /var/lib/lxc directory is only accessible by root.

lxc.start.auto = 1

Use lxc-ls --fancy to verify that your containers are indeed started on boot.
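If you run several containers, LXC can also stagger their startup. A sketch, assuming your LXC version supports the lxc.start.order and lxc.start.delay options (LXC 1.0 does); these go in the same config file:

lxc.start.auto = 1
# relative ordering among auto-started containers
lxc.start.order = 10
# seconds to wait before starting the next container
lxc.start.delay = 5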


ONOS config

This is what my cell looks like. A few things you may want to adjust for your own setup: the ONOS_NIC prefix, the OCN address of your mininet box, and the ONOS user/group.

# LXC-based multi ONOS instance & LINC-OE mininet box
export ONOS_NIC="10.0.3.*"
I=1
for CONTAINER in $( sudo lxc-ls ); do
    # grab the container's IPv4 address and export it as OC1, OC2, ...
    IP=$( sudo lxc-ls --fancy -F ipv4 ${CONTAINER} | tail -1 )
    export OC${I}=${IP}
    let I=I+1
done
export OCI=$OC1
export OCN="192.168.56.9"
export ONOS_FEATURES="webconsole,onos-api,onos-core,onos-cli,onos-openflow,onos-app-fwd,onos-app-optical,onos-gui,onos-rest,onos-app-proxyarp"
export ONOS_USER=ubuntu
export ONOS_GROUP=ubuntu
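With the cell in place, and assuming the standard ONOS developer tools are on your PATH and the cell file is saved under tools/test/cells/ in your ONOS checkout (the cell name lxc below is just an example), deploying looks roughly like:

cell lxc            # load the cell; sets OC1..OCn, OCI, ONOS_NIC, etc.
onos-package        # build the deployable ONOS archive
onos-install $OC1   # install and start ONOS on the first node; repeat for $OC2, $OC3, ...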


I ran into a weird problem where the ONOS instances weren't discovering each other. This discovery process is driven by Hazelcast and relies on IP multicast. It turned out that LXC uses a Linux bridge for connectivity among the containers, and by default the bridge does not forward multicast traffic! The Linux bridge documentation explains how to fix this; here's an excerpt:

IGMP snooping support is not yet included in bridge-utils or iproute2, but it can be easily controlled through sysfs interface. For brN, the settings can be found under /sys/devices/virtual/net/brN/bridge.

multicast_snooping

This option allows the user to disable IGMP snooping completely. It also allows the user to reenable snooping when it has been automatically disabled due to hash collisions. If the collisions have not been resolved however the system will refuse to reenable snooping.

multicast_router

This allows the user to forcibly enable/disable ports as having multicast routers attached. A port with a multicast router will receive all multicast traffic.

The value 0 disables it completely. The default is 1 which lets the system automatically detect the presence of routers (currently this is limited to picking up queries), and 2 means that the ports will always receive all multicast traffic.

Note: this setting can be enabled/disabled on a per-port basis, also through sysfs interface (e.g. if eth0 is some bridge's active port, then you can adjust /sys/...../eth0/brport/multicast_router)

You may want to add this command to your startup scripts so you don't lose these settings when you reboot. Note that sudo echo 2 > ... would not work: the redirection is performed by your unprivileged shell, not by sudo, which is why tee is used instead.

echo 2 | sudo tee /sys/devices/virtual/net/lxcbr0/bridge/multicast_router
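One way to make this stick across reboots, assuming your system still executes /etc/rc.local at boot (stock Ubuntu 14.04 does) and the default lxcbr0 bridge name: since rc.local runs as root, a plain redirect works there.

# in /etc/rc.local, before the final "exit 0"
echo 2 > /sys/devices/virtual/net/lxcbr0/bridge/multicast_router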


IP forwarding
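The details here depend on your topology, but if hosts outside the VM (such as the mininet box at OCN, 192.168.56.9) should exchange traffic with the containers behind lxcbr0, the kernel has to forward IP packets between interfaces. A minimal sketch:

sudo sysctl -w net.ipv4.ip_forward=1    # enable forwarding immediately

To make this persistent, set net.ipv4.ip_forward=1 in /etc/sysctl.conf.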
