Introduction
You will need:
- An ONOS cluster installed and running
- An OpenStack service installed and running ("stable/mitaka" version is used here)
Note that these instructions assume you're familiar with ONOS and OpenStack; they do not cover how to install or troubleshoot these services. If you aren't, please see the ONOS (http://wiki.onosproject.org) and OpenStack (http://docs.openstack.org) documentation, respectively.
Prerequisite
1. Make sure your OVS version is 2.3.0 or later. If you follow an installation guide to build OVS, don't forget to change the version in the guide to 2.3.0 or later.
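To confirm which version is actually installed on a node, you can ask OVS directly; this is just a quick sanity check (the exact version string will vary with your build):

```
$ ovs-vsctl --version
# the first line reports the Open vSwitch version, e.g. 2.5.0
```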
2. Set OVSDB listening mode on your compute nodes. There are two ways. The "compute_node_ip" below should be an address reachable from the ONOS instance.
$ ovs-appctl -t ovsdb-server ovsdb-server/add-remote ptcp:6640:compute_node_ip
Or you can make the setting permanent by adding the following line to /usr/share/openvswitch/scripts/ovs-ctl, right after the "set ovsdb-server "$DB_FILE"" line, and then restarting the openvswitch-switch service.
set "$@" --remote=ptcp:6640
Either way, you should be able to see port 6640 in listening state.
$ netstat -ntl
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address           Foreign Address         State
tcp        0      0 0.0.0.0:22              0.0.0.0:*               LISTEN
tcp        0      0 0.0.0.0:6640            0.0.0.0:*               LISTEN
tcp6       0      0 :::22
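You can also verify that the OVSDB server is reachable from the ONOS machine itself. This is a sketch that assumes ovs-vsctl is installed on the ONOS machine and that compute_node_ip is the address you configured above; a successful connection prints the same bridge summary as a local ovs-vsctl show:

```
$ ovs-vsctl --db=tcp:compute_node_ip:6640 show
```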
3. Check your OVSDB. It is okay if there's already a bridge named br-int, but note that SONA will add or update its controller, DPID, and fail mode.
$ sudo ovs-vsctl show
cedbbc0a-f9a4-4d30-a3ff-ef9afa813efb
    ovs_version: "2.5.0"
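To see exactly what SONA will manage on an existing br-int, you can inspect its current controller, fail mode, and DPID before bootstrapping and compare afterwards. A sketch (get bridge may report "no key" if the DPID was never set explicitly, which is fine):

```
$ sudo ovs-vsctl get-controller br-int
$ sudo ovs-vsctl get-fail-mode br-int
$ sudo ovs-vsctl get bridge br-int other-config:datapath-id
```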
ONOS Setup
1. Refer to the SONA Network Configuration Guide and write a network configuration for SONA. Place network-cfg.json under tools/package/config/, build the package, and then install ONOS. Here's an example cell configuration and the commands.
onos$ cell
ONOS_CELL=sona
OCI=10.203.255.221
OC1=10.203.255.221
ONOS_APPS=drivers,openflow-base,openstackswitching,openstackrouting
ONOS_GROUP=sdn
ONOS_SCENARIOS=/Users/hyunsun/onos/tools/test/scenarios
ONOS_TOPO=default
ONOS_USER=sdn
ONOS_WEB_PASS=rocks
ONOS_WEB_USER=onos
onos$ buck build onos
onos$ cp ~/network-cfg.json ~/onos/tools/package/config/
onos$ buck build package
onos$ stc setup
Make sure to activate only the following ONOS applications.
ONOS_APPS=drivers,openflow-base,openstackswitching
If you want Neutron L3 service, enable openstackrouting, too.
ONOS_APPS=drivers,openflow-base,openstackswitching,openstackrouting
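If ONOS is already up, the same applications can also be activated at runtime from the ONOS CLI instead of the cell definition; for example:

```
onos> app activate org.onosproject.openstackswitching
onos> app activate org.onosproject.openstackrouting
```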
2. Check that all the applications are activated successfully.
onos> apps -s -a
*   4 org.onosproject.dhcp                1.7.0.SNAPSHOT DHCP Server App
*   6 org.onosproject.optical-model       1.7.0.SNAPSHOT Optical information model
*  12 org.onosproject.openflow-base       1.7.0.SNAPSHOT OpenFlow Provider
*  19 org.onosproject.ovsdb-base          1.7.0.SNAPSHOT OVSDB Provider
*  22 org.onosproject.drivers.ovsdb       1.7.0.SNAPSHOT OVSDB Device Drivers
*  27 org.onosproject.openstackinterface  1.7.0.SNAPSHOT OpenStack Interface App
*  28 org.onosproject.openstacknode       1.7.0.SNAPSHOT OpenStack Node Bootstrap App
*  29 org.onosproject.scalablegateway     1.7.0.SNAPSHOT Scalable GW App
*  30 org.onosproject.openstackrouting    1.7.0.SNAPSHOT OpenStack Routing App
*  44 org.onosproject.openstackswitching  1.7.0.SNAPSHOT OpenStack Switching App
*  50 org.onosproject.drivers             1.7.0.SNAPSHOT Default device drivers
OpenStack Setup
How to deploy OpenStack is out of the scope of this documentation; this section only describes the configurations related to SONA. All other settings are up to your environment.
Controller Node
1. Install networking-onos (Neutron ML2 plugin for ONOS) first.
$ git clone https://github.com/openstack/networking-onos.git
$ cd networking-onos
$ sudo python setup.py install
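To confirm the plugin installed correctly, you can try importing it from the Python interpreter that Neutron will use; if the install succeeded, this prints ok:

```
$ python -c "import networking_onos; print('ok')"
ok
```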
2. Specify the ONOS access information. You may want to copy the config file to /etc/neutron/plugins/ml2/, where the other Neutron configuration files are.
# Configuration options for ONOS ML2 Mechanism driver
[onos]
# (StrOpt) ONOS ReST interface URL. This is a mandatory field.
url_path = http://onos.instance.ip.addr:8181/onos/openstackswitching
# (StrOpt) Username for authentication. This is a mandatory field.
username = onos
# (StrOpt) Password for authentication. This is a mandatory field.
password = rocks
3. The next step is to install and run the OpenStack services. For DevStack users, the sample DevStack local.conf below builds an OpenStack controller node. Make sure your DevStack branch is consistent with the OpenStack branches, "stable/mitaka" in this example.
[[local|localrc]]
HOST_IP=10.134.231.28
SERVICE_HOST=10.134.231.28
RABBIT_HOST=10.134.231.28
DATABASE_HOST=10.134.231.28
Q_HOST=10.134.231.28
ADMIN_PASSWORD=nova
DATABASE_PASSWORD=$ADMIN_PASSWORD
RABBIT_PASSWORD=$ADMIN_PASSWORD
SERVICE_PASSWORD=$ADMIN_PASSWORD
SERVICE_TOKEN=$ADMIN_PASSWORD
DATABASE_TYPE=mysql

# Log
SCREEN_LOGDIR=/opt/stack/logs/screen

# Images
IMAGE_URLS="http://cloud-images.ubuntu.com/releases/14.04/release/ubuntu-14.04-server-cloudimg-amd64.tar.gz,http://www.planet-lab.org/cord/trusty-server-multi-nic.img"
FORCE_CONFIG_DRIVE=True

# Networks
Q_ML2_TENANT_NETWORK_TYPE=vxlan
Q_ML2_PLUGIN_MECHANISM_DRIVERS=onos_ml2
Q_PLUGIN_EXTRA_CONF_PATH=/opt/stack/networking-onos/etc
Q_PLUGIN_EXTRA_CONF_FILES=(conf_onos.ini)
ML2_L3_PLUGIN=networking_onos.plugins.l3.driver.ONOSL3Plugin
NEUTRON_CREATE_INITIAL_NETWORKS=False

# Services
enable_service q-svc
disable_service n-net
disable_service n-cpu
disable_service tempest
disable_service c-sch
disable_service c-api
disable_service c-vol

# Branches
GLANCE_BRANCH=stable/mitaka
HORIZON_BRANCH=stable/mitaka
KEYSTONE_BRANCH=stable/mitaka
NEUTRON_BRANCH=stable/mitaka
NOVA_BRANCH=stable/mitaka
If you use other deployment tools or build the controller node manually, apply the following configurations to the Nova and Neutron configuration files. Set Neutron to use the ONOS ML2 plugin and the ONOS L3 service plugin.
core_plugin = neutron.plugins.ml2.plugin.Ml2Plugin
service_plugins = networking_onos.plugins.l3.driver.ONOSL3Plugin
dhcp_agent_notification = False
[ml2]
tenant_network_types = vxlan
type_drivers = vxlan
mechanism_drivers = onos_ml2

[securitygroup]
enable_security_group = True
Set Nova to use a config drive for the metadata service, so that the Neutron metadata-agent does not need to be launched, and of course, set Nova to use Neutron as its network service.
[DEFAULT]
force_config_drive = True
network_api_class = nova.network.neutronv2.api.API
security_group_api = neutron

[neutron]
url = http://[controller_ip]:9696
auth_strategy = keystone
admin_auth_url = http://[controller_ip]:35357/v2.0
admin_tenant_name = service
admin_username = neutron
admin_password = [admin passwd]
Don't forget to specify conf_onos.ini when you start Neutron service.
/usr/bin/python /usr/local/bin/neutron-server --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugins/ml2/ml2_conf.ini --config-file /opt/stack/networking-onos/etc/conf_onos.ini
Compute node
No special configuration is required for a compute node other than setting the network API to Neutron. For DevStack users, here's a sample DevStack local.conf.
[[local|localrc]]
HOST_IP=10.134.231.30       <-- local IP
SERVICE_HOST=162.243.x.x    <-- controller IP, must be reachable from your test browser for console access from Horizon
RABBIT_HOST=10.134.231.28
DATABASE_HOST=10.134.231.28
ADMIN_PASSWORD=nova
DATABASE_PASSWORD=$ADMIN_PASSWORD
RABBIT_PASSWORD=$ADMIN_PASSWORD
SERVICE_PASSWORD=$ADMIN_PASSWORD
SERVICE_TOKEN=$ADMIN_PASSWORD
DATABASE_TYPE=mysql
NOVA_VNC_ENABLED=True
VNCSERVER_PROXYCLIENT_ADDRESS=$HOST_IP
VNCSERVER_LISTEN=$HOST_IP
LIBVIRT_TYPE=kvm

# Log
SCREEN_LOGDIR=/opt/stack/logs/screen

# Services
ENABLED_SERVICES=n-cpu,neutron

# Branches
NOVA_BRANCH=stable/mitaka
KEYSTONE_BRANCH=stable/mitaka
NEUTRON_BRANCH=stable/mitaka
If your compute node is a VM, try nested KVM first (http://docs.openstack.org/developer/devstack/guides/devstack-with-nested-kvm.html); it is much faster than QEMU. If nested KVM is not possible, set LIBVIRT_TYPE=qemu instead.
For manual setups, set Nova to use Neutron as the network API.
[DEFAULT]
force_config_drive = always
network_api_class = nova.network.neutronv2.api.API
security_group_api = neutron

[neutron]
url = http://[controller_ip]:9696
auth_strategy = keystone
admin_auth_url = http://[controller_ip]:35357/v2.0
admin_tenant_name = service
admin_username = neutron
admin_password = [admin passwd]
Gateway node
No OpenStack service needs to be running on gateway nodes.
Additional Node Setup
1. Push network-cfg.json after ONOS and OpenStack are ready, and check that every COMPUTE type node is in COMPLETE state with the openstack-nodes command. If a node is INCOMPLETE, use the openstack-node-check command to see the detailed state of each bootstrap step. Leave GATEWAY type nodes in DEVICE_CREATED state; they need the additional configurations explained later.
$ curl --user onos:rocks -X POST -H "Content-Type: application/json" http://$onos-ip:8181/onos/v1/network/configuration/ -d @network-cfg.json
For your information, pushing the network config file triggers reinitialization of all nodes at once. Reinitializing a node already in COMPLETE state does no harm. If you want to reinitialize only a particular compute node, use the openstack-node-init command with its hostname.
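For example, to inspect and then re-initialize a single compute node from the ONOS CLI (the hostname compute-01 here is taken from the example node list later in this guide):

```
onos> openstack-node-check compute-01
onos> openstack-node-init compute-01
```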
2. GATEWAY type nodes require Quagga and an additional ONOS instance running vRouter. Download and install Docker first.
$ curl -s https://get.docker.io/ubuntu/ | sudo sh
3. Download the scripts that help set up a gateway node.
$ git clone https://github.com/hyunsun/sona-setup.git
$ cd sona-setup
4. Modify volumes/gateway/zebra.conf and volumes/gateway/bgpd.conf as you want; samples of both config files are shown below. Note that the fpm connection ip in zebra.conf should be the eth0 interface IP address of the onos-vrouter container, assigned by Docker. Then run the Quagga container with those config files. The IP address given to the command should be equal to the router-id in bgpd.conf, and the MAC address must be "fe:00:00:00:00:01"; it is hard-coded right now and needs to be improved soon.
! -*- bgp -*-
!
! BGPd sample configuration file
!
!
hostname gateway-01
password zebra
!
router bgp 65101
  bgp router-id 172.18.0.254
  timers bgp 3 9
  neighbor 172.18.0.1 remote-as 65100
  neighbor 172.18.0.1 ebgp-multihop
  neighbor 172.18.0.1 timers connect 5
  neighbor 172.18.0.1 advertisement-interval 5
  network 172.27.0.0/24
!
log file /var/log/quagga/bgpd.log
!
hostname gateway-01
password zebra
!
fpm connection ip 172.17.0.2 port 2620
$ ./quagga.sh --name=gateway-01 --ip=172.18.0.254/24 --mac=fe:00:00:00:00:01
If you check the result of ovs-vsctl show, there should be a new port named quagga on the br-router bridge.
5. If there's no external router, or an emulation of one, in your setup, add another Quagga container to act as an external router. First, modify volumes/router/zebra.conf and volumes/router/bgpd.conf to make this Quagga an external router neighboring the one created right before, then use the same command as above with the additional argument --external-router to bring up the router container. Any MAC address can be used this time.
! -*- bgp -*-
!
! BGPd sample configuration file
!
!
hostname router-01
password zebra
!
router bgp 65100
  bgp router-id 172.18.0.1
  timers bgp 3 9
  neighbor 172.18.0.254 remote-as 65101
  neighbor 172.18.0.254 ebgp-multihop
  neighbor 172.18.0.254 timers connect 5
  neighbor 172.18.0.254 advertisement-interval 5
  neighbor 172.18.0.254 default-originate
!
log file /var/log/quagga/bgpd.log
!
hostname router-01
password zebra
!
$ ./quagga.sh --name=router-01 --ip=172.18.0.1/24 --mac=fa:00:00:00:00:01 --external-router
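Assuming the sona-setup Quagga images ship with vtysh (an assumption about the container contents), you can check from the gateway node that the two routers have established a BGP session; the peer 172.18.0.1 should appear with an Up/Down timer rather than in Active or Connect state:

```
$ sudo docker exec -it gateway-01 vtysh -c "show ip bgp summary"
```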
6. The last container is ONOS with vRouter. Refer to the SONA Network Configuration Guide and write a network configuration for vRouter. Name it vrouter.json, place it under sona-setup, and run vrouter.sh, which brings up an ONOS container with the vRouter application activated.
# modify vrouter.json
sona-setup$ vrouter.sh
sona-setup$ sudo docker ps
CONTAINER ID  IMAGE                 COMMAND               CREATED     STATUS     PORTS                                   NAMES
e5ac67e62bbb  onosproject/onos:1.6  "./bin/onos-service"  8 days ago  Up 8 days  6653/tcp, 8101/tcp, 8181/tcp, 9876/tcp  onos-vrouter
7. Once it is up and running, check the ports result. If any port number does not match the ones in vrouter.json, modify the config file with the correct port numbers and just re-run vrouter.sh. This actually happens often, since you may re-create the Quagga containers while fixing their config files, and OVS increments the port number whenever a new port is added to a bridge.
$ ssh -p 8101 karaf@172.17.0.2    # password is karaf
onos> ports
id=of:00000000000000b1, available=true, role=MASTER, type=SWITCH, mfr=Nicira, Inc., hw=Open vSwitch, sw=2.3.0, serial=None, driver=softrouter, channelId=172.17.0.1:58292, managementAddress=172.17.0.1, name=of:00000000000000b1, protocol=OF_13
  port=local, state=disabled, type=copper, speed=0 , portName=br-router, portMac=e6:a0:79:f9:d1:4a
  port=1, state=enabled, type=copper, speed=0 , portName=patch-rout, portMac=fe:da:85:15:b1:bf
  port=24, state=enabled, type=copper, speed=10000 , portName=quagga, portMac=06:96:1b:36:32:77
  port=25, state=enabled, type=copper, speed=10000 , portName=quagga-router, portMac=ea:1e:71:d1:fd:81
"apps" : { "org.onosproject.router" : { "router" : { "controlPlaneConnectPoint" : "of:00000000000000b1/24", "ospfEnabled" : "true", "interfaces" : [ "b1-1", "b1-2" ] } } }, "ports" : { "of:00000000000000b1/25" : { "interfaces" : [ { "name" : "b1-1", "ips" : [ "172.18.0.254/24" ], "mac" : "fe:00:00:00:00:01" } ] }, "of:00000000000000b1/1" : { "interfaces" : [ { "name" : "b1-2", "ips" : [ "172.27.0.254/24" ], "mac" : "fe:00:00:00:00:01" } ] } }, "hosts" : { "fe:00:00:00:00:02/-1" : { "basic": { "ips": ["172.27.0.1"], "location": "of:00000000000000b1/1" } } }
- Line 11: Device ID and port number of the port with portName=quagga -> controlPlaneConnectPoint
- Line 18: Device ID and port number of the port with portName=quagga-router, or another actual uplink port
If you have a floating IP range, 172.27.0.0/24 in this example, check the following configurations as well.
- Line 27: (optional interface config for the floating IP address range) Device ID and port number of the port with portName=patch-rout
- Line 41: (optional interface config for the floating IP gateway) Device ID and port number of the port with portName=patch-rout
Once you fix the vrouter.json file, re-create the onos-vrouter container with the updated configuration. The vrouter.sh script takes care of removing the existing container.
sona-setup$ vrouter.sh
8. If everything's right, check the fpm-connections, hosts, next-hops, and routes. 172.18.0.1 is the external default gateway in this example. The host with IP address 172.27.0.1 is for the floating IP and routable from outside, which will be explained later.
onos> hosts
id=FA:00:00:00:00:01/None, mac=FA:00:00:00:00:01, location=of:00000000000000b1/25, vlan=None, ip(s)=[172.18.0.1]
id=FE:00:00:00:00:01/None, mac=FE:00:00:00:00:01, location=of:00000000000000b1/24, vlan=None, ip(s)=[172.18.0.254]
id=FE:00:00:00:00:02/None, mac=FE:00:00:00:00:02, location=of:00000000000000b1/1, vlan=None, ip(s)=[172.27.0.1], name=FE:00:00:00:00:02/None
onos> fpm-connections
172.17.0.2:52332 connected since 6m ago
onos> next-hops
ip=172.18.0.1, mac=FA:00:00:00:00:01, numRoutes=1
onos> routes
Table: ipv4
Network            Next Hop
0.0.0.0/0          172.18.0.1
Total: 1
Table: ipv6
Network            Next Hop
Total: 0
onos> openstack-nodes
hostname=compute-01, type=COMPUTE, managementIp=10.203.25.244, dataIp=10.134.34.222, intBridge=of:00000000000000a1, routerBridge=Optional.empty init=COMPLETE
hostname=compute-02, type=COMPUTE, managementIp=10.203.25.245, dataIp=10.134.34.223, intBridge=of:00000000000000a2, routerBridge=Optional.empty init=COMPLETE
hostname=gateway-01, type=GATEWAY, managementIp=10.203.198.125, dataIp=10.134.33.208, intBridge=of:00000000000000a3, routerBridge=Optional[of:00000000000000b1] init=COMPLETE
hostname=gateway-02, type=GATEWAY, managementIp=10.203.198.131, dataIp=10.134.33.209, intBridge=of:00000000000000a4, routerBridge=Optional[of:00000000000000b2] init=COMPLETE
Total 4 nodes
CLI
Command | Usage | Description |
---|---|---|
openstack-nodes | openstack-nodes | Shows the list of compute and gateway nodes registered to the OpenStack node service |
openstack-node-check | openstack-node-check [hostname] | Shows the state of each node bootstrap step |
openstack-node-init | openstack-node-init [hostname] | Tries to re-initialize a given node; it does no harm to re-initialize a node already in COMPLETE state |