Introduction
You will need:
- An ONOS cluster installed and running
- An OpenStack service installed and running ("stable/mitaka" version is used here)
Note that these instructions assume you're familiar with ONOS and OpenStack; they do not cover how to install or troubleshoot those services. If you need that, please refer to the ONOS (http://wiki.onosproject.org) and OpenStack (http://docs.openstack.org) documentation, respectively.
Prerequisites
1. Make sure your OVS version is 2.3.0 or later. If you need to build or upgrade OVS, a standard OVS installation guide works well; just remember to substitute 2.3.0 or later for whatever version the guide uses.
2. Set OVSDB listening mode on your compute nodes. There are two ways to do this; in both, "compute_node_ip" must be an address reachable from the ONOS instance.
$ ovs-appctl -t ovsdb-server ovsdb-server/add-remote ptcp:6640:compute_node_ip
Or you can make the setting permanent by adding the following line to /usr/share/openvswitch/scripts/ovs-ctl, right after the "set ovsdb-server "$DB_FILE"" line, and then restarting the openvswitch-switch service.
set "$@" --remote=ptcp:6640
Either way, you should now see port 6640 in the LISTEN state.
$ netstat -ntl
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address           Foreign Address         State
tcp        0      0 0.0.0.0:22              0.0.0.0:*               LISTEN
tcp        0      0 0.0.0.0:6640            0.0.0.0:*               LISTEN
tcp6       0      0 :::22
3. Check your OVSDB. It is okay if there is already a bridge named br-int, but note that SONA will add or update its controller, DPID, and fail mode.
$ sudo ovs-vsctl show
cedbbc0a-f9a4-4d30-a3ff-ef9afa813efb
    ovs_version: "2.5.0"
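If br-int already exists, you can inspect the fields SONA will manage before and after bootstrap (the values will differ per deployment):

$ sudo ovs-vsctl get-controller br-int
$ sudo ovs-vsctl get-fail-mode br-int
$ sudo ovs-vsctl get bridge br-int datapath_id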
ONOS Setup
1. Refer to the SONA Network Configuration Guide to write a network configuration for SONA, and prepare one for your environment. Place the network-cfg.json under tools/package/config/, build the package, and then install ONOS. Here's an example cell configuration and the commands.
onos$ cell
ONOS_CELL=sona
OCI=10.203.255.221
OC1=10.203.255.221
ONOS_APPS=drivers,openflow-base,openstackswitching,openstackrouting
ONOS_GROUP=sdn
ONOS_SCENARIOS=/Users/hyunsun/onos/tools/test/scenarios
ONOS_TOPO=default
ONOS_USER=sdn
ONOS_WEB_PASS=rocks
ONOS_WEB_USER=onos
onos$ buck build onos
onos$ cp ~/network-cfg.json ~/onos/tools/package/config/
onos$ buck build package
onos$ stc setup
Make sure to activate only the following ONOS applications.
ONOS_APPS=drivers,openflow-base,openstackswitching
If you want Neutron L3 service, enable openstackrouting, too.
ONOS_APPS=drivers,openflow-base,openstackswitching,openstackrouting
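If ONOS is already up, you can also activate the applications at runtime from the ONOS CLI instead of the cell definition, for example:

onos> app activate org.onosproject.openstackswitching
onos> app activate org.onosproject.openstackrouting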
2. Check that all the applications have been activated successfully.
onos> apps -s -a
*   4 org.onosproject.dhcp                1.7.0.SNAPSHOT DHCP Server App
*   6 org.onosproject.optical-model       1.7.0.SNAPSHOT Optical information model
*  12 org.onosproject.openflow-base       1.7.0.SNAPSHOT OpenFlow Provider
*  19 org.onosproject.ovsdb-base          1.7.0.SNAPSHOT OVSDB Provider
*  22 org.onosproject.drivers.ovsdb      1.7.0.SNAPSHOT OVSDB Device Drivers
*  27 org.onosproject.openstackinterface  1.7.0.SNAPSHOT OpenStack Interface App
*  28 org.onosproject.openstacknode       1.7.0.SNAPSHOT OpenStack Node Bootstrap App
*  29 org.onosproject.scalablegateway     1.7.0.SNAPSHOT Scalable GW App
*  30 org.onosproject.openstackrouting    1.7.0.SNAPSHOT OpenStack Routing App
*  44 org.onosproject.openstackswitching  1.7.0.SNAPSHOT OpenStack Switching App
*  50 org.onosproject.drivers             1.7.0.SNAPSHOT Default device drivers
OpenStack Setup
How to deploy OpenStack is out of the scope of this document; it only describes the configuration needed to use SONA. All other settings are up to your environment.
Controller Node
1. Install networking-onos (Neutron ML2 plugin for ONOS) first.
$ git clone https://github.com/openstack/networking-onos.git
$ cd networking-onos
$ sudo python setup.py install
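To sanity-check the installation, confirm that the package is visible to the Python environment Neutron runs in (the import prints nothing on success):

$ python -c "import networking_onos"
$ pip show networking-onos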
2. Specify the ONOS access information in conf_onos.ini. You may want to copy the file to /etc/neutron/plugins/ml2/, where the other Neutron configuration files are.
# Configuration options for ONOS ML2 Mechanism driver
[onos]
# (StrOpt) ONOS ReST interface URL. This is a mandatory field.
url_path = http://onos.instance.ip.addr:8181/onos/openstackswitching
# (StrOpt) Username for authentication. This is a mandatory field.
username = onos
# (StrOpt) Password for authentication. This is a mandatory field.
password = rocks
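Before starting Neutron, it's worth checking that the address and credentials above actually reach ONOS; a GET against any ONOS REST endpoint will do, for example (substitute your ONOS address):

$ curl --user onos:rocks http://onos.instance.ip.addr:8181/onos/v1/applications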
3. The next step is to install and run the OpenStack services. For DevStack users, the sample local.conf below builds an OpenStack controller node. Make sure your DevStack branch is consistent with the OpenStack branches, stable/mitaka in this example.
[[local|localrc]]
HOST_IP=10.134.231.28
SERVICE_HOST=10.134.231.28
RABBIT_HOST=10.134.231.28
DATABASE_HOST=10.134.231.28
Q_HOST=10.134.231.28
ADMIN_PASSWORD=nova
DATABASE_PASSWORD=$ADMIN_PASSWORD
RABBIT_PASSWORD=$ADMIN_PASSWORD
SERVICE_PASSWORD=$ADMIN_PASSWORD
SERVICE_TOKEN=$ADMIN_PASSWORD
DATABASE_TYPE=mysql

# Log
SCREEN_LOGDIR=/opt/stack/logs/screen

# Images
IMAGE_URLS="http://cloud-images.ubuntu.com/releases/14.04/release/ubuntu-14.04-server-cloudimg-amd64.tar.gz,http://www.planet-lab.org/cord/trusty-server-multi-nic.img"
FORCE_CONFIG_DRIVE=True

# Networks
Q_ML2_TENANT_NETWORK_TYPE=vxlan
Q_ML2_PLUGIN_MECHANISM_DRIVERS=onos_ml2
Q_PLUGIN_EXTRA_CONF_PATH=/opt/stack/networking-onos/etc
Q_PLUGIN_EXTRA_CONF_FILES=(conf_onos.ini)
ML2_L3_PLUGIN=networking_onos.plugins.l3.driver.ONOSL3Plugin
NEUTRON_CREATE_INITIAL_NETWORKS=False

# Services
enable_service q-svc
disable_service n-net
disable_service n-cpu
disable_service tempest
disable_service c-sch
disable_service c-api
disable_service c-vol

# Branches
GLANCE_BRANCH=stable/mitaka
HORIZON_BRANCH=stable/mitaka
KEYSTONE_BRANCH=stable/mitaka
NEUTRON_BRANCH=stable/mitaka
NOVA_BRANCH=stable/mitaka
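With this local.conf in place, build the node with DevStack as usual (a sketch, assuming DevStack is cloned to ~/devstack; checking out stable/mitaka keeps DevStack consistent with the branches above):

$ git clone https://git.openstack.org/openstack-dev/devstack -b stable/mitaka ~/devstack
$ cp local.conf ~/devstack/
$ cd ~/devstack
$ ./stack.sh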
If you use other deployment tools or build the controller node manually, add the following settings to the Nova and Neutron configuration files. Set Neutron to use the ONOS ML2 plugin and the ONOS L3 service plugin in neutron.conf:
core_plugin = neutron.plugins.ml2.plugin.Ml2Plugin
service_plugins = networking_onos.plugins.l3.driver.ONOSL3Plugin
dhcp_agent_notification = False
And set the ML2 options in ml2_conf.ini:

[ml2]
tenant_network_types = vxlan
type_drivers = vxlan
mechanism_drivers = onos_ml2

[securitygroup]
enable_security_group = True
Set Nova to use the config drive for the metadata service, so that you don't need to run the Neutron metadata-agent, and, of course, set Nova to use Neutron as its network service (nova.conf):
[DEFAULT]
force_config_drive = True
network_api_class = nova.network.neutronv2.api.API
security_group_api = neutron

[neutron]
url = http://[controller_ip]:9696
auth_strategy = keystone
admin_auth_url = http://[controller_ip]:35357/v2.0
admin_tenant_name = service
admin_username = neutron
admin_password = [admin passwd]
Don't forget to specify conf_onos.ini when you start the Neutron service.
/usr/bin/python /usr/local/bin/neutron-server --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugins/ml2/ml2_conf.ini --config-file /opt/stack/networking-onos/etc/conf_onos.ini
Compute Node
No special configuration is required for a compute node other than setting the network API to Neutron. For DevStack users, here's a sample local.conf.
[[local|localrc]]
HOST_IP=10.134.231.30         # local IP
SERVICE_HOST=162.243.x.x      # controller IP, must be reachable from your test browser for console access from Horizon
RABBIT_HOST=10.134.231.28
DATABASE_HOST=10.134.231.28
ADMIN_PASSWORD=nova
DATABASE_PASSWORD=$ADMIN_PASSWORD
RABBIT_PASSWORD=$ADMIN_PASSWORD
SERVICE_PASSWORD=$ADMIN_PASSWORD
SERVICE_TOKEN=$ADMIN_PASSWORD
DATABASE_TYPE=mysql
NOVA_VNC_ENABLED=True
VNCSERVER_PROXYCLIENT_ADDRESS=$HOST_IP
VNCSERVER_LISTEN=$HOST_IP
LIBVIRT_TYPE=kvm

# Log
SCREEN_LOGDIR=/opt/stack/logs/screen

# Services
ENABLED_SERVICES=n-cpu,neutron

# Branches
NOVA_BRANCH=stable/mitaka
KEYSTONE_BRANCH=stable/mitaka
NEUTRON_BRANCH=stable/mitaka
If your compute node is a VM, try nested KVM first (see http://docs.openstack.org/developer/devstack/guides/devstack-with-nested-kvm.html) or set LIBVIRT_TYPE=qemu. Nested KVM is much faster than QEMU, so use it if possible.
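A quick way to check whether the VM can use KVM at all is to look for hardware virtualization flags in the guest (look for svm instead of vmx on AMD hosts; kvm-ok comes with Ubuntu's cpu-checker package):

$ egrep -c '(vmx|svm)' /proc/cpuinfo    # 0 means no acceleration; fall back to LIBVIRT_TYPE=qemu
$ kvm-ok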
For manual setups, set Nova to use Neutron as its network API (nova.conf):
[DEFAULT]
force_config_drive = always
network_api_class = nova.network.neutronv2.api.API
security_group_api = neutron

[neutron]
url = http://[controller_ip]:9696
auth_strategy = keystone
admin_auth_url = http://[controller_ip]:35357/v2.0
admin_tenant_name = service
admin_username = neutron
admin_password = [admin passwd]
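After editing nova.conf, restart the compute service so the changes take effect (the service name varies by distro; this assumes an Ubuntu-style deployment):

$ sudo service nova-compute restart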
Gateway Node
No OpenStack service needs to be running on gateway nodes.
Node Bootstrap
1. Once ONOS and OpenStack are ready, push network-cfg.json and check with the openstack-nodes command that the state of every COMPUTE type node is COMPLETE. If a node is INCOMPLETE, use the openstack-node-check command to see the detailed state of each bootstrap step. GATEWAY type nodes require additional configuration, explained later.
$ curl --user onos:rocks -X POST -H "Content-Type: application/json" http://$onos-ip:8181/onos/v1/network/configuration/ -d @network-cfg.json
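If the push succeeds, you can read the configuration back with a GET on the same endpoint to confirm it was accepted:

$ curl --user onos:rocks http://$onos-ip:8181/onos/v1/network/configuration/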
Note that pushing the network config file triggers reinitialization of all nodes at once; reinitializing a node that is already in COMPLETE state does no harm. If you want to reinitialize only a particular compute node, use the openstack-node-init command with its hostname.
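For example, to reinitialize only compute-01 (one of the hostnames in the sample output below):

onos> openstack-node-init compute-01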
2. For GATEWAY type nodes, complete the additional configuration mentioned above; once they are set up, all nodes, including gateways, should report COMPLETE:
onos> openstack-nodes
hostname=compute-01, type=COMPUTE, managementIp=10.203.25.244, dataIp=10.134.34.222, intBridge=of:00000000000000a1, routerBridge=Optional.empty init=COMPLETE
hostname=compute-02, type=COMPUTE, managementIp=10.203.25.245, dataIp=10.134.34.223, intBridge=of:00000000000000a2, routerBridge=Optional.empty init=COMPLETE
hostname=gateway-01, type=GATEWAY, managementIp=10.203.198.125, dataIp=10.134.33.208, intBridge=of:00000000000000a3, routerBridge=Optional[of:00000000000000b1] init=COMPLETE
hostname=gateway-02, type=GATEWAY, managementIp=10.203.198.131, dataIp=10.134.33.209, intBridge=of:00000000000000a4, routerBridge=Optional[of:00000000000000b2] init=COMPLETE
Total 4 nodes
CLI
Command | Usage | Description |
---|---|---|
openstack-nodes | openstack-nodes | Shows the list of compute and gateway nodes registered to the OpenStack node service |
openstack-node-check | openstack-node-check [hostname] | Shows the state of each bootstrap step of the given node |
openstack-node-init | openstack-node-init [hostname] | Re-initializes the given node; re-initializing a node already in COMPLETE state does no harm |