Work in progress...
Prerequisites
You will need:
- An ONOS cluster installed and running
- An OpenStack service installed and running ("stable/mitaka" version is used here)
OpenStack Setup
How to deploy OpenStack is out of the scope of this documentation. This section only describes the configurations required to use SONA; all other settings are up to your environment.
Controller Node
1. Install networking-onos (Neutron ML2 plugin for ONOS) first.
$ git clone https://github.com/openstack/networking-onos.git
$ cd networking-onos
$ sudo python setup.py install
2. Specify ONOS access information. You may want to copy the config file to /etc/neutron/plugins/ml2/ where the other Neutron configuration files are.
# Configuration options for ONOS ML2 Mechanism driver
[onos]
# (StrOpt) ONOS ReST interface URL. This is a mandatory field.
url_path = http://onos.instance.ip.addr:8181/onos/openstackswitching
# (StrOpt) Username for authentication. This is a mandatory field.
username = onos
# (StrOpt) Password for authentication. This is a mandatory field.
password = rocks
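Since the plugin treats all three options as mandatory, it can save a debugging round-trip to sanity-check the file before starting Neutron. The sketch below is an illustration only, not part of networking-onos; the section and option names follow the sample above.

```python
import configparser

# Mandatory options of the [onos] section, per the sample conf_onos.ini above.
MANDATORY = ("url_path", "username", "password")

def check_onos_conf(text):
    """Return a list of missing mandatory [onos] options (empty means OK)."""
    parser = configparser.ConfigParser()
    parser.read_string(text)
    if not parser.has_section("onos"):
        return list(MANDATORY)
    return [opt for opt in MANDATORY if not parser.has_option("onos", opt)]

sample = """
[onos]
url_path = http://onos.instance.ip.addr:8181/onos/openstackswitching
username = onos
password = rocks
"""
print(check_onos_conf(sample))  # an empty list means all mandatory options are set
```

To check a real file, pass the contents of /etc/neutron/plugins/ml2/conf_onos.ini instead of the embedded sample.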
3. The next step is to install and run the OpenStack services.
For DevStack users, use this sample DevStack local.conf to build an OpenStack controller node. Make sure your DevStack branch is consistent with the OpenStack branches, "stable/mitaka" in this example.
[[local|localrc]]
HOST_IP=10.134.231.28
SERVICE_HOST=10.134.231.28
RABBIT_HOST=10.134.231.28
DATABASE_HOST=10.134.231.28
Q_HOST=10.134.231.28
ADMIN_PASSWORD=nova
DATABASE_PASSWORD=$ADMIN_PASSWORD
RABBIT_PASSWORD=$ADMIN_PASSWORD
SERVICE_PASSWORD=$ADMIN_PASSWORD
SERVICE_TOKEN=$ADMIN_PASSWORD
DATABASE_TYPE=mysql

# Log
SCREEN_LOGDIR=/opt/stack/logs/screen

# Images
IMAGE_URLS="http://cloud-images.ubuntu.com/releases/14.04/release/ubuntu-14.04-server-cloudimg-amd64.tar.gz,http://www.planet-lab.org/cord/trusty-server-multi-nic.img"
FORCE_CONFIG_DRIVE=True

# Networks
Q_ML2_TENANT_NETWORK_TYPE=vxlan
Q_ML2_PLUGIN_MECHANISM_DRIVERS=onos_ml2
Q_PLUGIN_EXTRA_CONF_PATH=/opt/stack/networking-onos/etc
Q_PLUGIN_EXTRA_CONF_FILES=(conf_onos.ini)
ML2_L3_PLUGIN=networking_onos.plugins.l3.driver.ONOSL3Plugin
NEUTRON_CREATE_INITIAL_NETWORKS=False

# Services
enable_service q-svc
disable_service n-net
disable_service n-cpu
disable_service tempest
disable_service c-sch
disable_service c-api
disable_service c-vol

# Branches
GLANCE_BRANCH=stable/mitaka
HORIZON_BRANCH=stable/mitaka
KEYSTONE_BRANCH=stable/mitaka
NEUTRON_BRANCH=stable/mitaka
NOVA_BRANCH=stable/mitaka
If you use other deployment tools or build the controller node manually, apply the following settings to the Nova and Neutron configuration files.
Set Neutron to use the ONOS ML2 plugin.
core_plugin = neutron.plugins.ml2.plugin.Ml2Plugin
service_plugins = networking_onos.plugins.l3.driver.ONOSL3Plugin
dhcp_agent_notification = False
[ml2]
tenant_network_types = vxlan
type_drivers = vxlan
mechanism_drivers = onos_ml2

[securitygroup]
enable_security_group = True
Set Nova to use the config drive for the metadata service, so that you do not need to launch the Neutron metadata-agent. And of course, set Nova to use Neutron for the network service.
[DEFAULT]
force_config_drive = True
network_api_class = nova.network.neutronv2.api.API
security_group_api = neutron

[neutron]
url = http://[controller_ip]:9696
auth_strategy = keystone
admin_auth_url = http://[controller_ip]:35357/v2.0
admin_tenant_name = service
admin_username = neutron
admin_password = [admin passwd]
Don't forget to specify conf_onos.ini when you start Neutron service.
/usr/bin/python /usr/local/bin/neutron-server \
    --config-file /etc/neutron/neutron.conf \
    --config-file /etc/neutron/plugins/ml2/ml2_conf.ini \
    --config-file /opt/stack/networking-onos/etc/conf_onos.ini
Compute Node
No special configuration is required for compute nodes other than setting the network API to Neutron.
For DevStack users, here is a sample DevStack local.conf.
[[local|localrc]]
HOST_IP=10.134.231.30       # local IP
SERVICE_HOST=162.243.x.x    # controller IP, must be reachable from your test browser for console access from Horizon
RABBIT_HOST=10.134.231.28
DATABASE_HOST=10.134.231.28
ADMIN_PASSWORD=nova
DATABASE_PASSWORD=$ADMIN_PASSWORD
RABBIT_PASSWORD=$ADMIN_PASSWORD
SERVICE_PASSWORD=$ADMIN_PASSWORD
SERVICE_TOKEN=$ADMIN_PASSWORD
DATABASE_TYPE=mysql
NOVA_VNC_ENABLED=True
VNCSERVER_PROXYCLIENT_ADDRESS=$HOST_IP
VNCSERVER_LISTEN=$HOST_IP
LIBVIRT_TYPE=kvm

# Log
SCREEN_LOGDIR=/opt/stack/logs/screen

# Services
ENABLED_SERVICES=n-cpu,neutron

# Branches
NOVA_BRANCH=stable/mitaka
KEYSTONE_BRANCH=stable/mitaka
NEUTRON_BRANCH=stable/mitaka
If your compute node is a VM, try nested KVM (see http://docs.openstack.org/developer/devstack/guides/devstack-with-nested-kvm.html) first, or set LIBVIRT_TYPE=qemu. Nested KVM is much faster than QEMU, so use it if possible.
For manual setups, configure Nova to use Neutron as the network API.
[DEFAULT]
force_config_drive = always
network_api_class = nova.network.neutronv2.api.API
security_group_api = neutron

[neutron]
url = http://[controller_ip]:9696
auth_strategy = keystone
admin_auth_url = http://[controller_ip]:35357/v2.0
admin_tenant_name = service
admin_username = neutron
admin_password = [admin passwd]
Gateway Node
No OpenStack service needs to be running on gateway nodes.
Additional compute and gateway node setup
1. Make sure your OVS version is 2.3.0 or later.
2. Set the OVSDB listening mode on your compute nodes. There are two ways to do this.
$ ovs-appctl -t ovsdb-server ovsdb-server/add-remote ptcp:6640:host_ip
Or you can make the setting permanent by adding the following line to /usr/share/openvswitch/scripts/ovs-ctl, right after the "set ovsdb-server "$DB_FILE"" line. You then need to restart the openvswitch-switch service.
set "$@" --remote=ptcp:6640
Either way, you should now see port 6640 in the listening state.
$ netstat -ntl
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address           Foreign Address         State
tcp        0      0 0.0.0.0:22              0.0.0.0:*               LISTEN
tcp        0      0 0.0.0.0:6640            0.0.0.0:*               LISTEN
tcp6       0      0 :::22
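The same check can be scripted, for example when verifying many compute nodes at once. This is a generic TCP probe sketch, not part of SONA; the host below is just an example.

```python
import socket

def is_port_listening(host, port, timeout=2.0):
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Probe the OVSDB passive port on a compute node (host is an example).
print(is_port_listening("127.0.0.1", 6640))
```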
3. Check your OVSDB. It is okay if there is a bridge named "br-int", but note that SONA will add or update its controller, DPID, and fail mode.
$ sudo ovs-vsctl show
cedbbc0a-f9a4-4d30-a3ff-ef9afa813efb
    ovs_version: "2.3.0"
TODO: Add additional gateway node setups
ONOS Setup
How to deploy ONOS is out of scope of this documentation.
1. Once your ONOS is ready, create a network-cfg.json file to configure the SONA apps. If you want Neutron L3 services, you will also need GATEWAY type nodes. (L3 SERVICE IS WORK IN PROGRESS.)
{
    "apps" : {
        "org.onosproject.openstackinterface" : {
            "openstackinterface" : {
                "neutronServer" : "http://10.243.139.46:9696/v2.0/",
                "keystoneServer" : "http://10.243.139.46:5000/v2.0/",
                "userName" : "admin",
                "password" : "nova"
            }
        },
        "org.onosproject.openstacknode" : {
            "openstacknode" : {
                "nodes" : [
                    {
                        "hostname" : "compute-01",
                        "type" : "COMPUTE",
                        "managementIp" : "10.203.25.244",
                        "dataIp" : "10.134.34.222",
                        "integrationBridge" : "of:00000000000000a1"
                    },
                    {
                        "hostname" : "compute-02",
                        "type" : "COMPUTE",
                        "managementIp" : "10.203.229.42",
                        "dataIp" : "10.134.34.223",
                        "integrationBridge" : "of:00000000000000a2"
                    },
                    {
                        "hostname" : "gateway-01",
                        "type" : "GATEWAY",
                        "managementIp" : "10.203.198.125",
                        "dataIp" : "10.134.33.208",
                        "integrationBridge" : "of:00000000000000a3",
                        "routerBridge" : "of:00000000000000b3"
                    },
                    {
                        "hostname" : "gateway-02",
                        "type" : "GATEWAY",
                        "managementIp" : "10.203.198.131",
                        "dataIp" : "10.134.33.209",
                        "integrationBridge" : "of:00000000000000a4",
                        "routerBridge" : "of:00000000000000b4"
                    }
                ]
            }
        }
    },
    "devices" : {
        "of:00000000000000a1" : { "basic" : { "driver" : "sona" } },
        "of:00000000000000a2" : { "basic" : { "driver" : "sona" } },
        "of:00000000000000b3" : { "basic" : { "driver" : "softrouter" } },
        "of:00000000000000b4" : { "basic" : { "driver" : "softrouter" } }
    }
}
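A malformed node entry only shows up later as an INCOMPLETE node, so it can help to sanity-check the file before pushing it. The sketch below is an illustration only (not an ONOS tool); it checks the per-node fields used in the sample above, including the extra routerBridge required for GATEWAY nodes.

```python
import json

# Fields every node entry needs, per the sample network-cfg.json above.
NODE_FIELDS = ("hostname", "type", "managementIp", "dataIp", "integrationBridge")

def check_nodes(cfg):
    """Return a list of problems found in the openstacknode node entries."""
    problems = []
    nodes = cfg["apps"]["org.onosproject.openstacknode"]["openstacknode"]["nodes"]
    for node in nodes:
        for field in NODE_FIELDS:
            if field not in node:
                problems.append("%s missing %s" % (node.get("hostname", "?"), field))
        # GATEWAY nodes additionally need a router bridge.
        if node.get("type") == "GATEWAY" and "routerBridge" not in node:
            problems.append("%s missing routerBridge" % node.get("hostname", "?"))
    return problems

# Minimal example config; the gateway entry intentionally omits routerBridge.
cfg = json.loads("""
{ "apps": { "org.onosproject.openstacknode": { "openstacknode": { "nodes": [
  { "hostname": "compute-01", "type": "COMPUTE", "managementIp": "10.203.25.244",
    "dataIp": "10.134.34.222", "integrationBridge": "of:00000000000000a1" },
  { "hostname": "gateway-01", "type": "GATEWAY", "managementIp": "10.203.198.125",
    "dataIp": "10.134.33.208", "integrationBridge": "of:00000000000000a3" } ] } } } }
""")
print(check_nodes(cfg))  # → ['gateway-01 missing routerBridge']
```

To check a real file, load it with json.load(open("network-cfg.json")) instead of the embedded sample.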
We recommend placing this network-cfg.json under "onos/package/config" when you package ONOS, since all OVS switches have their driver set to "ovs" by default if there are no specific "devices" and "driver" settings in the network config. Once the driver is set to "ovs", it is hard to change it to "sona" or "softrouter" later. This might be improved soon.
2. Make sure to activate only the following ONOS applications.
ONOS_APPS=drivers,openflow-base,openstackswitching
If you want to test the Neutron L3 service APIs, enable openstackRouting as well. (L3 SERVICE IS WORK IN PROGRESS.)
ONOS_APPS=drivers,openflow-base,openstackswitching,openstackRouting
3. Push the network config file to ONOS.
curl --user onos:rocks -X POST -H "Content-Type: application/json" http://$onos-ip:8181/onos/v1/network/configuration/ -d @network-cfg.json
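The same push can be scripted; the sketch below builds the authenticated request with Python's urllib, mirroring the curl command above (the IP address and credentials are the examples used throughout this page). Actually sending it of course requires a reachable ONOS instance.

```python
import base64
import urllib.request

def build_push_request(onos_ip, user, password, cfg_bytes):
    """Build the POST request that pushes network-cfg.json to ONOS."""
    url = "http://%s:8181/onos/v1/network/configuration/" % onos_ip
    token = base64.b64encode(("%s:%s" % (user, password)).encode()).decode()
    return urllib.request.Request(
        url,
        data=cfg_bytes,
        headers={"Content-Type": "application/json",
                 "Authorization": "Basic " + token},
        method="POST",
    )

req = build_push_request("10.243.139.46", "onos", "rocks", b"{}")
# urllib.request.urlopen(req) would send it against a running ONOS.
print(req.get_method(), req.full_url)
```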
Check that all nodes are in COMPLETE state. Use the openstack-node-check command for more detailed states of a node. Pushing the network configuration triggers re-initialization of the nodes; it is harmless to re-initialize a node already in COMPLETE state. If you want to re-initialize a particular compute node, use the openstack-node-init command with its hostname.
onos> openstack-nodes
hostname=compute-01, type=COMPUTE, managementIp=10.203.25.244, dataIp=10.134.34.222, intBridge=of:00000000000000a1, routerBridge=Optional.empty init=INCOMPLETE
hostname=compute-02, type=COMPUTE, managementIp=10.203.229.42, dataIp=10.134.34.223, intBridge=of:00000000000000a2, routerBridge=Optional.empty init=COMPLETE
hostname=gateway-01, type=GATEWAY, managementIp=10.203.198.125, dataIp=10.134.33.208, intBridge=of:00000000000000a3, routerBridge=Optional[of:00000000000000b3] init=COMPLETE
hostname=gateway-02, type=GATEWAY, managementIp=10.203.198.131, dataIp=10.134.33.209, intBridge=of:00000000000000a4, routerBridge=Optional[of:00000000000000b4] init=COMPLETE
Total 4 nodes
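With many nodes it can be handy to filter this listing for nodes that did not finish bootstrapping. The sketch below is an illustration that parses the line format shown above; it is not an ONOS command.

```python
import re

def incomplete_nodes(cli_output):
    """Return hostnames whose init state is not COMPLETE in openstack-nodes output."""
    hosts = []
    for line in cli_output.splitlines():
        # Each node line looks like: "hostname=..., type=..., ... init=STATE"
        m = re.search(r"hostname=([^,]+).*init=(\w+)", line)
        if m and m.group(2) != "COMPLETE":
            hosts.append(m.group(1))
    return hosts

sample = (
    "hostname=compute-01, type=COMPUTE, managementIp=10.203.25.244, "
    "dataIp=10.134.34.222, intBridge=of:00000000000000a1, "
    "routerBridge=Optional.empty init=INCOMPLETE\n"
    "hostname=compute-02, type=COMPUTE, managementIp=10.203.229.42, "
    "dataIp=10.134.34.223, intBridge=of:00000000000000a2, "
    "routerBridge=Optional.empty init=COMPLETE\n"
    "Total 2 nodes"
)
print(incomplete_nodes(sample))  # → ['compute-01']
```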
It's done. Enjoy.
CLIs
Command | Usage | Description |
---|---|---|
openstack-nodes | openstack-nodes | Shows the list of compute and gateway nodes registered in the OpenStack node service |
openstack-node-check | openstack-node-check [hostname] | Shows the state of each bootstrap step of a given node |
openstack-node-init | openstack-node-init [hostname] | Re-initializes a given node. It is harmless to re-initialize a node already in COMPLETE state. |