This page explains how to set up and use the CORD-VTN service.
You will need:
The high-level architecture of the system is shown in the following figure.
[add figure]
You can find various ways to set up and build OpenStack on the Internet. The instructions here cover only the settings that matter for using the CORD VTN service.
You will need:
Install networking-onos (the Neutron ML2 plugin for ONOS) first.
$ git clone https://github.com/openstack/networking-onos.git
$ cd networking-onos
$ sudo python setup.py install
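To confirm the plugin landed in your Python environment, a quick check (assuming pip is available on the node):

$ pip show networking-onos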
Next, specify the ONOS access information in conf_onos.ini. You may want to copy the config file to /etc/neutron/plugins/ml2/, where the other Neutron configuration files are.
# Configuration options for ONOS ML2 Mechanism driver
[onos]
# (StrOpt) ONOS ReST interface URL. This is a mandatory field.
url_path = http://onos.instance.ip.addr:8181/onos/openstackswitching
# (StrOpt) Username for authentication. This is a mandatory field.
username = onos
# (StrOpt) Password for authentication. This is a mandatory field.
password = rocks
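For example, to copy it as suggested above (assuming you are still in the networking-onos checkout, which ships the sample config under etc/, matching the /opt/stack/networking-onos/etc path used later):

$ sudo cp etc/conf_onos.ini /etc/neutron/plugins/ml2/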
Set Neutron to use the ONOS ML2 plugin. In neutron.conf:
core_plugin = neutron.plugins.ml2.plugin.Ml2Plugin
In ml2_conf.ini:

[ml2]
tenant_network_types = vxlan
type_drivers = vxlan
mechanism_drivers = onos_ml2

[ml2_type_vxlan]
vni_ranges = 1001:2000
Set Nova to use a config drive for the metadata service, so that you don't need to launch the Neutron metadata agent. In nova.conf:
[DEFAULT]
force_config_drive = always
All other configuration is up to your development settings. Don't forget to specify conf_onos.ini when you launch the Neutron service:
/usr/bin/python /usr/local/bin/neutron-server --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugins/ml2/ml2_conf.ini --config-file /opt/stack/networking-onos/etc/conf_onos.ini |
Here's a sample DevStack local.conf for the controller node.
[[local|localrc]]
HOST_IP=10.134.231.28
SERVICE_HOST=10.134.231.28
RABBIT_HOST=10.134.231.28
DATABASE_HOST=10.134.231.28
Q_HOST=10.134.231.28
ADMIN_PASSWORD=nova
DATABASE_PASSWORD=$ADMIN_PASSWORD
RABBIT_PASSWORD=$ADMIN_PASSWORD
SERVICE_PASSWORD=$ADMIN_PASSWORD
SERVICE_TOKEN=$ADMIN_PASSWORD
DATABASE_TYPE=mysql

# Log
SCREEN_LOGDIR=/opt/stack/logs/screen

# Images
IMAGE_URLS="http://cloud-images.ubuntu.com/releases/precise/release/ubuntu-12.04-server-cloudimg-amd64.tar.gz"
FORCE_CONFIG_DRIVE=always

NEUTRON_CREATE_INITIAL_NETWORKS=False
Q_ML2_PLUGIN_MECHANISM_DRIVERS=onos_ml2
Q_PLUGIN_EXTRA_CONF_PATH=/opt/stack/networking-onos/etc
Q_PLUGIN_EXTRA_CONF_FILES=(conf_onos.ini)

# Services
enable_service q-svc
disable_service n-net
disable_service n-cpu
disable_service tempest
disable_service c-sch
disable_service c-api
disable_service c-vol
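With this local.conf placed in your DevStack checkout, bring the controller up as usual:

$ cd devstack
$ ./stack.sh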
No special configuration is required for the compute node. Just launch the nova-compute agent with appropriate hypervisor settings.
Here's a sample DevStack local.conf for the compute node.
[[local|localrc]]
HOST_IP=10.134.231.30        # local IP
SERVICE_HOST=162.243.x.x     # controller IP, must be reachable from your test browser for console access from Horizon
RABBIT_HOST=10.134.231.28
DATABASE_HOST=10.134.231.28
ADMIN_PASSWORD=nova
DATABASE_PASSWORD=$ADMIN_PASSWORD
RABBIT_PASSWORD=$ADMIN_PASSWORD
SERVICE_PASSWORD=$ADMIN_PASSWORD
SERVICE_TOKEN=$ADMIN_PASSWORD
DATABASE_TYPE=mysql

NOVA_VNC_ENABLED=True
VNCSERVER_PROXYCLIENT_ADDRESS=$HOST_IP
VNCSERVER_LISTEN=$HOST_IP

# Log
SCREEN_LOGDIR=/opt/stack/logs/screen

# Images
IMAGE_URLS="http://cloud-images.ubuntu.com/releases/precise/release/ubuntu-12.04-server-cloudimg-amd64.tar.gz"

LIBVIRT_TYPE=kvm

# Services
ENABLED_SERVICES=n-cpu,neutron
If your compute node is a VM, try the DevStack nested-KVM guide (http://docs.openstack.org/developer/devstack/guides/devstack-with-nested-kvm.html) first, or set LIBVIRT_TYPE=qemu instead.
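A quick way to check whether nested virtualization is actually available (assuming an Intel host; use kvm_amd on AMD):

# on the physical host: prints Y if nested KVM is enabled
$ cat /sys/module/kvm_intel/parameters/nested
# inside the compute VM: non-zero if vmx/svm CPU flags are exposed
$ egrep -c '(vmx|svm)' /proc/cpuinfo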
Set the OVSDB listening mode on each compute node. There are two ways to do this. The first takes effect immediately but is lost when ovsdb-server restarts:
$ ovs-appctl -t ovsdb-server ovsdb-server/add-remote ptcp:6640:host_ip
Or you can make the setting permanent by adding the following line to /usr/share/openvswitch/scripts/ovs-ctl, right after the set ovsdb-server "$DB_FILE" line, and then restarting the openvswitch-switch service.
set "$@" --remote=ptcp:6640
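After editing, you can verify the line is in place and then restart the service (service name as on Ubuntu; adjust for your distribution):

$ grep -n -- '--remote=ptcp:6640' /usr/share/openvswitch/scripts/ovs-ctl
$ sudo service openvswitch-switch restart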
Either way, you should now see port 6640 in the listening state:
$ netstat -ntl
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address           Foreign Address         State
tcp        0      0 0.0.0.0:22              0.0.0.0:*               LISTEN
tcp        0      0 0.0.0.0:6640            0.0.0.0:*               LISTEN
tcp6       0      0 :::22
Next, check your OVSDB and the data plane physical interface. First, make sure your OVSDB is clean:
$ sudo ovs-vsctl show
cedbbc0a-f9a4-4d30-a3ff-ef9afa813efb
    ovs_version: "2.3.0"
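If stale bridges are left over from a previous run, you can remove them before proceeding (br-int here is just an example name):

$ sudo ovs-vsctl del-br br-int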
Also make sure the physical interface used for the data plane does not have an IP address assigned:
$ sudo ip addr flush eth1
$ ip addr show eth1
3: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 04:01:7c:f8:ee:02 brd ff:ff:ff:ff:ff:ff
Add the following configuration to your ONOS network-cfg.json.
{
"apps" : {
"org.onosproject.cordvtn" : {
"cordvtn" : {
"gatewayMac" : "00:00:00:00:00:01",
"nodes" : [
{
"hostname" : "compute-01",
"ovsdbIp" : "10.55.25.244",
"ovsdbPort" : "6640",
"bridgeId" : "of:0000000000000001",
"phyPortName" : "eth0",
"localIp" : "10.134.34.222"
},
{
"hostname" : "compute-02",
"ovsdbIp" : "10.241.229.42",
"ovsdbPort" : "6640",
"bridgeId" : "of:0000000000000002",
"phyPortName" : "eth0",
"localIp" : "10.134.34.223"
}
]
}
},
"org.onosproject.openstackswitching" : {
"openstackswitching" : {
"do_not_push_flows" : "true",
"neutron_server" : "http://10.243.139.46:9696/v2.0/",
"keystone_server" : "http://10.243.139.46:5000/v2.0/",
"user_name" : "admin",
"password" : "passwd"
}
}
}
}
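Once ONOS is up (or after you push the file via the REST API shown later), you can dump the loaded configuration from the ONOS CLI to confirm the cordvtn section is present:

onos> netcfg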
Set your ONOS to activate at least the following applications:
ONOS_APPS=drivers,openflow-base,lldpprovider,dhcp,cordvtn
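If ONOS is already running, you can also activate an application at runtime from the ONOS CLI, for example:

onos> app activate org.onosproject.cordvtn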
Once OpenStack and ONOS (with the CORD VTN app) have started successfully, a few additional steps are required before using the service.
1. Check that your compute nodes are registered with the CordVtn service and that their init state is COMPLETE.
onos> cordvtn-nodes
hostname=compute-01, ovsdb=10.55.25.244:6640, br-int=of:0000000000000001, phyPort=eth0, localIp=10.134.34.222, init=COMPLETE
hostname=compute-02, ovsdb=10.241.229.42:6640, br-int=of:0000000000000002, phyPort=eth0, localIp=10.134.34.223, init=INCOMPLETE
Total 2 nodes
If the nodes listed in your network-cfg.json do not appear in the result, push network-cfg.json to ONOS with the REST API:
curl --user onos:rocks -X POST -H "Content-Type: application/json" http://onos-01:8181/onos/v1/network/configuration/ -d @network-cfg.json
If all the nodes are listed but some of them are in INCOMPLETE state, find out what the problem is with the cordvtn-node-check command shown below and fix it. Once you fix the problem, push network-cfg.json again to trigger init for all nodes (re-initializing nodes already in COMPLETE state does no harm), or use the "cordvtn-node-init" command (see the example after the check output).
onos> cordvtn-node-check compute-02
Integration bridge created/connected : OK (br-int)
VXLAN interface created : OK
Physical interface added : NO (eth0)
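For example, to re-trigger init for a single node (assuming cordvtn-node-init takes the hostname, like cordvtn-node-check above):

onos> cordvtn-node-init compute-02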
2. Assign the "localIp" from your network-cfg.json to "br-int" on each compute node.
This step has not been automated yet, so you have to do it manually. Log in to each compute node and assign its local IP to the "br-int" bridge. When you are done, you should be able to ping between compute nodes using these local IPs.
$ sudo ip addr add 10.134.34.223 dev br-int
$ sudo ip link set br-int up
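For example, from compute-01 (local IP 10.134.34.222 in the configuration above) you can verify reachability of compute-02:

$ ping -c 3 10.134.34.223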
3. Make sure all virtual switches on the compute nodes are added and available in ONOS.
onos> devices
id=of:0000000000000001, available=true, role=MASTER, type=SWITCH, mfr=Nicira, Inc., hw=Open vSwitch, sw=2.0.2, serial=None, managementAddress=compute.01.ip.addr, protocol=OF_13, channelId=compute.01.ip.addr:39031
id=of:0000000000000002, available=true, role=STANDBY, type=SWITCH, mfr=Nicira, Inc., hw=Open vSwitch, sw=2.0.2, serial=None, managementAddress=compute.02.ip.addr, protocol=OF_13, channelId=compute.02.ip.addr:44920