This page explains how to set up and use the CORD-VTN service.
You will need:
- An ONOS cluster installed and running (see ONOS documentation to get to this point)
- An OpenStack service installed and running (the OpenStack configuration details relevant to CORD-VTN are described below)
- (TODO: An XOS installed and running)
Architecture
The high-level architecture of the system is shown in the following figure.
[add figure]
OpenStack Settings
There are many guides on the Internet for building OpenStack in various ways. The instructions here cover only the settings that matter for the CORD VTN service.
You will need:
- Controller cluster: at least one machine with 4GB of RAM, running all OpenStack services including Nova, Neutron, Glance, and Keystone.
- Compute nodes: at least one machine with 2GB of RAM, running only the nova-compute agent.
Controller Node
Install networking-onos (the Neutron ML2 plugin for ONOS) first.
$ git clone https://github.com/openstack/networking-onos.git
$ cd networking-onos
$ sudo python setup.py install
Next, specify the ONOS access information in conf_onos.ini. You may want to copy this file to /etc/neutron/plugins/ml2/, where the other Neutron configuration files are.
# Configuration options for ONOS ML2 Mechanism driver
[onos]
# (StrOpt) ONOS ReST interface URL. This is a mandatory field.
url_path = http://onos.instance.ip.addr:8181/onos/openstackswitching
# (StrOpt) Username for authentication. This is a mandatory field.
username = onos
# (StrOpt) Password for authentication. This is a mandatory field.
password = rocks
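For example, if you cloned networking-onos under /opt/stack, as the DevStack examples below assume, the copy would look like this:

$ sudo cp /opt/stack/networking-onos/etc/conf_onos.ini /etc/neutron/plugins/ml2/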
Set Neutron to use the ONOS ML2 plugin. In /etc/neutron/neutron.conf:
core_plugin = neutron.plugins.ml2.plugin.Ml2Plugin
And in /etc/neutron/plugins/ml2/ml2_conf.ini:

[ml2]
tenant_network_types = vxlan
type_drivers = vxlan
mechanism_drivers = onos_ml2

[ml2_type_vxlan]
vni_ranges = 1001:2000
Set Nova to use the config drive for the metadata service, so that you don't need to run the Neutron metadata agent. In /etc/nova/nova.conf:
[DEFAULT]
force_config_drive = always
All other configuration is up to your environment. Don't forget to specify conf_onos.ini when you launch the Neutron service:
/usr/bin/python /usr/local/bin/neutron-server --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugins/ml2/ml2_conf.ini --config-file /opt/stack/networking-onos/etc/conf_onos.ini
Here's a sample DevStack local.conf for the controller node.
[[local|localrc]]
HOST_IP=10.134.231.28
SERVICE_HOST=10.134.231.28
RABBIT_HOST=10.134.231.28
DATABASE_HOST=10.134.231.28
Q_HOST=10.134.231.28
ADMIN_PASSWORD=nova
DATABASE_PASSWORD=$ADMIN_PASSWORD
RABBIT_PASSWORD=$ADMIN_PASSWORD
SERVICE_PASSWORD=$ADMIN_PASSWORD
SERVICE_TOKEN=$ADMIN_PASSWORD
DATABASE_TYPE=mysql

# Log
SCREEN_LOGDIR=/opt/stack/logs/screen

# Images
IMAGE_URLS="http://cloud-images.ubuntu.com/releases/precise/release/ubuntu-12.04-server-cloudimg-amd64.tar.gz"

FORCE_CONFIG_DRIVE=always
NEUTRON_CREATE_INITIAL_NETWORKS=False
Q_ML2_PLUGIN_MECHANISM_DRIVERS=onos_ml2
Q_PLUGIN_EXTRA_CONF_PATH=/opt/stack/networking-onos/etc
Q_PLUGIN_EXTRA_CONF_FILES=(conf_onos.ini)

# Services
enable_service q-svc
disable_service n-net
disable_service n-cpu
disable_service tempest
disable_service c-sch
disable_service c-api
disable_service c-vol
Compute Node
No special configuration is required for the compute nodes. Just launch the nova-compute agent with the appropriate hypervisor settings.
Here's a sample DevStack local.conf for a compute node.
[[local|localrc]]
HOST_IP=10.134.231.30       # local IP
SERVICE_HOST=162.243.x.x    # controller IP, must be reachable from your test browser for console access from Horizon
RABBIT_HOST=10.134.231.28
DATABASE_HOST=10.134.231.28
ADMIN_PASSWORD=nova
DATABASE_PASSWORD=$ADMIN_PASSWORD
RABBIT_PASSWORD=$ADMIN_PASSWORD
SERVICE_PASSWORD=$ADMIN_PASSWORD
SERVICE_TOKEN=$ADMIN_PASSWORD
DATABASE_TYPE=mysql

NOVA_VNC_ENABLED=True
VNCSERVER_PROXYCLIENT_ADDRESS=$HOST_IP
VNCSERVER_LISTEN=$HOST_IP

# Log
SCREEN_LOGDIR=/opt/stack/logs/screen

# Images
IMAGE_URLS="http://cloud-images.ubuntu.com/releases/precise/release/ubuntu-12.04-server-cloudimg-amd64.tar.gz"

LIBVIRT_TYPE=kvm

# Services
ENABLED_SERVICES=n-cpu,neutron
If your compute node is a VM, try nested KVM first (see http://docs.openstack.org/developer/devstack/guides/devstack-with-nested-kvm.html), or set LIBVIRT_TYPE=qemu. Nested KVM is much faster than QEMU, so use it if possible.
Other Settings
Set the OVSDB listening mode on your compute nodes. There are two ways to do this. The first is to run the following command, which takes effect immediately but does not persist across restarts (replace host_ip with the compute node's IP address):
$ ovs-appctl -t ovsdb-server ovsdb-server/add-remote ptcp:6640:host_ip
Or you can make the setting permanent by adding the following line to /usr/share/openvswitch/scripts/ovs-ctl, right after the set ovsdb-server "$DB_FILE" line, and then restarting the openvswitch-switch service.

set "$@" --remote=ptcp:6640
Either way, you should now see port 6640 in the listening state.
$ netstat -ntl
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address           Foreign Address         State
tcp        0      0 0.0.0.0:22              0.0.0.0:*               LISTEN
tcp        0      0 0.0.0.0:6640            0.0.0.0:*               LISTEN
tcp6       0      0 :::22                   :::*                    LISTEN
Next, check your OVSDB and the data plane physical interface. First, make sure your OVSDB is clean:
$ sudo ovs-vsctl show
cedbbc0a-f9a4-4d30-a3ff-ef9afa813efb
    ovs_version: "2.3.0"
Then make sure the physical interface used for the data plane does not have an IP address assigned:
$ sudo ip addr flush eth1
$ ip addr show eth1
3: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 04:01:7c:f8:ee:02 brd ff:ff:ff:ff:ff:ff
ONOS Settings
Add the following configuration to your ONOS network-cfg.json.
{
    "apps" : {
        "org.onosproject.cordvtn" : {
            "cordvtn" : {
                "gatewayMac" : "00:00:00:00:00:01",
                "nodes" : [
                    {
                        "hostname" : "compute-01",
                        "ovsdbIp" : "10.55.25.244",
                        "ovsdbPort" : "6640",
                        "bridgeId" : "of:0000000000000001",
                        "phyPortName" : "eth0",
                        "localIp" : "10.134.34.222"
                    },
                    {
                        "hostname" : "compute-02",
                        "ovsdbIp" : "10.241.229.42",
                        "ovsdbPort" : "6640",
                        "bridgeId" : "of:0000000000000002",
                        "phyPortName" : "eth0",
                        "localIp" : "10.134.34.223"
                    }
                ]
            }
        },
        "org.onosproject.openstackswitching" : {
            "openstackswitching" : {
                "do_not_push_flows" : "true",
                "neutron_server" : "http://10.243.139.46:9696/v2.0/",
                "keystone_server" : "http://10.243.139.46:5000/v2.0/",
                "user_name" : "admin",
                "password" : "passwd"
            }
        }
    }
}
Set your ONOS to activate at least the following applications:
ONOS_APPS=drivers,openflow-base,lldpprovider,dhcp,cordvtn
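If ONOS is already running, you can also activate the applications from the ONOS CLI instead; a quick sketch, assuming the standard org.onosproject.* application IDs:

onos> app activate org.onosproject.dhcp
onos> app activate org.onosproject.cordvtn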
Additional Manual Steps
Once OpenStack and ONOS (with the CORD VTN app) have started successfully, a few additional steps are required before using the service.
1. Check that your compute nodes are registered to the CordVtn service and that their init state is COMPLETE.
onos> cordvtn-nodes
hostname=compute-01, ovsdb=10.55.25.244:6640, br-int=of:0000000000000001, phyPort=eth0, localIp=10.134.34.222, init=COMPLETE
hostname=compute-02, ovsdb=10.241.229.42:6640, br-int=of:0000000000000002, phyPort=eth0, localIp=10.134.34.223, init=INCOMPLETE
Total 2 nodes
If the nodes listed in your network-cfg.json do not show up in the result, push network-cfg.json to ONOS via the REST API:
curl --user onos:rocks -X POST -H "Content-Type: application/json" http://onos-01:8181/onos/v1/network/configuration/ -d @network-cfg.json
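To verify that ONOS accepted the configuration, you can read it back from the same endpoint with a GET request:

curl --user onos:rocks http://onos-01:8181/onos/v1/network/configuration/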
If all the nodes are listed but some of them are in INCOMPLETE state, use the cordvtn-node-check command (shown below) to find out what the problem is and fix it.
Once the problem is fixed, push network-cfg.json again to trigger init on all nodes (re-initializing nodes already in COMPLETE state does no harm), or use the "cordvtn-node-init" command.
onos> cordvtn-node-check compute-02
Integration bridge created/connected : OK (br-int)
VXLAN interface created : OK
Physical interface added : NO (eth0)
2. Assign the "localIp" from your network-cfg.json to "br-int" on each compute node.
This step has not been automated yet, so you should do it manually. Log in to each compute node and assign its local IP to the "br-int" bridge. When you are done, you should be able to ping between the compute nodes using these local IPs.
$ sudo ip addr add 10.134.34.223 dev br-int
$ sudo ip link set br-int up
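For example, from compute-01 (using the sample localIp addresses from the network-cfg.json above), you can verify reachability to compute-02:

$ ping -c 3 10.134.34.223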
3. Make sure all virtual switches on the compute nodes are added and available in ONOS.
onos> devices
id=of:0000000000000001, available=true, role=MASTER, type=SWITCH, mfr=Nicira, Inc., hw=Open vSwitch, sw=2.0.2, serial=None, managementAddress=compute.01.ip.addr, protocol=OF_13, channelId=compute.01.ip.addr:39031
id=of:0000000000000002, available=true, role=STANDBY, type=SWITCH, mfr=Nicira, Inc., hw=Open vSwitch, sw=2.0.2, serial=None, managementAddress=compute.02.ip.addr, protocol=OF_13, channelId=compute.02.ip.addr:44920
SSH to VM/Internet Access from VM (for testing only)
If you want to SSH into a VM, or access the Internet from a VM, without a fabric controller and vRouter, you need to set up the following.
Basically, this setup mimics the fabric switch and vRouter inside a compute node: the "fabric" bridge corresponds to the fabric switch, and the Linux routing table corresponds to the vRouter.
You'll need at least two physical interfaces for this setup.
[TODO: add compute node network interfaces diagram]
First, create a bridge named "fabric" (the name doesn't have to be "fabric"):
$ sudo brctl addbr fabric
Then create a veth pair:
$ sudo ip link add veth0 type veth peer name veth1
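You can confirm the pair was created before attaching it to the bridge:

$ ip link show veth0
$ ip link show veth1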
Now add veth1 and the physical interface used for the data plane to the fabric bridge:
$ sudo brctl addif fabric veth1
$ sudo brctl addif fabric eth1
$ sudo brctl show
bridge name     bridge id               STP enabled     interfaces
fabric          8000.000000000001       no              eth1
                                                        veth1
Check the MAC address of the physical interface and set it on br-int. br-int also has to have the "localIp" address specified in network-cfg.json:
$ sudo ip link set address 04:01:6b:50:75:02 dev br-int
$ sudo ip addr add 10.134.34.222/16 dev br-int
Set the fabric bridge MAC address to the virtual gateway MAC address. Since you don't have a vRouter, you can pick any MAC address; just put the same address in the "gatewayMac" field of network-cfg.json, described in the "ONOS Settings" section:
$ sudo ip link set address 00:00:00:00:00:01 dev fabric
Now add routes for your virtual network IP ranges, along with NAT rules:
$ sudo route add -net 192.168.0.0/16 dev fabric
$ sudo netstat -rn
Kernel IP routing table
Destination     Gateway         Genmask         Flags   MSS Window  irtt Iface
0.0.0.0         45.55.0.1       0.0.0.0         UG        0 0          0 eth0
45.55.0.0       0.0.0.0         255.255.224.0   U         0 0          0 eth0
192.168.0.0     0.0.0.0         255.255.0.0     U         0 0          0 fabric

$ sudo iptables -A FORWARD -d 192.168.0.0/16 -j ACCEPT
$ sudo iptables -A FORWARD -s 192.168.0.0/16 -j ACCEPT
$ sudo iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE
That's it. Make sure all the interfaces are up, and ping the "localIp" of the other compute nodes:
$ sudo ip link set br-int up
$ sudo ip link set veth0 up
$ sudo ip link set veth1 up
$ sudo ip link set fabric up
How To Test
Without XOS
1. Test that VMs in the same network can talk to each other
You can test virtual networks and service chaining without XOS. First, create a network through OpenStack Horizon or the OpenStack CLI.
The network name should include one of the following five network types (you can choose any of them for now):
- private
- private_direct
- private_indirect
- public_direct
- public_indirect
$ neutron net-create net-A-private
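The UUID assigned to the new network is the net-id you will need in the next step; you can look it up with:

$ neutron net-list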
Then create VMs on the network:
$ nova boot --image f04ed5f1-3784-4f80-aee0-6bf83912c4d0 --flavor 1 --nic net-id=aaaf70a4-f2b2-488e-bffe-63654f7b8a82 net-A-vm-01
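After the boot request, you can check that the VM reaches the ACTIVE state:

$ nova list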
You can access a VM through the web console, through the virsh console with some tricks (https://github.com/hyunsun/documentations/wiki/Access-OpenStack-VM-through-virsh-console), or, if you completed the "SSH to VM/Internet Access from VM" section, by SSH from the compute node.
Now test that the VMs can ping each other.
2. Test that VMs in different networks cannot talk to each other
Create another network, for example net-B-private, and boot another VM on it, as in the sketch below. Then verify that this VM cannot ping the VMs in net-A-private.
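A minimal sketch mirroring the earlier commands (the image UUID and net-id are placeholders; substitute the values from your environment):

$ neutron net-create net-B-private
$ nova boot --image [image-UUID] --flavor 1 --nic net-id=[net-B-UUID] net-B-vm-01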
3. Test service chaining
Enable ip_forward in your VMs:
$ sudo sysctl net.ipv4.ip_forward=1
Create a service dependency with the following REST API call:
$ curl -u onos:rocks -H "Content-Type:application/json" http://[onos_ip]:8181/onos/cordvtn/service-dependency/[net-A-UUID]/[net-B-UUID]
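For example, assuming the ONOS host onos-01 from earlier and hypothetical network UUIDs (look up the real ones with neutron net-list):

$ curl -u onos:rocks -H "Content-Type:application/json" http://onos-01:8181/onos/cordvtn/service-dependency/aaaf70a4-f2b2-488e-bffe-63654f7b8a82/bbbf70a4-f2b2-488e-bffe-63654f7b8a82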
Now ping from a net-A-private VM to the gateway IP address of net-B. There will be no reply, but if you run tcpdump in the net-B-private VMs, you can see that one of them receives the packets from the net-A-private VM. Check the following video to see how service chaining works (the service chaining demo starts at 42:00).
With XOS
[TODO]