...
- An ONOS cluster installed and running (see ONOS documentation to get to this point)
- An OpenStack service installed and running (detailed OpenStack configurations are described here)
- (TODO: An XOS installed and running)
Architecture
The high level architecture of the system is shown in the following figure.
...
| Note |
|---|
If your compute node is a VM, try http://docs.openstack.org/developer/devstack/guides/devstack-with-nested-kvm.html first, or set |
Other Settings
...
Set the OVSDB listening mode on your compute nodes. There are two ways to do this.
| Code Block |
|---|
$ ovs-appctl -t ovsdb-server ovsdb-server/add-remote ptcp:6640:host_ip |
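Another common way (a sketch, assuming ovs-vsctl is available on the compute node and that a manager target is acceptable in your deployment) is to set an OVS manager, which also makes ovsdb-server listen passively on the given TCP port and is stored in the OVS database, so it survives a restart of ovsdb-server:
| Code Block |
|---|
$ sudo ovs-vsctl set-manager ptcp:6640:host_ip |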
...
| Code Block |
|---|
$ sudo ip addr flush dev eth1
$ ip addr show eth1
3: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
link/ether 04:01:7c:f8:ee:02 brd ff:ff:ff:ff:ff:ff |
ONOS Settings
Add the following configurations to your ONOS network-cfg.json.
...
| Code Block |
|---|
ONOS_APPS=drivers,openflow-base,lldpprovider,dhcp,cordvtn |
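If ONOS is already running, the applications can also be activated from the ONOS CLI. The fully qualified application ID below is the usual one for the CORD VTN app, but treat it as an assumption and adjust to your ONOS version; `apps -s -a` then lists the active applications so you can confirm everything is loaded:
| Code Block |
|---|
onos> app activate org.onosproject.cordvtn
onos> apps -s -a |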
Additional Manual
...
Steps
Once OpenStack and ONOS with the CORD VTN app have started successfully, you should perform a few additional steps before using the service.
...
| Code Block |
|---|
onos> devices
id=of:0000000000000001, available=true, role=MASTER, type=SWITCH, mfr=Nicira, Inc., hw=Open vSwitch, sw=2.0.2, serial=None, managementAddress=compute.01.ip.addr, protocol=OF_13, channelId=compute.01.ip.addr:39031
id=of:0000000000000002, available=true, role=STANDBY, type=SWITCH, mfr=Nicira, Inc., hw=Open vSwitch, sw=2.0.2, serial=None, managementAddress=compute.02.ip.addr, protocol=OF_13, channelId=compute.02.ip.addr:44920 |
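You can also check that each integration bridge reports its ports to ONOS. The device ID below is the one from the example output above; use the IDs shown in your own `devices` output:
| Code Block |
|---|
onos> ports of:0000000000000001 |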
SSH to VM/Internet Access from VM (only for test)
If you want to access a VM through SSH, or access the Internet from a VM, without a fabric controller and vRouter, you need to set up the following.
Basically, this setup mimics the fabric switch and vRouter inside a compute node; that is, the "fabric" bridge corresponds to the fabric switch and the Linux routing table corresponds to the vRouter.
You'll need at least two physical interfaces for this setup.
[TODO: add compute node network interfaces diagram]
First, create a bridge named "fabric" (the name doesn't have to be "fabric").
| Code Block |
|---|
$ sudo brctl addbr fabric |
And create a veth pair.
| Code Block |
|---|
$ sudo ip link add veth0 type veth peer name veth1 |
Now, add veth1 and the physical interface used for the data plane to the fabric bridge.
| Code Block |
|---|
$ sudo brctl addif fabric veth1
$ sudo brctl addif fabric eth1
$ sudo brctl show
bridge name bridge id STP enabled interfaces
fabric 8000.000000000001 no eth1
veth1 |
Check the physical interface's MAC address and set that address on br-int. br-int also has to have the "localIP" address specified in network-cfg.json.
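For example, you can read the MAC address of the data plane interface (eth1 in this setup) with:
| Code Block |
|---|
$ ip link show eth1 |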
| Code Block |
|---|
$ sudo ip link set address 04:01:6b:50:75:02 dev br-int
$ sudo ip addr add 10.134.34.222/16 dev br-int |
Set the fabric bridge MAC address to the virtual gateway MAC address. Since you don't have a vRouter, you can pick any MAC address; just put the same address in the "gatewayMac" field of network-cfg.json described in the "ONOS Settings" part.
| Code Block |
|---|
$ sudo ip link set address 00:00:00:00:00:01 dev fabric |
Now, add routes for your virtual network IP ranges and the NAT rules.
| Code Block |
|---|
$ sudo route add -net 192.168.0.0/16 dev fabric
$ sudo netstat -rn
Kernel IP routing table
Destination Gateway Genmask Flags MSS Window irtt Iface
0.0.0.0 45.55.0.1 0.0.0.0 UG 0 0 0 eth0
45.55.0.0 0.0.0.0 255.255.224.0 U 0 0 0 eth0
192.168.0.0 0.0.0.0 255.255.0.0 U 0 0 0 fabric
$ sudo iptables -A FORWARD -d 192.168.0.0/16 -j ACCEPT
$ sudo iptables -A FORWARD -s 192.168.0.0/16 -j ACCEPT
$ sudo iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE |
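Note that these routes and iptables rules do not survive a reboot. If you want to keep the iptables rules across reboots, one option (assuming the iptables-persistent package, or an equivalent mechanism, is installed on the compute node) is to save them:
| Code Block |
|---|
$ sudo sh -c 'iptables-save > /etc/iptables/rules.v4' |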
Now everything is ready. Make sure all the interfaces are up, and ping the "localIP" of the other compute nodes to verify connectivity.
| Code Block |
|---|
$ sudo ip link set br-int up
$ sudo ip link set veth0 up
$ sudo ip link set veth1 up
$ sudo ip link set fabric up |
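For example, from this compute node you could ping the "localIP" of another compute node. The address below is a hypothetical peer; use the "localIP" values from your own network-cfg.json:
| Code Block |
|---|
$ ping -c 3 10.134.34.223 |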
How To Test
Without XOS
1. Test that VMs in the same network can talk to each other
You can test virtual networks and service chaining without XOS. First, create a network through OpenStack Horizon or the OpenStack CLI.
The network name should include one of the following five network types (you may choose any of them for now):
- private
- private_direct
- private_indirect
- public_direct
- public_indirect
| Code Block |
|---|
$ neutron net-create net-A-private |
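The network also needs a subnet before VMs can get an IP address. The CIDR below is only an example, chosen to fall inside the 192.168.0.0/16 route added in the previous section:
| Code Block |
|---|
$ neutron subnet-create net-A-private 192.168.0.0/24 |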
Then create VMs with the network.
| Code Block |
|---|
$ nova boot --image f04ed5f1-3784-4f80-aee0-6bf83912c4d0 --flavor 1 --nic net-id=aaaf70a4-f2b2-488e-bffe-63654f7b8a82 net-A-vm-01 |
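The image UUID and network ID in the command above are examples from a particular deployment; you can look up your own IDs with the usual OpenStack CLI commands:
| Code Block |
|---|
$ nova image-list
$ neutron net-list |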
You can access a VM through the web console, through virsh console with some tricks (https://github.com/hyunsun/documentations/wiki/Access-OpenStack-VM-through-virsh-console), or, if you did the "SSH to VM/Internet Access from VM" part, by SSH from the compute node.
Now, check that the VMs can ping each other.
2. Test that VMs in different networks cannot talk to each other
Create another network, for example net-B-private, and create another VM on that network. Now, check that this VM cannot ping the VMs in net-A-private.
3. Test service chaining
Enable ip_forward in your VMs.
| Code Block |
|---|
$ sudo sysctl net.ipv4.ip_forward=1 |
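You can confirm inside the VM that the setting took effect:
| Code Block |
|---|
$ sysctl net.ipv4.ip_forward
net.ipv4.ip_forward = 1 |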
Create service dependency with the following REST API.
| Code Block |
|---|
$ curl -u onos:rocks -H "Content-Type:application/json" http://[onos_ip]:8181/onos/cordvtn/service-dependency/[net-A-UUID]/[net-B-UUID] |
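To undo a dependency while testing, the app's REST API can typically remove it with a DELETE on the same resource. This is an assumption about the API, so check the cordvtn REST documentation for your ONOS version:
| Code Block |
|---|
$ curl -X DELETE -u onos:rocks http://[onos_ip]:8181/onos/cordvtn/service-dependency/[net-A-UUID]/[net-B-UUID] |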
Now, ping from a net-A-private VM to the gateway IP address of net-B. There will be no reply, but if you run tcpdump in the net-B-private VMs, you can see that one of the VMs in net-B-private receives the packets from the net-A-private VM. Check the following video to see how service chaining works (the service chaining demo starts at 42:00).
[Widget Connector: embedded demo video]
With XOS
[TODO]