...
Introduction
You will need:
- An ONOS cluster installed and running: one for SONA, and the others for vRouter. ONOS for vRouter runs on every gateway node.
- An OpenStack service installed and running ("stable/ocata" and "stable/pike" versions are used here)
Note that these instructions assume you're familiar with ONOS and OpenStack; they do not cover how to install or troubleshoot those services. If you aren't, please find a guide from ONOS (http://wiki.onosproject.org) and OpenStack (http://docs.openstack.org), respectively.
(Figure: example deployment with three networks and an external router)
The example deployment depicted in the above figure uses three networks with an external router.
- Management network: used for ONOS to control virtual switches, and for OpenStack to communicate with the nova-compute agent running on the compute nodes
- Data network: used for East-West traffic via VXLAN, GRE or GENEVE tunnel
- External network: used for North-South traffic; normally only the gateway nodes have access to this network
All networks can share a network interface in case your test machine does not have enough interfaces. You can also emulate the external router. The figure below shows an example test environment used in the rest of this guide, with an emulated external router and two network interfaces: one shared between the management and external networks, and the other for the data network. In this test environment, there is no physical external network or external router; they are emulated with a quagga-router container.
(Figure: example test environment)
Prerequisite
1. Install OVS on all nodes, including compute and gateway nodes. Make sure your OVS version is 2.3.0 or later (2.5.0 or later is recommended). Refer to this guide for updating OVS (don't forget to change the version in the guide).
Note |
---|
In ONOS version 1.11 (Loon) or later, a stateful NAT feature is included, which is disabled by default. If you want to use the stateful NAT feature, you have to install OVS 2.6 or later. |
...
...
...
2. Set OVSDB listening mode on your compute nodes. There are two ways. Note that "compute_node_ip" in the command below should be an address accessible from the ONOS instance.
Code Block |
---|
|
$ ovs-appctl -t ovsdb-server ovsdb-server/add-remote ptcp:6640:[compute_node_ip] |
If you want to make the setting permanent, add the following line to /usr/share/openvswitch/scripts/ovs-ctl, right after the "set ovsdb-server "$DB_FILE"" line. You'll need to restart the openvswitch-switch service after that.
Code Block |
---|
set "$@" --remote=ptcp:6640 |
Either way, you should now be able to see port 6640 in listening state.
Code Block |
---|
|
$ netstat -ntl
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address Foreign Address State
tcp 0 0 0.0.0.0:22 0.0.0.0:* LISTEN
tcp 0 0 0.0.0.0:6640 0.0.0.0:* LISTEN
tcp6 0 0 :::22 |
3. Check your OVS state. It is recommended to clean up all stale bridges used in OpenStack, including br-int, br-tun, and br-ex, if there are any. Note that SONA will add the required bridges via OVSDB once it is up.
Code Block |
---|
|
$ sudo ovs-vsctl show
cedbbc0a-f9a4-4d30-a3ff-ef9afa813efb
ovs_version: "2.8.2" |
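The cleanup mentioned above can be sketched as follows; it assumes the stale bridges carry the usual OpenStack names (br-int, br-tun, br-ex), and --if-exists makes each delete a no-op when the bridge is absent.

```shell
# Remove stale OpenStack bridges so SONA can recreate what it needs via OVSDB.
for br in br-int br-tun br-ex; do
    sudo ovs-vsctl --if-exists del-br "$br"
done
```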
...
OpenStack Setup
How to deploy OpenStack is out of the scope of this documentation. Here, it only describes the configurations related to using SONA. All other settings are completely up to your environment.
Controller Node
Note |
---|
The guide is based on OpenStack Ocata/Pike version. If you want to install Newton version of OpenStack, please refer to here <TBA>. |
1. The first step is installing networking-onos. Please create the /opt/stack directory and install networking-onos there as follows.
Code Block |
---|
/opt/stack$ git clone --branch [stable/ocata or stable/pike] https://github.com/openstack/networking-onos.git |
Next, please create the file /opt/stack/networking-onos/etc/conf_onos.ini using the following template. Set IP_ADDRESS_OF_ONOS to the IP address of the host running the ONOS controller.
Code Block |
---|
language | bash |
---|
title | conf_onos.ini |
---|
|
# Configuration options for ONOS ML2 Mechanism driver
[onos]
# (StrOpt) ONOS ReST interface URL. This is a mandatory field.
url_path = http://IP_ADDRESS_OF_ONOS:8181/onos/openstacknetworking
# (StrOpt) Username for authentication. This is a mandatory field.
username = onos
# (StrOpt) Password for authentication. This is a mandatory field.
password = rocks |
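Before moving on, you can check that the URL and credentials in conf_onos.ini actually reach ONOS. This is a sketch reusing the example values above (replace IP_ADDRESS_OF_ONOS with your controller address); any HTTP status in the response proves the server and credentials are reachable, while a connection error means the address is wrong.

```shell
# Prints the HTTP status code returned by the openstacknetworking REST root.
curl -s -o /dev/null -w "%{http_code}\n" --user onos:rocks \
    http://IP_ADDRESS_OF_ONOS:8181/onos/openstacknetworking/
```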
2. The next step is installing and running OpenStack services. For DevStack users, use the following sample DevStack local.conf to build OpenStack controller node. Make sure your DevStack branch is consistent with the OpenStack branch, "stable/ocata" for example.
Code Block |
---|
language | bash |
---|
title | clone DevStack |
---|
|
$ git clone -b [stable/ocata or stable/pike] https://git.openstack.org/openstack-dev/devstack |
The following is an example local.conf. Please set the IP addresses correctly, and configure the network settings as shown below. (The branches can be changed to stable/queens or stable/rocky if you wish.)
Code Block |
---|
title | local.conf of Controller Node |
---|
|
[[local|localrc]]
HOST_IP=10.134.231.28
SERVICE_HOST=10.134.231.28
RABBIT_HOST=10.134.231.28
DATABASE_HOST=10.134.231.28
Q_HOST=10.134.231.28
ADMIN_PASSWORD=nova
DATABASE_PASSWORD=$ADMIN_PASSWORD
RABBIT_PASSWORD=$ADMIN_PASSWORD
SERVICE_PASSWORD=$ADMIN_PASSWORD
SERVICE_TOKEN=$ADMIN_PASSWORD
DATABASE_TYPE=mysql
# Log
USE_SCREEN=True
SCREEN_LOGDIR=/opt/stack/logs/screen
LOGFILE=/opt/stack/logs/xstack.sh.log
LOGDAYS=1
# Force config drive
FORCE_CONFIG_DRIVE=True
# Networks
Q_ML2_TENANT_NETWORK_TYPE=vxlan
Q_ML2_PLUGIN_MECHANISM_DRIVERS=onos_ml2
Q_ML2_PLUGIN_TYPE_DRIVERS=flat,vlan,vxlan
ML2_L3_PLUGIN=onos_router
NEUTRON_CREATE_INITIAL_NETWORKS=False
enable_plugin networking-onos https://github.com/openstack/networking-onos.git stable/pike
ONOS_MODE=controller_only
# Services
ENABLED_SERVICES=key,nova,n-api,n-cond,n-sch,n-novnc,n-cauth,placement-api,g-api,g-reg,q-svc,horizon,rabbit,mysql
# Branches
GLANCE_BRANCH=stable/pike
HORIZON_BRANCH=stable/pike
KEYSTONE_BRANCH=stable/pike
NEUTRON_BRANCH=stable/pike
NOVA_BRANCH=stable/pike |
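With local.conf in place, bringing the node up is the usual DevStack flow; stack.sh is DevStack's standard entry point, and the paths below assume you cloned into ./devstack and keep your local.conf in your home directory (both are assumptions, adjust to taste).

```shell
cd devstack
cp ~/local.conf .   # place local.conf next to stack.sh (source path is an assumption)
./stack.sh
```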
If you use another deployment tool or build OpenStack manually, refer to the following Nova and Neutron configurations. You may want to copy conf_onos.ini to /etc/neutron/plugins/ml2/, where the other Neutron configuration files are.
Code Block |
---|
title | /etc/neutron/neutron.conf |
---|
|
core_plugin = ml2
service_plugins = onos_router
dhcp_agent_notification = False |
Code Block |
---|
title | /etc/neutron/plugins/ml2/ml2_conf.ini |
---|
|
[ml2]
tenant_network_types = vxlan
type_drivers = flat,vlan,vxlan
mechanism_drivers = onos_ml2
[ml2_type_flat]
flat_networks = public1, public2 #Whatever physical networks you want to create
[securitygroup]
enable_security_group = True |
Set Nova to use the config drive for the metadata service, so that we don't need to run the Neutron metadata-agent. And of course, set Neutron as the network service.
Code Block |
---|
|
[DEFAULT]
force_config_drive = True
network_api_class = nova.network.neutronv2.api.API
security_group_api = neutron
[neutron]
url = http://10.134.231.28:9696
auth_strategy = keystone
admin_auth_url = http://10.134.231.28:35357/v2.0
admin_tenant_name = service
admin_username = neutron
admin_password = [admin passwd] |
Compute node
No special configurations are required for compute node other than setting network api to Neutron. For DevStack users, here's a sample DevStack local.conf.
Code Block |
---|
title | local.conf for Compute Node |
---|
|
[[local|localrc]]
HOST_IP=10.134.231.30
SERVICE_HOST=10.134.231.28
RABBIT_HOST=10.134.231.28
DATABASE_HOST=10.134.231.28
ADMIN_PASSWORD=nova
DATABASE_PASSWORD=$ADMIN_PASSWORD
RABBIT_PASSWORD=$ADMIN_PASSWORD
SERVICE_PASSWORD=$ADMIN_PASSWORD
SERVICE_TOKEN=$ADMIN_PASSWORD
DATABASE_TYPE=mysql
NOVA_VNC_ENABLED=True
VNCSERVER_PROXYCLIENT_ADDRESS=$HOST_IP
VNCSERVER_LISTEN=$HOST_IP
# Force config drive
FORCE_CONFIG_DRIVE=True
LIBVIRT_TYPE=kvm # should be qemu if your compute node is a vm
# Log
USE_SCREEN=True
SCREEN_LOGDIR=/opt/stack/logs/screen
LOGFILE=/opt/stack/logs/xstack.sh.log
LOGDAYS=1
# Services
ENABLED_SERVICES=n-cpu,placement-client,neutron
enable_plugin networking-onos https://github.com/openstack/networking-onos.git stable/pike
ONOS_MODE=compute
# Branches
NOVA_BRANCH=stable/pike
KEYSTONE_BRANCH=stable/pike
NEUTRON_BRANCH=stable/pike |
Note |
---|
If your compute node is a VM, try nested KVM (http://docs.openstack.org/developer/devstack/guides/devstack-with-nested-kvm.html) first, or set LIBVIRT_TYPE=qemu. Nested KVM is much faster than qemu, so use it if possible. |
For manual setup, set Nova to use the config drive for the metadata service, so that we don't need to run the Neutron metadata-agent. And of course, set Nova to use Neutron as the network API.
Code Block |
---|
|
[DEFAULT]
force_config_drive = True
network_api_class = nova.network.neutronv2.api.API
security_group_api = neutron
[neutron]
url = http://10.134.231.28:9696
auth_strategy = keystone
admin_auth_url = http://10.134.231.28:35357/v2.0
admin_tenant_name = service
admin_username = neutron
admin_password = [admin passwd] |
Don't forget to specify conf_onos.ini when you start Neutron service.
Code Block |
---|
/usr/bin/python /usr/local/bin/neutron-server --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugins/ml2/ml2_conf.ini --config-file /opt/stack/networking-onos/etc/conf_onos.ini |
Post Installation Setup
After installing OpenStack completely, please run the following commands on the OpenStack controller node. They are required by the OpenStack Ocata version, NOT because of SONA. Please replace OPENSTACK_CONTROLLER_HOST_IP with the correct IP address.
Code Block |
---|
$ nova-manage cell_v2 map_cell0 --database_connection 'mysql+pymysql://root:nova@OPENSTACK_CONTROLLER_HOST_IP/nova_cell0?charset=utf8'
$ nova-manage cell_v2 simple_cell_setup --transport-url rabbit://stackrabbit:nova@OPENSTACK_CONTROLLER_HOST_IP:5672/
$ nova-manage cell_v2 discover_hosts |
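You can confirm the cell mapping took effect; list_cells belongs to the same nova-manage cell_v2 command family used above and should show cell0 plus the main cell after simple_cell_setup.

```shell
# Expect cell0 and the default cell (cell1) in the listing.
nova-manage cell_v2 list_cells
```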
Gateway node
No OpenStack service is required for a gateway node.
ONOS-SONA Setup
1. Refer to SONA Network Configuration Guide and write a network configuration file, typically named network-cfg.json. Place the configuration file under tools/package/config/, build, create package, and then install ONOS.
Note |
---|
Note that the following tutorial applies only to building the SONA applications with the Buck tool, which works from ONOS 1.13.0 onward. We therefore strongly recommend using ONOS 1.13.0 or later if you want to build SONA with Buck. If for some reason you have to use ONOS 1.12.0 or earlier, please build ONOS with Maven. |
Code Block |
---|
# SONA cluster (1-node)
export OC1=onos-01
export ONOS_APPS="drivers,openflow-base,openstacknetworking" |
In case you are using a cell, here's an example cell file for a 3-node cluster.
Code Block |
---|
export OC1=172.27.0.7
export OC2=172.27.0.8
export OC3=172.27.0.10
export ONOS_APPS="drivers,openflow-base,openstacknetworking" |
Code Block |
---|
onos$ ob
onos$ op
onos$ stc setup
onos$ curl --user onos:rocks -X POST -H "Content-Type: application/json" http://ONOS_IP:8181/onos/openstacknode/configure -d @network-cfg.json |
2. Check all applications are activated successfully.
Code Block |
---|
onos> apps -a -s
* 9 org.onosproject.ovsdb-base 1.13.0.SNAPSHOT OVSDB Provider
* 13 org.onosproject.optical-model 1.13.0.SNAPSHOT Optical information model
* 20 org.onosproject.drivers 1.13.0.SNAPSHOT Default device drivers
* 39 org.onosproject.drivers.ovsdb 1.13.0.SNAPSHOT OVSDB Device Drivers
* 47 org.onosproject.openflow-base 1.13.0.SNAPSHOT OpenFlow Provider
* 56 org.onosproject.openstacknode 1.13.0.SNAPSHOT OpenStack Node Bootstrap App
* 57 org.onosproject.openstacknetworking 1.13.0.SNAPSHOT OpenStack Networking App |
3. Check that all nodes are registered and that all COMPUTE type nodes are in COMPLETE state with the openstack-nodes command. Use the openstack-node-check command for more detailed state if a node is INCOMPLETE. If you want to reinitialize a particular compute node, use the openstack-node-init command with its hostname. If you have no physical peer switch, GATEWAY type nodes will stay in DEVICE_CREATED state; they need the additional configuration explained later.
Code Block |
---|
onos> openstack-nodes
Hostname Type Integration Bridge Router Bridge Management IP Data IP VLAN Intf State
sona-compute-01 COMPUTE of:00000000000000a1 10.1.1.162 10.1.1.162 COMPLETE
sona-compute-02 COMPUTE of:00000000000000a2 10.1.1.163 10.1.1.163 COMPLETE
sona-gateway-02 GATEWAY of:00000000000000a4 of:00000000000000b4 10.1.1.165 10.1.1.165 COMPLETE
Total 3 nodes |
Switch Setup
VLAN and trunk setup is required on the switch to which the gateway nodes are connected.
1. Suppose we chose 172.27.0.1/24 as the floating IP range, the gateway nodes are connected to the switch via ports 2 and 3, and we decided to assign VLAN 20 to the floating IP range. In that case, the switch setup should look like below (Arista syntax).
Code Block |
---|
|
Switch(config)#interface vlan 20
Switch(config-vlan-20)ip address 172.27.0.1/24
Switch(config-vlan-20)no shutdown
Switch(config)#interface ethernet 2-3
Switch(config-if-Et1-2)#switchport mode trunk
Switch(config-if-Et1-2)#switchport trunk allowed vlan 20
Switch(config-if-Et1-2)#switchport trunk native vlan tag 20 |
2. If you need multiple floating IP ranges, for example 172.27.1.1/24 with VLAN 200, additional setup is required (Arista syntax).
Code Block |
---|
|
Switch(config)#interface vlan 20
Switch(config-vlan-20)ip address 172.27.0.1/24
Switch(config-vlan-20)no shutdown
Switch(config)#interface vlan 200
Switch(config-vlan-200)ip address 172.27.1.1/24
Switch(config-vlan-200)no shutdown
Switch(config)#interface ethernet 2-3
Switch(config-if-Et1-2)#switchport mode trunk
Switch(config-if-Et1-2)#switchport trunk allowed vlan 20,200
Switch(config-if-Et1-2)#switchport trunk native vlan tag 20 |
Gateway Node Setup
Basically, no additional setup is required on gateway nodes. The following steps are for those who don't have a physical peer switch.
1. Let's download and install Docker first.
Code Block |
---|
|
$ wget -qO- https://get.docker.com/ | sudo sh |
2. Install and configure OVS
The OVS version depends on the SONA features you want to enable. If you want the stateful NAT feature, you have to install OVS 2.6 or later; otherwise, OVS 2.5 is enough.
Then, set the OVSDB listener port to 6640 so that ONOS can initiate the OVSDB connection.
Code Block |
---|
$ sudo ovs-vsctl set-manager ptcp:6640 |
Configure the br-int bridge using the openstack-node-init command.
Code Block |
---|
onos> openstack-node-init gateway-01
Initializing gateway-01
Done. |
You can check that the br-int bridge is configured correctly using the ovs-vsctl command, as follows.
Code Block |
---|
$ sudo ovs-vsctl show
427d7ee0-218f-4d68-b503-a5639a367357
Manager "ptcp:6640"
Bridge br-int
Controller "tcp:10.1.1.30:6653"
is_connected: true
fail_mode: secure
Port br-int
|
3. Download sona-setup scripts as well.
Code Block |
---|
|
$ git clone https://github.com/sonaproject/sona-setup.git
$ cd sona-setup |
4. Write externalRouterConfig.ini and place it under the sona-setup directory.
Code Block |
---|
title | externalRouterConfig.ini |
---|
linenumbers | true |
---|
|
floatingCidr = "172.27.0.1/24"
externalPeerMac = "fa:00:00:00:00:01" |
- line 1, floatingCidr: Floating IP address ranges. It can be comma separated list.
- line 2, externalPeerMac: Remote peer router's MAC address.
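For instance, the comma-separated form mentioned for line 1 would look like this; the second range reuses the VLAN-200 range from the switch setup above, and the MAC is an illustrative value.

```
floatingCidr = "172.27.0.1/24,172.27.1.1/24"
externalPeerMac = "fa:00:00:00:00:01"
```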
5. Run createExternalRouter.sh. It will create an emulated external peer router.
Code Block |
---|
sona-setup$ ./createExternalRouter.sh
sona-setup$ sudo docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
5885654827e2 opensona/docker-quagga "/usr/bin/supervisord" 3 weeks ago Up 3 weeks 179/tcp, 2601/tcp, 2605/tcp router |
CLI
Command | Usage | Description |
---|
openstack-nodes | openstack-nodes | Shows the list of compute and gateway nodes registered to the OpenStack node service |
openstack-node-check | openstack-node-check [hostname] | Shows the state of each bootstrap step for a given node |
openstack-node-init | openstack-node-init [hostname] | Tries to re-initialize a given node. It does no harm to re-initialize a node that is already in COMPLETE state. |
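These commands can also be scripted from a regular shell with the `onos` client that ships in the ONOS tools directory, which runs commands on the remote Karaf CLI. A minimal sketch, assuming the example addresses and node names from this guide:

```shell
# Address of one ONOS instance in the SONA cluster (example value,
# substitute your own controller IP).
OC1=10.134.231.10

# The 'onos' client (onos/tools/test/bin/onos) executes commands on the
# remote Karaf CLI, so the bootstrap checks can be run non-interactively:
onos $OC1 openstack-nodes                  # list all registered nodes
onos $OC1 openstack-node-check gateway-01  # inspect each bootstrap step
onos $OC1 openstack-node-init gateway-01   # re-init; safe on COMPLETE nodes
```

This is convenient for re-checking all nodes in a loop after pushing a new network configuration.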
179/tcp, 2601/tcp, 2605/tcp router |
*Note that we don't use the Quagga app itself; we only use the Quagga container for convenience.
6. When all of the above is done, create a router with the external network to which the floating IP range is assigned. Then run the CLI command below to check that MAC learning for the external peer router is working:
Code Block |
---|
onos> openstack-peer-routers
Router IP Mac Address VLAN ID
172.27.0.1 FA:00:00:00:00:01 None |
HA Setup
ONOS itself provides HA by default when there are multiple instances in the cluster. This section describes how to place a proxy server in front of the ONOS cluster and use it in Neutron as a single access point to the cluster. HAProxy (http://www.haproxy.org) is used here as the proxy server.
1. Install HA proxy.
Code Block |
---|
|
$ sudo add-apt-repository -y ppa:vbernat/haproxy-1.5
$ sudo apt-get update
$ sudo apt-get install -y haproxy |
2. Configure HA proxy.
Code Block |
---|
title | /etc/haproxy/haproxy.cfg |
---|
|
global
log /dev/log local0
log /dev/log local1 notice
chroot /var/lib/haproxy
stats socket /run/haproxy/admin.sock mode 660 level admin
stats timeout 30s
user haproxy
group haproxy
daemon
# Default SSL material locations
ca-base /etc/ssl/certs
crt-base /etc/ssl/private
# Default ciphers to use on SSL-enabled listening sockets.
# For more information, see ciphers(1SSL). This list is from:
# https://hynek.me/articles/hardening-your-web-servers-ssl-ciphers/
ssl-default-bind-ciphers ECDH+AESGCM:DH+AESGCM:ECDH+AES256:DH+AES256:ECDH+AES128:DH+AES:ECDH+3DES:DH+3DES:RSA+AESGCM:RSA+AES:RSA+3DES:!aNULL:!MD5:!DSS
ssl-default-bind-options no-sslv3
defaults
log global
mode http
option httplog
option dontlognull
timeout connect 5000
timeout client 50000
timeout server 50000
errorfile 400 /etc/haproxy/errors/400.http
errorfile 403 /etc/haproxy/errors/403.http
errorfile 408 /etc/haproxy/errors/408.http
errorfile 500 /etc/haproxy/errors/500.http
errorfile 502 /etc/haproxy/errors/502.http
errorfile 503 /etc/haproxy/errors/503.http
errorfile 504 /etc/haproxy/errors/504.http
frontend localnodes
bind *:8181
mode http
default_backend nodes
backend nodes
mode http
balance roundrobin
option forwardfor
http-request set-header X-Forwarded-Port %[dst_port]
http-request add-header X-Forwarded-Proto https if { ssl_fc }
option httpchk GET /onos/ui/login.html
server web01 [onos-01 IP address]:8181 check
server web02 [onos-02 IP address]:8181 check
server web03 [onos-03 IP address]:8181 check
listen stats *:1936
stats enable
stats uri /
stats hide-version
stats auth someuser:password |
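Before restarting HAProxy, it is worth validating the file: `haproxy -c` parses the configuration and exits non-zero on errors, printing the offending line if there is one. A sketch, assuming the config path used above:

```shell
CFG=/etc/haproxy/haproxy.cfg

# Parse the configuration without starting the proxy; prints
# "Configuration file is valid" on success, or the failing line otherwise.
sudo haproxy -c -f "$CFG"

# Apply the new frontend/backend definitions.
sudo service haproxy restart
```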
3. Set url_path to point to the proxy server in the Neutron ML2 ONOS mechanism driver configuration, and restart Neutron.
Code Block |
---|
title | networking-onos/etc/conf_onos.ini |
---|
|
# Configuration options for ONOS ML2 Mechanism driver
[onos]
# (StrOpt) ONOS ReST interface URL. This is a mandatory field.
url_path = http://[proxy-server IP]:8181/onos/openstackswitching
# (StrOpt) Username for authentication. This is a mandatory field.
username = onos
# (StrOpt) Password for authentication. This is a mandatory field.
password = rocks |
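For the change to take effect, neutron-server must be restarted with conf_onos.ini on its config-file path. A sketch; the file locations below are typical for a devstack-style mitaka install and may differ on your system:

```shell
# Typical config locations; adjust to your deployment (assumptions, not
# values from this guide).
NEUTRON_CONF=/etc/neutron/neutron.conf
ML2_CONF=/etc/neutron/plugins/ml2/ml2_conf.ini
ONOS_CONF=/opt/stack/networking-onos/etc/conf_onos.ini

# Restart neutron-server so the ML2 ONOS driver re-reads url_path.
neutron-server --config-file "$NEUTRON_CONF" \
               --config-file "$ML2_CONF" \
               --config-file "$ONOS_CONF"
```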
4. Stop one of the ONOS instances and check that everything still works.
Code Block |
---|
$ onos-service $OC1 stop |
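A quick way to confirm the proxy keeps serving requests is to query the ONOS REST API through it; /onos/v1/cluster lists the cluster members. A sketch, with the proxy IP as a placeholder:

```shell
PROXY=10.134.231.20   # placeholder: your HAProxy host

# With one backend down, the httpchk health check in haproxy.cfg marks it
# out of rotation, so this request is answered by a surviving instance.
curl --connect-timeout 5 --user onos:rocks \
     "http://$PROXY:8181/onos/v1/cluster"
```

The stopped instance should also show up as DOWN on the HAProxy stats page (port 1936 in the configuration above).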
...