Juno Design Summit
Friday, May 16 • 2:10pm - 2:50pm
Hierarchical Network Topologies


This is a combined session covering related ideas in network topologies.

This session will include the following subject(s):

Support for VDP in Neutron for Network Overlays:

This topic covers topologies where OpenStack compute nodes are connected to external switches or fabrics, and the fabric comprising those switches supports more than 4K segments. In some of these architectures a network-based overlay (not a host-based overlay) is deployed, so it should be possible to create more than 4K networks in OpenStack. In one approach, the VLAN used for communication with the switches is assigned by the switches themselves using the 802.1Qbg (VDP) protocol. The VMs then send 802.1Q-tagged frames to the switches, and the switches associate them with the segment (the VNI in the case of VXLAN).
With the current model:
1. A VLAN type driver cannot be used because of the 4K limitation, and a VXLAN or GRE type driver cannot be used either, because that would imply a host-based overlay.
2. Flows should be programmed only after the vNIC information has been communicated to VDP; VDP communicates with the leaf switch and returns the VLAN to be used by this VM. The OpenStack module running on the compute node communicates with the VDP module (the open-source lldpad daemon) running there. The VLAN is only locally significant between the compute node and the switch it is connected to (see the sketch after this list).
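
To make the bottom-up exchange concrete, here is a minimal Python sketch of how an agent-side helper might ask the local lldpad (VDP) daemon for the switch-assigned VLAN of a vNIC. It is illustrative only; the vdptool invocation, its output format, and the helper name are assumptions, not the interface proposed in the blueprint.

    # Illustrative helper (assumptions: the vdptool arguments and output
    # format shown here are made up for this sketch; the real interface is
    # defined by lldpad and the blueprint's specification).
    import re
    import subprocess

    def get_vdp_vlan(uplink_iface, vsi_id, mac, vni):
        """Request a VDP association and return the switch-assigned VLAN."""
        cmd = ['vdptool', '-T', '-i', uplink_iface, '-V', 'assoc',
               '-c', 'vsiid=%s' % vsi_id,
               '-c', 'filter=%s-%s' % (mac, vni)]
        output = subprocess.check_output(cmd).decode()
        # Assume the reply contains a line such as "vlan = 123".
        match = re.search(r'vlan\s*=\s*(\d+)', output)
        if not match:
            raise RuntimeError('VDP did not return a VLAN for %s' % vsi_id)
        return int(match.group(1))

    # The agent would then use this locally significant VLAN when programming
    # flows for the port, instead of a statically configured tag.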

Please refer to the specification link in the blueprint for more detailed information. This can be used with any network-based overlay, with either top-down or bottom-up VLAN assignment:
a. With VDP-based VLAN assignment (bottom-up), the OVS Neutron agent code is changed to communicate with VDP and obtain the VLAN used for programming the flows.
b. In the top-down model, the controller knows the VLAN to be used by the VM and can pass that information to the Neutron agent for programming the flows.

A configuration parameter is required that specifies whether the network overlay uses a VDP-based mechanism to obtain the VLAN, something like:
802_1_QBG_VDP=True/False
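
For illustration, such a flag could be registered with oslo.config roughly as below; the option name, group, and wording are assumptions based on the suggestion above, not an agreed interface.

    # Illustrative only: option name, group and semantics are assumptions.
    from oslo.config import cfg

    vdp_opts = [
        cfg.BoolOpt('use_802_1qbg_vdp',
                    default=False,
                    help='Obtain the locally significant VLAN for each vNIC '
                         'from the connected switch via 802.1Qbg VDP '
                         '(bottom-up) instead of using a statically assigned '
                         'segmentation ID.'),
    ]
    cfg.CONF.register_opts(vdp_opts, group='agent')

    # The agent would branch on cfg.CONF.agent.use_802_1qbg_vdp when deciding
    # how to obtain the VLAN used to program flows for a port.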

(Session proposed by Pradeep Krish)

Physical Network Topology API:

A Neutron extension for physical network topology. This extension would allow a plugin or mechanism driver to use information about the underlying physical network in order to make better use of network resources.
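
As a purely hypothetical illustration of the kind of data such an extension might expose, the sketch below models host-to-switch links as simple records that a plugin or mechanism driver could query; the field names and resource shape are assumptions, not the proposed API.

    # Hypothetical topology records: which host uplink connects to which
    # switch port. Field names are assumptions, not the proposed resource.
    PHYSICAL_LINKS = [
        {'host': 'compute-1', 'interface': 'eth2',
         'switch': 'leaf-1', 'switch_port': 'Ethernet1/12'},
        {'host': 'compute-2', 'interface': 'eth2',
         'switch': 'leaf-1', 'switch_port': 'Ethernet1/13'},
    ]

    def switches_for_host(host, links=PHYSICAL_LINKS):
        """Return the set of switches a given hypervisor is attached to."""
        return {link['switch'] for link in links if link['host'] == host}

    # A mechanism driver could use this mapping to decide, for example, which
    # switch needs a segment allocated when binding a port on a given host.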

(Session proposed by Isaku Yamahata)

Dynamic Network Segments in ML2:

This session addresses changes needed in ML2 to enable dynamically managed network segments connecting hypervisor vswitches to upstream switches that provide network fabric tunnels, overlays, etc. The goal is to allow segments such as VLANs to be allocated separately for each switch, so that the 4K VLAN limit no longer imposes a global limit on the number of virtual networks. This is similar to the way local VLAN tags are managed within br-int by openvswitch-agent.
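
The per-switch idea can be illustrated with a small, self-contained Python sketch (not the ML2 design itself): each switch gets its own VLAN pool, so the same VLAN ID can be reused on different switches and the 4K limit applies per switch rather than globally.

    # Toy per-switch VLAN allocator; ML2 would persist such allocations as
    # dynamic segments in its database rather than in memory.
    class PerSwitchVlanAllocator(object):
        def __init__(self, vlan_min=100, vlan_max=4000):
            self._free = {}  # switch name -> set of unallocated VLAN IDs
            self._range = range(vlan_min, vlan_max + 1)

        def allocate(self, switch, network_id):
            free = self._free.setdefault(switch, set(self._range))
            if not free:
                raise RuntimeError('No VLANs left on switch %s' % switch)
            vlan = free.pop()
            return {'network_id': network_id, 'switch': switch,
                    'segmentation_id': vlan}

        def release(self, segment):
            self._free[segment['switch']].add(segment['segmentation_id'])

    # The same VLAN ID may be handed out on two different switches for two
    # different virtual networks, which is the point of per-switch scoping.
    allocator = PerSwitchVlanAllocator()
    seg_a = allocator.allocate('leaf-1', 'net-a')
    seg_b = allocator.allocate('leaf-2', 'net-b')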

The session is related to two other proposed sessions. One, http://summit.openstack.org/cfp/details/314, covers support for VDP, which is one way of dynamically managing hypervisor-to-switch segments. The other, http://summit.openstack.org/cfp/details/93, covers physical network topology, which would be useful in implementations where the ML2 mechanism driver itself dynamically manages hypervisor-to-switch segments.

(Session proposed by Robert Kukura)


Friday May 16, 2014 2:10pm - 2:50pm EDT
B304
