The Software-Defined Stack | E II

OpenStack: A Brief Networking Tour

  • Tiny Networking Capsule
  • Nova-network
  • Neutron Networking
  • Meta-data service
  • Summary


What is a layer 2 network?

It is a network that does not require any routing hops (e.g., traffic within the same subnet).

  • Usually, a switch does not divide the devices connected to it into multiple domains; all of them are part of the same broadcast domain.
  • To break up a broadcast domain, VLANs are used (by tagging traffic). The two VLAN port types are (see the sketch after this list):
    • Access port: configured for access to one VLAN (i.e., a single-tag port)
    • Trunk port: can receive frames tagged/labeled with different VLANs
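
The snippet below is a minimal sketch of the two port types on a Linux / Open vSwitch host, assuming iproute2 and OVS are available; the interface names (eth0, vnet0) and the VLAN IDs are placeholders.

    # Sketch: tagging traffic with VLANs on a host (illustrative names/IDs).
    import subprocess

    def run(cmd):
        subprocess.check_call(cmd, shell=True)

    # Create an 802.1Q sub-interface so traffic leaving eth0.100 is tagged with VLAN 100.
    run("ip link add link eth0 name eth0.100 type vlan id 100")

    # Access port: the VM-facing port carries untagged frames; OVS tags them with VLAN 100.
    run("ovs-vsctl set port vnet0 tag=100")

    # Trunk port: the uplink carries frames tagged with VLAN 100 and 200.
    run("ovs-vsctl set port eth0 trunks=100,200")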

What is Nova-Network?                                                                                                 

It is the original (and now deprecated) way of doing networking in OpenStack. It does the following:

  • Basic layer-2 bridging through Linux bridges
  • IP address management for tenants (assigning and tracking tenant IP addresses)
  • Configuring DHCP and DNS entries in “dnsmasq”
  • Configuring firewall policies and NAT in iptables to secure inbound/outbound traffic (see the sketch after this list)
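
As a rough sketch of what Nova-network sets up on a host, the snippet below builds a Linux bridge, starts dnsmasq for DHCP, and adds an iptables NAT rule; the interface names, subnet, and bridge name are placeholders, not the exact commands Nova runs.

    # Sketch: the kind of plumbing Nova-network performs on a compute host.
    import subprocess

    def run(cmd):
        subprocess.check_call(cmd, shell=True)

    # Layer-2 bridging through a Linux bridge (names are illustrative).
    run("ip link add name br100 type bridge")
    run("ip link set eth1 master br100")
    run("ip link set br100 up")

    # DHCP/DNS for the tenant subnet via dnsmasq.
    run("dnsmasq --interface=br100 --dhcp-range=10.0.0.2,10.0.0.254")

    # NAT so instances on 10.0.0.0/24 can reach the outside world.
    run("iptables -t nat -A POSTROUTING -s 10.0.0.0/24 -j MASQUERADE")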

What are Nova-network Models?

  • FlatNetwork – no traffic segregation at the host level; VLANs and network functions are configured at the physical network level
  • VLANetwork – each tenant gets its own VLAN (subject to the VLAN ID limit), plus a dnsmasq service that provides DHCP at the host level

Problems with Nova-network?

  • Inherits the VLAN sizing limitations (e.g., 4094 VLAN IDs).
  • Since it is coupled with Nova (the compute part of OpenStack), it is very limited and does not allow integration with networking products from other vendors
  • Poor multi-tenancy (especially in the FlatNetwork model)
  • No support for L3 devices

Previous posts about SDN highlighted that abstraction is a key element of any software-defined technology, and it is clear that Nova-network does not deliver that abstraction very well. We clearly need something else…. Neutron to the rescue.

What is Neutron?

Neutron is an API wrapper. It basically receives commands/calls that describe traditional networking operations and relays them to the network plugin of choice (e.g., NSX). With this, abstraction is achieved.
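
As an illustration of that wrapper role, the sketch below creates a network through Neutron's REST API using the Python requests library; the endpoint URL and token are placeholders (a real token would come from Keystone), and Neutron simply hands the call down to whichever plugin is configured.

    # Sketch: calling the Neutron API; the plugin behind it does the real work.
    import requests

    NEUTRON = "http://controller:9696/v2.0"   # placeholder Neutron endpoint
    TOKEN = "<keystone-token>"                # placeholder auth token

    resp = requests.post(
        f"{NEUTRON}/networks",
        headers={"X-Auth-Token": TOKEN},
        json={"network": {"name": "demo-net", "admin_state_up": True}},
    )
    resp.raise_for_status()
    print(resp.json()["network"]["id"])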

What are the main components of Neutron?

Neutron runs alongside the controller node on what is referred to as the network node. On the network node, the following services are running (see the sketch after this list):

  • Neutron Server + Open vSwitch (OVS) Plugin – controls all the agents running on the network node
  • N-L3Agent – controls namespaces and iptables and does all the NAT functions
  • N-L3-DHCP – Assigning IPs to VMs and acting as a DHCP server (i.e., controls the “dnsmasq” MAC to IP mappings)
  • N-OVS-Agent – talks to the switch to configure flows for traffic
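
As a quick sanity check, the sketch below looks for those services on a node using their usual distro package names (neutron-server, neutron-l3-agent, neutron-dhcp-agent, neutron-openvswitch-agent); the names map onto the agents listed above but are an assumption about how the node was deployed.

    # Sketch: checking which Neutron services are running on the network node.
    import subprocess

    AGENTS = [
        "neutron-server",             # API server + plugin
        "neutron-l3-agent",           # namespaces, iptables, NAT
        "neutron-dhcp-agent",         # dnsmasq / IP assignment
        "neutron-openvswitch-agent",  # programs flows on the OVS bridges
    ]

    for svc in AGENTS:
        # pgrep -f returns 0 when a matching process is found
        running = subprocess.call(["pgrep", "-f", svc],
                                  stdout=subprocess.DEVNULL) == 0
        print(f"{svc:30s} {'running' if running else 'not found'}")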

Basically, the OVS plugin communicates with the OVS agent, which in turn configures the flows and tunnels on the underlying bridges. The OVS agents are installed on all the nodes (compute and network). When traffic is forwarded, it gets tunneled at the bridge through the configuration applied by the OVS agent.
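
The commands below (wrapped in Python) give a feel for what the agent does: creating a tunnel port on br-tun towards a peer node and inspecting the programmed flows; the bridge names follow the usual br-int/br-tun convention and the remote IP is a placeholder.

    # Sketch: tunnel port and flow inspection on the OVS tunnel bridge.
    import subprocess

    # The agent wires a tunnel port on br-tun towards a peer node (GRE here).
    subprocess.check_call(
        "ovs-vsctl add-port br-tun gre-1 -- "
        "set interface gre-1 type=gre options:remote_ip=192.168.0.12",
        shell=True)

    # Inspect the flows that have been programmed on the tunnel bridge.
    print(subprocess.check_output("ovs-ofctl dump-flows br-tun",
                                  shell=True, text=True))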

Note: This is not a completely software-defined pattern as we still have the control-plane coupled in our virtual network. By adding a controller we could better comply with the software-defined pattern (will address that in a different post).

Nova Meta-data service

The metadata service provides important information to the guest instance so it can configure itself correctly; this includes:

  • Setting a default locale or a host name
  • Setting up ephemeral storage mount points
  • Generating SSH keys and adding public SSH keys to the user’s .ssh directory (see the sketch after this list)
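
From inside a guest, consuming the metadata service is just an HTTP fetch against the well-known address 169.254.169.254; the sketch below shows both the EC2-style and OpenStack-native paths (run inside a VM).

    # Sketch: how a guest instance reads its metadata.
    import requests

    BASE = "http://169.254.169.254"

    # EC2-style paths: hostname and the injected public SSH key.
    hostname = requests.get(f"{BASE}/latest/meta-data/hostname").text
    ssh_key = requests.get(f"{BASE}/latest/meta-data/public-keys/0/openssh-key").text

    # OpenStack-native path: one JSON document with the instance details.
    meta = requests.get(f"{BASE}/openstack/latest/meta_data.json").json()
    print(hostname, meta.get("uuid"))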

In previous versions of Neutron (e.g., Folsom), the dhcp-agent used to provide a static-route option that pointed guests at “169.254.169.254” (i.e., the dhcp-agent host); iptables on the dhcp-agent host then NATed the request either to the local metadata server or to a remotely installed metadata service. This method, however, has no support for overlapping IPs in different projects/tenants.
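
The redirection itself boiled down to a NAT rule on the dhcp-agent host, roughly like the sketch below; the nova metadata API listens on port 8775, and the target address here is a placeholder.

    # Sketch: Folsom-era DNAT of metadata traffic towards the nova metadata API.
    import subprocess

    subprocess.check_call(
        "iptables -t nat -A PREROUTING -d 169.254.169.254/32 -p tcp --dport 80 "
        "-j DNAT --to-destination 192.168.0.10:8775",   # placeholder nova-api host
        shell=True)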

In later versions of OpenStack (e.g., Grizzly, Havana, Icehouse, Juno, etc.) the problem was solved by dedicating two services (quantum-ns-metadata-proxy and quantum-metadata-proxy). Metadata requests are forwarded by the N-L3-Agent to the quantum-ns-metadata-proxy running in the tenant namespace, which forwards them to the quantum-metadata-proxy service over a dedicated internal socket, adding two headers that identify the tenant namespace (X-Forwarded-For and X-Quantum-Router-ID); this solves the overlapping-IP problem that existed before. Since the metadata proxy is the only service that can reach the management network, it is used to forward the request to nova-metadata, which could be installed on one of the nodes in the management network.
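
Conceptually, what the namespace proxy adds looks like the sketch below: the guest's request is forwarded on with headers identifying where it came from, which is what makes overlapping tenant IPs distinguishable; the host name, port, and function name here are illustrative rather than the precise wire format.

    # Conceptual sketch: forwarding a metadata request with identifying headers.
    import requests

    def forward_metadata_request(path, instance_ip, router_id):
        headers = {
            "X-Forwarded-For": instance_ip,       # the guest's fixed IP
            "X-Quantum-Router-ID": router_id,     # identifies the tenant router/namespace
        }
        # The placeholder host below stands in for the nova metadata service.
        return requests.get(f"http://nova-metadata:8775{path}", headers=headers)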

Summary

In this post we reviewed nova-network (now deprecated), which was used before Neutron, and highlighted the problems that led to Neutron's development. With Neutron, many extra capabilities were added (e.g., tunneling and improved multi-tenancy through the use of IDs and headers). It should be noted that we did not address all aspects of Neutron, only those of major importance. In future writings we will tackle deeper topics and look more into the other components of OpenStack.
