OpenFlow's internal mechanics are clean, elegant, and powerful. Once you know how OpenFlow works, you won't stop thinking about the possibilities.
Part one of this two-part series explained the need for a new way to provision networks in real time to meet the needs of private and public cloud infrastructures. A form of software-defined networking (SDN), OpenFlow employs a central redundant controller to let you perform all manner of network engineering tasks from a central point. Adding VLANs, configuring Quality-of-Service (QoS) settings, prioritizing applications, and changing network topologies by adding or relocating switches and routers are all point-and-click simple with OpenFlow.
IBM’s PureFlex supports OpenFlow, via the IBM Programmable Network Controller and IBM’s OpenFlow-enabled switches, such as the 40GbE RackSwitch G8316. In part one you learned the high-level concepts of OpenFlow. In this part, you’ll see how OpenFlow works under the covers. Armed with this knowledge, you’ll be ready to assess IBM’s current and future PureFlex OpenFlow offerings.
Picking up where we left off, OpenFlow centralizes all device configuration functions in a separate OpenFlow controller, eliminating the need to manually configure packet handling in switches and routers every time a new server or application comes online. Conceptually, OpenFlow creates a unique virtual network for each application, which is, in essence, an isolated, protected slice of the physical network infrastructure:
That underlying physical network need not be organized in a traditional topology, but can instead conform to whatever topology best meets the capacity and performance needs of network users. Best of all, because packet forwarding policies are centrally managed, there is no need for the network itself to disable redundant paths. All paths can be used to carry traffic, based upon the policies implemented in the OpenFlow controller. This fundamental concept ultimately can be extended to future bandwidth augmentation technologies, such as fiber optic Dense Wavelength Division Multiplexing (DWDM), which lets a single optical fiber carry multiple physical communications channels.
At its heart, OpenFlow uses a straightforward process, comprising a simple protocol interconnecting the controller with OpenFlow-enabled switches. To comprehend how OpenFlow automates network configuration, you need to first understand how OpenFlow flows are identified and processed.
In OpenFlow, a flow is any set of packets that share an identical group of attributes. The attributes can be chosen from the following ten possible values, which together form the 10-tuple. All but the first are derived from the packet header:
1. Input Port. The switch port through which the packet entered the network.
2 & 3. Source and Destination MAC addresses. The physical Ethernet addresses contained in the frame header.
4. Ethernet Type. The IEEE 802.3 Ethernet frame type.
5. Ethernet VLAN ID. The IEEE 802.1Q Ethernet VLAN ID number.
6 & 7. Source and Destination IP addresses. The logical IP addresses, IPv4 or IPv6.
8. IP Protocol. The RFC-791 (IPv4) or RFC-2460 (IPv6) IP protocol number.
9 & 10. IP Source and Destination ports. The TCP or UDP port numbers.
A given flow definition selects one or more of these attributes to compare with a packet's actual values to determine flow membership. The set of attributes to match is called a flow mask. Here is a table illustrating various types of flow masks that could be used to identify flows for various kinds of network control functions, such as switching, routing, and VLAN traffic isolation:
Each column represents one of the 10-tuples, and each row is an example mask for a particular packet-forwarding task. For example, a mask selecting only the destination MAC address emulates the kind of packet forwarding performed by traditional Ethernet switching, while a mask selecting only the destination IP address emulates traditional IP routing. Other masks can select flows for more advanced forwarding methods.
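The 10-tuple and flow-mask concepts are easy to sketch in code. The following is a minimal illustration (field names are my own, not drawn from any specific OpenFlow library): a packet carries all ten attributes, and a mask selects the subset that must match for the packet to belong to a flow.

```python
# Illustrative sketch of 10-tuple flow matching; not a real OpenFlow API.
from dataclasses import dataclass, asdict

@dataclass(frozen=True)
class Packet:
    in_port: int      # 1: switch input port
    eth_src: str      # 2: source MAC
    eth_dst: str      # 3: destination MAC
    eth_type: int     # 4: Ethernet frame type
    vlan_id: int      # 5: 802.1Q VLAN ID
    ip_src: str       # 6: source IP
    ip_dst: str       # 7: destination IP
    ip_proto: int     # 8: IP protocol number
    tp_src: int       # 9: TCP/UDP source port
    tp_dst: int       # 10: TCP/UDP destination port

def matches(packet: Packet, mask: dict) -> bool:
    """A packet belongs to a flow when every masked field matches.
    `mask` maps a subset of the ten fields to required values;
    unmasked fields are wildcards."""
    fields = asdict(packet)
    return all(fields[name] == value for name, value in mask.items())

pkt = Packet(in_port=1, eth_src="00:00:00:00:00:01", eth_dst="00:00:00:00:00:02",
             eth_type=0x0800, vlan_id=10, ip_src="10.0.0.1", ip_dst="10.0.0.2",
             ip_proto=6, tp_src=49152, tp_dst=80)

l2_mask = {"eth_dst": "00:00:00:00:00:02"}  # emulates Ethernet switching
l3_mask = {"ip_dst": "10.0.0.2"}            # emulates IP routing
```

With these two masks, `matches(pkt, l2_mask)` and `matches(pkt, l3_mask)` are both true for the sample packet, mirroring the switching and routing rows of the table.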
Each device in the network performs packet forwarding by first identifying the flow to which a packet belongs, and then executing a rule for that flow. The basic structure of a rule is simple, but powerful:
The rule contains a pre-defined action and a definition of statistics to calculate, such as packet and byte counts, which the OpenFlow controller periodically collects from all devices to measure network performance.
OpenFlow defines four basic actions for a rule:
1. Forward. Send the packet to a specific destination port on the device. This is the most frequently executed action, and is one reason why OpenFlow can accommodate high traffic volumes.
2. Encapsulate. Enclose the packet in a separate transport header and send it to the controller, which then can use the packet to create and propagate new rules, for example, to route packets along a specific path.
3. Drop. Discard the packet, an action that can be used to provide access control list behaviors or Denial-of-Service (DoS) attack mitigation.
4. Send to switch. For devices that operate in both OpenFlow and traditional switched Ethernet realms, this action sends the packet to the switch's legacy L2/L3 processing. This action is useful for migrating legacy networks to OpenFlow, letting the controller choose which traffic gets flow handling.
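Putting the pieces together, a flow-table entry pairs a mask with one of the four actions and the per-flow counters the controller collects. Here is a hedged sketch (class and field names are illustrative, not a real OpenFlow API) of a device processing a packet against its flow table; note that a table miss falls through to the Encapsulate action, which is how packets reach the controller.

```python
# Illustrative sketch of a flow-table entry and lookup; not a real OpenFlow API.
from enum import Enum, auto

class Action(Enum):
    FORWARD = auto()         # send out a specific device port
    ENCAPSULATE = auto()     # wrap the packet and send it to the controller
    DROP = auto()            # discard: ACL behavior, DoS mitigation
    SEND_TO_SWITCH = auto()  # hand to the legacy L2/L3 pipeline (hybrid switches)

class FlowRule:
    def __init__(self, mask, action, out_port=None):
        self.mask = mask          # subset of the 10-tuple fields to match
        self.action = action
        self.out_port = out_port
        self.packet_count = 0     # statistics the controller periodically collects
        self.byte_count = 0

    def matches(self, packet):
        return all(packet.get(k) == v for k, v in self.mask.items())

    def apply(self, packet, length):
        self.packet_count += 1
        self.byte_count += length
        return self.action, self.out_port

def process(table, packet, length):
    for rule in table:
        if rule.matches(packet):
            return rule.apply(packet, length)
    # Table miss: encapsulate and punt to the controller.
    return Action.ENCAPSULATE, None

table = [
    FlowRule({"ip_dst": "10.0.0.9"}, Action.DROP),  # blocked host
    FlowRule({"eth_dst": "00:00:00:00:00:02"}, Action.FORWARD, out_port=3),
]

action, port = process(table, {"eth_dst": "00:00:00:00:00:02"}, length=1500)
```

After this lookup, the second rule's counters read one packet and 1,500 bytes, exactly the statistics the controller later pulls to measure network performance.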
The OpenFlow controller implements these flow mechanics by separating the data and control planes of all network devices. Instead of sending network configuration commands in-band, as is done with legacy routers and switches, OpenFlow has a separate, physically secure path to each network device:
This out-of-band management approach ensures that control traffic can’t be blocked by data traffic, and that the OpenFlow controller has reliable direct access to every OpenFlow-enabled device.
Because the OpenFlow controller directly collects statistics for every flow from every network element, it has all the information needed for network performance measurement and topology visualization. The controller can generate a wide variety of displays, showing network organization, application resource usage, throughput and SLA compliance, and failed components (Figure 6). Because all network links are available at all times to carry traffic, when a network link fails OpenFlow need only adjust the rules in directly affected switches to circumvent the problem. Because it does not have to recalculate the entire network state to find a single path, OpenFlow converges on a new path much faster than legacy Ethernet.
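The controller's statistics collection is just a periodic pull of the per-flow counters described earlier. A minimal sketch, assuming a hypothetical `read_counters` callback that returns each flow's cumulative byte count, shows how two samples taken an interval apart yield per-flow throughput:

```python
# Sketch of the controller's periodic statistics pull; the counter source
# is a hypothetical callback, not a real OpenFlow protocol message.
def poll_throughput(read_counters, interval_s):
    """Sample cumulative per-flow byte counters twice, `interval_s` apart
    (simulated here by two successive calls), and return bytes/sec per flow."""
    before = read_counters()
    after = read_counters()
    return {flow: (after[flow] - before.get(flow, 0)) / interval_s
            for flow in after}

# Two simulated counter snapshots, 5 seconds apart: 5 MB transferred.
samples = iter([{"web": 1_000_000}, {"web": 6_000_000}])
rates = poll_throughput(lambda: next(samples), interval_s=5)
```

Here `rates["web"]` works out to 1,000,000 bytes/sec, the kind of per-flow figure that feeds throughput and SLA-compliance displays.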
OpenFlow Hardware Components
OpenFlow switching devices have minimal onboard intelligence: essentially just the ability to match packets to flows and execute the four basic actions. Most existing Ethernet L2 and L3 managed switches have dedicated hardware, in the form of Ternary Content Addressable Memory (TCAM) and custom ASIC integrated circuits, to efficiently identify and process flows at wire speed. A number of switch vendors have added OpenFlow to current products through firmware upgrades.
OpenFlow-compliant switches come in two varieties: OpenFlow-only, and OpenFlow-hybrid. OpenFlow-only switches support only OpenFlow operation, and cannot perform traditional Ethernet switching. OpenFlow-hybrid switches support both OpenFlow and normal packet forwarding: L2 switching, L3 routing, VLAN isolation, and ACL and QoS processing. IBM's RackSwitch G8264 is a hybrid switch, so you can use it to join traditional and OpenFlow networks in support of migration to OpenFlow.
OpenFlow updates the flow rules in switches either proactively or reactively. For proactive updates, the controller pre-populates each switch in advance, using the network engineering requirements stipulated by the end user or application. Reactive updates occur when the controller receives a packet for which no matching flow rule exists. Both approaches eliminate the need to maintain complex logic within switches, enabling the use of commodity switching elements.
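The two update modes can be sketched side by side. In this illustration (the `Switch` and `Controller` classes are my own simplification, not a real controller framework), proactive setup pushes rules to every switch up front, while the reactive path handles a table miss by computing and installing a rule on demand:

```python
# Illustrative sketch of proactive vs. reactive rule installation.
class Switch:
    def __init__(self, name):
        self.name = name
        self.rules = []          # flow table, populated only by the controller

    def install(self, mask, out_port):
        self.rules.append((mask, out_port))

    def lookup(self, packet):
        for mask, out_port in self.rules:
            if all(packet.get(k) == v for k, v in mask.items()):
                return out_port
        return None              # table miss: packet goes to the controller

class Controller:
    def __init__(self, switches):
        self.switches = switches

    def proactive_setup(self, mask, out_port):
        # Pre-populate every switch from stipulated engineering requirements.
        for sw in self.switches:
            sw.install(mask, out_port)

    def packet_in(self, sw, packet):
        # Reactive path: a miss arrives encapsulated; compute a rule
        # (here, trivially pin the destination IP to port 2) and push it
        # so subsequent packets are forwarded by the switch itself.
        mask = {"ip_dst": packet["ip_dst"]}
        sw.install(mask, out_port=2)
        return 2

sw = Switch("edge-1")
ctl = Controller([sw])
pkt = {"ip_dst": "10.0.0.5"}
first = sw.lookup(pkt)           # None: miss, handled reactively
ctl.packet_in(sw, pkt)
second = sw.lookup(pkt)          # 2: later packets match in the switch
```

The switch itself carries no decision logic beyond lookup and install, which is exactly what makes commodity switching elements sufficient.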
The OpenFlow controller has no formal specification of its own, other than the requirement that it use the OpenFlow protocol exclusively to configure switches. It can be an off-the-shelf computer; the only requirements are an Ethernet connection to the OpenFlow management network and sufficient CPU capacity to process the anticipated new-flow rate. Thus OpenFlow product developers are free to create NaaS capabilities using a wide range of technologies. For example, an OpenFlow controller could implement the OSPF routing protocol by using encapsulated packets to populate the OSPF adjacency tables and perform OSPF-compliant routing by pushing appropriate rules out to the switches.
To achieve network resilience, OpenFlow controllers can be deployed using standard clustering techniques. Alternatively, in a large-scale operation, multiple controllers could be distributed throughout the network, interconnected to share state information.
The combination of fast, simple switching and a flexible controller architecture lets network operators use a rich set of high-level network engineering constructs to build virtual networks using network-wide programming, rather than piecemeal device configuration. OpenFlow-specific languages, such as Frenetic, have been devised to make such network programming simpler and more reliable. Moreover, the OpenFlow controller itself can be virtualized, to enable multiple network operators to manage their own logical network domains.
Stanford University’s FlowVisor research project is an example. FlowVisor is an open source, special purpose OpenFlow controller that acts as a transparent proxy between OpenFlow switches and multiple OpenFlow virtual controllers. FlowVisor introduces the network slice concept to OpenFlow: a slice is any combination of switch ports (layer 1), src/dst Ethernet address or type (layer 2), src/dst IP address or type (layer 3), and src/dst TCP/UDP port or ICMP code/type (layer 4), delegated to a specific network operator.
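The slicing idea reduces to a predicate over flowspace: the proxy delivers each switch event to the one virtual controller whose slice contains the packet. A simplified sketch (exact-match fields only; real FlowVisor slices also support ranges and prefixes, and the tenant names here are invented):

```python
# Hypothetical sketch of FlowVisor-style slicing; field names and tenants
# are illustrative, and matching is exact-value only for simplicity.
def in_slice(slice_def, packet):
    return all(packet.get(field) == value for field, value in slice_def.items())

slices = {
    "tenant-a": {"vlan_id": 10},                     # a layer-2 slice
    "tenant-b": {"ip_dst": "10.1.0.1", "tp_dst": 80},  # a layer-3/4 slice
}

def route_event(packet):
    """Deliver a switch event to the virtual controller owning its slice."""
    for tenant, slice_def in slices.items():
        if in_slice(slice_def, packet):
            return tenant
    return None   # packet falls outside every delegated slice
```

Each tenant's virtual controller then sees, and can program rules for, only its own region of the flowspace, which is what keeps the operators isolated from one another.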
Because many OpenFlow controller projects are open source, you can build and run your own controller on most any generic x86 server. OpenFlow.org's Components page lists reference implementations based on Linux for both switch and controller elements. The Components page also points you to a wide variety of public test projects, with which you can explore advanced controller features, such as GUI visualization and automated policy enforcement.
OpenFlow Advantages and PureFlex
OpenFlow’s versatile rule system can replicate the functionality of nearly any network architecture in use today, including WAN architectures using circuit-switched paths. In the data center it’s particularly valuable for multi-tenant networks, where traffic isolation, security, and SLA compliance are critical. OpenFlow greatly simplifies such routine private and public cloud operations as live server migration and VLAN-within-VLAN (Q-in-Q) provisioning.
IBM teamed up with NEC to aggressively develop a production-quality OpenFlow controller, the IBM Programmable Network Controller (PNC), a Red Hat-based application that provides an intelligent, graphical control console for controlling OpenFlow-compatible switches:
IBM's PNC provides end-to-end network configuration and visualization. It automatically and continuously discovers the OpenFlow network topology, mapping physical and virtual traffic flows spanning OpenFlow-compatible switches. It supports separate network policy groups to maintain multi-tenant isolation, and lets you implement any OpenFlow 10-tuple functional service, letting you create up to 10,000 VLANs, routers, QoS policers, and even firewalls. PNC officially supports IBM's RackSwitch G8264 and NEC's PF5240 OpenFlow switches, but is designed to work with any OpenFlow 1.0-compliant device.
PNC's visualization features include the ability to view traffic flows at the protocol level using sFlow statistical traffic monitoring. It can also depict individual host and VM interfaces and traffic loads, path utilization, and detailed information on any flow in the network.
An OpenFlow network will continue to function even if its controller is unavailable, but it operates in a degraded mode using the last known network configuration. Until a controller comes back online, no dynamic traffic engineering policies will be enforced. To accommodate failures, PNC can operate as a fully redundant active/standby pair to ensure network continuity even in the event of a failure of the controller software or hardware platform. Because the state of the network can be read from the OpenFlow switches, the standby controller learns the topology automatically when it comes online to replace the primary. It can then apply the replicated policies to resume dynamic network monitoring and traffic engineering.
By separating control and data planes, OpenFlow eliminates one of the most common causes of network failure, loss of control communications. OpenFlow’s intrinsic out-of-band management architecture ensures that even total saturation of data network paths leaves you in control of every network element. A single PNC installation employs two GigE ports: one to communicate with the out-of-band management network, and one to provide access for remote administration of the controller. A dual-PNC installation uses six GigE ports on each controller, to provide redundant connections between the controllers and the out-of-band network. IBM licenses PNC based on the number of switches managed, with options for 1, 10, and 50 devices.
Compared to traditional Ethernet switching and routing, OpenFlow’s automatic rule generation and distribution vastly simplifies network configuration. It consolidates L2, L3, and L4 (application) functions into a single cohesive network engineering implementation. This enhances network resilience and reliability. OpenFlow’s programmability permits virtually unlimited scalability, and greatly reduces service deployment times.
With network vendors enthusiastically developing OpenFlow products, this new generation packet processing technology is well positioned to deliver true network virtualization to both private and public clouds.