SDN FAQ

Software has been programming our networks for a long time, so how is SDN different?

It's true that software such as distributed routing algorithms and management protocols has been determining forwarding paths and setting network device parameters for a long time. However, the tools involved have tended to be confined to networking's own ecosystem and proprietary to each vendor. SDN improves on this with several big ideas: centralized control, programmatic interfaces, and integration with orchestration and automation tools.
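
To make "programmatic interfaces" concrete, here is a minimal sketch of what provisioning through a controller's northbound REST API can look like. The controller address, endpoint path, credentials and payload schema are all hypothetical; every controller defines its own.

```python
# A minimal sketch of driving the network through a controller's REST API.
# The URL, endpoint and payload schema are hypothetical -- real controllers
# each publish their own northbound interface.
import requests

CONTROLLER = "https://sdn-controller.example.com:8443"   # hypothetical address

policy = {
    "name": "isolate-dev-traffic",
    "match": {"vlan": 210},
    "action": {"forward-to": "dev-segment"},
}

resp = requests.post(
    f"{CONTROLLER}/api/v1/policies",   # hypothetical endpoint
    json=policy,
    auth=("admin", "admin"),           # lab credentials only
    timeout=10,
)
resp.raise_for_status()
print("Policy accepted:", resp.json())
```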

Why is SDN better than the traditional network I have today?

How SDN can improve your network depends largely on the problem you're trying to solve. With the proper SDN solution in place, you could smooth out your operational processes, reduce human errors, or forward traffic in unconventional ways as defined by metrics unique to your organization. In short, you're gaining efficiency and flexibility.

What are the common use cases?

There are two major use cases SDN is addressing in the enterprise today. The first is aiding network data capture and visualization. In this use case, network traffic of interest, as defined by a software policy, is copied to collectors where it can be analyzed and visualized. The SDN controller can insert virtual taps throughout the network infrastructure and send copies of flows from wherever they originate to wherever the analysis engine resides.
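
As a rough sketch of that idea, assuming an Open vSwitch bridge named br0 with made-up port numbers, a single flow entry can act as a virtual tap by forwarding traffic normally while also copying it to a collector port:

```python
# A sketch of a "virtual tap" on Open vSwitch: forward HTTPS traffic out its
# normal port (2) and also copy it to a collector port (10). The bridge name
# and port numbers are assumptions for the example.
import subprocess

flow = "priority=200,tcp,tp_dst=443,actions=output:2,output:10"
subprocess.run(["ovs-ofctl", "add-flow", "br0", flow], check=True)
```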

The second is what could be thought of as creative forwarding, where traffic is forwarded across an engineered network path based on criteria other than traditional forwarding paradigms like OSPF, BGP or MPLS. Common applications include special treatment of latency- or jitter-sensitive traffic, forcing selected traffic through an inspection device to improve security, and "routing for dollars," where traffic is routed across paths that are cheaper for an organization to use depending on time of day or link utilization.
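
A similar sketch, again assuming Open vSwitch and made-up port numbers, steers DSCP EF-marked traffic (typically voice) onto an engineered low-latency path while everything else is switched normally:

```python
# A sketch of policy-based forwarding: DSCP EF (46) traffic exits via a
# low-latency path on port 3; all other traffic falls back to normal
# switching. Bridge name and port numbers are assumptions.
import subprocess

flows = [
    "priority=300,ip,ip_dscp=46,actions=output:3",  # engineered low-latency path
    "priority=0,actions=NORMAL",                    # default switching
]
for flow in flows:
    subprocess.run(["ovs-ofctl", "add-flow", "br0", flow], check=True)
```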

Why does the Open Networking Foundation act in a closed manner, unlike the IETF or IEEE?

The ONF was created in part to facilitate rapid development of the OpenFlow protocol. OpenFlow is a vendor-independent protocol used by an SDN controller to program forwarding tables in network switches using a variety of traffic-matching conditions and actions. Speed is best accomplished with a small set of defined members with a vested interest in a specific result. If the ONF operated in the open manner of the IETF or the IEEE, the development process would necessarily be slower in order to be inclusive of all the parties, use cases and concerns that might come up.
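
For a flavor of what programming a forwarding table looks like, the sketch below is modeled on the OpenFlow 1.3 sample applications that ship with the open-source Ryu controller framework; the policy itself (dropping telnet) is purely illustrative:

```python
# A minimal Ryu application, modeled on its OpenFlow 1.3 sample apps: when a
# switch connects, install a flow entry matching IPv4 TCP traffic to port 23
# with an empty action list, which in OpenFlow means "drop".
from ryu.base import app_manager
from ryu.controller import ofp_event
from ryu.controller.handler import CONFIG_DISPATCHER, set_ev_cls
from ryu.ofproto import ofproto_v1_3


class DropTelnet(app_manager.RyuApp):
    OFP_VERSIONS = [ofproto_v1_3.OFP_VERSION]

    @set_ev_cls(ofp_event.EventOFPSwitchFeatures, CONFIG_DISPATCHER)
    def switch_features_handler(self, ev):
        dp = ev.msg.datapath
        ofp, parser = dp.ofproto, dp.ofproto_parser
        # Match conditions (IPv4, TCP, destination port 23) plus actions
        # make up the flow entry pushed to the switch.
        match = parser.OFPMatch(eth_type=0x0800, ip_proto=6, tcp_dst=23)
        inst = [parser.OFPInstructionActions(ofp.OFPIT_APPLY_ACTIONS, [])]
        dp.send_msg(parser.OFPFlowMod(datapath=dp, priority=100,
                                      match=match, instructions=inst))
```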

There has been some discussion of opening up the ONF proceedings at some point to allow the larger networking community to observe the OpenFlow specification discussions.

Is OpenFlow destined to become the new way to forward traffic through a network?

OpenFlow's long-term future is uncertain at this point. Arguably, OF has proven most useful in soft switches that run at the network edge in a hypervisor, relying on server-based x86 computing power to do the needed processing. However, when implemented in traditional hardware switches, OF's usefulness has depended on the switch's silicon and that silicon's ability to handle OpenFlow operations at the scale a given use case requires.

Network designers considering OpenFlow hardware must evaluate vendors carefully, as not all OF switches are created equal. Another point against OpenFlow as a long-term replacement for traditional forwarding is that OF doesn't necessarily expose all the hardware capabilities that custom ASIC designers like Cisco, Juniper and Brocade bake into their chips. While these vendors might support OF as an adjunct means of populating forwarding tables and policies, they are also exposing their own APIs that take full advantage of their hardware's capabilities.

Some argue that OF has scalability problems because of limited flow entries and the latency of punting to the controller. Is this true?

It is true that network switches with OF capability tend to support maximum flow tables of under 10,000 entries. Whether this is a limitation depends on the use case and the overall network design. Vendors point out that when OF is used at the network edge (as opposed to the core), several thousand flow entries are unlikely to present a limitation, and that a simplified core, where edge tenants are hidden behind an overlay, can also succeed.

It is also true that when an OpenFlow switch has no matching flow entry for incoming traffic, that traffic must be punted to the controller, which introduces latency of anywhere from dozens to hundreds of milliseconds. In addition, an OpenFlow switch's CPU can only punt so fast, typically limiting punting operations to 1,000 or fewer per second. While that sounds slow to a network designer used to line-rate forwarding of L2 and L3 traffic at terabit scale, vendors point out that in a typical deployment, flow tables can be pre-populated with entries, since endpoints are known to the controller. This minimizes the need for punting.
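
The sketch below, again assuming Open vSwitch with made-up ports and addresses, contrasts the two approaches: a low-priority table-miss entry that punts unmatched traffic to the controller, versus proactively installed entries for endpoints the controller already knows about:

```python
# Reactive vs. proactive flow programming on an assumed Open vSwitch bridge.
import subprocess

def add_flow(flow: str) -> None:
    subprocess.run(["ovs-ofctl", "add-flow", "br0", flow], check=True)

# Reactive: the table-miss entry punts unmatched traffic to the controller,
# which is where the per-flow setup latency comes from.
add_flow("priority=0,actions=CONTROLLER:65535")

# Proactive: pre-populate entries for endpoints the controller already knows,
# so most traffic never hits the table-miss entry at all.
known_hosts = {"10.0.0.11": 1, "10.0.0.12": 2}   # host IP -> switch port (assumed)
for ip, port in known_hosts.items():
    add_flow(f"priority=100,ip,nw_dst={ip},actions=output:{port}")
```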

Isn't an SDN controller a single point of failure?

One of SDN's big ideas is that a centralized controller knows the entire network topology, and can therefore program the network in ways that a distributed control plane cannot. Vendors recognize the mission-critical role of the controller, and typically offer the controller as a distributed application that can be run as a clustered appliance, or as a virtual machine that takes advantage of a hypervisor's high availability. In addition, it doesn't necessarily follow that if the controller goes down, the network goes down with it. While architectures vary by vendor, it's usually a reasonable assumption that the network will continue to forward traffic (at least for a while) even if the controller is no longer present.

Can I install SDN alongside my existing network?

Yes. One common topology for deployments in a brownfield environment is an "SDN island," where an SDN domain connects to the legacy network through a gateway device. Another is hybrid switching, where a switch that can handle both OpenFlow and traditional networking splits its ports between the two domains. Hybrid capabilities vary by vendor.

What are overlays, and why are there so many different kinds?

An overlay is used to create virtual network containers that are logically isolated from one another while sharing the same underlying physical network. Virtual eXtensible LAN (VXLAN), Network Virtualization using GRE (NVGRE) and Stateless Transport Tunneling (STT) all emerged at roughly the same time, each with a different vendor leading the effort.
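
For a sense of what an overlay actually adds on the wire, the sketch below builds the 8-byte VXLAN header defined in RFC 7348. Its 24-bit VNI is what identifies the virtual network; the encapsulated frame rides inside a UDP datagram to port 4789. The VNI value here is just an example.

```python
# Build the 8-byte VXLAN header from RFC 7348: a flags word with the "I" bit
# set (meaning the VNI is valid), then the 24-bit VNI followed by a reserved byte.
import struct

VXLAN_UDP_PORT = 4789  # IANA-assigned destination port for VXLAN

def vxlan_header(vni: int) -> bytes:
    if not 0 <= vni < 2**24:
        raise ValueError("VNI must fit in 24 bits")
    flags = 0x08 << 24          # "I" flag set, reserved bits zero
    return struct.pack("!II", flags, vni << 8)

print(vxlan_header(5001).hex())  # -> 0800000000138900
```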

If you'll allow for some generalization, Cisco (and others) have pushed VXLAN, Microsoft has driven NVGRE, and Nicira (now part of VMware) has championed STT. The overlays share similar characteristics but differ in details that make each the darling of some and not others. Over time, VXLAN has gained the strongest following (including, interestingly, VMware), but it's not yet clear that NVGRE and STT will be deprecated, as both have ardent supporters. In addition, the IETF NVO3 working group has been working on yet another overlay, although the encapsulation type is likely to be one that already exists.

Why are there so many different kinds of controllers?

Vendors early to market with SDN technology have necessarily had to bring a controller as a part of the overall solution. There is no such thing as an SDN controller standard at this time; therefore, each vendor has come up with a controller that best meets the needs of its target market.

Wouldn't it be better if there were SDN controller standards the industry could agree on?

With the creation of the OpenDaylight project, the industry seems to think so. OpenDaylight is a consortium of vendors from across the industry that are contributing code to an open source SDN effort. Time will tell how this translates into vendor products, and what this will mean for the SDN consumer.

Will network engineers have to become programmers?

Network engineers with an understanding of scripting and programming will be able to leverage SDN technology. Will they have to? That remains to be seen. The scenario I see playing out is that vendors will supply corporations with software that enables rich network functionality. Some engineers will use that software interface to provision the network, and will be satisfied as long as the network functions as intended. Other engineers will use that vendor-supplied software, but will also become proficient in a language that allows them to create the unique network applications required by their business. As these network engineers acquire programming skills, they will also maintain their ability to effectively monitor and maintain the network infrastructure.

What are the key things I should be thinking about when evaluating SDN technology?

The biggest thing to understand is that not all SDN solutions are solving the same problem. In addition, different SDN technologies have different expectations of the end user. While some solutions aim to abstract away network and operational complexity behind a polished, turnkey interface, others are more of a toolkit that lets you create your own applications. Therefore, understanding the problem you're trying to solve at a deep technical level is quite important. The better you communicate your needs to your vendor, the better it will be able to articulate how its solution meets those needs.

Does SDN introduce new security risks to my environment?

While it's hard to say categorically that SDN introduces "new" risks, the fact is that exposing network devices via programmatic interfaces is a risk to be managed. Then again, SNMP is roughly analogous to a programmatic API, and it has a well-defined risk mitigation strategy; in that sense, SDN presents nothing unusually risky. Yes, SDN presents a risk, but it is one that IT as a discipline can mitigate via access controls, trust relationships, encryption, deep packet inspection and so on.

That said, SDN advocates point out that a security benefit of centralized control is the reduction in human touch required to provision the network. On the assumption that human error is the greatest security risk to an IT infrastructure, SDN may actually prove to be a security asset.

