SDN and TRILL: How to Understand the Two Buzzwords – An Interview With Dr. Radia Perlman, "The Mother of the Internet"

Many new buzzwords have arisen in the network communication field as network protocols have evolved. TRansparent Interconnection of Lots of Links (TRILL) and Software Defined Networking (SDN) are two of the latest. What are Dr. Radia Perlman's thoughts on them?

ICT Insights: Not long ago, you traced a vision for Ethernet as supporting a large cloud with a flat address space, using the TRILL protocol to route Ethernet traffic within the cloud and IP to route between clouds. What would this vision mean for managers of large data center networks? Would they see big changes or just a slow, gradual migration?

Dr. Perlman: TRILL is designed so that you don't have to throw everything away and start over: you can take an existing Ethernet network and replace any subset of the spanning tree bridges with TRILL switches. All that happens is that bandwidth gets better. So in that sense, it's evolutionary. Ethernet has two advantages over IP. One, Ethernet has a flat address space, so you can move within the Ethernet cloud and keep your address; and two, it can be self-configuring. Moving within the cloud is especially important with virtualization. But spanning tree-based Ethernet does not make optimal use of bandwidth. TRILL gives the best of both worlds: the flat address space and self-configurability of Ethernet, plus the routing advantages of a true Layer 3 protocol like IP, so you get shortest paths, path splitting, and the ability to do traffic engineering.

ICT Insights: So the node moves, but the conversation, so to speak, still goes on?

Dr. Perlman: Right. And just for a bit of history, the standards body ISO created a version of IP in the early 1980s with a 20-byte address space (versus IP's 4-byte address space). ISO's protocol was known as ConnectionLess Network Protocol (CLNP). DECnet adopted CLNP as its packet format. The top 14 bytes of the 20-byte address were the prefix for an entire cloud (in contrast to IP, where each link has a different block of addresses). The bottom 6 bytes of CLNP indicated a specific node in the cloud; typically, the Ethernet (MAC) address of the node was used there. This is like putting both the IP address and the Ethernet address into the Layer 3 header. With CLNP, the top 14 bytes would route to the correct cloud, which consisted of lots of links, and then routing within the cloud would deliver to the specific 6-byte address. With IP, once you get to the final link, you have to run a protocol (known as ARP) to find the Ethernet address of the target node, and then put on an Ethernet header to get the packet to the final node. With CLNP, there was no need for ARP, or an extra header. And since CLNP had the same 14-byte prefix for the whole cloud, nodes could move within the cloud without changing their address.
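To make that split concrete, here is a minimal Python sketch of the 14-byte-prefix / 6-byte-node layout described above. The helper name and the example bytes are invented for illustration, and real CLNP (NSAP) addresses carry more structure than this.

```python
def split_clnp_address(addr: bytes) -> tuple[bytes, bytes]:
    """Split a 20-byte CLNP-style address into the cloud prefix and
    the node part (typically the node's 6-byte Ethernet MAC address)."""
    if len(addr) != 20:
        raise ValueError("expected a 20-byte address")
    return addr[:14], addr[14:]

cloud = b"\x47" + b"\x00" * 13        # 14-byte prefix shared by the whole cloud
mac = bytes.fromhex("0002b3aabbcc")   # 6-byte MAC identifying the node

prefix, node_id = split_clnp_address(cloud + mac)

# All nodes in the same cloud share the prefix, so a node can move
# anywhere within the cloud without changing its address.
assert prefix == cloud and node_id == mac
```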

It was unfortunate that people refused, in 1992, to replace IP with the CLNP packet format, deciding instead to invent what is now IPv6. If we'd adopted CLNP in 1992 it would have been very easy to convert. The Internet was just a small researchy thing, as opposed to now, when so much of society depends on it. If we'd adopted CLNP, the Internet would have been much simpler, since CLNP would have done the job of both IP and Ethernet. Even if we do manage to convert the Internet to IPv6, CLNP would have been a better solution because IPv6 also acts like IP, where each link must have its own prefix. People will still want Ethernet, in order to provide a cloud with a flat address space.

It is interesting that CLNP would be technically superior to the combination of IP and Ethernet, especially with Ethernet based on spanning trees. If Ethernet were based on TRILL instead of spanning tree, the one advantage of IP plus TRILL over CLNP is that, with TRILL, once the packet arrives at the cloud, it is encapsulated in a TRILL header that specifies the last switch in the cloud.

What's nice about this extra header is that inside the TRILL cloud the switches don't need to know where all the end nodes are. They only need a forwarding table for forwarding towards the last switch, so their forwarding tables are a lot smaller.
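A hypothetical sketch of what that buys the core switches: only the edge switch maps end-node MAC addresses to an egress switch nickname, and the core forwards on the small nickname table. The tables, nicknames, and port names here are all invented for illustration.

```python
from dataclasses import dataclass

@dataclass
class TrillFrame:
    ingress_nickname: int   # 2-byte nickname of the first (ingress) switch
    egress_nickname: int    # 2-byte nickname of the last (egress) switch
    inner_frame: bytes      # the original Ethernet frame, carried unchanged

# Only edge switches learn where end nodes are (MAC -> egress nickname).
edge_mac_table = {b"\x00\x02\xb3\xaa\xbb\xcc": 57}

# Core switches just map egress nicknames to next hops: a far smaller table.
core_next_hop = {57: "port3"}

def ingress(inner_frame: bytes, dst_mac: bytes, my_nickname: int) -> TrillFrame:
    # Wrap the Ethernet frame in a TRILL header naming the last switch.
    return TrillFrame(my_nickname, edge_mac_table[dst_mac], inner_frame)

def core_forward(frame: TrillFrame) -> str:
    # Core forwarding looks only at the 2-byte egress nickname.
    return core_next_hop[frame.egress_nickname]

frame = ingress(b"payload", b"\x00\x02\xb3\xaa\xbb\xcc", my_nickname=12)
assert core_forward(frame) == "port3"
```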

ICT Insights: It gives you advantages in both speed and flexibility then?

Dr. Perlman: Yes. Speed, because the TRILL address is a two-byte address. And also the price of the switch, because your forwarding table can be smaller and can just be a direct lookup: for destination 57, it can be the 57th entry. When forwarding based on an Ethernet address, those six bytes are obviously too large for a direct table, so you have to do a first step of hashing. It can be done, but it's cheaper to just do a direct lookup.
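A rough sketch of that cost difference, with invented numbers: a 2-byte nickname can index an array directly, while a 6-byte MAC address must be hashed into a table first.

```python
# A 2-byte nickname has at most 65536 values, so the forwarding table
# can be a plain array indexed directly by the nickname.
nickname_table = ["unset"] * 65536
nickname_table[57] = "port3"              # destination 57 is the 57th entry

def forward_by_nickname(nickname: int) -> str:
    return nickname_table[nickname]       # one array read, no hashing

# A 6-byte MAC has 2**48 possible values: far too many for an array,
# so the address must first be hashed into a smaller table.
mac_table = {b"\x00\x02\xb3\xaa\xbb\xcc": "port7"}   # stands in for a hardware hash table

def forward_by_mac(mac: bytes) -> str:
    return mac_table[mac]                 # hash first, then look up

assert forward_by_nickname(57) == "port3"
assert forward_by_mac(b"\x00\x02\xb3\xaa\xbb\xcc") == "port7"
```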

ICT Insights: What do you think of SDN?

Dr. Perlman: That's a difficult question. It's a buzzword that means different things to different people, and actually, most people think it must be important because they keep hearing about it. To me, it's rather a strange buzzword; I don't know what those three words are doing together in the same phrase. I've found three or four different concepts that people use the term for, and they are very different things, and none of them is new.

ICT Insights: What is SDN to you?

Dr. Perlman: One vision for which people use the term SDN is using a general-purpose computer as a router instead of an appliance. This is actually the way routers used to be built. That was fine until links got too fast, and then you couldn't do it that way anymore.

So everything was carefully engineered to end up with a box that could do one job, but nothing else. Now computation has become faster and massively parallel, and for a switch or a router – I'm going to use the term switch to mean anything that's forwarding packets – the job of switching is inherently massively parallelizable.

So it is feasible again to take a general-purpose computer and do switching on it. This is likely a lot less expensive than a specialized box.

Further, it might be nice to standardize some sort of API so people can build things that run on a switch: receive packets on various ports and send them up through the API to some other application. Then organizations other than the one implementing the basic switch can write applications to do things like spam filtering or intrusion detection, or something else that might want to look at packets while they're transiting the network.
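As a sketch of what such an API might look like (everything here is hypothetical rather than an existing standard), the switch could expose a hook that third-party applications use to inspect packets in transit:

```python
from typing import Callable

# Hypothetical switch API: every registered handler sees each packet.
# A handler returns True to let the packet continue and False to drop
# it (as a spam filter or intrusion detector might).
PacketHandler = Callable[[bytes, int], bool]   # (packet, in_port) -> keep?

_handlers: list[PacketHandler] = []

def register_handler(handler: PacketHandler) -> None:
    _handlers.append(handler)

def on_packet(packet: bytes, in_port: int) -> bool:
    return all(handler(packet, in_port) for handler in _handlers)

# A third party could then plug in, say, a toy intrusion detector:
def block_port_zero(packet: bytes, in_port: int) -> bool:
    return in_port != 0

register_handler(block_port_zero)
assert on_packet(b"hello", in_port=1) is True
assert on_packet(b"hello", in_port=0) is False
```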

ICT Insights: Is there another vision of SDN you would like to talk about?

Dr. Perlman: The other vision is sort of interesting. I hadn't paid much attention to network management. The vision of network management was to standardize a protocol for speaking between a management station and the things being managed, for instance Simple Network Management Protocol (SNMP), and, for each protocol, to specify all the readable or writable parameters in a Management Information Base (MIB), along with events that send messages to the management station. A human at the management station would type high-level "big" commands, which were then converted into SNMP commands to set or read the relevant parameters. Maybe 30 years ago, I toured a major network, and this is how it worked. One machine in a big room with a huge display showed a picture of the whole network, with links flashing if congested or bright red if broken. A human would type global commands for the kinds of things he might want to do, like creating a path of a certain amount of bandwidth between this node and that node.

And this machine would go and translate it into SNMP or whatever language it used to talk to the switches, and it would go and talk to each one and diddle with the parameters.
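In rough pseudocode terms, one high-level command fanned out into many per-switch parameter writes; the set_parameter call below is an invented stand-in for SNMP or whatever protocol the station spoke.

```python
def set_parameter(switch: str, name: str, value: object) -> None:
    # Stand-in for an SNMP set (or whatever the station spoke).
    print(f"SET {switch}: {name} = {value}")

def create_path(switches: list[str], bandwidth_mbps: int) -> None:
    """One 'big' command: reserve bandwidth along a path of switches."""
    for here, nxt in zip(switches, switches[1:]):
        set_parameter(here, f"reserve_bandwidth_to[{nxt}]", bandwidth_mbps)

# The operator types one global command; the station diddles each switch.
create_path(["switch_A", "switch_B", "switch_C"], bandwidth_mbps=100)
```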

But apparently this vision has decayed. Vendors have created proprietary features and have not registered names in the standard MIB for managing them. So what is often done today is that the network manager remotely logs onto each of the switches and types commands using the vendor's command line interface. If that's true, that's really horrible. That's a terrible way for networks to work.

So, another use of the SDN buzzword is, basically, recreating that vision – inventing a protocol for talking from a management station to each switch, and setting things at each switch.

That's definitely a good thing to do, though I'm not sure why it couldn't be accomplished by just using SNMP, and I'm also not sure that all the things that should be managed are in the MIB. One complaint about SNMP is that it doesn't have the ability to group commands to be "atomic," meaning that either all of those commands are done, or none of them. But again, it seems like it would have been easy to modify SNMP to do this. The IETF has standardized a new protocol called NETCONF, which I think is intended to replace SNMP. People referring to SDN in terms of this vision seem to be inventing yet another protocol for talking to the machines. I don't really care what the protocol is; it's really just setting things that are settable, reading things that are readable, and allowing switches to unilaterally send an alert message to the management machine when certain events occur (such as a link going down).
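A minimal sketch of that "atomic" grouping, assuming a toy in-memory configuration: writes are buffered and either all applied or all rejected, roughly the candidate-then-commit idea that NETCONF standardizes. The class and its methods are invented for illustration.

```python
class AtomicConfigSession:
    """Buffer parameter writes and apply all of them, or none."""

    def __init__(self, switch_config: dict):
        self.switch_config = switch_config
        self.pending: dict = {}

    def set(self, name: str, value: object) -> None:
        self.pending[name] = value                 # nothing applied yet

    def commit(self) -> None:
        # Validate the whole group first: one bad command aborts everything.
        for name in self.pending:
            if not name.isidentifier():
                raise ValueError(f"rejecting whole group: bad name {name!r}")
        self.switch_config.update(self.pending)    # apply all at once
        self.pending.clear()

config: dict = {}
session = AtomicConfigSession(config)
session.set("mtu", 9000)
session.set("link_cost", 5)
session.commit()                                   # both applied, or neither
assert config == {"mtu": 9000, "link_cost": 5}
```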

ICT Insights: What kind of technologies would you encourage people to work on? What type of solutions do we still need?

Dr. Perlman: Well, there are certainly some things that desperately need to be fixed. One is user authentication. It's astonishing that people are still using passwords, and it's even more astonishing how the industry has conspired to make them as unusable and insecure as possible, for instance each site having its own rules for user name length and for how many and which kinds of characters can be in a password.

And then they have these absurd rules where, if you forget your password, they penalize you. They allow you to reset it, but then they don't let you set it to the same password you had before, or to any of the previous ones. What threat model are they thinking of? Just because you momentarily couldn't remember which of five passwords it was, you can't use it again? So yes, we have to make user authentication more convenient and more secure.

Another thing is all of the ways that malicious software can get into machines. It used to be that you were simply warned not to boot from an infected floppy disk. Now it's almost as if the only way to keep your machine from being infected is not to turn it on. Once you connect to the Internet, visit any website, or read any email, it can get infected. And it's not fair to tell users, "Well, we've given you a nice machine. It's just that if you actually use it, it'll get infected." So we have to figure out how to keep machines from getting infected. The industry performs heroic efforts with virus scanners and automatic patch distribution, but it's also important to work on more fundamental prevention techniques.

We also have to figure out how to isolate basic functions well enough that they still work even if your machine is infected. Another thing is that Denial of Service (DoS) attacks are amazingly scary. With a large enough DoS attack, I just don't see how you can keep your service available to the world.