I drove up to the Univ. of Utah this afternoon to hear this year's Organick Lecture by Vint Cerf, one of the inventors of the Internet (I believe he and Al Gore were lab partners). Vint is currently Senior VP for Technology Strategy at MCI, Chairman of ICANN, and a recent winner of the ACM Turing Award.
Where is the science in CS? Here are some areas, along with the strength of their underlying theory:
- Automata theory (strong)
- Compiler and language theory (strong)
- Operating system design (weak) - we don't know how to make OSes secure, and they consume too many resources just trying to manage resources.
- Data structures (strong)
- Queuing theory (networks of queues) - strong theory, but too much of the network functionality has to be abstracted away before you can apply the theory.
- Animation and rendering (strong) - Vint has recently come to have a respect for the theory, physics, and mathematics hiding behind the artistry.
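As a concrete illustration of the kind of closed-form result queuing theory delivers (and, implicitly, why real networks resist it: these formulas assume Poisson arrivals and a single queue, exactly the abstraction Vint complains about), here's the textbook M/M/1 model in Python. The router numbers are mine, purely for illustration:

```python
# Illustrative sketch, not from the talk: the classical M/M/1 queue
# (Poisson arrivals, exponential service times, one server).

def mm1_stats(arrival_rate: float, service_rate: float):
    """Mean queue metrics for an M/M/1 queue.

    Requires arrival_rate < service_rate, or the queue grows without bound.
    """
    rho = arrival_rate / service_rate       # utilization
    l = rho / (1 - rho)                     # mean number in system
    w = 1 / (service_rate - arrival_rate)   # mean time in system; Little's law: L = lambda * W
    return rho, l, w

# A hypothetical router handling 800 packets/s with capacity 1000 packets/s:
rho, l, w = mm1_stats(800.0, 1000.0)
print(f"utilization={rho:.2f}, mean in system={l:.2f}, mean delay={w*1000:.1f} ms")
```

Note how sharply delay grows as utilization approaches 1 — that much, at least, the theory predicts well.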
Networking is one area that he picks on as not having significant underlying theory. There are important principles, like layering, but much of the theory is shallow. Protocol design, as an example, doesn't have much theory. There has been some work in formalizing protocols and their analysis, but it's way too complex. Other examples of places where we need deep analytical elements are distributed algorithms and cooperating processes.
We know almost nothing about making programming more efficient and systems more secure and scalable. He characterizes our progress in programming efficiency as a "joke" compared to hardware.
Security (and here he's really mostly talking about identity) works well in hierarchical organizations, but not elsewhere. The cost of authenticating individual users is one of the key factors. Hierarchical organizations can more efficiently issue IDs and perform authentications.
He mentions virtual machines as an intriguing notion because theoretically they can create safe execution environments for various applications. JVMs do this, as an example. One of the reasons that people went to single application servers (for example, a DNS server, a mail server, etc.) in the 90's was to get safe execution environments and process independence. The falling cost of hardware made this possible. VMs allow the cost of creating a machine to fall more dramatically still.
Here are some potential trouble spots:
- Penetrable operating systems.
- Insecure networks
- Buggy servers
- Broken models of perimeter security
- Worms, viruses, Trojan horses, keyboard and web page monitors
- Bluetooth security in mobiles
- SPAM, SPIM, and SPIT
- Phishing and Pharming
- IDN ambiguities and DNS hijacking
- Intellectual property problems
- Routing attacks with BGP routing
- Distributed denial of service
- Millions of zombies
- Insecure servers, laptops, desktops, mobiles, etc.
Worms have the potential to create resilient processes that run across multiple machines for business continuity. Vint notes that the first instance of a worm was at Xerox PARC for precisely this purpose. Business processes could be broken up and run as worm-like agents on multiple machines.
Speaking of identity, Vint wishes that the original design of the Internet had required each end point on the network to be able to authenticate itself to every other end point. He notes that public key cryptography was still four years in the future at that point, and symmetric key encryption was too expensive.
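To make the symmetric-key option concrete: here's a minimal challenge-response sketch (my illustration, not anything from the talk) in which an endpoint proves knowledge of a pre-shared key. The catch, and exactly why this was "too expensive" network-wide, is that the key must be securely distributed to both endpoints in advance — for N endpoints, that's on the order of N² pairwise keys:

```python
# Hypothetical sketch of symmetric endpoint authentication via
# HMAC challenge-response. Key names and numbers are mine.
import hashlib
import hmac
import os

SHARED_KEY = os.urandom(32)   # must be pre-distributed to BOTH endpoints

def respond(challenge: bytes, key: bytes) -> bytes:
    # Prove knowledge of the key without revealing it.
    return hmac.new(key, challenge, hashlib.sha256).digest()

def verify(challenge: bytes, response: bytes, key: bytes) -> bool:
    expected = hmac.new(key, challenge, hashlib.sha256).digest()
    return hmac.compare_digest(expected, response)

challenge = os.urandom(16)                  # verifier picks a fresh nonce
resp = respond(challenge, SHARED_KEY)       # prover answers
print(verify(challenge, resp, SHARED_KEY))  # True
```

Public key cryptography removes the pairwise-key-distribution burden, which is why Vint laments its late arrival.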
He lists a few more challenges that remain:
- Identity theft
- Personal privacy
- Search algorithms
- Semantic networks (related to the previous point)
- Database sharing (genome and space data are examples)
- IPv6 deployment
- Layers of details such as the network management systems, DNS refactoring, provisioning
- Allocation policy development
- Networked scientific instruments (tele-operation)
Some policy challenges in the Internet environment:
- WSIS/WGIG - Internet governance
- ICANN vs. ITU
- International eCommerce - imagine an Amazon customer in Hong Kong, ordering from Amazon in the US. The book is sourced in South Africa, and shipped to Paris. Certain questions arise:
  - dispute resolution
  - online contracts (authenticity, legal framework)
  - taxation policies
He calls out Creative Commons and iTunes as new, innovative models for solving content management challenges. He notes that the regulatory system we have today is broken because it's based on the modality of the communication, and the Internet is subsuming them all.
Interplanetary Internet: InterPlaNet (IPN). The flow control mechanism of TCP doesn't work well when the latency goes to 40 minutes. What's more, planets are in motion, so the distances between them vary with time, and thus latency varies with time. So do error rates. Some of these problems resemble those of mobile networks.
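A back-of-the-envelope calculation (my numbers, not the talk's) shows just how badly TCP's sliding window fares here. TCP can only have one window's worth of unacknowledged data in flight, so throughput is capped at window size divided by round-trip time:

```python
# Why TCP flow control collapses at interplanetary distances.
# Figures are illustrative assumptions, not from the lecture.
ONE_WAY_S = 40 * 60        # ~40-minute one-way latency, per the talk
RTT_S = 2 * ONE_WAY_S      # round trip: 80 minutes
WINDOW_BYTES = 64 * 1024   # classic TCP max window without window scaling

# At most one window of unacknowledged data per round trip:
throughput_bps = WINDOW_BYTES * 8 / RTT_S
print(f"max throughput ~ {throughput_bps:.0f} bits/s")  # ~109 bits/s
```

Roughly a hundred bits per second, no matter how fat the link — the round trip, not the bandwidth, is the bottleneck.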
IPN assumes that you can use TCP/IP on the surface of each planet. Each planet has its own IP space, demarcated by a separate identifier. DNS doesn't work on an interplanetary scale, since by the time you get a resolution for an Earth DNS address from Mars, the IP number may have changed (think mobile or DHCP). The protocol looks more like a store-and-forward email system than an end-to-end protocol like TCP. The result is an interplanetary network protocol.
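The store-and-forward idea can be sketched in a few lines. This is my illustration, not IPN's actual design: each hop takes custody of a "bundle" and holds it until a contact window with the next hop opens, instead of relying on end-to-end acknowledgments across the whole path:

```python
# Minimal store-and-forward relay sketch (illustrative only; node and
# field names are my assumptions, not the real IPN protocol).
from collections import deque

class Node:
    def __init__(self, name: str):
        self.name = name
        self.stored = deque()  # bundles held in custody awaiting a contact window

    def receive(self, bundle: dict) -> None:
        self.stored.append(bundle)  # take custody of the bundle

    def forward_all(self, next_hop: "Node") -> None:
        # Called only when a contact window with next_hop opens.
        while self.stored:
            next_hop.receive(self.stored.popleft())

earth, relay, mars = Node("earth"), Node("relay"), Node("mars")
earth.receive({"dst": "mars", "data": "hello"})
earth.forward_all(relay)  # first contact window opens
relay.forward_all(mars)   # a later window; the bundle waited in storage meanwhile
print(len(mars.stored))   # 1
```

The key design choice is that custody transfers hop by hop, so no single round trip ever has to span the whole Earth-to-Mars path.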
At the end, someone asked about the proposal to have the UN take over ICANN duties. It was the only point in the talk where I'd say that Vint got animated and even a little worked up. He clearly feels strongly that "ICANN ain't broke; don't fix it."
All in all, a very enjoyable talk. I'm glad that the U has the endowment and makes this happen each year. I took some additional photos, which you can see here.