Tutorial 1: Network Security Protocols: Today and Tomorrow

Radia Perlman and Charlie Kaufman

Tutorial Summary:
This tutorial covers the concepts behind network security protocols, describes the current standards and their vulnerabilities, and suggests areas that need research. It approaches the problems first from a generic conceptual viewpoint, covering the problems and the types of technical approaches to solutions. For example, how would encrypted email work with distribution lists? What are the performance and security differences between basing authentication on public key technology versus secret key technology? What kinds of mistakes do people generally make when designing protocols? Armed with a conceptual knowledge of the toolkit of tricks that enable authentication, encryption, key distribution, etc., we then describe the current standards, including Kerberos, S/MIME, SSL, IPsec, PKI, and web security.
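One item in that toolkit, Diffie-Hellman key agreement (covered in the cryptography section of the outline), can be illustrated with a toy sketch. This example is not from the tutorial materials; the parameters are deliberately tiny and insecure, chosen only to make the arithmetic visible. Real deployments use groups of 2048 bits or more and must authenticate the exchange to prevent man-in-the-middle attacks.

```python
# Toy Diffie-Hellman key agreement (illustrative only; parameters are
# far too small for any real security).
p, g = 23, 5          # public modulus and generator, agreed in advance

a = 6                 # Alice's private value, never transmitted
b = 15                # Bob's private value, never transmitted

A = pow(g, a, p)      # Alice sends g^a mod p over the open network
B = pow(g, b, p)      # Bob sends g^b mod p over the open network

alice_secret = pow(B, a, p)   # Alice computes (g^b)^a mod p
bob_secret   = pow(A, b, p)   # Bob computes (g^a)^b mod p

assert alice_secret == bob_secret   # both arrive at the same shared key
print(alice_secret)
```

An eavesdropper sees p, g, A, and B but, for large parameters, cannot feasibly recover a or b, which is why both parties end up with a secret the attacker does not share.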

Tutorial Outline (Table of Contents):
  1. Introduction
    1. What are the types of problems to be solved?
    2. What can attackers do?

  2. Cryptography
    1. Secret keys, public keys, message digests
    2. How they are generally used together for encryption, authentication, and integrity checks
    3. Intuition behind RSA, Diffie-Hellman

  3. Key distribution
    1. Secret key schemes (e.g., Kerberos) vs public key schemes (PKI)
    2. Building a hierarchy
      1. Who are the trust anchors?
      2. What chains should be trusted? How are they found?
    3. Getting the private key to the human

  4. Cryptographic handshakes
    1. Pitfalls (reflection, replay, etc.)
    2. Extra features (e.g., identity hiding, perfect forward secrecy)

  5. Distributed authorization and PKI
    1. Attributes, groups, cross-organizational issues

  6. Real-time protocols (SSL, IPsec (including IKEv1 and IKEv2))
  7. Email security
  8. Web security (URLs, cookies, pitfalls)
  9. Thoughts for the future

Expected Audience and Prerequisites:
This tutorial is for anyone who wants to understand cryptography, network security protocols, and the system issues that make creating a truly secure system challenging, even if the underlying cryptography and protocols are secure. There are no prerequisites other than intellectual curiosity and a good night's sleep in the recent past.

Radia Perlman is a Distinguished Engineer at Sun Microsystems. She is also currently teaching a course on network security protocols at Harvard University. She is known for her contributions to bridging (the spanning tree algorithm) and routing (link state routing) as well as security (sabotage-proof networks). She is the author of "Interconnections: Bridges, Routers, Switches, and Internetworking Protocols", and co-author of "Network Security: Private Communication in a Public World". She is one of the 25 people whose work has most influenced the networking industry, according to Data Communications Magazine. She has an S.B. and S.M. in mathematics and a Ph.D. in computer science from MIT, about 50 issued patents, and an honorary doctorate from KTH, the Royal Institute of Technology in Sweden.

Charlie Kaufman, security architect for Lotus Notes & Domino, is a Distinguished Engineer at IBM. In the IETF, he served as chair of the Web Transaction Security working group, and is currently a member of the IAB (Internet Architecture Board) and editor of the IKEv2 document in the IPsec working group. He served on the National Academy of Sciences expert panel on computer security that produced the book "Trust in Cyberspace". Previously, he was network security architect for Digital Equipment Corporation. He is co-author of "Network Security: Private Communication in a Public World".

Tutorial 2: 10 Years of Self-Similar Traffic Research: A Circuitous Route Towards a Theoretical Foundation for the Internet

John Doyle and Walter Willinger

Tutorial Summary:
The original paper on the self-similar nature of network traffic appeared 10 years ago at SIGCOMM 1993. Since then, research on self-similar traffic has generally thrived, but has also seen its fair share of wrong turns, road blocks, dead ends, and specious claims. With regard to such recent claims (caused largely by orthodox physics views that unambiguously associate self-similarity with critical or scale-free phenomena), an early success story (which explains self-similarity in network traffic in terms of heavy-tailed phenomena exhibited by its constituent components) has become an illuminating test case for future research in this area. In particular, it has identified the Internet as an ideal proving ground for a scientific exploration of the broader issues of robustness in complex systems throughout technology and biology. Perhaps most importantly, it has led to the development of a nascent theoretical foundation for the Internet that potentially provides a sound framework for understanding both the successes and shortcomings of existing Internet technologies, identifies protocols and layering as crucial ingredients, guides the rational design of the future evolution of ubiquitous networking, lets us separate sound from specious claims and theories, and suggests what new science will be needed for developing a useful, general theory of complex engineered systems such as the Internet.
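The heavy-tailed explanation mentioned above rests on a qualitative distinction that a short sketch can make concrete. The following example is not from the tutorial; it simply contrasts the tail probability P(X > x) of a Pareto (power-law, heavy-tailed) distribution with that of an exponential (light-tailed) one, using illustrative parameters. Heavy-tailed ON periods or file sizes at individual sources are the "constituent components" whose aggregation produces self-similar traffic.

```python
import math

def pareto_tail(x, alpha=1.2, xm=1.0):
    """P(X > x) for a Pareto(alpha, xm) distribution: decays as a power law."""
    return (xm / x) ** alpha if x > xm else 1.0

def exp_tail(x, rate=1.0):
    """P(X > x) for an exponential distribution: decays exponentially fast."""
    return math.exp(-rate * x)

# Even far out in the tail, the Pareto probability remains non-negligible,
# while the exponential probability is astronomically small.
for x in (10, 100, 1000):
    print(f"x={x:5d}  Pareto tail={pareto_tail(x):.2e}  Exp tail={exp_tail(x):.2e}")
```

The practical consequence, as the tutorial's outline suggests, is that very large transfers ("elephants") occur often enough to dominate aggregate behavior over many time scales, which exponential (Poisson-style) models fail to capture.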

Tutorial Outline (Table of Contents):
  1. Overview
  2. 10 years of self-similar traffic research
    1. Self-similarity -- the discovery
    2. On explaining the phenomenon -- heavy tails
    3. Self-similarity -- network performance implications
    4. On wrong turns, road blocks, dead ends, and specious claims

  3. Learning from past successes and failures
    1. Network measurements and their analysis -- self-similar vs. dis-similar
    2. Network performance evaluation -- open-loop vs. closed-loop
    3. Network simulation -- exogenously given vs. endogenously determined
    4. Network topology -- scale-free vs. scale-rich

  4. What theory for the Internet?
    1. Robustness and the Internet -- design and evolution
    2. The Internet's complexity/robustness spiral
    3. Signatures of specious theories and claims for the Internet
    4. HOT -- highly optimized tolerance

  5. An emerging theoretical foundation for the Internet
    1. The mice-elephant coding of information
    2. "Horizontal" integration of TCP and AQM
    3. "Vertical" separation of the TCP/IP protocol stack
    4. Network level design: AS-level vs. router-level

  6. Outlook and discussion

Expected Audience and Prerequisites:
This tutorial is intended for those with some understanding of the Internet architecture and of existing Internet technologies who have not yet given up all hope for a practically relevant and theoretically sound treatment of complex, highly engineered systems such as the Internet. Some understanding of basic concepts from mathematics, control theory, and communication theory will be helpful but is not required.

John C. Doyle is Professor of Control and Dynamical Systems, Bioengineering, and Electrical Engineering at Caltech. He has a B.S. and M.S. in EE from MIT (1977) and a Ph.D. in mathematics from UC Berkeley (1984). His current research interests are in theoretical foundations for complex networks in engineering and biology, as well as multiscale physics and financial markets, focusing on the interplay between robustness, feedback, control, dynamical systems, computation, communications, and statistical physics. Prize papers include the IEEE Baker (also ranked in the top 10 "most important" papers world-wide in pure and applied mathematics from 1981-1993), the IEEE AC Transactions Axelby (twice), and the AACC Schuck. Individual awards include the IEEE Centennial Outstanding Young Engineer, the IEEE Hickernell, the American Automatic Control Council (AACC) Eckman, and the Bernard Friedman. He has held national and world records and championships in various sports.

Walter Willinger received the Diplom (Dipl. Math.) from ETH Zurich, Switzerland, and the M.S. and Ph.D. degrees from the School of ORIE, Cornell University, Ithaca, NY, and is currently a member of the Information and Software Systems Research Center at AT&T Labs-Research, Florham Park, NJ. Before that, he was a Member of Technical Staff at Bellcore (1986-1996). He has been a leader of the work on the self-similar ("fractal") nature of data network traffic and is co-recipient of the 1996 IEEE W.R.G. Baker Prize Award from the IEEE Board of Directors and the 1994 W.R. Bennett Prize Paper Award from the IEEE Communications Society for the paper titled "On the Self-Similar Nature of Ethernet Traffic."