








SIGCOMM 2004 Workshop Papers

Future Directions in Network Architecture

The Case for Separating Routing from Routers
Simplified Layering and Flexible Bandwidth with TWIN
Secure Routerless Routing
A Virtualized Link Layer with Support for Indirection
On Demand Label Switching for Spontaneous Networks
NUTSS: A SIP-based Approach to UDP and TCP Network Connectivity
Steps Towards a DoS-Resistant Internet Architecture
Loose Source Routing as a Mechanism for Traffic Policies
Invariants: A New Design Methodology for Network Architectures

Practice and Theory of Incentives and Game Theory in Networked Systems

Internet Congestion: A Laboratory Experiment
Experiences Applying Game Theory to System Design
Rethinking Incentives for Mobile Ad Hoc Networks
On the Benefits and Feasibility of Incentive Based Routing Infrastructure
A Case for Taxation in Peer-to-Peer Streaming Broadcast
Near Rationality and Competitive Equilibria in Networked Systems
Faithfulness in Internet Algorithms
Free-Riding and Whitewashing in Peer-to-Peer Systems

Network and System Support for Games

A Mobile Gaming Platform for the IMS
Lightweight QoS-Support for Networked Mobile Gaming
Feedback, Latency, Accuracy: Exploring Tradeoffs in Location-Aware Gaming
Using Session Initiation Protocol to Build Context-Aware VoIP Support for Multiplayer Networked Games
Implementation of a Service Platform for Online Games
OpenPING: A Reflective Middleware for the Construction of Adaptive Networked Game Applications
Zoned Federation of Game Servers: A Peer-to-Peer Approach to Scalable Multi-Player Online Games
Realizing Bullet Time Effect in Multiplayer Games with Local Perception Filters
Scalable Peer-to-Peer Networked Virtual Environment
Is Runtime Verification Applicable to Cheat Detection?
A Cheat Controlled Protocol for Centralized Online Multiplayer Games
The Effects of Loss and Latency on User Performance in Unreal Tournament 2003
Objective and Subjective Evaluation of the Influence of Small Amounts of Delay and Jitter on a Recent First Person Shooter Game
Thoughts on Emulating Jitter for User Experience Trials
Accuracy in Dead-Reckoning Based Distributed Multi-Player Games

Network Troubleshooting: Research, Theory and Operations Practice Meet Malfunctioning Reality

H.323 Beacon Tool: An H.323 Application Related End-to-End Performance Troubleshooting Tool
Experiences in Traceroute and Available Bandwidth Change Analysis
A Wavelet-Based Framework for Proactive Detection of Network Misconfigurations
Path Diagnosis with IPMP
Distributed DNS Troubleshooting
Is Your Caching Resolver Polluting the Internet?
Mohonk: Mobile Honeypots to Trace Unwanted Traffic Early
Identifying IPv6 Network Problems in the Dual-Stack World
Troubleshooting on Intra-Domain Routing Instability
Fixing BGP, One AS at a Time
Locating BGP Missing Routes Using Multiple Perspectives
IP Forwarding Anomalies and Improving their Detection Using Multiple Data Sources
A Measurement Framework for Pinpointing Routing Changes
The Case for Separating Routing from Routers

Nick Feamster (MIT), Hari Balakrishnan (MIT), Jennifer Rexford (AT&T Labs--Research), Aman Shaikh (AT&T Labs--Research), Kobus van der Merwe (AT&T Labs--Research)

Abstract:
Over the past
decade, the complexity of the Internet’s routing infrastructure has
increased dramatically. This complexity and the problems it causes stem
not just from various new demands made of the routing infrastructure,
but also from fundamental limitations in the ability of today’s
distributed infrastructure to scalably cope with new
requirements. The limitations in today’s routing system arise in
large part from the fully distributed path-selection computation that
the IP routers in an autonomous system (AS) must perform. To overcome
this weakness, interdomain routing should be separated from today’s IP
routers, which should simply forward packets (for the most part).
Instead, a separate Routing Control Platform (RCP) should select
routes on behalf of the IP routers in each AS and exchange reachability
information with other domains. Our position is that an approach like
RCP is a good way of coping with complexity while being responsive to
new demands and can lead to a routing system that is substantially
easier to manage than today's. We present a design overview of RCP based
on three architectural principles—path computation based on a
consistent view of network state, controlled interactions between
routing protocol layers, and expressive specification of routing
policies—and discuss the architectural strengths and weaknesses of our
proposal.

Simplified Layering and Flexible Bandwidth with TWIN

Indra Widjaja (Bell Laboratories, Lucent Technologies), Iraj Saniee (Bell Laboratories, Lucent Technologies)

Abstract:
This paper
describes a novel network architecture with simplified layering, called
Time-domain Wavelength Interleaved Networking (TWIN), that scales
end-to-end bandwidth granularity flexibly up to the wavelength
capacity. In TWIN, all packet and complex processing functions are
pushed to the network edge such that the network core only has to deal
with an optical forwarding layer. Furthermore, by avoiding fast optical
switching and optical buffering in the core through scheduling
fast-tunable lasers and buffering packets at the edge, TWIN effectively
makes the network act like a switch. We examine distributed network
scheduling for this architecture and show its performance via analysis
and simulation. We also explore other research issues that are unique
in TWIN.

Secure Routerless Routing

Vince Grolmusz (Eotvos University), Zoltan Kiraly (Eotvos University)

Abstract:
Suppose that
there are n Senders and r Receivers. Our goal is to design a
communication network such that long messages can be sent from Sender i
to Receiver π(i) such that no other receiver can retrieve the message
intended for Receiver π(i). The task can easily be completed using some
classical interconnection network and routers in the network.
Alternatively, if every Receiver is directly connected to all n
Senders, then the Senders can choose which channel to
use for communication, without using any router. Fast optical networks
are slowed down considerably if routers are inserted in their nodes.
Moreover, handling queues or buffers at the routers is extremely hard
in an all-optical setting. An obvious routerless solution, connecting
each possible Sender-Receiver pair with direct channels, seems to be
infeasible in most cases. The main result of the present work is a
mathematical model of two networks and corresponding network protocols
in which the Senders and the Receivers are connected with only r^{o(1)}
channels (in practice no more than 32 channels in both networks); there
are no switching or routing elements in the network; just linear
combinations of the signals are computed. Such designs would be usable
in fast all-optical networks. In the proof of the security of the
networks we do not use any unproven cryptographic or
complexity-theoretic assumptions: the security is proved
information-theoretically and does not depend on cryptographic
primitives.

A Virtualized Link Layer with Support for Indirection

Richard Gold (Uppsala University), Per Gunningberg (Uppsala University), Christian Tschudin (University of Basel)

Abstract:
The Internet
today hosts several extensions for indirection like Mobile IP,
NAT, proxies, route selection and various network overlays. At the same
time, user-controlled indirection mechanisms foreseen in the Internet
architecture (e.g., loose source routing) cannot be used to implement
these extensions. This is a consequence of the Internet’s indirection
semantics not being rich enough at some places and too rich at others.
In order to achieve a more uniform handling of indirection we propose
SelNet, a network architecture that is based on a virtualized link
layer with explicit indirection support. Indirection in this
context refers to user-controlled steering of packet flows through the
network. We discuss the architectural implications of such a scheme and
report on implementation progress.

On Demand Label Switching for Spontaneous Networks

Vincent Untz (IMAG), Martin Heusse (IMAG), Franck Rousseau (IMAG), Andrzej Duda (IMAG)

Abstract:
We consider
the problem of interconnecting hosts in spontaneous edge networks
composed of various types of wired or wireless physical and link layer
technologies. All or some hosts in a spontaneous network can be
organized as a multihop ad hoc network, connected or not to the global
Internet. We argue that this kind of network requires a more
sophisticated approach than standard IP forwarding: communication paths
should be managed on a per flow basis, multiple
paths need to be maintained to cope with link failures or changing
topologies, and the interconnection architecture should provide
information on destination reachability. We have designed and
implemented Lilith, a prototype of an interconnection node for
spontaneous edge networks. We handle network dynamics by
establishing MPLS (MultiProtocol Label Switching) label switched paths
(LSP) on demand with a reactive ad hoc routing protocol. Interconnection
at layer 2.5 makes all the hosts appear as a single IP subnet so
that configuration protocols can use the subnet broadcast for all forms
of discovery (addresses, names, services). Performance
measurements of the Lilith implementation on Linux show performance
comparable to standard IP forwarding and significant performance gains
when multiple paths are used.
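
On-demand label-switched forwarding of the kind Lilith establishes can be illustrated with a toy table that maps an incoming label to a next hop and an outgoing label; a miss would trigger reactive route discovery. This is a minimal sketch with assumed representations (integer labels, string node names), not Lilith's actual data structures:

```python
class LabelSwitch:
    """Toy label-switched forwarding table: an incoming label maps to
    (next_hop, outgoing_label). Entries are installed on demand, in the
    spirit of Lilith's reactive LSP setup; representations here
    (integer labels, string node names) are illustrative."""

    def __init__(self):
        self.table = {}  # in_label -> (next_hop, out_label)

    def install(self, in_label, next_hop, out_label):
        # Called when the reactive routing protocol finds a path.
        self.table[in_label] = (next_hop, out_label)

    def forward(self, in_label):
        # A miss returns None, which would trigger on-demand discovery.
        return self.table.get(in_label)
```

Label swapping at each hop is what lets multiple disjoint paths coexist for the same destination, which is how multipath failover is achieved.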

NUTSS: A SIP-based Approach to UDP and TCP Network Connectivity

Saikat Guha (Cornell University), Yutaka Takeda (Cornell University), Paul Francis (Cornell University)

Abstract:
The
communications establishment capability of the Session Initiation
Protocol is being expanded by the IETF to include establishing network
layer connectivity for UDP for a range of scenarios, including where
hosts are behind NAT boxes, and hosts are running IPv6. So far, this
work has been limited to UDP because of the assumed impossibility of
establishing TCP connections through NAT, and because of the difficulty
of predicting port assignments on certain common types of NATs. This
paper reports on preliminary success in establishing TCP connections
through NAT, and on port prediction. In so doing, we suggest that it
may be appropriate for SIP to take a broader architectural role in P2P
network layer connectivity for both IPv4 and IPv6.
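
Port prediction of the kind mentioned above commonly assumes a NAT that allocates external ports with a fixed stride; symmetric NATs with random allocation defeat it. A minimal sketch under that assumption:

```python
def predict_next_port(observed_ports):
    """Guess the external port a NAT will assign to the next connection,
    given ports observed on earlier probe connections. Assumes a
    sequential (fixed-stride) allocator, one common NAT behavior;
    returns None when no constant stride is visible (e.g. a NAT that
    assigns ports randomly)."""
    deltas = [b - a for a, b in zip(observed_ports, observed_ports[1:])]
    if deltas and len(set(deltas)) == 1:
        return observed_ports[-1] + deltas[0]
    return None
```

Once both endpoints can predict the port the peer's NAT will use, they can attempt simultaneous connections to the predicted ports, which is the basic idea behind hole punching for TCP as well as UDP.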

Steps Towards a DoS-Resistant Internet Architecture

Mark Handley (UCL), Adam Greenhalgh (UCL)

Abstract:
Defending
against DoS attacks is extremely difficult; effective solutions
probably require significant changes to the Internet
architecture. We present a series of architectural changes aimed
at preventing most flooding DoS attacks, and making the remaining
attacks easier to defend against. The goal is to stimulate a debate on
tradeoffs between the flexibility needed for future Internet evolution
and the need to be robust to attack.

Loose Source Routing as a Mechanism for Traffic Policies

Katerina Argyraki (Stanford University), David Cheriton (Stanford University)

Abstract:
Internet
packet delivery policies have been of concern since the earliest times
of the Internet, as witnessed by the presence of the Type of Service
(ToS) field in the IPv4 header. Efforts continue today with
Differentiated Services (DiffServ) and Multiprotocol Label Switching
(MPLS). We claim
that these approaches have not succeeded because they require, either
explicitly or subtly, a network-layer virtual circuit mechanism.
In this paper, we describe how adding a form of Loose
Source and Record Route (LSRR) capability into the next generation
Internet provides adequate support for transmit and receive policies,
including filtering, while avoiding the problems of virtual circuits
and the original problems with LSRR in IPv4.
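
The LSRR mechanism referred to above forwards a packet through a sender-chosen sequence of waypoints. A minimal sketch, using a plain dict rather than the IPv4 option wire format, of how a router would pop the next waypoint:

```python
def next_hop(packet):
    """Pop the next waypoint from a loose-source-route header and record
    it in the route taken so far. `packet` is a plain dict, an
    illustrative stand-in for the IPv4 LSRR option wire format."""
    if not packet["route"]:
        return None  # source route exhausted: packet is at its destination
    hop = packet["route"].pop(0)
    packet["visited"].append(hop)  # record-route side of LSRR
    return hop
```

Because the route lives in the packet rather than in per-flow state at routers, this supports sender- and receiver-imposed policies without the virtual-circuit setup the abstract argues against.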

Invariants: A New Design Methodology for Network Architectures

Bengt Ahlgren (Swedish Institute of Computer Science), Marcus Brunner (NEC Network Laboratories), Lars Eggert (NEC Network Laboratories), Robert Hancock (Siemens/Roke Manor Research), Stefan Schmid (NEC Network Laboratories)

Abstract:
The first age
of Internet architectural thinking concentrated on defining the correct
principles for designing a packet-switched network and its application
protocol suites. Although these same
principles remain valid today, they do not address the question of how
to reason about the evolution of the Internet or its interworking with
other networks of very different heritages. This
paper proposes a complementary methodology, motivated by the view that
evolution and interworking flexibility are determined not so much by
the principles applied during initial design, but by the choice of
fundamental components or “design invariants” in terms of which the
design is expressed. The paper discusses the characteristics of such
invariants, including examples from the Internet and other networks,
and considers what attributes of invariants best support architectural
flexibility.

Internet Congestion: A Laboratory Experiment

Daniel Friedman (UCSC), Bernardo Huberman (HP Labs)

Abstract:
Human players
and automated players (bots) interact in real time in a congested
network. A player’s revenue is proportional to the number of successful
“downloads” and his cost is proportional to his total waiting time.
Congestion arises because waiting time is an increasing random function
of the number of uncompleted download attempts by all players.
Surprisingly, some human players earn considerably higher profits than
bots. Bots are better able to exploit periods of excess capacity, but
they create endogenous trends in congestion that human players are
better able to exploit. Nash equilibrium does a good job of predicting
the impact of network capacity and noise amplitude. Overall efficiency
is quite low, however, and players overdissipate
potential rents, i.e., earn lower profits than in Nash equilibrium.
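
The revenue/cost structure described above can be written down directly; the linear forms and constants below are illustrative, not the experiment's actual parameters:

```python
import random

def waiting_time(active_attempts, base=1.0, noise=0.0):
    # Congestion: waiting time is an increasing random function of the
    # number of uncompleted download attempts by all players
    # (illustrative linear form with uniform noise).
    return base * active_attempts + random.uniform(-noise, noise)

def profit(completed_downloads, total_wait,
           revenue_per_download=1.0, cost_per_second=0.1):
    # Revenue proportional to successful downloads, cost proportional
    # to total waiting time; the rates are assumed, not the paper's.
    return revenue_per_download * completed_downloads - cost_per_second * total_wait
```

A player (human or bot) maximizes this profit by timing download attempts to periods of low congestion, which is exactly the tradeoff the experiment studies.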

Experiences Applying Game Theory to System Design

Ratul Mahajan (U. Wash.), Maya Rodrig (U. Wash.), David Wetherall (U. Wash.), John Zahorjan (U. Wash.)

Abstract:
We applied
techniques from game theory to help formulate and analyze solutions to
two systems problems: discouraging selfishness in multi-hop wireless
networks and enabling cooperation among ISPs in the Internet. It proved
difficult to do so. This paper reports on our experiences and explains
the issues that we encountered. It describes the ways in which the
straightforward use of results from traditional game theory did not fit
well with the requirements of our problems. It also identifies an
important characteristic of the solutions we did eventually adopt that
distinguishes them from those available using game theoretic
approaches. We hope that this discussion will help to highlight
formulations of game theory which are well-suited for problems
involving computer systems.

Rethinking Incentives for Mobile Ad Hoc Networks

Elgan Huang (U. Cambridge), Jon Crowcroft (U. Cambridge), Ian Wassell (U. Cambridge)

Abstract:
Without
sufficient nodes cooperating to provide relaying functions, a mobile ad
hoc network cannot function properly. Consequently, various
proposals have been made which provide incentives for individual users
of an ad hoc mobile network to cooperate with each other. In this paper
we examine this problem and analyse the drawbacks of currently proposed
incentive systems. We then argue that there may not be a need for
incentive systems at all, especially in the early stages of adoption,
where excessive complexity can only hurt the deployment of ad hoc
networks. We look at the needs of different customer segments at each
stage of the technological adoption cycle and propose that incentive
systems should not be used until ad hoc networks enter mainstream
markets. Even then, incentive systems should be tailored to the needs
of each individual application rather than adopting a generalised
approach that may be flawed or too technically demanding to be
implemented in reality.

On the Benefits and Feasibility of Incentive Based Routing Infrastructure

Mike Afergan (MIT), John Wroclawski (MIT)

Abstract:
Routing on
the Internet today is as much about money as it is about traffic. The
business relationships of an ISP largely dictate its routing policy and
drive the work of its engineers. In today's routing mechanism, this
leads to a number of well-known pathologies. This structure is further
challenged by the emergence of user-directed routing. This paper
explores these challenges and argues for the introduction of explicit
incentives (prices) into the routing fabric of the Internet. We argue
that doing so addresses limitations of the current system that are
significant today and will only be exacerbated by user-directed
routing. To support this claim, we describe the benefits and properties
of incentive-based routing frameworks and demonstrate how such
frameworks can be applied to a number of routing architectures,
including BGP.

A Case for Taxation in Peer-to-Peer Streaming Broadcast

Yang-hua Chu (CMU), John Chuang (UC Berkeley), Hui Zhang (CMU)

Abstract:
Most existing
research on peer-to-peer (p2p) has been on file sharing applications.
In this paper, we focus on p2p streaming applications. In particular,
we argue that the Bit-for-Bit model, widely adopted in p2p file
sharing, is not applicable in p2p streaming. In p2p streaming, the
bottleneck resource is the upstream bandwidth capacity. Our empirical
experience with p2p streaming indicates that a large percent of peers
on the Internet have limited upstream bandwidth capacity,
and the Bit-for-Bit model severely limits the amount of bandwidth these
resource-poor peers can receive. To address this issue, we propose a
taxation model. In the taxation model, resource-rich peers contribute
more bandwidth to the system and subsidize the resource-poor peers.
This redistribution of wealth improves social welfare. Such a model is
applicable in the streaming context because the publisher of the video
stream has the means to enforce taxation on peers and the will to
maximize their collective social welfare. We design a simple linear
taxation scheme and incorporate it in a distributed streaming protocol.
Our simulation results indicate that taxation can significantly improve
social welfare without incurring a significant overhead to the system.
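
The abstract does not spell out the linear scheme; one plausible form is an upload obligation that grows linearly with received bandwidth, f = t*r - G, where the tax rate t and the uniform subsidy (demogrant) G are assumed parameters of this sketch:

```python
def required_upload(received_kbps, tax_rate=1.5, demogrant_kbps=100.0):
    """Upload obligation under a linear tax f = t * r - G: peers that
    receive more contribute more than proportionally, and the surplus
    subsidizes resource-poor peers. Parameter names and values are
    assumptions for illustration, not taken from the paper."""
    return max(0.0, tax_rate * received_kbps - demogrant_kbps)
```

Contrast with the Bit-for-Bit rule, which would be `required_upload = received_kbps`: under the tax, a peer receiving little owes nothing, which is the redistribution the abstract argues for.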

Near Rationality and Competitive Equilibria in Networked Systems

Nicolas Christin (UC Berkeley), Jens Grossklags (UC Berkeley), John Chuang (UC Berkeley)

Abstract:
A growing
body of literature in networked systems research relies on game theory
and mechanism design to model and address the potential lack of
cooperation between self-interested users. Most game-theoretic models
applied to system research only describe competitive equilibria in
terms of pure Nash equilibria, that is, a situation where the strategy
of each user is deterministic, and is her best response to the
strategies of all the other users. However, the assumptions necessary
for a pure Nash equilibrium to hold may be too stringent for practical
systems. Using three case studies on network formation, computer
security, and TCP congestion control, we outline the limits of
game-theoretic models relying on Nash equilibria, and we argue that
considering competitive equilibria of a more general form helps in
assessing the accuracy of a game theoretic model, and can even help in
reconciling predictions from game-theoretic models with empirically
observed behavior.
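
One common relaxation of the pure Nash condition alluded to above is the epsilon-equilibrium, where no player can gain more than epsilon by deviating unilaterally. A small sketch of the check, with an assumed payoff interface:

```python
def is_epsilon_equilibrium(payoff, profile, strategies, epsilon):
    """True iff no player can gain more than `epsilon` by unilaterally
    deviating -- a near-rationality relaxation of a pure Nash
    equilibrium. `payoff(i, profile)` returns player i's payoff for a
    strategy profile (an assumed interface for this sketch)."""
    for i in range(len(profile)):
        current = payoff(i, profile)
        for s in strategies:
            deviated = list(profile)
            deviated[i] = s
            if payoff(i, tuple(deviated)) > current + epsilon:
                return False
    return True
```

With epsilon = 0 this reduces to the pure Nash check; positive epsilon admits profiles where deviating is possible but not worth the bother, which is one way to reconcile models with empirically observed behavior.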

Faithfulness in Internet Algorithms

Jeff Shneidman (Harvard), David Parkes (Harvard), Laurent Massoulie (Microsoft Research)

Abstract:
Proving or
disproving faithfulness (a property describing robustness to rational
manipulation in action as well as information revelation) is an
appealing goal when reasoning about distributed systems containing
rational participants. Recent work formalizes the notion of
faithfulness and its foundation properties, and presents a general
proof technique in the course of proving the ex post Nash faithfulness
of a theoretical routing problem [11].
In this paper, we use a less formal approach and take some first steps
in faithfulness analysis for existing algorithms running on the
Internet. To this end, we consider the expected faithfulness of
BitTorrent, a popular file download system, and show how manual
backtracing (similar to the ideas behind program slicing [22]) can
be used to find rational manipulation problems. Although this primitive
technique has serious drawbacks, it can be useful in disproving
faithfulness.
Building provably faithful Internet protocols and their corresponding
specifications can be quite difficult depending on the system knowledge
assumptions and problem complexity. We present some of the open
problems that are associated with these challenges.

Free-Riding and Whitewashing in Peer-to-Peer Systems

Michal Feldman (UC Berkeley), Christos Papadimitriou (UC Berkeley), Ion Stoica (UC Berkeley), John Chuang (UC Berkeley)

Abstract:
We develop a
model to study the phenomenon of free-riding in peer-to-peer (P2P)
systems. At the heart of our model is a user of a certain type, an
intrinsic and private parameter that reflects the user's willingness
to contribute resources to the system. A
user decides whether to contribute or free-ride based on how the
current contribution cost in the system compares to her type. When the
societal generosity (i.e., the average type) is low, intervention is
required in order to sustain the system. We present the effect of
mechanisms that exclude low-type users or, more realistically, penalize
free-riders with degraded service. We also consider dynamic scenarios
with arrivals and departures of users, and with whitewashers: users
who leave the system and rejoin with new identities to avoid
reputational penalties. We find that when penalty is imposed on all
newcomers in order to avoid whitewashing, system performance degrades
significantly only when the turnover rate among users is high.
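
The contribute-or-free-ride decision described above induces a simple best-response dynamic; the sketch below iterates it to a fixed point using an illustrative per-contributor cost of 1/x, where x is the fraction of contributors (the paper's exact cost function may differ):

```python
def equilibrium_contribution(types, rounds=100):
    """Iterate the best-response dynamic of a threshold model: a user
    contributes iff her type is at least the current contribution cost,
    taken here to be 1/x with x the fraction of contributors (an
    illustrative cost function). Returns the fixed-point contribution
    level; 0.0 means the system has collapsed into pure free-riding."""
    x = 1.0  # start from full contribution
    for _ in range(rounds):
        cost = 1.0 / x
        x = sum(1 for t in types if t >= cost) / len(types)
        if x == 0.0:
            break  # everyone free-rides; cost would be infinite
    return x
```

A low-generosity population drives x to zero, illustrating why intervention (exclusion or service degradation) is needed to sustain the system when average type is low.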

A Mobile Gaming Platform for the IMS

Amjad Akkawi, Sibylle Schaller, Oliver Wellnitz, Lars Wolf

Abstract:
Mobile
devices offer the opportunity to play games nearly everywhere.
Moreover, networked games allow individual players to interact with
other people and to participate in a larger gaming
world, which also provides for new business opportunities. Hence,
we currently see an increased interest from game developers, providers
and players in mobile games. In this paper we propose a novel
architecture and platform for games on the IMS. This allows games to
utilize the features and capabilities that are inherent to the IMS. At
the same time existing games can be flexibly adapted to this new type
of network and have the possibility to reserve network resources for
game data transmission, thus improving the experience of players.

Lightweight QoS-Support for Networked Mobile Gaming

Marcel Busse, Bernd Lamparter, Martin Mauve, Wolfgang Effelsberg

Abstract:
In this
paper, we present an approach to provide Quality of Service (QoS) for
networked mobile gaming. In order to examine the QoS requirements of
mobile games, we ported a simple real-time game called GAV (GPL Arcade
Volleyball) to a PDA and performed several traffic measurements over
both GPRS and UMTS networks. We show that due to high end-to-end
delay and delay jitter, real-time games are not supported by GPRS. While
UMTS improves both delay and jitter, it still does not match the
requirements of real-time games. The key reason for this problem is
that overprovisioning, as it is used to allow real-time games in the
Internet, is very
expensive in mobile networks. At the same time, QoS classes for mobile
networks are not tailored to real-time games. In order to reduce delay
and jitter for this application class, while still accounting for the
very bursty nature of real-time game flows, we propose to use a
combination of statistical multiplexing and QoS guarantees. The
general idea is to aggregate multiple game flows and perform
reservation for that aggregate. As a theoretical background, we use a
queuing system based model. Through simulation of a sample network with
the traffic data generated by GAV, we validate our assumptions and
demonstrate the performance and characteristics of
our approach.

Feedback, Latency, Accuracy: Exploring Tradeoffs in Location-Aware Gaming

Kieran Mansley, David Scott, Alastair Tse, Anil Madhavapeddy

Abstract:
We are
witnessing the development of large-scale location systems and a
corresponding rise in the popularity of location-aware applications,
especially games. Traditional computer games have pushed the limits of
CPU and graphics card performance for many years and experience
suggests that location-aware games will place similar demands upon
location systems. Unlike traditional gaming platforms however, the
mobile devices that interact with location systems
are heavily constrained, especially in the number of ways that feedback
can be provided.
In this paper we describe a location-aware, fast-paced, close-quarters
action game and use it to experiment with three key components of
future location-aware gaming platforms: (i) the location
system, (ii) the network to connect the mobile devices, and (iii) the
feedback and computational capabilities of the mobile devices
themselves.
We investigate the tradeoffs that are possible between these
components, the effect of the feedback channel and the suitability of
Bluetooth as a network for mobile game devices.

Using Session Initiation Protocol to Build Context-Aware VoIP Support for Multiplayer Networked Games

Aameek Singh, Arup Acharya

Abstract:
Multiplayer
networked games are the trend of the day. Receiving a major boost from
various commercial ventures like Microsoft Xbox [19] and Sony
Playstation [13], the networked gaming industry is set to grow
dramatically. These multiplayer games allow geographically dispersed
and possibly distant players to participate in a single game. In order
to provide interaction amongst players in such environments, text
messaging and recently, real-time voice interaction through VoIP is
used. However, such interactions are mostly out-of-band (not based on
game contexts), user-initiated and limited in operability, failing to
exploit the entire potential and functionality of VoIP.
In this paper, we present mechanisms and design of a prototype that
allows game-context based VoIP communication between players. Thus, in
addition to allowing players to talk to each other to coordinate
teammates and activities (through a static team-based audio conference)
as in some of the current systems, it supports communication among
players based on shared contexts like the same physical location or
room within the gaming environment. We use the
Session Initiation Protocol (SIP) [14] to realize VoIP and describe
mechanisms for building network gaming services using SIP. We also
propose a sophisticated gaming scenario, in which VoIP is used to relay
information about another player's distance and location with respect
to the recipient, e.g. players farther away sound farther away.

Implementation of a Service Platform for Online Games

Anees Shaikh, Sambit Sahu, Marcel Rosu, Michael Shea, Debanjan Saha

Abstract:
Large-scale
multiplayer online games require considerable investment in hosting
infrastructures. However, the difficulty of predicting the success of a
new title makes investing in dedicated server
and network resources very risky. A shared infrastructure based on
utility computing models to support multiple games offers an attractive
option for game providers whose core competency is not in managing
large server deployments.
In this paper we describe a prototype implementation of a shared, on
demand service platform for online games. The platform builds on open
standards and off-the-shelf software developed to support utility
computing offerings for Web-based business applications. We describe
our early experience with identifying appropriate performance metrics
for provisioning game servers and with implementing the platform
components that we consider essential for its acceptance.

OpenPING: A Reflective Middleware for the Construction of Adaptive Networked Game Applications

Paul Okanda, Gordon Blair

Abstract:
The emergence
of distributed Virtual Reality (VR) applications that run over the
Internet has presented networked game application designers with new
challenges. In an environment where the public Internet streams
multimedia data and is constantly under pressure to deliver over widely
heterogeneous user platforms, there has been a growing need for
distributed VR applications to be aware of and adapt to frequent
variations in their context of execution. In this paper, we argue that,
in contrast to research efforts targeted at improving scalability,
persistence and responsiveness, far fewer attempts have been made to
address the flexibility, maintainability and extensibility requirements
of contemporary distributed VR platforms. We
propose the use of structural reflection as an approach that not only
addresses these requirements but also offers added value in the form of
providing a framework for scalability, persistence and responsiveness
that is itself flexible, maintainable and extensible. We also present
an adaptive middleware platform implementation called OpenPING that
supports our proposal in addressing these requirements.

Zoned Federation of Game Servers: A Peer-to-Peer Approach to Scalable Multi-Player Online Games

Takuji Iimura, Hiroaki Hazeyama, Youki Kadobayashi

Abstract:
Today’s
Multi-player Online Games (MOGs) are challenged by infrastructure
requirements, because of their server-centric nature. Peer-to-peer
networks are an interesting alternative, if they can implement the set
of functions that are traditionally performed by centralized
authoritative servers. In this paper, we propose a zoned federation
model to adapt MOGs to peer-to-peer networks. In this model, a zoning
layer is inserted between the game program and the peer-to-peer network.
We introduce the concepts of zone and zone owner to MOGs. A zone is some
part of the whole game world, and a zone owner is an authoritative server
of a specific zone. According to the demands of the game program, each
node actively changes its role to zone owner and works in the same way
as a centralized authoritative server. By dividing the whole game world
into several
zones, workloads of the centralized authoritative game server can be
distributed to a federation of nodes. We have implemented the zoned
federation model and evaluated it with a prototypical multi-player
game. Evaluation results indicate that our proposed approach is
applicable to small and medium-sized MOGs, where the number of nodes is
less than 500.
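
The zone-to-owner mapping at the heart of the model can be sketched as a deterministic hash of the zone identifier onto the current node set; a real federation would resolve owners through a peer-to-peer overlay lookup rather than a static list, so this is illustrative only:

```python
import hashlib

def zone_owner(zone_id, nodes):
    """Deterministically map a zone of the game world to the node that
    acts as its authoritative server. Sorting makes the choice
    independent of list order; hashing onto a static node list is an
    illustrative stand-in for the overlay lookup a real zoned
    federation would perform."""
    digest = hashlib.sha1(zone_id.encode()).hexdigest()
    return sorted(nodes)[int(digest, 16) % len(nodes)]
```

Because every node evaluates the same function, all peers agree on which node is the authoritative server for a zone without consulting a central coordinator.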

Realizing Bullet Time Effect in Multiplayer Games with Local Perception Filters

Jouni Smed, Henrik Niinisalo, Harri Hakonen

Abstract:
Local perception filters exploit the limitations of human perception to reduce the effects of network latency in multiplayer computer games. Because they allow temporal distortions in the rendered view, they can be modified to realize the bullet time effect, in which a player gains more reaction time by slowing down the surrounding game world. In this paper, we examine the concepts behind local perception filters and extend them to cover artificially increased delays. The presented methods are implemented in a testbench program, which is used to study the usability and limitations of the approach.
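The idea behind a local perception filter is that entities near the player are rendered almost in real time while distant ones may be shown with delay. A toy sketch of one such filter function (the linear shape and parameter names are assumptions for illustration; the paper's exact filter functions differ):

```python
def perceived_delay(distance: float, network_delay: float, radius: float) -> float:
    """Local perception filter sketch: entities within `radius` of the
    player are shown nearly in real time; the rendered delay grows
    linearly with distance until it reaches the full network delay."""
    if distance >= radius:
        return network_delay
    return network_delay * (distance / radius)

# An entity right next to the player is rendered with no delay;
# one at the filter radius absorbs the full 100 ms network delay.
print(perceived_delay(0.0, 100.0, radius=50.0))    # 0.0
print(perceived_delay(50.0, 100.0, radius=50.0))   # 100.0
```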
Scalable Peer-to-Peer Networked Virtual Environment
Shun-Yun Hu, Guan-Ming Liao
Abstract:
We propose a fully distributed peer-to-peer architecture to solve the scalability problem of Networked Virtual Environments (NVEs) in a simple and efficient manner. Our method exploits the locality of user interest inherent to such systems and is based on the mathematical construct of the Voronoi diagram. Scalable, responsive, fault-tolerant NVEs can thus be constructed and deployed in an affordable way.
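In a Voronoi diagram, each player's position defines the region of the world closer to it than to any other player, and players whose regions touch are natural neighbors to exchange updates with. A brute-force, grid-sampled sketch of that neighbor relation (purely illustrative; a real system would maintain the diagram incrementally, not resample it):

```python
from itertools import product

def voronoi_neighbors(sites: dict[str, tuple[float, float]],
                      extent: float = 100.0, step: float = 1.0) -> set:
    """Approximate Voronoi neighbor pairs: sample a grid, assign each
    sample to its nearest site, and call two sites neighbors when
    their regions own adjacent samples."""
    def owner(p):
        return min(sites, key=lambda s: (sites[s][0] - p[0]) ** 2
                                        + (sites[s][1] - p[1]) ** 2)
    n = int(extent / step)
    grid = [[owner((i * step, j * step)) for j in range(n)] for i in range(n)]
    pairs = set()
    for i, j in product(range(n), repeat=2):
        for di, dj in ((1, 0), (0, 1)):
            if i + di < n and j + dj < n and grid[i][j] != grid[i + di][j + dj]:
                pairs.add(frozenset((grid[i][j], grid[i + di][j + dj])))
    return pairs

players = {"a": (10, 10), "b": (60, 10), "c": (35, 80)}
print(voronoi_neighbors(players))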
Is Runtime Verification Applicable to Cheat Detection?
Margaret DeLap, Bjorn Knutsson, Honghui Lu, Oleg Sokolsky, Usa Sammapun, Insup Lee, Christos Tsarouchis
Abstract:
We
investigate the prospect of applying runtime verification to cheat
detection. Game implementation bugs are extensively exploited by
cheaters, especially in massively multiplayer games. As games are
implemented on larger scales and game object interactions become more
complex, it becomes increasingly difficult to guarantee that high-level
game rules are enforced correctly in the implementation. We
observe that although implementing high-level rules in code is complex
because of interference between rules, checking for rule compliance at
runtime is simple because only a single rule is involved in each check.
We demonstrate our idea by applying the Java-MaC runtime verification
system to a simple game to detect a transaction bug that is common in
massively multiplayer games.
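The paper's central observation, that checking one rule at runtime is far simpler than implementing the rules correctly, can be seen in miniature below. The rule and player names are hypothetical, and this plain-Python check stands in for the Java-MaC machinery:

```python
def check_trade(before: dict, after: dict) -> bool:
    """Runtime check of one high-level game rule: a trade between
    players must conserve the total item count. Each check involves
    only this single rule, regardless of how complex the trade code is."""
    return sum(before.values()) == sum(after.values())

# A classic duplication bug: the item is granted without being deducted.
print(check_trade({"alice": 1, "bob": 0}, {"alice": 1, "bob": 1}))  # False
```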
A Cheat Controlled Protocol for Centralized Online Multiplayer Games
Bei Di Chen, Muthucumaru Maheswaran
Abstract:
Ordering of command messages from the clients at the game servers is an important issue that impacts fairness, response times, and the smoothness of game play. Recently, protocols based on “reaction” times were proposed to order the command messages. This paper presents a protocol that can be used to control cheating in reaction-time-based message ordering schemes. We examine the performance of the proposed protocol by emulating wide-area game play scenarios on PlanetLab. The results from the experiments indicate that the proposed protocol is able to dramatically reduce the cheating opportunities that exist for the clients.
The Effects of Loss and Latency on User Performance in Unreal Tournament 2003
Tom Beigbeder, Rory Coughlan, Corey Lusher, John Plunkett, Emmanuel Agu, Mark Claypool
Abstract:
The growth in
the popularity of interactive network games has increased the
importance of a better understanding of the effects of packet loss and
latency on user performance. While previous work on network games
has studied user tolerance for high latencies and has studied the
effects of latency on user performance in real-time strategy games, to
the best of our knowledge, there has been no systematic study of the
effects of loss and latency on user performance. In this paper we
study user performance for Unreal Tournament 2003 (UT2003), a popular
first person shooter game, under varying amounts of packet loss and
latency. First, we
deduced typical real world values of packet loss and latency
experienced on the Internet by monitoring numerous operational UT2003
game servers. We then used these deduced values of loss and latency in
a controlled networked environment that emulated various conditions of
loss and latency, allowing us to monitor UT2003 at the network,
application and user levels. We designed maps that isolated the
fundamental first person shooter interaction components of movement and
shooting, and conducted numerous user studies under controlled network
conditions. We find that typical ranges of packet loss have no impact
on user performance or
on the quality of game play. The levels of latency typical for most
UT2003 Internet servers, while sometimes unpleasant, do not
significantly affect the outcome of the game. Since most first person
shooter games typically consist of generic player actions similar to
those that we tested, we believe that these results have broader
implications.
Objective and Subjective Evaluation of the Influence of Small Amounts of Delay and Jitter on a Recent First Person Shooter Game
Peter Quax, Patrick Monsieurs, Wim Lamotte, Danny De Vleeschauwer, Natalie Degrande
Abstract:
There have been several studies in recent years that investigate the impact of network delay on multi-user applications. Prime examples of these applications are real-time multiplayer games. These studies have shown that high network delay and jitter can indeed influence the player’s perception of the quality of the game. However, the proposed test values, which are often high, are not always representative of a large percentage of online game players. We have therefore investigated the influence of delay and jitter with values that are more representative of typical access networks. This in effect allows us to simulate a setup with multiplayer game servers located at the ISP level and players connected through that ISP’s access network. To obtain further true-to-life results, we opted to carry out the tests using a recent first person shooter (FPS) game, Unreal Tournament 2003. It can, after all, be expected that this new generation of games has built-in features to diminish the effect of small delay values, given the popularity of playing these games over the Internet. In this paper, we investigate both subjective perceived quality and objective measurements and show that both are indeed influenced by even these small delay and jitter values.
Thoughts on Emulating Jitter for User Experience Trials
Grenville Armitage, Lawrence Stewart
Abstract:
It is usually hard to control the network conditions affecting public online game servers when studying the impact of latency, loss and jitter on user experience. This leads to a natural desire to run user-experience trials under controlled network conditions, and hence a requirement for accurate (or at least predictable) emulation of IP-level latency, loss and jitter on a localized network testbed. In this short paper we reflect on some experiences with running user-experience trials, and specifically evaluate the utility and limitations of using FreeBSD’s kernel-resident dummynet module to introduce controlled jitter. We expect these insights will stimulate further user-experience trials built around low-cost, Unix-based networking tools.
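A dummynet pipe applies a fixed delay, so one common way to approximate jitter is to periodically reconfigure the pipe with a delay drawn from a distribution. The sketch below only generates the ipfw commands (running them needs root on a FreeBSD testbed); the uniform distribution and pipe number are assumptions, not necessarily what the paper evaluates:

```python
import random

def jitter_schedule(base_ms: int, jitter_ms: int, steps: int,
                    seed: int = 1) -> list:
    """Emit a sequence of dummynet reconfiguration commands whose
    delays are drawn uniformly from [base - jitter, base + jitter] ms,
    approximating jitter with a fixed-delay pipe."""
    rng = random.Random(seed)
    return [f"ipfw pipe 1 config delay "
            f"{base_ms + rng.randint(-jitter_ms, jitter_ms)}ms"
            for _ in range(steps)]

for cmd in jitter_schedule(50, 10, 3):
    print(cmd)
```

How often the pipe is reconfigured, and the shape of the delay distribution, are exactly the kinds of emulation choices the paper's trials probe.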
Accuracy in Dead-Reckoning Based Distributed Multiplayer Games
Sudhir Aggarwal, Hemant Banavar, Amit Khandelwal, Sarit Mukherjee, Sampath Rangarajan
Abstract:
Distributed multi-player games use dead reckoning vectors to inform remote participating players about the movement of an entity by its controlling player. The dead reckoning vector contains the current position of the entity and its velocity components. When a participating player receives a vector, it traditionally places the entity at the position specified by the vector and starts projecting the path of the entity from that point using the receiver’s local clock. In this paper we show that this traditional use of the dead reckoning vector introduces inaccuracy in the receivers’ rendering of the entity. This inaccuracy can be substantial even with low network delay between the sender-receiver pairs, and it increases with network delay. We propose the use of globally synchronized clocks among the participating players and a time-stamp-augmented dead reckoning vector that enables the receiver to render the entity accurately. We modified the popular game BZFlag with this technique and compared the accuracy observed during game play using the traditional method and the proposed technique. We conducted several types of experiments, varying the frequency of generation of dead reckoning vectors and the delay between the sender and the receivers. The experiments show significant quantitative improvement in accuracy even for 100 ms delay between the sender-receiver pairs, and appreciable qualitative improvement in the game playing experience.
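The two projection rules the abstract contrasts can be stated in a few lines. A 1D sketch under the paper's assumptions (synchronized clocks; exaggerated 1 s delay for clarity):

```python
def render_traditional(pos, vel, t_recv, t_now):
    """Traditional dead reckoning: project from the received position
    starting at the moment of receipt, ignoring network delay."""
    return pos + vel * (t_now - t_recv)

def render_timestamped(pos, vel, t_sent, t_now):
    """Time-stamp-augmented variant: with globally synchronized
    clocks, project from the time the vector was generated, so
    network delay no longer shifts the rendered path."""
    return pos + vel * (t_now - t_sent)

# Vector generated at t=0 s (pos=0, vel=10 units/s), received 1 s later.
print(render_traditional(0, 10, t_recv=1, t_now=5))   # 40 (lags truth by the delay)
print(render_timestamped(0, 10, t_sent=0, t_now=5))   # 50 (matches the true position)
```

The traditional receiver permanently renders the entity `vel * delay` behind its true position, which is why the inaccuracy grows with network delay.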
H.323 Beacon Tool: An H.323 Application Related End-to-End Performance Troubleshooting Tool
Prasad Calyam (OARnet), Weiping Mandrawa (OARnet), Mukundan Sridharan (The Ohio State University), Arif Khan (OARnet), Paul Schopis (OARnet)
Abstract:
H.323 protocol-based voice and video conferencing solutions are established, popular technologies in both industry and academia. However, these bandwidth-intensive applications are often plagued by various performance problems. In this paper we describe a few common end-to-end performance problems noticed over several years in our ITU-T H.323 protocol-based “Video and Voice over IP” (VVoIP) infrastructure and provide a tool for troubleshooting these performance issues. Based on our operations experience, we are developing a tool called the “H.323 Beacon” to measure, monitor and qualify the performance of H.323 sessions. We describe the development architecture and feature set of the H.323 Beacon, which can be used by a novice end-user, network engineer or conference operator using VVoIP systems. The H.323 Beacon is an application-specific network measurement tool that provides H.323-protocol-specific evidence and other information necessary to troubleshoot H.323 application performance in the network and at the host (end-to-end). We also present two use cases for the H.323 Beacon that demonstrate the utility of the tool for VVoIP performance debugging. Finally, we outline essential best practices for the prevention and efficient resolution of intermittent and imminent failures of H.323 VVoIP applications.
Experiences in Traceroute and Available Bandwidth Change Analysis
Connie Logg (Stanford Linear Accelerator Center), R. Les Cottrell (Stanford Linear Accelerator Center)
Abstract:
SLAC has been studying end-to-end WAN bandwidth availability and achievability for 2.5 years via IEPM-BW [1]. IEPM-BW performs network-intensive tests every 90 minutes. Based on that experience we have also developed a lightweight available bandwidth measurement tool (ABwE [2]) that can make a measurement within a second. We are now extending this into a WAN measurement and detection system (IEPM-LITE) aimed at detecting and troubleshooting network performance problems more quickly, and at being friendlier to lower-performance paths. IEPM-LITE uses ping, forward traceroutes, and ABwE sensors to monitor, in close to real time, round-trip times (RTTs) and changes in available bandwidth and routes to and from target hosts. This paper discusses the experiences, techniques and algorithms used to detect and report significant traceroute and bandwidth changes. The ultimate aim is to develop a lightweight WAN network performance monitoring system that can detect, in near real time, significant changes and generate alerts.
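Bandwidth change detection of this kind typically compares recent samples against a preceding baseline. A toy mean-shift detector in that spirit (window size and threshold are invented here; this is not the IEPM algorithm):

```python
def detect_change(series, window=5, threshold=0.3):
    """Flag sample index i when the mean of the last `window` samples
    drops more than `threshold` (as a fraction) below the mean of the
    window before it."""
    alarms = []
    for i in range(2 * window, len(series) + 1):
        prev = sum(series[i - 2 * window:i - window]) / window
        curr = sum(series[i - window:i]) / window
        if prev > 0 and (prev - curr) / prev > threshold:
            alarms.append(i - 1)
    return alarms

bw = [100] * 10 + [50] * 10   # available-bandwidth samples with a sharp drop
print(detect_change(bw))      # indices around the drop trigger alarms
```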
A Wavelet-Based Framework for Proactive Detection of Network Misconfigurations
Antonio Magnaghi (Fujitsu Laboratories of America), Takeo Hamada (Fujitsu Laboratories of America), Tsuneo Katsuyama (Fujitsu Laboratories Ltd.)
Abstract:
An increasing number of misconfigurations and malicious behaviors threaten the normal operating conditions of data networks. Thus, field engineers are constantly presented with the challenge of isolating new misconfigurations and anomalies. In this paper, we present a group of real-world problems reported by a set of six commercial networks we surveyed. We then focus on a well-defined family of misconfigurations. Our analysis identifies common properties that such anomalous behaviors share. Misconfigured TCP flows experience packet losses and RTO-based (Retransmission Time-Out) events during the opening phase of the TCP connection (“Early RTO Events”). This introduces precise correlations in misconfigured traffic that we utilize as a “signature” in order to isolate the presence of anomalies. We propose a wavelet-based algorithm that is capable of revealing this family of anomalies from the analysis of MIB data aggregating healthy and anomalous flows. Simulation and the use of real datasets from a commercial network allow us to quantitatively assess the effectiveness of our detection procedure. Numerical results show that our algorithm can effectively isolate the presence of an anomalous traffic component that is a minimal percentage of the overall link throughput. Therefore, our approach provides a general and highly sensitive misconfiguration detection instrument.
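Wavelet detail coefficients are what make abrupt events stand out against a smooth traffic baseline. A one-level Haar transform, the simplest instance of the idea, on a toy counter series (a stand-in for the paper's analysis of aggregated MIB data, not its actual algorithm):

```python
def haar_details(signal):
    """One level of the Haar wavelet transform: halved pairwise
    differences (detail coefficients). Smooth stretches give
    near-zero coefficients; abrupt changes give large ones."""
    return [(signal[i] - signal[i + 1]) / 2
            for i in range(0, len(signal) - 1, 2)]

traffic = [10, 10, 10, 10, 10, 40, 10, 10]   # burst from a misbehaving flow
print(haar_details(traffic))   # [0.0, 0.0, -15.0, 0.0]
```

The single large coefficient localizes the burst in time even though the rest of the series is flat, which is the property the detection algorithm exploits.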
Path Diagnosis with IPMP
Matthew Luckie (University of Waikato / NLANR MNA), Tony McGregor (University of Waikato / NLANR MNA)
Abstract:
The ability
to measure and identify performance fault locations on an Internet path
between two hosts is an important first step towards diagnosing and
correcting a fault or avoiding fault locations entirely. The ability to
identify fault locations on both the forward and reverse paths from a
single point would be very powerful for both operators and users.
Rather than describing a tool for path diagnosis per se, this paper
describes how one could apply a simple measurement
protocol to diagnose faults.
Distributed DNS Troubleshooting
Vasileios Pappas (UCLA), Patrik Fältström (Cisco), Daniel Massey (ISI), Lixia Zhang (UCLA)
Abstract:
In this paper
we present a troubleshooting tool designed to identify a number of DNS
configuration errors. These errors range from commonly seen
misconfigurations that are well known among DNS operators, such as lame
delegations, to less known ones, such as cyclic zone dependencies. Left
unnoticed, these misconfigurations can seriously affect the
availability of the DNS infrastructure. Instead of explicitly
enumerating all possible configuration errors,
we first identify two essential properties that characterize a correct
DNS configuration, and detect misconfigurations as violations of these
properties. We also utilize multiple monitoring points to
identify configuration errors that are difficult or impossible to pin
down with a single vantage point. Furthermore, equipped with a
comprehensive graphical user interface, our tool provides network
operators with a tangible view of their DNS zones’ configuration and
the errors that may affect their availability.
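One of the misconfigurations named above, lame delegation, reduces to a simple property violation once the data is in hand. A data-level sketch (the real tool issues DNS queries from multiple vantage points to gather these sets; the hostnames are illustrative):

```python
def lame_delegations(parent_ns: set, authoritative: set) -> set:
    """A delegation is lame when the parent zone lists a nameserver
    that does not answer authoritatively for the child zone; the
    misconfiguration is detected as a violation of that property."""
    return parent_ns - authoritative

# ns2 is listed in the parent's delegation but never answers
# authoritatively for the zone.
print(lame_delegations({"ns1.example.net", "ns2.example.net"},
                       {"ns1.example.net"}))   # {'ns2.example.net'}
```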
Is Your Caching Resolver Polluting the Internet?
Duane Wessels (CAIDA and The Measurement Factory)
Abstract:
Previous
research has shown that most of the DNS queries reaching the root of
the hierarchy are bogus [1]. This behavior derives from two constraints
on the system: (1) queries that cannot be satisfied locally percolate
up to the root of the DNS; (2) some caching nameservers are behind
packet filters or firewalls that allow outgoing queries but block
incoming replies. These resolvers assume the network failure is
temporary and retransmit their queries, often aggressively.
DNS pollution may not be causing any perceivable performance problems.
The root servers seem well equipped to handle the load. Since DNS
messages are small, the pollution does not contribute significantly to
the total traffic generated by most organizations. Nonetheless, this
paper provides a few reasons why network operators should take the time
to investigate and fix these problems.
|
Mohonk: Mobine Honeypots to Trace
Unwanted Traffic Early
|
Balachander
Krishnamurthy (AT&T Labs--Research)
|
|
| Abstract: |
|
|
|
|
Honeypots have traditionally been used to advertise dark address space and gather information about originators of traffic to such addresses. With simple thresholding mechanisms, this technique has shown itself to be fairly effective in identifying suspicious IP addresses. Honeypots are, however, unsuitable for locating the precise entry point of unwanted traffic. Tracing back to the origin of such traffic is hard due to the delay and difficulty of maintaining state along the path of such traffic. We propose a novel mobile honeypot mechanism that allows unwanted traffic to be detected significantly closer to its origin. The mobility in our scheme stems from additional information that is made available to upstream ASes, as well as from changes in the set of dark address space advertised. Sharing information with a network of friendly ASes has the potential to identify and significantly lower unwanted traffic on such links.
Identifying IPv6 Network Problems in a Dual-Stack World
Kenjiro Cho (Sony Computer Science Labs, Inc.), Matthew Luckie (University of Waikato), Bradley Huffaker (CAIDA)
Abstract:
One of the
major hurdles limiting IPv6 adoption is the existence of poorly managed
experimental IPv6 sites that negatively affect the perceived quality of
the IPv6 Internet. To assist network operators in improving IPv6
networks, we are exploring methods to identify wide-area IPv6 network
problems. Our approach makes use of parallel IPv4 and IPv6
connectivity to dual-stacked nodes.
We identify the existence of an IPv6 path problem by comparing IPv6
delay measurements to IPv4 delay measurements. Our test results
indicate that the majority of IPv6 paths have delay characteristics
comparable to those of IPv4, although a small number of paths exhibit a
much
larger delay with IPv6. Thus, we hope to improve the quality of the
IPv6 Internet by identifying the worst set of problems.
Our methodology is simple. We create a list of systems with IPv6 and
IPv4 addresses in actual use by monitoring DNS messages. We then
measure delay to each address in order to select a few systems per site
based on their IPv6:IPv4 response-time ratios. Finally, we run
traceroute with Path MTU discovery to the selected systems and then
visualize the results for comparative path analysis. This paper
presents the tools used to support this study, and the results of our
measurements conducted from two locations in Japan and one in Spain.
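The selection step in the methodology above compares parallel delay measurements to the same dual-stacked host. A sketch of that IPv6:IPv4 ratio test (the host names, data shape, and threshold of 2.0 are assumptions for illustration):

```python
def flag_slow_ipv6(samples: dict, ratio_threshold: float = 2.0) -> list:
    """Given per-host (ipv4_rtt_ms, ipv6_rtt_ms) measurements, flag
    hosts whose IPv6 delay exceeds `ratio_threshold` times their IPv4
    delay, the kind of IPv6:IPv4 response-time ratio used to pick
    problem paths for further traceroute analysis."""
    return [host for host, (v4, v6) in samples.items()
            if v4 > 0 and v6 / v4 > ratio_threshold]

samples = {"host-a": (20.0, 22.0),    # IPv6 comparable to IPv4
           "host-b": (30.0, 180.0)}   # IPv6 path likely has a problem
print(flag_slow_ipv6(samples))   # ['host-b']
```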
Troubleshooting on Intra-Domain Routing Instability
Zhang Shu (National Institute of Information and Communications Technology, Japan), Youki Kadobayashi (Nara Institute of Science and Technology)
Abstract:
Routing instability is a problem that directly affects the reliability of the Internet. While a great deal of effort has been devoted to inter-domain routing instability, studies on intra-domain routing have been quite limited. Most network operators still do not have sufficient knowledge of this problem and often complain that (i) they do not know to what extent intra-domain routing instability can occur on their networks, because it is difficult to detect, and (ii) the causes of this instability are difficult to find. In this paper, we first present the results of passive measurements we performed on intra-domain routing instability. We show statistical results for OSPF routing information (for both IPv4 and IPv6) collected on the WIDE Internet and the APAN Tokyo-XP network. Through these statistics, we demonstrate how seriously routing instability can occur on a service network. We then propose an approach to help network operators isolate its causes. We emphasize the importance of gathering useful data for troubleshooting in an event-driven fashion and propose using SNMP or telnet for this. Finally, we explain what kind of data should be collected for troubleshooting purposes and how to use this data to isolate the problem.
Fixing BGP, One AS at a Time
Jaideep Chandrashekar (University of Minnesota), Zhi-Li Zhang (University of Minnesota), Haldane Peterson (University of Minnesota)
Abstract:
Debugging inter-domain routing problems on the Internet is notoriously hard. This is partly because BGP updates carry no information about the events that trigger them, and also because operation is highly distributed and complex, lacking a central point of control or authority. These factors have impeded the development of tools that can help in the diagnosis and troubleshooting of routing problems. Consequently, the dynamic behavior of BGP is not well understood, even though it forms a critical component of the Internet infrastructure. In this paper, we argue that these problems can be effectively addressed by incorporating a diagnostic capability into the inter-domain routing system. Such a capability would enable operators to effectively pinpoint the cause (and location) of routing problems and take timely steps to correct them. We present arguments to show that this capability is best realized by localized, AS-specific entities called BGP Auditors. We then explore the high-level design of a distributed diagnostic framework and briefly discuss the relevant issues.
Locating BGP Missing Routes Using Multiple Perspectives
Di-Fa Chang (USC/Information Sciences Institute), Ramesh Govindan (USC/Information Sciences Institute), John Heidemann (USC/Information Sciences Institute)
Abstract:
There have been many studies on measuring and interpreting inter-domain routing dynamics. Most of them, however, are based on off-line, passive post-processing of BGP routing updates. We propose a new methodology that uses real-time, active monitoring to troubleshoot various BGP routing anomalies. This paper focuses on a specific BGP routing problem: missing routes, which occur when some ASes can reach a prefix while others cannot. The idea is to periodically monitor the BGP routing status at multiple vantage points, such as Route Views, and, when a possible missing-route event is detected, to issue traceroute queries from various looking glasses to learn the status of the packet-forwarding path. By comparing previous and current packet-forwarding paths, we can estimate where the missing-route event takes place. This paper examines the plausibility of this methodology and discusses preliminary experimental results.
|
IP
Forwarding
Anamolies
and Improving their Detection Using Multiple Data Sources
|
Matthew
Roughan (School of Mathematical Sciences, University of Adelaide), Tim
Griffin (Intel Research Cambridge), Z. Morley Mao (University of
Michigan), Albert Greenberg (AT&T Research), Brian Freeman
(AT&T Labs)
|
|
| Abstract: |
|
|
|
|
IP forwarding anomalies, triggered by equipment failures, implementation bugs, or configuration errors, can significantly disrupt and degrade network service. Robust and reliable detection of such anomalies is essential to rapid problem diagnosis, mitigation, and repair. We propose a simple, robust method that integrates routing and traffic data streams to reliably detect forwarding anomalies, and report on an evaluation of the method in a tier-1 ISP backbone. First, we transform each data stream separately to produce informative alarm indicators. A forwarding anomaly is then signalled only if the indicators for both streams indicate anomalous behavior concurrently. The overall method is scalable, automated and self-training. We find this technique effectively identifies forwarding anomalies while avoiding the high false alarm rate that would result if either stream were used unilaterally.
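The combination rule described above is a conjunction of per-stream indicators. A sketch of that final step (the indicator construction itself, the interesting part, is elided here; the boolean series are made up):

```python
def forwarding_alarms(routing_flags: list, traffic_flags: list) -> list:
    """Raise a forwarding alarm at interval i only when the
    routing-derived and traffic-derived indicators are both anomalous,
    suppressing false alarms that either stream raises alone."""
    return [i for i, (r, t) in enumerate(zip(routing_flags, traffic_flags))
            if r and t]

routing = [False, True, True, False]   # e.g. a flurry of route withdrawals
traffic = [False, False, True, True]   # e.g. a traffic-volume deviation
print(forwarding_alarms(routing, traffic))   # [2]
```

Interval 1 (routing only) and interval 3 (traffic only) are suppressed; only the concurrent anomaly at interval 2 is signalled.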
A Measurement Framework for Pinpointing Routing Changes
Renata Teixeira (UC San Diego), Jennifer Rexford (AT&T Labs -- Research)
Abstract:
Changes in
the end-to-end path between two hosts can lead to sudden changes in the
round-trip time and available bandwidth, or even the complete loss of
connectivity. Determining the reason for the routing change is crucial
for diagnosing and fixing the problem, and for holding a particular
domain accountable for the disruption. Active measurement tools
like traceroute can infer the current path between two end-points, but
not where and why the path changed. Analyzing BGP data from multiple
vantage points seems like a promising way to infer the root cause of
routing changes. In this paper, we explain the inherent limitations of
using BGP data alone and argue for a distributed approach to
troubleshooting routing problems. We propose a solution where each AS
continuously maintains a view of routing changes in its own network,
without requiring additional support from the underlying routers. Then,
we describe how to query the measurement servers along the AS-level
forwarding path from the source to the destination to uncover the
location and the reason for the routing change.