Srinivas Shakkottai



Core Research Areas



The success of the Internet is due in part to its layered architecture with an "hourglass" structure, in which a single common IP layer providing routing and addressing allows the use of a variety of physical, MAC and control protocols, and, more importantly, allows for the creation of a multitude of application-level overlays such as content distribution networks, social networks and peer-to-peer networks. Riding on top of the applications is the most important layer of all - the economic layer - which extracts value from applications and feeds this value back into the physical layer in the form of infrastructure. The continued success of the Internet requires both efficient usage of resources by application-level overlays and a sound economic foundation for sustainable growth. My core research focus is on these two aspects - efficiency and viability - the interaction of technology and economics for the Internet of tomorrow.

Figure 1: Layered Architecture
My primary research interests lie in determining the fundamental limits of communication network architectures, and in designing algorithms with provably good performance. I am also interested in the interplay between economics and technology observed in large-scale network systems composed of many competing agents. I have studied caching and scheduling in content delivery networks, peer-to-peer networks, traffic management and congestion control, new game-theoretic methods for agent coordination, and network pricing. The impact of my work is partly evidenced by over 1000 citations as tracked by Google Scholar. I have also received research awards from Google and Cisco for contributions to real-world content dissemination and traffic management systems. Below, I briefly describe some of the research problems that I have worked on recently, and discuss the implications of my results.


Game Theory and Economics

Game theory is a powerful tool for studying coordination problems in large-scale distributed systems. The agents in these systems could be applications that desire service, wireless sensor nodes, or peers in a content delivery network. Agents often have some common information, made available to them by entities such as Internet Service Providers or cellular base stations. From a controls perspective, the interesting part lies in the dynamics of games, since these dynamics can be translated into learning algorithms and protocols that the agents can use. A certain collective behavior could emerge from such learning, and the question is whether this outcome is desirable from a system perspective. We have been working on several problems of coordination and pricing, as described below.

A commonly studied class of games is that of potential games, in which there exists a so-called potential function - a scalar value that can be thought of as representing the global "happiness" of the system. The potential function is such that the change in the payoff received by an agent following a unilateral change in its action is equal to the change in the potential function. Thus, if we denote two possible actions of agent i by ai and a'i, its payoff by Fi(.), and the (fixed) actions of all the other agents by a-i, then the potential function V(.) satisfies
Fi(ai, a-i) - Fi(a'i, a-i) = V(ai, a-i) - V(a'i, a-i).

Intuitively, the coupling between an individual agent's happiness and that of the whole system ought to ensure that, under myopic dynamics, the system state converges - at least to a local maximum of the potential.
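As a concrete illustration, here is a minimal Python sketch (a toy two-link congestion game of our own choosing, with Rosenthal's potential - not an example from [1]) that checks the identity above at every step and shows best-response dynamics terminating at an equilibrium:

```python
import random
from itertools import count

# Toy exact potential game: each of n players picks link 0 or 1 and
# receives payoff equal to minus the load on its chosen link.
# Rosenthal's potential for this game is -sum_r L_r (L_r + 1) / 2.
n = 6

def loads(a):
    return a.count(0), a.count(1)

def payoff(i, a):
    return -loads(a)[a[i]]

def potential(a):
    return -sum(L * (L + 1) / 2.0 for L in loads(a))

random.seed(1)
a = [random.randint(0, 1) for _ in range(n)]
for step in count():
    moved = False
    for i in range(n):
        for alt in (0, 1):
            b = a[:i] + [alt] + a[i + 1:]
            # the exact-potential identity stated above
            assert payoff(i, b) - payoff(i, a) == potential(b) - potential(a)
            if payoff(i, b) > payoff(i, a):
                a, moved = b, True
    if not moved:  # no profitable unilateral deviation: a Nash equilibrium
        break
print("equilibrium:", a, "potential:", potential(a))
```

Because every profitable deviation strictly increases the potential, and the potential takes only finitely many values, the loop must terminate; this is exactly the intuition that myopic dynamics climb to a local maximum of the potential.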

State Space Augmentation in a Network Coding Game
We present an example from our recent work [1] that illustrates shortcomings of the potential game approach, and proposes state space augmentation as a solution. Consider the wireless network coding scheme in Figure 2(a). Wireless nodes 1 and 2 need to exchange packets x1 and x2 through a relay node (node 3). A simple store-and-forward approach needs four transmissions. However, a network coding solution uses a store-code-and-forward approach in which the two packets x1 and x2 are combined by means of a bitwise XOR operation at the relay and broadcast to nodes 1 and 2. Nodes 1 and 2 can then decode this packet to obtain the packets that they need.

Now, consider the scenario depicted in Figure 2(b). We have two sources with equal traffic, each of which is aware of two paths leading to its destination. Each has one path that costs 6 units, while the other costs 7 units. If both flows use their individually cheaper paths, the total cost is 12 units. However, if both use the more expensive path, since network coding is possible at node n2 the total cost is reduced to 11 units. We see that there is a dilemma here - savings can only be obtained if there is sufficient bi-directional traffic on (n1,n2,n3). The first mover in this case is clearly at a disadvantage as it essentially creates the route that the other can piggyback upon (in a reverse direction).

Figure 2: A network coding game.
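The dilemma can be checked in a two-action abstraction of Figure 2(b). The following Python sketch assumes (our simplification, for illustration) that when both sources use the codeable path the 11-unit total is split evenly, so each pays 5.5:

```python
from itertools import product

ACTIONS = ("cheap", "coded")

def cost(me, other):
    """Per-source cost: the private path costs 6; the codeable path
    costs 7 when used alone, but 5.5 each when both sources use it."""
    if me == "cheap":
        return 6.0
    return 5.5 if other == "coded" else 7.0

def is_nash(a1, a2):
    return (cost(a1, a2) <= min(cost(x, a2) for x in ACTIONS) and
            cost(a2, a1) <= min(cost(x, a1) for x in ACTIONS))

for a1, a2 in product(ACTIONS, repeat=2):
    total = cost(a1, a2) + cost(a2, a1)
    print(a1, a2, "total =", total, "(Nash)" if is_nash(a1, a2) else "")
```

Both (cheap, cheap), with total cost 12, and (coded, coded), with total cost 11, are Nash equilibria, but myopic play starting from the cheap paths never reaches the efficient one: the first source to move to the codeable path pays 7 instead of 6.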

The system can be represented as a potential game with the potential function being the total cost given the traffic splits. However, if each source attempts to learn its optimal traffic split based on the cost that it observes, it can easily settle on an inefficient solution; earlier work characterized this inefficiency as being large in some networks. We showed in [1] that by adding an agent (with only local information) at node n2 that provides rebates for using the network-coded path, the potential function seen by the original agents can be altered in such a way that the equilibrium is efficient. These augmented agents at the coding nodes use their own learning dynamics (gradient-based, in this case) to decide how to modify the potential function at each time. In spite of the rebates, the overall cost is not increased but is merely redistributed within the system. The result is valid for general topologies, with each augmented agent making local decisions to offer rebates, and learning whether to increase or decrease the rebate based on the observed impact.
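The rebate mechanism can be sketched on the same two-source example; the fixed step size and the up/down update rule below are our own illustrative simplification of the gradient-based dynamics in [1]:

```python
ACTIONS = ("cheap", "coded")

def cost(me, other, r):
    """Costs as before, with a rebate r offered on the coded path."""
    if me == "cheap":
        return 6.0
    return (5.5 if other == "coded" else 7.0) - r

def best_response(other, r):
    return min(ACTIONS, key=lambda a: cost(a, other, r))

# The augmented agent at the coding node raises the rebate while the
# coded path is underused, and withdraws it once coding traffic appears.
r, a1, a2 = 0.0, "cheap", "cheap"
for t in range(40):
    a1 = best_response(a2, r)
    a2 = best_response(a1, r)
    both_coded = a1 == a2 == "coded"
    r = max(0.0, r + (-0.1 if both_coded else 0.1))

print(a1, a2, "final rebate:", round(r, 2))
```

The rebate grows until one source finds the coded path cheaper, the other follows, and the rebate is then withdrawn entirely, since (coded, coded) is self-sustaining at r = 0; this matches the observation that cost is redistributed rather than increased.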

Discussion: The above example demonstrates our approach, called state space augmentation, in which we modify the potential so as to assist the learning dynamics in discovering the appropriate equilibrium; in effect, we incentivize a greater exploration of the state space. The idea is represented in Figure 3, where we have both the primary agents and the augmented agents, whose payoffs depend on the system potential in some way. We believe that this methodology can be utilized in a variety of scenarios, including those in which the augmentation takes the form of altruistic behavior by agents or memory of past behavior.


Figure 3: Augmented Potential Game.

Network Pricing and Protocol Selection
In the context of a monopolist service provider, it can be shown that it is possible to extract all of the consumer surplus through differential pricing based on user demand. However, the drawback of differential pricing schemes in general is that they tend to be highly nonlinear and do not resemble the prices observed in practice. A simple way of implementing differential pricing is to provide various service classes with tiered prices. For example, it is possible to partition network capacity into differentially priced virtual networks (each routed separately), which is referred to as Paris Metro Pricing (PMP).

In [2], we studied PMP for a single resource that is divided into several virtual links, each corresponding to a class of service. Each virtual link has a fixed price of entry, and all flows using a particular virtual link obtain equal bandwidth. The user utility function corresponds to a single type of traffic (say data transfer), and user i's utility is given by Ui(xi) = αi U(xi), where xi is the allocation to user i and α1 ≥ α2 ≥ ... ≥ αN ≥ 0. Thus, the utilities all have the same shape and differ only in scale. Users can choose among the options available, and we say that a particular state is stable (a Nash equilibrium) if no user has an incentive to change. We showed that in many environments a single service class suffices, and that the increase in revenue from using multiple classes is small.
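The following Python sketch computes such an equilibrium by best-response iteration. The logarithmic shape U(x) = log x, the capacities, the prices and the zero payoff for opting out are all our own illustrative assumptions rather than the exact setting of [2]:

```python
import math

def equilibrium(alpha, caps, prices, iters=500):
    """Each user repeatedly picks the class k maximizing
    alpha_i * log(C_k / n_k) - p_k, or opts out (payoff 0),
    where n_k is the number of users sharing class k."""
    choice = [-1] * len(alpha)            # -1 means "opt out"
    for _ in range(iters):
        changed = False
        for i, a in enumerate(alpha):
            def pay(k):
                if k == -1:
                    return 0.0
                n = 1 + sum(1 for j in range(len(alpha))
                            if j != i and choice[j] == k)
                return a * math.log(caps[k] / n) - prices[k]
            best = max([-1] + list(range(len(caps))), key=pay)
            changed |= best != choice[i]
            choice[i] = best
        if not changed:                   # no one wants to move: stable
            break
    return choice

alpha = [2.0, 1.5, 1.0, 0.8, 0.5, 0.3]    # utility scales, as in the model
for caps, prices in ([[10.0], [0.6]], [[5.0, 5.0], [1.0, 0.3]]):
    ch = equilibrium(alpha, caps, prices)
    print(caps, prices, ch, "revenue =", sum(prices[k] for k in ch if k >= 0))
```

Sweeping the prices for the one-class and multi-class configurations and comparing the resulting equilibrium revenues is precisely the comparison behind the result above.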

A complementary question is whether users see greater value when such choices are available. In [3], we considered this problem in the context of users whose payoff depends on both throughput and congestion. Users thus have a utility function U(xi), as well as a disutility function Ûi(x), which depends on the entire vector of allocations x. A congestion control protocol can be thought of as a way in which source i interprets the load yl on a link l, and responds by increasing or decreasing its offered load. Given the utility and disutility functions, each source tries to maximize the difference between the two by choosing a protocol appropriately. Further, if multiple tolled virtual networks are available, then users must also decide which one to join.

We showed that (i) a Nash equilibrium exists in which the optimal decision for each user is to use the protocol that most closely resembles its disutility, (ii) the system value of such equilibria can be arbitrarily bad compared to that attained under the optimal protocol, (iii) by introducing multiple virtual networks with tolls, users can be incentivized to use particular service classes, and (iv) the equilibria in the differentiated-services case can be better in system value than those in the undifferentiated case by an arbitrarily large factor.

Discussion: As more and more Internet access takes place over the wireless medium, where a very real network capacity constraint exists, some form of differentiated services is likely to be implemented. Our first result, on revenue extraction in a homogeneous utility setting, is negative in the sense that it suggests that little gain can be achieved. However, our second result, from the user perspective, is positive, since it suggests that users would be willing to pay tolls for access to a network that suits their applications.


Peer-to-peer and Content Distribution Networks

More than half the traffic carried on the Internet goes through content distribution networks (CDNs), and some estimates put the fraction as high as 85%. A major fraction of this traffic is in the form of streaming stored videos, and real-time applications have already made an appearance. Given the high cost of CDN-based video streaming, there is significant interest in creating hybrids of CDN servers and P2P in order to provide CDN-like quality at P2P-like cost. We seek a systematic and principled approach to designing coordination methods in content distribution systems. Below, we discuss some representative problems that I have worked on.

Mean Field Optimal Policy in a P2P Streaming System
In [4], we studied optimal message selection policies for full-mesh P2P streaming networks. The system consists of M peers that are all simultaneously interested in a real-time content stream generated by a server, as shown in Figure 4. The stream consists of messages, with one new message generated at each discrete time instant, and the server selects one peer at random to receive each new message. Peers obtain messages either by being selected by the server, or by P2P exchange with random peer selection. Each peer maintains a buffer of size m, with the message in the mth location played out at each time instant if available. The target is a skip-free playout probability of q across all peers.
Defining p(i) to be the steady-state probability that a peer's buffer position i is filled at the beginning of any time instant, we consider the following equilibrium model:
p(i+1) = p(i) + s(i) p(i) (1-p(i)) for all i ≥ 1, with p(1) = 1/M.
In the above, since buffer position i+1 is filled by a rightward shift from buffer position i, its steady-state probability (mean field) at the beginning of the current time slot is the probability that position i was already filled at the beginning of the last time slot, plus the probability that position i was filled by P2P exchange during the last time slot. The latter term follows by noting that p(i) (1-p(i)) is the probability that a peer does not possess message i while its selected peer does, and s(i) is the probability that a peer π chooses to download message i from its selected peer λ, given that π does not possess message i while λ does.

Figure 4: Server-assisted P2P streaming.
What should the message selection policy be?
How should s(i) be chosen so as to maximize p(m), the probability of having a message available at the point of playout? Note that the policy used to select messages determines p(i), which in turn provides the possible message selection options that determine the feasible s(i). Thus, the mean-field occupancy distribution is a fixed point of the selection dynamics, while the selection dynamics must be consistent with the mean field. We showed in [4] that neither a greedy policy (wherein priority is given to the messages closest to playout) nor a rarest-first policy (wherein priority is given to the messages farthest from playout) is optimal, but that a hybrid of the two, which uses greedy selection up to a threshold and then switches to rarest-first, is order optimal.
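The fixed point can be explored numerically. In the following Python sketch, s(i) is derived from a priority order over buffer positions under an independence approximation across positions; both this approximation and the threshold choice are our own simplifications of the analysis in [4]:

```python
import numpy as np

def selection_probs(p, order):
    """s(i) induced by a priority order: a peer downloads position i only
    if no higher-priority position was downloadable from its selected
    peer (independence assumed across buffer positions)."""
    s = np.zeros(len(p))
    free = 1.0
    for j in order:                         # highest priority first
        s[j] = free
        free *= 1.0 - p[j] * (1.0 - p[j])
    return s

def playout_prob(order, m, M=10**4, iters=2000):
    """Iterate p(i+1) = p(i) + s(i) p(i)(1-p(i)), p(1) = 1/M, to a fixed
    point and return p(m), the skip-free playout probability."""
    p = np.full(m, 1.0 / M)
    for _ in range(iters):
        s = selection_probs(p, order)
        q = np.empty(m)
        q[0] = 1.0 / M
        for i in range(m - 1):
            q[i + 1] = min(1.0, p[i] + s[i] * p[i] * (1.0 - p[i]))
        p = q
    return p[-1]

m, thr = 40, 6
greedy = list(range(m - 1, -1, -1))         # positions nearest playout first
rarest = list(range(m))                     # positions farthest from playout first
hybrid = greedy[:thr] + rarest[:m - thr]    # greedy near playout, then rarest-first

for name, order in (("greedy", greedy), ("rarest", rarest), ("hybrid", hybrid)):
    print(name, round(float(playout_prob(order, m)), 4))
```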

Discussion: Message selection schemes are critical to the good performance of P2P networks. The most commonly employed heuristic is rarest-first, which is used by the popular BitTorrent P2P file sharing system; BitTorrent is now increasingly used for streaming applications as well. Our result is perhaps the first to analytically tie message selection policies to the distribution of available messages in a P2P streaming system, and then to optimize over these policies. A surprising aspect of our result is that in the streaming case rarest-first alone is not optimal, but that it can be used in combination with greedy message selection to create an optimal policy.

Demand-Aware Content Distribution
A simple abstraction of a CDN is illustrated in Figure 5, taken from our recent work [5]. It consists of frontend Web servers (denoted by 'F') that aggregate queries arising in different geographical locations, and route each query (query types indicated by numbers) to an appropriate backend cache, indicated by a 'B'. Caches could be located at cable headends where a neighborhood's cables join (possible for P2P), at a point of presence (POP) where the traffic from several such headends is aggregated (Akamai), or at data centers that are farther away from the user (Google). Multiple caches could potentially serve each query, and the frontend has to take a decision on which to pick. For each request that is routed to a cache, a corresponding file is transmitted back to the requesting source across links of finite capacity. Hence, data is unicast from the cache to each end-user, with the capacity constraint usually being between the POP and the cable headend. Caches are of finite size, and the content can be refreshed periodically from a media vault, with the frequency of refresh representing the cost of access.
The objective is to design policies for request routing, content placement and content eviction with the goal of achieving small user delays. Stable policies ensure that the request queues remain finite, while good policies also lead to short queue lengths. Thus, request routing and content placement should be such that the system is stable so long as

the request arrival rates lie within the capacity region of the network, i.e., the requests can be split among the caches holding the requested content such that the load offered on each link (s,d) is less than Csd,

where λcs is the arrival rate of requests for content c at frontend s, and Csd is the capacity constraint between frontend s and cache d. Suppose that each frontend divides requests into different queues based on the content requested, i.e., we have a request queue of length qcs[k] at frontend s for content type c at time k. Note that these queues are merely counters, and do not hold real packets.

Figure 5: Query assignment in a CDN. Requests arrive at frontend servers (F), and must be routed to one of (possibly) several backend caches (B) that can service the query. The network links connecting backends to end-users are of finite capacity (C).
Since the CDN does not know λcs, it needs to infer the appropriate request routing and content placement using request queue length information. We showed in [5] that during the times at which content refreshing is allowed, the media vault should follow a maximum-weight schedule (where the weight of any link in the schedule is the product qcs[k] Csd) coupled with minimum-weight eviction. At times when refreshing is not allowed, the schedule is simply a maximum-weight schedule subject to the content present in the caches. We showed using Foster-Lyapunov type arguments that such a policy stabilizes the system (i.e., it implicitly determines the necessary content placement) and leads to short queue lengths (and hence users see small delays).
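One slot of such a policy might look as follows in Python. The data layout, the per-cache refresh step and the tie-breaking are illustrative choices of ours, not the exact algorithm of [5]:

```python
import numpy as np

def schedule_slot(Q, C, cache_contents, catalog, refresh_allowed):
    """Q[c, s]: request-queue counters; C[s, d]: link capacities;
    cache_contents[d]: set of contents at cache d (assumed full, so a
    placement forces an eviction); catalog: all contents at the vault."""
    n_content, n_frontend = Q.shape

    def weight(c, d):           # total weight of serving content c from d
        return sum(Q[c, s] * C[s, d] for s in range(n_frontend))

    if refresh_allowed:
        for d, held in enumerate(cache_contents):
            incoming = max(catalog, key=lambda c: weight(c, d))
            if incoming not in held:
                held.remove(min(held, key=lambda c: weight(c, d)))
                held.add(incoming)          # minimum-weight eviction

    # Max-weight request routing subject to current cache contents:
    # queue (c, s) is served by the feasible cache maximizing q_cs * C_sd.
    routes = {}
    for c in range(n_content):
        for s in range(n_frontend):
            feasible = [d for d, held in enumerate(cache_contents)
                        if c in held and C[s, d] > 0]
            if Q[c, s] > 0 and feasible:
                routes[(c, s)] = max(feasible, key=lambda d: Q[c, s] * C[s, d])
    return routes
```

Between slots, the counters Q are incremented by new request arrivals and decremented as routed requests are served; the Foster-Lyapunov argument shows that repeating this step keeps all the counters short.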

Discussion: Work on content caching systems has usually focused on single caches, without considering network and other constraints, and analysis is usually done from a worst-case perspective with minimum regret as the objective. Our main contribution is a model that captures stochastic demand, network capacity constraints, and the availability of multiple caches, while retaining analytical tractability. The algorithm developed is simple, and can easily be used to create good heuristics. This result was the basis for a research award from Google.

P2P-Assisted Content Distribution Systems
A fundamental question for P2P networks is whether they can reduce the usage of the expensive centralized servers that constitute traditional content distribution systems. One way to defray server costs is to design a hybrid system in which a central server is used to boost the delay performance of a P2P system. Suppose that demand for a piece of content follows the viral diffusion shown in Figure 6, over a population of N users. If demand is satisfied with a certain delay guarantee, then the number of content instances present at peers grows with demand. Initially, most demand has to be met by the server, but in the latter phase there should be enough instances of the content available at end-users.

Figure 6: Viral demand.

In [6] we analytically quantified the savings that can be attained by using such a hybrid. Our results indicate that there exists a "switching threshold" between using the server and P2P at time Θ(ln C), where C is the provisioned server capacity. Thus, service is dominated by the server before this threshold, and by P2P afterward.
We also considered the per-user delay for a single popular file; our results can be summarized (Figure 7) using the case where the provisioned server capacity is C = Θ(N/ln N) units. We show that while both a central server alone and pure P2P yield the same per-user delay of Θ(ln N), the hybrid scheme utilizes the server only for Θ(ln N) time and yields a per-user delay of Θ(ln ln N), i.e., the delay is practically constant.

Figure 7: The top row refers to arbitrary capacity C, while the bottom row refers to C = N/ln N.
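The delay scaling can be sanity-checked with a toy discrete-time model (ours, far cruder than the analysis in [6]): in each slot the server uploads C copies and every peer that holds the file uploads one copy.

```python
import math

def slots_to_serve(N, C):
    """Slots until all N peers hold the file, assuming the server serves
    C peers per slot and each holder serves one more peer per slot."""
    have, t = 0, 0
    while have < N:
        have = min(N, 2 * have + C)   # P2P doubling plus server uploads
        t += 1
    return t

N = 10**6
print(slots_to_serve(N, C=1))                      # ~log2 N slots
print(slots_to_serve(N, C=N // int(math.log(N))))  # ~log2 ln N slots
```

The first case scales like ln N and the second like ln ln N, mirroring the Θ(ln N) versus Θ(ln ln N) per-user delays above.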

A key question that must be answered before we can expect mainstream utilization of such hybrid P2P approaches is: how can users that have obtained content legally be encouraged to reshare it legally? Put differently, can mechanisms be designed to ensure that legitimate P2P swarms dominate the illicit swarms that make media headlines today? In [7], we investigate a "revenue sharing" approach to this issue. We suggest that users can be motivated to reshare content legally by allowing them to share in the revenue associated with future sales, accomplished either through a lottery scheme or by simply sharing a fraction of the sale price.

Such an approach has two key benefits. First, users are incentivized to join the legitimate P2P network, since they can profit from joining. Second, the approach actually weakens the illicit P2P network: although content there is free, most users expect a reasonable quality of service, and if the delay in the illegitimate swarm is large they may prefer to use the legitimate P2P network instead. We show that the revenue recovered by the content provider using a server-supported legitimate P2P swarm can exceed that of the monopolistic scheme by an order of magnitude.

Discussion: The novelty of the above work lies in the fact that we are able to obtain explicit analytical solutions that quantify "folk wisdom" regarding P2P systems. The first result can thus inform provisioning decisions on the amount of server capacity needed to obtain a desired service quality; these results were the basis for a Cisco research award. The second result, on revenue sharing, runs contrary to the "conventional wisdom" of charging early adopters more rather than less, and of discouraging file sharing through legal threats. However, as many recent studies have demonstrated, incentives work better than threats in human society, and adoption of our revenue sharing approach might result in a cooperative equilibrium between content owners, distributors and end-users.


References

(*: My student, †: Other student)

[1] V. Ramaswamy*, V. Reddy*, S. Shakkottai, A. Sprintson and N. Gautam, "Multipath Wireless Network Coding: An Augmented Potential Game Perspective", to appear in IEEE/ACM Transactions on Networking.

[2] S. Shakkottai, R. Srikant, A. Ozdaglar and D. Acemoglu, "The Price of Simplicity", IEEE Journal on Selected Areas in Communications, special issue on Game Theory in Communication Systems, Vol. 26, Issue 7, September 2008.

[3] V. Ramaswamy*, D. Choudhury*, and S. Shakkottai, "Which Protocol? Mutual Interaction of Heterogeneous Congestion Controllers", to appear in IEEE/ACM Transactions on Networking.

[4] S. Shakkottai, R. Srikant and L. Ying, "The Asymptotic Behavior of Minimum Buffer Size Requirements in Large P2P Streaming Networks", IEEE Journal on Selected Areas in Communications, Vol. 29, Issue 5, May 2011.

[5] M. Amble*, P. Parag*, S. Shakkottai and L. Ying, "Content-Aware Caching and Traffic Management in Content Distribution Networks", in IEEE INFOCOM 2011, Shanghai, China, April 2011.

[6] S. Shakkottai and R. Johari, "Demand-Aware Content Distribution on the Internet", IEEE/ACM Transactions on Networking, Vol. 18, Issue 2, April 2010.

[7] V. Ramaswamy*, S. Adlakha†, S. Shakkottai, and A. Wierman, "Incentives for P2P-Assisted Content Distribution: If You Can't Beat 'Em, Join 'Em", in the 50th Annual Allerton Conference on Communication, Control, and Computing, Allerton, IL, October 2012.


Department of ECE, Texas A&M University