Research Topics

Scheduling: competitive analysis, algorithms, lower bounds, resource augmentation
     Varying Degrees of Parallelizability: parallel, sequential
     Power Management
     Broadcast Scheduling
     TCP
Lower Bounds
     Cake Cutting
     Greedy and Dynamic Programming Algorithms
     Embedding Distortion
     Time-Space Trade-off Lower Bounds for st-Connectivity on JAG Models
     Complexity Classes
     Circuit Depth
     Comparisons Between Models of Parallel Computation
Randomness
     Job DAG
Handling Data
     Databases
     Data Transmission

Scheduling Problems

Most broadly, this area of research considers how to schedule a shared resource, such as processing time, battery power, or internet bandwidth, among a steady stream of incoming jobs. The problem is to determine how best to allocate and reallocate these resources to the jobs as they arrive and complete, so as to minimize some global quality-of-service measure. One difficulty is the scheduler's lack of knowledge: the scheduler is said to be on-line if it does not know what jobs will arrive in the future, non-clairvoyant if it does not know the properties of the jobs currently in the system, and distributed if decisions are not made centrally. A second difficulty is that, even with full information, most variants of the problem are NP-complete. The challenge, then, is to find in real time a schedule that is almost as good as the optimal one without having complete information. We provide some answers to this question using the resource augmentation competitive analysis framework. This involves proving that, on any input sequence, the algorithm, when given a constant factor more of the resource, performs within a constant factor of the all-knowing, all-powerful optimal algorithm.
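For reference, the guarantee typically has the following shape (a sketch of the standard definition in generic notation; the objective and the constants vary from paper to paper):

```latex
% s-speed c-competitive: the standard resource augmentation guarantee (sketch).
% A is the online algorithm, OPT the optimal offline scheduler, I any input
% sequence, and cost(.) the quality-of-service objective being minimized.
\[
  \text{$A$ is $s$-speed $c$-competitive if}\qquad
  \forall I:\;\; \mathrm{cost}\bigl(A_s(I)\bigr) \;\le\; c \cdot \mathrm{cost}\bigl(\mathrm{OPT}_1(I)\bigr),
\]
% where A_s runs A with its resource scaled up by a factor of s, and OPT_1 is
% the optimal schedule with unaugmented resources.
```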

Varying Degrees of Parallelizability: As chips with hundreds to thousands of processors come to dominate the market over the next decade, it is generally agreed that developing software to harness their power will be a much more difficult technical challenge than developing the hardware. Suppose you must allocate your processors to incoming jobs in a way that minimizes the average completion time of the jobs. Some of these jobs are parallelizable and hence can effectively utilize any processors you give them; others are sequential, so giving them extra processors is a waste. We prove that if you have twice as many processors as God, you can do almost as well as Him simply by spreading your processors evenly among the jobs that are alive. This result was considered a breakthrough in the area, because no one previously believed it could possibly be true. Ten years later it became a chapter in "The Encyclopedia of Algorithms." We also give a more complex algorithm that needs only 1+epsilon times as many processors.
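Here is a minimal sketch of the ``spread your processors evenly'' idea (often called Equi-partition). The job model, each job consisting of a parallelizable phase followed by a sequential phase, is a simplification chosen for illustration rather than the model from the paper:

```python
# Toy discrete-time simulation of Equi-partition (EQUI):
# at every step, the P processors are divided evenly among the jobs alive.
# Each job is simplified to (arrival_time, parallel_work, sequential_work);
# parallel work uses all processors it is given, sequential work uses at most one.

def equi_schedule(jobs, P, dt=0.01):
    """Return each job's completion time under EQUI. `jobs` is a list of
    (arrival, parallel_work, sequential_work) triples (hypothetical model)."""
    remaining = [[p, s] for (_, p, s) in jobs]
    done = [None] * len(jobs)
    t = 0.0
    while any(d is None for d in done):
        alive = [i for i, d in enumerate(done)
                 if d is None and jobs[i][0] <= t]
        if alive:
            share = P / len(alive)           # even split among alive jobs
            for i in alive:
                par, seq = remaining[i]
                if par > 0:                  # parallel phase: speedup = share
                    remaining[i][0] = max(0.0, par - share * dt)
                else:                        # sequential phase: speedup <= 1
                    remaining[i][1] = max(0.0, seq - min(1.0, share) * dt)
                if remaining[i][0] <= 0 and remaining[i][1] <= 0:
                    done[i] = t + dt
        t += dt
    return done

# Example: two parallelizable jobs and one sequential job arriving together.
print(equi_schedule([(0, 4, 0), (0, 4, 0), (0, 0, 2)], P=2))
```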

Power Management: ``What matters most to the computer designers at Google is not speed, but power, low power, because data centers can consume as much electricity as a city.'' --- Dr. Eric Schmidt, CEO of Google. The most commonly used power management technique is speed scaling, which involves changing the speed of each processor. In tandem with determining, at each time, which job to run on each processor, the operating system needs a speed scaling policy for setting the speed of each processor. We develop operating system algorithms/policies for scheduling processes with varying degrees of parallelism in a multiprocessor setting, in a way that optimizes both a scheduling quality-of-service objective and a power-related objective.
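As one concrete illustration of how the two decisions interact (an assumption-laden sketch, not a policy from our papers): schedule by a standard quality-of-service rule and set the speed so that the power spent matches the number of active jobs, under the common assumption that running at speed s costs power s^alpha:

```python
# Illustrative speed-scaling policy (a sketch, not a result from the papers):
# run at speed s = n**(1/alpha) when n jobs are active, so that the power
# s**alpha equals the number of active jobs, and schedule the active jobs by
# Shortest Remaining Processing Time (SRPT).

def speed(active_jobs, alpha=3.0):
    """Speed to run at when `active_jobs` jobs are in the system."""
    n = len(active_jobs)
    return n ** (1.0 / alpha) if n else 0.0

def pick_job(active_jobs):
    """SRPT: run the job with the least remaining work."""
    return min(active_jobs, key=lambda job: job["remaining"]) if active_jobs else None

# Example: three active jobs, cubic power function.
jobs = [{"id": 1, "remaining": 5.0}, {"id": 2, "remaining": 1.0}, {"id": 3, "remaining": 3.0}]
print(speed(jobs), pick_job(jobs)["id"])   # speed ~ 3**(1/3) ~ 1.44, runs job 2
```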

Broadcast Scheduling: We investigate server scheduling policies that minimize average user-perceived latency in systems where multiple clients request data from a server and the data is returned on a broadcast channel. Because a single broadcast can potentially satisfy many outstanding requests for the same file, such systems scale better to large numbers of clients. One notable commercial example is movies-on-demand. We provide new on-line scheduling algorithms and new analysis techniques.
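For intuition, one natural on-line policy in this setting broadcasts the file whose outstanding requests have waited the longest in total; the sketch below is meant only to illustrate the model, not to reproduce the algorithms analyzed in the papers:

```python
# Illustrative broadcast policy ("Longest Wait First"): at each broadcast
# opportunity, send the page whose outstanding requests have accumulated the
# most total waiting time.

def longest_wait_first(pending, now):
    """pending maps page -> list of request arrival times; returns the page
    to broadcast next, or None if there are no outstanding requests."""
    if not any(pending.values()):
        return None
    return max((p for p in pending if pending[p]),
               key=lambda p: sum(now - t for t in pending[p]))

# Example: page "A" has two old requests, page "B" one recent request.
pending = {"A": [0.0, 1.0], "B": [4.0]}
print(longest_wait_first(pending, now=5.0))   # "A": total wait 9 vs 1
```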

TCP: The standard for reliable communication on the Internet is the well-known Transmission Control Protocol (TCP). It performs well in practice and has been studied through extensive simulation, but there have been few theoretical results. We evaluate the performance of TCP in the traditional competitive analysis framework, comparing its performance with that of an optimal offline algorithm.
* Warning: I do not do TCP AT ALL. I am a theory person. I prove theorems.

Lower Bounds

For both practical and theoretical reasons, we would like to know the minimum amount of time (or space) needed to solve a given computational problem on an input of size n. An upper bound provides an algorithm that achieves some time bound. A lower bound proves that no algorithm, no matter how clever, correctly solves the problem faster. Doing this on general models of computation (e.g., in Java or on a Turing machine) is beyond our reach. For this reason, researchers often prove lower bounds on weaker models of computation.

Cake Cutting: The fair division of resources is an important topic, be it settling a divorce, resolving international issues such as contested underwater mining territories, or simply cutting a cake. The task is to figure out how to divide the resource so that each of the n recipients feels they have received a fair portion based on their own needs and desires. The best algorithm achieves this with O(n log n) operations, where each operation asks a recipient either to specify how much they value a particular piece or to say where they would cut the cake to produce a piece of a given value. We prove a matching lower bound. We also reduce the number of operations to only O(n) by allowing the algorithm to flip coins and to produce an approximately fair solution. Interestingly enough, this work was written about in the Toronto Star [Feb 12, 2006. pg. D.16].
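The O(n log n) upper bound comes from a divide-and-conquer protocol (Even-Paz). Below is a minimal sketch of it, with the two query types implemented numerically against arbitrary valuation densities; the numerical integration and bisection are illustrative stand-ins and are not part of the query-count accounting:

```python
# Sketch of the Even-Paz divide-and-conquer protocol behind the O(n log n)
# query bound. Valuations are density functions on [0, 1]; `value` and `cut`
# play the role of the two query types.

def value(density, a, b, steps=1000):
    """Value query: how much does this agent value the piece [a, b]?"""
    h = (b - a) / steps
    return sum(density(a + (i + 0.5) * h) for i in range(steps)) * h

def cut(density, a, b, target, iters=50):
    """Cut query: the point x with value(density, a, x) == target."""
    lo, hi = a, b
    for _ in range(iters):
        mid = (lo + hi) / 2
        if value(density, a, mid) < target:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

def even_paz(agents, a=0.0, b=1.0):
    """Return {agent index: (left, right)} giving each agent a piece it values
    at >= 1/n of its value for [a, b] (proportional fairness)."""
    if len(agents) == 1:
        idx, _ = agents[0]
        return {idx: (a, b)}
    k = len(agents) // 2
    # Each agent marks the point splitting [a, b] into k/n and (n-k)/n of its value.
    marks = [(cut(f, a, b, value(f, a, b) * k / len(agents)), idx, f)
             for idx, f in agents]
    marks.sort()
    m = marks[k - 1][0]                       # the k-th smallest mark
    left = [(idx, f) for x, idx, f in marks[:k]]
    right = [(idx, f) for x, idx, f in marks[k:]]
    return {**even_paz(left, a, m), **even_paz(right, m, b)}

# Example: three agents with different tastes over [0, 1].
agents = [(0, lambda x: 1.0), (1, lambda x: 2 * x), (2, lambda x: 2 - 2 * x)]
print(even_paz(agents))
```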

Greedy and Dynamic Programming Algorithms: The goal of this research area is to define a model of computation, prove that it to some extent captures an algorithmic paradigm (in this case greedy algorithms or dynamic programming) by showing that many of the key algorithms in the paradigm can be implemented in the model, and then prove that other computational problems cannot be solved efficiently in this model. We have a number of results in this area.

Embedding Distortion: The adversary places n points in the plane. For every pair of points, he tells me the distance between them, distorted by a factor of at most 1+epsilon. My task, without knowing the original placement, is to place the points in the plane again in a way that respects the stated distances within a factor of 1+epsilon'. Surely for small epsilon and reasonably large epsilon', the problem is easy. Surprisingly, we prove that the problem is NP-hard when 1+epsilon is 3.62 and 1+epsilon' is 3.99. My conjecture is that it is poly-time for epsilon' > 2 epsilon and NP-hard for epsilon' < 2 epsilon.
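One way to formalize the promise problem (my paraphrase, so treat the exact direction of the inequalities as an assumption of this sketch):

```latex
% Formalization sketch (paraphrased). p_1,...,p_n are the adversary's points,
% \tilde{d}(i,j) the reported (distorted) distances, and q_1,...,q_n my placement.
\[
  \forall i,j:\qquad
  \frac{\|p_i - p_j\|}{1+\epsilon} \;\le\; \tilde{d}(i,j) \;\le\; (1+\epsilon)\,\|p_i - p_j\|,
\]
\[
  \text{find } q_1,\dots,q_n \in \mathbb{R}^2 \text{ such that}\qquad
  \forall i,j:\qquad
  \frac{\tilde{d}(i,j)}{1+\epsilon'} \;\le\; \|q_i - q_j\| \;\le\; (1+\epsilon')\,\tilde{d}(i,j).
\]
```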

Time-Space Trade-off Lower Bounds for st-Connectivity on JAG Models: The computational problem st-connectivity is to determine whether there is a path from vertex s to vertex t in a graph. The model considered is the JAG (``jumping automaton for graphs'') introduced by Cook and Rackoff in which pebbles are moved around the vertices of the input graph. It is a very natural structured model and is powerful enough so that most known algorithms can be implemented on it. We prove tight lower bounds on how the required time increases exponentially as the space (# of memory cells) available decreases.

Complexity Classes: Instead of studying classes of decision problems based on their computation times, Papadimitriou, Schafer, and Yannakakis categorized search problems into a number of complexity classes. This is particularly important when the associated decision problem is neither known to be in P nor known to be NP-complete, and when the object being searched for is known always to exist. Our paper studies these classes and proves some equivalences and separations among them.

Circuit Depth: Parallel computation is important but not well understood. The next significant hurdle is proving that there is a problem that cannot be solved on a circuit (and, or, not) with only O(log n) depth (i.e. separate NC^1 from NC^2). Towards this goal, Karchmer, Raz, and Wigderson suggested the intermediate step of proving a lower bound on a slightly simplified version of their communication game characterization of circuit depth, which they call the Universal Composition Relation. Our paper gives an almost optimal lower bound for this intermediate problem.

Comparisons Between Models of Parallel Computation: For my master's thesis, I used Ramsey theory and information theory to separate two models of parallel computation. This provided a greater understanding of the partial information a processor learns about the input.

Randomness

Job DAG: We consider the problem of computing bounds on the variance and expectation of the longest path length in a DAG from knowledge of variance and expectation of edge lengths. We present analytic bounds for various simple DAG structures, and present a new algorithm to compute bounds for more general DAG structures. Our algorithm is motivated by an analogy with balance of forces in a network of ``strange'' springs. The problem has applications in reliability analysis and probabilistic verification of interface systems.
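As a point of comparison for such analytic bounds, here is a naive Monte Carlo baseline (not the algorithm from the paper; the edge-length distributions, here truncated Gaussians, are an assumption of the sketch):

```python
# Naive Monte Carlo baseline: sample edge lengths, compute the longest path
# through the DAG by dynamic programming in topological order, and estimate
# the expectation and variance of that length.
import random

def longest_path_sample(nodes, edges, sample):
    """nodes: topologically ordered list; edges: dict node -> list of
    (successor, mean, std); sample: callable (mean, std) -> edge length."""
    dist = {v: 0.0 for v in nodes}
    for u in nodes:
        for v, mean, std in edges.get(u, []):
            dist[v] = max(dist[v], dist[u] + sample(mean, std))
    return max(dist.values())

def estimate(nodes, edges, trials=10000):
    draws = [longest_path_sample(nodes, edges,
                                 lambda m, s: max(0.0, random.gauss(m, s)))
             for _ in range(trials)]
    mean = sum(draws) / trials
    var = sum((x - mean) ** 2 for x in draws) / (trials - 1)
    return mean, var

# Example: a small diamond-shaped DAG with normally distributed edge lengths.
nodes = ["s", "a", "b", "t"]
edges = {"s": [("a", 3.0, 1.0), ("b", 2.0, 0.5)],
         "a": [("t", 2.0, 1.0)],
         "b": [("t", 4.0, 2.0)]}
print(estimate(nodes, edges))
```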

Handling Data

(These were two side papers. I don't supervise people in databases.)

Databases: As part of data mining, we give a fast single-scan algorithm for finding all maximal empty rectangles in large two-dimensional data sets. IBM bought the patent from us.
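For concreteness, here is a brute-force reference implementation of the problem definition (deliberately not the fast single-scan algorithm): a rectangle is maximal when it is empty and each of its four sides is pinned by the bounding box or by a data point.

```python
# Brute-force reference for the maximal empty rectangle problem: enumerate
# axis-aligned rectangles whose sides lie on point coordinates or the bounding
# box, and keep those that are empty and cannot be extended in any direction.
from itertools import combinations

def maximal_empty_rectangles(points, W, H):
    xs = sorted({0.0, W} | {x for x, _ in points})
    ys = sorted({0.0, H} | {y for _, y in points})
    result = []
    for x1, x2 in combinations(xs, 2):
        for y1, y2 in combinations(ys, 2):
            if any(x1 < px < x2 and y1 < py < y2 for px, py in points):
                continue   # not empty
            # A side is "blocked" if it sits on the bounding box or touches a
            # point strictly within the side's extent; all four must be blocked.
            left   = x1 == 0.0 or any(px == x1 and y1 < py < y2 for px, py in points)
            right  = x2 == W   or any(px == x2 and y1 < py < y2 for px, py in points)
            bottom = y1 == 0.0 or any(py == y1 and x1 < px < x2 for px, py in points)
            top    = y2 == H   or any(py == y2 and x1 < px < x2 for px, py in points)
            if left and right and bottom and top:
                result.append((x1, y1, x2, y2))
    return result

# Example: two points inside a 4 x 3 box.
print(maximal_empty_rectangles([(1.0, 1.0), (3.0, 2.0)], W=4.0, H=3.0))
```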

Data Transmission: In current transmission networks, packets of data often get lost in chaotic bursts. Applications such as video conferencing and video on demand suffer because the image ``can break up with a jarring abruptness'' even when only a few packets have been lost. We introduce a novel approach called Priority Encoding Transmission (PET) that ensures the video image will degrade gracefully with the number of packets lost, independent of which packets are lost.
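To convey the idea only: each priority level is spread across the outgoing packets with an amount of redundancy matching its importance, so the most important data survives the most loss. The sketch below uses naive replication as a stand-in for the erasure codes PET actually relies on:

```python
# Toy illustration of the PET idea using replication instead of real erasure
# codes: each priority level is copied into a fraction of the packets
# proportional to its priority, so higher-priority data survives more loss.
import random

def make_packets(levels, n_packets=10):
    """levels: dict mapping level name to (data, redundancy), where redundancy
    is the fraction of packets carrying a copy of that level (a stand-in for
    an erasure-code rate)."""
    packets = [dict() for _ in range(n_packets)]
    for name, (data, redundancy) in levels.items():
        carriers = random.sample(range(n_packets), max(1, int(redundancy * n_packets)))
        for i in carriers:
            packets[i][name] = data
    return packets

def receive(packets, loss_rate=0.5):
    """Drop each packet independently and recover whatever levels survive."""
    survivors = [p for p in packets if random.random() > loss_rate]
    recovered = {}
    for p in survivors:
        recovered.update(p)
    return recovered

# High-priority layer (e.g., a coarse video frame) in 80% of packets,
# low-priority enhancement layer in 20%.
levels = {"base": ("coarse frame", 0.8), "detail": ("fine detail", 0.2)}
print(receive(make_packets(levels), loss_rate=0.5))
```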
