Machine Learning: (I have done NO research but am keen on learning)

Research Topics

Scheduling: competitive analysis, algorithms, lower bounds, resource augmentation
     Varying Degrees of Parallelizability: parallel, sequential
     Power Management
     Broadcast Scheduling
     TCP
Lower Bounds
     Branching Programs
     Communication Complexity
     Cake Cutting
     Greedy and Dynamic Programming Algorithms
     Embedding Distortion
     Time-Space Trade-off Lower Bounds for st-Connectivity on JAG Models
     Complexity Classes
     Circuit Depth
     Comparisons Between Models of Parallel Computation
Randomness
     Job DAG
Handling Data
     Data Bases
     Data Transmission

Machine Learning

Computers can now drive cars and find cancer in x-rays. For better or worse, this will change the world (and the job market). Strangely, designing these algorithms is not done by telling the computer what to do or even by understanding what the computer does. The computers learn by themselves from lots and lots of data and lots of trial and error. This learning process is more analogous to how brains evolved over billions of years. The machine itself is a neural network, which models both the brain and silicon and-or-not circuits, both of which are great for computing. The only difference with neural networks is that what they compute is determined by weights, and small changes in these weights give you small changes in the result of the computation. The process for finding an optimal setting of these weights is analogous to finding the bottom of a valley. "Gradient Descent" achieves this by using the local slope of the hill (derivatives) to direct the travel down the hill, i.e. small changes to the weights. There is some theory. If a machine is found that gives the correct answers on the randomly chosen training data without simply memorizing it, then we can prove that with high probability this same machine will also work well on never-before-seen instances.
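
To make the "walk downhill using the local slope" idea concrete, here is a minimal sketch of gradient descent on a toy error surface. The error function, step size, and starting point are illustrative choices of mine, not anything from a particular paper.

    # Gradient descent: repeatedly nudge the weights opposite to the local slope.
    def gradient_descent(grad, w, step=0.1, iters=100):
        for _ in range(iters):
            w = [wi - step * gi for wi, gi in zip(w, grad(w))]
        return w

    # Toy error surface error(w) = (w0-3)^2 + (w1+1)^2, whose valley bottom is at (3, -1).
    grad = lambda w: [2 * (w[0] - 3), 2 * (w[1] + 1)]
    print(gradient_descent(grad, [0.0, 0.0]))   # converges towards [3.0, -1.0]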

Second level of Machine Learning (These topics are likely 30 min each)
- Generalizing from Training Data: We can prove that if we successfully train on enough randomly chosen training data, then the produced machine generalizes well to random examples never seen before.
- Reinforcement Learning & Markov Chains: The basic theory needed to learn to solve some complex multi-step task.
- Dimension Reduction & Maximum Likelihood: How to compress your data while retaining the key features.
- Convolutional & Recurrent Networks: Used for learning images and sound that are invariant over location and time.
- Generative Adversarial Networks: Used for understanding and producing a random data item.
- Back Propagation: We learn how to use matrix multiplication to compute the slope of your error space.
- Bayesian Inference: Given the probability of a symptom given a disease, we can compute the probability of a disease given a symptom (see the formula written out after this list).
- Decision Trees & Clustering: The basics.
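
For the Bayesian Inference topic above, the formula in question is just Bayes' rule, written out here for concreteness with D = disease and S = symptom:

    \Pr[D \mid S] \;=\; \frac{\Pr[S \mid D]\,\Pr[D]}{\Pr[S]}
                  \;=\; \frac{\Pr[S \mid D]\,\Pr[D]}{\Pr[S \mid D]\,\Pr[D] + \Pr[S \mid \neg D]\,\Pr[\neg D]}.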

Scheduling Problems

Most broadly, this area of research considers scheduling a shared resource, such as processing time, battery power, or internet bandwidth, among a steady stream of incoming jobs. The problem is to determine how to best allocate and reallocate these resources to the jobs as they arrive and complete, in a way that minimizes some global quality-of-service measure. One difficulty is the scheduler's lack of knowledge. The scheduler is said to be on-line if it does not know what jobs are going to arrive in the future, non-clairvoyant if it does not know the properties of the jobs currently in the system, and distributed if decisions are not made centrally. A second difficulty is that even with this information, most variants of the problem are NP-complete. The question is how to find, in real time and without complete information, a schedule that is almost as good as the optimal schedule. We provide some answers to this question using the resource augmentation competitive analysis framework. This involves proving that on any input sequence, the algorithm, when given a constant factor more of the resource, performs within a constant factor of the all-knowing, all-powerful optimal algorithm.
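
One standard way this framework is made precise (the notation here is my own shorthand): an on-line algorithm A is said to be s-speed c-competitive if for every input sequence I,

    A_s(I) \;\le\; c \cdot \mathrm{OPT}_1(I),

where A_s(I) is the objective value A achieves when given s times the resource, and \mathrm{OPT}_1(I) is the value achieved by the all-knowing optimal off-line schedule with the original resource.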

Varying Degrees of Parallelizability: As chips with hundreds to thousands of processors come to dominate the market in the next decade, it is generally agreed that developing software to harness their power is going to be a much more difficult technical challenge than the development of the hardware. Suppose you must allocate your processors to incoming jobs in the way that minimizes the average completion time of the jobs. Suppose that some of these jobs are parallelizable and hence can effectively utilize any processors that you give them. Others are sequential, so giving them extra processors is a waste. We prove that if you have twice as many processors as God, you can do almost as well as him simply by spreading your processors evenly among the jobs that are alive. This result was considered a breakthrough in this area, because no one previously believed that it could possibly be true. Ten years later it became a chapter in "The Encyclopedia of Algorithms." We also give a more complex algorithm that only needs 1+epsilon times as many processors.
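
The "spread your processors evenly among the alive jobs" rule (Equi-partition) is simple enough to sketch. Below is a toy discrete-time simulation under simplifying assumptions of my own (each job is either fully parallelizable or purely sequential); it illustrates the rule, not the model or the analysis from the papers.

    def equi_schedule(jobs, P, dt=0.01):
        """jobs: list of dicts with 'arrival', 'work', and 'parallel' (bool).
        At every instant, the P processors are split evenly among the alive jobs."""
        t, done = 0.0, {}
        remaining = [j['work'] for j in jobs]
        while len(done) < len(jobs):
            alive = [i for i, j in enumerate(jobs)
                     if j['arrival'] <= t and i not in done]
            share = P / len(alive) if alive else 0.0
            for i in alive:
                # a parallel job uses its whole share; a sequential job
                # can make use of at most one processor
                rate = share if jobs[i]['parallel'] else min(1.0, share)
                remaining[i] -= rate * dt
                if remaining[i] <= 0:
                    done[i] = t
            t += dt
        return sum(done.values()) / len(jobs)   # average completion time

    # Toy instance: one parallel and one sequential job sharing 4 processors.
    jobs = [{'arrival': 0, 'work': 4, 'parallel': True},
            {'arrival': 0, 'work': 2, 'parallel': False}]
    print(equi_schedule(jobs, P=4))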

Power Management: ``What matters most to the computer designers at Google is not speed, but power, low power, because data centers can consume as much electricity as a city.'' --- Dr. Eric Schmidt, CEO of Google. The most commonly used power management technique is speed scaling, which involves changing the speed of each processor. In tandem with determining, at each time, which job to run on each processor, the operating system needs a speed scaling policy for setting the speed of each processor. We consider developing operating system algorithms/policies for scheduling processes with varying degrees of parallelism in a multiprocessor setting in a way that optimizes both some schedule quality-of-service objective and some power-related objective.
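
A common modelling assumption in the speed scaling literature (background only, not a claim about these specific papers) is that a processor running at speed s consumes power

    P(s) \;=\; s^{\alpha} \quad (\alpha \approx 3), \qquad \text{energy} \;=\; \int P(s(t))\, dt,

so running twice as fast costs roughly eight times the power; this is why the job scheduler and the speed scaling policy have to be designed together.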

Broadcast Scheduling: We investigate server scheduling policies to minimize the average user-perceived latency in systems where multiple clients request data from a server and the data is returned on a broadcast channel. The possibility of satisfying many requests for a common file with a single broadcast allows the system to scale to large numbers of clients. One notable commercial example is movies-on-demand. We provide new on-line scheduling algorithms and new analysis techniques.

TCP: The standard for reliable communication on the Internet is the well-known Transmission Control Protocol (TCP). It performs well in practice and in extensive simulations, but there have been minimal theoretical results. We evaluate the performance of TCP using the traditional competitive analysis framework, comparing its performance with that of an optimal offline algorithm.
* Warning: I do not do TCP AT ALL. I am a theory person. I prove theorems.

Lower Bounds

For both practical and theoretical reasons, we would like to know the minimum amount of time (or space) needed to solve a given computational problem on an input of size n. An upper bound provides an algorithm that achieves some time bound. A lower bound proves that no algorithm, no matter how clever, correctly solves the problem faster. Doing this on general models of computation (e.g. in JAVA or on a Turing Machine) is beyond our reach. For this reason, researchers often prove lower bounds on weaker models of computation.

Branching Programs: The most powerful model of computation for measuring the amount of space used by an algorithm is the branching program. It is represented by a directed acyclic graph of states. At each point in time, the computation is in a particular state. Imagine that the name of this state specifies everything the computation knows. For example, being in state q means that the computation knows that it is on the second line of the code and that its variables x and y have the stated values. If it then reads that z=8 and forgets the value of x, it branches into a new state q' whose name encodes this updated knowledge. The number of "bits" of memory that the algorithm uses is said to be the log of the number of states, because this is the number of bits needed simply to remember the name of the state. We have many lower bound papers, two of them in the last five years specifically about branching programs.
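
In symbols (just making the space measure above explicit): a branching program with Q states corresponds to an algorithm using

    S \;=\; \log_2 Q \ \text{ bits of memory},

so a lower bound on the number of states needed is the same thing as a lower bound on space.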

Communication Complexity: Suppose Alice knows some information x and Bob knows y and together they want to solve some task f(x,y). Communication complexity measures the number of bits they must communicate in order to achieve this. Shannon's entropy is a good measure of the amount of information sent. We prove a number of results in this area. One of them looks at how much a little advice can help Alice and Bob.
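
For reference (a standard definition, not a result of ours), Shannon's entropy of a random variable X is

    H(X) \;=\; -\sum_{x} \Pr[X = x]\,\log_2 \Pr[X = x],

which measures the expected number of bits needed to describe a sample of X.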

Cake Cutting: The fair division of resources is an important topic, be it settling a divorce, resolving international issues such as contested underwater mining territories, or simply cutting a cake. The task is to figure out how to divide the resource so that each of the n recipients feels they have received a fair portion based on their own needs and desires. The best algorithm achieves this with O(n log n) operations, where each operation asks a recipient either to specify how much they value a particular piece or to say where they would cut the cake to produce a piece of a given value. We prove a matching lower bound. We also reduce the time to only O(n) operations by allowing the algorithm to flip coins and to provide an approximately fair solution. Interestingly enough, this work was written about in the Toronto Star [Feb 12, 2006. pg. D.16]. This faster randomized algorithm is described at https://en.wikipedia.org/wiki/Edmonds-Pruhs_protocol
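
To make the query model concrete, here is a toy implementation of the classic O(n log n) divide-and-conquer protocol (Even-Paz) that uses exactly these two kinds of operations. It is NOT our randomized O(n) algorithm; the valuations, the numeric integration, and all names are illustrative choices of mine.

    def value(density, a, b, steps=1000):
        """Value query: how much does this recipient value the piece [a, b]?"""
        h = (b - a) / steps
        return sum(density(a + (i + 0.5) * h) for i in range(steps)) * h

    def cut(density, a, b, target, iters=50):
        """Cut query: the point x with value(density, a, x) = target (binary search)."""
        lo, hi = a, b
        for _ in range(iters):
            mid = (lo + hi) / 2
            if value(density, a, mid) < target:
                lo = mid
            else:
                hi = mid
        return (lo + hi) / 2

    def even_paz(agents, a, b, alloc):
        """Recursively split [a, b] so that every recipient gets a proportional piece."""
        if len(agents) == 1:
            alloc[agents[0][0]] = (a, b)
            return
        k = len(agents) // 2
        marks = [(cut(d, a, b, value(d, a, b) * k / len(agents)), name, d)
                 for name, d in agents]
        marks.sort(key=lambda m: m[0])
        x = marks[k - 1][0]                       # the k-th smallest mark
        even_paz([(n, d) for _, n, d in marks[:k]], a, x, alloc)
        even_paz([(n, d) for _, n, d in marks[k:]], x, b, alloc)

    # Three recipients whose valuations favour different parts of the cake [0, 1].
    agents = [("left", lambda t: 2 - 2 * t), ("flat", lambda t: 1.0),
              ("right", lambda t: 2 * t)]
    alloc = {}
    even_paz(agents, 0.0, 1.0, alloc)
    print(alloc)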

Greedy and Dynamic Programming Algorithms: The goal of this research area is to define a model of computation, prove that it to some extent captures an algorithmic paradigm (in this case greedy algorithms or dynamic programming) because many of the key algorithms in the paradigm can be implemented in the model, and then prove that other computational problems can't be solved efficiently in this model. We have a number of results in this area.

Embedding Distortion: The adversary places n points in the plane. For every pair of points, he tells me the distance between them, distorted by a factor of at most 1+epsilon. My task, without knowing the original placement, is to again place the points in the plane in a way that respects the stated distances within a factor of 1+epsilon'. Surely for small epsilon and reasonably large epsilon', the problem is easy. Surprisingly, we did prove that the problem is NP-hard when 1+epsilon is 3.62 and 1+epsilon' is 3.99. My conjecture is that it is poly-time for epsilon' > 2 epsilon and NP-hard for epsilon' < 2 epsilon.

Time-Space Trade-off Lower Bounds for st-Connectivity on JAG Models: The computational problem st-connectivity is to determine whether there is a path from vertex s to vertex t in a graph. The model considered is the JAG (``jumping automaton for graphs'') introduced by Cook and Rackoff in which pebbles are moved around the vertices of the input graph. It is a very natural structured model and is powerful enough so that most known algorithms can be implemented on it. We prove tight lower bounds on how the required time increases exponentially as the space (# of memory cells) available decreases.

30th Anniversary of my PhD. Please humor me.
Tight Lower Bounds for st-Connectivity on the
Node Named Jumping Automata for Graphs NNJAG Model
It is a structured model,
- It moves pebbles around the input graph, i.e. it either remembers the entire name of a node or nothing about it.
- It can't go to a node without walking there first. (The jumping is to nodes already containing pebbles.)
It can do arbitrary calculations on the info learned and remembered, but it has limited memory space. With O(n log n) memory it could traverse and memorize the entire graph in linear time and then solve the halting problem.
With less space, it can't remember which nodes it has been to before and hence wanders aimlessly.
No more work has been done in this area because:
- In this model, the results for directed graphs are tight and good for undirected graphs (mine)
- In any stronger model, lower bounds are too hard.
https://www.eecs.yorku.ca/~jeff/research/jag/ck_journal.pdf
https://www.eecs.yorku.ca/~jeff/research/jag/talk.pdf

Complexity Classes: Instead of studying classes of decision problems based on their computation times, Papadimitriou, Schafer, and Yannakakis categorized search problems into a number of complexity classes. This is particularly important when the associated decision problem is known neither to be in P nor to be NP-complete and when the object being searched for is known to always exist. Our paper studies these classes and proves some equivalences and separations among them.

Circuit Depth: Parallel computation is important but not well understood. The next significant hurdle is proving that there is a problem that cannot be solved on a circuit (and, or, not) with only O(log n) depth (i.e. separate NC^1 from NC^2). Towards this goal, Karchmer, Raz, and Wigderson suggested the intermediate step of proving a lower bound on a slightly simplified version of their communication game characterization of circuit depth, which they call the Universal Composition Relation. Our paper gives an almost optimal lower bound for this intermediate problem.
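
For background, the Karchmer-Wigderson characterization this game comes from (a standard theorem, stated here from memory): for a Boolean function f, the minimum depth of an (and, or, not) circuit computing f equals the communication complexity of the game KW_f in which Alice holds an input x with f(x)=1, Bob holds an input y with f(y)=0, and they must agree on a position i with x_i \ne y_i; in symbols,

    \mathrm{depth}(f) \;=\; \mathrm{CC}(KW_f).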

Comparisons Between Models of Parallel Computation: For my masters, I used Ramsey theory and information theory to separate two models of parallel computation. This provided a greater understanding of the partial information a processor learns about the input.

Randomness

Job DAG: We consider the problem of computing bounds on the variance and expectation of the longest path length in a DAG from knowledge of variance and expectation of edge lengths. We present analytic bounds for various simple DAG structures, and present a new algorithm to compute bounds for more general DAG structures. Our algorithm is motivated by an analogy with balance of forces in a network of ``strange'' springs. The problem has applications in reliability analysis and probabilistic verification of interface systems.
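
As an elementary illustration only (NOT the spring-analogy algorithm from the paper): by Jensen's inequality E[max] >= max(E), so running a standard longest-path computation on the expected edge lengths already gives a lower bound on the expected longest path. The graph encoding and names below are illustrative choices of mine.

    from functools import lru_cache

    def expected_longest_path_lb(succ, source, sink):
        """succ: dict mapping node -> list of (child, expected edge length).
        Returns a lower bound on the expected longest source-to-sink path
        (the graph is assumed to be a DAG, so the recursion terminates)."""
        @lru_cache(maxsize=None)
        def best(u):
            if u == sink:
                return 0.0
            lengths = [w + best(v) for v, w in succ.get(u, [])]
            return max(lengths, default=float('-inf'))
        return best(source)

    # Toy DAG with two s-to-t paths of expected lengths 7 and 4.
    succ = {'s': [('a', 2.0), ('b', 3.0)], 'a': [('t', 5.0)], 'b': [('t', 1.0)]}
    print(expected_longest_path_lb(succ, 's', 't'))   # 7.0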

Handling Data

(These were two side papers. I don't supervise people in data bases)

Data Bases: As part of data mining, we give a fast single scan algorithm for finding all maximal empty rectangles in large two-dimensional data sets. IBM bought the patent from us.

Data Transmission: In current transmission networks, packets of data often get lost in chaotic bursts. Applications such as video conferencing and video on demand suffer because the image ``can break up with a jarring abruptness'' even when only a few packets have been lost. We introduce a novel approach called Priority Encoding Transmission (PET) that ensures the video image will degrade gracefully with the number of packets lost, independent of which packets are lost.
