David Pearson
Research Interests
My thesis investigates highly scalable parallel computers consisting
of very simple processors connected in a 3-dimensional mesh.
The guiding vision of this work is of a time, perhaps 50 years hence,
in which materials science has taken the place of computer architecture,
and computers are crystals, with each processor a molecule in the lattice.
Long before this goal can be realized, we can and should prepare for the
ubiquitous parallelism it will offer. Our algorithms must heed the laws
of physics and pay attention, as chip designers do now, to spatial layout
and the (currently hidden) cost of communication. This can be accomplished
by designing our algorithms for a 3-D mesh.
Pursuing this vision requires both theoretical and practical work.
So far, my work could be characterized as a feasibility study: I
have produced a 3-D cellular architecture that could
be realized efficiently with current hardware, a simulator for this
architecture, algorithms, programs, and an operating system design
for general-purpose computing.
(I believe that general-purpose computers, not problems like protein
structure, are the grand challenge of parallel architecture. Parallel
computational power will not really succeed until it becomes a commodity
and is sold for desktop machines or video games.)
Directions for future research include a VLSI implementation of this architecture
and the design of a programming language. Most widely used languages hide
the details of the machine's instruction set but reflect the underlying
von Neumann architecture. I believe this connection to the architecture
has been good for algorithm design; to exploit parallel
machines fully, we need a language for which the costs of operations are as easy
to estimate as they are for C++ on a von Neumann machine.
Publications
- S. D. Dunten, D. S. Pearson, and W. Y. Arms. ``The Kiewit Network: A
High-Speed Campus Network''. In 25th IEEE Computer Society
International Conference (IEEE COMPCON), pp. 247-254, (Fall 1982).
- D. Pearson, S. U. Pillai, and Y. Lee. ``An Algorithm for Near-Optimal
Placement of Sensor Elements''. IEEE Transactions on Information
Theory 36, pp. 1280-1284 (1990).
- D. Pearson and V. Vazirani. ``A Fast Parallel Algorithm for Finding a
Maximal Bipartite Set''. In Foundations of Software Technology
and Theoretical Computer Science 10 (FST&TCS), pp. 225-231,
(1990). Published as Lecture Notes in Computer Science
472.
- D. Pearson and V. Vazirani. ``Efficient Sequential and
Parallel Algorithms for Maximal Bipartite Sets''. Journal of
Algorithms 14, pp. 171-179 (1993).
- R. Johnson, D. Pearson, and K. Pingali. ``Finding Regions Fast:
Single Entry Single Exit and Control Regions in Linear Time''.
Cornell CS Tech. Report 93-1365.
- R. Johnson, D. Pearson, and K. Pingali. ``The program structure tree:
Computing control regions in linear time''. In Proceedings of the
Sigplan '94 Conference on Programming Language Design and Implementation
(PLDI), pp. 171-185, (1994). Published as ACM SIGPLAN Notices
29(6).
- D. Pearson. ``A Polynomial-time Algorithm for the
Change-Making Problem'', Cornell CS Tech. Report 94-1433.
- B. Hao and D. Pearson. ``Instruction Scheduling and Global Register
Allocation for SIMD Multiprocessors''. In International Workshop on
Parallel Algorithms for Irregularly Structured Problems
(Irregular 95), pp. 81-86, Sept. 1995.
Published as Lecture Notes in Computer Science 980.
- B. Hao, D. Pearson, and R. Zippel. ``Global Register Allocation for SIMD
Multiprocessors''. Journal of Computer Science and Technology,
Jan. 1996, Allerton Press.
- D. Pearson. ``A Parallel Implementation of RSA''.
In Selected Areas in Cryptography (SAC), Aug. 1996 (to appear).
Computer Science Department
5133 Upson Hall
Cornell University
Ithaca, New York 14853-7501, USA
Email: pearson@cs.cornell.edu
Tel: (607) 255-9189
Fax: (607) 255-4428