Parallel and Distributed Computing
Please remember to reload this page occasionally, as it
will be modified frequently.
Class Times and Location
January and February (before reading week) Tim Brecht
Tuesday 10:30 - 12:00 COSC 5494 S171 Ross
Thursday 10:30 - 12:00 COSC 5494 214 Stong
March and April (after reading week) Harjinder Sandhu
Monday 10:30 - 12:00 COSC 5494 216 Stong
Wednesday 10:30 - 12:00 COSC 5494 220 Stong
Note: please notify us if you have a time conflict with another
course you would like to take.
Goals / Purpose
- Broad but Thorough Background in Parallel Computing
- Preparation for Research in the Area
Class Participation 5%
One Assignment 15% Due: Tuesday, February 13
Project Proposal 10% Due: Friday, February 23
Project Presentation 10% Due: Last week of classes
Final Project 60% Due: Friday, April 12
Note: you should begin thinking about a project from day one
and start working on a project proposal shortly after.
NOTE: A number of pointers to various pieces of documentation
will be provided for your use in this course.
There is no need to print the vast majority of this material.
If you print things unnecessarily, these helpful pointers will disappear.
Assignment 1 requires the use of two parallel programming
packages, PVM and TreadMarks. The following HTML guide for PVM
and user documentation for TreadMarks will be both useful and necessary
for completing this assignment.
PVM User's Guide and Tutorial
Start by reading the Section on "Using PVM" (skipping "How to obtain PVM").
You must create a pvm3 directory in your home directory.
In the pvm3 directory you must add a bin directory and under the bin
directory you must have a directory for each architecture
that you wish to run your program on.
Use SGI5 for the SGI machines and SUN4SOL2 for the SUN machines.
(See ~brecht/pvm3 for examples).
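The directory setup described above amounts to a few shell commands; a minimal sketch, using the SGI5 and SUN4SOL2 architecture names mentioned above:

```shell
# Create the pvm3 directory tree PVM expects in your home directory,
# with one bin subdirectory per architecture you will run on.
mkdir -p ~/pvm3/bin/SGI5
mkdir -p ~/pvm3/bin/SUN4SOL2
```

PVM looks for your compiled executables in the bin subdirectory matching each host's architecture, which is why one directory per architecture is required.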
Have a look at ~brecht/.cshrc as well as the modified
Makefiles in ~brecht/pvm3/examples.
Try to get the hello and hello_other examples working.
Configure a machine with multiple processors and run
hello a number of times.
PVM Introduction Paper
Note that this is a very dated paper, but it will give you an overview.
Use the example programs and the "PVM User's Guide and Tutorial" to
help you to write programs.
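To give a feel for the structure of the hello and hello_other exercise, here is a rough sketch adapted from the style of the PVM User's Guide. It is not a substitute for the working examples in ~brecht/pvm3/examples, and it must be compiled against the PVM library and run under a started pvmd, so treat it as an illustration of the spawn/pack/send/receive pattern rather than a drop-in program.

```
/* hello.c -- parent task: spawns hello_other and waits for a message. */
#include <stdio.h>
#include "pvm3.h"

int main(void)
{
    int cc, tid;
    char buf[100];

    printf("i'm t%x\n", pvm_mytid());   /* enrolls this task in PVM */
    cc = pvm_spawn("hello_other", (char **)0, PvmTaskDefault, "", 1, &tid);
    if (cc == 1) {
        pvm_recv(-1, -1);               /* block for a message from anyone */
        pvm_upkstr(buf);                /* unpack the string it carries */
        printf("from t%x: %s\n", tid, buf);
    } else {
        printf("can't start hello_other\n");
    }
    pvm_exit();                         /* leave PVM before exiting */
    return 0;
}

/* hello_other.c -- child task: sends a greeting back to its parent. */
#include <string.h>
#include "pvm3.h"

int main(void)
{
    int ptid = pvm_parent();            /* task id of the task that spawned us */
    char buf[100];

    strcpy(buf, "hello, world from the child task");
    pvm_initsend(PvmDataDefault);       /* initialize a send buffer */
    pvm_pkstr(buf);                     /* pack the string into it */
    pvm_send(ptid, 1);                  /* send it with message tag 1 */
    pvm_exit();
    return 0;
}
```

Note the division of labour: the parent enrolls, spawns, and receives; the child packs and sends. This explicit pack/send/receive/unpack cycle is the essence of the message-passing model you will be comparing against TreadMarks' shared-memory model in the assignment.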
Message Passing versus Distributed Shared Memory on Networks of Workstations
Minh Nguyen Travelling Salesperson Problem
Diego Moscoso Sorting
Patrick Chan LU Decomposition
Nicole Aucoin Neural Networks
Lian Pi Fast Fourier Transform
Edwin Law Jacobi
Jianlin Guo SOR
Lizhi Li Cholesky Decomposition
Calendar Course Description
This course investigates fundamental problems in
writing efficient and scalable parallel applications
with emphasis on operating systems support and
performance evaluation techniques.
Part of the course will involve
designing, writing, and comparing parallel programs
written using message-passing and shared-memory models while considering
the support for effective design, implementation, debugging, testing,
and performance evaluation of parallel applications and operating systems.
Expanded Course Description
The purpose of this course is to present students with an
introduction to state-of-the-art techniques for implementing
software for high performance computers.
This course will first motivate the need for
higher performance computers (parallel processing)
by providing a high level introduction to a few
computationally intensive but significant problem areas.
We discuss general issues in parallel computing including:
speedup, efficiency, limits to speedup, Amdahl's Law, iso-efficiency,
problem decomposition, granularity of computation,
load balancing, data locality, and
the relationship between software and architecture.
Different approaches to writing parallel software
for shared-memory and message-passing paradigms
are discussed including:
parallelizing compilers, parallel languages,
and parallel language extensions.
We examine current operating systems and issues related
to their support of parallel computation (or lack thereof).
Other possible topics are:
the design and implementation of efficient and effective thread packages,
process management, virtual memory,
and file systems for scalable parallel processing.
This course will not only provide students with the background
required to conduct research in the area of parallel applications and
operating system design but will also train them to critically and
effectively evaluate application and system software performance.
Students are expected to have experience in C and UNIX programming,
knowledge of operating system fundamentals, and
at least a basic knowledge of uniprocessor and multiprocessor architectures.
Tentative Outline of Topics
- Why parallel computers?
- Types of parallel systems
- Types of architectures (MIMD versus SIMD)
- Shared-memory vs Message passing
- Hardware/Software interface
Parallel Program Metrics
- Limits to speedup (Amdahl's Law)
- Ease of Programming
- Ease of Understanding
- Response time
Performance Evaluation Methods
Approaches to Parallelization
- Parallelizing Compilers
- Parallel Languages
- Parallel Language Extensions
Programming / Performance Issues
- Problem Decomposition
- Load Balancing
- Problems of Scale
- Memory Constraints
The World-Wide Supercomputing Project
- What's wrong with current parallel systems?
- Using Java to build a new system
- Other Alternatives
Distributed Shared Memory (DSM)
Some Systems for Parallel Computing
- PVM / MPI
Potential Reading Topics
- Parallel Architectures
- Parallel Software Systems
- Cache / Memory Consistency
- TreadMarks / Munin
- High-speed communication / Active Messages / UNET
- PVM / MPI
- Ease of Programming (Enterprise)
Possible Project Topics
Last modified: January 22, 1996