Parallel and Distributed Computing
(COSC 6422; formerly COSC 5494)
Note: although the title of this course is Parallel and Distributed
Computing, the real focus this year will be on parallel computing.
Please remember to reload this page occasionally, as it
will be modified frequently.
Class Times and Location
Tuesday 10:30 - 12:00 COSC 6422 321 Petrie *** note the change
Thursday 10:30 - 12:00 COSC 6422 321 Petrie *** note the change
Please notify me if you have a time conflict with another
graduate course you would like to take.
List of current office hours
Some of the Overheads Used in Class
Problem Partitioning / Matrix Multiply
Parallel Program Performance I
Parallel Program Performance II
Goals / Purpose
- Broad Background in Parallel Computing
- Preparation for Research in the Area
One Assignment 20% Due : Tuesday, February 10
Project Proposal 5% Due : Tuesday, February 10
Term Exam 20% On : Tuesday, February 24
Project Presentation 10% Due : Last week of classes
Final Project 35% Due : At your presentation
Class Participation 10%
You should begin thinking about a project from day one and start
working on a project proposal shortly after.
You should also start assignment one as soon as possible.
Mid Term Exam
The midterm will cover material covered in class up to
the day of the exam.
Your assignments and likely your projects will be done
in the LCSR (Laboratory for Computer Systems Research).
General LCSR and Departmental Computing Information
This course investigates fundamental problems in
writing efficient and scalable parallel applications
with emphasis on operating systems support and
performance evaluation techniques.
Part of the course will involve
designing, writing, and comparing parallel programs
written in both the message-passing and shared-memory models,
while considering the support needed for effective design, implementation,
debugging, testing, and performance evaluation of parallel applications
and operating systems.
Expanded Course Description
The purpose of this course is to present students with an
introduction to state-of-the-art techniques for implementing
software for high performance computers.
This course will first motivate the need for
higher performance computers (parallel processing)
by providing a high level introduction to a few
computationally intensive but significant problem areas.
We discuss general issues in parallel computing including:
speedup, efficiency, limits to speedup, Amdahl's Law, iso-efficiency,
problem decomposition, granularity of computation,
load balancing, data locality, and
the relationship between software and architecture.
Different approaches to writing parallel software
for shared-memory and message-passing paradigms
are discussed including:
parallelizing compilers, parallel languages,
and parallel language extensions.
We examine current operating systems and issues related
to their support of parallel computation (or lack thereof).
Other possible topics are:
the design and implementation of efficient and effective thread packages,
process management, virtual memory,
and file systems for scalable parallel processing.
This course will not only provide students with the background
required to conduct research in the area of parallel applications and
operating system design but will also train them to critically and
effectively evaluate application and system software performance.
Students are expected to have experience in C and UNIX programming,
knowledge of operating systems fundamentals, and at least
a basic knowledge of uniprocessor and multiprocessor architectures.
List of Some Possible Topics
- Why parallel computers?
- Types of parallel systems
- Types of architectures (MIMD versus SIMD)
- Shared-memory vs. message-passing
- Hardware/Software interface
Parallel Program Metrics
- Limits to speedup (Amdahl's Law)
- Ease of Programming
- Ease of Understanding
- Response time
Performance Evaluation Methods
Approaches to Parallelization
- Parallelizing Compilers
- Parallel Languages
- Parallel Language Extensions
Programming / Performance Issues
- Problem Decomposition
- Load Balancing
- Problems of Scale
- Memory Constraints
The World-Wide Supercomputing Project
- What's wrong with current parallel systems
- Using Java to build a new system
- Other Alternatives
Distributed Shared Memory (DSM)
Some Systems for Parallel Computing
- PVM / MPI
Potential Reading Topics
- Parallel Architectures
- Parallel Software Systems
- Cache / Memory Consistency
- Treadmarks / Munin
- High-speed communication / Active Messages / UNET
- PVM / MPI
- Ease of Programming (Enterprise)
Possible Sources of Information
Possible Project Topics
Last modified: January 4, 1998