Parallel and Distributed Computing

(COSC 5494)


Please remember to reload this page occasionally, as it will be modified frequently.


Contents


Instructors


Class Times and Location

January and February (before reading week)  Tim Brecht
Tuesday   10:30 - 12:00 COSC 5494    S171 Ross
Thursday  10:30 - 12:00 COSC 5494    214  Stong

March and April (after reading week)        Harjinder Sandhu
Monday    10:30 - 12:00 COSC 5494    216  Stong
Wednesday 10:30 - 12:00 COSC 5494    220  Stong

Note: please notify us if you have a time conflict with another
      course you would like to take.

Goals / Purpose


Evaluation

Class Participation   5%
One Assignment       15%        Due : Tuesday, February 13
Project Proposal     10%        Due : Friday,  February 23
Project Presentation 10%        Due : Last week of classes
Final Project        60%        Due : Friday, April 12

Note: you should begin thinking about a project from day one
      and start working on a project proposal shortly thereafter.

Assignments

NOTE: A number of pointers to various pieces of documentation will be provided for your use in this course. There is no need to print the vast majority of this material. If you print things unnecessarily, these helpful pointers will disappear.

Assignment 1

Assignment 1 requires the use of two parallel programming packages, PVM and TreadMarks. The following HTML guide for PVM and user documentation for TreadMarks will be both useful and necessary for completing this assignment.
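
To give a feel for the message-passing style PVM supports, here is a minimal, untested sketch in C of a master spawning one worker and sending it an integer. The executable name "pvm_hello" and the message tag are placeholders; consult the PVM guide above for the authoritative interface.

    #include <stdio.h>
    #include "pvm3.h"

    int main(void)
    {
        int mytid  = pvm_mytid();      /* enroll this process in PVM   */
        int parent = pvm_parent();     /* tid of spawning task, if any */

        if (parent == PvmNoParent) {
            /* Master: spawn one copy of this program and send it 42. */
            int child, n = 42;
            pvm_spawn("pvm_hello", NULL, PvmTaskDefault, "", 1, &child);
            pvm_initsend(PvmDataDefault);
            pvm_pkint(&n, 1, 1);
            pvm_send(child, 1);        /* message tag 1 */
        } else {
            /* Worker: receive the integer from the master and print it. */
            int n;
            pvm_recv(parent, 1);
            pvm_upkint(&n, 1, 1);
            printf("worker t%x received %d\n", mytid, n);
        }
        pvm_exit();                    /* leave PVM before exiting */
        return 0;
    }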

Student               Application

Minh Nguyen           Travelling Salesperson Problem
Diego Moscoso         Sorting
Patrick Chan          LU Decomposition
Nicole Aucoin         Neural Networks
Lian Pi               Fast Fourier Transform
Edwin Law             Jacobi
Jianlin Guo           SOR
Lizhi Li              Cholesky Decomposition

LCSR Information


Calendar Description

This course investigates fundamental problems in writing efficient and scalable parallel applications, with emphasis on operating systems support and performance evaluation techniques. Part of the course will involve designing, writing, and comparing parallel programs written using message-passing and shared-memory models, while considering the support for effective design, implementation, debugging, testing, and performance evaluation of parallel applications and operating systems.

Expanded Course Description

The purpose of this course is to introduce students to state-of-the-art techniques for implementing software for high-performance computers. The course will first motivate the need for higher-performance computers (parallel processing) by providing a high-level introduction to a few computationally intensive but significant problem areas. We then discuss general issues in parallel computing, including: speedup, efficiency, limits to speedup, Amdahl's Law, iso-efficiency, problem decomposition, granularity of computation, load balancing, data locality, and the relationship between software and architecture. Different approaches to writing parallel software for the shared-memory and message-passing paradigms are discussed, including: parallelizing compilers, parallel languages, and parallel language extensions. We also examine current operating systems and issues related to their support of parallel computation (or lack thereof). Other possible topics are: the design and implementation of efficient and effective thread packages, communication mechanisms, process management, virtual memory, and file systems for scalable parallel processing.
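
As a concrete illustration of one of the metrics above (a sketch only, with made-up numbers): Amdahl's Law says that if a fraction f of a program's work is inherently serial, then the speedup on p processors is bounded by

    speedup(p) = 1 / (f + (1 - f)/p)  <=  1/f

so with f = 0.10, even 64 processors give a speedup of only about 1 / (0.10 + 0.90/64), roughly 8.8, and no number of processors can push it past 10.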

This course will not only provide students with the background required to conduct research in the area of parallel applications and operating system design, but will also train them to critically and effectively evaluate application and system software performance. Students are expected to have experience with C and UNIX programming, as well as knowledge of operating systems fundamentals at least at the level of COSC 3321. A basic knowledge of uniprocessor and multiprocessor architectures is also helpful.


Tentative Outline of Topics

Introduction

Parallel Program Metrics

Performance Evaluation Methods

Approaches to Parallelization

Programming / Performance Issues

Scheduling

The World-Wide Supercomputing Project

Consistency

Distributed Shared Memory (DSM)

Other Topics


Some Systems for Parallel Computing


Potential Reading Topics


Possible Project Topics




Last modified: January 22, 1996
brecht@cs.yorku.ca