Self-Regulation of Workload in the Manchester Data-Flow Computer

John R. Gurd, David F. Snelling


Massively parallel programs generally use memory on a vast scale compared with sequential programs. Indeed, performance seems to trade off against memory use. Hence, regulation of memory use, via control of the workload, is a fundamental requirement in a massively parallel computer system. Moreover, this regulation must be achieved with minimal disruption to the performance of the massively parallel computations themselves.

This paper investigates how this has been achieved in the Manchester Data-Flow Computing System, which is based on an experimental, fine-grain massively parallel computer architecture that has been extensively developed over the last fifteen years. The design and performance of the Throttle Unit, which is the device responsible for managing the workload in this system, are presented and analysed.
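The Throttle Unit's role is to bound how much of the program's available parallelism is active at any one time. As a loose software analogy only (the paper describes a hardware mechanism in a fine-grain data-flow machine, not Python threads), bounded concurrency can be sketched with a counting semaphore; the class name and interface here are illustrative assumptions, not the paper's design:

```python
import threading

class Throttle:
    """Admit at most `limit` concurrently active tasks.

    Hypothetical software analogy for workload regulation: tasks
    beyond the limit block until an active task completes, capping
    the memory held by in-flight work.
    """

    def __init__(self, limit):
        self._sem = threading.Semaphore(limit)

    def run(self, task, *args):
        # Blocks while `limit` tasks are already active, then runs the task.
        with self._sem:
            return task(*args)
```

In this sketch, lowering `limit` trades parallel speed for a smaller peak memory footprint, mirroring the performance/memory trade-off the paper regulates.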

The data-flow research community was the first to encounter the problem of excessive memory use, and the first to consider ways of controlling its trade-off with performance in a fashion appropriate to the execution environment. However, these problems are not exclusive to data-flow, and the conclusion briefly explores the implications of results from the data-flow world for the future development of thread-based systems.


Keywords: Data-Flow, Parallel Computer Architecture, Workload Management
