This paper investigates how this has been achieved in the Manchester Data-Flow Computing System, which is based on an experimental, fine-grain, massively parallel computer architecture that has been developed extensively over the last fifteen years. The design and performance of the Throttle Unit, the device responsible for managing the workload in this system, are presented and analysed.
The data-flow research community was the first to encounter the
problem of excessive memory use, and to begin to consider ways of
controlling the trade-off between memory use and performance in a
fashion appropriate to the execution environment. However, these
problems are not exclusive to data-flow, and the conclusion briefly
explores the implications of results from the data-flow world for
the future development of