Automatic distributed programming using SequenceL

Date
2016-08-16
Abstract

Hybrid parallel programming, which combines a distributed-memory model for internode communication with a shared-memory model for intranode parallelism, is now a common method of achieving scalable parallel performance. Such a model burdens developers with the complexity of managing two parallel programming systems in the same program. I hypothesized that it is possible to specify heuristics which, on average, allow scalable “across-node” (distributed-memory) and “across-core” (shared-memory) hybrid parallel C++ to be generated from a program written in a high-level functional language. Scalable here means a distributed core-speedup that is no more than an order of magnitude worse than the shared-memory core-speedup. This dissertation reports the results of testing this hypothesis by extending the SequenceL compiler to automatically generate C++ that uses a combination of MPI and Intel's Threading Building Blocks (TBB) to achieve scalable distributed- and shared-memory parallelization.
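
To illustrate the hybrid target described above, the following is a minimal, hand-written sketch of the MPI + TBB pattern: each node reduces its local slice of a vector across its cores with TBB, and the per-node partial sums are then combined across nodes with MPI. This is not output of the SequenceL compiler; the summation example, names, and problem size are assumptions chosen only for illustration.

    // Illustrative hybrid MPI + TBB sketch (not SequenceL compiler output).
    #include <mpi.h>
    #include <tbb/parallel_reduce.h>
    #include <tbb/blocked_range.h>
    #include <vector>
    #include <cstdio>

    int main(int argc, char** argv) {
        MPI_Init(&argc, &argv);
        int rank = 0, nprocs = 1;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &nprocs);

        // Each rank owns a contiguous slice of a (synthetic) global vector.
        const std::size_t global_n = 1u << 24;
        const std::size_t begin = rank * global_n / nprocs;
        const std::size_t end   = (rank + 1) * global_n / nprocs;
        std::vector<double> slice(end - begin, 1.0);

        // Intranode (shared-memory) parallelism: TBB reduces the local slice.
        double local_sum = tbb::parallel_reduce(
            tbb::blocked_range<std::size_t>(0, slice.size()), 0.0,
            [&](const tbb::blocked_range<std::size_t>& r, double acc) {
                for (std::size_t i = r.begin(); i != r.end(); ++i) acc += slice[i];
                return acc;
            },
            [](double a, double b) { return a + b; });

        // Internode (distributed-memory) parallelism: MPI combines partial sums.
        double global_sum = 0.0;
        MPI_Reduce(&local_sum, &global_sum, 1, MPI_DOUBLE, MPI_SUM, 0, MPI_COMM_WORLD);
        if (rank == 0) std::printf("sum = %f\n", global_sum);

        MPI_Finalize();
        return 0;
    }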

Keywords
Automatic programming, Compilers, Programming language, Distributed computing, Parallel programming, Functional language