Architecture Paradigms and Programming Languages for Efficient programming of multiple CORES
It is the goal of the Apple-CORE project to make multi-core computing mainstream and to usher in an era where many-core chips are the PCs of the future – chips that many expect to carry thousands to millions of cores. The applicability of the project’s SVP programming model is much broader than this, as is its implementation in the DRISC core. However, it is this goal of achieving general-purpose concurrent computing systems that poses the greatest challenges.
These challenges include:
- The design of a chip architecture in which even on-chip memory has the characteristics of a distributed system, i.e. asynchronous access with latencies of thousands of processor cycles.
- The selection and development of a suitable tool chain that allows the concurrent system to be programmed correctly and deterministically.
- The management of power and the use of processing resources, where the data-driven model and the dynamic scheduling of instructions support the dynamic, adaptive distribution of computation as it unfolds (see the sketch below).
- Finally, there are the issues of binary-code compatibility that have constrained this segment of the market. In Apple-CORE, legacy binary code must be supported, but more importantly, once compiled with the new tool chain, the new binaries must execute on an arbitrary number of cores (up to some limit defined by the code’s scalability).
These are non-trivial challenges, as both the architecture and the programming model are disruptive. They require new compilers, new operating-system foundations and, of course, new processor architectures. The Apple-CORE project is developing all of the above.
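The latency tolerance demanded by the first challenge follows from the data-driven model named in the third: work is scheduled only when its operands arrive, so a memory access costing thousands of cycles suspends only the thread consuming its result, never the core. The minimal sketch below shows the style of code this targets; the function is illustrative rather than taken from the project’s sources, but loops of exactly this conventional, sequential form are what the Apple-CORE tool chain aims to compile into families of lightweight threads whose dataflow scheduling hides memory latency.

```c
#include <stddef.h>

/* Conventional sequential C: every iteration is independent.
 * Under the data-driven model, a compiler can map the iteration
 * space onto a family of lightweight threads; each thread blocks
 * only on the operands it actually waits for (the loads of a[i]
 * and b[i]), so a long memory latency stalls one thread while
 * the core keeps executing the others. */
void vector_add(size_t n, const double *a, const double *b, double *c)
{
    for (size_t i = 0; i < n; i++)
        c[i] = a[i] + b[i];   /* one independent unit of work per index */
}
```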
Only have a few minutes? Then read the Apple-CORE one-two-three…
- Architecture and programming – dataflow scheduling with conventional programming – yielding architectures that are conservative in their use of power, have good tolerance to high-latency operations and are programmed in sequential, data-parallel or functional languages (properties of determinism, deadlock freedom and locality);
- Hierarchy and scalability – distributed and dynamic resource allocation – yielding distributed lightweight operating systems and a solution to the dataflow curse (properties of controlled non-determinism, generality and self-adaptation);
- Disruption for stabilisation – from sequential to parallel – rebuilding from the foundations is necessary but requires a new infrastructure of tools (properties of scalability, binary compatibility and target-neutral programming; illustrated in the sketch that follows).
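The core-count neutrality in the last point can be pictured with an ordinary threaded program in which the degree of parallelism is discovered at run time rather than fixed at compile time. The POSIX-threads sketch below is only an analogy, with illustrative names throughout: Apple-CORE obtains the same property at the ISA level, with the hardware distributing families of threads over however many cores the binary is granted.

```c
#include <pthread.h>
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

/* Analogy for target-neutral binaries: the worker count is read at
 * run time, so one executable scales across however many cores the
 * platform offers. Apple-CORE achieves this in hardware; pthreads
 * are used here purely for illustration. */

#define N 1000000
static double a[N], b[N], c[N];
static long nworkers;

static void *worker(void *arg)
{
    long id = (long)arg;                        /* this worker's index */
    long chunk = (N + nworkers - 1) / nworkers; /* even share of the work */
    long lo = id * chunk;
    long hi = lo + chunk > N ? N : lo + chunk;
    for (long i = lo; i < hi; i++)
        c[i] = a[i] + b[i];
    return NULL;
}

int main(void)
{
    /* Ask the platform, at load time, how many cores are online. */
    nworkers = sysconf(_SC_NPROCESSORS_ONLN);
    if (nworkers < 1)
        nworkers = 1;

    pthread_t *t = malloc(nworkers * sizeof *t);
    if (t == NULL)
        return 1;
    for (long id = 0; id < nworkers; id++)
        pthread_create(&t[id], NULL, worker, (void *)id);
    for (long id = 0; id < nworkers; id++)
        pthread_join(t[id], NULL);
    free(t);

    printf("computed %d sums on %ld workers\n", N, nworkers);
    return 0;
}
```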
Chris Jesshope – Project co-ordinator