Investigating the limits of instruction level parallelism

R. Potter

    Research output: Book/Report › Other report


    High-performance computer architectures increasingly use compile-time instruction scheduling to reorder code to expose parallelism that can be exploited at run-time. Although respectable performance increases have been reported, there is still a significant gap between the performance that has been achieved and what has theoretically been shown to be possible. All scheduling algorithms used to reorder code, either explicitly or implicitly, introduce barriers to code motion, which in turn limit the performance realised. Trace-driven simulation is used to quantify the amount of instruction-level parallelism available in general-purpose code and the impact of various artificial barriers to code motion. This work is based on the Hatfield Superscalar Architecture, a progressive multiple-instruction-issue processor. The results of this study will be used to direct future developments in instruction scheduling technology.
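    The trace-driven approach described above can be sketched in miniature. The following is an illustrative example only, not the Hatfield Superscalar simulator: it schedules each instruction in a trace one cycle after the latest producer of its source registers, giving a pure data-flow ILP limit with no resource constraints or code-motion barriers. The trace format and register names are assumptions for the sake of the sketch.

    ```python
    # Illustrative trace-driven ILP measurement (a minimal sketch, not the
    # simulator used in the report). Each trace entry is a pair
    # (destination register or None, list of source registers). An
    # instruction is scheduled one cycle after the latest instruction that
    # produced one of its sources: the pure data-flow limit on parallelism.

    def dataflow_ilp(trace):
        ready = {}       # register name -> cycle its value becomes available
        last_cycle = 0
        for dst, srcs in trace:
            cycle = 1 + max((ready.get(r, 0) for r in srcs), default=0)
            if dst is not None:
                ready[dst] = cycle
            last_cycle = max(last_cycle, cycle)
        # ILP = instructions executed / cycles needed under ideal scheduling
        return len(trace) / last_cycle if last_cycle else 0.0

    # Example: four instructions forming two independent dependence chains
    # of length two, so all four complete in two cycles.
    trace = [
        ("r1", []),      # r1 = const
        ("r2", []),      # r2 = const
        ("r3", ["r1"]),  # r3 = f(r1)
        ("r4", ["r2"]),  # r4 = f(r2)
    ]
    print(dataflow_ilp(trace))  # 2.0
    ```

    Adding artificial barriers to code motion (for example, forbidding instructions from being scheduled before an earlier branch in the trace) would lower the measured ILP, which is exactly the effect the report sets out to quantify.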
    Original language: English
    Publisher: University of Hertfordshire
    Publication status: Published - 1996

    Publication series

    Name: UH Computer Science Technical Report
    Publisher: University of Hertfordshire

