To expose sufficient instruction-level parallelism (ILP) to make effective use of wide-issue
superscalar and VLIW processor resources, the compiler must perform aggressive low-level
code optimization and scheduling. However, ambiguous memory dependences can significantly limit the compiler's ability to expose ILP. To overcome this problem, optimizing compilers perform memory disambiguation.
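As a small illustration (the function and variable names below are purely hypothetical, not drawn from the benchmarks studied in this dissertation), consider a loop in which the compiler cannot prove that two pointers never reference the same memory. Unless a and b are known not to alias, the store to a[i] may conflict with the load of b[i] in a later iteration, so loads cannot safely be moved above preceding stores and the loop is effectively serialized.

    /* Ambiguous memory dependence: without aliasing information, the
     * store to a[i] and the load of b[i] from a later iteration cannot
     * be reordered by the scheduler. */
    void scale(int *a, int *b, int n)
    {
        for (int i = 0; i < n; i++)
            a[i] = b[i] * 2;
    }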
Both dynamic and static approaches to memory disambiguation have been proposed. Dynamic
memory disambiguation approaches resolve the dependence ambiguity at run time. Compiler transformations insert alternate paths of control, which are selected based upon the result of this run-time ambiguity check. In contrast, static memory disambiguation
attempts to resolve ambiguities during compilation. Compiler transformations can be
performed based upon the results of this disambiguation, with no run-time checking required.
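As a minimal sketch of the dynamic style of transformation, assuming an explicit software overlap test rather than any particular hardware mechanism, the compiler can emit two versions of the loop above and select between them at run time (the test and all names are illustrative):

    /* Run-time disambiguation by loop versioning: an explicit overlap
     * check selects between an aggressively scheduled version and the
     * original conservative version of the loop. */
    void scale_checked(int *a, int *b, int n)
    {
        if (a + n <= b || b + n <= a) {
            /* No overlap: loads may be hoisted above preceding stores. */
            int i;
            for (i = 0; i + 1 < n; i += 2) {
                int t0 = b[i];          /* both loads issue before either store */
                int t1 = b[i + 1];
                a[i]     = t0 * 2;
                a[i + 1] = t1 * 2;
            }
            for (; i < n; i++)          /* remainder iteration */
                a[i] = b[i] * 2;
        } else {
            /* Possible overlap: retain the conservative ordering. */
            for (int i = 0; i < n; i++)
                a[i] = b[i] * 2;
        }
    }

In the non-overlapping path, the scheduler is free to expose the ILP that the ambiguous dependence otherwise hides; in the static style, the same freedom is obtained at compile time when the disambiguator can prove the pointers independent, and no check is emitted.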
This dissertation investigates the application of both dynamic and static memory disambiguation
approaches to support low-level optimization and scheduling. A dynamic approach,
the memory conflict buffer, originally proposed by Chen [1], is analyzed across a large suite of
integer and floating-point benchmarks. A new static approach, termed sync arcs, in which explicit dependence arcs are passed from the source-level code down to the low-level code, is proposed and evaluated. This investigation of both dynamic and static memory disambiguation
allows a quantitative analysis of the tradeoffs between the two approaches.