In this paper we analyze the effect of compiler
optimizations on fine-grain parallelism in scalar programs.
We characterize three levels of optimization: classical,
superscalar, and multiprocessor. We show that classical
optimizations improve not only a program's efficiency but also
its parallelism. Superscalar optimizations further improve
the parallelism for moderately parallel machines. For highly
parallel machines, however, they actually constrain available
parallelism. The multiprocessor optimizations we consider are
memory renaming and data migration.
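To make the flavor of memory renaming concrete, the following minimal C sketch (a hypothetical example constructed for exposition, not code drawn from our benchmarks or the exact transformation studied in the paper) shows how reusing a single location for unrelated values serializes otherwise independent computations, and how giving each value its own location removes the false dependences:

    /* Hypothetical sketch of memory renaming. Before: one location t
     * carries two unrelated values, so the second computation must
     * wait for the first (anti- and output dependences through t). */
    void scaled_squares_before(double a, double b, double *x, double *y)
    {
        double t;
        t  = a * a;      /* write t                              */
        *x = t + 1.0;    /* read t                               */
        t  = b * b;      /* write t again: output dependence     */
        *y = t + 1.0;    /* read t                               */
    }

    /* After renaming: each value gets its own location, the false
     * dependences disappear, and the two computations can issue in
     * parallel on a fine-grain parallel machine. */
    void scaled_squares_after(double a, double b, double *x, double *y)
    {
        double t1 = a * a;   /* independent of the t2 chain */
        double t2 = b * b;
        *x = t1 + 1.0;
        *y = t2 + 1.0;
    }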