Developing high-performance GPU code is labor-intensive. Ideally, developers could recoup high GPU development costs by generating high-performance programs for CPUs and other architectures from the same source code. However, current OpenCL compilers for non-GPU targets do not fully exploit the optimizations present in well-tuned GPU code.
To address this problem, we develop an OpenCL implementation that efficiently exploits GPU optimizations on multicore CPUs. Our implementation translates SIMT parallelism into SIMD vectorization and SIMT coalescing into cache-efficient access patterns. These translations are especially challenging when control divergence is present. Our system addresses divergence through a multi-tier vectorization approach based on dynamic convergence checking.
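To make the idea concrete, the following is a minimal illustrative sketch (not the paper's implementation) of how a divergent SIMT-style kernel body might be executed across CPU SIMD lanes with a dynamic convergence check and a masked fallback; the function names, the `WIDTH` constant, and the example kernel body are assumptions introduced only for illustration.

```c
/* Illustrative sketch: executing a divergent OpenCL-style kernel body across
 * WIDTH SIMD lanes on a CPU. Each lane stands for one work-item; a per-lane
 * mask emulates SIMT control divergence. Names and constants are hypothetical. */
#include <stdio.h>

#define WIDTH 8  /* assumed SIMD width, e.g. one AVX2 register of floats */

/* Original SIMT-style kernel body (hypothetical example):
 *   if (in[gid] > 0.0f) out[gid] = in[gid] * 2.0f;
 *   else                out[gid] = 0.0f;                                   */
static void run_workgroup(const float *in, float *out, int n)
{
    for (int base = 0; base < n; base += WIDTH) {      /* one vector iteration */
        int mask[WIDTH];                                /* active-lane mask     */
        int all_taken = 1, none_taken = 1;
        /* Dynamic convergence check: do all lanes take the same branch?       */
        for (int lane = 0; lane < WIDTH; ++lane) {
            mask[lane] = (base + lane < n) && (in[base + lane] > 0.0f);
            all_taken  &= mask[lane];
            none_taken &= !mask[lane];
        }
        if (all_taken || none_taken) {
            /* Converged: the whole vector executes one side, no masking cost. */
            for (int lane = 0; lane < WIDTH && base + lane < n; ++lane)
                out[base + lane] = all_taken ? in[base + lane] * 2.0f : 0.0f;
        } else {
            /* Divergent: fall back to per-lane masked execution.              */
            for (int lane = 0; lane < WIDTH && base + lane < n; ++lane)
                out[base + lane] = mask[lane] ? in[base + lane] * 2.0f : 0.0f;
        }
    }
}

int main(void)
{
    float in[16], out[16];
    for (int i = 0; i < 16; ++i) in[i] = (float)(i % 3) - 1.0f;  /* mixed signs */
    run_workgroup(in, out, 16);
    for (int i = 0; i < 16; ++i) printf("%.1f ", out[i]);
    printf("\n");
    return 0;
}
```

In this sketch, the converged path corresponds to the fast tier of a multi-tier vectorization scheme, while the masked path handles residual divergence; consecutive work-items map to consecutive lanes, so coalesced GPU accesses become unit-stride, cache-friendly CPU accesses.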
Our proposed approach outperforms existing industry implementations, achieving geometric mean speedups of 2.26× and 1.09× over AMD's and Intel's OpenCL implementations, respectively.