Recent & Highlighted Items

Programming and Tuning Massively Parallel Systems summer school (PUMPS) (July 7, 2014)

The fifth edition of the Programming and Tuning Massively Parallel Systems summer school (PUMPS) is aimed at enriching the skills of researchers, graduate students, and teachers with cutting-edge techniques and hands-on experience in developing applications for many-core processors with massively parallel computing resources, such as GPU accelerators. (July 7-11, 2014)

ISCA 2014 Tutorial: Heterogeneous System Architecture (HSA): Architecture and Algorithms Tutorial (June 15, 2014)

Heterogeneous computing is emerging as a requirement for power-efficient system design: modern platforms no longer rely on a single general-purpose processor, but instead benefit from dedicated processors tailored for each task. Traditionally, these specialized processors have been difficult to program because of separate memory spaces, kernel-driver-level interfaces, and specialized programming models. The Heterogeneous System Architecture (HSA) aims to bridge this gap by providing a common system architecture and a basis for designing higher-level programming models for all devices. This tutorial will bring together experts from member companies of the HSA Foundation to describe the Heterogeneous System Architecture and how it addresses the challenges of modern computing devices. The tutorial will also show example applications and use cases that can benefit from the features of HSA.
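As a rough illustration of the "all devices under one system architecture" idea (not material from the tutorial itself), the sketch below uses the HSA runtime C API to enumerate the agents (CPUs, GPUs, and other devices) that the runtime exposes through a single interface. It assumes an HSA 1.x runtime with its hsa.h header installed and linked (the exact header path and library name vary by vendor release).

/* Minimal sketch (assumption: HSA 1.x runtime available, link against the
 * vendor's HSA runtime library). Lists every agent the runtime exposes. */
#include <hsa.h>
#include <stdio.h>

/* Called once per agent (CPU, GPU, ...) discovered by the runtime. */
static hsa_status_t print_agent(hsa_agent_t agent, void *data) {
    (void)data;
    char name[64] = {0};
    hsa_device_type_t type;
    hsa_agent_get_info(agent, HSA_AGENT_INFO_NAME, name);
    hsa_agent_get_info(agent, HSA_AGENT_INFO_DEVICE, &type);
    printf("agent: %-24s type: %s\n", name,
           type == HSA_DEVICE_TYPE_GPU ? "GPU" :
           type == HSA_DEVICE_TYPE_CPU ? "CPU" : "other");
    return HSA_STATUS_SUCCESS;
}

int main(void) {
    if (hsa_init() != HSA_STATUS_SUCCESS) {
        fprintf(stderr, "HSA runtime not available\n");
        return 1;
    }
    /* CPU and accelerator devices all appear as agents of one runtime,
     * rather than behind separate driver-level interfaces. */
    hsa_iterate_agents(print_agent, NULL);
    hsa_shut_down();
    return 0;
}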

Wen-Mei Hwu Speaks at Michigan Engineering (March 19, 2014)

Scalability, Performance, Stability, and Portability of Many-core Computing Algorithms

The IMPACT group at the University of Illinois has been working on the co-design of scalable algorithms and programming tools for massively threaded computing. A major challenge we are addressing is simultaneously achieving scalability, performance, numerical stability, portability, and low development cost. In this talk, I will go over the major building blocks involved: memory layout and dynamic tiling. I will show experimental results demonstrating how these building blocks jointly enable the first scalable, numerically stable tridiagonal solver that matches the numerical stability of the Intel Math Kernel Library (MKL) and surpasses the performance of CUSPARSE. I will then give an overview of the Tangram and Triolet projects, which aim to drastically improve the quality and reduce the development and maintenance cost of future many-core algorithms.
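For context, the textbook serial baseline for tridiagonal systems is the Thomas algorithm sketched below. This is only the sequential reference point, not the solver described in the talk, which is a partitioned, numerically stable many-core formulation built on the memory-layout and dynamic-tiling techniques above; function and variable names here are illustrative.

/* Thomas algorithm: O(n) serial solve of a tridiagonal system
 *   a[i]*x[i-1] + b[i]*x[i] + c[i]*x[i+1] = d[i],  with a[0] = c[n-1] = 0.
 * Shown only as the classical baseline; it assumes no pivoting is needed
 * (e.g., a diagonally dominant system). */
#include <stdio.h>
#include <stdlib.h>

static void thomas_solve(int n, const double *a, const double *b,
                         const double *c, const double *d, double *x) {
    double *cp = malloc(n * sizeof *cp);   /* modified superdiagonal */
    double *dp = malloc(n * sizeof *dp);   /* modified right-hand side */
    cp[0] = c[0] / b[0];
    dp[0] = d[0] / b[0];
    for (int i = 1; i < n; i++) {          /* forward elimination */
        double m = b[i] - a[i] * cp[i - 1];
        cp[i] = c[i] / m;
        dp[i] = (d[i] - a[i] * dp[i - 1]) / m;
    }
    x[n - 1] = dp[n - 1];
    for (int i = n - 2; i >= 0; i--)       /* back substitution */
        x[i] = dp[i] - cp[i] * x[i + 1];
    free(cp);
    free(dp);
}

int main(void) {
    /* Small diagonally dominant example system. */
    double a[] = {0, -1, -1, -1}, b[] = {4, 4, 4, 4}, c[] = {-1, -1, -1, 0};
    double d[] = {5, 5, 5, 5}, x[4];
    thomas_solve(4, a, b, c, d, x);
    for (int i = 0; i < 4; i++)
        printf("x[%d] = %f\n", i, x[i]);
    return 0;
}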

Attachment #1: Flyer (PDF)
Attachment #2: Slides (PDF)
  
 

Upcoming Items

Recent & Highlighted Papers

2014

"Triolet: A Programming System that Unifies Algorithmic Skeleton Interfaces for High-Performance Cluster Computing", Rodrigues, Christopher I.; Dakkak, Abdul; Jablin, Tom; Hwu, Wen-mei; Aarts, Baastian, Proceedings of the 2014 ACM SIGPLAN Conference on Principles and Practice of Parallel Programing?, February 2014. [more...]
 
"Multi-tier Dynamic Vectorization for Translating GPU Optimizations into CPU Performance", Kim, Hee-Seok; El Hajj, Izzat; Stratton, John A.; Hwu, Wen-mei, IMPACT Technical Report, IMPACT-14-01, University of Illinois at Urbana-Champaign, Center for Reliable and High-Performance Computing, February 5, 2014. [more...]
 
"In-place transposition of rectangular matrices on accelerators", Sung, Ray; Gómez-Luna, Juan; González-Linares, José María; Guil, Nicolás; Hwu, Wen-mei, PPoPP '14 Proceedings of the 19th ACM SIGPLAN symposium on Principles and practice of parallel programming. [more...]