The objective of IMPACT (Illinois Microarchitecture Project using Algorithms and Compiler Technology) is to provide critical research, architecture innovation, and algorithm and compiler prototypes for heterogeneous parallel architectures. We achieve portable performance and energy efficiency for emerging real-world applications by developing novel hardware, compiler, and algorithmic solutions.
 

 

Upcoming Items

Wen-Mei Hwu gives ICS Keynote (June 2, 2016)

Innovative Applications and Technology Pivots - A Perfect Storm in Computing

Since the early 2000s, we have been experiencing two very important developments in computing. One is that tremendous resources have been invested in innovative applications such as first-principles-based models, deep learning, and cognitive computing. The other is that the industry has taken a technological path where application performance and power efficiency can vary by more than two orders of magnitude depending on parallelism, heterogeneity, and locality. As a result, most of the top supercomputers in the world are now heterogeneous parallel computing systems, and new standards such as the Heterogeneous System Architecture (HSA) are emerging to facilitate software development. Much has been, and still needs to be, learned about algorithms, languages, compilers, and hardware architecture in this movement. What applications will continue to drive technology development? How hard is it to program these systems today? How will we program them in the future? How will innovations in memory devices present further opportunities and challenges? What is the impact on the long-term software engineering cost of applications? In this talk, I will present some of the research opportunities and challenges brought about by this perfect storm.


Teaching Kit Tutorial at ISC 2016

Accelerated Computing Teaching Kit Talk & Demo


PUMPS 2016 (Barcelona, Spain)

In its seventh edition, the Programming and tUning Massively Parallel Systems summer school (PUMPS) offers researchers and graduate students a unique opportunity to improve their skills with cutting-edge techniques and hands-on experience in developing and tuning applications for many-core processors with massively parallel computing resources like GPU accelerators.

The summer school is oriented towards advanced programming and optimizations, and thus previous experience in basic GPU programming will be considered in the selection process. We will also consider the current parallel applications and numerical methods you are familiar with, and the specific optimizations you would like to discuss.
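As a rough illustration of the level of "basic GPU programming" assumed as a prerequisite (a minimal sketch of a standard CUDA vector-add program, not actual PUMPS course material), consider the following:

  #include <cuda_runtime.h>
  #include <cstdio>

  // Each thread computes one element of c = a + b.
  __global__ void vecAdd(const float *a, const float *b, float *c, int n) {
      int i = blockIdx.x * blockDim.x + threadIdx.x;
      if (i < n) c[i] = a[i] + b[i];
  }

  int main() {
      const int n = 1 << 20;
      size_t bytes = n * sizeof(float);
      float *a, *b, *c;
      // Managed (unified) memory keeps the host-side bookkeeping short.
      cudaMallocManaged(&a, bytes);
      cudaMallocManaged(&b, bytes);
      cudaMallocManaged(&c, bytes);
      for (int i = 0; i < n; ++i) { a[i] = 1.0f; b[i] = 2.0f; }

      int threads = 256;
      int blocks = (n + threads - 1) / threads;  // round up to cover all elements
      vecAdd<<<blocks, threads>>>(a, b, c, n);
      cudaDeviceSynchronize();

      printf("c[0] = %f\n", c[0]);               // expect 3.0
      cudaFree(a); cudaFree(b); cudaFree(c);
      return 0;
  }

PUMPS goes well beyond this level, focusing on how to restructure and tune such kernels for locality, occupancy, and data movement on real applications.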


Teaching Kit Tutorial at XSEDE 2016

Joe Bungo (NVIDIA), Andy Schuh (UIUC), and Carl Pearson (UIUC) will host a hands-on tutorial focused on the Accelerated Computing Teaching Kit. The tutorial introduces a comprehensive set of academic labs and university teaching materials for use in introductory and advanced accelerated computing courses, and takes attendees through some of the same introductory and intermediate/advanced lecture slides and hands-on lab exercises that are part of the curriculum.



Recent & Highlighted Items

IPDPS - Teaching Kit Tutorial (May 24, 2016)

At IPDPS 2016, Dr. Wen-Mei Hwu from the University of Illinois (UIUC) will lead a free hands-on tutorial that introduces the GPU Teaching Kit for Accelerated Computing for use in university courses that can benefit from parallel processing.


Wen-Mei Hwu gives AsHES Keynote in Chicago (May 23, 2016)

Since the introduction of CUDA in 2006, we have made tremendous progress in heterogeneous supercomputing. We have built top-ranked heterogeneous supercomputers. Much has been learned about algorithms, languages, compilers, and hardware architecture in this movement. What benefits are science teams seeing? How hard is it to program these systems today? How will we program them in the future? In this talk, I will go over the lessons learned from educating programmers, migrating Blue Waters applications to GPUs, and developing performance-critical libraries. I will then give a preview of the types of programming systems that will be needed to further reduce the software cost of heterogeneous computing.


Invited Lecture at the University of Pennsylvania Symposium (April 20, 2016)

Parallelism, Heterogeneity, Locality, why bother?
Speaker: Wen-Mei Hwu

Computing systems have become power-limited as Dennard scaling broke down in the early 2000s. In response, the industry has taken a path where application performance and power efficiency can vary by more than two orders of magnitude depending on parallelism, heterogeneity, and locality. Since then, most of the top supercomputers in the world have become heterogeneous parallel computing systems, heterogeneous mobile computing devices have been mass-produced, and new standards such as the Heterogeneous System Architecture (HSA) are emerging to facilitate software development. Much has been learned about algorithms, languages, compilers, and hardware architecture in this movement. Why should applications bother with these systems? How hard is it to program them today? How will we program them in the future? How will heterogeneity in memory devices present further opportunities and challenges? What is the impact on the long-term software engineering cost of applications? In this talk, I will go over the lessons learned from educating programmers and developing performance-critical libraries. I will then give a preview of the types of programming systems that will be needed to further reduce the software cost of heterogeneous computing.


(View Archive of Highlighted Items)

Recent & Highlighted Papers

"SpaceJMP: Programming with Multiple Virtual Address Spaces", Izzat El Hajj, Alexander Merritt, Gerd Zellweger, Dejan Milojicic, Reto Achermann, Paolo Faraboschi, Wen-mei Hwu, Timothy Roscoe, Karsten Schwan, Proceedings of the 21th ACM International Conference on Architectural Support for Programming Languages and Operating Systems (ASPLOS '16) . [more...]
 
"DySel: Lightweight Dynamic Selection for Kernel-based Data-parallel Programming Model", Li-Wen Chang, Hee-Seok Kim, Wen-mei Hwu, Proceedings of the 21th ACM International Conference on Architectural Support for Programming Languages and Operating Systems (ASPLOS '16) . [more...]
 
"A Programming System for Future Proofing Performance Critical Libraries", Li-Wen Chang, Izzat El Hajj, Hee-Seok Kim, Juan Gómez-Luna, Abdul Dakkak, Wen-mei Hwu, Proceedings of the 21th ACM SIGPLAN Symposium on Principles and Practice of Parallel Programming (PPoPP 2016) [poster] . [more...]
 
"Locality-Centric Thread Scheduling for Bulk-synchronous Programming Models on CPU Architectures", Hee-Seok Kim, Izzat El Hajj, John A. Stratton, Steve S Lumetta, Wen-mei Hwu, International Symposium on Code Generation and Optimization (CGO) . (Best Paper Award Nominee) [more...]