The Scalable Parallel Computing Lab's *SPCL_Bcast* seminar continues with *Albert Cohen of Google* presenting on *Can I Cook a 5 o'clock Compiler Cake and Eat It at 2?* Everyone is welcome to attend (over Zoom)!
*When:* Thursday, 7th December, 9AM CET
*Where:* Zoom
*Join:* https://spcl.inf.ethz.ch/Bcast/join
*Abstract:* In high-performance computing terms: can we build a compiler that will eventually save a lot of performance engineering effort while immediately delivering competitive results? Here, competitiveness refers to achieving near-peak hardware performance for important applications. The question is particularly hot in a domain-specific setting, where the building blocks for constructing an effective optimizing compiler may be inadequate, too generic, or too low-level. It is widely understood that compiler construction has failed to deliver early afternoon sweets. I personally feel bad about it, but until recently it remained an academic exercise to challenge the status quo. Maybe it is now time to reconsider this assumption: ML-enhanced compilers are becoming the norm rather than the exception. New compiler frameworks reconcile optimizations for the common case with application-specific performance. Domain-specific code generators play an essential role in the implementation of dense and sparse numerical libraries. But even with the help of domain-specific compilers, peak performance can only be achieved at the expense of a dramatic loss of programmability. Are we ever going to find a way out of this programmability/performance dilemma? What about the velocity and agility of compiler engineers? Can we make ML-based heuristics scalable enough to compile billions of lines of code? Can we do so while enabling massive code reuse across domains, languages, and hardware? We will review these questions, drawing on recent successes and half-successes in academia and industry, and extend an invitation to tackle these challenges in future research and software development.
*Biography:* Albert Cohen is a research scientist at Google. An alumnus of École Normale Supérieure de Lyon and the University of Versailles, he has been a research scientist at Inria, a visiting scholar at the University of Illinois, an invited professor at Philips Research, and a visiting scientist at Facebook Artificial Intelligence Research. Albert works on parallelizing, optimizing and machine learning compilers, and on dataflow and synchronous programming languages, with applications to high-performance computing, artificial intelligence and reactive control.
*More details & future talks:* https://spcl.inf.ethz.ch/Bcast/
Scalable Parallel Computing Lab (SPCL), Department of Computer Science, ETH Zurich
Website https://spcl.inf.ethz.ch | X (Twitter) https://twitter.com/spcl_eth | YouTube https://www.youtube.com/@spcl | GitHub https://github.com/spcl