US High-Performance Computing Expert Yelick to Lead Lawrence Berkeley National Laboratory, Bringing Three Decades of Deep Expertise in Parallel Computing
2026-05-09 14:25

en.Wedoany.com Reported - The Regents of the University of California have officially approved the appointment of Katherine Yelick as the ninth director of Lawrence Berkeley National Laboratory, with her term beginning July 1. Yelick is a leading figure in US high-performance computing: she is a co-inventor of the Unified Parallel C (UPC) and Titanium languages, was a key driver of the Exascale Computing Project, and is a member of the National Academy of Engineering and an ACM Fellow. With this appointment, one of the oldest national laboratories under the US Department of Energy hands the reins to a computer scientist for the first time.

Yelick currently serves as Vice Chancellor for Research at the University of California, Berkeley, and is a professor in the Department of Electrical Engineering and Computer Sciences, while also holding an appointment as Senior Faculty Scientist at Berkeley Lab. She joined the UC Berkeley faculty in 1991 and took on a concurrent scientist appointment at Berkeley Lab in 1996, and has spent the three decades since working across academia and the national laboratory system.

Yelick's technical contributions in high-performance computing center on a core question: how to make massively parallel systems both programmable and fast. The UPC and Titanium languages she co-invented pioneered the Partitioned Global Address Space (PGAS) programming model, which provides a shared-memory-like programming abstraction on distributed-memory hardware, letting scientists write efficient parallel programs without mastering the details of message passing. Reflecting on this work, Yelick noted that a key insight from PGAS languages was the performance advantage of one-sided communication: it maps more directly onto underlying hardware primitives and makes it easier to overlap and pipeline communication with computation. In addition, the Sparsity project she led produced the first auto-tuned kernel library for sparse matrices, and she co-led development of the Optimized Sparse Kernel Interface (OSKI), which lets sparse matrix operations reach high performance across diverse hardware architectures.
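To make the PGAS idea concrete, here is a minimal UPC sketch (illustrative only, not drawn from Yelick's codebases): a single logically shared array is physically partitioned across threads, each thread writes the elements it owns, and thread 0 then reads a remote element with an ordinary expression rather than a matching send/receive pair.

```c
/* Minimal UPC sketch of the PGAS model. Builds with a UPC compiler
 * such as Berkeley UPC's upcc; BLOCK is an arbitrary size chosen here. */
#include <upc.h>
#include <stdio.h>

#define BLOCK 4
/* Default cyclic layout: element i has affinity to thread i % THREADS. */
shared int data[BLOCK * THREADS];

int main(void) {
    int i;
    /* The affinity clause &data[i] runs each iteration on the thread
     * that owns data[i], so every write below is a local write. */
    upc_forall (i = 0; i < BLOCK * THREADS; i++; &data[i])
        data[i] = MYTHREAD;

    upc_barrier;  /* ensure all writes complete before remote reads */

    if (MYTHREAD == 0) {
        /* One-sided read: no matching send/receive on the owning
         * thread; the runtime issues a remote get (RDMA if available). */
        int last = data[BLOCK * THREADS - 1];
        printf("last element was written by thread %d\n", last);
    }
    return 0;
}
```

The Sparsity/OSKI line of work rests on a different mechanism: generate several variants of a sparse kernel (for example, different register-block sizes), benchmark them on the target machine, and keep the fastest. The skeleton below sketches only that selection loop, with stub functions standing in for real register-blocked SpMV variants; none of these names come from the OSKI API.

```c
/* Sketch of an auto-tuning search, not the OSKI API: time candidate
 * kernel variants on the target machine and keep the fastest. */
#include <stdio.h>
#include <time.h>

typedef void (*spmv_kernel)(void);  /* stand-in kernel signature */

/* Stubs: a real tuner would plug in CSR SpMV variants with 1x1, 2x2,
 * and 4x4 register blocking and run them on the user's actual matrix. */
static void spmv_1x1(void) {}
static void spmv_2x2(void) {}
static void spmv_4x4(void) {}

int main(void) {
    spmv_kernel candidates[] = { spmv_1x1, spmv_2x2, spmv_4x4 };
    const char *names[]      = { "1x1",    "2x2",    "4x4"    };
    int best = 0;
    double best_t = 1e300;

    for (int v = 0; v < 3; v++) {
        clock_t t0 = clock();
        for (int rep = 0; rep < 1000; rep++)
            candidates[v]();               /* benchmark this variant */
        double t = (double)(clock() - t0) / CLOCKS_PER_SEC;
        if (t < best_t) { best_t = t; best = v; }
    }
    printf("selected register-block variant: %s\n", names[best]);
    return 0;
}
```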

Yelick's management experience at Berkeley Lab spans the most critical computing infrastructure under the DOE Office of Science. From 2008 to 2012 she served as Director of the National Energy Research Scientific Computing Center (NERSC), the flagship supercomputing facility for the DOE Office of Science; from 2010 to 2019 she served as Associate Laboratory Director for Computing Sciences at Berkeley Lab, overseeing three major units: NERSC, the Energy Sciences Network (ESnet), and the Computational Research Division, a portfolio spanning high-performance computing, high-speed research networking, and advanced computing research. During her tenure she led the procurement and deployment of the NERSC-8 supercomputing system and advanced the upgrade of the ESnet backbone network, and in 2015 she oversaw the completion of Shyh Wang Hall, which houses both NERSC and ESnet. Bringing computing infrastructure and high-speed networking into a single physical space laid both the physical and organizational groundwork for decades of scientific innovation at Berkeley Lab.

At the national strategic level, Yelick was deeply involved in launching and executing the DOE's Exascale Computing Project. Running from 2016 to 2024, the project aimed to develop key applications and software stacks capable of efficiently exploiting exascale hardware. Within it, Yelick led the ExaBiome project, which applied PGAS languages to microbial genome analysis with the goal of removing the computational bottleneck in metagenome assembly at exascale. ExaBiome showcased a frontier application of high-performance computing in the life sciences: parallel assembly and analysis of gene sequences from massive microbial communities accelerates research ranging from new enzyme discovery to drug target identification. She also helped the DOE formulate national research strategies for artificial intelligence and big data, serving as a bridge between scientific computing and AI.
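A rough sense of why PGAS fits this problem: distributed metagenome assemblers typically shard a huge k-mer table by hash, so every k-mer extracted from a sequencing read has a single owning process that can be updated directly. The toy C sketch below shows only that routing step; the hash function, k, rank count, and read are all illustrative choices, not ExaBiome code.

```c
/* Toy sketch of k-mer-to-owner routing, the partitioning idea behind
 * distributed metagenome assembly (all names and values illustrative). */
#include <stdint.h>
#include <stdio.h>
#include <string.h>

/* FNV-1a hash over k bytes; any uniform hash would do. */
static uint64_t kmer_hash(const char *kmer, int k) {
    uint64_t h = 14695981039346656037ULL;
    for (int i = 0; i < k; i++) {
        h ^= (uint8_t)kmer[i];
        h *= 1099511628211ULL;
    }
    return h;
}

int main(void) {
    const int k = 5, nranks = 4;
    const char *read = "ACGTGACGTA";
    int len = (int)strlen(read);

    /* Each k-mer goes to the rank that owns its hash bucket, so the
     * global k-mer table can be updated without central coordination. */
    for (int i = 0; i + k <= len; i++) {
        uint64_t h = kmer_hash(read + i, k);
        printf("k-mer %.*s -> rank %llu\n",
               k, read + i, (unsigned long long)(h % nranks));
    }
    return 0;
}
```

In a PGAS setting, each such update can be a one-sided remote operation on the owner's partition, which is exactly where the communication/computation overlap described above pays off.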

Yelick takes the helm of Berkeley Lab just as the largest supercomputing system in the lab's history is about to be deployed. The next-generation supercomputer, named Doudna after Nobel laureate Jennifer Doudna, is expected to be operational by the end of 2026. Built by Dell on NVIDIA's Vera Rubin platform, it will deliver over ten times the performance of the current flagship system, Perlmutter, and will serve as the core computational foundation for large-scale molecular dynamics simulations, high-energy physics research, and AI training and inference across the DOE Office of Science. With nearly 4,000 employees, an annual budget of approximately $1.4 billion, and 17 Nobel laureates associated with its history, Berkeley Lab is a core hub for multidisciplinary big science in the US. Translating Doudna's computational advantage into scientific breakthroughs across multiple fields will be the primary engineering and strategic challenge of Yelick's tenure.

Yelick has a clear view of the relationship between AI and scientific computing. In a keynote at ISC 2024, she pointed out that as AI chips increasingly de-emphasize 64-bit floating-point performance, the field must guard against high-precision arithmetic being marginalized: high-precision computation is essential for generating reliable scientific data, which in turn forms the knowledge base for large language models. Discussing the paradigm shift in scientific research in the AI era, she argued that applying AI to science is not simply a matter of using off-the-shelf models; it is an opportunity to ask whether better AI formulations exist. The evolutionary information embedded in genomic data may offer perspectives different from text-based models, while the physical laws of the natural world impose constraints on AI far more stringent than those of language. This stance is neither blind optimism about AI nor rigid adherence to traditional computing paradigms, but a search for a constructive path of integration between the two.
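Her point about 64-bit arithmetic is easy to demonstrate: accumulating many values in reduced precision lets rounding error build up in ways that double precision largely avoids. The toy C program below (an illustration, not taken from the keynote) sums the same series in float and double.

```c
/* Toy illustration of why reduced-precision accumulation can matter:
 * sum 0.1 ten million times in 32-bit vs 64-bit floating point. */
#include <stdio.h>

int main(void) {
    float  s32 = 0.0f;
    double s64 = 0.0;
    for (int i = 0; i < 10000000; i++) {
        s32 += 0.1f;   /* rounding error accumulates per addition */
        s64 += 0.1;
    }
    printf("float32 sum: %f\n", (double)s32);  /* drifts well away from 1e6 */
    printf("float64 sum: %f\n", s64);          /* stays close to 1000000.0 */
    return 0;
}
```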

This article is compiled by Wedoany. All AI citations must indicate the source as "Wedoany". If there are any infringement concerns or other issues, please notify us promptly and we will modify or delete the content accordingly. Email: news@wedoany.com