
Hi! I'm Yuxin Chen.

I am a computer science Ph.D. student at the University of California, Davis. My research interests include general-purpose computing on graphics processing units (GPGPU), distributed computing, parallel computing, asynchronous programming, and graph processing; I am especially interested in the intersection of these areas. With many-core architectures such as GPUs in increasingly widespread use, I work on making these devices easier to program and on extracting the most performance from them, in both single-node and distributed systems.

Research and Articles

Atos: A Task-Parallel GPU Scheduler for Graph Analytics

ICPP'22: Proceedings of the 51st International Conference on Parallel Processing
Yuxin Chen, Benjamin Brock, Serban Porumbescu, Aydın Buluç, Katherine Yelick, and John D. Owens

Abstract: We present Atos, a task-parallel GPU dynamic scheduling framework that is especially targeted at dynamic irregular applications. Compared to the dominant Bulk Synchronous Parallel (BSP) frameworks, Atos exposes additional concurrency by supporting task-parallel formulations of applications with relaxed dependencies, achieving higher GPU utilization, which is particularly significant for problems with concurrency bottlenecks. Atos also offers implicit task-parallel load balancing in addition to data-parallel load balancing, providing users the flexibility to balance between them to achieve optimal performance. Finally, Atos allows users to adapt to different use cases by controlling the kernel strategy and task-parallel granularity. We demonstrate that each of these controls is important in practice. We evaluate and analyze the performance of Atos vs. BSP on three applications: breadth-first search, PageRank, and graph coloring. Atos implementations achieve geomean speedups of 3.44x, 2.1x, and 2.77x and peak speedups of 12.8x, 3.2x, and 9.08x across three case studies, compared to a state-of-the-art BSP GPU implementation. Beyond simply quantifying the speedup, we extensively analyze the reasons behind each speedup. This deeper understanding allows us to derive general guidelines for how to select the optimal Atos configuration for different applications. Finally, our analysis provides insights for future dynamic scheduling framework designs.
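
To make the contrast with BSP concrete, the sketch below shows a minimal persistent-kernel worklist in CUDA. It is illustrative only, not Atos itself (Atos's queues, kernel strategies, and load balancing are far richer): threads stay resident and keep claiming tasks from a shared queue, instead of being relaunched, with an implicit global barrier, once per BSP iteration.

    #include <cstdio>

    // Minimal persistent-kernel worklist (illustrative, not Atos): threads
    // keep claiming tasks until the queue drains, instead of one kernel
    // launch -- and one implicit global barrier -- per BSP iteration.
    __global__ void persistent_worker(const int* tasks, int num_tasks,
                                      int* next, int* result) {
        while (true) {
            int i = atomicAdd(next, 1);   // claim the next task
            if (i >= num_tasks) return;   // queue drained: retire
            atomicAdd(result, tasks[i]);  // stand-in for real per-task work
        }
    }

    int main() {
        const int n = 1 << 20;
        int *tasks, *next, *result;
        cudaMallocManaged(&tasks, n * sizeof(int));
        cudaMallocManaged(&next, sizeof(int));
        cudaMallocManaged(&result, sizeof(int));
        for (int i = 0; i < n; ++i) tasks[i] = 1;
        *next = 0; *result = 0;
        persistent_worker<<<64, 256>>>(tasks, n, next, result);  // grid stays resident
        cudaDeviceSynchronize();
        printf("sum = %d\n", *result);    // expect 1048576
        return 0;
    }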

Scalable irregular parallelism with GPUs: getting CPUs out of the way

SC'22: Proceedings of the International Conference on High Performance Computing, Networking, Storage and Analysis
Yuxin Chen, Benjamin Brock, Serban Porumbescu, Aydın Buluç, Katherine Yelick, and John D. Owens

Abstract: We present Atos, a dynamic scheduling framework for multi-node-GPU systems that supports PGAS-style lightweight one-sided memory operations within and between nodes. Atos's lightweight GPU-to-GPU communication enables latency hiding and can smooth interconnection usage for bisection-limited problems. These benefits are significant for dynamic, irregular applications that often involve fine-grained communication at unpredictable times and without predetermined patterns. We identify three principles for high performance: (1) do not involve the CPU in the communication control path; (2) allow GPU communication within kernels, addressing memory consistency directly rather than relying on synchronization with the CPU; (3) perform dynamic communication aggregation when interconnections have limited bandwidth. By lowering the overhead of communication and allowing it within GPU kernels, we support large, high-utilization GPU kernels with more frequent communication. We evaluate Atos on two irregular problems: breadth-first search and PageRank. Atos outperforms the state-of-the-art graph libraries Gunrock, Groute, and Galois in both single-node multi-GPU and multi-node GPU settings.
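
Principle (2) can be illustrated with NVSHMEM's device-initiated one-sided API. This is a hedged sketch, not Atos's actual communication layer: each PE writes its rank into a neighbor's symmetric buffer from inside a kernel, with no CPU on the control path. (Compile with nvcc and link the NVSHMEM library; launch one process per GPU.)

    #include <cstdio>
    #include <nvshmem.h>

    // Device-initiated one-sided put: the kernel itself writes into a peer
    // GPU's symmetric memory. Illustrative only; not Atos's internals.
    __global__ void push_rank(int* sym, int mype, int npes) {
        if (threadIdx.x == 0 && blockIdx.x == 0) {
            int peer = (mype + 1) % npes;
            sym[0] = mype;                               // stage my rank locally
            nvshmem_int_put(&sym[1], &sym[0], 1, peer);  // write into peer's slot 1
            nvshmem_quiet();                             // complete the put
        }
    }

    int main() {
        nvshmem_init();
        int mype = nvshmem_my_pe();
        int npes = nvshmem_n_pes();
        int* sym = (int*)nvshmem_malloc(2 * sizeof(int));  // symmetric buffer
        push_rank<<<1, 32>>>(sym, mype, npes);
        cudaDeviceSynchronize();
        nvshmem_barrier_all();             // all puts now globally visible
        int received;
        cudaMemcpy(&received, &sym[1], sizeof(int), cudaMemcpyDeviceToHost);
        printf("PE %d received rank %d\n", mype, received);
        nvshmem_free(sym);
        nvshmem_finalize();
        return 0;
    }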

Accelerated Dimension-Independent Adaptive Metropolis

SIAM Journal on Scientific Computing, 2016
Yuxin Chen, David Keyes, Kody J. H. Law, and Hatem Ltaief

Abstract: This work describes improvements by algorithmic and architectural means to black-box Bayesian inference over high-dimensional parameter spaces. The well-known adaptive Metropolis (AM) algorithm [H. Haario, E. Saksman, and J. Tamminen, Bernoulli, (2001), pp. 223--242] is extended herein to scale asymptotically uniformly with respect to the underlying parameter dimension for Gaussian targets, by respecting the variance of the target. The resulting algorithm, referred to as the dimension-independent adaptive Metropolis (DIAM) algorithm, also shows improved performance with respect to adaptive Metropolis on non-Gaussian targets. This algorithm is further improved, and the possibility of probing high-dimensional (with dimension d ≥ 1000) targets is enabled, via GPU-accelerated numerical libraries and periodically synchronized concurrent chains (justified a posteriori). Asymptotically in dimension, this GPU implementation exhibits a factor of four improvement versus a competitive CPU-based Intel MKL (math kernel library) parallel version alone. Strong scaling to concurrent chains is exhibited, through a combination of longer time per sample batch (weak scaling) with fewer necessary samples to convergence. The algorithm performance is illustrated on several Gaussian and non-Gaussian target examples, in which the dimension may be in excess of one thousand.
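
For context on what is being made dimension-independent: the classical AM proposal of Haario et al. adapts a Gaussian proposal to the chain's empirical covariance,

    q_t(x, \cdot) = \mathcal{N}\!\left(x,\ s_d\,\mathrm{Cov}(x_0, \ldots, x_{t-1}) + s_d\,\varepsilon I_d\right), \qquad s_d = \frac{2.38^2}{d},

so the step size s_d shrinks as the dimension d grows; DIAM's contribution, per the abstract, is a rescaling that respects the target's variance so mixing does not degrade with d. (The formula shown is the standard AM form, not the DIAM variant itself.)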

Transformers Implement Functional Gradient Descent to Learn Non-Linear Functions In Context

The International Conference on Machine Learning (ICML) 2024
Xiang Cheng, Yuxin Chen, Suvrit Sra

Abstract: Many neural network architectures are known to be Turing-complete and can thus, in principle, implement arbitrary algorithms. However, Transformers are unique in that they can implement gradient-based learning algorithms under simple parameter configurations. This paper provides theoretical and empirical evidence that (non-linear) Transformers naturally learn to implement gradient descent in function space, which in turn enables them to learn non-linear functions in context. Our results apply to a broad class of combinations of non-linear architectures and non-linear in-context learning tasks. Additionally, we show that the optimal choice of non-linear activation depends in a natural way on the class of functions that need to be learned.
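
As a concrete, hedged reading of "gradient descent in function space": for squared loss on in-context examples (x_i, y_i) and a kernel K induced by the architecture, one functional-gradient step updates the predictor as

    f_{t+1}(x) = f_t(x) + \eta \sum_{i=1}^{n} K(x, x_i)\,\bigl(y_i - f_t(x_i)\bigr),

which is standard kernel (functional) gradient descent; the paper's claim is that trained non-linear Transformers implement updates of this flavor in their forward pass, with the effective non-linearity matched to the function class being learned.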

Performance Trade-offs in GPU Communication: A Study of Host and Device-initiated Approaches

PMBS'20: IEEE/ACM Performance Modeling, Benchmarking and Simulation of High Performance Computer Systems
Taylor Groves, Ben Brock, Yuxin Chen, Khaled Z. Ibrahim, Lenny Oliker, Nicholas J. Wright, Samuel Williams, and Katherine Yelick

Abstract: Network communication on GPU-based systems is a significant roadblock for many applications with small but frequent messaging requirements. One common question from application developers is, "How can we reduce overheads and achieve the best communication performance on GPUs?" This work examines device-initiated versus host-initiated inter-node GPU communication using NVSHMEM. We derive basic communication model parameters for single-message and batched communication before validating our model against distributed GEMM benchmarks. We use our model to estimate performance benefits for applications transitioning from CPUs to GPUs for fixed-size and scaled workloads, and provide general guidelines for reducing communication overheads. Our findings show that the host-initiated approach generally outperforms the device-initiated approach on the system evaluated.
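
The flavor of such model parameters can be captured by a standard latency-bandwidth (alpha-beta) sketch; the terms below are illustrative, not the paper's fitted values:

    T_{\text{single}}(n) = \alpha + \beta n, \qquad T_{\text{batched}}(k, n) \approx \alpha_0 + k\,(\alpha_{\text{msg}} + \beta n),

where \alpha is the per-message initiation overhead (which differs between host- and device-initiated paths), \beta is the inverse bandwidth, and batching amortizes the fixed cost \alpha_0 across k messages.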

RDMA vs. RPC for Implementing Distributed Data Structures

IA3'19: IEEE/ACM 9th Workshop on Irregular Applications: Architectures and Algorithms
Benjamin Brock, Yuxin Chen, Jiakun Yan, John D. Owens, Aydın Buluç, and Katherine Yelick

Abstract: Distributed data structures are key to implementing scalable applications for scientific simulations and data analysis. In this paper we look at two implementation styles for distributed data structures: remote direct memory access (RDMA) and remote procedure call (RPC). We focus on operations that require individual accesses to remote portions of a distributed data structure, e.g., accessing a hash table bucket or distributed queue, rather than global operations in which all processors collectively exchange information. We look at the trade-offs between the two styles through microbenchmarks and a performance model that approximates the cost of each. The RDMA operations have direct hardware support in the network and therefore lower latency and overhead, while the RPC operations are more expressive but costlier and can suffer from lack of attentiveness from the remote side. We also run experiments to compare the real-world performance of RDMA- and RPC-based data structure operations with the predicted performance to evaluate the accuracy of our model, and show that while the model does not always precisely predict running time, it allows us to choose the best implementation in the examples shown. We believe this analysis will assist developers in designing data structures that will perform well on current network architectures, as well as network architects in providing better support for this class of distributed data structures.
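
A hedged sketch of the trade-off the model captures: if a data-structure operation decomposes into m dependent remote accesses, then roughly

    T_{\text{RDMA}} \approx m\,\alpha_{\text{rdma}}, \qquad T_{\text{RPC}} \approx \alpha_{\text{rpc}} + t_{\text{attentive}},

so hardware-supported RDMA wins when the operation needs only a few low-latency accesses, while one RPC round trip can replace the whole chain of m dependent accesses for more complex operations, provided the remote side services requests promptly (small t_{\text{attentive}}). The symbols here are illustrative, not the paper's exact model.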

Projects

Extending Triton to enable device function code generation for NVIDIA Grafia

This project extended Triton to enable device function code generation for NVIDIA Grafia, which imposes several constraints on the generated functions:

  • Using only the Grafia register pool in device functions.
  • Using only pre-allocated Grafia shared memory.
  • Generating Grafia input and return contexts with the right set of registers on the fly, without exposing those contexts at the Triton programming and user level.
  • Decomposing and segregating the MLIR into distinct functions by functionality (DMA, compute, sync, etc.) while lowering Triton to the LLVM level.
This effort also involved removing a limitation, prevailing at the time, whereby Triton functions accepted only global memory pointers as arguments. Beyond enabling user-programmed Triton Python code to generate device functions that plug into the NVIDIA Grafia runtime, I incorporated multiple optimizations into the pipeline that lowers Triton MLIR to LLVM, including leveraging Hopper warp specialization and extracting producer-consumer relationships so the two sides can execute asynchronously. A conceptual sketch of the constrained calling convention appears below.
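
The sketch below illustrates, in plain CUDA, the calling-convention idea behind those constraints: a device function that allocates nothing itself, works entirely out of caller-provided shared memory, and moves data in and out through an explicit register "context." Grafia's real ABI is not public; every name here is hypothetical.

    #include <cstdio>

    // Hypothetical register "context": inputs and results travel by value,
    // so they stay in registers across the device-function boundary.
    struct Ctx { int acc; int lane; };

    // Device function in the constrained style: no allocation, no global
    // state; it only reads a pre-allocated shared-memory tile and returns
    // an updated context. (Illustrative; not Grafia's actual ABI.)
    __device__ Ctx tile_sum(Ctx c, const float* smem_tile, int tile_len) {
        for (int i = c.lane; i < tile_len; i += 32)
            c.acc += (int)smem_tile[i];
        return c;
    }

    __global__ void caller(const float* in, int n, int* out) {
        extern __shared__ float tile[];   // shared memory owned by the caller
        int lane = threadIdx.x % 32;
        for (int i = threadIdx.x; i < n; i += blockDim.x)
            tile[i] = in[i];
        __syncthreads();
        Ctx c = tile_sum(Ctx{0, lane}, tile, n);  // plug the function in
        atomicAdd(out, c.acc);
    }

    int main() {
        const int n = 256;
        float* in; int* out;
        cudaMallocManaged(&in, n * sizeof(float));
        cudaMallocManaged(&out, sizeof(int));
        for (int i = 0; i < n; ++i) in[i] = 1.0f;
        *out = 0;
        caller<<<1, 32, n * sizeof(float)>>>(in, n, out);  // one warp
        cudaDeviceSynchronize();
        printf("sum = %d\n", *out);  // 32 lanes cover 256 ones: expect 256
        return 0;
    }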

PGAS (Partitioned Global Address Space) with Distributed GPUs

In this project, I led the research and open-source development of novel, state-of-the-art asynchronous distributed GPU data structures and algorithms that support irregular applications in areas such as biology, graph analytics, and sparse linear algebra. These solutions leverage Partitioned Global Address Space (PGAS)-style communication between distributed GPUs, allowing for:

  • More efficient communication latency hiding
  • Higher bandwidth utilization
  • Reduced pipeline stalls through increased pipeline scheduling flexibility
This project created a highly effective programming environment that enabled end-users to efficiently leverage GPUs for irregular communication patterns, contributing to the HPC community's understanding of how to design primitives for accelerator-equipped distributed-memory machines.
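
As a flavor of what PGAS-style GPU data structures look like, here is a hedged sketch (not the project's actual code) of a distributed queue push: a device-side one-sided atomic claims a slot on the owning PE, then a one-sided put writes the element, all without involving the CPU.

    #include <cstdio>
    #include <nvshmem.h>

    // Illustrative PGAS-style distributed queue push using NVSHMEM
    // (not the project's actual data structure).
    __global__ void remote_push(long long* tail, int* slots, int capacity,
                                int mype, int npes) {
        if (threadIdx.x == 0 && blockIdx.x == 0) {
            int owner = (mype + 1) % npes;  // the queue lives on the next PE
            // One-sided atomic reserves a slot on the owner.
            long long slot = nvshmem_longlong_atomic_fetch_add(tail, 1, owner);
            if (slot < capacity)
                nvshmem_int_p(&slots[slot], mype, owner);  // enqueue my rank
            nvshmem_quiet();
        }
    }

    int main() {
        nvshmem_init();
        int mype = nvshmem_my_pe(), npes = nvshmem_n_pes();
        const int capacity = 1024;
        long long* tail = (long long*)nvshmem_malloc(sizeof(long long));
        int* slots = (int*)nvshmem_malloc(capacity * sizeof(int));
        cudaMemset(tail, 0, sizeof(long long));
        nvshmem_barrier_all();                   // tails initialized everywhere
        remote_push<<<1, 32>>>(tail, slots, capacity, mype, npes);
        cudaDeviceSynchronize();
        nvshmem_barrier_all();                   // pushes complete everywhere
        long long count;
        cudaMemcpy(&count, tail, sizeof(count), cudaMemcpyDeviceToHost);
        printf("PE %d holds %lld queued item(s)\n", mype, count);
        nvshmem_free(slots); nvshmem_free(tail);
        nvshmem_finalize();
        return 0;
    }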

Blogs

Kernel launch SM scheduling

April 2022.

Observations on how CUDA thread blocks are distributed across SMs
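
A minimal way to reproduce that kind of observation is to read the PTX special register %smid from each block (sketch below; placement is up to the hardware scheduler and varies across GPUs and launches):

    #include <cstdio>

    // Report which SM each block lands on via the %smid special register.
    __device__ unsigned smid() {
        unsigned id;
        asm("mov.u32 %0, %%smid;" : "=r"(id));
        return id;
    }

    __global__ void record_placement(unsigned* sm_of_block) {
        if (threadIdx.x == 0)
            sm_of_block[blockIdx.x] = smid();
    }

    int main() {
        const int blocks = 8;
        unsigned* sm_of_block;
        cudaMallocManaged(&sm_of_block, blocks * sizeof(unsigned));
        record_placement<<<blocks, 32>>>(sm_of_block);
        cudaDeviceSynchronize();
        for (int b = 0; b < blocks; ++b)
            printf("block %d -> SM %u\n", b, sm_of_block[b]);
        return 0;
    }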

GPU and NIC

March 2018

Groute Notes

Nov 2017

MiniGunrock

Oct 2017

Spark Notes

Jan 2017

Scala Notes

Jan 2017

Tail Recursion

Dec 2016

Scala Notes

Dec 2016

Education

University of California, Davis

Ph.D. in Computer Science

2017 - 2023

King Abdullah University of Science and Technology (KAUST)

M.S. in Computer Science

2013 - 2015

Fuwang Management School, Xiamen University

Bachelor of Management

2009 - 2013

The Wang Yanan Institute for Studies in Economics (WISE), Xiamen University

Bachelor of Economics

2009 - 2013