Hi! I'm Yuxin Chen.
I have extensive experience in GPU programming, specializing in multi-GPU communication and GPU runtime design. My expertise includes developing high-performance computing frameworks, optimizing deep learning models, and enhancing large-scale distributed systems.
Research and Articles
Transformers Implement Functional Gradient Descent to Learn Non-Linear Functions In Context
International Conference on Machine Learning (ICML) 2024
Xiang Cheng, Yuxin Chen, Suvrit Sra
Abstract: Many neural network architectures are known to be Turing complete and can thus, in principle, implement arbitrary algorithms. However, Transformers are unique in that they can implement gradient-based learning algorithms under simple parameter configurations. This paper provides theoretical and empirical evidence that (non-linear) Transformers naturally learn to implement gradient descent in function space, which in turn enables them to learn non-linear functions in context. Our results apply to a broad class of combinations of non-linear architectures and non-linear in-context learning tasks. Additionally, we show that the optimal choice of non-linear activation depends in a natural way on the class of functions that need to be learned.
Atos: A Task-Parallel GPU Scheduler for Graph Analytics
ICPP'22: Proceedings of the 51st International Conference on Parallel Processing
Yuxin Chen, Benjamin Brock, Serban Porumbescu, Aydın Buluç, Katherine Yelick, and John D. Owens
Abstract: We present Atos, a task-parallel GPU dynamic scheduling framework that is especially targeted at dynamic irregular applications. Compared to the dominant Bulk Synchronous Parallel (BSP) frameworks, Atos exposes additional concurrency by supporting task-parallel formulations of applications with relaxed dependencies, achieving higher GPU utilization, which is particularly significant for problems with concurrency bottlenecks. Atos also offers implicit task-parallel load balancing in addition to data-parallel load balancing, providing users the flexibility to balance between them to achieve optimal performance. Finally, Atos allows users to adapt to different use cases by controlling the kernel strategy and task-parallel granularity. We demonstrate that each of these controls is important in practice. We evaluate and analyze the performance of Atos vs. BSP on three applications: breadth-first search, PageRank, and graph coloring. Atos implementations achieve geomean speedups of 3.44x, 2.1x, and 2.77x and peak speedups of 12.8x, 3.2x, and 9.08x across three case studies, compared to a state-of-the-art BSP GPU implementation. Beyond simply quantifying the speedup, we extensively analyze the reasons behind each speedup. This deeper understanding allows us to derive general guidelines for how to select the optimal Atos configuration for different applications. Finally, our analysis provides insights for future dynamic scheduling framework designs.
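For orientation, the core pattern such a scheduler builds on is the persistent-kernel worklist: blocks stay resident and claim tasks from a shared counter rather than relaunching one kernel per BSP iteration. The CUDA sketch below is illustrative only, not Atos's actual API, and assumes a static task array:

    // Minimal persistent-kernel worklist (illustrative, not Atos code).
    __global__ void worker(const int *tasks, int n_tasks, int *next) {
        while (true) {
            int idx = atomicAdd(next, 1);   // claim the next task index
            if (idx >= n_tasks) return;     // worklist drained
            int task = tasks[idx];
            // process(task); a dynamic variant would also push newly
            // generated tasks onto the worklist here.
            (void)task;
        }
    }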
Scalable irregular parallelism with GPUs: getting CPUs out of the way
SC'22: Proceedings of the International Conference on High Performance Computing, Networking, Storage and Analysis
Yuxin Chen, Benjamin Brock, Serban Porumbescu, Aydın Buluç, Katherine Yelick, and John D. Owens
Abstract: We present Atos, a dynamic scheduling framework for multi-node GPU systems that supports PGAS-style lightweight one-sided memory operations within and between nodes. Atos's lightweight GPU-to-GPU communication enables latency hiding and can smooth interconnect usage for bisection-limited problems. These benefits are significant for dynamic, irregular applications that often involve fine-grained communication at unpredictable times and without predetermined patterns. We identify several principles for high performance: (1) do not involve the CPU in the communication control path; (2) allow GPU communication within kernels, addressing memory consistency directly rather than relying on synchronization with the CPU; (3) perform dynamic communication aggregation when interconnects have limited bandwidth. By lowering the overhead of communication and allowing it within GPU kernels, we support large, high-utilization GPU kernels with more frequent communication. We evaluate Atos on two irregular problems: breadth-first search and PageRank. Atos outperforms the state-of-the-art graph libraries Gunrock, Groute, and Galois in both single-node multi-GPU and multi-node GPU settings.
Performance Trade-offs in GPU Communication: A Study of Host and Device-initiated Approaches
PMBS 2020 IEEE/ACM Performance Modeling, Benchmarking and Simulation of High Performance Computer Systems
Taylor Groves, Ben Brock, Yuxin Chen, Khaled Z. Ibrahim, Lenny Oliker, Nicholas J. Wright, Samuel Williams, and Katherine Yelick
Abstract: Network communication on GPU-based systems is a significant roadblock for many applications with small but frequent messaging requirements. A common question from application developers is, "How can I reduce overheads and achieve the best communication performance on GPUs?" This work examines device-initiated versus host-initiated inter-node GPU communication using NVSHMEM. We derive basic communication model parameters for single-message and batched communication before validating our model against distributed GEMM benchmarks. We use our model to estimate performance benefits for applications transitioning from CPUs to GPUs for fixed-size and scaled workloads, and provide general guidelines for reducing communication overheads. Our findings show that the host-initiated approach generally outperforms the device-initiated approach on the system evaluated.
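As background for this kind of model (the paper's exact parameterization may differ), such studies typically start from the classic latency-bandwidth cost form:

    T(m) = \alpha + \beta m, \qquad
    T_{\text{batch}}(b, m) \approx \alpha + \beta b m,

where \alpha is the per-message initiation overhead (the term that differs most between host- and device-initiated paths), \beta is the inverse bandwidth, and batching b messages amortizes \alpha across the batch.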
RDMA vs. RPC for Implementing Distributed Data Structures
IA3 2019 IEEE/ACM 9th Workshop on Irregular Applications: Architectures and Algorithms
Benjamin Brock, Yuxin Chen, Jiakun Yan, John D. Owens, Aydın Buluç, and Katherine Yelick
Abstract: Distributed data structures are key to implementing scalable applications for scientific simulations and data analysis. In this paper we look at two implementation styles for distributed data structures: remote direct memory access (RDMA) and remote procedure call (RPC). We focus on operations that require individual accesses to remote portions of a distributed data structure, e.g., accessing a hash table bucket or distributed queue, rather than global operations in which all processors collectively exchange information. We look at the trade-offs between the two styles through microbenchmarks and a performance model that approximates the cost of each. RDMA operations have direct hardware support in the network and therefore lower latency and overhead, while RPC operations are more expressive but costlier and can suffer from a lack of attentiveness on the remote side. We also run experiments comparing the real-world performance of RDMA- and RPC-based data structure operations against the predicted performance to evaluate the accuracy of our model, and show that while the model does not always precisely predict running time, it allows us to choose the best implementation in the examples shown. We believe this analysis will assist developers in designing data structures that perform well on current network architectures, as well as network architects in providing better support for this class of distributed data structures.
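As a rough caricature of the trade-off the model captures (ours, not the paper's equations):

    T_{\text{RDMA}}(m) \approx \alpha + \beta m, \qquad
    T_{\text{RPC}}(m) \approx \alpha + \beta m + t_{\text{attentive}},

where t_{\text{attentive}} is the wait until the remote CPU services the request; RDMA avoids this term because the NIC completes the operation without remote-CPU involvement, at the cost of expressiveness.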
Accelerated Dimension-Independent Adaptive Metropolis
SIAM Journal on Scientific Computing, 2016
Yuxin Chen, David Keyes, Kody J. H. Law, and Hatem Ltaief
Abstract: This work describes improvements by algorithmic and architectural means to black-box Bayesian inference over high-dimensional parameter spaces. The well-known adaptive Metropolis (AM) algorithm [H. Haario, E. Saksman, and J. Tamminen, Bernoulli, (2001), pp. 223--242] is extended herein to scale asymptotically uniformly with respect to the underlying parameter dimension for Gaussian targets, by respecting the variance of the target. The resulting algorithm, referred to as the dimension-independent adaptive Metropolis (DIAM) algorithm, also shows improved performance with respect to adaptive Metropolis on non-Gaussian targets. This algorithm is further improved, and the possibility of probing high-dimensional targets (with dimension d >= 1000) is enabled, via GPU-accelerated numerical libraries and periodically synchronized concurrent chains (justified a posteriori). Asymptotically in dimension, this GPU implementation exhibits a factor-of-four improvement over a competitive CPU-based Intel MKL (Math Kernel Library) parallel version alone. Strong scaling to concurrent chains is exhibited through a combination of longer time per sample batch (weak scaling) and fewer samples needed for convergence. The algorithm's performance is illustrated on several Gaussian and non-Gaussian target examples, in which the dimension may exceed one thousand.
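For reference, the classic AM proposal that DIAM builds on (Haario et al., 2001, cited above) draws from a Gaussian whose covariance tracks the chain's empirical covariance:

    q_t(\cdot \mid x_t) = \mathcal{N}\!\big(x_t,\; s_d\,\widehat{\Sigma}_t + s_d\,\varepsilon\, I_d\big), \qquad s_d = (2.4)^2 / d;

the dimension-independent variant modifies this scaling so that mixing does not degrade as d grows (exact form as in the paper).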
Projects
In-Context Learning with Transformers
Motivated by the in-context learning phenomenon, this work investigates how the Transformer neural network can implement learning algorithms in its forward pass. A linear Transformer is shown to naturally learn to implement gradient descent, enabling it to learn linear functions in-context. More broadly, a non-linear Transformer can implement functional gradient descent with respect to some RKHS metric, allowing it to learn a wide class of functions in-context. Additionally, the RKHS metric is determined by the choice of attention activation, with the optimal attention activation depending naturally on the class of functions to be learned.
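In one-line form (notation ours, summarizing the result rather than quoting the paper): given in-context examples (x_i, y_i), an attention layer with activation \sigma can implement the functional gradient step

    f_{t+1}(x) = f_t(x) - \eta \sum_{i=1}^{n} \big( f_t(x_i) - y_i \big)\,\sigma(\langle x_i, x \rangle),

i.e., gradient descent on the squared loss in the RKHS whose kernel is induced by \sigma; with linear attention this reduces to ordinary gradient descent on linear functions.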
Extending Triton to enable device function code generation for NVIDIA Grafia
This project extended Triton to generate device functions for NVIDIA Grafia, which imposes several constraints on the generated functions:
- Using only the Grafia register pool for device functions.
- Using only pre-allocated Grafia shared memory.
- Generating Grafia input and return contexts with the right set of registers on the fly, without exposing those contexts at the Triton programming and user level.
- Decomposing and segregating the MLIR into distinct functions by functionality (DMA, compute, sync, etc.) while lowering Triton to the LLVM level.
Accelerating Embedding Table Retrieval for DLRM
Proposed using GPU-direct, one-sided, asynchronous small messages to replace the widely used collective communication calls for sparse-input multi-GPU embedding retrieval in deep learning recommendation models (DLRM). This GPU-direct communication approach (sketched after the list below) achieves:
- better communication and computation overlap
- smoother network usage
- reduced overhead, by avoiding the data unpack and rearrangement steps associated with collective communication calls.
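A minimal sketch of the retrieval path, assuming NVSHMEM and round-robin row sharding; the kernel and variable names are illustrative, not the project's actual code:

    // One-sided, device-initiated embedding-row gather (illustrative).
    #include <nvshmem.h>
    #include <nvshmemx.h>

    __global__ void gather_rows(const long *indices, int n_indices,
                                const float *shard,  // local shard, symmetric
                                float *out, int dim, int n_pes) {
        int i = blockIdx.x;                  // one thread block per row
        if (i >= n_indices) return;
        long row   = indices[i];
        int  owner = (int)(row % n_pes);     // assumed round-robin sharding
        long local = row / n_pes;
        // Block-cooperative one-sided get straight from the owner GPU:
        // no collective call, no CPU on the control path, and no
        // unpack/rearrange step afterward.
        nvshmemx_float_get_block(out + (long)i * dim,
                                 shard + local * dim, dim, owner);
    }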
Design and Prototyping of a Multi-GPU PGAS Programming Model
Led the research and open-source development of novel, state-of-the-art asynchronous distributed GPU data structures and algorithms that support irregular applications in areas such as biology, graph analytics, and sparse linear algebra. These solutions leverage Partitioned Global Address Space (PGAS)-style communication between distributed GPUs (a push-path sketch follows the list), allowing for:
- More efficient communication latency hiding
- Higher bandwidth utilization
- Reduced pipeline stalls through increased pipeline scheduling flexibility
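A sketch of what a PGAS-style push onto another GPU's queue can look like, assuming NVSHMEM symmetric allocations; all names are illustrative, not the released library's API:

    // Push one item onto a queue resident on another GPU, entirely from
    // inside a running kernel (illustrative sketch).
    #include <nvshmem.h>

    __device__ void remote_push(int *tail, int *slots, int capacity,
                                int value, int target_pe) {
        // Reserve a slot on the target GPU with a one-sided remote atomic,
        int slot = nvshmem_int_atomic_fetch_add(tail, 1, target_pe);
        // then deposit the payload: no CPU involvement, no kernel boundary.
        nvshmem_int_p(&slots[slot % capacity], value, target_pe);
        nvshmem_quiet();  // make the put visible before signaling consumers
    }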
Fingerprint-Based Hash Tables
This project developed a hash table supporting fast membership queries on shared-memory CPU machines, AVX-capable machines, and GPUs.
- Enhanced the effectiveness of hash-table membership queries by adding a fingerprint table in which each fingerprint occupies only 8-16 bits of memory. This significantly reduced memory traffic and relieved Translation Lookaside Buffer (TLB) pressure while maintaining high query accuracy: on RMAT datasets, the filter rejected false queries with a false-positive rate below 5% (the query path is sketched after this list).
- Rather than rebuilding a larger hash table when an insertion failed, devised an overflow buffer that stores failed insertions. This enables multi-threaded GPU key searches, yielding faster queries and insertions. In tests on RMAT datasets, this approach kept the overflow ratio below 0.1% across varying load factors, and the resulting hash table achieved up to an 8x speedup over the standard library's unordered map.
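A minimal sketch of the fingerprint query path; the hash function and table layout here are illustrative, not the project's code:

    #include <cstdint>

    // 8-bit-fingerprint membership filter: a query touches one byte of the
    // fingerprint table instead of a full key/value slot.
    __device__ __forceinline__ uint32_t slot_of(uint64_t k, uint32_t n_slots) {
        k ^= k >> 33; k *= 0xff51afd7ed558ccdULL; k ^= k >> 33;  // mix bits
        return (uint32_t)(k % n_slots);
    }

    __global__ void query(const uint64_t *keys, int n,
                          const uint8_t *fp_table, uint32_t n_slots,
                          bool *maybe_present) {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i >= n) return;
        uint64_t k  = keys[i];
        uint8_t  fp = (uint8_t)(k >> 56) | 1;  // never 0, so 0 means "empty"
        maybe_present[i] = (fp_table[slot_of(k, n_slots)] == fp);
    }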
Communication Aggregator for Multi-GPU Systems
This project developed a communication aggregator that dynamically aggregates communication for gathering, scattering, and randomly accessing data between GPUs, yielding significant improvements in bandwidth utilization. Beyond better bandwidth utilization, the aggregator brings the following benefits (a flush-path sketch follows the list):
- By decoupling communication granularity from computation granularity, the aggregator lets users select the optimal communication granularity. By further decoupling the time a message is generated from the time its data is actually sent, it lets users write in an algorithm's natural logic flow and emit messages at their native size.
- The aggregator runs transparently alongside the application code, minimizing the burden on the user, and can send data either eagerly (immediately) or in a more bandwidth-efficient aggregated manner.
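A schematic of the flush path, assuming NVSHMEM for the actual transfer; all names are ours, and a production version needs a claim/full protocol to make concurrent flushes safe:

    #include <nvshmem.h>

    struct AggBuffer {
        int *staging;   // local staging area (symmetric allocation)
        int *remote;    // destination buffer on the target PE
        int *count;     // number of messages currently staged
        int  capacity;
    };

    // Messages are written at their native size; the batch is sent later,
    // at a granularity chosen for the interconnect, not the algorithm.
    __device__ void agg_send(AggBuffer buf, int msg, int target_pe) {
        int pos = atomicAdd(buf.count, 1);  // append to the staging buffer
        buf.staging[pos] = msg;
        if (pos + 1 == buf.capacity) {      // filler of the last slot flushes
            nvshmem_int_put(buf.remote, buf.staging, buf.capacity, target_pe);
            nvshmem_quiet();
            *buf.count = 0;                 // reset (needs fencing in practice)
        }
    }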
GPU-Accelerated Apache Spark
- Developed a GPU version of Spark's fundamental data structure, Resilient Distributed Datasets (RDDs), while adhering to Spark's original design philosophy. The GPU component was designed to be compatible, efficient, flexible, and principled, enabling seamless integration with existing Spark workflows.
- Developed highly efficient GPU-accelerated logistic regression and map/reduce operations. The GPU-accelerated map and reduce operations run twice as fast as their CPU-only Spark counterparts, and the GPU-accelerated logistic regression outperforms its CPU-only Spark version by a factor of five, improving the overall performance of Spark workflows.
- Collaborated with other members of the team to integrate these enhancements into the Apache Spark codebase, enabling end-users to leverage GPU acceleration to gain deeper insights from their data, improve decision-making, and accelerate time-to-insight.
Blogs
Learn by Examples: C PyTorch Bindings
Jan 2023
GPU shared memory bank conflict
Jan 2023
Kernel launch SM scheduling
April 2022
Observation notes on how CUDA blocks are distributed on SMs
GPU and NIC
March 2018
GPU and PGAS communication
Feb 2018
GPU Architectures Notes
Dec 2017
Connected Components (CC) Notes
Dec 2017
Groute Notes
Nov 2017
Gunrock Connected Component (CC)
Oct 2017
MiniGunrock
Oct 2017
Spark Notes
Jan 2017
Scala Notes
Jan 2017
Tail Recursion
Dec 2016
Scala Notes
Dec 2016
Education
University of California, Davis
Ph.D. in Computer Science
2017 - 2023
King Abdullah University of Science and Technology (KAUST)
M.S. in Computer Science
2013 - 2015
Fuwang Management School, Xiamen University
Bachelor of Management
2009 - 2013
The Wang Yanan Institute for Studies in Economics (WISE), Xiamen University
Bachelor of Economics
2009 - 2013