Relevant if you build with AI tools, APIs, or coding agents.
Introducing Triton: Open-source GPU programming for neural networks
Quick editorial signal
- Track this as an OpenAI update, not just a standalone headline.
- Useful for builders who need to understand API, coding, or workflow changes.
- Likely worth revisiting after people have used the release in practice.
We’re releasing Triton 1.0, an open-source Python-like programming language which enables researchers with no CUDA experience to write highly efficient GPU code—most of the time on par with what an expert would be able to produce.
Why it matters
Triton makes it possible to reach peak hardware performance with relatively little effort; for example, it can be used to write FP16 matrix multiplication kernels that match the performance of cuBLAS—something that many GPU programmers can’t do—in under 25 lines of code. Our researchers have already used it to produce kernels that are up to 2x more efficient than equivalent Torch implementations, and we’re excited to work with the community to make GPU programming more accessible to everyone.
Novel research ideas in the field of Deep Learning are generally implemented using a combination of native framework operators. While convenient, this approach often requires the creation (and/or movement) of many temporary tensors, which can hurt the performance of neural networks at scale. These issues can be mitigated by writing specialized GPU kernels, but doing so can be surprisingly difficult due to the many intricacies of GPU programming. [1, 2, 3] And, although a variety of systems have recently emerged [4, 5] to make this process easier, we have found them to be either too verbose, too inflexible, or noticeably slower than our hand-tuned baselines. This has led us to extend and improve Triton [6], a recent language and compiler whose original creator now works at OpenAI.
The architecture of modern GPUs can be roughly divided into three major components—DRAM, SRAM and ALUs—each of which must be considered when optimizing CUDA code:
* Memory transfers from DRAM must be _coalesced_ into large transactions to leverage the large bus width of modern memory interfaces.
* Data must be manually stashed to SRAM prior to being re-used, and managed so as to minimize shared memory bank conflicts upon retrieval.
* Computations must be partitioned and scheduled carefully, both across and within Streaming Multiprocessors (SMs), so as to promote instruction/thread-level parallelism and leverage special-purpose ALUs (e.g., tensor cores).
Basic architecture of a GPU.
Reasoning about all these factors can be challenging, even for seasoned CUDA programmers with many years of experience. The purpose of Triton is to fully automate these optimizations, so that developers can better focus on the high-level logic of their parallel code. Triton aims to be broadly applicable, and therefore does not automatically schedule work across SMs, leaving some important algorithmic considerations (e.g., tiling, inter-SM synchronization) to the discretion of developers.
| | CUDA | Triton |
| --- | --- | --- |
| Memory coalescing | Manual | Automatic |
| Shared memory management | Manual | Automatic |
| Scheduling (within SMs) | Manual | Automatic |
| Scheduling (across SMs) | Manual | Manual |

Compiler optimizations in CUDA vs. Triton.
Out of all the Domain Specific Languages and JIT-compilers available, Triton is perhaps most similar to Numba: kernels are defined as decorated Python functions, and launched concurrently with different program_id's on a grid of so-called _instances_. However, as shown in the code snippets below, the resemblance stops there: Triton exposes intra-instance parallelism via operations on _blocks_ (small arrays whose dimensions are powers of two) rather than a Single Instruction, Multiple Thread (SIMT) [7] execution model. In doing so, Triton effectively abstracts away all the issues related to concurrency _within_ CUDA thread blocks (e.g., memory coalescing, shared memory synchronization/conflicts, tensor core scheduling).
```python
BLOCK = 512

# This is a GPU kernel in Numba.
# Different instances of this
# function may run in parallel.
@jit
def add(X, Y, Z, N):
    # In Numba/CUDA, each kernel
    # instance itself uses an SIMT execution
    # model, where instructions are executed in
    # parallel for different values of threadIdx
    tid = threadIdx.x
    bid = blockIdx.x
    # scalar index
    idx = bid * BLOCK + tid
    if idx < N:
        # There is no pointer in Numba.
        # Z, X, Y are dense tensors
        Z[idx] = X[idx] + Y[idx]

...
grid = (ceil_div(N, BLOCK),)
block = (BLOCK,)
add[grid, block](X, Y, Z, N)
```
```python
BLOCK = 512

# This is a GPU kernel in Triton.
# Different instances of this
# function may run in parallel.
@jit
def add(X, Y, Z, N):
    # In Triton, each kernel instance
    # executes block operations on a
    # single thread: there is no construct
    # analogous to threadIdx
    pid = program_id(0)
    # block of indices
    idx = pid * BLOCK + arange(BLOCK)
    mask = idx < N
    # Triton uses pointer arithmetics
    # rather than indexing operators
    x = load(X + idx, mask=mask)
    y = load(Y + idx, mask=mask)
    store(Z + idx, x + y, mask=mask)

...
grid = (ceil_div(N, BLOCK),)
# no thread-block
add[grid](X, Y, Z, N)
```
Vector addition in Triton.
While this may not be particularly helpful for embarrassingly parallel (i.e., element-wise) computations, it can greatly simplify the development of more complex GPU programs.
Consider for example the case of a fused softmax kernel (below) in which each instance normalizes a different row of the given input tensor $X \in \mathbb{R}^{M \times N}$. Standard CUDA implementations of this parallelization strategy can be challenging to write, requiring explicit synchronization between threads as they concurrently reduce the same row of _X_. Most of this complexity goes away with Triton, where each kernel instance loads the row of interest and normalizes it sequentially using NumPy-like primitives.
Note that the Triton JIT treats X and Y as _pointers_ rather than tensors; we felt that retaining low-level control of memory accesses was important for addressing more complex data structures (e.g., block-sparse tensors).
Importantly, this particular implementation of softmax keeps the rows of _X_ in SRAM throughout the entire normalization process, which maximizes data reuse when applicable (i.e., when a row fits in SRAM).
A100 performance of fused softmax for M=4096.
The lower performance of the Torch (v1.9) JIT highlights the difficulty of automatic CUDA code generation from sequences of high-level tensor operations.
Fused softmax with the Torch JIT.
Being able to write fused kernels for element-wise operations and reductions is important, but not sufficient given the prominence of matrix multiplication tasks in neural networks. As it turns out, Triton also works very well for those, achieving peak performance with just ~25 lines of Python code. By contrast, implementing something similar in CUDA would take a lot more effort and would likely achieve lower performance.
Matrix multiplication in Triton.
One important advantage of handwritten matrix multiplication kernels is that they can be customized as desired to accommodate fused transformations of their inputs (e.g., slicing) and outputs (e.g., Leaky ReLU). Without a system like Triton, non-trivial modifications of matrix multiplication kernels would be out of reach for developers without exceptional GPU programming expertise.
V100 tensor-core performance of matrix multiplication with appropriately tuned values for BLOCK_M, BLOCK_N, BLOCK_K, and GROUP_M.
The good performance of Triton comes from a modular system architecture centered around Triton-IR, an LLVM-based intermediate representation in which multi-dimensional blocks of values are first-class citizens.
The vector-addition kernel from earlier can be followed through each stage of this pipeline: Python, Triton-IR, LLVM-IR, and finally PTX.

Python (excerpt):

```python
idx = pid * 512 + arange(512)
```
Triton-IR:

```
def void add(i32* X .aligned(16), i32* Y .aligned(16), i32* Z .aligned(16), i32 N .multipleof(2))
{
entry:
    %0 = get_program_id[0] i32;
    %1 = mul i32 %0, 512;
    %3 = make_range[0 : 512] i32;
    %4 = splat i32 %1;
    %6 = add i32 %4, %3;
    %9 = splat i32 N;
    %11 = icmp_slt i1 %6, %9;
    %14 = splat i32* X;
    %16 = getelementptr i32* %14, %6;
    %19 = broadcast i1 %11;
    %21 = splat i32 undef;
    %22 = masked_load i32 %16, %19, %21;
    %26 = splat i32* Y;
    %28 = getelementptr i32* %26, %6;
    %31 = broadcast i1 %11;
    %33 = splat i32 undef;
    %34 = masked_load i32 %28, %31, %33;
    %38 = splat i32* Z;
    %40 = getelementptr i32* %38, %6;
    %43 = add i32 %22, %34;
    %46 = broadcast i32 %43;
    %48 = broadcast i1 %11;
    masked_store void %40, %46, %48;
    ret void;
}
```
PTX:

```
.visible .entry add(
    .param .u64 add_param_0, .param .u64 add_param_1,
    .param .u64 add_param_2, .param .u32 add_param_3
)
.maxntid 128, 1, 1
{
    .reg .pred %p;
    .reg .b32 %r;
    .reg .b64 %rd;
    ld.param.u64 %rd4, [add_param_0];
    ld.param.u64 %rd5, [add_param_1];
    mov.u32 %r13, %tid.x;
    ld.param.u32 %r14, [add_param_3];
    shl.b32 %r15, %r13, 2;
    mov.u32 %r16, %ctaid.x;
    mad.lo.s32 %r17, %r16, 512, %r15;
    setp.ge.s32 %p3, %r17, %r14;
    setp.lt.s32 %p1, %r17, %r14;
    mul.wide.s32 %rd7, %r17, 4;
    add.s64 %rd2, %rd4, %rd7;
    @%p1 ld.global.cg.v4.b32 {%r5,%r6,%r7,%r8}, [%rd2 + 0];
    add.s64 %rd3, %rd5, %rd7;
    @%p1 ld.global.cg.v4.b32 {%r9,%r10,%r11,%r12}, [%rd3 + 0];
    @%p3 bra LBB0_2;
    ld.param.u64 %rd6, [add_param_2];
    add.s64 %rd1, %rd6, %rd7;
    add.s32 %r1, %r5, %r9;
    add.s32 %r2, %r6, %r10;
    add.s32 %r3, %r7, %r11;
    add.s32 %r4, %r8, %r12;
    st.global.v4.u32 [%rd1], {%r1, %r2, %r3, %r4};
LBB0_2:
    ret;
}
```
High-level architecture of Triton.
The @triton.jit decorator works by walking the Abstract Syntax Tree (AST) of the provided Python function so as to generate Triton-IR on the fly using a common SSA construction algorithm. [8] The resulting IR code is then simplified, optimized, and automatically parallelized by our compiler backend, before being converted into high-quality LLVM-IR, and eventually PTX, for execution on recent NVIDIA GPUs. CPUs and AMD GPUs are not supported at the moment, but we welcome community contributions aimed at addressing this limitation.
We have found that the use of blocked program representations via Triton-IR allows our compiler to automatically perform a wide variety of important program optimizations. For example, data can be automatically stashed to shared memory by looking at the operands of computationally intensive block-level operations (e.g., tl.dot), and allocated/synchronized using standard liveness analysis techniques.
The Triton compiler allocates shared memory by analyzing the live range of block variables used in computationally intensive operations.
On the other hand, Triton programs can be efficiently and automatically parallelized both (1) across SMs by executing different kernel instances concurrently, and (2) within SMs by analyzing the iteration space of each block-level operation and partitioning it adequately across different SIMD units, as shown below.
Automatic parallelization in Triton. Each block-level operation defines a blocked iteration space that is automatically parallelized to make use of the resources available on a Streaming Multiprocessor (SM).
_If you’re interested in joining our team and working on Triton & GPU kernels, we’re hiring!_