Add torch.Tensor fast path for StridedMemoryView via AOTI tensor bridge #1894
Draft
leofang wants to merge 17 commits into NVIDIA:main from
Conversation
Provide a fast path for constructing a StridedMemoryView from a
torch.Tensor by reading tensor metadata directly through PyTorch's
AOT Inductor (AOTI) stable C ABI, avoiding DLPack/CAI protocol
overhead (~10 ns per tensor via pointer arithmetic).
Key design:
- Vendored AOTI shim header (aoti_shim.h) with extern "C" wrapping
- _tensor_bridge.pyx loaded lazily (only when a torch.Tensor is first
passed) to avoid undefined AOTI symbols at import time
- RTLD_GLOBAL bootstrap via sys.modules["torch._C"] before loading
_tensor_bridge.so
- torch detection via type(obj).__module__.startswith("torch")
- PyTorch is NOT a build-time or run-time dependency of cuda.core
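The detection and lazy-bootstrap pattern described above can be sketched in plain Python. These are hypothetical helper names, not the actual cuda.core API; `bootstrap_torch_symbols` assumes the user has already imported torch.

```python
import ctypes
import sys

def is_torch_tensor(obj) -> bool:
    # Cheap check that never imports torch: a torch.Tensor's class lives
    # in a module whose name starts with "torch", and if such an object
    # exists, torch must already be present in sys.modules.
    return type(obj).__module__.startswith("torch") and "torch" in sys.modules

def bootstrap_torch_symbols() -> None:
    # Re-open torch._C with RTLD_GLOBAL so the AOTI symbols it exports
    # become visible to the compiled _tensor_bridge extension loaded next.
    # Raises KeyError if torch has not been imported yet.
    torch_c = sys.modules["torch._C"]
    ctypes.CDLL(torch_c.__file__, mode=ctypes.RTLD_GLOBAL)
```

The module-name check deliberately avoids `isinstance(obj, torch.Tensor)`, which would force a torch import even for callers that never pass tensors.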
Closes NVIDIA#749
Co-Authored-By: Emilio Castillo <ecastillo@nvidia.com>
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
…pty .pxd
- Remove unused aoti_torch_get_numel and aoti_torch_get_storage_offset declarations from aoti_shim.h and _tensor_bridge.pyx
- Fix license headers on new files to 2026 (not 2024-2026)
- Delete empty _tensor_bridge.pxd (nothing cimports from it)
- Defer numpy dtype resolution for torch tensors: store the raw AOTI dtype code in metadata, compute itemsize from a cheap lookup table, and only resolve the full numpy dtype on first .dtype access via get_dtype()
Co-Authored-By: Emilio Castillo <ecastillo@nvidia.com>
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
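The deferred-dtype idea above can be sketched as follows. The dtype codes are assumed to follow torch's ScalarType enum (an assumption worth verifying against the real ABI), and plain strings stand in for numpy dtypes so the sketch has no dependencies.

```python
# Itemsize lookup keyed by (assumed) torch ScalarType codes.
_ITEMSIZE = {0: 1, 1: 1, 2: 2, 3: 4, 4: 8, 5: 2, 6: 4, 7: 8, 11: 1}

# Full dtype resolution table, consulted only on first .dtype access.
_DTYPE_NAME = {0: "uint8", 1: "int8", 2: "int16", 3: "int32", 4: "int64",
               5: "float16", 6: "float32", 7: "float64", 11: "bool"}

class LazyDtypeMeta:
    """Stores the raw dtype code; resolves the full dtype only on access."""

    def __init__(self, dtype_code: int):
        self._code = dtype_code
        self._dtype = None                    # not yet resolved
        self.itemsize = _ITEMSIZE[dtype_code]  # cheap table lookup up front

    @property
    def dtype(self) -> str:
        if self._dtype is None:               # first access pays the cost
            self._dtype = _DTYPE_NAME[self._code]
        return self._dtype
```

Constructing the view thus never touches the dtype machinery unless the caller actually reads `.dtype`.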
Instead of short-circuiting in __init__ and from_any_interface, add the AOTI fast path check to from_dlpack, from_cuda_array_interface, and from_array_interface. This ensures torch tensors always take the fast path regardless of which constructor the user calls. Simplify from_any_interface and _StridedMemoryViewProxy to just delegate to the from_* methods (which now handle torch internally).
Co-Authored-By: Emilio Castillo <ecastillo@nvidia.com>
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
When stream_ptr is not -1, establish stream ordering between PyTorch's current CUDA stream (the producer) and the consumer stream, using the same event record + stream wait pattern as the CAI path. Uses aoti_torch_get_current_cuda_stream to get the producer stream, matching what PyTorch's own __dlpack__ does internally.
Co-Authored-By: Emilio Castillo <ecastillo@nvidia.com>
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Factor out stream ordering into a cpdef sync_torch_stream() helper in _tensor_bridge.pyx, callable from both C (view_as_torch_tensor) and Python (_memoryview.pyx).
Apply the same stream ordering in view_as_cai for torch tensors: PyTorch's __cuda_array_interface__ reports version 2 and omits the "stream" field, so the standard CAI sync path is a no-op — leaving the consumer with no guarantee that the producer's work is visible. We now detect torch tensors in the CAI path and query PyTorch's current CUDA stream via AOTI to establish proper ordering.
Co-Authored-By: Emilio Castillo <ecastillo@nvidia.com>
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
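The event-record + stream-wait contract described above can be modeled with plain Python stand-ins (not CUDA streams or events) to show the ordering it establishes:

```python
class Event:
    """Marker recorded on a producer stream at a point in its work queue."""
    def __init__(self, producer: str, position: int):
        self.producer = producer
        self.position = position

class Stream:
    """Toy stream that just logs its queued operations in order."""
    def __init__(self, name: str):
        self.name = name
        self.log = []

    def record_event(self) -> Event:
        ev = Event(self.name, len(self.log))
        self.log.append(("record", ev))
        return ev

    def wait_event(self, ev: Event) -> None:
        self.log.append(("wait", ev))

def sync_streams(producer: Stream, consumer: Stream) -> None:
    # Work submitted to the consumer after this call will not start until
    # everything the producer queued before the event has completed.
    ev = producer.record_event()
    consumer.wait_event(ev)
```

In the real code the producer is PyTorch's current CUDA stream (from aoti_torch_get_current_cuda_stream) and the consumer is the stream the StridedMemoryView caller passed in.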
- Add check_aoti() inline helper to replace repetitive err/raise patterns for AOTI calls (one-liner per call)
- Change itemsize type from int to size_t
- Add test_torch_tensor_bridge_sliced_2d test case
Co-Authored-By: Emilio Castillo <ecastillo@nvidia.com>
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
- Revert itemsize back to int (size_t was unnecessary for small values)
- Memoize int(stream_ptr) to avoid redundant Python operator conversion
Co-Authored-By: Emilio Castillo <ecastillo@nvidia.com>
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Better Cython 3 performance: `except? -1` avoids the overhead of `except *`, which always checks for exceptions.
Co-Authored-By: Emilio Castillo <ecastillo@nvidia.com>
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
The AOTI stable C ABI functions we use (get_dim, get_dtype, get_device_type, get_device_index, get_current_cuda_stream, complex dtype constants) were all introduced in PyTorch 2.3.0. Earlier versions are missing some or all of them. _is_torch_tensor now returns False when torch < 2.3, causing a graceful fallback to the standard DLPack/CAI paths. The version check result is memoized in a module-level variable.
Also move `import ctypes, sys` from _get_tensor_bridge to module level.
Co-Authored-By: Emilio Castillo <ecastillo@nvidia.com>
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
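The memoized version gate above can be sketched as follows. The helper names are illustrative, and the lenient parsing of local-version suffixes like "+cu121" is an assumption about torch version strings.

```python
_version_ok = None  # module-level memo: None means "not checked yet"

def _leading_int(s: str) -> int:
    # Take only the leading digits so "3rc1" or "0+cu121" parse safely.
    digits = ""
    for ch in s:
        if not ch.isdigit():
            break
        digits += ch
    return int(digits or "0")

def torch_version_at_least(version: str, minimum=(2, 3)) -> bool:
    parts = version.split(".")
    major = _leading_int(parts[0])
    minor = _leading_int(parts[1]) if len(parts) > 1 else 0
    return (major, minor) >= minimum

def fast_path_allowed(version: str) -> bool:
    # The full check runs once per process; later calls hit the memo.
    global _version_ok
    if _version_ok is None:
        _version_ok = torch_version_at_least(version)
    return _version_ok
```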
Document the AOTI-based fast path for torch.Tensor in StridedMemoryView with ~10-20x speedup and stream ordering support.
Co-Authored-By: Emilio Castillo <ecastillo@nvidia.com>
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Co-Authored-By: Emilio Castillo <ecastillo@nvidia.com> Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
The cdata field changed from MaybeOwned<at::Tensor> (2.3-2.9) to at::Tensor (2.10+). Both layouts are compatible with our offset trick.
Co-Authored-By: Emilio Castillo <ecastillo@nvidia.com>
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Cache the result of the torch tensor type check (module + hasattr + version) keyed by type(obj). Subsequent calls for the same type are a single dict lookup (~76 ns) instead of the full check (~186 ns). Non-torch objects also benefit as the cache returns False immediately after the first miss.
Co-Authored-By: Emilio Castillo <ecastillo@nvidia.com>
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
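A minimal sketch of that per-type cache, where the `slow_check` callable stands in for the full module-name + hasattr + version check:

```python
_type_cache: dict = {}

def check_cached(obj, slow_check) -> bool:
    tp = type(obj)
    try:
        return _type_cache[tp]      # hot path: a single dict lookup
    except KeyError:
        # First sighting of this type: run the expensive check once
        # and remember the verdict for every future instance.
        result = _type_cache[tp] = slow_check(obj)
        return result
```

Keying on `type(obj)` is safe here because the verdict depends only on the class (its module and the torch version), never on the instance.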
The pyobj_to_aten_handle trick and the AtenTensorHandle == at::Tensor* identity are undocumented internals that could change. Cap at the latest tested version so unknown future versions fall back to the standard DLPack/CAI paths. Bump after verifying each new release.
Co-Authored-By: Emilio Castillo <ecastillo@nvidia.com>
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Co-Authored-By: Emilio Castillo <ecastillo@nvidia.com> Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Co-Authored-By: Emilio Castillo <ecastillo@nvidia.com> Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Co-Authored-By: Emilio Castillo <ecastillo@nvidia.com> Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
24aeb0f to 8c019b9
leofang commented Apr 11, 2026
Comment on lines +112 to +121
cdef inline AtenTensorHandle pyobj_to_aten_handle(object obj):
    """Extract AtenTensorHandle by offsetting past PyObject_HEAD.

    In PyTorch 2.3–2.9 the first field after PyObject_HEAD is
    ``c10::MaybeOwned<at::Tensor> cdata``; from 2.10 onward it is
    ``at::Tensor cdata``. In both cases the address of ``cdata``
    is usable as an ``AtenTensorHandle`` (``at::Tensor*``) for the
    AOTI stable C ABI functions.
    """
    return <AtenTensorHandle>(<char*><PyObject*>obj + sizeof(PyObject))
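The offset trick assumes sizeof(PyObject) equals the PyObject_HEAD prefix common to every CPython object. A quick ctypes sanity check of that assumption (CPython-specific and illustrative only; it does not touch torch):

```python
import ctypes

# On CPython, PyObject_HEAD is ob_refcnt (a Py_ssize_t) followed by
# ob_type (a pointer), and a bare `object` instance is exactly that
# header and nothing else, so __basicsize__ reveals the header size.
head_size = ctypes.sizeof(ctypes.c_ssize_t) + ctypes.sizeof(ctypes.c_void_p)
assert object.__basicsize__ == head_size
```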
Note: I have filed a feature request to discuss if this API can be formalized in AOTI directly, so that we can relax upper bound safely and be forward compatible: pytorch/pytorch#180107.
Summary
Add a fast path for constructing StridedMemoryView from torch.Tensor objects using PyTorch's AOT Inductor (AOTI) stable C ABI, bypassing the DLPack/CAI protocol overhead.
How it works
When a torch.Tensor is passed to any from_* classmethod (from_dlpack, from_cuda_array_interface, from_array_interface, or from_any_interface), the tensor metadata (data pointer, shape, strides, dtype, device) is read directly from the underlying C struct via AOTI function pointers, instead of going through the Python-level __dlpack__() or __cuda_array_interface__ protocols.
The key technique (pyobj_to_aten_handle) extracts the AtenTensorHandle by offsetting past PyObject_HEAD in the THPVariable struct — pure C pointer arithmetic with zero Python API calls. The AOTI functions (aoti_torch_get_data_ptr, aoti_torch_get_sizes, etc.) then read tensor metadata through PyTorch's stable C ABI.
PyTorch is NOT a build-time or runtime dependency. The AOTI symbols are resolved lazily at runtime from torch._C (loaded with RTLD_GLOBAL) only when the user actually passes a torch.Tensor. The _tensor_bridge module is never imported at cuda.core load time.
Performance
Benchmarked with %timeit (Python 3.12, PyTorch 2.11, NVIDIA RTX 6000 Ada). At the C level (no Python overhead), AOTI extracts all 7 metadata fields in ~14 ns — ~4x faster than the DLPack C exchange API (~60 ns) for the same metadata.
Stream ordering
When stream_ptr != -1, stream ordering is established between PyTorch's current CUDA stream (queried via aoti_torch_get_current_cuda_stream) and the consumer stream, using the same event-based pattern as the existing CAI path.
PyTorch's __cuda_array_interface__ reports version 2 with no stream field, making the standard CAI sync path a no-op. We detect torch tensors in the CAI path and apply AOTI-based stream ordering to fix this safety gap.
Version compatibility
The fast path requires PyTorch >= 2.3 (where the AOTI functions used here were introduced) and is capped at the latest tested release, since the THPVariable struct layout and the AtenTensorHandle == at::Tensor* identity are undocumented internals. Outside these bounds, torch tensors fall back to the standard DLPack/CAI paths.
Files changed
- cuda/core/_tensor_bridge.pyx (new): AOTI tensor bridge — pyobj_to_aten_handle, view_as_torch_tensor, sync_torch_stream, dtype/itemsize mapping
- cuda/core/_include/aoti_shim.h (new): vendored subset of PyTorch's AOTI stable C ABI declarations
- cuda/core/_memoryview.pyx: torch detection (_is_torch_tensor with type cache + version bounds), lazy bridge loading, fast path in all from_* classmethods, CAI stream safety fix, lazy dtype resolution
- tests/test_utils.py: 12 new test cases (dtypes, shapes, slicing, devices, decorator)
- docs/source/release/1.0.0-notes.rst: release notes entry
Closes #749
Co-Authored-By: Emilio Castillo <ecastillo@nvidia.com>
Test plan