#[repr(transparent)]
pub struct cudaLaunchAttributeID(pub c_uint);
Launch attributes enum; used as the id field of ::cudaLaunchAttribute.
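These IDs are consumed by filling the id field of a ::cudaLaunchAttribute and the matching member of its value union, then attaching the attribute to a launch configuration. The following is a minimal sketch only: it assumes the crate also exposes bindgen-generated cudaLaunchAttribute, cudaLaunchConfig_t, dim3, cudaStream_t, cudaError_t and cudaLaunchKernelExC items with the field names used in the CUDA runtime headers; none of those items are documented on this page, and imports of the crate's own items are omitted.

use std::os::raw::c_void;
use std::ptr;

// Sketch: launch a no-argument kernel with the cooperative attribute attached.
unsafe fn launch_cooperative(kernel: *const c_void, stream: cudaStream_t) -> cudaError_t {
    let mut attr: cudaLaunchAttribute = std::mem::zeroed();
    attr.id = cudaLaunchAttributeID::cudaLaunchAttributeCooperative;
    attr.val.cooperative = 1; // set the union member that matches the chosen id

    let mut config: cudaLaunchConfig_t = std::mem::zeroed();
    config.gridDim = dim3 { x: 32, y: 1, z: 1 };
    config.blockDim = dim3 { x: 256, y: 1, z: 1 };
    config.stream = stream;
    config.attrs = &mut attr;
    config.numAttrs = 1;

    // args is null because the kernel is assumed to take no parameters.
    cudaLaunchKernelExC(&config, kernel, ptr::null_mut())
}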
Tuple Fields
0: c_uint
Implementations
impl cudaLaunchAttributeID
pub const cudaLaunchAttributeIgnore: cudaLaunchAttributeID
Ignored entry, for convenient composition.
impl cudaLaunchAttributeID
pub const cudaLaunchAttributeAccessPolicyWindow: cudaLaunchAttributeID
Valid for streams, graph nodes, launches. See ::cudaLaunchAttributeValue::accessPolicyWindow.
impl cudaLaunchAttributeID
pub const cudaLaunchAttributeCooperative: cudaLaunchAttributeID
Valid for graph nodes, launches. See ::cudaLaunchAttributeValue::cooperative.
impl cudaLaunchAttributeID
pub const cudaLaunchAttributeSynchronizationPolicy: cudaLaunchAttributeID
Valid for streams. See ::cudaLaunchAttributeValue::syncPolicy.
impl cudaLaunchAttributeID
pub const cudaLaunchAttributeClusterDimension: cudaLaunchAttributeID
Valid for graph nodes, launches. See ::cudaLaunchAttributeValue::clusterDim.
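A hedged sketch of requesting 2 x 1 x 1 thread-block clusters for a launch (the clusterDim member names x/y/z are assumed from the CUDA runtime union, not taken from this page):

// Sketch: request clusters of 2 x 1 x 1 thread blocks; the launch's grid
// dimensions must be divisible by the cluster dimensions.
let mut attr: cudaLaunchAttribute = unsafe { std::mem::zeroed() };
attr.id = cudaLaunchAttributeID::cudaLaunchAttributeClusterDimension;
unsafe {
    attr.val.clusterDim.x = 2;
    attr.val.clusterDim.y = 1;
    attr.val.clusterDim.z = 1;
}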
impl cudaLaunchAttributeID
pub const cudaLaunchAttributeClusterSchedulingPolicyPreference: cudaLaunchAttributeID
Valid for graph nodes, launches. See ::cudaLaunchAttributeValue::clusterSchedulingPolicyPreference.
impl cudaLaunchAttributeID
pub const cudaLaunchAttributeProgrammaticStreamSerialization: cudaLaunchAttributeID
Valid for launches. Setting ::cudaLaunchAttributeValue::programmaticStreamSerializationAllowed to non-0 signals that the kernel will use programmatic means to resolve its stream dependency, so that the CUDA runtime should opportunistically allow the grid’s execution to overlap with the previous kernel in the stream, if that kernel requests the overlap. The dependent launches can choose to wait on the dependency using the programmatic sync (cudaGridDependencySynchronize() or equivalent PTX instructions).
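A hedged sketch of opting a launch into this behaviour (the union member name is assumed from the CUDA runtime headers; whether the overlap actually occurs remains at the runtime's discretion):

// Sketch: signal that this kernel resolves its stream dependency programmatically,
// so the runtime may overlap it with the previous kernel in the stream.
let mut attr: cudaLaunchAttribute = unsafe { std::mem::zeroed() };
attr.id = cudaLaunchAttributeID::cudaLaunchAttributeProgrammaticStreamSerialization;
unsafe { attr.val.programmaticStreamSerializationAllowed = 1 };
// This launch's device code can then call cudaGridDependencySynchronize()
// (or the equivalent PTX) before consuming results from the upstream kernel.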
impl cudaLaunchAttributeID
pub const cudaLaunchAttributeProgrammaticEvent: cudaLaunchAttributeID
Valid for launches. Set ::cudaLaunchAttributeValue::programmaticEvent to record the event. An event recorded through this launch attribute is guaranteed to only trigger after all blocks in the associated kernel trigger the event. A block can trigger the event programmatically in a future CUDA release. A trigger can also be inserted at the beginning of each block’s execution if triggerAtBlockStart is set to non-0. The dependent launches can choose to wait on the dependency using the programmatic sync (cudaGridDependencySynchronize() or equivalent PTX instructions). Note that dependents (including the CPU thread calling cudaEventSynchronize()) are not guaranteed to observe the release precisely when it is released. For example, cudaEventSynchronize() may only observe the event trigger long after the associated kernel has completed. This recording type is primarily meant for establishing programmatic dependency between device tasks. Note also that this type of dependency allows, but does not guarantee, concurrent execution of tasks.
The event supplied must not be an interprocess or interop event. The event must disable timing (i.e. it must be created with the ::cudaEventDisableTiming flag set).
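A hedged sketch of wiring up such an event; the event-creation call and the union member names are taken from the CUDA runtime API, not from this page:

// Sketch: record `event` through the launch; the event must be created with the
// DisableTiming flag and must not be an interprocess or interop event (see above).
let mut event: cudaEvent_t = std::ptr::null_mut();
unsafe { cudaEventCreateWithFlags(&mut event, cudaEventDisableTiming) };

let mut attr: cudaLaunchAttribute = unsafe { std::mem::zeroed() };
attr.id = cudaLaunchAttributeID::cudaLaunchAttributeProgrammaticEvent;
unsafe {
    attr.val.programmaticEvent.event = event;
    attr.val.programmaticEvent.flags = 0;
    attr.val.programmaticEvent.triggerAtBlockStart = 1; // also fire at each block's start
}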
impl cudaLaunchAttributeID
pub const cudaLaunchAttributePriority: cudaLaunchAttributeID
Valid for streams, graph nodes, launches. See ::cudaLaunchAttributeValue::priority.
impl cudaLaunchAttributeID
pub const cudaLaunchAttributeMemSyncDomainMap: cudaLaunchAttributeID
Valid for streams, graph nodes, launches. See ::cudaLaunchAttributeValue::memSyncDomainMap.
impl cudaLaunchAttributeID
pub const cudaLaunchAttributeMemSyncDomain: cudaLaunchAttributeID
Valid for streams, graph nodes, launches. See ::cudaLaunchAttributeValue::memSyncDomain.
impl cudaLaunchAttributeID
pub const cudaLaunchAttributeLaunchCompletionEvent: cudaLaunchAttributeID
Valid for launches. Set ::cudaLaunchAttributeValue::launchCompletionEvent to record the event.
Nominally, the event is triggered once all blocks of the kernel have begun execution. Currently this is a best effort. If a kernel B has a launch completion dependency on a kernel A, B may wait until A is complete. Alternatively, blocks of B may begin before all blocks of A have begun, for example if B can claim execution resources unavailable to A (e.g. they run on different GPUs) or if B has a higher priority than A. Exercise caution if such an ordering inversion could lead to deadlock.
A launch completion event is nominally similar to a programmatic event with triggerAtBlockStart set, except that it is not visible to cudaGridDependencySynchronize() and can be used with compute capability less than 9.0.
The event supplied must not be an interprocess or interop event. The event must disable timing (i.e. it must be created with the ::cudaEventDisableTiming flag set).
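A hedged sketch, analogous to the programmatic-event example above (union member names assumed from the CUDA runtime headers):

// Sketch: record `event` (created with cudaEventDisableTiming, as required above)
// once all blocks of this launch have nominally begun execution.
let mut attr: cudaLaunchAttribute = unsafe { std::mem::zeroed() };
attr.id = cudaLaunchAttributeID::cudaLaunchAttributeLaunchCompletionEvent;
unsafe {
    attr.val.launchCompletionEvent.event = event;
    attr.val.launchCompletionEvent.flags = 0;
}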
impl cudaLaunchAttributeID
pub const cudaLaunchAttributeDeviceUpdatableKernelNode: cudaLaunchAttributeID
Valid for graph nodes, launches. This attribute is graphs-only, and passing it to a launch in a non-capturing stream will result in an error.
::cudaLaunchAttributeValue::deviceUpdatableKernelNode::deviceUpdatable can only be set to 0 or 1. Setting the field to 1 indicates that the corresponding kernel node should be device-updatable. On success, a handle will be returned via ::cudaLaunchAttributeValue::deviceUpdatableKernelNode::devNode, which can be passed to the various device-side update functions to update the node’s kernel parameters from within another kernel. For more information on the types of device updates that can be made, as well as the relevant limitations thereof, see ::cudaGraphKernelNodeUpdatesApply.
Nodes which are device-updatable have additional restrictions compared to regular kernel nodes. Firstly, device-updatable nodes cannot be removed from their graph via ::cudaGraphDestroyNode. Additionally, once opted in to this functionality, a node cannot opt out, and any attempt to set the deviceUpdatable attribute to 0 will result in an error. Device-updatable kernel nodes also cannot have their attributes copied to/from another kernel node via ::cudaGraphKernelNodeCopyAttributes. Graphs containing one or more device-updatable nodes also do not allow multiple instantiation, and neither the graph nor its instantiated version can be passed to ::cudaGraphExecUpdate.
If a graph contains device-updatable nodes and updates those nodes from the device from within the graph, the graph must be uploaded with ::cuGraphUpload before it is launched. For such a graph, if host-side executable graph updates are made to the device-updatable nodes, the graph must be uploaded before it is launched again.
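A hedged sketch of opting a captured kernel node into device-side updates (union member names and the handle read-back are assumed from the CUDA runtime headers and the description above; error handling omitted):

// Sketch: mark the node as device-updatable. Per the description above, this is
// only valid for graph nodes or for launches captured into a graph.
let mut attr: cudaLaunchAttribute = unsafe { std::mem::zeroed() };
attr.id = cudaLaunchAttributeID::cudaLaunchAttributeDeviceUpdatableKernelNode;
unsafe { attr.val.deviceUpdatableKernelNode.deviceUpdatable = 1 };
// On success, the handle is returned through the same union member; it can be
// read back afterwards and passed to the device-side update functions:
// let dev_node = unsafe { attr.val.deviceUpdatableKernelNode.devNode };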
impl cudaLaunchAttributeID
pub const cudaLaunchAttributeSharedMemCarveout: cudaLaunchAttributeID
Valid for launches. On devices where the L1 cache and shared memory use the same hardware resources, setting ::cudaLaunchAttributeValue::sharedMemCarveout to a percentage between 0 and 100 sets the shared memory carveout preference, in percent of the total shared memory, for that kernel launch. This attribute takes precedence over ::cudaFuncAttributePreferredSharedMemoryCarveout. This is only a hint, and the driver can choose a different configuration if required for the launch.
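A hedged sketch of the carveout hint (the union member name is assumed from the CUDA runtime headers):

// Sketch: hint that roughly half of the combined L1/shared-memory resource should
// be configured as shared memory for this launch (a preference, not a guarantee).
let mut attr: cudaLaunchAttribute = unsafe { std::mem::zeroed() };
attr.id = cudaLaunchAttributeID::cudaLaunchAttributeSharedMemCarveout;
unsafe { attr.val.sharedMemCarveout = 50 }; // percentage between 0 and 100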
Trait Implementations
impl Clone for cudaLaunchAttributeID
fn clone(&self) -> cudaLaunchAttributeID
fn clone_from(&mut self, source: &Self)
Performs copy-assignment from source.