Struct cudaError_t

#[repr(transparent)]
pub struct cudaError_t(pub c_uint);

CUDA error types

Tuple Fields§

§0: c_uint
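
Because cudaError_t is a #[repr(transparent)] newtype over c_uint, the raw status code that crosses the FFI boundary is directly available through field 0, and a raw code can be wrapped back into the typed value. A minimal sketch, assuming cudaError_t is in scope from this crate (the helper names raw_code and from_raw are illustrative, not part of the bindings):

use std::os::raw::c_uint;

// Read the raw numeric code out of a status value.
fn raw_code(status: cudaError_t) -> c_uint {
    status.0
}

// Wrap a raw code received over FFI back into the typed wrapper.
fn from_raw(code: c_uint) -> cudaError_t {
    cudaError_t(code)
}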

Implementations§

Source§

impl cudaError

Source

pub const cudaSuccess: cudaError

The API call returned with no errors. In the case of query calls, this also means that the operation being queried is complete (see ::cudaEventQuery() and ::cudaStreamQuery()).
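
Because every call in these bindings reports its status through this type, a common pattern is to convert the returned value into a Result, treating cudaSuccess as Ok. A minimal sketch (into_result is a hypothetical helper, not part of the bindings; it relies only on the PartialEq impl listed below):

// Convert a CUDA status into a Result, preserving the original
// error value for the caller to inspect or log.
fn into_result(status: cudaError) -> Result<(), cudaError> {
    if status == cudaError::cudaSuccess {
        Ok(())
    } else {
        Err(status)
    }
}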

Source§

impl cudaError

Source

pub const cudaErrorInvalidValue: cudaError

This indicates that one or more of the parameters passed to the API call is not within an acceptable range of values.

Source§

impl cudaError

Source

pub const cudaErrorMemoryAllocation: cudaError

The API call failed because it was unable to allocate enough memory or other resources to perform the requested operation.

Source§

impl cudaError

Source

pub const cudaErrorInitializationError: cudaError

The API call failed because the CUDA driver and runtime could not be initialized.

Source§

impl cudaError

Source

pub const cudaErrorCudartUnloading: cudaError

This indicates that a CUDA Runtime API call cannot be executed because it is being called during process shutdown, at a point in time after the CUDA driver has been unloaded.

Source§

impl cudaError

Source

pub const cudaErrorProfilerDisabled: cudaError

This indicates that the profiler is not initialized for this run. This can happen when the application is running with external profiling tools like the Visual Profiler.

Source§

impl cudaError

Source

pub const cudaErrorProfilerNotInitialized: cudaError

Deprecated: This error return is deprecated as of CUDA 5.0. It is no longer an error to attempt to enable/disable profiling via ::cudaProfilerStart or ::cudaProfilerStop without initialization.

Source§

impl cudaError

Source

pub const cudaErrorProfilerAlreadyStarted: cudaError

Deprecated: This error return is deprecated as of CUDA 5.0. It is no longer an error to call cudaProfilerStart() when profiling is already enabled.

Source§

impl cudaError

Source

pub const cudaErrorProfilerAlreadyStopped: cudaError

Deprecated: This error return is deprecated as of CUDA 5.0. It is no longer an error to call cudaProfilerStop() when profiling is already disabled.

Source§

impl cudaError

Source

pub const cudaErrorInvalidConfiguration: cudaError

This indicates that a kernel launch is requesting resources that can never be satisfied by the current device. Requesting more shared memory per block than the device supports will trigger this error, as will requesting too many threads or blocks. See ::cudaDeviceProp for more device limitations.
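
One way to avoid this error is to check a requested launch configuration against the limits reported in ::cudaDeviceProp before launching. A rough sketch in plain Rust, with the device limits passed in as ordinary integers rather than read from the real cudaDeviceProp struct (the helper is hypothetical):

// Returns true if a 1-D launch configuration stays within the limits
// supplied by the caller, e.g. values copied from
// cudaDeviceProp::maxThreadsPerBlock and sharedMemPerBlock.
fn launch_config_fits(
    threads_per_block: u32,
    shared_mem_bytes: usize,
    max_threads_per_block: u32,
    max_shared_mem_per_block: usize,
) -> bool {
    threads_per_block > 0
        && threads_per_block <= max_threads_per_block
        && shared_mem_bytes <= max_shared_mem_per_block
}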

Source§

impl cudaError

Source

pub const cudaErrorInvalidPitchValue: cudaError

This indicates that one or more of the pitch-related parameters passed to the API call is not within the acceptable range for pitch.

Source§

impl cudaError

Source

pub const cudaErrorInvalidSymbol: cudaError

This indicates that the symbol name/identifier passed to the API call is not a valid name or identifier.

Source§

impl cudaError

Source

pub const cudaErrorInvalidHostPointer: cudaError

This indicates that at least one host pointer passed to the API call is not a valid host pointer. Deprecated: This error return is deprecated as of CUDA 10.1.

Source§

impl cudaError

Source

pub const cudaErrorInvalidDevicePointer: cudaError

This indicates that at least one device pointer passed to the API call is not a valid device pointer. Deprecated: This error return is deprecated as of CUDA 10.1.

Source§

impl cudaError

Source

pub const cudaErrorInvalidTexture: cudaError

This indicates that the texture passed to the API call is not a valid texture.

Source§

impl cudaError

Source

pub const cudaErrorInvalidTextureBinding: cudaError

This indicates that the texture binding is not valid. This occurs if you call ::cudaGetTextureAlignmentOffset() with an unbound texture.

Source§

impl cudaError

Source

pub const cudaErrorInvalidChannelDescriptor: cudaError

This indicates that the channel descriptor passed to the API call is not valid. This occurs if the format is not one of the formats specified by ::cudaChannelFormatKind, or if one of the dimensions is invalid.

Source§

impl cudaError

Source

pub const cudaErrorInvalidMemcpyDirection: cudaError

This indicates that the direction of the memcpy passed to the API call is not one of the types specified by ::cudaMemcpyKind.

Source§

impl cudaError

Source

pub const cudaErrorAddressOfConstant: cudaError

This indicated that the user has taken the address of a constant variable, which was forbidden up until the CUDA 3.1 release. Deprecated: This error return is deprecated as of CUDA 3.1. Variables in constant memory may now have their address taken by the runtime via ::cudaGetSymbolAddress().

Source§

impl cudaError

Source

pub const cudaErrorTextureFetchFailed: cudaError

This indicated that a texture fetch was not able to be performed. This was previously used for device emulation of texture operations. Deprecated: This error return is deprecated as of CUDA 3.1. Device emulation mode was removed with the CUDA 3.1 release.

Source§

impl cudaError

Source

pub const cudaErrorTextureNotBound: cudaError

This indicated that a texture was not bound for access. This was previously used for device emulation of texture operations. Deprecated: This error return is deprecated as of CUDA 3.1. Device emulation mode was removed with the CUDA 3.1 release.

Source§

impl cudaError

Source

pub const cudaErrorSynchronizationError: cudaError

This indicated that a synchronization operation had failed. This was previously used for some device emulation functions. Deprecated: This error return is deprecated as of CUDA 3.1. Device emulation mode was removed with the CUDA 3.1 release.

Source§

impl cudaError

Source

pub const cudaErrorInvalidFilterSetting: cudaError

This indicates that a non-float texture was being accessed with linear filtering. This is not supported by CUDA.

Source§

impl cudaError

Source

pub const cudaErrorInvalidNormSetting: cudaError

This indicates that an attempt was made to read a non-float texture as a normalized float. This is not supported by CUDA.

Source§

impl cudaError

Source

pub const cudaErrorMixedDeviceExecution: cudaError

Mixing of device and device emulation code was not allowed. Deprecated: This error return is deprecated as of CUDA 3.1. Device emulation mode was removed with the CUDA 3.1 release.

Source§

impl cudaError

Source

pub const cudaErrorNotYetImplemented: cudaError

This indicates that the API call is not yet implemented. Production releases of CUDA will never return this error. Deprecated: This error return is deprecated as of CUDA 4.1.

Source§

impl cudaError

Source

pub const cudaErrorMemoryValueTooLarge: cudaError

This indicated that an emulated device pointer exceeded the 32-bit address range. Deprecated: This error return is deprecated as of CUDA 3.1. Device emulation mode was removed with the CUDA 3.1 release.

Source§

impl cudaError

Source

pub const cudaErrorStubLibrary: cudaError

This indicates that the CUDA driver that the application has loaded is a stub library. Applications running against the stub rather than a real driver will receive this error from CUDA API calls.

Source§

impl cudaError

Source

pub const cudaErrorInsufficientDriver: cudaError

This indicates that the installed NVIDIA CUDA driver is older than the CUDA runtime library. This is not a supported configuration. Users should install an updated NVIDIA display driver to allow the application to run.

Source§

impl cudaError

Source

pub const cudaErrorCallRequiresNewerDriver: cudaError

This indicates that the API call requires a newer CUDA driver than the one currently installed. Users should install an updated NVIDIA CUDA driver to allow the API call to succeed.

Source§

impl cudaError

Source

pub const cudaErrorInvalidSurface: cudaError

This indicates that the surface passed to the API call is not a valid surface.

Source§

impl cudaError

Source

pub const cudaErrorDuplicateVariableName: cudaError

This indicates that multiple global or constant variables (across separate CUDA source files in the application) share the same string name.

Source§

impl cudaError

Source

pub const cudaErrorDuplicateTextureName: cudaError

This indicates that multiple textures (across separate CUDA source files in the application) share the same string name.

Source§

impl cudaError

Source

pub const cudaErrorDuplicateSurfaceName: cudaError

This indicates that multiple surfaces (across separate CUDA source files in the application) share the same string name.

Source§

impl cudaError

Source

pub const cudaErrorDevicesUnavailable: cudaError

This indicates that all CUDA devices are busy or unavailable at the current time. Devices are often busy/unavailable due to use of ::cudaComputeModeProhibited, ::cudaComputeModeExclusiveProcess, or when long running CUDA kernels have filled up the GPU and are blocking new work from starting. They can also be unavailable due to memory constraints on a device that already has active CUDA work being performed.

Source§

impl cudaError

Source

pub const cudaErrorIncompatibleDriverContext: cudaError

This indicates that the current context is not compatible with the CUDA Runtime. This can only occur if you are using CUDA Runtime/Driver interoperability and have created an existing Driver context using the driver API. The Driver context may be incompatible either because the Driver context was created using an older version of the API, because the Runtime API call expects a primary driver context and the Driver context is not primary, or because the Driver context has been destroyed. Please see "Interactions with the CUDA Driver API" for more information.

Source§

impl cudaError

Source

pub const cudaErrorMissingConfiguration: cudaError

The device function being invoked (usually via ::cudaLaunchKernel()) was not previously configured via the ::cudaConfigureCall() function.

Source§

impl cudaError

Source

pub const cudaErrorPriorLaunchFailure: cudaError

This indicated that a previous kernel launch failed. This was previously used for device emulation of kernel launches. Deprecated: This error return is deprecated as of CUDA 3.1. Device emulation mode was removed with the CUDA 3.1 release.

Source§

impl cudaError

Source

pub const cudaErrorLaunchMaxDepthExceeded: cudaError

This error indicates that a device runtime grid launch did not occur because the depth of the child grid would exceed the maximum supported number of nested grid launches.

Source§

impl cudaError

Source

pub const cudaErrorLaunchFileScopedTex: cudaError

This error indicates that a grid launch did not occur because the kernel uses file-scoped textures which are unsupported by the device runtime. Kernels launched via the device runtime only support textures created with the Texture Object APIs.

Source§

impl cudaError

Source

pub const cudaErrorLaunchFileScopedSurf: cudaError

This error indicates that a grid launch did not occur because the kernel uses file-scoped surfaces which are unsupported by the device runtime. Kernels launched via the device runtime only support surfaces created with the Surface Object APIs.

Source§

impl cudaError

Source

pub const cudaErrorSyncDepthExceeded: cudaError

This error indicates that a call to ::cudaDeviceSynchronize made from the device runtime failed because the call was made at grid depth greater than either the default (2 levels of grids) or the user-specified device limit ::cudaLimitDevRuntimeSyncDepth. To be able to synchronize on launched grids at a greater depth successfully, the maximum nested depth at which ::cudaDeviceSynchronize will be called must be specified with the ::cudaLimitDevRuntimeSyncDepth limit to the ::cudaDeviceSetLimit API before the host-side launch of a kernel using the device runtime. Keep in mind that additional levels of sync depth require the runtime to reserve large amounts of device memory that cannot be used for user allocations. Note that ::cudaDeviceSynchronize made from the device runtime is only supported on devices of compute capability < 9.0.
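
The remedy described above is to raise ::cudaLimitDevRuntimeSyncDepth with ::cudaDeviceSetLimit before the host-side launch. A hedged sketch, assuming the crate also exposes cudaDeviceSetLimit and a cudaLimit type in the same bindgen style as this one (the exact names and signatures may differ in the actual bindings):

// Allow device-side cudaDeviceSynchronize() calls up to `levels`
// grid levels deep before launching a kernel that uses the
// device runtime.
fn allow_sync_depth(levels: usize) -> Result<(), cudaError> {
    // SAFETY: the call takes no pointers; it only records the
    // requested limit with the CUDA runtime.
    let status = unsafe { cudaDeviceSetLimit(cudaLimit::cudaLimitDevRuntimeSyncDepth, levels) };
    if status == cudaError::cudaSuccess {
        Ok(())
    } else {
        Err(status)
    }
}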

Source§

impl cudaError

Source

pub const cudaErrorLaunchPendingCountExceeded: cudaError

This error indicates that a device runtime grid launch failed because the launch would exceed the limit ::cudaLimitDevRuntimePendingLaunchCount. For this launch to proceed successfully, ::cudaDeviceSetLimit must be called to set the ::cudaLimitDevRuntimePendingLaunchCount to be higher than the upper bound of outstanding launches that can be issued to the device runtime. Keep in mind that raising the limit of pending device runtime launches will require the runtime to reserve device memory that cannot be used for user allocations.

Source§

impl cudaError

Source

pub const cudaErrorInvalidDeviceFunction: cudaError

The requested device function does not exist or is not compiled for the proper device architecture.

Source§

impl cudaError

Source

pub const cudaErrorNoDevice: cudaError

This indicates that no CUDA-capable devices were detected by the installed CUDA driver.

Source§

impl cudaError

Source

pub const cudaErrorInvalidDevice: cudaError

This indicates that the device ordinal supplied by the user does not correspond to a valid CUDA device or that the action requested is invalid for the specified device.

Source§

impl cudaError

Source

pub const cudaErrorDeviceNotLicensed: cudaError

This indicates that the device doesn’t have a valid Grid License.

Source§

impl cudaError

Source

pub const cudaErrorSoftwareValidityNotEstablished: cudaError

By default, the CUDA runtime may perform a minimal set of self-tests, as well as CUDA driver tests, to establish the validity of both. Introduced in CUDA 11.2, this error return indicates that at least one of these tests has failed and the validity of either the runtime or the driver could not be established.

Source§

impl cudaError

Source

pub const cudaErrorStartupFailure: cudaError

This indicates an internal startup failure in the CUDA runtime.

Source§

impl cudaError

Source

pub const cudaErrorInvalidKernelImage: cudaError

This indicates that the device kernel image is invalid.

Source§

impl cudaError

Source

pub const cudaErrorDeviceUninitialized: cudaError

This most frequently indicates that there is no context bound to the current thread. This can also be returned if the context passed to an API call is not a valid handle (such as a context that has had ::cuCtxDestroy() invoked on it). This can also be returned if a user mixes different API versions (i.e. 3010 context with 3020 API calls). See ::cuCtxGetApiVersion() for more details.

Source§

impl cudaError

Source

pub const cudaErrorMapBufferObjectFailed: cudaError

This indicates that the buffer object could not be mapped.

Source§

impl cudaError

Source

pub const cudaErrorUnmapBufferObjectFailed: cudaError

This indicates that the buffer object could not be unmapped.

Source§

impl cudaError

Source

pub const cudaErrorArrayIsMapped: cudaError

This indicates that the specified array is currently mapped and thus cannot be destroyed.

Source§

impl cudaError

Source

pub const cudaErrorAlreadyMapped: cudaError

This indicates that the resource is already mapped.

Source§

impl cudaError

Source

pub const cudaErrorNoKernelImageForDevice: cudaError

This indicates that there is no kernel image available that is suitable for the device. This can occur when a user specifies code generation options for a particular CUDA source file that do not include the corresponding device configuration.

Source§

impl cudaError

Source

pub const cudaErrorAlreadyAcquired: cudaError

This indicates that a resource has already been acquired.

Source§

impl cudaError

Source

pub const cudaErrorNotMapped: cudaError

This indicates that a resource is not mapped.

Source§

impl cudaError

Source

pub const cudaErrorNotMappedAsArray: cudaError

This indicates that a mapped resource is not available for access as an array.

Source§

impl cudaError

Source

pub const cudaErrorNotMappedAsPointer: cudaError

This indicates that a mapped resource is not available for access as a pointer.

Source§

impl cudaError

Source

pub const cudaErrorECCUncorrectable: cudaError

This indicates that an uncorrectable ECC error was detected during execution.

Source§

impl cudaError

Source

pub const cudaErrorUnsupportedLimit: cudaError

This indicates that the ::cudaLimit passed to the API call is not supported by the active device.

Source§

impl cudaError

Source

pub const cudaErrorDeviceAlreadyInUse: cudaError

This indicates that a call tried to access an exclusive-thread device that is already in use by a different thread.

Source§

impl cudaError

Source

pub const cudaErrorPeerAccessUnsupported: cudaError

This error indicates that P2P access is not supported across the given devices.

Source§

impl cudaError

Source

pub const cudaErrorInvalidPtx: cudaError

A PTX compilation failed. The runtime may fall back to compiling PTX if an application does not contain a suitable binary for the current device.

Source§

impl cudaError

Source

pub const cudaErrorInvalidGraphicsContext: cudaError

This indicates an error with the OpenGL or DirectX context.

Source§

impl cudaError

Source

pub const cudaErrorNvlinkUncorrectable: cudaError

This indicates that an uncorrectable NVLink error was detected during the execution.

Source§

impl cudaError

Source

pub const cudaErrorJitCompilerNotFound: cudaError

This indicates that the PTX JIT compiler library was not found. The JIT Compiler library is used for PTX compilation. The runtime may fall back to compiling PTX if an application does not contain a suitable binary for the current device.

Source§

impl cudaError

Source

pub const cudaErrorUnsupportedPtxVersion: cudaError

This indicates that the provided PTX was compiled with an unsupported toolchain. The most common reason for this, is the PTX was generated by a compiler newer than what is supported by the CUDA driver and PTX JIT compiler.

Source§

impl cudaError

Source

pub const cudaErrorJitCompilationDisabled: cudaError

This indicates that the JIT compilation was disabled. The JIT compilation compiles PTX. The runtime may fall back to compiling PTX if an application does not contain a suitable binary for the current device.

Source§

impl cudaError

Source

pub const cudaErrorUnsupportedExecAffinity: cudaError

This indicates that the provided execution affinity is not supported by the device.

Source§

impl cudaError

Source

pub const cudaErrorUnsupportedDevSideSync: cudaError

This indicates that the code to be compiled by the PTX JIT contains an unsupported call to cudaDeviceSynchronize.

Source§

impl cudaError

Source

pub const cudaErrorInvalidSource: cudaError

This indicates that the device kernel source is invalid.

Source§

impl cudaError

Source

pub const cudaErrorFileNotFound: cudaError

This indicates that the file specified was not found.

Source§

impl cudaError

Source

pub const cudaErrorSharedObjectSymbolNotFound: cudaError

This indicates that a link to a shared object failed to resolve.

Source§

impl cudaError

Source

pub const cudaErrorSharedObjectInitFailed: cudaError

This indicates that initialization of a shared object failed.

Source§

impl cudaError

Source

pub const cudaErrorOperatingSystem: cudaError

This error indicates that an OS call failed.

Source§

impl cudaError

Source

pub const cudaErrorInvalidResourceHandle: cudaError

This indicates that a resource handle passed to the API call was not valid. Resource handles are opaque types like ::cudaStream_t and ::cudaEvent_t.

Source§

impl cudaError

Source

pub const cudaErrorIllegalState: cudaError

This indicates that a resource required by the API call is not in a valid state to perform the requested operation.

Source§

impl cudaError

Source

pub const cudaErrorLossyQuery: cudaError

This indicates an attempt was made to introspect an object in a way that would discard semantically important information. This is either due to the object using functionality newer than the API version used to introspect it or the omission of optional return arguments.

Source§

impl cudaError

Source

pub const cudaErrorSymbolNotFound: cudaError

This indicates that a named symbol was not found. Examples of symbols are global/constant variable names, driver function names, texture names, and surface names.

Source§

impl cudaError

Source

pub const cudaErrorNotReady: cudaError

This indicates that asynchronous operations issued previously have not completed yet. This result is not actually an error, but must be indicated differently than ::cudaSuccess (which indicates completion). Calls that may return this value include ::cudaEventQuery() and ::cudaStreamQuery().
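
Because cudaErrorNotReady is a status rather than a failure, polling code usually needs three outcomes: finished, still running, and genuine error. A small sketch that interprets the value returned by a query-style call such as ::cudaEventQuery() or ::cudaStreamQuery() (the Poll enum and helper are illustrative):

// Three-way outcome of a query-style CUDA call.
enum Poll {
    Done,
    Pending,
    Failed(cudaError),
}

// Classify the status returned by a query call.
fn interpret_query(status: cudaError) -> Poll {
    if status == cudaError::cudaSuccess {
        Poll::Done
    } else if status == cudaError::cudaErrorNotReady {
        Poll::Pending
    } else {
        Poll::Failed(status)
    }
}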

Source§

impl cudaError

Source

pub const cudaErrorIllegalAddress: cudaError

The device encountered a load or store instruction on an invalid memory address. This leaves the process in an inconsistent state and any further CUDA work will return the same error. To continue using CUDA, the process must be terminated and relaunched.
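
Errors of this kind are "sticky": once one is returned, every later CUDA call in the process reports the same error, so robust code typically separates them from recoverable errors before deciding whether to retry or shut down. A sketch of such a classification using only constants documented on this page (the list is illustrative, not exhaustive):

// Returns true for errors that leave the process in an inconsistent
// state, where the documented recovery is to terminate and relaunch.
fn is_sticky(status: cudaError) -> bool {
    matches!(
        status,
        cudaError::cudaErrorIllegalAddress
            | cudaError::cudaErrorHardwareStackError
            | cudaError::cudaErrorIllegalInstruction
            | cudaError::cudaErrorMisalignedAddress
            | cudaError::cudaErrorInvalidAddressSpace
            | cudaError::cudaErrorInvalidPc
            | cudaError::cudaErrorLaunchFailure
            | cudaError::cudaErrorLaunchTimeout
            | cudaError::cudaErrorAssert
            | cudaError::cudaErrorExternalDevice
    )
}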

Source§

impl cudaError

Source

pub const cudaErrorLaunchOutOfResources: cudaError

This indicates that a launch did not occur because it did not have appropriate resources. Although this error is similar to ::cudaErrorInvalidConfiguration, this error usually indicates that the user has attempted to pass too many arguments to the device kernel, or the kernel launch specifies too many threads for the kernel’s register count.

Source§

impl cudaError

Source

pub const cudaErrorLaunchTimeout: cudaError

This indicates that the device kernel took too long to execute. This can only occur if timeouts are enabled - see the device property ::cudaDeviceProp::kernelExecTimeoutEnabled for more information. This leaves the process in an inconsistent state and any further CUDA work will return the same error. To continue using CUDA, the process must be terminated and relaunched.

Source§

impl cudaError

Source

pub const cudaErrorLaunchIncompatibleTexturing: cudaError

This error indicates a kernel launch that uses an incompatible texturing mode.

Source§

impl cudaError

Source

pub const cudaErrorPeerAccessAlreadyEnabled: cudaError

This error indicates that a call to ::cudaDeviceEnablePeerAccess() is trying to re-enable peer addressing from a context which has already had peer addressing enabled.

Source§

impl cudaError

Source

pub const cudaErrorPeerAccessNotEnabled: cudaError

This error indicates that ::cudaDeviceDisablePeerAccess() is trying to disable peer addressing which has not been enabled yet via ::cudaDeviceEnablePeerAccess().

Source§

impl cudaError

Source

pub const cudaErrorSetOnActiveProcess: cudaError

This indicates that the user has called ::cudaSetValidDevices(), ::cudaSetDeviceFlags(), ::cudaD3D9SetDirect3DDevice(), ::cudaD3D10SetDirect3DDevice, ::cudaD3D11SetDirect3DDevice(), or ::cudaVDPAUSetVDPAUDevice() after initializing the CUDA runtime by calling non-device management operations (allocating memory and launching kernels are examples of non-device management operations). This error can also be returned if using runtime/driver interoperability and there is an existing ::CUcontext active on the host thread.

Source§

impl cudaError

Source

pub const cudaErrorContextIsDestroyed: cudaError

This error indicates that the context current to the calling thread has been destroyed using ::cuCtxDestroy, or is a primary context which has not yet been initialized.

Source§

impl cudaError

Source

pub const cudaErrorAssert: cudaError

An assert triggered in device code during kernel execution. The device cannot be used again. All existing allocations are invalid. To continue using CUDA, the process must be terminated and relaunched.

Source§

impl cudaError

Source

pub const cudaErrorTooManyPeers: cudaError

This error indicates that the hardware resources required to enable peer access have been exhausted for one or more of the devices passed to ::cudaEnablePeerAccess().

Source§

impl cudaError

Source

pub const cudaErrorHostMemoryAlreadyRegistered: cudaError

This error indicates that the memory range passed to ::cudaHostRegister() has already been registered.

Source§

impl cudaError

Source

pub const cudaErrorHostMemoryNotRegistered: cudaError

This error indicates that the pointer passed to ::cudaHostUnregister() does not correspond to any currently registered memory region.

Source§

impl cudaError

Source

pub const cudaErrorHardwareStackError: cudaError

Device encountered an error in the call stack during kernel execution, possibly due to stack corruption or exceeding the stack size limit. This leaves the process in an inconsistent state and any further CUDA work will return the same error. To continue using CUDA, the process must be terminated and relaunched.

Source§

impl cudaError

Source

pub const cudaErrorIllegalInstruction: cudaError

The device encountered an illegal instruction during kernel execution. This leaves the process in an inconsistent state and any further CUDA work will return the same error. To continue using CUDA, the process must be terminated and relaunched.

Source§

impl cudaError

Source

pub const cudaErrorMisalignedAddress: cudaError

The device encountered a load or store instruction on a memory address which is not aligned. This leaves the process in an inconsistent state and any further CUDA work will return the same error. To continue using CUDA, the process must be terminated and relaunched.

Source§

impl cudaError

Source

pub const cudaErrorInvalidAddressSpace: cudaError

While executing a kernel, the device encountered an instruction which can only operate on memory locations in certain address spaces (global, shared, or local), but was supplied a memory address not belonging to an allowed address space. This leaves the process in an inconsistent state and any further CUDA work will return the same error. To continue using CUDA, the process must be terminated and relaunched.

Source§

impl cudaError

Source

pub const cudaErrorInvalidPc: cudaError

The device encountered an invalid program counter. This leaves the process in an inconsistent state and any further CUDA work will return the same error. To continue using CUDA, the process must be terminated and relaunched.

Source§

impl cudaError

Source

pub const cudaErrorLaunchFailure: cudaError

An exception occurred on the device while executing a kernel. Common causes include dereferencing an invalid device pointer and accessing out of bounds shared memory. Less common cases can be system specific - more information about these cases can be found in the system specific user guide. This leaves the process in an inconsistent state and any further CUDA work will return the same error. To continue using CUDA, the process must be terminated and relaunched.

Source§

impl cudaError

Source

pub const cudaErrorCooperativeLaunchTooLarge: cudaError

This error indicates that the number of blocks launched per grid for a kernel that was launched via either ::cudaLaunchCooperativeKernel or ::cudaLaunchCooperativeKernelMultiDevice exceeds the maximum number of blocks as allowed by ::cudaOccupancyMaxActiveBlocksPerMultiprocessor or ::cudaOccupancyMaxActiveBlocksPerMultiprocessorWithFlags times the number of multiprocessors as specified by the device attribute ::cudaDevAttrMultiProcessorCount.
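
The limit described here is a simple product: the per-multiprocessor occupancy reported by the occupancy API times the device's multiprocessor count. A small arithmetic sketch with both inputs passed in as plain integers (in real code they would come from ::cudaOccupancyMaxActiveBlocksPerMultiprocessor and the ::cudaDevAttrMultiProcessorCount attribute):

// Upper bound on the grid size of a cooperative launch, given the
// occupancy per SM and the number of SMs on the device.
fn max_cooperative_blocks(blocks_per_sm: i32, multiprocessor_count: i32) -> i32 {
    blocks_per_sm * multiprocessor_count
}

// Example: 4 resident blocks per SM on an 80-SM device allows at most
// 4 * 80 = 320 blocks in the cooperative grid.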

Source§

impl cudaError

Source

pub const cudaErrorNotPermitted: cudaError

This error indicates the attempted operation is not permitted.

Source§

impl cudaError

Source

pub const cudaErrorNotSupported: cudaError

This error indicates the attempted operation is not supported on the current system or device.

Source§

impl cudaError

Source

pub const cudaErrorSystemNotReady: cudaError

This error indicates that the system is not yet ready to start any CUDA work. To continue using CUDA, verify the system configuration is in a valid state and all required driver daemons are actively running. More information about this error can be found in the system specific user guide.

Source§

impl cudaError

Source

pub const cudaErrorSystemDriverMismatch: cudaError

This error indicates that there is a mismatch between the versions of the display driver and the CUDA driver. Refer to the compatibility documentation for supported versions.

Source§

impl cudaError

Source

pub const cudaErrorCompatNotSupportedOnDevice: cudaError

This error indicates that the system was upgraded to run with forward compatibility but the visible hardware detected by CUDA does not support this configuration. Refer to the compatibility documentation for the supported hardware matrix or ensure that only supported hardware is visible during initialization via the CUDA_VISIBLE_DEVICES environment variable.

Source§

impl cudaError

Source

pub const cudaErrorMpsConnectionFailed: cudaError

This error indicates that the MPS client failed to connect to the MPS control daemon or the MPS server.

Source§

impl cudaError

Source

pub const cudaErrorMpsRpcFailure: cudaError

This error indicates that the remote procedural call between the MPS server and the MPS client failed.

Source§

impl cudaError

Source

pub const cudaErrorMpsServerNotReady: cudaError

This error indicates that the MPS server is not ready to accept new MPS client requests. This error can be returned when the MPS server is in the process of recovering from a fatal failure.

Source§

impl cudaError

Source

pub const cudaErrorMpsMaxClientsReached: cudaError

This error indicates that the hardware resources required to create MPS client have been exhausted.

Source§

impl cudaError

Source

pub const cudaErrorMpsMaxConnectionsReached: cudaError

This error indicates that the hardware resources required to support device connections have been exhausted.

Source§

impl cudaError

Source

pub const cudaErrorMpsClientTerminated: cudaError

This error indicates that the MPS client has been terminated by the server. To continue using CUDA, the process must be terminated and relaunched.

Source§

impl cudaError

Source

pub const cudaErrorCdpNotSupported: cudaError

This error indicates that the program is using CUDA Dynamic Parallelism, but the current configuration, like MPS, does not support it.

Source§

impl cudaError

Source

pub const cudaErrorCdpVersionMismatch: cudaError

This error indicates that the program contains an unsupported interaction between different versions of CUDA Dynamic Parallelism.

Source§

impl cudaError

Source

pub const cudaErrorStreamCaptureUnsupported: cudaError

The operation is not permitted when the stream is capturing.

Source§

impl cudaError

Source

pub const cudaErrorStreamCaptureInvalidated: cudaError

The current capture sequence on the stream has been invalidated due to a previous error.

Source§

impl cudaError

Source

pub const cudaErrorStreamCaptureMerge: cudaError

The operation would have resulted in a merge of two independent capture sequences.

Source§

impl cudaError

Source

pub const cudaErrorStreamCaptureUnmatched: cudaError

The capture was not initiated in this stream.

Source§

impl cudaError

Source

pub const cudaErrorStreamCaptureUnjoined: cudaError

The capture sequence contains a fork that was not joined to the primary stream.

Source§

impl cudaError

Source

pub const cudaErrorStreamCaptureIsolation: cudaError

A dependency would have been created which crosses the capture sequence boundary. Only implicit in-stream ordering dependencies are allowed to cross the boundary.

Source§

impl cudaError

Source

pub const cudaErrorStreamCaptureImplicit: cudaError

The operation would have resulted in a disallowed implicit dependency on a current capture sequence from cudaStreamLegacy.

Source§

impl cudaError

Source

pub const cudaErrorCapturedEvent: cudaError

The operation is not permitted on an event which was last recorded in a capturing stream.

Source§

impl cudaError

Source

pub const cudaErrorStreamCaptureWrongThread: cudaError

A stream capture sequence not initiated with the ::cudaStreamCaptureModeRelaxed argument to ::cudaStreamBeginCapture was passed to ::cudaStreamEndCapture in a different thread.

Source§

impl cudaError

Source

pub const cudaErrorTimeout: cudaError

This indicates that the wait operation has timed out.

Source§

impl cudaError

Source

pub const cudaErrorGraphExecUpdateFailure: cudaError

This error indicates that the graph update was not performed because it included changes which violated constraints specific to instantiated graph update.

Source§

impl cudaError

Source

pub const cudaErrorExternalDevice: cudaError

This indicates that an async error has occurred in a device outside of CUDA. If CUDA was waiting for an external device’s signal before consuming shared data, the external device signaled an error indicating that the data is not valid for consumption. This leaves the process in an inconsistent state and any further CUDA work will return the same error. To continue using CUDA, the process must be terminated and relaunched.

Source§

impl cudaError

Source

pub const cudaErrorInvalidClusterSize: cudaError

This indicates that a kernel launch error has occurred due to cluster misconfiguration.

Source§

impl cudaError

Source

pub const cudaErrorFunctionNotLoaded: cudaError

Indicates that a function handle is not loaded when calling an API that requires a loaded function.

Source§

impl cudaError

Source

pub const cudaErrorInvalidResourceType: cudaError

This error indicates one or more resources passed in are not valid resource types for the operation.

Source§

impl cudaError

Source

pub const cudaErrorInvalidResourceConfiguration: cudaError

This error indicates one or more resources are insufficient or non-applicable for the operation.

Source§

impl cudaError

Source

pub const cudaErrorUnknown: cudaError

This indicates that an unknown internal error has occurred.

Source§

impl cudaError

Source

pub const cudaErrorApiFailureBase: cudaError

Any unhandled CUDA driver error is added to this value and returned via the runtime. Production releases of CUDA should not return such errors. Deprecated: This error return is deprecated as of CUDA 4.1.
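
Since an unhandled driver error is reported as this base value plus the original driver error code, the driver code can be recovered by subtracting the base. A sketch using only the constant above (modern CUDA releases should not produce such values at all):

use std::os::raw::c_uint;

// If a status is at or above cudaErrorApiFailureBase, return the
// underlying CUDA driver error code it encodes.
fn driver_error_code(status: cudaError) -> Option<c_uint> {
    let base = cudaError::cudaErrorApiFailureBase.0;
    if status.0 >= base {
        Some(status.0 - base)
    } else {
        None
    }
}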

Trait Implementations§

Source§

impl Clone for cudaError

Source§

fn clone(&self) -> cudaError

Returns a duplicate of the value. Read more
1.0.0 · Source§

fn clone_from(&mut self, source: &Self)

Performs copy-assignment from source. Read more
Source§

impl Debug for cudaError

Source§

fn fmt(&self, f: &mut Formatter<'_>) -> Result

Formats the value using the given formatter. Read more
Source§

impl Hash for cudaError

Source§

fn hash<__H: Hasher>(&self, state: &mut __H)

Feeds this value into the given Hasher. Read more
1.3.0 · Source§

fn hash_slice<H>(data: &[Self], state: &mut H)
where H: Hasher, Self: Sized,

Feeds a slice of this type into the given Hasher. Read more
Source§

impl PartialEq for cudaError

Source§

fn eq(&self, other: &cudaError) -> bool

Tests for self and other values to be equal, and is used by ==.
1.0.0 · Source§

fn ne(&self, other: &Rhs) -> bool

Tests for !=. The default implementation is almost always sufficient, and should not be overridden without very good reason.
Source§

impl Copy for cudaError

Source§

impl Eq for cudaError

Source§

impl StructuralPartialEq for cudaError
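
Taken together, the Copy, PartialEq, Eq, and Hash implementations mean error codes can be compared with == and used as keys in hashed collections, for example to count how often each status was observed. A small sketch:

use std::collections::HashMap;

// Tally how many times each distinct error code appears.
fn tally(statuses: &[cudaError]) -> HashMap<cudaError, usize> {
    let mut counts = HashMap::new();
    for &status in statuses {
        *counts.entry(status).or_insert(0) += 1;
    }
    counts
}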

Auto Trait Implementations§

Blanket Implementations§

Source§

impl<T> Any for T
where T: 'static + ?Sized,

Source§

fn type_id(&self) -> TypeId

Gets the TypeId of self. Read more
Source§

impl<T> Borrow<T> for T
where T: ?Sized,

Source§

fn borrow(&self) -> &T

Immutably borrows from an owned value. Read more
Source§

impl<T> BorrowMut<T> for T
where T: ?Sized,

Source§

fn borrow_mut(&mut self) -> &mut T

Mutably borrows from an owned value. Read more
Source§

impl<T> CloneToUninit for T
where T: Clone,

Source§

unsafe fn clone_to_uninit(&self, dest: *mut u8)

🔬This is a nightly-only experimental API. (clone_to_uninit)
Performs copy-assignment from self to dest. Read more
Source§

impl<T> From<T> for T

Source§

fn from(t: T) -> T

Returns the argument unchanged.

Source§

impl<T, U> Into<U> for T
where U: From<T>,

Source§

fn into(self) -> U

Calls U::from(self).

That is, this conversion is whatever the implementation of From<T> for U chooses to do.

Source§

impl<T> ToOwned for T
where T: Clone,

Source§

type Owned = T

The resulting type after obtaining ownership.
Source§

fn to_owned(&self) -> T

Creates owned data from borrowed data, usually by cloning. Read more
Source§

fn clone_into(&self, target: &mut T)

Uses borrowed data to replace owned data, usually by cloning. Read more
Source§

impl<T, U> TryFrom<U> for T
where U: Into<T>,

Source§

type Error = Infallible

The type returned in the event of a conversion error.
Source§

fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::Error>

Performs the conversion.
Source§

impl<T, U> TryInto<U> for T
where U: TryFrom<T>,

Source§

type Error = <U as TryFrom<T>>::Error

The type returned in the event of a conversion error.
Source§

fn try_into(self) -> Result<U, <U as TryFrom<T>>::Error>

Performs the conversion.