# Internal APIs
Internally, Monarch is implemented on top of hyperactor, a Rust actor library.
[This book](books/hyperactor-book/src/introduction) provides more details about its design.
This page provides access to the Rust API documentation for Monarch.
The Monarch project consists of several Rust crates, each with specialized functionality:
## Core Framework
- **hyperactor** - Core actor framework for distributed computing
- **hyperactor_macros** - Procedural macros for the hyperactor framework
- **hyperactor_mesh** - Mesh networking for hyperactor clusters
- **hyperactor_mesh_macros** - Macros for hyperactor mesh functionality
- **hyperactor_config** - Configuration framework for hyperactor
- **hyperactor_telemetry** - Telemetry and monitoring for hyperactor
## CUDA and GPU Computing
- **nccl-sys** - NCCL (NVIDIA Collective Communications Library) bindings
- **torch-sys2** - Simplified Rust bindings to the PyTorch Python API
- **torch-sys-cuda** - CUDA-specific PyTorch FFI bindings
- **monarch_tensor_worker** - High-performance tensor processing worker
## RDMA and High-Performance Networking
- **monarch_rdma** - Remote Direct Memory Access (RDMA) support for high-speed networking
- **rdmaxcel-sys** - Low-level RDMA acceleration bindings
## Monarch Python Integration
- **monarch_hyperactor** - Python bindings bridging hyperactor to Monarch's Python API
- **monarch_extension** - Python extension module for Monarch functionality
- **monarch_messages** - Message types for Monarch actor communication
## System and Utilities
- **hyper** - Mesh admin CLI and HTTP utilities
- **ndslice** - N-dimensional array slicing and manipulation
- **typeuri** - Type URI system for message serialization
- **wirevalue** - Wire-level value serialization for actor messages
- **serde_multipart** - Zero-copy multipart serialization
## Architecture Overview
The Rust implementation provides a comprehensive framework for distributed computing with GPU acceleration:
- **Actor Model**: Built on the hyperactor framework for concurrent, distributed processing
- **GPU Integration**: Native CUDA support for high-performance computing workloads
- **Mesh Networking**: Efficient communication between distributed nodes
- **Tensor Operations**: Optimized tensor processing with PyTorch integration
- **Multi-dimensional Arrays**: Advanced slicing and manipulation of n-dimensional data
For complete technical details, API references, and usage examples, explore the individual crate documentation above.