
Firefly CSB1-N10 Series AI Clusters Offer Scalable 60 to 1000 TOPS for Data Centers and Edge AI Deployments

The Firefly CSB1-N10 Series AI Clusters are designed for high-performance applications such as natural language processing, robotics, and image generation. These 1U rack-mounted servers are ideal for deployment in data centers, private servers, and edge environments. They come in multiple configurations, featuring processors like Rockchip RK3588, RK3576, SOPHON BM1688, or NVIDIA Jetson Orin modules, offering AI computing power ranging from 60 to 1000 TOPS.
Each server has 10 compute nodes and a control node, with memory options up to 32GB and storage up to 256GB per node. They support a variety of deep learning frameworks, including TensorFlow, PyTorch, and ONNX, and are equipped with advanced features like real-time management through a BMC system, Docker container support for large model deployment, and 10G Ethernet for fast data transfer. The servers are built for large-scale AI workloads, including language models like LLaMa3 and vision models like Stable Diffusion, making them well-suited for intensive AI and edge computing tasks.
Firefly CSB1-N10 Series AI Clusters Specifications
- Form Factor: 1U rack-mounted
- Processor Options:
- CSB1-N10S1688: Octa-core BM1688 (1.6 GHz)
- CSB1-N10R3588: Octa-core RK3588 (2.4 GHz)
- CSB1-N10R3576: Octa-core RK3576 (2.2 GHz)
- CSB1-N10NOrinNano: Hexa-core Jetson Orin Nano (1.5 GHz)
- CSB1-N10NOrinNX: Octa-core Jetson Orin NX (2.0 GHz)
- AI Computing Power (INT8):
- CSB1-N10S1688: 160 TOPS
- CSB1-N10R3588: 60 TOPS
- CSB1-N10R3576: 60 TOPS
- CSB1-N10NOrinNano: 400 TOPS
- CSB1-N10NOrinNX: 1000 TOPS
- Number of Nodes: 10 compute nodes + 1 control node
- Control Node Processor: RK3588, 2.4 GHz
- RAM per Node:
- CSB1-N10S1688: 8GB LPDDR4 (up to 16GB)
- CSB1-N10R3588: 16GB LPDDR4 (up to 32GB)
- CSB1-N10R3576: 8GB LPDDR4 (up to 16GB)
- CSB1-N10NOrinNano: 8GB LPDDR5
- CSB1-N10NOrinNX: 16GB LPDDR5
- Storage per Node:
- CSB1-N10S1688: 32GB eMMC (up to 256GB)
- CSB1-N10R3588: 256GB eMMC
- CSB1-N10R3576: 64GB eMMC (up to 256GB)
- CSB1-N10NOrinNano/OrinNX: 256GB PCIe NVMe SSD
- Video Encoding/Decoding:
- CSB1-N10S1688: 160 channels of H.265/H.264 1080p@30fps decoding, 100 channels encoding
- CSB1-N10R3588: 10 channels of H.265 8K@30fps decoding, 10 channels encoding
- Storage Expansion: SATA 3.0 slot for HDD/SSD
- Networking:
- CSB1-N10S1688: 2 × 10G Ethernet (SFP+), 2 × Gigabit Ethernet (RJ45)
- CSB1-N10R3588/R3576: 2 × Gigabit Ethernet (RJ45), 1 × Gigabit Ethernet (RJ45, MGMT for BMC management)
- Console Port: RJ45 console port
- Display: VGA up to 1080p60
- USB Ports: 2 × USB 3.0
- Buttons: Reset, UID, power button
- Miscellaneous:
- 1 × RS232 (DB9, baud rate 115200)
- 1 × RS485 (DB9, baud rate 115200)
- Cooling: 6 high-speed fans
- Power Supply: 550W (non-hot-swappable)
- Operating Temperature: 0°C to 45°C
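The cluster-level INT8 figures follow from multiplying each module's per-node NPU rating by the chassis's 10 compute nodes. A quick sanity check in Python; the per-node TOPS values below are taken from the respective SoC/module datasheets and are assumptions for this sketch, not figures from the Firefly spec sheet:

```python
# Per-node INT8 NPU ratings (TOPS) for each module option.
# Per-node figures are assumed from the SoC/module datasheets,
# not from the Firefly spec sheet itself.
PER_NODE_TOPS = {
    "CSB1-N10S1688": 16,      # SOPHON BM1688
    "CSB1-N10R3588": 6,       # Rockchip RK3588
    "CSB1-N10R3576": 6,       # Rockchip RK3576
    "CSB1-N10NOrinNano": 40,  # NVIDIA Jetson Orin Nano
    "CSB1-N10NOrinNX": 100,   # NVIDIA Jetson Orin NX
}

COMPUTE_NODES = 10  # each 1U chassis carries 10 compute nodes

def cluster_tops(model: str) -> int:
    """Total INT8 TOPS for one cluster: per-node rating x node count."""
    return PER_NODE_TOPS[model] * COMPUTE_NODES

for model in PER_NODE_TOPS:
    print(f"{model}: {cluster_tops(model)} TOPS")
```

The totals span 60 TOPS (RK3588/RK3576 clusters) up to 1000 TOPS (Orin NX cluster), matching the range in the headline.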
The CSB1-N10 Series AI Clusters feature a BMC system with Redfish, VNC, NTP, and real-time monitoring for management. They support deep learning frameworks such as TensorFlow, PyTorch, PaddlePaddle, ONNX, and Caffe, with cuDNN acceleration on the NVIDIA Jetson-based models. Docker containers are also supported for deploying large models like LLaMa3, Phi-3 Mini, EfficientVIT, and Stable Diffusion. For more details, refer to the documentation for the respective CPU modules.
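Since the BMC exposes a Redfish service, cluster health can be polled over HTTP against the standard Redfish v1 resource tree. A minimal sketch using only the Python standard library; the BMC address is a placeholder, and the exact resource layout and authentication depend on the unit's BMC firmware:

```python
# Hedged sketch: polling a BMC's Redfish service for system status.
# The BMC address below is a placeholder, not a Firefly default.
import json
import urllib.request

BMC = "https://192.168.1.100"  # placeholder BMC address

def redfish_endpoint(resource: str) -> str:
    """Build a Redfish v1 URL for a given resource path."""
    return f"{BMC}/redfish/v1/{resource.strip('/')}"

def fetch_resource(resource: str) -> dict:
    """GET a Redfish resource and return the parsed JSON body."""
    req = urllib.request.Request(
        redfish_endpoint(resource),
        headers={"Accept": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

if __name__ == "__main__":
    # Enumerate the systems the BMC exposes (layout is firmware-dependent).
    systems = fetch_resource("Systems")
    for member in systems.get("Members", []):
        print(member["@odata.id"])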
Prices for the CSB1-N10 series range from $2,059 for the CSB1-N10R3576 to $14,709 for the high-performance CSB1-N10NOrinNX with 1000 TOPS. Purchasing links and datasheets for all models are available on the product page.