Gradient releases the Echo-2 RL framework, boosting AI research efficiency more than tenfold
Feb 12, 2026 23:14:51
The distributed AI lab Gradient today released Echo-2, a distributed reinforcement learning framework aimed at removing the training-efficiency bottleneck in AI research. By fully decoupling the Learner and the Actor at the architectural level, Echo-2 cuts the post-training cost of a 30B-parameter model from $4,500 to $425, yielding more than ten times the research throughput on the same budget.
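To illustrate what Learner/Actor decoupling means in practice, here is a minimal sketch, not Echo-2's actual API: actors stream rollouts into a shared buffer from cheap sampling hardware, while the learner consumes batches at its own pace on stable training GPUs, so neither side blocks on the other. All names and the queue-based interface below are assumptions for illustration.

```python
import queue
import random
import threading
import time

# Hypothetical sketch of a decoupled Actor/Learner loop (not Echo-2's API):
# actors push rollouts into a queue; the learner consumes them independently.
rollouts = queue.Queue(maxsize=256)

def actor(actor_id: int, steps: int) -> None:
    """Stands in for sampling on cheap, preemptible GPU instances."""
    for step in range(steps):
        # Toy "trajectory"; in a real system this is model-generated.
        trajectory = {"actor": actor_id, "step": step, "reward": random.random()}
        rollouts.put(trajectory)
        time.sleep(0.01)  # simulate generation latency

def learner(total_batches: int, batch_size: int = 8) -> None:
    """Stands in for gradient updates on stable training GPUs."""
    for _ in range(total_batches):
        batch = [rollouts.get() for _ in range(batch_size)]
        mean_reward = sum(t["reward"] for t in batch) / len(batch)
        print(f"update on {len(batch)} trajectories, mean reward {mean_reward:.3f}")

# Four actors (4 x 40 = 160 trajectories) feed one learner (20 x 8 = 160).
threads = [threading.Thread(target=actor, args=(i, 40)) for i in range(4)]
for t in threads:
    t.start()
learner(total_batches=20)
for t in threads:
    t.join()
```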
The framework uses compute-storage separation to enable asynchronous RL training (Async RL), offloading the heavy sampling workload to unstable (preemptible) GPU instances and Parallax-based heterogeneous GPUs. Combined with bounded staleness, fault-tolerant instance scheduling, and the in-house Lattica communication protocol, it substantially improves training efficiency while preserving model accuracy. Alongside the framework, Gradient will soon launch Logits, an RLaaS platform, pushing AI research from a "capital accumulation" paradigm toward an "efficiency iteration" paradigm. Logits is now open for reservations to students and researchers worldwide.
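The "bounded staleness" mentioned above can be sketched as a version check on incoming samples, again a hypothetical illustration rather than Echo-2's implementation: a rollout generated by a policy more than K weight versions behind the learner is rejected rather than trained on, which lets actors run fully asynchronously while capping how off-policy the training data can get. The bound K and the function name here are made up for the example.

```python
MAX_STALENESS = 4  # hypothetical bound K on policy-version lag

def admit(trajectory_version: int, learner_version: int) -> bool:
    """Accept a rollout only if a recent-enough policy generated it.

    Actors never wait for the learner, but samples whose policy lags
    the current weights by more than MAX_STALENESS are discarded.
    """
    return learner_version - trajectory_version <= MAX_STALENESS

# Example: the learner is on weight version 10.
assert admit(trajectory_version=8, learner_version=10)      # lag 2: train on it
assert not admit(trajectory_version=5, learner_version=10)  # lag 5: discard
```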