Welcome to DEKUBE
DEKUBE is the world's first network for distributed training of large AI models, transforming consumer-grade GPUs into enterprise-level AI computing power. It provides more cost-effective, flexible, and efficient compute for AI workloads such as large language model training and fine-tuning.

The project has been under development for three years by a top-tier European technical team of more than 100 members, including cryptographers, mathematicians, senior system architects, and cybersecurity specialists; core members come from national quantum laboratories, Google, Red Hat, and other organizations.

Product development is nearing completion and has entered internal testing. GPUs can be integrated seamlessly through a complete, user-friendly client, and distributed fine-tuning tests of Llama 2 70B have been run successfully. By April 2024, the network will support distributed training of mainstream open-source large models, and in Q3 2024 DEKUBE will launch its own proprietary large model.