Description
Dria is the universal execution layer for AI, engineered for high-performance AI workloads on both decentralized and local networks. It is designed to optimize any model on any engine across all types of hardware, and its services include Batch Inference, DNET, Research, and Edge AI.

Dria's Batch Inference API is an open-source, crowdsourced tool optimized for massive AI workloads, making it well suited to processing large volumes of data, reducing inference costs, and running large-scale offline evaluations. It supports a wide range of models, including Claude, Gemini, Gemma, GPT, Llama, and Mistral, and offers low-cost inference for large jobs, high throughput, and asynchronous processing.
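The asynchronous batch-processing pattern mentioned above can be sketched as follows. This is a generic illustration only: `fake_infer` and `run_batch` are hypothetical stand-ins for a real inference call, not Dria's actual API, and the concurrency limit is an assumed parameter.

```python
import asyncio

async def fake_infer(prompt: str) -> str:
    # Hypothetical stand-in for a remote model call; a real client
    # would await an HTTP request here instead.
    await asyncio.sleep(0)
    return prompt.upper()

async def run_batch(prompts: list[str], concurrency: int = 4) -> list[str]:
    # Bound in-flight requests with a semaphore so a large batch
    # is processed with high throughput but without overload.
    sem = asyncio.Semaphore(concurrency)

    async def worker(prompt: str) -> str:
        async with sem:
            return await fake_infer(prompt)

    # gather preserves input order, so results align with prompts.
    return await asyncio.gather(*(worker(p) for p in prompts))

if __name__ == "__main__":
    results = asyncio.run(run_batch(["hello", "world"]))
    print(results)  # ['HELLO', 'WORLD']
```

In a real batch-inference workflow the submit step is typically fire-and-forget: the job is queued, processed offline, and results are fetched later, which is what makes this model cheap for large, latency-tolerant workloads.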