Distributed training

Organizations by Tag: distributed-training — Teams Using Distributed Training Frameworks for Scalable Multi-Node ML Infrastructure

Discover organizations tagged with distributed-training: a curated list of companies, research labs, and open-source teams that use distributed training frameworks such as Horovod, PyTorch Distributed, and TensorFlow MirroredStrategy to scale workloads across multi-node GPU/TPU clusters. Use the tag filter to compare production architectures, multi-node GPU training pipelines, model- and data-parallel strategies, and benchmarked throughput for large language models, vision systems, and federated learning. Explore each organization's projects, repositories, and infrastructure choices, narrow results with keywords such as "multi-node GPU training" and "distributed training orchestration," and shortlist partners, request demos, or evaluate integration options.
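To make the data-parallel strategy mentioned above concrete, here is a minimal pure-Python sketch of the idea: each worker holds a shard of the batch, computes a local gradient, and the gradients are averaged (an all-reduce) before one synchronized weight update. All names and numbers are illustrative; real deployments delegate this to frameworks such as PyTorch Distributed or Horovod.

```python
# Simulated data-parallel SGD for a 1-D linear model y = w * x.
# Illustrative only: workers run sequentially here; in a real framework
# each shard's gradient is computed on a separate GPU/node in parallel.

def local_grad(w, shard):
    # Gradient of mean squared error on this worker's shard.
    n = len(shard)
    return sum(2 * (w * x - y) * x for x, y in shard) / n

def all_reduce_mean(grads):
    # Stand-in for an all-reduce: average gradients across workers.
    return sum(grads) / len(grads)

def data_parallel_sgd(data, num_workers=4, lr=0.01, steps=200):
    shard_size = len(data) // num_workers
    shards = [data[i * shard_size:(i + 1) * shard_size]
              for i in range(num_workers)]
    w = 0.0
    for _ in range(steps):
        grads = [local_grad(w, s) for s in shards]  # parallel in practice
        w -= lr * all_reduce_mean(grads)            # synchronized update
    return w

# Fit y = 3x from synthetic data split across 4 workers.
data = [(x, 3.0 * x) for x in range(1, 9)]
print(round(data_parallel_sgd(data), 3))  # converges to 3.0
```

Because every worker applies the same averaged gradient, all replicas stay in sync, which is the core invariant that synchronous data-parallel frameworks enforce at scale.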