AI safety

Organizations Tagged with ai-safety: AI Safety Research, Governance, Robustness, and Risk Mitigation

Explore organizations tagged with ai-safety to find teams, labs, and companies focused on AI safety research, governance frameworks, model robustness, interpretability, verification, adversarial testing, and alignment engineering. This curated list, filtered by the tags pillar, surfaces long-tail expertise such as large-scale model red-teaming, formal verification for ML systems, operational safety pipelines, policy and compliance audits, and grant-funded safety initiatives, so you can compare technical approaches, funding sources, and open-source contributions. Use the filtering UI to narrow results by region, expertise, funding stage, or technology stack; view detailed profiles, compare capabilities, and take the next step to partner, hire, or apply for grants focused on ai-safety.