At Databricks, we've upheld principles of responsible development throughout our long-standing history of building innovative data and AI products. We are committed to blue-sky research and open source innovation — it's part of our culture and stems from our company's academic roots.
Building on this legacy, Databricks recently joined several industry and government efforts to promote innovation and advocate for the use of safe and trustworthy AI. Whether it's contributing insights from our Data Intelligence Platform for research or joining other industry players to educate policymakers on AI, Databricks is proud to be a part of initiatives to accelerate progress, improve safety, bolster security and build trust in AI.
Here are some of the groups we've joined or are collaborating with:
The AI Alliance
The AI Alliance is a community of technology creators, developers, and adopters collaborating to advance safe, responsible AI rooted in open innovation. The group is focused on accelerating and disseminating open innovation across the AI technology landscape to improve foundational capabilities, safety, security, and trust in AI, and to responsibly maximize benefits to people and society everywhere. To that end, the AI Alliance brings together a critical mass of computing, data, tools, and talent.
National Institute of Standards and Technology (NIST)'s U.S. Artificial Intelligence Safety Institute Consortium (AISIC)
Databricks is collaborating with the National Institute of Standards and Technology (NIST) in the Artificial Intelligence Safety Institute Consortium to establish a new measurement science that will enable the identification of proven, scalable, and interoperable measurements and methodologies to promote the development of trustworthy Artificial Intelligence (AI) and its responsible use.
NIST does not evaluate commercial products under this Consortium and does not endorse any product or service used. Additional information on this Consortium can be found at: https://www.federalregister.gov/documents/2023/11/02/2023-24216/artificial-intelligence-safety-institute-consortium
National Science Foundation (NSF)'s National Artificial Intelligence Research Resource (NAIRR)
NAIRR is the NSF's concept for a shared national infrastructure to bridge the gap in AI research by connecting U.S. researchers to responsible and trustworthy AI resources, as well as the necessary computational, data, software, training, and educational resources to advance research, discovery, and innovation. The aim is to ensure that AI resources and tools are equitably accessible to the broad research and education communities in a manner that advances trustworthy AI and protects privacy, civil rights, and civil liberties. As part of our work with NAIRR, Databricks will contribute an instance of our Data Intelligence Platform, ultimately enabling the next generation of students to create new breakthroughs and build new businesses in AI.
As the original creators of popular open source AI tools and models, we see participating in these efforts as an opportunity to further our mission to solve the world's toughest problems — and to support and enhance open innovation to democratize data and AI to benefit society at large. As other groups form, we'll continue to consider joining the organizations that share our vision for the safe, responsible, and collaborative use of AI.
Stay tuned for more details on what we learn.