IBM and Meta have joined with more than 50 founding members and collaborators to launch the AI Alliance.
The rapid development of generative AI applications is raising concerns about responsible development and the risk of harm to society, set against the backdrop of fast-paced commercial advancement and the need to balance innovation with safety.
The international alliance brings together organizations across the technology industry, start-ups, academia, research, and government to support open innovation and AI science, along with developing systems that prioritize fairness, transparency, and accountability.
The partnership announced:
“Open and transparent innovation is essential to empower a broad spectrum of AI researchers, builders, and adopters with the information and tools needed to harness these advancements in ways that prioritize safety, diversity, economic opportunity and benefits to all.
“While there are many individual companies, start-ups, researchers, governments, and others who are committed to open science and open technologies and want to participate in the new wave of AI innovation, more collaboration and information sharing will help the community innovate faster and more inclusively, and identify specific risks and mitigate those risks before putting a product into the world.”
How Will the AI Alliance Work?
“Members of the AI Alliance believe that open innovation is essential to develop and achieve safe and responsible AI that benefits society rather than a select few big players,” according to the website.
Their work will encompass AI data sets, models, tools, and talent to build and support open technologies. They intend to advocate for the value of open innovation with organizational leaders, as well as policy and regulatory bodies and the public.
The AI Alliance plans to start or develop projects that:
- Develop and deploy resources such as benchmarks and evaluation standards for open model releases and model deployment into applications to enable the responsible development and use of AI systems globally. This will include creating and promoting a catalog of safety, security, and trust tools with the developer community.
- Advance an ecosystem of open multilingual, multi-modal, and science models that aim to help address social challenges such as climate change and education. Alliance members will cooperate to help build and promote open-source tools for model training, tuning, and inference.
- Support the development of an AI hardware accelerator ecosystem by increasing contributions and adopting enabling software technology. Members will collaborate on benchmarking, optimizing, and adapting AI workloads to develop hardware capabilities, with a focus on scalability and energy consumption, including carbon modeling.
- Work with the academic community to build global AI education, skills, and exploratory research to address the AI skill gap. This will enable researchers and students to contribute to research projects, including AI algorithms, models, platforms, and techniques for limiting the power and resources AI consumes.
- Develop educational content and resources to inform the public and policymakers on the benefits, risks, and solutions for AI.
- Launch initiatives to support AI’s safe and open development and host events showcasing how members responsibly use open technology in AI.
The Alliance will begin by forming member-led working groups, along with a governing board and a technical oversight committee to set standards and guidelines for advancing these projects.
In addition to bringing together AI leaders across business, development, and research, the alliance plans to partner with existing AI initiatives led by governments, non-profits, and civil society organizations.
The Alliance will continue to add new members, giving them flexibility in how much they collaborate and contribute in order to maximize participation by organizations and individuals.
The members of the AI Alliance include:
- The creators of tools and applications driving AI benchmarking, trust and validation metrics, and best practices such as MLPerf, Hugging Face, LangChain, LlamaIndex, and open-source AI toolkits for explainability, privacy, adversarial robustness, and fairness evaluation.
- Universities and science agencies involved in the research and training of AI scientists and engineers through open science.
- Companies that build hardware and infrastructure for AI training and applications, such as graphics processing units (GPUs), custom AI accelerators, and cloud platforms.
- Developers of platform software frameworks, including PyTorch, Transformers, Diffusers, Kubernetes, Ray, Hugging Face Text Generation Inference, and Parameter-Efficient Fine-Tuning (PEFT).
- Creators of open AI models such as Llama 2, Stable Diffusion, StarCoder, and BLOOM.
These organizations include:
- Cleveland Clinic
- Cornell University
- Dell Technologies
- Hugging Face
- Imperial College London
- Linux Foundation
- MOC Alliance, operated by Boston University and Harvard University
- Partnership on AI
- Red Hat
- Sony Group
- Stability AI
- University of California Berkeley
- University of Illinois
- University of Notre Dame
- The University of Tokyo
- Yale University
Bringing together these academic and commercial organizations gives an extensive international network of scientists open access to AI innovation, training, and governance. It could also help expand computing capacity by combining supercomputing, quantum computing, semiconductor, and AI research.
“AI innovation must remain open to drive positive and equitable societal impact, foster continued progress, and address potential risks collaboratively. There is no room for a winner-take-all approach; the development of responsible, secure LLMs comes in many forms,” CJ Desai, president and chief operating officer (COO) at cloud software firm ServiceNow, said in the statement.
Robert Nishihara, chief executive officer (CEO) of AI application platform developer Anyscale, stated: “AI will have a positive impact on our daily lives and address some of the world’s most pressing challenges, but like with any new technology or innovation, we need to consider the risks.
“To ensure that open source communities can continue to flourish, innovate, deliver rich technological progress, and advance the broader AI ecosystem, it’s imperative that we advance AI ethics, governance, and safety. The AI Alliance is an important step to ensuring that our society can benefit from AI responsibly and equitably.”
The Importance of Open-Source AI Development
Much of the work on developing AI over the past few decades has been built on open-source research and development. Some industry experts believe AI models should remain open source to lower barriers to entry and avoid redundant model training.
However, companies such as OpenAI, Microsoft, and Google prioritize developing proprietary models, and Apple is reportedly testing its generative AI tools to compete with OpenAI’s ChatGPT. Those companies are notably absent from the AI Alliance’s initial membership.
“Open source is the backbone of all leading artificial intelligence software. With open source, the entire community collaborates to solve the toughest problems, the most effective solutions rise to the top, and everyone benefits,” stated Jeremy Howard, founding researcher at deep learning non-profit Fast.ai, in the announcement.
Open-source development has several advantages, including collaboration and knowledge sharing among researchers, developers, and organizations. It enables experts worldwide to share ideas, solve common problems, and avoid duplication of effort. Developers can learn from each other, fostering innovation and increasing efficiency and flexibility.
“We will pool resources and knowledge to address safety concerns while providing a platform for sharing and developing solutions that fit the researchers, developers, and adopters around the world,” the Alliance statement said.
Open-source projects also provide transparency into the underlying algorithms, models, and code, which helps build trust among users and the broader public. This is critical for accountability, particularly when AI is used in applications that can affect individuals’ lives, such as finance, healthcare, and criminal justice.
Engagement from a broad range of voices is also essential to ensure that AI technologies are developed with diverse perspectives, account for ethical concerns, and avoid the unintentional biases that can arise in closed, proprietary systems.
Challenges of Open-Source Development
Open-source AI development offers clear advantages, but it also comes with challenges that have prompted some organizations to favor proprietary approaches.
- Contributing to open-source projects can raise concerns about intellectual property rights. Disagreements among participants over licensing terms or the ownership of contributions can destabilize a project and even lead to legal challenges.
- While open-source development fosters transparency, it also exposes code to potential security vulnerabilities. Large, distributed codebases can make identifying and addressing security issues difficult.
- Conflicts and difficulties in reaching consensus among contributors can affect a project’s decision-making processes, direction, and goals or slow down development.
- The decentralized nature of open-source development can result in a project becoming fragmented, with multiple versions or forks that create compatibility issues. This makes it essential for developers to ensure interoperability between software versions and components.
- Some organizations will also prioritize commercializing AI tools and applications for profit over contributing to open projects.
- Open-source projects may lag in gaining widespread adoption, especially if competing proprietary products offer better marketing, user experience, or support.
Despite such challenges, many open-source projects aim to address these issues. Recognizing and mitigating them will be key to the success of open-source AI development.