Apple sits out as Google, Microsoft commit to controlling AI and combating cancer
OpenAI, Google, Microsoft, and Anthropic (an AI safety and research company based in California) have formed a new AI-focused association called the Frontier Model Forum. Notice a missing tech giant among these four? Yup, Apple is not joining that party, at least not for now.
Before covering what this new consortium will do (via AppleInsider), it’s worth noting that this is the second time Apple has sat out a major AI-related initiative. Last time (July 21), the Cupertino giant was missing from the White House’s announcement that it had secured voluntary commitments from several leading companies in the AI field to manage the risks posed by artificial intelligence.
Here’s what a Google blog post about the new AI-focused association announces:
Today, Anthropic, Google, Microsoft and OpenAI are announcing the formation of the Frontier Model Forum, a new industry body focused on ensuring safe and responsible development of frontier AI models. The Frontier Model Forum will draw on the technical and operational expertise of its member companies to benefit the entire AI ecosystem, such as through advancing technical evaluations, benchmarks and developing a public library of solutions to support industry best practices and standards.
What’s a frontier model?
As explained in the same Google blog article, frontier models are defined as “large-scale machine-learning models that exceed the capabilities currently present in the most advanced existing models, and can perform a wide variety of tasks”.
There are membership criteria for organizations that want to join: they have to develop and deploy frontier models (as defined by the Forum), demonstrate a strong commitment to frontier model safety (including through technical and institutional approaches), and be willing to contribute to advancing the Forum’s efforts, including by participating in joint initiatives and supporting the development and functioning of the initiative.
What’s their goal, again?
There are four core objectives, according to members of the Frontier Model Forum. Here they are in their own words:
- Advancing AI safety research to promote responsible development of frontier models, minimize risks, and enable independent, standardized evaluations of capabilities and safety.
- Identifying best practices for the responsible development and deployment of frontier models, helping the public understand the nature, capabilities, limitations, and impact of the technology.
- Collaborating with policymakers, academics, civil society and companies to share knowledge about trust and safety risks.
- Supporting efforts to develop applications that can help meet society’s greatest challenges, such as climate change mitigation and adaptation, early cancer detection and prevention, and combating cyber threats.
Here’s what Anna Makanju, OpenAI’s Vice President of Global Affairs, says:
Advanced AI technologies have the potential to profoundly benefit society, and the ability to achieve this potential requires oversight and governance. It is vital that AI companies – especially those working on the most powerful models – align on common ground and advance thoughtful and adaptable safety practices to ensure powerful AI tools have the broadest benefit possible. This is urgent work and this forum is well-positioned to act quickly to advance the state of AI safety.
Some are skeptical
Emily Bender, a computational linguist at the University of Washington and an AI expert, sees these pledges from tech giants as more of “an attempt to avoid regulation; to assert the ability to self-regulate, which I’m very skeptical of”. According to her, “The regulation needs to come externally. It needs to be enacted by the government representing the people to constrain what these corporations can do”.