Focus on AI safety: Apple, Google, Meta, and Microsoft unite to make sure AI doesn’t outsmart us
New flagship smartphones, like the recent Galaxy S24 and Pixel 8 series, are already packed with AI features. While generative AI, which can produce text, photos, and videos from open-ended prompts, brings excitement, it also raises concerns.
Some worry it could replace jobs, influence elections, and even surpass human abilities. That is why US Commerce Secretary Gina Raimondo introduced a new initiative called the US AI Safety Institute Consortium (AISIC).
According to Reuters, over 200 companies, including big tech names like Apple, Amazon, Google, Meta, Microsoft, OpenAI, Qualcomm, and Nvidia, joined the AISIC initiative at the request of the White House.
The group's job is to focus on key tasks laid out in President Biden's October AI executive order. This includes creating guidelines for red-teaming, evaluating capabilities, managing risks, ensuring safety and security, and adding watermarks to synthetic content.
The consortium, described as the largest assembly of test and evaluation teams to date, aims to lay the groundwork for a "new measurement science in AI safety." AISIC appears to be the first step toward meeting the majority of the order's requirements. It is still unclear whether there are different levels of membership or participation requirements for members.
All of the tech companies mentioned are working on generative AI in different ways. Google is one of the leaders in AI innovation (it just rolled out its new chatbot, Gemini, for Android). Microsoft, in collaboration with OpenAI, made ChatGPT a major talking point last year. Qualcomm, with its Snapdragon 8 Gen 3, opened the door for AI-powered features in smartphones. Apple is also reportedly working on enhancing Siri using AI technology.
Last year, major AI companies made a commitment to add watermarks to AI-generated content as a safety measure. In line with this, Meta is gearing up to start labeling AI-generated content on Facebook, Instagram, and Threads.