Opera starts offering a unique, experimental AI feature
Opera has been investing heavily in AI. Today, the company revealed yet another AI feature that no other browser currently provides: the ability to access local AI models.
By adding experimental support for 150 local LLM (Large Language Model) variants from about 50 model families, the company makes it possible for users to access and manage local LLMs directly from its browser.
According to Opera, the local AI models are a complementary addition to Opera's online Aria AI service, which is also available in the Opera browser on iOS and Android. The supported local LLMs include Llama (Meta), Vicuna, Gemma (Google), Mixtral (Mistral AI), and many more.
The new feature matters most to users who want to keep their browsing as private as possible. Accessing local LLMs from Opera means users' data stays on their device, letting them use generative AI without sending any information to a server.
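Opera hasn't detailed the runtime behind the feature in this announcement, but the privacy argument is easy to see in practice: a locally served model answers over the loopback interface, so the prompt and the response never leave the machine. A minimal sketch, assuming an Ollama-style local server on port 11434 and a hypothetical already-downloaded model named "gemma" (both are assumptions, not Opera's documented API):

```python
# Illustrative only: assumes an Ollama-style server is running locally.
# The endpoint, port, and model name are assumptions for this sketch.
import requests

resp = requests.post(
    "http://localhost:11434/api/generate",
    json={
        "model": "gemma",  # hypothetical locally downloaded variant
        "prompt": "Summarize this page in one sentence.",
        "stream": False,
    },
)

# The only network hop is to 127.0.0.1, which is the whole privacy point:
# no prompt text or page content is sent to a remote server.
print(resp.json()["response"])
```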
Starting today, the new feature is available to Opera One Developer users, though activating it requires upgrading to the newest version of Opera Developer and following a step-by-step guide.
Keep in mind that once you select an LLM, it will be downloaded to your device. A local LLM typically requires 2-10 GB of storage space per variant, so trying out several models can quickly consume tens of gigabytes. Once downloaded, the local LLM is used instead of Aria, Opera's native browser AI.
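Given those figures, it's worth checking free disk space before pulling several variants. A small sketch using only Python's standard library (the "/" path and the 10 GB threshold are assumptions; Opera's actual model directory isn't documented in the announcement):

```python
import shutil

# Free space on the drive where models would be stored.
free_gb = shutil.disk_usage("/").free / 1e9
print(f"Free space: {free_gb:.1f} GB")

# A single variant can need up to ~10 GB, per Opera's figures.
if free_gb < 10:
    print("Probably not enough room for another large model variant.")
```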