Highlights:

  • According to the company, the new feature is available through the Opera browser’s developer stream as part of the AI Feature Drops Program.
  • Opera One supports local models from Mistral AI, Google LLC’s Gemma, Meta Platforms Inc.’s Llama, and Vicuna.

Web browser developer Opera has added experimental support for 150 local LLM variants, drawn from about 50 families of models, to its Opera One browser.

With local LLMs, users can run AI models directly on their PC, so prompt processing never leaves the device. This reduces latency, because the data does not travel over the internet. It also means the data cannot be used to train another model.

Users who want the extra privacy of processing prompts locally can test the new LLMs, even when they are not connected to a network.

According to the company, the new feature is available through the Opera browser’s developer stream as part of the AI Feature Drops Program. To access local LLMs, interested users will need to update to the most recent version of Opera Developer.

Krystian Kolondra, Executive Vice President of gaming and browsers at Opera, said, “Introducing Local LLMs in this way allows Opera to start exploring ways of building experiences and know-how within the fast-emerging local AI space.”

Opera One supports local models from Mistral AI, Google LLC’s Gemma, Meta Platforms Inc.’s Llama, and Vicuna. When activated, a local LLM replaces Aria, Opera’s built-in browser AI, and most versions require two to ten gigabytes of local storage space.

To download and activate one of the new models, users will need to launch Aria Chat, select “local mode” from a drop-down menu, and choose a model to download from settings. Additional models can subsequently be downloaded and activated from that same location.

There are many other local LLMs to try out, such as Meta’s Code Llama, an extension of Llama that lets users discuss code in Python, C++, Java, PHP, and C#. Additionally, Microsoft Corp. offers Phi-2, a 2.7-billion-parameter model with language capabilities for chat, coding assistance, and question-and-answer sessions. Furthermore, Mixtral is a model that is particularly good at natural language processing tasks, including classifying text, producing poetry, responding to emails, and creating tweets.