
Google launches Gemini 1.5 – a faster, more secure AI model with better reasoning and understanding


In a swift follow-up to the launch of Gemini, Google’s ambitious language model aiming to dominate the AI industry, the tech giant is introducing Gemini 1.5. Today, the upgraded version, Gemini 1.5 Pro, is being released to developers and enterprise users, offering enhanced performance and paving the way for a broader consumer rollout in the near future.

Gemini 1.5 Pro marks a shift in Google’s approach, incorporating recent research and engineering advances. Built on a combination of Transformer and Mixture-of-Experts (MoE) architectures, the model is divided into smaller “expert” neural networks and, depending on the input, selectively activates only the most relevant ones. This makes it more efficient to train and serve while still learning complex tasks quickly and maintaining quality.
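Google has not published the internals of Gemini’s MoE layers, but the routing idea can be sketched in miniature. In this hypothetical toy (all sizes, weights, and names are invented for illustration, not Gemini’s actual design), a router scores the experts for each input and only the top-k selected experts actually run:

```python
import math
import random

random.seed(0)

# Toy sizes for illustration only; real models are vastly larger.
D_MODEL, N_EXPERTS, TOP_K = 8, 4, 2

def rand_matrix(rows, cols):
    return [[random.uniform(-0.1, 0.1) for _ in range(cols)] for _ in range(rows)]

def matvec(m, v):
    return [sum(w * x for w, x in zip(row, v)) for row in m]

# Each "expert" is a small feed-forward layer (here: one weight matrix),
# plus a router that scores how relevant each expert is to a given input.
experts = [rand_matrix(D_MODEL, D_MODEL) for _ in range(N_EXPERTS)]
router = rand_matrix(N_EXPERTS, D_MODEL)

def moe_forward(x):
    """Route one token vector to its TOP_K experts and mix their outputs."""
    scores = matvec(router, x)                        # one relevance score per expert
    top = sorted(range(N_EXPERTS), key=lambda i: scores[i])[-TOP_K:]
    weights = [math.exp(scores[i]) for i in top]
    total = sum(weights)
    gates = [w / total for w in weights]              # softmax over the selected experts
    # Only the selected experts run; the others stay inactive for this input.
    mixed = [0.0] * D_MODEL
    for g, i in zip(gates, top):
        for j, val in enumerate(matvec(experts[i], x)):
            mixed[j] += g * val
    return mixed, top

token = [random.uniform(-1, 1) for _ in range(D_MODEL)]
output, used_experts = moe_forward(token)
print(len(output), len(used_experts))  # 8 2
```

Because only a fraction of the network activates per input, compute cost grows much more slowly than total parameter count, which is the efficiency the article alludes to.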

Gemini 1.5 Pro has an expanded context window capacity, surpassing the original 32,000 tokens of Gemini 1.0. This advancement allows the model to process up to 1 million tokens in production.
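To get a feel for these numbers, here is a rough back-of-the-envelope sketch. The ~1.33 tokens-per-word ratio is a common rule of thumb, not a figure from this article, and real tokenizer counts vary by model and content:

```python
# Assumption (rule of thumb, not from the article): English text averages
# roughly 1.33 tokens per word; actual tokenizers vary.
TOKENS_PER_WORD = 1.33

def estimated_tokens(word_count):
    """Very rough token estimate for an English document."""
    return round(word_count * TOKENS_PER_WORD)

def fits_in_window(word_count, window_tokens=1_000_000):
    """Would a document of this length fit in the given context window?"""
    return estimated_tokens(word_count) <= window_tokens

print(estimated_tokens(700_000))          # ~931,000 estimated tokens
print(fits_in_window(700_000))            # True: inside the 1M-token window
print(fits_in_window(700_000, 128_000))   # False: far beyond a 128K window
```

At that ratio, a 700,000-word document comes to a bit over 900,000 tokens, comfortably inside the 1 million token window but far beyond the original 32,000-token one.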

[Image: Gemini 1.5 Pro context window comparison]

Benefits of Gemini 1.5 Pro to developers and enterprise users

Gemini 1.5 Pro empowers users with the following capabilities:

  • Versatile Problem-Solving Abilities: 1.5 Pro can seamlessly analyze, classify, and summarize large volumes of content. It excels at understanding and reasoning across modalities, including video, and delivers more relevant problem-solving over lengthy blocks of code.
  • Enhanced Data Processing: 1.5 Pro effortlessly handles extensive data loads, encompassing tasks such as processing 1 hour of video, 11 hours of audio, codebases exceeding 30,000 lines of code, or over 700,000 words.
  • Outstanding Performance Metrics: In extensive evaluations spanning text, code, image, audio, and video benchmarks, 1.5 Pro outperforms its predecessor, 1.0 Pro, on 87% of the benchmarks used for large language models (LLMs), and performs comparably to 1.0 Ultra on the same benchmarks. In the Needle In A Haystack (NIAH) evaluation, 1.5 Pro consistently finds embedded text, achieving a 99% success rate in blocks of data up to 1 million tokens.
  • In-Context Learning: Gemini 1.5 Pro showcases remarkable “in-context learning” skills, enabling it to acquire new skills from extensive prompts without requiring additional fine-tuning.
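Google has not published its NIAH methodology in detail, but the shape of such an evaluation is easy to sketch. In this illustrative harness (the filler text, needle, and scoring rule are all invented, and the actual model call is left out), a known fact is buried at a chosen depth in a long document and the model’s answer is checked for it:

```python
FILLER = "The quick brown fox jumps over the lazy dog. "

def build_haystack(needle, approx_words, depth=0.5):
    """Build long filler text with `needle` inserted at a relative depth.

    depth = 0.0 puts the needle at the start, 1.0 at the end.
    """
    sentences = [FILLER] * (approx_words // len(FILLER.split()))
    insert_at = int(len(sentences) * depth)
    sentences.insert(insert_at, needle + " ")
    return "".join(sentences)

def score_retrieval(model_answer, expected):
    """A run passes if the model's answer contains the hidden fact."""
    return expected.lower() in model_answer.lower()

needle = "The secret passphrase is 'aurora-17'."
haystack = build_haystack(needle, approx_words=5_000, depth=0.7)

# In a real evaluation, `haystack` plus a question such as "What is the
# secret passphrase?" would be sent to the model; here we only verify
# the harness itself.
assert needle in haystack
print(score_retrieval("the secret passphrase is 'aurora-17'", "aurora-17"))  # True
```

Sweeping `depth` and the haystack length over many runs, and averaging the pass rate, yields the kind of retrieval-accuracy figure the 99% claim describes.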

Availability and pricing

A limited preview of Gemini 1.5 Pro is available to developers and enterprise users through AI Studio and Vertex AI. When the model is ready for a broader release, it will launch with a standard 128,000-token context window.

Soon, Google will unveil pricing tiers starting at the standard 128,000-token context window and scaling up to 1 million tokens as the model continues to improve.

For early testers, the 1 million token context window is available at no cost during the testing phase. Google notes that this experimental feature may incur longer latency, though significant speed improvements are in the pipeline as it continues to refine the model.

Developers keen on testing Gemini 1.5 Pro can enlist in the program through AI Studio, while enterprise customers are encouraged to connect with their Vertex AI account team for further details and assistance.

As Google races to build the best AI tools, Gemini 1.5 emerges as an impressive contender, particularly for those within Google’s ecosystem. However, with rivals advancing quickly, such as OpenAI’s recent announcement of “memory” for ChatGPT, competition is intensifying in a fast-moving AI industry.

