Google Cloud Next ’24 Hints at ‘AI Across Its Ecosystem’ Superpower


This week, Google kicked off its annual Google Cloud Next 2024 conference in Las Vegas, and, to no one’s surprise, generative AI sits proudly at the top of the agenda.

Not only has Google announced that Gemini 1.5 Pro will be able to process audio input, but it has also made the large language model (LLM) available to Google Cloud customers via its Vertex AI platform.

In addition, the tech giant announced the launch of Google Vids, an AI-driven video editing, writing, and production assistant. This comes roughly two months after OpenAI showcased its text-to-video tool Sora.

Together, these announcements highlight that Google isn’t just looking to enhance its language models but is actively seeking new ways to distribute them to users as part of its product ecosystem.

Bringing Gemini to the Cloud

One of the clearest trends from the event is that Google Cloud is embedding Gemini in a range of virtual assistants customized for particular use cases.

For example, Gemini Cloud Assistant is designed to provide users with personalized guidance on how to manage Google Cloud resources. This includes deploying workloads, managing applications, troubleshooting, and optimizing performance or costs.

Another tool implementing this model is Gemini Code Assist, an AI-powered assistance tool designed to complete code as you write and provide recommendations to help build applications. It supports over 20 programming languages, including Java, JavaScript, Python, C, C++, Go, PHP, and SQL.

A cybersecurity assistant, Gemini in Security Operations, was also launched. This assistant is accessible via the cloud-native security operations suite Chronicle and provides security professionals with assisted investigations, summarizing threat data and recommending remediation actions.

The key takeaway here is that Google is no longer deploying Gemini only as a general-purpose model; it’s drilling down into domain- and use-case-specific virtual assistants across its product ecosystem.

Building a Multimodal Product Ecosystem

One of the biggest trends in AI development for the past year has been the shift toward multimodality. Multimodal AI solutions are models that can process multiple formats, including text, image, voice, and video.

Google Cloud Next highlights that Google is on a mission to establish a multimodal product ecosystem.

At the heart of this approach is Gemini, its flagship model, which can process text, audio, video, and code. Other tools, such as ImageFX and now Google Vids, allow users to create image and video outputs.

Vertex AI then provides a solution for users and developers to customize and manage AI models in the cloud, while Google Cloud’s underlying AI Hypercomputer architecture combines TPUs and GPUs to let organizations train and run models at a lower cost.

In short, Google has the models and infrastructure to deliver them in the cloud to enterprise users.

Rethinking the LLM Market

At the moment, the LLM market is ruthlessly competitive. Although OpenAI may still be considered the de facto leader due to the wave of hype surrounding the release of ChatGPT, the gap between these solutions and the competition has largely closed.

Today, models like Gemini and Anthropic’s Claude 3 are approaching GPT-4-level performance or exceeding it in certain areas. In terms of context windows, GPT-4 Turbo supports inputs of up to 128,000 tokens, while Gemini 1.5 Pro supports between 128,000 and 1 million tokens.

However, having a high-performance model is just the beginning. The LLM race could be decided based on who has the most comprehensive product ecosystem to integrate these solutions.

Google excels here, reporting that more than 60% of all funded generative AI startups and nearly 90% of generative AI “unicorns” are Google Cloud customers.

The opportunity to integrate with popular solutions like Google Cloud and Google Workspace gives Google an edge over competitors like OpenAI, which partners with third-party vendors such as Microsoft to deliver its models via solutions like Bing Chat but lacks direct ownership of those assets.

In this sense, Google’s main competitor in the market is Microsoft, which delivers AI solutions via products like Bing Chat, Office 365, and Azure with tools like Microsoft Copilot.

This means both Microsoft and Google must become highly specialized in delivering AI-driven products to enterprises, tailoring them to specific use cases and challenges.

The Bottom Line

Google Cloud Next highlights that generative AI is still rapidly evolving.

There is an arms race to develop not just the best model but also the infrastructure and product ecosystem necessary to get those models in front of target users.

Very few companies can flex this muscle, so what Google does next will be interesting to see.
