Alphabet, the parent company of Google, recently demonstrated how it is enhancing artificial intelligence (AI) across its various services. This includes improvements to its Gemini chatbot and its main search engine, as it competes with other AI developers like OpenAI.
During its annual I/O developer event in Mountain View, California, Google showcased several new AI features. One is Gemini 1.5 Flash, a faster and more cost-effective member of the Gemini 1.5 family of AI models. Another is Project Astra, which allows users to interact with their environment in real time using their smartphone camera. Google also introduced AI-generated headlines for search results.
Alphabet's CEO, Sundar Pichai, expressed optimism about these AI updates, highlighting their potential to grow the business. Google's efforts are part of a broader race to match or exceed the capabilities shown by OpenAI's ChatGPT, which has impressed users with its human-like responses.
Google DeepMind, another division of Alphabet, is working on AI technologies that can assist in daily tasks. For example, Project Astra was demonstrated to identify a speaker and locate misplaced glasses. The company also hinted at combining Project Astra with Gemini Live, aiming to create a more natural-sounding voice and text assistant than the current Google Assistant.
In the area of video generation, Google previewed Veo, an AI model that creates high-quality videos; OpenAI is exploring similar video-generation technology with the film industry.
Google announced improvements to its Gemini Pro 1.5 model, doubling its context window to 2 million tokens. This enhancement means the AI can process larger amounts of data in a single request, such as thousands of pages of text or extensive video content.
Additionally, Alphabet shared updates on its new computing chips and changes to its search engine. A sixth-generation tensor processing unit (TPU) was unveiled, offering an alternative to Nvidia's processors. This chip will be available to Google Cloud customers in late 2024.
For U.S. users, Google Search will soon use AI to organize search results in categories like dining, recipes, and eventually movies and books. The AI Overviews feature, tested since last year, will help answer more complex queries by synthesizing information.
Jacob Bourne, an analyst from eMarketer, mentioned that the reception to AI Overviews will indicate how well Google can adapt its search engine to the demands of the AI era. He emphasized the importance of turning AI innovations into profitable products and services.
Google confirmed that ads will continue to appear on its web pages, and that AI Overviews will be expanded to over a billion people by the end of the year. In 2023, Alphabet reported revenues of $307.4 billion, mainly from ads on Google Search and other platforms.
Lastly, Google is experimenting with a feature that allows users to ask questions about videos they upload to Google Search, similar to how they can interact with images today. This was demonstrated with a broken record player, showing how the feature could help diagnose problems.
Key Points
Alphabet is enhancing its AI technology, including a faster and cheaper model, Gemini 1.5 Flash, and a new feature, Project Astra, that interacts with the world through a smartphone camera.
Google is improving its search engine by using AI to categorize results and answer complex queries, and it's introducing a new computing chip to better compete in the AI market.
FAQs
Q1. What improvements has Google made to its search engine?
Google has introduced AI-generated headlines for search results and a new feature, AI Overviews, to help organize and answer more complex queries.
Q2. How does Project Astra work?
Project Astra allows users to interact with their environment in real time using their smartphone camera, identifying objects and providing relevant information.
Q3. What is the Gemini Pro 1.5 model?
The Gemini Pro 1.5 model is an AI that can process large amounts of data, like thousands of pages of text or over an hour of video, to provide more accurate answers.