Google’s Bard is an experimental conversational AI service designed to revolutionize the way people interact with machines. By leveraging natural language processing and machine learning, Bard can understand questions posed in plain language and respond appropriately. For example, a user can ask Bard a question and receive an answer in natural language, rather than typing structured commands or keyword queries. Furthermore, by drawing on LaMDA’s underlying models, Bard is able to detect the intent behind a request and respond accordingly, making conversations with machines feel more natural to the user.
A message from the CEO of Google and Alphabet, Sundar Pichai
Bard seeks to bring together the breadth of the world’s knowledge with the power, intelligence, and creativity of its large language models. It draws on information from the web to offer fresh, high-quality responses. Bard can be an outlet for creativity and a launchpad for curiosity, helping you to explain new discoveries from NASA’s James Webb Space Telescope to a 9-year-old, or to learn more about the best strikers in football right now and then get drills to build your skills, Mr. Pichai said.
The world of digital technology is constantly evolving, with new and exciting technologies emerging every day. But what if we could go beyond the boundaries of what we already know – to unlock a new level of artificial intelligence, one that’s capable of understanding natural language and responding appropriately?
Two years ago, Google unveiled next-generation language and conversation capabilities powered by its Language Model for Dialogue Applications (LaMDA for short), Sundar Pichai, CEO of Google and Alphabet, said in his message.
The team has been working on an experimental conversational AI service, powered by LaMDA, called Bard. Today, Google took another step forward by opening it up to trusted testers before making it more widely available to the public in the coming weeks, Mr. Pichai said.
Google is initially releasing Bard with a lightweight model version of LaMDA. This much smaller model requires significantly less computing power, enabling it to scale to more users and allowing for more feedback. Google will combine external feedback with its internal testing to make sure Bard’s responses meet a high bar for quality, safety, and groundedness in real-world information. The company is excited for this testing phase to help it keep learning and improving Bard’s quality and speed.
Bringing the advantages of AI into everyday products
Google has a long history of using AI to improve Search for billions of people. BERT, one of its first Transformer models, was revolutionary in understanding the intricacies of human language. Two years ago, the company introduced MUM, which is 1,000 times more powerful than BERT and offers next-level, multi-lingual understanding of information: it can pick out key moments in videos and provide critical information, including crisis support, in more languages.
Now, its newest AI technologies — like LaMDA, PaLM, Imagen, and MusicLM — are building on this, creating entirely new ways to engage with information, from language and images to video and audio. The company is working to bring these latest AI advancements into its products, beginning with Search.
One of the most exciting opportunities is how AI can deepen our understanding of information and turn it into useful knowledge more efficiently, making it easier for people to get to the heart of what they’re looking for and get things done. When people think of Google, they often think of turning to us for quick factual answers, like “how many keys does a piano have?” But increasingly, people are turning to Google for deeper insights and understanding, like “is the piano or guitar easier to learn, and how much practice does each need?”
When learning about a topic like this, it can take a lot of effort to figure out what you really need to know, and people often want to explore a diverse range of opinions or perspectives, Mr. Pichai elaborated.
AI can be helpful in these moments, synthesizing insights for questions where there’s no one right answer. Soon, people will see AI-powered features in Search that distill complex information and multiple perspectives into easy-to-digest formats, so users can quickly understand the big picture and learn more from the web: whether that’s seeking out additional perspectives, like blogs from people who play both piano and guitar, or going deeper on a related topic, like steps to get started as a beginner. These new AI features will start rolling out on Google Search soon.