Google’s 2018 I/O keynote was packed with news, with CEO Sundar Pichai announcing some cool tech and developments. Here is everything that happened at the Google I/O conference, held at the Shoreline Amphitheatre in Mountain View, California on May 8, 2018. Big announcements were made around Google Assistant, Gmail, and more, and AI was a major theme throughout.
Google is going all in on artificial intelligence, rebranding its research division as Google AI just before the event. The company has increasingly focused its R&D on computer vision, natural language processing, and neural networks.
Gmail can now draft emails for you (almost) by itself
Google is expanding on its helpful Smart Reply feature with a more ambitious idea: Smart Compose. Smart Compose uses AI “to help you draft emails from scratch, faster.” Does having Gmail create emails without your involvement sound scary? Don’t worry, as the company isn’t going quite that far (yet). Instead, the new feature will make suggestions for complete sentences as you’re typing. Smart Compose is coming to the new Gmail for consumers first over the next few weeks; G Suite users will have to wait a few months.
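Under the hood, features like Smart Compose are sequence-prediction models. As a loose, hypothetical illustration of the general idea (not Google’s implementation, which relies on large neural language models), here is a toy bigram-frequency completer:

```python
# Toy illustration of predictive text: suggest the word that most often
# follows the previous word in a small corpus. This is NOT how Smart
# Compose actually works; it only sketches the prediction idea.
from collections import Counter, defaultdict

corpus = (
    "thanks for the update thanks for the invite "
    "see you at the meeting see you at lunch"
).split()

# Count which word follows each word in the corpus.
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def suggest(prev_word):
    """Return the most frequent next word, or None if unseen."""
    counts = following.get(prev_word)
    return counts.most_common(1)[0][0] if counts else None

print(suggest("for"))  # prints "the"
```

A real system predicts whole phrases from far more context, but the core loop is the same: given what you have typed so far, rank likely continuations and offer the best one.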
New Google Assistant voices
Google’s virtual assistant is getting some more voice variety. Users will get to pick from six additional natural-sounding voices, on top of the original one you’re probably familiar with. Google calls that original voice “Holly.” Oh, and a John Legend voice is also coming to Assistant later this year. Seriously.
Google Assistant is gaining new features to better compete with, and even surpass, Amazon’s Alexa in AI smarts.
Continued Conversation – say goodbye to repeating “Hey Google”
One of the keynote’s Assistant announcements was a “Continued Conversation” update that makes talking to the Assistant feel more natural. Now, instead of having to say “Hey Google” or “OK Google” every time you want to give a command, you’ll only have to do so the first time. The company is also adding a new feature that lets you ask multiple questions within the same request. All of this will roll out in the coming weeks.
When you’re having a typical conversation, odds are you’ll ask follow-up questions if you didn’t get the answer you wanted. But having to say “Hey Google” every single time is jarring; it breaks the flow and makes the process feel unnatural. If Google wants to be a significant player in voice interfaces, the interaction has to feel like a conversation and not just a series of queries.
Perhaps the most jaw-dropping moment of today’s keynote came when Sundar Pichai played back a recording of Google Assistant calling a hair salon and making an appointment in a conversation that legitimately sounded like two humans talking to each other. There was no hint of a robotic voice or that the salon employee recognized they were talking to AI.
During the keynote demo, Google Assistant made an appointment for a women’s haircut entirely on its own. When the appointment is successfully booked, the Assistant confirms it with an entry in your notification panel.
Google announces a new generation of its TPU machine learning hardware
As the war over custom AI hardware heats up, Google said it is rolling out the third generation of its silicon, the Tensor Processing Unit 3.0. Google CEO Sundar Pichai said the new TPU is eight times more powerful per pod than last year’s version, with up to 100 petaflops of performance. Google joins pretty much every other major company in creating custom silicon to handle its machine learning operations.
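A quick back-of-the-envelope check of those numbers is possible if you assume the widely reported figure of 11.5 petaflops for a second-generation TPU pod (that figure is an outside assumption, not something stated in this keynote):

```python
# Sanity-check the keynote's TPU pod claim.
# Assumption: a 2017 TPU v2 pod was reported at ~11.5 petaflops.
TPU_V2_POD_PFLOPS = 11.5
SPEEDUP = 8  # the keynote's "8x more powerful per pod"

v3_estimate = TPU_V2_POD_PFLOPS * SPEEDUP
print(f"Estimated TPU v3 pod performance: {v3_estimate} petaflops")
# prints: Estimated TPU v3 pod performance: 92.0 petaflops
```

That estimate of roughly 92 petaflops lines up with the stated “up to 100 petaflops,” so the two claims are consistent with each other.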
There’s a race to create the best machine learning tools for developers. Whether that’s at the framework level with tools like TensorFlow or PyTorch, or at the actual hardware level, the company that’s able to lock developers into its ecosystem will have an advantage over its competitors. It’s especially important as Google looks to build its cloud platform, GCP, into a massive business while going up against Amazon’s AWS and Microsoft Azure. Giving developers — who are already adopting TensorFlow en masse — a way to speed up their operations can help Google continue to woo them into Google’s ecosystem.
Google Maps is getting way more social
Maps is growing into a full-on social experience that’s squarely targeting Yelp and Foursquare. A new For You tab lets you follow specific neighborhoods to see new restaurants and businesses that are trending among other users. And you can even coordinate with friends in real time to make a “shortlist” when choosing a place to eat.
And it’s adding augmented reality directions
If it works accurately and reliably, this will be a huge help for people navigating a new city. Point your camera in a direction, and Google will pair AI with Street View data to give you an interactive, AR turn-by-turn experience while you’re on the move. There’s even a cute little fox to help keep you on course.
Google Assistant is coming to Google Maps
Google Assistant is coming to Google Maps, available on iOS and Android this summer. The addition is meant to provide better recommendations to users. Google has long worked to make Maps seem more personalized, but since Maps is now about far more than just directions, the company is introducing new features to give you better recommendations for local places.
The Maps integration also combines the camera, computer vision technology, and Google Maps with Street View. With the camera/Maps combination, it really looks like you’ve jumped inside Street View. Google Lens can identify things like buildings, or even dog breeds, just by pointing your camera at the object in question. It will also be able to identify text.
Maps is one of Google’s biggest and most important products. There’s a lot of excitement around augmented reality — you can point to phenomena like Pokémon Go — and companies are just starting to scratch the surface of the best use cases for it. Figuring out directions seems like such a natural use case for a camera, and while it was a bit of a technical feat, it gives Google yet another perk for its Maps users to keep them inside the service and not switch over to alternatives. Again, with Google, everything comes back to the data, and it’s able to capture more data if users stick around in its apps.
Smart Displays with Google Assistant coming this summer
Amazon’s Echo Show is about to face some competition from similar devices running Google’s software. Google announced that the first Smart Displays with Assistant built in will begin shipping in July. A demonstration on stage showed a display pulling up Jimmy Kimmel Live on YouTube TV, just one example of content that Google can offer and Amazon can’t (you know, since the companies still hate each other).
Google Assistant and YouTube are coming to Smart Displays
Smart Displays were the talk of Google’s CES push this year, but we haven’t heard much about Google’s Echo Show competitor since. At I/O, we got a little more insight into the company’s smart display efforts. Google’s first Smart Displays will launch in July, and of course will be powered by Google Assistant and YouTube. It’s clear that the company’s invested some resources into building a visual-first version of Assistant, justifying the addition of a screen to the experience.
Users are increasingly getting accustomed to the idea of a smart device sitting in their living room that will answer their questions. But Google is looking to create a system where a user can ask a question and then get some kind of visual display for actions that just can’t be resolved with a voice interface. Google Assistant handles the voice part of that equation, and YouTube is a natural service to pair with it.
Google Photos gets even smarter editing powers with AI boost
Google Photos is gaining new features like the ability to separate subjects from the background in photos and pop the color or turn the background black and white. Photos can also now colorize your older photos — even if they weren’t shot in color to begin with. Both of these capabilities use AI. And when you’re just swiping through your gallery, Photos will analyze your pictures and make recommendations for quick fixes like “fix brightness.”
Google News — now curated by AI
Google’s news app is being overhauled and its editorial focus is now powered largely by AI. The company says “it uses artificial intelligence to analyze all the content published to the web at any moment, and organize all of those articles, videos, and more into storylines. It spots the ones you might be interested in and puts them in your briefing.” News will also deliver “a range of perspectives” to bring you a little bit outside your bubble.
Google News gets an AI-powered redesign
Watch out, Facebook. Google is also planning to leverage AI in a revamped version of Google News. The AI-powered, redesigned news app will “allow users to keep up with the news they care about, understand the full story, and enjoy and support the publishers they trust.” It will borrow elements from Google’s digital magazine app Newsstand and from YouTube, and it introduces new features like “newscasts” and “full coverage” to help people get a summary or a more holistic view of a news story.
Facebook’s main product is literally called “News Feed,” and it serves as a major source of information for a non-trivial portion of the planet. But Facebook is embroiled in a scandal over the personal data of as many as 87 million users ending up in the hands of a political research firm, and there are a lot of questions over Facebook’s algorithms and whether they surface legitimate information. That’s a huge hole that Google could exploit by offering a better news product and, once again, lock users into its ecosystem.
Google unveils ML Kit SDK for iOS and Android
ML Kit, an SDK that makes it easy to add AI smarts to iOS and Android apps.
Google unveiled ML Kit, a new software development kit for app developers on iOS and Android that allows them to integrate pre-built, Google-provided machine learning models into apps. The models support text recognition, face detection, barcode scanning, landmark recognition, and image labeling.
Machine learning tools have enabled a new wave of use cases built on top of image recognition or speech detection. But even though frameworks like TensorFlow have made it easier to build applications that tap those tools, it can still take a high level of expertise to get them off the ground and running. Developers often figure out the best use cases for new tools and devices, and development kits like ML Kit lower the barrier to entry, giving developers without deep machine learning expertise a playground to start figuring out interesting applications.
Google Lens can copy text from the real world into your phone
This is something Google has demonstrated before, but now the feature sounds ready and is actually coming to Google Lens. You’ll be able to point your phone’s camera at text in the real world (say, a written-down Wi-Fi password), grab that text, and paste it into a text field on your smartphone.
And now it’s also going to help you buy fashionable things. Google Lens still isn’t perfect at identifying precise items of clothing, but Google thinks it can get close enough. The company is introducing a new “style match” feature that will scan something the camera is pointed at and help you buy it from internet retailers.
Android P brings a new look, gesture navigation, and digital wellbeing tools
Goodbye, three-button navigation. Hello, digital wellbeing Dashboard. Android P shakes up a lot of what we’ve come to know about Google’s mobile OS. It’s got a refreshed look. Key interactions like switching apps are now accomplished through iPhone X-like gestures. And there’s a new Dashboard that’s meant to plainly show “how you’re spending time on your device, including time spent in apps, how many times you’ve unlocked your phone, and how many notifications you’ve received.” You can even set time limits for individual apps if you want to cut back on compulsively staring at your phone at every moment of downtime.
Android P is coming later this summer, but a public beta is available today for a handful of smartphones from Google, Essential, Sony, Nokia, and more.
Android P also comes with many new features you haven’t seen before, like rotation confirmation, a redesigned crash dialog, improved work profiles, and more.
Android P also ships with an improved battery saver known as Adaptive Battery.
Google I/O also marked a big change to how we adjust the volume on our smartphones: the hardware buttons now change the media volume alone.
So when will you be able to actually play with all these new features? The Android P beta is available now, and you can find the upgrade here. Stay tuned for more tech news.