Each year, bona fide tech geeks and enthusiasts gather or tune in for one of the biggest events in tech: Google I/O, the search giant’s annual developer conference.
It’s a learning opportunity for many, with sessions and talks creating what Google describes as “an immersive experience focused on exploring the next generation of tech.”
But it’s the annual opening keynote that commands the most attention. That’s when the company’s leadership, from the CEO to various VPs, unveils and describes Google’s newest technologies, devices, and product features.
If you missed this year’s opening keynote, fear not: We’ve got you covered with the nine biggest announcements from it. And each month, we’ll continue to bring you a digest of what big Google news you may have missed. So read on — and stay tuned.
Anyone else remember this video from July 2015?
As “La Bamba” plays in the background, mobile device cameras hover over various words that are then translated into another language. It was a preview of something huge — something that’s finally come to fruition: Google Lens.
There are those moments when you see something that you don’t recognize — like a bird or plant, or perhaps a new cafe somewhere — but can’t identify specifically what it is. Now, with Google Lens, all you have to do is point your camera at it to get the details you want. Check out this super short video to see how that works with a storefront:
But it doesn’t stop with plant species and restaurant information. With this technology, you can also join a home Wi-Fi network by hovering the camera over the name and password. From there, you’ll be prompted with the option to connect automatically.
According to TechCrunch, Lens will be integrated with Google Assistant — “users will be able to launch Lens and insert a photo into the conversation with the Assistant, where it can process the data the photo contains.” That’s a pretty concise summary of what the Lens technology is able to do: understand what a photo means. During the keynote, Google’s VP of Engineering, Scott Huffman, used the example of being able to add concert information to your calendar by taking a Lens photo of the marquee.
Anyone who’s ever undertaken a job search knows that there’s an overwhelming number of outlets where openings are listed. “Wouldn’t it be nice,” many job seekers have asked, “if all of this information were readily available in one central place?”
Ask, and ye shall receive. Google set out to synthesize job listings from a number of posting sites — as it’s wont to do, after all — and display them within search results. From there, writes Jessica Guynn for USA Today, “job hunters will be able to explore the listings across experience and wage levels by industry, category and location, refining these searches to find full or part-time roles or accessibility to public transportation.”
Google for Jobs addresses “the challenge,” said Google CEO Sundar Pichai during the keynote, “of connecting job seekers to better information on job availability.” It helps to make the application process that much more seamless, by pulling listings from both third-party boards and employers, and sending users who find a listing that interests them directly to the site where they can apply for it.
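For site owners curious what “pulling listings from employers” looks like in practice, Google’s job search relies on structured data: postings annotated with schema.org’s JobPosting vocabulary, embedded in the page as JSON-LD. The snippet below builds a minimal example in Python; the company, role, and dates are invented, and the exact set of fields Google requires may differ, so treat this as an illustrative sketch rather than a complete spec.

```python
import json

# Illustrative job posting using schema.org's JobPosting vocabulary.
# The company name, title, and location below are made up.
job_posting = {
    "@context": "https://schema.org/",
    "@type": "JobPosting",
    "title": "Content Marketing Manager",
    "datePosted": "2017-05-17",
    "employmentType": "FULL_TIME",
    "hiringOrganization": {
        "@type": "Organization",
        "name": "Example Co.",
    },
    "jobLocation": {
        "@type": "Place",
        "address": {
            "@type": "PostalAddress",
            "addressLocality": "Boston",
            "addressRegion": "MA",
        },
    },
}

# Serialize for embedding in a <script type="application/ld+json"> tag.
json_ld = json.dumps(job_posting, indent=2)
print(json_ld)
```

Markup like this is what lets an aggregator map wildly different career pages onto the same filters — location, employment type, and so on — without scraping free-form HTML.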
Artificial intelligence (AI) is one of those inevitably cool areas of technology that’s talked about by many, but thoroughly understood by — or available to — few. That was part of the motivation behind the launch of Google.ai, or what TechCrunch describes as an “initiative to democratize the benefits of the latest in machine learning research.”
In a way, the site serves as a centralized resource for much of Google’s work in the realm of AI, from news and documentation on its latest projects and research, to opportunities to “play with” some of the experimental technology. Much like the open source software TensorFlow, which allows aspiring AI developers to create new applications, a major point of Google.ai is open access to the documentation that helps professionals from a variety of industries — like medicine and education — use AI to improve the work they do.
Some of the features announced during the I/O opening keynote either require or are heavily enhanced by Google Assistant — technology that previously wasn’t available to iPhone users. Now, that’s all changed. Google Assistant is, in fact, at the disposal of iPhone users, and available for download in the App Store.
Many are comparing the iOS version of Google Assistant to a slightly better, if still underwhelming, version of Siri. We took it for a spin, and here’s how it went:
Not bad, though it might require a bit more tinkering to discover all of its features. Its biggest advantage over Siri, writes Romain Dillet for TechCrunch, is its ability to let users “ask more complicated queries,” as well as its third-party integrations and connected device control capabilities.
A number of new features available on Google Home were also unveiled during the I/O opening keynote — here are the ones that stood out.
Google recently announced that Google Home gained voice recognition capable of distinguishing one user’s commands from another’s. That technology now powers its new hands-free calling feature: link your mobile phone to your Google Home profile, and you can ask the device to call any U.S. or Canadian landline or mobile phone. And because of that voice recognition, it knows whose mother to call when you say, “Call Mom.”
Like the best human personal assistants, Google Home can now proactively bring important things to your attention without having to be asked. For example, if your next meeting requires a commute and traffic is bad, the device will suggest leaving a bit earlier. (Google Calendar users might recognize this feature from the more primitive “leave at X:00 to arrive on time” mobile alerts.)
They say that “a picture is worth a thousand words” — because sometimes, information is better explained visually than verbally. Now, Google Home can do that, by redirecting a visual response to your mobile device or TV (via Chromecast). So if you ask the device for directions, for example, they’ll be sent directly to your phone.
Android O is a new version of the Android operating system which, while nothing too fancy, “focuses mostly on the nuts and bolts of making the software work better, faster and save battery,” according to CNET.
The publication does a nice job of breaking down the most important features of the new operating system, but to us, there’s one major highlight: picture-in-picture. We’ve all had those moments when we’re watching a video on YouTube and realize there’s something else we’re supposed to be doing. Now, with Android O, instead of having to exit the app, you can just press the home button and the video will collapse into a smaller, movable window that continues playing while you attend to your other task.
When you’re lost, or can’t figure out how to get somewhere, GPS has been there to save us. But what about misplaced objects, like our keys, headphones, or sunglasses?
Now, there’s technology for that: the Visual Positioning Service, or VPS. Using Google’s Tango augmented reality (AR) platform, it’s a “mapping system that uses augmented reality on phones and tablets to help navigate indoor locations,” writes Raymond Wong for Mashable, using the example of holding up a Tango-enabled phone in a large warehouse store to locate a specific product.
One of the best parts of VPS, Google noted, is its potential to help visually impaired individuals find their way around places that have historically been difficult to navigate.
When we return from vacation, one of the most daunting tasks is sifting through and responding to the deluge of emails that came in while we were out. Of course, there’s always the option of indicating to senders via auto-response that you’ll be deleting everything when you come back. But for those occasional urgent emails that arrive during our time of leave, many of us long for a more automated way to address them.
Now, there’s Smart Reply for that: a new Gmail feature that uses machine learning to suggest quick responses based on the text of the email you received. Here’s a look at how it works:
Right now, it’s only available in Inbox by Gmail and Allo, but according to Google’s official blog, the technology is slated to “roll out globally on Android and iOS in English first, and Spanish will follow in the coming weeks.”
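To make the concept concrete, here’s a toy sketch of the idea behind suggested replies: score a handful of canned responses against the incoming message and surface the best matches. Google’s actual system uses trained neural networks over far richer signals; the keyword-overlap ranker, the canned replies, and the keyword sets below are all invented for illustration.

```python
# Toy reply suggester: each canned reply is paired with keywords that,
# when found in the incoming email, make that reply a good candidate.
CANNED_REPLIES = {
    "Sounds good, see you then!": {"meet", "meeting", "lunch", "tomorrow"},
    "Thanks, I'll take a look.": {"attached", "document", "review", "draft"},
    "Congrats!": {"promotion", "news", "announce", "won"},
}

def suggest_replies(email_text, max_suggestions=3):
    """Return canned replies ranked by keyword overlap with the email."""
    words = set(email_text.lower().split())
    scored = [
        (len(words & keywords), reply)
        for reply, keywords in CANNED_REPLIES.items()
    ]
    # Keep only replies that matched at least one keyword, best first.
    scored = sorted((item for item in scored if item[0] > 0), reverse=True)
    return [reply for _, reply in scored[:max_suggestions]]

print(suggest_replies("Want to grab lunch tomorrow after our meeting"))
```

Even this crude version captures the core trade-off: suggestions must be short, broadly applicable, and only shown when the model is reasonably confident they fit.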
Google is no stranger to the world of VR. It started with Cardboard, some might say, and expanded into more advanced and expensive headsets. Now, in partnership with HTC and Lenovo, Google is developing its first standalone VR headset.
What does that mean, exactly? Previously, becoming fully immersed in Google’s VR experiences required the power of a computer or smartphone. Now, using something called WorldSense technology, these new standalone headsets can “track your precise movements in space,” according to VRScout, “without any external sensors to install.”
We’ll be keeping an eye on all things Google, including the rest of the big announcements from I/O 2017. Next month, we’ll bring you those top news items, algorithm updates, and other trends that can aid your marketing.
Until then, enjoy those May flowers — we’ll see you in June.
Which I/O announcements are you most excited about? Let us know in the comments.