The past three months, it seems, have been a nonstop parade of major tech events and the new product announcements that come with them.
One of the latest was last week’s AWS re:Invent, where Amazon announced a new suite of AI-enabled technology designed with businesses in mind.
And while these new products, features, and tools come with a host of opportunities for the developers, marketers, and others who plan to use them, they also raise a few questions. What are they designed to do? How can they help you? How do they work?
There’s also one less-official question that occurred to me as I learned of these developments. Is Amazon trying to creep into Google’s territory?
Let’s take a look at some of these AI announcements from AWS re:Invent and dig deeper into just what they mean.
AWS re:Invent is what Amazon describes as “a learning conference” produced by Amazon Web Services — that’s what AWS stands for. The intended audience is what it calls “the global cloud computing community,” but the event features content for anyone who wants to learn how cloud technology can help grow and scale a business. Topics range from AdTech to content delivery to the internet of things, with learning opportunities that span keynotes to certification sessions.
While a handful of new capabilities were unveiled by AWS, there are four that I’d like to focus on (with a bit of teaser text below on what each one is capable of doing):
If some of these sound familiar — or seem reminiscent of similar capabilities previously made somewhat famous by a certain search engine giant — chances are, it’s because they are familiar. I’ll delve deeper into this below, but many of these capabilities echo areas where Google has long been regarded as a leader, especially in the realm of translation. After all, who could forget this jaunty promotional video on the topic?
The reason I want to establish the difference between machine learning and deep learning (and what each of them is, in the first place) is that most of these capabilities use one or both.
Machine learning essentially describes the ability of a machine to learn things — habits, language, behaviors, and patterns, to name a few — without having been programmed to do so or pre-loaded with that knowledge. It doesn’t describe artificial intelligence in its entirety; rather, it’s one very important AI capability.
Deep learning is a type of machine learning — and is a bit trickier to explain. Basically, it takes the next (big) step in that it’s designed to imitate the way the human brain works by way of something called neural networks. In our brains, we have biological neural networks in which the action of one neuron creates a series of subsequent actions that ultimately result in our different behaviors.
In technology, artificial neural networks seek to replicate that phenomenon by becoming “trained” to comprehend data based on a certain set of criteria. One of the more notable examples of deep learning in practice is in image recognition, in which neural networks are able to recognize different data patterns or cues to learn what, for example, a cat looks like.
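To make the idea of "training" concrete, here's a deliberately tiny sketch: a single artificial neuron (a perceptron) that learns to recognize a pattern — that both of its inputs must be 1 — purely by adjusting its weights from labeled examples. This toy is my own illustration of the general principle, not how AWS's (far larger) deep learning models are actually built.

```python
# A toy illustration of how an artificial neuron "learns" a pattern:
# a single perceptron trained to recognize the logical AND pattern.

def step(x):
    """Activation: fire (1) if the weighted input clears the threshold."""
    return 1 if x > 0 else 0

def train_perceptron(samples, epochs=20, lr=0.1):
    """Nudge weights toward correct answers over repeated passes."""
    w = [0.0, 0.0]
    bias = 0.0
    for _ in range(epochs):
        for (x1, x2), label in samples:
            prediction = step(w[0] * x1 + w[1] * x2 + bias)
            error = label - prediction
            # Each mistake adjusts the weights slightly -- this is "training"
            w[0] += lr * error * x1
            w[1] += lr * error * x2
            bias += lr * error
    return w, bias

# Labeled data: the neuron should learn that both inputs must be 1
and_samples = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
weights, bias = train_perceptron(and_samples)
predictions = [step(weights[0] * x1 + weights[1] * x2 + bias)
               for (x1, x2), _ in and_samples]
print(predictions)  # [0, 0, 0, 1]
```

Real deep learning stacks many layers of such units, but the core loop — predict, compare to the label, adjust — is the same idea.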
Amazon Comprehend uses something called natural language processing (NLP) to better understand and determine the meaning within text. It’s an instance of machine learning, in which the technology learns how to comprehend — if you will — and process language as it was intended by the human being speaking or writing it.
Comprehend performs this capability with a series of steps:
So, how does this technology apply to the real world? Well, it’s particularly helpful in an instance of, say, analyzing written customer feedback. By feeding these comments to an API like Comprehend, marketers can use this technology to synthesize data from their audiences to determine something like thematic areas of improvement.
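As a sketch of that feedback-analysis workflow: Comprehend's `DetectSentiment` API (exposed in Python via `boto3.client("comprehend")`) returns a `Sentiment` label per document, which you can tally across a batch of comments. Since the real call needs AWS credentials, a stand-in client with a naive keyword "model" is substituted below; everything else — the call signature and response key — follows Comprehend's documented shape.

```python
# Sketch: gauging overall sentiment of customer feedback with a
# Comprehend-style client. With AWS credentials you would pass
# boto3.client("comprehend") instead of the FakeComprehend stand-in.

from collections import Counter

def summarize_feedback(comments, client):
    """Tally the dominant sentiment across a batch of comments."""
    tally = Counter()
    for text in comments:
        # Mirrors Comprehend's DetectSentiment request/response shape
        response = client.detect_sentiment(Text=text, LanguageCode="en")
        tally[response["Sentiment"]] += 1
    return tally

class FakeComprehend:
    """Stand-in for boto3.client('comprehend'): naive keyword 'model'."""
    def detect_sentiment(self, Text, LanguageCode):
        sentiment = "POSITIVE" if "love" in Text.lower() else "NEGATIVE"
        return {"Sentiment": sentiment}

feedback = [
    "I love the new dashboard!",
    "Checkout keeps timing out.",
    "Love the support team.",
]
print(summarize_feedback(feedback, FakeComprehend()))
```

Swapping the fake for a real boto3 client turns this into an actual pipeline; the aggregation logic doesn't change.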
Simply put, DeepLens is a high definition video camera that was designed with developers in mind. It was built with deep learning capabilities and what AWS Chief Evangelist Jeff Barr describes as “pre-trained models for image detection and recognition.”
In other words, it’s a very smart camera: one that can recognize objects, faces, motions, and creatures (e.g., a dog from a cat). And while that’s very cool — not to mention, somewhat reminiscent of the recently-announced Google Clips camera — there’s a reason why it could prove so helpful to businesses.
To start, DeepLens comes with a number of “templates,” or recognition technologies that users can build upon for their own projects. Object and action recognition, for example, can help to more seamlessly create something like product tutorials or demonstrations, by developing a system or algorithm that learns to recognize how the two are paired for different outcomes.
For example, if you’re demonstrating how a certain cooking appliance can be applied to different scenarios, it seems that DeepLens can be utilized in building a system to recognize the appliance itself (like a standing mixer), the actions the user can take with it (like mixing cake batter), and the resulting outcome (a delicious cake).
Source: Amazon Web Services
Anyone with a journalism background is more than familiar with the headache (ahem, joy) of transcribing spoken interviews. We want to get it just right, be sure not to misquote the interviewee, and communicate what was said in the right context.
If only, back in my earliest days of reporting, there had been advanced transcription services available to the common writer.
But now, there’s Transcribe: an AWS service that uses machine learning technologies to recognize the spoken word and transcribe it into text.
While the function itself is fairly intuitive, the benefits might not be. So let’s lay out two instances where technology like this can be applied to a marketer’s world:
These are only two of the more prominent examples of how such technology could be applied, but there are many more, from transcribing podcasts, to documenting notes from an important meeting.
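For the curious, a completed Transcribe job delivers its results as a JSON document. Here's a minimal sketch of pulling the usable text out of one; the structure of the sample payload is modeled on Transcribe's documented output format (the job name is a made-up example).

```python
# Minimal sketch: extracting the transcript text from an Amazon
# Transcribe result document. The sample below mirrors the JSON
# structure a completed transcription job writes to S3.

import json

def extract_transcript(job_json):
    """Return the full transcript text from a Transcribe result document."""
    data = json.loads(job_json)
    transcripts = data["results"]["transcripts"]
    return " ".join(t["transcript"] for t in transcripts)

# Sample payload in the shape Transcribe produces (job name is illustrative)
sample = json.dumps({
    "jobName": "customer-call-042",
    "results": {
        "transcripts": [{"transcript": "Thanks for calling. How can I help?"}]
    }
})
print(extract_transcript(sample))
```

From there, the text can flow into whatever you like — a blog draft, meeting notes, or even the Comprehend-style sentiment analysis discussed earlier.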
This development might be my favorite.
Around here, we talk a great deal about approaching marketing with a global mindset. While I might use JetBlue as a remarkable example of marketing, it might not resonate as much with audiences in countries where this airline doesn’t operate.
To put it simply, the internet is a global, international destination. The people reading your content might not regularly engage with the same brands you do, and they might not speak the same language.
That’s why a growing number of developers and marketers are building a multilingual web presence — one where their online properties and content can be seamlessly viewed in the language preferred by the user. It’s a trend that, as HubSpot’s own global presence continues to grow, I take inordinate glee in seeing.
It’s also why I love seeing tools become available that make it easier to approach marketing with a “global first” mindset. Translate is one such tool: a service that uses machine learning to more naturally translate text from one language to another.
Here’s a look at how it worked when translating a French paragraph to English:
Source: Amazon Web Services
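In code, that French-to-English translation maps onto Translate's `TranslateText` API, available in Python via `boto3.client("translate")`. The sketch below keeps the real call signature and response key (`TranslatedText`) but injects a stand-in client with a tiny hard-coded phrasebook, so it runs without AWS credentials.

```python
# Sketch: translating text with an Amazon Translate-style client.
# With AWS credentials, pass boto3.client("translate") instead of
# the FakeTranslate stand-in; the call shape is the same.

def translate_paragraph(text, client, source="fr", target="en"):
    """Translate text between languages using a Translate-style client."""
    response = client.translate_text(
        Text=text,
        SourceLanguageCode=source,
        TargetLanguageCode=target,
    )
    return response["TranslatedText"]

class FakeTranslate:
    """Stand-in client backed by a tiny hard-coded phrasebook."""
    PHRASEBOOK = {"Bonjour le monde": "Hello, world"}

    def translate_text(self, Text, SourceLanguageCode, TargetLanguageCode):
        # Fall back to the original text when no translation is known
        return {"TranslatedText": self.PHRASEBOOK.get(Text, Text)}

print(translate_paragraph("Bonjour le monde", FakeTranslate()))
# prints "Hello, world"
```

The dependency-injection pattern here is deliberate: it lets you unit-test translation logic without touching the network, then swap in the real client in production.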
So, here’s the million-dollar question: Is Amazon creeping into Google’s territory?
It’s not the definitive answer I hoped to have, but in these early release days, it might be too soon to tell. While most of Google’s headline-making AI developments are largely consumer-centric (like the previous example of Clips), it is true that the company has been working on its own stack of machine learning capabilities for businesses. Look no further than Google.ai, for example, where the mission is to bring “the benefits of AI to everyone” — including, I assume, marketers.
This is only the beginning.
What are you most excited about? What confuses or scares you, and what fills you with delight? Feel free to weigh in with your thoughts on these AWS AI developments on Twitter, or let me know if you have any questions about them.