Voice assistants, search and the future of advertising

Over the past few years, voice-activated search has come a long way.

When Apple first integrated its voice assistant, Siri, into the iPhone 4S in 2011, it was considered more of a gimmick than anything else. Six years on, a report by ClickZ and Marin Software reveals that 7% of marketers now rank voice search and digital assistants as top priorities in their marketing plans.

Interestingly, 4% of marketers surveyed in the same report also stated that they would be prioritising ‘smart hubs’ in 2017.

Since the launch of Amazon’s Alexa, so-called ‘smart hubs’ have grown in popularity with consumers. Indeed, consumers increasingly expect these devices to be part of their ‘connected’ homes.

As AI technology gets smarter, it’s evident that we are shifting into a voice-led revolution. comScore predicts that by 2020, 50% of all searches will be voice searches, and Google’s recent statistics show that 83% of people surveyed agreed that voice search will make it easier to search for things anytime they want.

Speaking to a machine may have felt unnatural and futuristic only a few years ago, but consumers are now embracing the revolution. Smart hubs have championed the growing possibilities of search and have become genuine channels for daily activities: consumers are excited and impressed by the speed and efficiency with which these devices help them complete day-to-day tasks.

With this in mind, it’s clear that there is potential for advertisers and brand marketers to make use of voice assistants.

The opportunity for marketers and advertisers

In terms of search functionality, marketers need to be aware of the varying capabilities of each smart hub on the market, as each one works slightly differently and is powered by a different search engine. With each brand’s product portfolio continuously growing, this becomes even more of a challenge.

Amazon’s Echo, which has been on the market the longest, operates with Bing, whereas Google Home relies on Google to answer questions. Apple’s highly anticipated HomePod, due out in December, will have Siri integrated into the device.

The capabilities of each search engine vary, and for marketers these characteristics are crucial in deciding how their brands can attract the right attention.

We should remember that marketers are still testing the waters on how smart hubs can be implemented in marketing plans in the most seamless way. After all, as these voice assistants become part of a consumer’s connected home – and sit at the centre of the family – it’s natural that consumers may be slightly reticent about inviting advertisers and brands into this personal space.

This was certainly the case for Google, which was immediately hit with criticism after playing what sounded like an advert for Disney’s Beauty and the Beast film during Google Home’s ‘What’s My Day Like?’ feature.

Similarly, there was disdain after Amazon introduced sponsored audio messages before and after conversations with Alexa. It’s inevitable that there will eventually be paid opportunities on voice assistants, but platforms will need to integrate these messages in a way that doesn’t interfere with the user experience.

How brands and marketers are tapping in

Voice assistants are now part of the omnichannel consumer experience. If used correctly, they are an effective – and natural – conduit between consumer and brand.

Burger King’s ‘Whopper’ TV advert caused a stir by hijacking Google Home devices, prompting the speakers to search for the definition of the Whopper burger. Even so, it won a Grand Prix at this year’s Cannes Lions and helped the brand win overall Creative Marketer of the Year.

This nifty hack was hailed as ‘the best abuse of technology’ for generating a direct response between consumer and company, and it sparked conversation and awareness around the brand and campaign.

This was clearly a stunt ad, and not a long-term use of the voice-activated technology. However, its success highlights the opportunities available to advertisers – and the interest from consumers – in engaging with this technology.

Could this be a sign that the future of advertising and marketing is heading in the direction of voice search?

So, what could the future look like? At mporium, we know that many marketers have mastered search-based advertising, and are reaping the rewards. Soon, we could see brands bidding for the top spots on voice-activated results.

We may even see brands collaborating with technology companies to integrate special offers delivered via voice assistants, or to suggest alternative solutions to specific queries.

What the future holds remains to be seen. However, it’s clear that as the technology behind voice-activated search progresses, marketers will find ways to adapt to the new search reality that presents itself in the form of voice assistants.

Source: marketingtechnews.net; 4 Sep 2017

The Next Generation of Emoji Will Be Based on Your Facial Expressions

There’s no faking your feelings with these social icons.

An app called Polygram uses AI to automatically capture your responses to friends’ photos and videos.

A new app is trying to make it simpler to react to photos and videos that your friends post online—it uses AI to capture your facial expressions and automatically translate them into a range of emoji faces.

Polygram, which is free and for now available only on the iPhone, is a social app that lets you share things like photos, videos, and messages. Unlike Facebook, which gives you a small range of pre-set reactions beyond the little thumbs-up icon, Polygram uses a neural network running locally on the phone to figure out whether you’re smiling, frowning, bored, embarrassed, surprised, and more.

Marcin Kmiec, one of Polygram’s cofounders, says the app’s AI works by capturing your face with the front-facing camera on the phone and analysing sequences of images as quickly as possible, rather than just looking at specific points on the face like your pupils and nose. This is done directly on the phone, using the iPhone’s graphics processing unit, he says.
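
Polygram hasn’t published its model, but the approach Kmiec describes (classifying short sequences of frames on the device, rather than isolated facial landmarks) can be sketched in miniature. The Python/PyTorch snippet below is purely illustrative; the label set, window size and architecture are assumptions, not Polygram’s implementation.

```python
# Minimal sketch of on-device expression classification over frame
# sequences, NOT Polygram's actual model. Labels, window size, and
# architecture are illustrative assumptions.
import torch
import torch.nn as nn

EXPRESSIONS = ["neutral", "smiling", "frowning", "bored",
               "embarrassed", "surprised", "laughing"]  # assumed label set

class ExpressionNet(nn.Module):
    """Tiny CNN that scores a short clip of grayscale face crops.

    Stacking WINDOW frames in the channel dimension lets the network
    see motion across the sequence, echoing the 'sequences of images'
    idea rather than single-frame landmark analysis.
    """
    WINDOW = 8  # frames per clip (assumption)

    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(self.WINDOW, 16, kernel_size=3, stride=2, padding=1),
            nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=3, stride=2, padding=1),
            nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.classifier = nn.Linear(32, len(EXPRESSIONS))

    def forward(self, clip: torch.Tensor) -> torch.Tensor:
        # clip: (batch, WINDOW, height, width) of normalized face crops
        return self.classifier(self.features(clip).flatten(1))

if __name__ == "__main__":
    model = ExpressionNet().eval()
    fake_clip = torch.rand(1, ExpressionNet.WINDOW, 64, 64)  # stand-in frames
    with torch.no_grad():
        probs = torch.softmax(model(fake_clip), dim=1)
    print(EXPRESSIONS[int(probs.argmax())])
```

Stacking the frames in the channel dimension is the simplest way to let a small network see motion across a sequence; on an iPhone, the equivalent inference would typically be exported to a GPU-backed runtime such as Core ML or Metal, in line with Kmiec’s description.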

When you look at a post in the app (for now the posts seem to consist of a suspicious number of luxury vacation spots, fancy cars, and women in tight clothing), you see a small yellow emoji at the bottom of the display, its expression changing along with your real one. There’s a slight delay—20 milliseconds, which is just barely noticeable—between what your face is expressing and what shows up in the app. The app records your response (or responses, if your expression changes a few times) in a little log of emoji on the side of the screen, along with those of others who’ve already looked at the same post.

The app is clearly meant to appeal to those who really care about how they’re perceived on social media: users can see a tally of the emoji reactions to each photo or video they post to the app, as well as details about who looked at the post, how long they looked at it, and where they’re located. This might be helpful for some mega-users, but could turn off those who are more wary about how their activity is tracked, even when it’s anonymized.

And, as many app makers know, it’s hard to succeed in social media; for every Instagram or Snapchat there are countless apps that fail to catch on. (Remember Secret? Or Path? Or Yik Yak? Or Google+?) Polygram’s founders say they’re concentrating on using the technology in their own app for now, but they also think it could be useful in other kinds of apps, like telemedicine, where it could be used to gauge a patient’s reaction to a doctor or nurse, for instance. Eventually, they say, they may release software tools that let other developers come up with their own applications for the technology.

Source: technologyreview.com; 28 August 2017

Here’s What You Need to Know About Voice AI, the Next Frontier of Brand Marketing

67 million voice-assisted devices will be in use in the U.S. by 2019

Soon enough, your breakfast bar could be your search bar. Your lamp could be how you shop for lightbulbs. Your Chevy or Ford might be your vehicle for finding a YouTube video, like the classic SNL skit of Chevy Chase’s send-up of President Gerald Ford, to make a long drive less tedious. And you won’t have to lift a finger—all you’ll need to do is turn toward one of those inanimate objects and say something. Welcome to a future where your voice is the main signal for the elaborate data grid known as your life.

Two decades ago, when Amazon and Google were founded, only a seer could have predicted that those companies would eventually start turning the physical world into a vast, voice-activated interface. Artificial intelligence-powered voice is perhaps what makes Amazon and Google the real duopoly to watch (sorry, Facebook), as their smart speakers—the 2-year-old Amazon Echo and the 8-month-old Google Home—are gaining traction. Forty-five million voice-assisted devices are now in use in the U.S., according to eMarketer, and that number will rise to 67 million by 2019. Amazon Echo, which utilizes the ecommerce giant’s voice artificial intelligence called Alexa, owns roughly 70 percent of the smart speaker market, per eMarketer.

“Our vision is that customers will be able to access Alexa whenever and wherever they want,” says Steve Rabuchin, vp of Amazon Alexa. “That means customers may be able to talk to their cars, refrigerators, thermostats, lamps and all kinds of devices in and outside their homes.”

While brand marketers are coming to grips with a consumer landscape where touch points mutate into listening points, search marketing pros are laying important groundwork by focusing on what can be done with Amazon Echo and Google Home (the latter of which employs a voice AI system called Assistant). With voice replacing fingertips, search is ground zero right now when it comes to brands.

But how will paid search figure into it all?

Gummi Hafsteinsson, Google Assistant product lead, says that, for now, the goal is to create a personalized user experience that “can answer questions, manage tasks, help get things done and also have some fun with music and more. We’re starting with creating this experience and haven’t shared details on advertising within the Assistant up to this point.”

While Hafsteinsson declined further comment about ads, agencies are now preparing for them. “In the near term, [organic search] is going to be the way to get your brands represented for Google Home,” says 360i president Jared Belsky, who points to comScore data that forecasts 50 percent of all search will be via voice tech by 2020. “Then ultimately, the ads auction will follow. You’ll be bidding to get your brand at the top of searches. I believe that’s the way it will go. Think about it—it has to.”

Jeremy Lockhorn, vp of emerging media at SapientRazorfish, remarks, “The specificity of voice search combined with what any of the platforms are already able to surmise about your specific context [such as current location] should ultimately result in more personalized results—and for advertisers, more narrowly targeted.”

Brands—which are accustomed to being relatively satisfied when showing up in the top five query results on desktops and phones—should brace for a new search reality, Belsky warns, where they may have to worry about being in either the first or second voice slot or else risk not being heard at all. “There’s going to be a battle for shelf space, and each slot should theoretically be more expensive,” he says. “It’s the same amount of interest funnelling into a smaller landscape.”

Scott Linzer, vp of owned media at iCrossing, adds that consumers may not “find it an inviting or valuable-enough experience to listen to two, three or more search results.”

Marketer interest in voice search is downright palpable in some circles. Fresh off a tour of client visits in Miami, Dallas, Chicago, Washington, D.C., and St. Louis, 360i’s Belsky reports that “every CMO, every vp of marketing and, especially, every ecommerce client is asking about this subject first and foremost. And they have three questions. ‘What should I do to prepare for when voice is the driver of ecommerce?’ The second one is, ‘What content do I have to think about to increase my chances to be the preferred answer with these devices?’ And, ‘Will all my search budget one day migrate onto these devices?’ There’s not obvious answers to any of these questions. Being early to all of this means you get the spoils.”

National Public Radio may not seem like an obvious media brand when it comes to being an early AI adopter, but the 47-year-old news organization has moved quickly in the space. Its news, music and storytelling app, NPR One, is infused with machine learning, offering listeners curated options based on their demonstrated preferences. And the organization extended the reach of NPR One’s content by inking a deal with Amazon Alexa in February. Since then, the phrase “Alexa, play NPR News Now” has been regularly heard in 400,000 homes that use the service. Additionally, NPR One has functionalities that its corporate sponsors have appreciated for some time; in particular, it lets brands deliver extended audio messages on mobile when consumers want to hear more.

“They can press a button on the phone to hear more about what [carmaker] Kia is doing around innovation,” says Meg Goldthwaite, NPR’s marketing chief. Is voice-enabled digital advertising—where the listener asks for more Kia content and perhaps even fills out a test-drive form—right around the bend for NPR? “It’s absolutely possible,” she says.

Goldthwaite is probably onto something. Last year, IBM launched Watson Ads, which lets viewers “talk” with a brand’s ad and request additional info. Toyota, Campbell Soup and Unilever have tested the units, often averaging between one and two minutes of engagement, per Big Blue. “We have already begun to see that consumers are spending more time with these cognitive ads than with other digital ads,” says Carrie Seifer, CRO for the IBM Watson content and IoT platform.

Imagine being able to talk to video ads via Amazon Echo Show, a smart screen that comes juiced with Alexa voice functionalities. Costing $230, the device shipped in late June and provides a glimpse into next-gen households of the 2020s with talking screens, holograms, appliances and vehicles that Amazon exec Rabuchin alluded to.

“Voice is a productivity tool,” remarks Linda Boff, CMO of GE. Her company provides an intriguing business-to-business example of how AI-based utilities will soon help enormously expensive products like locomotives, jet engines and turbines sell themselves. For instance, GE has developed a prototype that lets an otherwise insentient locomotive send a voice message to a repair technician describing what needs to be fixed. It’s part of GE’s Digital Twin program, which creates 3-D representations of industrial assets and can process information gathered from individual machines globally to better inform decisions. It’s essentially machine-based crowdsourcing for when trains need to fill up on digital feedback instead of diesel fuel.

“The twin can call you and tell you, ‘I have an issue with my rotor,’” explains Colin Parris, vp, GE Software Research, who reveals that his brand’s voice AI features will roll out widely in 2018. “It can provide basic information that a service rep would want. Like, ‘What was the last journey you took? How many miles did you travel?’”
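
Parris’s quotes suggest the twin is essentially telemetry plus a template for turning fault states into spoken sentences. As a toy illustration only (the sensor fields, threshold and message format below are invented, not GE’s):

```python
# Toy sketch of a "digital twin" composing the spoken status message
# Parris describes. Sensor fields and thresholds are invented.
from dataclasses import dataclass

@dataclass
class LocomotiveTwin:
    unit_id: str
    rotor_vibration_mm_s: float   # invented telemetry field
    last_route: str
    miles_travelled: int

    VIBRATION_LIMIT = 4.5  # invented fault threshold

    def status_message(self) -> str:
        """Compose the text a voice front end would read to a service rep."""
        lines = [f"This is unit {self.unit_id}."]
        if self.rotor_vibration_mm_s > self.VIBRATION_LIMIT:
            lines.append("I have an issue with my rotor.")
        lines.append(f"My last journey was {self.last_route}; "
                     f"I travelled {self.miles_travelled} miles.")
        return " ".join(lines)

if __name__ == "__main__":
    twin = LocomotiveTwin("GE-4412", 6.1, "Denver to Salt Lake City", 532)
    print(twin.status_message())
```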

General Electric’s Digital Twin program has turbines and trains learning how to speak to their customers.

Staples also plans to flip the switch on its b-to-b voice AI initiative early next year. In partnership with IBM Watson, the retailer’s Easy Button—a fixture in its TV spots—will add an intelligent, voice-activated ordering service, which office managers at enterprise clients have already been testing in recent months.

The business supplies chain is practically providing a step-by-step tutorial on how to build an Echo-like device. Teaming with conversational design and software company Layer, Staples first built an AI-anchored chat app and carefully analysed the conversations.

“It helped us narrow down our use cases and create more robust experiences around the core challenges we were trying to solve,” says Ian Goodwin, head of Staples’ applied innovation team. “Once we got a really good idea of what our customers were asking on chat channels, then we built the voice experience with the Easy Button. It was really a gradual ramp-up rather than just going out and doing it.”
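
The “mine the chat logs first” step is easy to picture. Below is a hypothetical sketch of ranking customer intents in chat transcripts to decide which voice use cases to build first; the keyword rules and sample messages are invented, and Staples’ actual tooling with Layer is not public.

```python
# Hypothetical sketch of mining chat transcripts for common intents,
# to decide which voice use cases to build first. Keyword rules and
# sample lines are illustrative, not Staples' actual pipeline.
from collections import Counter

INTENT_KEYWORDS = {              # assumed intent -> trigger phrases
    "reorder":      ["reorder", "order again", "same as last"],
    "order_status": ["where is", "status", "tracking"],
    "price_check":  ["price", "cost", "how much"],
}

def classify(message: str) -> str:
    """Tag a message with the first intent whose keywords it contains."""
    text = message.lower()
    for intent, keywords in INTENT_KEYWORDS.items():
        if any(k in text for k in keywords):
            return intent
    return "other"

def rank_use_cases(transcripts: list[str]) -> list[tuple[str, int]]:
    """Count how often each intent appears, most common first."""
    return Counter(classify(m) for m in transcripts).most_common()

if __name__ == "__main__":
    sample = [
        "Can I reorder the toner from my last order?",
        "Where is my delivery? I need tracking info.",
        "How much are the A4 reams this week?",
        "Please reorder the usual paper.",
    ]
    for intent, count in rank_use_cases(sample):
        print(f"{intent}: {count}")
```

Even rules this crude surface a frequency ranking, which matches Goodwin’s point: find what customers actually ask on chat channels before committing to a voice experience.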

Indeed, voice AI probably shouldn’t be rushed to market. One company that understands that is Trunk Club, a Nordstrom-owned men’s clothier that recently rejected an offer from Amazon to be a fashion partner for Echo. Justin Hughes, Trunk Club’s vp of product development, isn’t AI-averse—he’s hoping to use voice activation in the next 18 months to spur his company’s subscription-based sales. But the timing with Amazon wasn’t right for his brand.

“If you are going to purchase between $300 and $1,000 of clothes, we don’t want it to be a weird experience—we want it to be something you return to often,” Hughes says. “There is so much imperfection in voice AI right now; it can be clunky.”

The vp also pointed to an elephant in the room—data ownership—when it comes to retailers partnering with Amazon or Google. “We just don’t want to give them all of our talk tracks,” Hughes says.

What’s more, hackers in the future might find a way to siphon off data collected from various “listening points” in homes and offices. Just last spring, Burger King caught considerable heat from privacy advocates after its television spot—which, in spite of the brouhaha, won a Cannes Grand Prix—hacked Google Home devices around the country.

The last thing voice-focused marketers want is Uncle Sam on the case. “As in-home, voice-controlled AI technology becomes even more prevalent and evolves in terms of substance—more capable of offering real answers to real questions—marketers will need to be increasingly careful to properly follow FTC disclosure and advertising guidelines,” notes advertising lawyer Ronald Camhi.

And since voice AI could greatly impact search-driven commerce, it’d probably be wise for Amazon and Google to encourage industry best practices. Then again, they might want to form a larger circle that also includes Facebook, Apple, Samsung and Microsoft. Facebook last week purchased AI startup Ozlo and is rumoured to be developing speaker technology of its own. Apple has star power in Siri, and Samsung in late July debuted voice capabilities for its assistant, Bixby. Also, Microsoft will continue to market its AI assistant, Cortana, in hopes of getting a significant piece of voice real estate.

Says Abhisht Arora, general manager of search and AI marketing at Microsoft, “Our focus is to go beyond voice search to create an assistant that has your back.”

A Rube Goldberg apparatus for the marketing history books

What do you get when you jerry-rig six Google Homes, six Alexa Dots, six laptops, six soundproof boxes and four extension cords? You get an apparatus that’d make Back to the Future’s Doc Brown proud.

Along with a bottle of Puerto Rican rum for the technicians, those are all of the ingredients that were poured into 360i’s VSM, or Voice Search Monitor, which is the brainchild of Mike Dobbs, the agency’s long-time vp of SEO. This makeshift system is designed to give clients like Norwegian Cruise Line an edge as smart speakers increasingly influence consumers’ purchase decisions.

VSM asks Google Home and Alexa Dot (Echo’s little sister device) thousands of questions around the clock, and whether each query is answered is automatically recorded in an Excel spreadsheet. An early round of testing found that, for the travel category, Google answered 72 percent of questions while Amazon responded to 13 percent. In a second round of testing on finance queries, Google answered 68 percent of questions while Amazon answered roughly 14 percent.
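
360i hasn’t described VSM’s internals beyond this, but the bookkeeping side is simple to imagine. A hypothetical sketch of tallying answered versus unanswered queries per assistant from such a test log (the log format and field names are invented):

```python
# Hypothetical tally of a VSM-style test log: what share of queries
# each assistant answered. Log format and field names are invented.
import csv
from collections import defaultdict
from io import StringIO

SAMPLE_LOG = """\
assistant,category,query,answered
google_home,travel,cheapest flight to miami,yes
alexa_dot,travel,cheapest flight to miami,no
google_home,travel,best cruise line for families,yes
alexa_dot,travel,best cruise line for families,no
"""

def answer_rates(log_file) -> dict[str, float]:
    """Fraction of logged queries each assistant answered."""
    answered = defaultdict(int)
    total = defaultdict(int)
    for row in csv.DictReader(log_file):
        total[row["assistant"]] += 1
        answered[row["assistant"]] += row["answered"] == "yes"
    return {a: answered[a] / total[a] for a in total}

if __name__ == "__main__":
    for assistant, rate in answer_rates(StringIO(SAMPLE_LOG)).items():
        print(f"{assistant}: {rate:.0%} of queries answered")
```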

Dobbs thinks unanswered questions present brands with an opportunity to become the search result with targeted, voice-minded digital content. “That’s one interesting thing that we believe is going to be white space that marketers need to explore,” he says. “They can raise their hands to take conversations on when major systems don’t have the data sources or depths to [provide an answer].”

Right now, his team is focused on speakers. But how long before they have to pivot and figure out how voice-powered refrigerators and other appliances impact clients’ needs?

“I don’t think it’s 2025—I honestly think it will be in the next two to five years,” Dobbs predicts. “Those technologies are not as elegant as they need to be right now. But, yeah, in five years—we’ll be there.”

Source: adweek.com; 6 Aug 2017

Google Lens Is A Peek Into The Future Of Computing

Squint and you can see how Google plans to bypass the Search box–and the screen entirely.

Just minutes into Google I/O–the company’s biggest event of the year–CEO Sundar Pichai announced what could be the future of Google as you know it: Google Lens.

Google Lens is an AI-powered interface that’s coming to Google Photos and Google Assistant. It’s “a set of vision-based computing abilities that can understand what you’re looking at and help you take action based upon that information,” as Pichai put it during his keynote today.

What does that mean in practice? Using computer image recognition–which Pichai reminded us is currently better than that of humans–Google Lens can recognize what’s in your camera’s view, and actually do something meaningful with that information, rather than just tagging your friends–which is how Facebook uses image recognition.

Pichai showed off three examples. In one, a camera aimed at a flower identified that flower, in what appeared to be a Google reverse image search in real time. In the second, a camera aimed at a Wi-Fi router’s SKU–a long list of numbers and a barcode which would take time to type–automatically snagged the login information and then automatically connected to the internet. And in the third, a camera aimed around a street full of shops and cafes pulled up reviews of each restaurant, placing an interface element directly over each facade in your field of view.
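
Google hasn’t explained how Lens turns a router label into a connection, but the general shape (recognise structured text, parse out credentials, hand them to the OS) resembles the long-standing Wi-Fi QR payload convention. A minimal parsing sketch, using that real payload format purely as a stand-in for whatever Lens actually reads off the label:

```python
# Parse the standard WIFI: QR payload (e.g. "WIFI:T:WPA;S:HomeNet;P:s3cret;;")
# into credentials an OS could use to join the network. This format is a
# stand-in for whatever Google Lens reads off a router label.
def parse_wifi_payload(payload: str) -> dict[str, str]:
    if not payload.startswith("WIFI:"):
        raise ValueError("not a Wi-Fi payload")
    fields = {}
    for part in payload[len("WIFI:"):].split(";"):
        if ":" in part:
            key, _, value = part.partition(":")
            fields[key] = value
    return {"ssid": fields.get("S", ""),
            "password": fields.get("P", ""),
            "security": fields.get("T", "nopass")}

if __name__ == "__main__":
    print(parse_wifi_payload("WIFI:T:WPA;S:HomeNet;P:s3cret;;"))
```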

Later in the keynote, another presenter came on stage to show how, with a tap inside Google Assistant, Google Lens could translate a Japanese menu and pull up an image of the dish–in what looks like a riff on the technology Google acquired with Word Lens.

Alone, each of these ideas is a bit of a novelty. But built into one platform–whether that’s a smartphone or, in the future, an AR headset–you can see how Google imagines breaking free of the Search box to become even more integrated with our lives, living intimately in front of our retinas.

“We are beginning to understand video and images,” says Pichai, casually. “All of Google was built because we started to understand webpages. So the fact that we can understand images and videos has profound impact on our core vision.”

Source: fastcodedesign.com; 17 May 2017