The Next Great Media Channel Is the Self-Driving Car. Will Brands Be Ready?

The average driver spends 48 minutes a day behind the wheel

GM has increased production of self-driving Chevy Bolts. Volvo is partnering with self-driving car chipmaker Nvidia. Volkswagen is working to power its cars with A.I.

The conversation has fully shifted from “if” driverless cars will become the new normal to “when.” While the general public is eager for improved productivity, better safety and reduced traffic, marketers should be looking at the autonomous vehicle from a different angle: as the next great media channel.

As technology continues its advance, the car will become a hot spot for interaction, entertainment and information. It will also be a treasure trove of data.

While the exact time frame is hard to predict, the advent of 5G—with 100 times the data transfer speed of 4G, plus better connectivity between devices and internet-enabled objects—will unlock huge opportunities for both the auto industry and the marketers who can exploit these new media experiences.

This opportunity will open up in two ways: first, as a precision marketing tool, using all the data the car will soon produce; second, as a mass-reach vehicle (pardon the pun), as self-driving cars become mini entertainment centres.

In the near future, autonomous cars will process staggering amounts of data: current and past destinations, speed of travel, demographics and biometrics of the riders, present and future weather, traffic conditions, and nearby landmarks and commercial locations—all of which marketers could access to achieve an unprecedented level of precision in consumer messaging.

Let’s consider what might soon be possible from a marketing perspective in this new channel for, say, a coffee chain.

A passenger in a smart car passes a chain coffee shop on the way to work every morning. They have the coffee brand’s app. When they’re close, we programmatically target the rider, asking if they’d like to stop to pick up a medium soy latte—their preferred order at this time of day. If the rider says yes, their car is directed to the store, where their coffee is ready to be picked up at the counter, since payment has already been processed through the app.

In this example, you can see the confluence of benefits: time of day meets exact location meets buyer behaviour meets real-time messaging.
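None of this plumbing exists off the shelf yet, but the trigger logic is easy to sketch. Below is a minimal Python illustration of the proximity-plus-preference rule in the scenario above; the rider record, the one-kilometre radius and the morning window are hypothetical stand-ins, not any vendor’s actual API.

```python
from dataclasses import dataclass
from math import radians, sin, cos, asin, sqrt

@dataclass
class Rider:
    rider_id: str
    has_app: bool
    usual_order: str  # e.g., the rider's preferred order at this time of day

def distance_km(lat1, lon1, lat2, lon2):
    """Haversine distance between two points, in kilometres."""
    dlat, dlon = radians(lat2 - lat1), radians(lon2 - lon1)
    a = sin(dlat / 2) ** 2 + cos(radians(lat1)) * cos(radians(lat2)) * sin(dlon / 2) ** 2
    return 2 * 6371 * asin(sqrt(a))

def maybe_offer(rider, rider_pos, store_pos, hour, radius_km=1.0):
    """Trigger a pickup offer when an app user is near the store at their usual time."""
    if not rider.has_app:
        return None  # no app, no programmatic channel to the rider
    if distance_km(*rider_pos, *store_pos) > radius_km:
        return None  # not close enough to divert the car
    if not 6 <= hour <= 10:
        return None  # outside the assumed morning-commute window
    return f"Stop to pick up your {rider.usual_order}? Say yes and we'll route you there."

rider = Rider(rider_id="r42", has_app=True, usual_order="medium soy latte")
print(maybe_offer(rider, (43.651, -79.383), (43.653, -79.380), hour=8))
```

In a real deployment, a "yes" response would then fire two further calls: one to the car's navigation system and one to the store's order-ahead and payment flow.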

On the other side of our consumer’s day, recognizing the route and time stamp from work to home, a supermarket could serve up cooking content and then share an offer on ingredients. Or, if biometric data shows the passenger is generally too hungry to wait, he or she could be served ads and offers for nearby restaurants. Still on the way home but moving out of the food category, a network, premium channel or streaming service could serve tune-in messages for that evening based on the rider’s historical preferences.

The opportunity doesn’t lie only in in-car targeting. Consider also that a car that drives itself gives users, who used to be drivers, time back in their day (an average of 48 minutes per day, according to AAA). More time means more chances to consume ad-funded or branded content, turning the car into another “opportunity to see,” whether on mobile or in-car screens.

That means those tune-in messages on the drive home can actually be trailers and behind-the-scenes extras. On the weekends, travel brands could leverage past purchase data to predict preferred vacation times and locations to send targeted destination content ideally timed to when the consumer is in consideration mode, and more receptive to leisure-oriented messages.

Car companies will need to decide what role they wish to play as the producer of this new media “device.” Will they follow the model video game consoles have already established, licensing their tech to various content providers while fighting for valuable exclusive titles? How much control can and should they exert over the “pipes” of their cars?

Regardless of where those chips fall, carmakers will be in a position to collect an amazing amount of data from their cars and those who ride in them. In a category in which the traditional ownership and profitability dynamics are shifting, licensing APIs and selling data will become an increasingly important part of how car brands make money.

Source: adweek.com; 31 Oct 2017

As Voice Has Its Moment, Amazon, Google and Apple Are Giving Brands a Way Into the Conversation

Few of their devices’ skills or apps are branded, but that’s changing

For decades, listening to ‘the voice of the customer’ has been the Holy Grail for marketers. Now, thanks to technology, they can do it millions of times each day.

According to Google and Bing, one in four searches is conducted by talking, not typing, a figure comScore predicts will reach 50 percent by 2020. That same year, Echo alone will account for $7 billion in voice transactions—or vcommerce—per investment firm Mizuho Bank.

Voice is having its moment. People are talking, devices are listening and brands are attempting to insert themselves into the conversation, using Amazon Alexa voice skills and Google Home apps.

With a few choice phrases, consumers can order an Uber or Domino’s pizza on either device. Echo fans can also ask Patrón to help them make a margarita, consult Tide on how to remove stubborn stains, or get Campbell’s or Nestlé to serve up dinner recipes, among other skills.

Currently, only a small percentage of Alexa’s 25,000 voice skills are branded (Amazon won’t reveal how many). You’ll find even fewer in Google’s few hundred voice apps.

But that’s changing. Over the next few years, brand voices are about to get a lot louder.

Shots in the dark

Admittedly, many of those 25,000-odd voice apps are gimmicky—good for getting attention but not much else, noted Layne Harris, head of innovation technology for digital marketing agency 360i. But forward-thinking brands are embracing the technology now, he added, making voice skills a key element of their marketing strategy. Just last week, 360i launched a new practice solely focused on Amazon to help brands navigate the world of voice marketing.

When Patrón launched its voice skill in July 2016, it was part of a broader marketing initiative called the Cocktail Lab, involving 50 bartenders around the globe crafting new tequila-infused drinks, said Adrian Parker, vp of marketing for Patrón Spirits. (The distiller also just debuted an augmented reality app called the Patrón Experience for Apple’s iOS 11.)

Some 350,000 consumers have participated in the Cocktail Lab, said Parker, with more than 10 percent coming via the Alexa Skill. Since launching the lab, traffic to Patrón’s website has increased by 43 percent, thanks in part to Alexa users who spend more time on site and download more recipes.

“Voice was the first platform that allowed us to take what would traditionally be a face-to-face experience in a bar and make that virtually accessible,” Parker said. “Alexa is not only giving us the capability to engage with customers on their terms, it’s also preparing us for the voice-led future.”

Utility is key, said Greg Hedges, vp of emerging experiences at Rain, a digital consultancy that helped create Alexa apps for Campbell’s and Tide. The voice skill can’t merely be memorable; it must also be useful.

“The skills that see the most engagement are not just advertising,” he explained. “They take a step further towards connecting with consumers. They give people a reason to come back, because consumers know they can get the answers they’re looking for.”

For brands like Patrón and Campbell’s, getting consumers to drink more tequila and consume more chicken soup isn’t the only goal, said Charles Golvin, a research director for Gartner.

“They’re also trying to establish themselves as the voice of authority or curator across the broader product category that they serve,” he said. “It’s not just about selling Patrón tequila, it’s about being your mixologist expert. It’s not about selling Campbell’s soup, it’s about being your epicurean guide.”

A focus group of one

With the emergence of Alexa touchscreen devices like Echo Show and the new Echo Spot, brands also need to prepare for a voice+ world where results can be seen as well as heard, said Jonathan Patrizio, head of technical advisory at Mobiquity, a digital agency that developed Nestlé’s GoodNes recipe skill.

Using GoodNes on the Echo Show, home chefs can not only hear step-by-step instructions on how to make Baked Pesto Chicken or Korean Beef Bulgogi, but also see them displayed alongside images. Recipe users can also view the images via a GoodNes visual guide on their laptop’s or tablet’s browser.

“It’s a much more frictionless and natural way of interacting,” Patrizio said. “And if a brand can understand how to play in that domain, they’ve gained a great advantage over their competitors.”

But perhaps the most valuable thing brands glean from voice skills is data. Smart brands are building analytics into their skills and using the data to help drive new products and revenue streams.

“You can learn a lot from the things customers say,” said Hedges. “If Tide learns someone is asking about a specific stain and fabric combination, and it’s not one they’ve encountered before, maybe a new product comes out of that. With voice, it’s almost like a focus group of one.”

A key reason for building a voice skill is to gather data on customer usage and intent, said Patrizio.

“We built analytics into the GoodNes skill, and this lets Nestlé monitor Skill usage in aggregate since the developer doesn’t have access to the actual spoken recording,” he said. “For example, ‘Alexa, ask GoodNes to browse recipes’ is mapped to an intent, and we can track how many people used that intent, or how many times a single user requested this specific intent.”
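Patrizio doesn’t detail Mobiquity’s implementation, but the aggregate-only pattern he describes is simple to sketch in Python. Everything below is illustrative rather than the GoodNes skill’s actual code: the intent name BrowseRecipes and the handler shape are assumptions. The point is that only resolved intent names are counted, never recordings.

```python
from collections import Counter, defaultdict

# Aggregate counters only: no audio or raw transcripts are ever stored,
# mirroring the constraint that developers can't access the spoken recording.
intent_totals = Counter()
intents_by_user = defaultdict(Counter)

def handle_request(user_id: str, intent_name: str) -> str:
    """Handle an intent the voice platform has already resolved from the utterance."""
    intent_totals[intent_name] += 1
    intents_by_user[user_id][intent_name] += 1
    if intent_name == "BrowseRecipes":
        return "Here are today's recipes..."
    return "Sorry, I didn't catch that."

# "Alexa, ask GoodNes to browse recipes" would arrive as a BrowseRecipes intent.
handle_request("user-1", "BrowseRecipes")
handle_request("user-1", "BrowseRecipes")
print(intent_totals["BrowseRecipes"])              # total uses of the intent
print(intents_by_user["user-1"]["BrowseRecipes"])  # repeat usage by one user
```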

Analytics can also reveal if the skill is working as the brand hoped it would. At this early stage, that’s not always the case.

Adam Marchick, CEO and co-founder of analytics company VoiceLabs, says that only 30 to 50 percent of conversational interactions are successful.

“It’s like we’re in year two of building web pages,” noted Marchick. “But right now, just giving brands conversational understanding—where they can actually see different voice paths and what’s working and what’s not—is a big step forward.”

The conversation is just beginning

Brands have been forced to react to similar technological upheavals before—notably with the shift to web and then to mobile. This time, though, they’re being more deliberate about it, said Joel Evans, co-founder and vp, digital transformation at Mobiquity.

“In the dot-com days websites were more like glorified brochures. We saw something similar happen when companies started doing mobile apps—they were just a check-off item,” he said. “Thankfully we’re not seeing that in the skills universe. Brands have realized it’s got to be the right experience when it actually gets out there.”

The next few years will see a huge acceleration of the technologies driving computer-human interaction—like artificial intelligence, natural language processing, chatbots and augmented reality. The voice apps we hear (and sometimes see) today may be nothing like the ones we encounter tomorrow. Smart brands are preparing for that now.

“Right now we’re creating the horse and carriage of voice technology,” said Patrón’s Parker. “Give it another 18 to 24 months, and we’ll be building Maseratis.”

Source: adweek.com; 10 Oct 2017

Consumers Have More Time Thanks to Technology, and Marketers Have More Opportunities to Fill It

Trends like ride sharing and IoT have seemed to lengthen the day

The day is now apparently 30 percent longer than it used to be.

Recently, I was taking an Uber while speaking to a colleague on the phone and simultaneously switching between Instagram, news headlines and texting a partner in Singapore on WhatsApp. Ten years ago, these activities did not and could not coexist; today, I can do them all simultaneously.

Most individuals love to think that time is scarce. There are only 24 hours in a day, right? Well, maybe not.

In Activate’s Tech and Media Outlook 2016, there is an interesting statistic, derived from data from the Bureau of Labor Statistics, Nielsen and other leading sources, suggesting that the average employed U.S. adult’s daily activity adds up to the equivalent of 31 hours and 28 minutes. In other words, the day is now apparently 30 percent longer than it used to be—and this will only continue to increase with the rapidly advancing trends of the Internet of Things, the normalization of the “sharing economy,” autonomous vehicles, wearables, automation and artificial intelligence.

Time is a fascinating component of humanity: It must be used every day and is a resource one cannot hoard. But what if you challenged the notion that time is actually finite and scarce and instead began to look at the potential for its emerging abundance due to these trends? The winners in this scenario will be those marketers who can fill this new time—these new opportunities—in meaningful ways. More time does not equal less busy.

In 1955, Cyril Parkinson wrote an essay in which he stated, “Work expands so as to fill the time available for its completion.” This notion, which is known as Parkinson’s Law, is simple human nature. The same can be said about how we live our lives outside of work as well.

Mobile devices are the easiest way to grasp the notion of maximizing time, as they provide a more frictionless way to read more books, listen to more music, send more emails and speak to more friends in moments that were previously not as convenient. Similarly, the rise of autonomous and shared vehicles gives consumers time to enjoy more of what they want while giving smart brands and marketers opportunities to engage more deeply with them. To Google, for example, self-driving vehicles are merely an accessory to one’s mobile device, enabling more searches.

But what about other industries that may not be so obvious? In August 2016, Morgan Stanley published a piece of research titled Shared Autonomous Mobility: Potential Growth Opportunity for Beverage and Restaurant Firms.

Recognizing the serious social, public health and safety implications of such a study, Morgan Stanley noted that the introduction of ride-sharing services has coincided with a reduction in DUI arrests in certain U.S. cities. San Diego, for example, a city with one of the highest rates of DUIs per capita, saw a 14 percent decrease in arrests between 2011 and 2013, when Uber began its operations there. Conversely, the study points out that when the city of Austin, Texas, temporarily banned Uber, alcohol sales fell.

From a consumer and public safety perspective, the implications of the sharing economy in this instance are immense. And the impact for brands and marketers in it is real as well.

At the time of the study, the total global alcohol market was roughly $1.5 trillion. Morgan Stanley calculated that the incremental growth presented by shared mobility would expand the total addressable market by over $31 billion in 10 years, based on an increase of just one drink per month. This is immense growth for a change in behaviour that would otherwise be barely noticeable.
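The article doesn’t reproduce Morgan Stanley’s model, but the shape of such an estimate is easy to see with a back-of-envelope calculation. The inputs below are entirely hypothetical, chosen only to show how one extra drink per month compounds into tens of billions of dollars over a decade.

```python
# Hypothetical back-of-envelope, NOT Morgan Stanley's actual model.
riders = 50_000_000           # assumed shared-mobility users worldwide
extra_drinks_per_month = 1    # the "+1 drink per month" behaviour change
price_per_drink = 5.00        # assumed average price, in USD

annual_uplift = riders * extra_drinks_per_month * 12 * price_per_drink
print(f"${annual_uplift / 1e9:.1f}B per year")  # $3.0B/yr, i.e. ~$30B over a decade
```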

And this does not occur only in the sharing economy. Advances in IoT, wearables, automation and AI will all continue to make multitasking easier for consumers—while the world recalibrates to these new norms with the perception that nothing feels different, and everyone continues to believe they are stressed for time.

Marketing succeeds in these extra moments only when it provides added value or native utility that meets the consumer’s expectations. In his book The End of Advertising, Andrew Essex articulates this as the need to move away from producing “the thing that interrupts the thing.”

This is why Taco Bell’s Cinco de Mayo lens on Snapchat was viewed 224 million times in one day last year; why consumers have such an affinity toward Citi Bike in New York City; or why SSGA’s Fearless Girl has had over 2.3 billion social media impressions.

“Time is money” is an adage that Benjamin Franklin is credited with saying, and, in an era of time abundance, the marketers who navigate this moment properly will be the true winners.

Keith Grossman is the global CRO at Bloomberg Media and has been part of two teams that have previously won Adweek Project Isaac Awards.

Source: adweek.com; 11 Sep 2017

The Next Generation of Emoji Will Be Based on Your Facial Expressions

There’s no faking your feelings with these social icons.

An app called Polygram uses AI to automatically capture your responses to friends’ photos and videos.

A new app is trying to make it simpler to react to photos and videos that your friends post online—it’s using AI to capture your facial expressions and automatically translate them into a range of emoji faces.

Polygram, which is free and available only for the iPhone for now, is a social app that lets you share things like photos, videos and messages. Unlike on, say, Facebook, where you have only a small range of pre-set reactions to choose from beyond clicking a little thumbs-up icon, Polygram uses a neural network that runs locally on the phone to figure out whether you’re smiling, frowning, bored, embarrassed or surprised, among other expressions.

Marcin Kmiec, one of Polygram’s cofounders, says the app’s AI works by capturing your face with the front-facing camera on the phone and analysing sequences of images as quickly as possible, rather than just looking at specific points on the face like your pupils and nose. This is done directly on the phone, using the iPhone’s graphics processing unit, he says.

When you look at a post in the app (for now the posts seem to consist of a suspicious number of luxury vacation spots, fancy cars and women in tight clothing), you see a small yellow emoji at the bottom of the display, its expression changing along with your real one. There’s a slight delay—20 milliseconds, just barely noticeable—between what you’re expressing on your face and what shows up in the app. The app records your response (or responses, if your expression changes a few times) in a little log of emoji on the side of the screen, along with those of others who’ve already looked at the same post.
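Polygram hasn’t published its pipeline, but the frame-to-emoji loop described above can be sketched as follows. The classifier here is a random stub standing in for the app’s on-device neural network, and the expression labels are assumptions; only the change-detection logging mirrors what the app reportedly does.

```python
import random

EMOJI = {"smile": "😀", "frown": "☹️", "surprise": "😮", "bored": "😐"}

def classify_frame(frame) -> str:
    """Stub for the on-device neural network; returns an expression label."""
    return random.choice(list(EMOJI))

def react_to_post(frames):
    """Map each camera frame to an emoji, logging only when the expression changes."""
    log = []
    for frame in frames:
        emoji = EMOJI[classify_frame(frame)]
        if not log or log[-1] != emoji:
            log.append(emoji)
    return log

print(react_to_post(frames=range(30)))  # e.g. ['😀', '😮', '😐', '😀']
```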

The app is clearly meant to appeal to those who really care about how they’re perceived on social media: users can see a tally of the emoji reactions to each photo or video they post to the app, as well as details about who looked at the post, how long they looked at it, and where they’re located. This might be helpful for some mega-users, but could turn off those who are more wary about how their activity is tracked, even when it’s anonymized.

And, as many app makers know, it’s hard to succeed in social media; for every Instagram or Snapchat there are countless ones that fail to catch on. (Remember Secret? Or Path? Or Yik Yak? Or Google+?) Polygram’s founders say they’re concentrating on using the technology in their own app for now, but they also think it could be useful in other kinds of apps, like telemedicine, where it could be used to gauge a patient’s reaction to a doctor or nurse, for instance. Eventually, they say, they may release software tools that let other developers come up with their own applications for the technology.

Source: technologyreview.com; 28 August 2017

Facebook and Apple Are About to Take AR Mainstream. Here’s How Marketers Are Gearing Up

The UN, Ikea and the PGA Tour hone their augmented-reality chops

Apple’s ARKit platform at launch will be used by major brands like Ikea.

This past weekend in New York, the United Nations created a Facebook Live filter for World Humanitarian Day that let users overlay their real-time clips with augmented reality, specifically scrolling copy that told the stories of civilians affected by conflict. In Times Square, AR-enhanced videos aired on one of the iconic commercial intersection’s large billboards. The endeavour was powered by Facebook’s 4-month-old AR system, dubbed Camera Effects Studio, which is getting the attention of brand marketers.

“For us, Facebook is an amazing platform to develop AR on because people are inherently using it already,” said Craig Elimeliah, managing director of creative technology at VML, the UN’s agency. “It includes Instagram as well. It includes Live and regular camera—so the sheer scale is unbelievable.”

While AR is still exploratory territory for marketers and media companies, its pixelated push to the mainstream has gotten a series of boosts this year from some of the biggest digital players. Snapchat—with its wacky filters and other virtual overlays—has continued to be popular among teens (even if Wall Street doesn’t like its pace). Apple, which has long been seen as a potential AR game changer due to the popularity of its iPhone and iPad, seems primed to give AR the turbocharge it needs to attract older demographics. When the Cupertino, Calif.-based company releases its iOS 11 mobile operating system in September, hundreds of millions of Apple-device owners will have augmented reality at their fingertips with a set of features called ARKit.

“Apple and Facebook will make augmented reality an everyday reality,” said David Deal, a digital marketing consultant. “We’ll see plenty of hit and miss with AR as we did when Apple opened up the iPhone to app developers, but ultimately both Apple and Facebook are in the best position to steamroll Snapchat with AR.”

Ikea, which will be one of the first major brands on Apple’s AR platform at launch, is developing an app that allows customers to see what furniture and other household items would look like in a three-dimensional view inside their homes. Ikea also plans to introduce new products in the AR app before they hit store shelves.

An AR visit to a Japanese-Canadian internment camp aims to inspire empathy. (Image: Jam3)

Other brands and their agency partners are working on prototypes and developing spatial storytelling by layering detailed imagery, pertinent information and other brand-centric assets into physical spaces. For instance, PGA Tour has tapped Possible Mobile to develop an immersive rendering of 3-D golf course models in ARKit that should go live in the next six months. It would seem the initiative is generally driven by the sport of golf’s recent problems courting millennials. But Luis Goicouria, svp of digital platforms and media strategy at PGA Tour, contended that the 3-D experience is being built for all demographics.

“It really isn’t a gimmick designed to appeal to a single age group,” Goicouria said. “Apple is notable because their installed base is so large and the platform so consistent that it allows us to bring something to a large group very quickly, and that gives us immediate feedback from our fans on what is working.”

Apple’s ARKit should be ripe for innovative storytelling. With that in mind, Toronto-based digital production firm Jam3 is using the platform to create a historical and educational narrative app called East of the Rockies in collaboration with Joy Kogawa, an author and former internee at a Japanese-Canadian internment camp. The experiential app will illuminate aspects of detainee life in three chapters on the user’s smartphone screen, with each new episode triggered as users virtually “walk around” the environment and get close to different locations. “[It will] create a lot of empathy in ways that I didn’t think was possible in a digital medium,” noted Jason Legge, producer at Jam3.

The possibilities will only become more self-evident as augmented reality grows. The number of users is expected to jump by 36 percent between now and 2020, when 54.4 million Americans will regularly engage with AR, according to eMarketer.

Source: adweek.com; 20 August 2017

Here’s What You Need to Know About Voice AI, the Next Frontier of Brand Marketing

67 million voice-assisted devices will be in use in the U.S. by 2019

Soon enough, your breakfast bar could be your search bar. Your lamp could be how you shop for lightbulbs. Your Chevy or Ford might be your vehicle for finding a YouTube video, like the classic SNL skit of Chevy Chase’s send-up of President Gerald Ford, to make a long drive less tedious. And you won’t have to lift a finger—all you’ll need to do is turn toward one of those inanimate objects and say something. Welcome to a future where your voice is the main signal for the elaborate data grid known as your life.

Two decades ago, when Amazon and Google were founded, only a seer could have predicted that those companies would eventually start turning the physical world into a vast, voice-activated interface. Artificial intelligence-powered voice is perhaps what makes Amazon and Google the real duopoly to watch (sorry, Facebook), as their smart speakers—the 2-year-old Amazon Echo and the 8-month-old Google Home—gain traction. Forty-five million voice-assisted devices are now in use in the U.S., according to eMarketer, and that number will rise to 67 million by 2019. Amazon Echo, which utilizes the ecommerce giant’s voice artificial intelligence, called Alexa, owns roughly 70 percent of the smart speaker market, per eMarketer.

“Our vision is that customers will be able to access Alexa whenever and wherever they want,” says Steve Rabuchin, vp of Amazon Alexa. “That means customers may be able to talk to their cars, refrigerators, thermostats, lamps and all kinds of devices in and outside their homes.”

While brand marketers are coming to grips with a consumer landscape where touch points mutate into listening points, search marketing pros are laying important groundwork by focusing on what can be done with Amazon Echo and Google Home (the latter of which employs a voice AI system called Assistant). With voice replacing fingertips, search is ground zero right now when it comes to brands.

But how will paid search figure into it all?

Gummi Hafsteinsson, Google Assistant product lead, says that, for now, the goal is to create a personalized user experience that “can answer questions, manage tasks, help get things done and also have some fun with music and more. We’re starting with creating this experience and haven’t shared details on advertising within the Assistant up to this point.”

While Hafsteinsson declined further comment about ads, agencies are now preparing for them. “In the near term, [organic search] is going to be the way to get your brands represented for Google Home,” says 360i president Jared Belsky, who points to comScore data that forecasts 50 percent of all search will be via voice tech by 2020. “Then ultimately, the ads auction will follow. You’ll be bidding to get your brand at the top of searches. I believe that’s the way it will go. Think about it—it has to.”

Jeremy Lockhorn, vp of emerging media at SapientRazorfish, remarks, “The specificity of voice search combined with what any of the platforms are already able to surmise about your specific context [such as current location] should ultimately result in more personalized results—and for advertisers, more narrowly targeted.”

Brands—which are accustomed to being relatively satisfied when showing up in the top five query results on desktops and phones—should brace for a new search reality, Belsky warns, where they may have to worry about being in either the first or second voice slot or else risk not being heard at all. “There’s going to be a battle for shelf space, and each slot should theoretically be more expensive,” he says. “It’s the same amount of interest funnelling into a smaller landscape.”

Scott Linzer, vp of owned media at iCrossing, adds that consumers may not “find it an inviting or valuable-enough experience to listen to two, three or more search results.”

Marketer interest in voice search is downright palpable in some circles. Fresh off a tour of client visits in Miami, Dallas, Chicago, Washington, D.C., and St. Louis, 360i’s Belsky reports that “every CMO, every vp of marketing and, especially, every ecommerce client is asking about this subject first and foremost. And they have three questions. ‘What should I do to prepare for when voice is the driver of ecommerce?’ The second one is, ‘What content do I have to think about to increase my chances to be the preferred answer with these devices?’ And, ‘Will all my search budget one day migrate onto these devices?’ There’s not obvious answers to any of these questions. Being early to all of this means you get the spoils.”

National Public Radio may not seem like an obvious media brand when it comes to being an early AI adopter, but the 47-year-old news organization has moved quickly in the space. Its news, music and storytelling app, NPR One, is infused with machine learning, offering listeners curated options based on their demonstrated preferences. And the organization extended the reach of NPR One’s content by inking a deal with Amazon Alexa in February. Since then, the phrase “Alexa, play NPR News Now” has been regularly heard in the 400,000 homes that use the service. Additionally, NPR One has functionalities that its corporate sponsors have appreciated for some time; in particular, it lets brands deliver extended audio messaging on mobile when consumers want to hear more.

“They can press a button on the phone to hear more about what [carmaker] Kia is doing around innovation,” says Meg Goldthwaite, NPR’s marketing chief. Is voice-enabled digital advertising—where the listener asks for more Kia content and perhaps even fills out a test-drive form—right around the bend for NPR? “It’s absolutely possible,” she says.

Goldthwaite is probably onto something. Last year, IBM launched Watson Ads, which lets viewers “talk” with a brand’s ad and request additional info. Toyota, Campbell Soup and Unilever have tested the units, often averaging between one and two minutes of engagement, per Big Blue. “We have already begun to see that consumers are spending more time with these cognitive ads than with other digital ads,” says Carrie Seifer, CRO for the IBM Watson content and IoT platform.

Imagine being able to talk to video ads via Amazon Echo Show, a smart screen that comes juiced with Alexa voice functionalities. Costing $230, the device shipped in late June and provides a glimpse into next-gen households of the 2020s with talking screens, holograms, appliances and vehicles that Amazon exec Rabuchin alluded to.

“Voice is a productivity tool,” remarks Linda Boff, CMO of GE. Her company provides an intriguing business-to-business example of how AI-based utilities will soon help enormously expensive products like locomotives, jet engines and turbines sell themselves. For instance, GE has developed a prototype that lets an otherwise insentient locomotive send a voice message to a repair technician describing what needs to be fixed. It’s part of GE’s Digital Twin program, which creates 3-D representations of industrial assets and can process information gathered from individual machines globally to better inform decisions. It’s essentially machine-based crowdsourcing for when trains need to fill up on digital feedback instead of diesel fuel.

“The twin can call you and tell you, ‘I have an issue with my rotor,’” explains Colin Parris, vp, GE Software Research, who reveals that his brand’s voice AI features will roll out widely in 2018. “It can provide basic information that a service rep would want. Like, ‘What was the last journey you took? How many miles did you travel?’”
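GE hasn’t published the Digital Twin voice interface, but Parris’s examples hint at its shape: telemetry fields rendered as a spoken status report. The sketch below is a guess at that pattern, with the TwinState fields and values invented for illustration.

```python
from dataclasses import dataclass

@dataclass
class TwinState:
    asset_id: str
    component: str          # e.g., the rotor Parris mentions
    fault_code: str
    last_route: str
    miles_since_service: int

def voice_report(state: TwinState) -> str:
    """Render a twin's telemetry as the kind of spoken report GE describes."""
    return (f"This is {state.asset_id}. I have an issue with my {state.component} "
            f"(fault {state.fault_code}). Last journey: {state.last_route}; "
            f"{state.miles_since_service} miles since service.")

print(voice_report(TwinState("loco-204", "rotor", "R-17", "Chicago to Denver", 12_400)))
```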

General Electric’s Digital Twin program has turbines and trains learning how to speak to their customers.

Staples also plans to flip the switch on its b-to-b voice AI initiative early next year. In partnership with IBM Watson, the retailer’s Easy Button—a fixture in its TV spots—will add an intelligent, voice-activated ordering service, which office managers at enterprise clients have been testing in recent months.

The business supplies chain is practically providing a step-by-step tutorial on how to build an Echo-like device. Teaming with conversational design and software company Layer, Staples first built an AI-anchored chat app and carefully analysed the conversations.

“It helped us narrow down our use cases and create more robust experiences around the core challenges we were trying to solve,” says Ian Goodwin, head of Staples’ applied innovation team. “Once we got a really good idea of what our customers were asking on chat channels, then we built the voice experience with the Easy Button. It was really a gradual ramp-up rather than just going out and doing it.”

Indeed, voice AI probably shouldn’t be rushed to market. One company that understands that is Trunk Club, a Nordstrom-owned men’s clothier that recently rejected an offer from Amazon to be a fashion partner for Echo. Justin Hughes, Trunk Club’s vp of product development, isn’t AI-averse—he’s hoping to use voice activation in the next 18 months to spur his company’s subscriptions-based sales. But the timing with Amazon wasn’t right for his brand.

“If you are going to purchase between $300 and $1,000 of clothes, we don’t want it to be a weird experience—we want it to be something you return to often,” Hughes says. “There is so much imperfection in voice AI right now; it can be clunky.”

The vp also pointed to an elephant in the room—data ownership—when it comes to retailers partnering with Amazon or Google. “We just don’t want to give them all of our talk tracks,” Hughes says.

What’s more, hackers in the future might find a way to siphon off data collected from various “listening points” in homes and offices. Just last spring, Burger King caught considerable heat from privacy advocates after its television spot—which, in spite of the brouhaha, won a Cannes Grand Prix—hacked Google Home devices around the country.

The last thing voice-focused marketers want is Uncle Sam on the case. “As in-home, voice-controlled AI technology becomes even more prevalent and evolves in terms of substance—more capable of offering real answers to real questions—marketers will need to be increasingly careful to properly follow FTC disclosure and advertising guidelines,” notes advertising lawyer Ronald Camhi.

And since voice AI could greatly impact search-driven commerce, it’d probably be wise for Amazon and Google to encourage industry best practices. Then again, they might want to form a larger circle that also includes Facebook, Apple, Samsung and Microsoft. Facebook last week purchased AI startup Ozlo and is rumoured to be developing speaker technology of its own. Apple has star power in Siri, and Samsung in late July debuted voice capabilities for its assistant, Bixby. Microsoft, too, will continue to market its AI assistant, Cortana, in hopes of claiming a significant piece of voice real estate.

Says Abhisht Arora, general manager of search and AI marketing at Microsoft, “Our focus is to go beyond voice search to create an assistant that has your back.”

A Rube Goldberg apparatus for the marketing history books

What do you get when you jerry-rig six Google Homes, six Alexa Dots, six laptops, six soundproof boxes and four extension cords? You get an apparatus that’d make Back to the Future’s Doc Brown proud.

Along with a bottle of Puerto Rican rum for the technicians, those are all of the ingredients that were poured into 360i’s VSM, or Voice Search Monitor, which is the brainchild of Mike Dobbs, the agency’s long-time vp of SEO. This makeshift system is designed to give clients like Norwegian Cruise Line an edge as smart speakers increasingly influence consumers’ purchase decisions.

VSM asks Google Home and Alexa Dot (Echo’s little-sister device) thousands of questions around the clock, and whether each query gets answered is automatically recorded in an Excel spreadsheet. An early round of testing found that for the travel category, Google answered 72 percent of questions while Amazon responded to 13 percent. In a second round of testing, on finance, Google answered 68 percent of questions while Amazon answered roughly 14 percent.
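360i hasn’t published VSM’s internals, and the real rig plays questions to physical speakers inside soundproof boxes, but the bookkeeping half is straightforward to sketch. In the Python below, ask_device is a stub whose hit rates are set to mimic the travel-category results above; the question set and file name are hypothetical.

```python
import csv
import random

def ask_device(device: str, question: str) -> bool:
    """Stub for speaking a question aloud and detecting whether it gets a real answer."""
    return random.random() < (0.72 if device == "google_home" else 0.13)

questions = [f"travel question {i}" for i in range(200)]  # hypothetical query set

with open("vsm_log.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["device", "question", "answered"])
    for device in ("google_home", "alexa_dot"):
        for q in questions:
            writer.writerow([device, q, ask_device(device, q)])

# Summarize answer rates per device from the log.
with open("vsm_log.csv") as f:
    rows = list(csv.DictReader(f))
for device in ("google_home", "alexa_dot"):
    hits = [r for r in rows if r["device"] == device]
    rate = sum(r["answered"] == "True" for r in hits) / len(hits)
    print(f"{device}: {rate:.0%} of questions answered")
```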

Dobbs thinks unanswered questions present brands with an opportunity to become the search result with targeted, voice-minded digital content. “That’s one interesting thing that we believe is going to be white space that marketers need to explore,” he says. “They can raise their hands to take conversations on when major systems don’t have the data sources or depths to [provide an answer].”

Right now, his team is focused on speakers. But how long before they have to pivot and figure out how voice-powered refrigerators and other appliances impact clients’ needs?

“I don’t think it’s 2025—I honestly think it will be in the next two to five years,” Dobbs predicts. “Those technologies are not as elegant as they need to be right now. But, yeah, in five years—we’ll be there.”

Source: adweek.com; 6 Aug 2017

Amazon India Gets RBI Approval To Launch E-Wallet Services

After Flipkart and Snapdeal, Amazon is set to enter the digital payments market in India.

Amazon India said on Thursday that it has received Reserve Bank of India (RBI) approval to operate a Pre-Paid Instrument (PPI), or its own digital wallet, giving the Seattle-based e-commerce giant an opportunity to grab a part of the growing but increasingly crowded digital payments pie.

The development was first reported by Medianama.

“We are pleased to receive our PPI license from the RBI. Our focus is providing customers a convenient and trusted cashless payments experience.”
Sriram Jagannathan, VP Payments, Amazon India.

In March, the RBI issued draft guidelines suggesting higher capital requirement for entities offering PPIs and tougher know your customer (KYC) norms for customers using these services. According to the draft proposals, all entities seeking fresh approvals to launch PPIs will need to have an audited net worth of Rs 25 crore, which must be maintained at all times.

The RBI also proposed to tweak an earlier provision where you could hold up to Rs 10,000 in a ‘minimum information’ PPI account. PPI issuers will now have to convert every ‘minimum information’ account into a fully KYC compliant account within 60 days. In the interim, customers can hold up to Rs 20,000 in the account.

In his statement, Jagannathan said that the company hopes the regulator will continue with the existing simpler KYC norms to ensure that the use-case for wallets does not diminish.

“RBI is in the process of finalizing the guidelines for PPIs. We look forward to seeing a continuation of the low limit wallet dispensation with simplified KYC and authentication,” said Jagannathan.

Amazon has been inching towards an entry into digital payments for the last few months.

In December, Amazon launched its Pay Balance service to encourage cashless transactions, letting customers fund a prepaid balance using internet banking or credit and debit cards. However, this service was restricted to transactions on Amazon.

Last February, Amazon acquired Noida-based payments solution provider EMVANTAGE Payments Pvt Ltd for an undisclosed amount. The same month, it also roped in former Citibank executive Sriraman Jagannathan to head its payments business.

Amazon had applied for a wallet license last year in the name of Amazon Online Distribution Services Private Limited. The license was issued to Amazon on March 22 and is valid till March 31, 2022.

The approval for Amazon’s e-wallet comes at a time when all the major e-tailers, Flipkart, Paytm and Snapdeal, have launched their own wallets. The volume of transactions through PPIs has also risen, particularly in the months after demonetisation. According to RBI data, the value of payments through PPIs rose from Rs 1,320 crore in November to Rs 2,150 crore in March.

Amazon venturing into the digital wallet space could provide some competition to Paytm, said Harish HV, partner at Grant Thornton.

“This will not only boost their sales, but can give good competition to Paytm which has been ruling the wallets game in India, if they plan to run it as an independent business. How serious is Amazon about the wallet business remains to be seen.”
Harish HV, Partner, Grant Thornton

PPIs are one part of the broader digital payments space, which also includes newly launched services built on platforms like the Unified Payments Interface (UPI). The government-promoted BHIM is one such service. BharatQR, a QR-code-based payment service, is another competitor to PPIs. According to a recent report by advisory firm IMAP, the volume of payments on digital platforms in India could hit $500 billion by 2020.

Source: bloombergquint.com; 12 April 2017

Google Thinks It Has Cracked the VR Adoption Problem

It’s launching a high-end wireless headset and new software improvements that might finally make you want to try virtual reality.

For most consumers, virtual reality is still a technology of the future.

Google hopes that by making the virtual world more convenient and accessible, more people will want to dive in.

This was the overarching message at the company’s annual developer conference this week in Mountain View, California, where executives like Clay Bavor, who leads virtual and augmented reality efforts, laid out what’s coming next for its Daydream VR platform—including powerful wireless headsets that can track your head position and orientation without special external sensors, and software changes that encourage users to spend more time in VR while sharing what they’re doing with others and not missing out on other things they may want to know about.

Altogether, the updates, coupled with Google’s clout as a leader in many technology spaces (search, Web browsing, mobile, to name a few) could mark a huge change in the visibility and uptake of virtual reality over time. And if it doesn’t work, it could be a huge setback for virtual reality—if not the end of it entirely.

Google has already put in a lot of work and money in hopes of bringing virtual-reality technology to the mainstream, through efforts like Google Cardboard—a foldable VR viewer that works with a smartphone—and, more recently, the more capable but still phone-dependent Daydream VR platform.

Yet while over 10 million Cardboard viewers have shipped since the device was released in 2014, and there are about 150 apps for the Google-made Daydream headset that shipped late last year, consumer adoption has been slow going. About 10 million VR headsets shipped globally last year, according to IDC, a market researcher, which is less than 1 percent of the 1.5 billion smartphones that shipped in the same time frame.

There are lots of reasons why people aren’t buying into the technology. It’s isolating, and there aren’t a ton of things to do. All of the high-end headsets need to be connected to a computer or gaming console, and it’s annoying to feel cords flying around on your back and shoulders when you’re trying to forget about actual reality and explore, say, Mars in VR. And it’s expensive—a typical headset and its required computing platform will cost you anywhere from about $750 to well over $1,000, depending on what you’re buying.

Google is chipping away at the clunkiness and the cost by working with chipmaker Qualcomm on a reference design for a wireless VR headset, and the company says that HTC and Lenovo are working with Google to build headsets based on it.

The first of these headsets is expected to be available later this year, and Mike Jazayeri, Daydream’s director of product management, expects they will be priced similarly to desktop-connected VR devices today, minus the price of a PC—so, probably around $600 to $800.

“There’s just less friction. Put the headset on and you’re ready to go,” Jazayeri said.

I got to try an older prototype of one of these headsets this week, watching a short scene from the Star Wars film Rogue One to show off another innovation Google will roll out—a software tool called Seurat that can show desktop-computer-quality graphics on mobile VR devices by simplifying a given scene, computation-wise. The headset was fairly comfortable, with a wheel-style adjustment on the back of my head, and the imagery looked impressively crisp, even when I spun around or kneeled down on the ground to see the reflections better on the shiny virtual floor. When I moved too far in one direction or another, the world around me darkened to let me know I shouldn’t go any farther.

Google’s wireless plan could be a big deal for the VR industry. Several wireless virtual-reality headsets that seek to marry high-end visuals and head tracking with an untethered design have been shown off, but they haven’t come out yet, and the wireless headsets already on the market, like Google’s existing Daydream View and Samsung’s Gear VR, aren’t nearly as capable and won’t work without a really good smartphone.

Google is also making its software simpler and more comfortable. For instance, Daydream will add a dashboard so you can see Android notifications and settings without leaving VR, said Jazayeri. And he said it will start letting you share a two-dimensional view on, say, a TV screen of what you’re seeing in your VR headset with other people in the room—an effort to make the technology feel less isolating for those who are using it and more inclusive of those who aren’t.

Google hopes that these moves, plus others like a group 360-video-watching experience that YouTube will launch later this year and a plan to launch Google’s Chrome browser in virtual reality, could make users more interested in trying virtual reality and make it feel more familiar, too.

Source: technologyreview.com; 18 May 2017

Google Lens Is A Peek Into The Future Of Computing

Squint and you can see how Google plans to bypass the Search box—and the screen entirely.

Just minutes into Google I/O—the company’s biggest event of the year—CEO Sundar Pichai announced what could be the future of Google as you know it: Google Lens.

Google Lens is an AI-powered interface that’s coming to Google Photos and Google Assistant. It’s “a set of vision-based computing abilities that can understand what you’re looking at and help you take action based upon that information,” as Pichai put it during his keynote today.

What does that mean in practice? Using computer image recognition—which, Pichai reminded us, is currently better than that of humans—Google Lens can recognize what’s in your camera’s view and actually do something meaningful with that information, rather than just tagging your friends, which is how Facebook uses image recognition.

Pichai showed off three examples. In one, a camera aimed at a flower identified the flower, in what appeared to be a real-time Google reverse image search. In the second, a camera aimed at a Wi-Fi router’s SKU—a long string of numbers and a barcode that would take time to type—snagged the login information and connected to the internet automatically. And in the third, a camera aimed down a street full of shops and cafes pulled up reviews of each restaurant, placing an interface element directly over each facade in your field of view.

Later in the keynote, another presenter came on stage to show how, with a tap inside Google Assistant, Google Lens could translate a Japanese menu and pull up an image of the dish—in what looks like a riff on the Word Lens technology Google acquired.

Alone, each of these ideas is a bit of a novelty. But built into one platform—whether that’s a smartphone or, in the future, an AR headset—you can see how Google imagines breaking free of the Search box to become even more integrated with our lives, living intimately in front of our retinas.

“We are beginning to understand video and images,” says Pichai, casually. “All of Google was built because we started to understand webpages. So the fact that we can understand images and videos has profound impact on our core vision.”

Source: fastcodedesign.com; 17 May 2017

Facebook Is Working on Technology That Lets You Type and Control VR Devices With Your Mind

Could a ‘brain mouse’ be coming?

Think Facebook’s plans for virtual and augmented reality push the boundaries of how we interact? The social network is now working on technology that lets you type with your mind.

The company revealed it’s working on a “brain-to-computer interface” that could eventually let humans type five times faster with their minds than they currently can with their fingers. The innovation is part of a secretive division of Facebook called Building 8, a unit devoted to “moonshot” projects that are often as expensive as they are ambitious.

The projects were unveiled today at Facebook’s F8 developer conference in San Jose, Calif., by Regina Dugan, who heads up Building 8. With Facebook’s mind-reading technology, people will be able to wear non-invasive sensors that will hopefully allow them to type at a rate of 100 words per minute by decoding neural activity devoted to speech. Dugan said the technology could be used to help the disabled, but it could also become a way to input thoughts and commands directly into virtual reality and augmented reality devices. On stage, she showed a video of the technology being used to help a woman with ALS type at a rate of eight words per minute.

“It sounds impossible, but it’s closer than you realize,” she told thousands of developers during the final talk of the two-day event. “And it’s just the kind of fluid, human-computer interface needed for AR. Even something as simple as a yes-no brain click would fundamentally change our capability. A brain mouse for AR.”

According to Dugan, the human brain moves far faster than anyone can talk, which limits how quickly we communicate. She explained that the mind is capable of producing around 1 terabit of information per second—roughly the same amount of data as streaming 40 high-definition movies every second. Speech, however, runs at about 100 bits per second, roughly the bandwidth of an ’80s internet modem.
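Her comparison holds up on a back-of-envelope basis if you assume an HD movie file of roughly 3 gigabytes; the movie size is an assumption of this sketch, not a figure from the talk:

```python
movie_bits = 3 * 8 * 10**9          # one ~3 GB HD movie expressed in bits (~24 Gbit)
brain_bits_per_sec = 10**12         # ~1 terabit per second
print(brain_bits_per_sec / movie_bits)  # ~41 movies' worth of data every second

speech_bits_per_sec = 100           # speech, per Dugan
print(f"{brain_bits_per_sec / speech_bits_per_sec:.0e}")  # brain outpaces speech by ~1e10x
```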

“Speech is essentially a compression algorithm and a lousy one at best,” she said. “That’s why we love great writers and poets, because they’re just a little bit better at compressing the fullness of a thought into words.”

Along with the mind-typing tech, Facebook is also working on a way to let people hear through their skin. By creating an artificial cochlea, Facebook is working on what it calls a “haptic vocabulary” that lets people wear something on their sleeve to understand words based on vibrations in their arm.

Prior to joining Facebook last year, Dugan led Google’s Advanced Technologies and Projects Lab and before that ran the U.S. military’s R&D lab DARPA. She said Building 8 is modelled after DARPA, while building products “that recognize we are both mind and body, that our world is both digital and physical” and that “seek to connect us with the power and possibility of what’s new while honouring the intimacy of what’s timeless.”

“Your brain contains more information than what a word sounds like or how it is spelled,” she said. “It also contains semantic information that tells us what those words mean … Understanding semantics means that one day you may be able to choose to share your thoughts independent of language. English, Spanish or Mandarin may become the same.”

Facebook isn’t the only technology company investing in mind-reading. Last month, The Wall Street Journal reported that Elon Musk is creating his own brain-computer interface with a new venture called Neuralink, which is developing a way to implant brain electrodes to upload and download thoughts.

Source: adweek.com; 19 Apr 2017