Microsoft launches Custom Vision and Bing Entity Search

A series of AI offerings from Microsoft targets advertisers beginning their digital transformations


With Amazon, Google, and IBM as competitors in cloud computing, big data, and artificial intelligence (AI), Microsoft is moving to triple down on what it sees as its strengths.

The company has announced advances in several tools that fall under a ‘Cognitive Services’ rubric, including Custom Vision Service, the Face API, and Bing Entity Search.

In a company blog post, Joseph Sirosh, corporate VP of AI at Microsoft, said that Cognitive Services are defined as “a collection of cloud-hosted APIs that let developers easily add AI capabilities for vision, speech, language, knowledge and search into applications, across devices and platforms such as iOS, Android and Windows.”

The purpose of the announcement is to extend reach, offering these tools to data scientists, developers, and advertisers interested in delving into AI within an existing Microsoft ecosystem. Businesses keen to introduce intuitive digital business models need not run myriad testing phases to find the best-fit AI, and can instead rely on Cognitive Services, according to the company.

Custom Vision Service, which has moved from free preview to paid preview, allows advertisers to train a classifier with their own data and export the resulting models to embed in live applications, testing them in real time regardless of the device's operating system.

The Face API offers functionality well known to anyone using Facebook, Snapchat, or an Android or iOS device: it helps identify specific people, allowing developers working with advertisers whose legacy systems are built around Microsoft to create facial datasets numbering in the millions.

Microsoft says that unlike existing variations, the Face API is scalable and not limited to a handful of faces.

Also available now is the Bing Entity Search API, which lets advertisers embed Bing search results in any application, even retrieving results from within an image or a site. Using latent semantic indexing, the API can offer advertisers context on people, places, things, and local businesses, including TV shows, games, books, and movies.

“A social media app could augment users’ photos with information about the locations of each photo,” said Sirosh. “A news app could provide entity snapshots for entities in the article.”

Advertisers can include location information in photos that appear in social media stories as well.
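For developers, the Entity Search API is exposed as a simple REST endpoint. The sketch below builds such a request in Python; the v7.0 URL and the Ocp-Apim-Subscription-Key header follow the general Azure Cognitive Services pattern, but treat the exact endpoint and the placeholder key as assumptions to check against Microsoft's current documentation.

```python
# Sketch of calling the Bing Entity Search API over REST. The v7.0
# endpoint and the Ocp-Apim-Subscription-Key header follow Microsoft's
# published pattern for Cognitive Services; treat the exact URL and the
# placeholder key as assumptions to verify against current docs.
from urllib.parse import urlencode

ENDPOINT = "https://api.cognitive.microsoft.com/bing/v7.0/entities"

def build_entity_request(query: str, market: str = "en-US", key: str = "YOUR_KEY"):
    """Return the URL and headers for an entity lookup on `query`."""
    params = urlencode({"q": query, "mkt": market})
    url = f"{ENDPOINT}?{params}"
    headers = {"Ocp-Apim-Subscription-Key": key}
    return url, headers

url, headers = build_entity_request("Space Needle")
```

The JSON response can then be fetched with any HTTP client and its entity snapshots rendered alongside the app's own content.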

Source: campaignasia.com; 5 Mar 2018

2018: Chatbots will unlock new era for marketers with unseen data potential

A new untapped marketing channel, effective content distribution, always-on availability and personalisation at scale. These are the four primary benefits people often cite when promoting chatbots. While I agree wholeheartedly, I do think one of the most significant benefits often remains overlooked.

Unparalleled business intelligence and data.

With a chatbot, brands have more information than ever at their fingertips. In fact, they have so much data it will quickly become overwhelming if they do not know what to look for and what to do with it.

There are five core areas of data a chatbot can provide:

Sentiment analysis

Chatbots are talking to people all day every day. Some people will be grumpy, and some will be happy. Wouldn’t it be great to know what topics of conversation, products, places, circumstances or “things” make a particular brand’s audience grumpy? Wouldn’t it also be great to see what promotions, prices, products, situations and events make them happy? This enables brands to start working towards doing less of the things that frustrate their customers and more of the things they like.

Analysing and measuring the sentiment of inbound messages a chatbot receives gives you an instant, always-available pulse of consumer sentiment.

No more “how would you rate us out of 10?” or “what did you think of XYZ product?” A chatbot’s conversations give insight into what people are thinking.

Sentiment analysis can help across lots of areas of a business too. Product managers understand successful features, Marketing can assess the impact of marcomms and promotions, and Customer Services are able to isolate and identify problems and produce relevant support material.
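The "instant pulse" described above can be approximated with even a crude message-level scorer. The following is a minimal, illustrative sketch; the word lists and scoring are toy assumptions, not a production sentiment model:

```python
# A minimal lexicon-based sketch of scoring inbound chatbot messages.
# The word lists are illustrative, not a production sentiment model.
POSITIVE = {"love", "great", "thanks", "happy", "awesome"}
NEGATIVE = {"hate", "broken", "slow", "refund", "angry"}

def sentiment(message: str) -> int:
    """Return +1 (positive), -1 (negative) or 0 (neutral) for one message."""
    words = set(message.lower().split())
    pos, neg = len(words & POSITIVE), len(words & NEGATIVE)
    return (pos > neg) - (neg > pos)

def brand_pulse(messages):
    """Average sentiment across a stream of messages: the 'instant pulse'."""
    scores = [sentiment(m) for m in messages]
    return sum(scores) / len(scores) if scores else 0.0
```

A real deployment would swap the lexicon for a trained model, but the aggregation step, turning many message scores into one running pulse, stays the same.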

Understanding busy times

When is a bot working the hardest? Which countries and territories have the most conversations? At what times of day are people talking about certain things? Knowing when people are talking to a chatbot, where they are and what they are talking about provides insight into their behaviour.

This is valuable for preparing content, service and sales. If brands know when people need help, they understand the best time to deliver support and/or marketing collateral pre-emptively.

It is invaluable for marketing (right person at the right time with the right message), useful for service, and even helps development teams know when to push updates to the platform or services.
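In practice, "busy times" analysis is simple aggregation over conversation logs. A minimal sketch, assuming each logged event carries a timestamp and a country code (the field layout is an illustrative assumption):

```python
# Sketch: bucketing conversation timestamps by hour and territory to
# find when and where a bot works hardest. The log format is assumed.
from collections import Counter
from datetime import datetime

def busiest_hours(events):
    """events: iterable of (iso_timestamp, country) pairs.
    Returns a Counter keyed by (country, hour-of-day)."""
    buckets = Counter()
    for ts, country in events:
        hour = datetime.fromisoformat(ts).hour
        buckets[(country, hour)] += 1
    return buckets

log = [("2017-12-19T09:15:00", "SG"),
       ("2017-12-19T09:40:00", "SG"),
       ("2017-12-19T21:05:00", "UK")]
top = busiest_hours(log).most_common(1)[0]
```

The same bucketing trivially extends to topics or products by adding another key dimension.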

Knowing who the biggest brand fans are

A lot of money and time is spent identifying and segmenting power users. Brands want to be able to understand the audiences that interact with them the most, spend the most money or bring in the most referrals.

The chatbot gives insight into the type of people who talk the longest, interact the most and give the highest feedback scores. Brands can analyse and collect the data these power users provide to find and attract more people who behave in a similar way to them.

The insight also helps to improve the chatbot experience, because seeing what the typical power-user experience looks like means it can be recreated for others. It is a bit like Facebook's famous case study identifying that power users add seven friends within ten days.
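Identifying power users from chatbot data can start as a simple ranking over per-user stats. A hypothetical sketch, where the metrics and weighting are illustrative assumptions:

```python
# Sketch of flagging 'power users' from chat logs by minutes talked,
# interaction count and feedback score. The combined score's weights
# are illustrative assumptions, not a recommended formula.
def power_users(stats, top_n=2):
    """stats: {user: (minutes_talked, interactions, feedback_score)}.
    Returns the top_n users by a simple weighted score."""
    def score(s):
        minutes, interactions, feedback = s
        return minutes + interactions + 10 * feedback
    return sorted(stats, key=lambda u: score(stats[u]), reverse=True)[:top_n]

stats = {"ann": (30, 12, 5), "bob": (5, 3, 4), "cho": (45, 20, 5)}
top = power_users(stats)
```

Once ranked, the traits those users share become the targeting brief for finding lookalike audiences.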

Assessing usefulness

The data a chatbot provides also gives insight into whether it is pulling its weight and doing its job. Chatbots are designed and launched to be a useful tool for consumers and the business, so put that to the test.

Standard metrics like retention and engagement are very useful; for example, how long people talk with the bot, how often they come back and whether they switch channels. However, combining these with more bot-centric metrics like fall-backs to a human, conversation blocks and “I do not understand” errors gives a unique ability to assess how useful a bot is.

These metrics help to ensure the chatbot is meeting its one true goal better than any other channel could.
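Combining the standard and bot-centric metrics above into one report might look like this sketch, where the event labels ("fallback", "not_understood") are illustrative rather than any standard schema:

```python
# Sketch of a usefulness report mixing standard and bot-centric
# metrics. Event names are illustrative labels, not a standard schema.
def usefulness_report(events):
    """events: list of event-name strings from one period of bot traffic."""
    total = len(events)
    fallbacks = events.count("fallback")        # handed off to a human
    confused = events.count("not_understood")   # "I do not understand" errors
    handled = total - fallbacks - confused
    return {
        "total": total,
        "fallback_rate": fallbacks / total,
        "confusion_rate": confused / total,
        "self_served_rate": handled / total,
    }

report = usefulness_report(["ok", "ok", "fallback", "not_understood", "ok"])
```

Tracked period over period, a falling fallback rate is direct evidence the bot is getting more useful.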

Time to end-result

While we’re talking about goals, the time taken to reach the chatbot’s goal is also a good insight. If a chatbot is designed to sell, how quickly did it sell something? Does changing the flow, persona or style of conversation make it better? Do some upstream channels or marketing efforts lead to a faster time to sell, while others fail to convert?

It gives insight into which channels create sales-ready leads and which create cold leads.

If a chatbot is designed to help people, perhaps in a customer service capacity, then how quickly did it help? Did the user have to go through a 15-minute conversation before they got an answer? Or did they get it in 2 seconds? Is the chatbot getting faster at helping over time? Or is it getting worse?
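Time-to-goal is straightforward to compute once each conversation records a start time and the moment the goal was reached. A minimal sketch:

```python
# Sketch of measuring time-to-goal per conversation. Each conversation
# is recorded as a (start, goal_reached) pair of datetimes; the data
# layout is an assumption for illustration.
from datetime import datetime, timedelta

def median_time_to_goal(conversations):
    """Return the median duration from start to goal across conversations."""
    durations = sorted(end - start for start, end in conversations)
    mid = len(durations) // 2
    return durations[mid]  # simple upper median, fine for a sketch

convs = [
    (datetime(2017, 12, 1, 9, 0), datetime(2017, 12, 1, 9, 0, 2)),   # 2 seconds
    (datetime(2017, 12, 1, 10, 0), datetime(2017, 12, 1, 10, 15)),   # 15 minutes
    (datetime(2017, 12, 1, 11, 0), datetime(2017, 12, 1, 11, 1)),    # 1 minute
]
median = median_time_to_goal(convs)
```

Segmenting this median by upstream channel answers exactly the "which channels convert fastest" question above.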

The importance of chatbot data

These insights, data and analytics not only provide the power to A/B test, improve and iterate a chatbot, but they give never-before-seen insight into a brand’s audience.

This insight is not a quantitative “8 out of 10 people would refer you to a friend”, but a qualitative “I liked X, I disliked Y and I’d really like you more if you did Z” type data.

Of course, getting this data is nothing new; we have all launched surveys, focus groups and consumer research campaigns. The difference is that a chatbot offers this data at scale: all day, every day. For free.

After launching a chatbot, brands have so much data they will not know how to analyse it all. Often they will be drowning in conversations, sentiments and “what went wrong there” type information. The goal is to turn that into actionable intelligence.

As with all channels, it is crucial not to pivot, iterate and react based on hunches or personal feelings. Use the analytics and data to record, measure, understand and test.

Source: marketingtechnews.net; 19 Dec 2017

Cyborg-ism and what it means for media

Moon Ribas has an implant in her foot that responds to seismic activity. Why should media professionals care about that?

Moon Ribas is a self-described “elective cyborg”. The Spanish artist has a device implanted in her foot that vibrates in tandem with a worldwide network of seismographs. This tremor sense is just one of a series of experiments Ribas has done to augment her senses with technology—choices that she says have made her more human, rather than less.

In this video, Campaign speaks with Ribas about her experience, and how certain people react to it. We also hear from Chris Stephenson, PHD’s regional head of strategy, who argues that creatives and media professionals can’t afford to ignore the small but growing trend toward cyborg-isation.

Source: campaignasia.com; 21 Nov 2017

Hear What Voice Means for Brands

If a picture paints 1,000 words, how long will it take for Alexa to reel off my flight search results? Is that better than seeing them on my screen? To understand what a technology means for society, business and brands, it’s vital that we understand its limitations as well as its profound new possibilities. Voice is a naturally fast and sometimes magical way to input information, but as it currently stands it’s not always the best means to getting valuable outcomes.

Of course, that hasn’t stopped us from embracing voice. While many now avoid talking on the phone like it’s the plague in favour of exclusively texting, we are at the same time excitedly talking into smart machines at every chance we get. Regardless of the logic, voice now seems to be a natural and desirable way for consumers to navigate information, so we must ask: What does it mean for brands?

I think we need to take both a short- and long-term look at voice technologies. Amara’s Law declares that the short-term effects of technology are often overstated, while its long-term impact is often underestimated. It is in this important long-term context that I believe the world of voice has a lot of promise.

Short-term Disillusionment

The current obsession with voice changing everything is a little naive. It shows a degree of familiarity with technology, but only a passing understanding of humanity.

Quite honestly, I just don’t buy the predictions. Gartner thinks that by 2020, 30% of searches will be done without a screen (Gartner, October 2016). Activate predicts that the connected speaker will be the fastest adopted product ever, reaching 50% of U.S. households in the next three years (Activate, October 2017). Well, I’ve used and loved multiple voice-activated devices for two years, and I still just don’t buy it.

As Executive Vice President and Head of Innovation at Zenith USA, innovation is literally in my job title, but watching me use voice is a bit like watching my parents use a mouse for the first time — you can see a degree of wonder, but also that I’m using all sorts of brain muscles differently, and for the first time. The reality is that, for most people trained in the old way, voice is hard and the payoff not great.

Most articles about voice commerce seem to be written by those who have never used it. Buying things from a device is not easy. It’s not a faster way to get a pizza or a better way to procure shampoo; it’s a long and uncertain journey filled with concerns and friction. If all we ever knew was voice technology, the invention of the website would be our saviour. There are great use cases — the weather, alarm clocks, the news, music (if you are not picky or can actually remember song names) and a few more, but most of the time, commerce simply isn’t it.

In the next three years or so, I just don’t see life changing much thanks to voice technologies, despite the fact that millions of households may have bought them. Some people will of course continue to shout out demands for Alexa to order laundry detergent off Amazon, and others will dabble with ordering an Uber while they get dressed — just to see what it’s like. Perhaps a tiny percentage of households, for a tiny percentage of their purchases, will order this way, but to say that it’s the entire future of commerce or branding shows a lack of understanding.

Long-term Potential

If we take whole swathes of industries today, from hailing taxis to paying energy bills, ordering food to buying clothes, finding flights to booking hotels, the reality is that seeing is better than hearing. Voice is great for micro transactions and a magical way to check your credit card bill, find out the remaining data on your monthly phone plan, or pay off bills quickly, but in its current state, it simply isn’t the future of retail.

Combined with other emerging technologies, however, I see where the long-term potential of voice will come into play. A bigger disrupter to the overall retail industry is subscription shopping, the ultimate frictionless experience: The idea that you won’t ever need to engage anyone or anything because you merely have items arriving automatically each month or week. The magical piece to this — awareness — is where voice can truly come into play.

This is ultimately the same quandary faced by all brands: First, making sure the product is top of mind and that awareness is high; second, that the brand is liked and understood; and finally, that the products are good enough that people want to use them time and time again.

As emerging technologies like Artificial Intelligence begin to reach their true potential, they will power a whole new world of voice. In the long term, these technologies will become a new operating system, a new gateway and glue to all of the devices that we own. We will see screens operate around us, with TV ads that ask us to speak to our Alexa, and then allow us to ask for maps to stores and locations to update magically in our cars.

In the future, the next generation will have grown up using voice as the primary way to interact with devices, and people will therefore build business models and products on this construct. General AI will make voice devices feel human and rich and, above all else, smart. We will trust our devices, and automation will therefore remove many of our brand choices. But even in this crazy new world, we will still need to know that a W Hotel is trendy, that a Cadillac is a great car, that Domino’s makes great pizza. The importance of branding won’t change. The importance of advertising and marketing remains. As with all new ad technologies, we will just need to find that added layer of humanity to reach through these new devices to the consumer.

Tom Goodwin is the Executive Vice President and Head of Innovation at Zenith in the U.S. His role is to understand new technology, behaviours and platforms, and ideate and implement solutions for clients that take advantage of the new opportunities these make possible.

Source: mediavillage.com; 13 Nov 2017

The Next Great Media Channel Is the Self-Driving Car. Will Brands Be Ready?

The average driver spends 48 minutes behind the wheel

GM has increased production of self-driving Chevy Bolts. Volvo is partnering with self-driving car chipmaker Nvidia. Volkswagen is working to power its cars with A.I.

The conversation has fully shifted from “if” driverless cars will become the new normal to “when.” While the general public is eager for improved productivity, better safety and hopes for reduced traffic, marketers should be looking at the autonomous vehicle from a different angle: as the next great media channel.

As technology continues its advance, the car will become a hot spot for interaction, entertainment and information. It will also be a treasure trove of data.

While the exact time frame is hard to predict, the advent of 5G (with 100 times the data transfer speeds of 4G, plus better connectivity between devices and internet-enabled objects) will unlock huge opportunities for both the auto industry and the marketers who could exploit the new media experiences.

This opportunity will open up in two ways: first, as a precision marketing tool, using all the data the car will soon produce; second, as a mass-reach vehicle (pardon the pun), as self-driving cars become mini entertainment centres.

In the near future, autonomous cars will process staggering amounts of data: current and past destinations, speed of travel, demographics and biometrics of the riders, present and future weather, traffic conditions, and nearby landmarks and commercial locations—all of which marketers could access to achieve an unprecedented level of precision in consumer messaging.

Let’s consider what might soon be possible from a marketing perspective in this new channel for say, a coffee chain.

A passenger in a smart car passes a chain coffee shop on the way to work every morning. They have the coffee brand’s app. When they’re close, we programmatically target the rider, asking if they’d like to stop to pick up a medium soy latte—their preferred order at this time of day. If the rider says yes, their car is directed to the store, where their coffee is ready to be picked up at the counter, since payment has already been processed through the app.

In this example, you can see the confluence of benefits: Time of day meets exact location meets buyer-behaviour meets real-time messaging.

On the other side of our consumer’s day, recognizing the route and time stamp from work to home, a supermarket could serve up cooking content and then share an offer on ingredients. Or if passenger biometric data recognizes that the passenger is generally too hungry to wait, he or she could be served with ads and offers for nearby restaurants. Still on the way home but moving out of the food category, a network, premium channel or streaming service marketer could serve tune-in messages for that evening based on the driver’s historical preferences.

The opportunity doesn’t lie only in in-car targeting. Consider also that a car that drives itself gives users, who used to be drivers, time back in their day (an average of 48 minutes per day, according to AAA). More time means more chances to consume ad-funded or branded content, turning the car into another “opportunity to see,” either on mobile or in-car screens.

That means those tune-in messages on the drive home can actually be trailers and behind-the-scenes extras. On the weekends, travel brands could leverage past purchase data to predict preferred vacation times and locations to send targeted destination content ideally timed to when the consumer is in consideration mode, and more receptive to leisure-oriented messages.

Car companies will need to decide what role they wish to play as the producer of this new media “device.” Will they follow the model video game consoles have already established, licensing their tech to various content providers while fighting for valuable exclusive titles? How much control can and should they exert over the “pipes” of their cars?

Regardless of where those chips fall, they will be in the position to collect an amazing amount of data from the cars and those who ride in them. In a category in which the traditional ownership and profitability dynamics are shifting, licensing APIs and selling data will become an increasingly large part of how car brands make money.

Source: adweek.com; 31 Oct 2017

As Voice Has Its Moment, Amazon, Google and Apple Are Giving Brands a Way Into the Conversation

Few of their devices’ skills or apps are branded, but that’s changing

For decades, listening to ‘the voice of the customer’ has been the Holy Grail for marketers. Now, thanks to technology, they can do it millions of times each day.

According to Google and Bing, one in four searches is conducted by talking, not typing, a figure comScore predicts will reach 50 percent by 2020. That same year Echo alone will account for $7 billion in voice transactions—or vcommerce—per investment firm Mizuho Bank.

Voice is having its moment. People are talking, devices are listening and brands are attempting to insert themselves into the conversation, using Amazon Alexa voice skills and Google Home apps.

With a few choice phrases, consumers can order an Uber or Domino’s pizza on either device. Echo fans can also ask Patrón to help them make a margarita, consult Tide on how to remove stubborn stains, or get Campbell’s or Nestlé to serve up dinner recipes, among other skills.

Currently, only a small percentage of Alexa’s 25,000 voice skills are branded (Amazon won’t reveal how many). You’ll find even fewer in Google’s few hundred voice apps.

But that’s changing. Over the next few years, brand voices are about to get a lot louder.

Shots in the dark

Admittedly, many of those 25,000-odd voice apps are gimmicky—good for getting attention but not much else, noted Layne Harris, head of innovation technology for digital marketing agency 360i. But forward-thinking brands are embracing the technology now, he added, making voice skills a key element of their marketing strategy. Just last week, 360i launched a new practice solely focused on Amazon to help brands navigate the world of voice marketing.

When Patrón launched its voice skill in July 2016, it was part of a broader marketing initiative called the Cocktail Lab, involving 50 bartenders around the globe crafting new tequila-infused drinks, said Adrian Parker, vp of marketing for Patrón Spirits. (The distiller also just debuted an augmented reality app called the Patrón Experience for Apple’s iOS 11.)

Some 350,000 consumers have participated in the Cocktail Lab, said Parker, with more than 10 percent coming via the Alexa Skill. Since launching the lab, traffic to Patrón’s website has increased by 43 percent, thanks in part to Alexa users who spend more time on site and download more recipes.

“Voice was the first platform that allowed us to take what would traditionally be a face-to-face experience in a bar and make that virtually accessible,” Parker said. “Alexa is not only giving us the capability to engage with customers on their terms, it’s also preparing us for the voice-led future.”

Utility is key, said Greg Hedges, vp of emerging experiences at Rain, a digital consultancy that helped create Alexa apps for Campbell’s and Tide. The voice skill can’t merely be memorable; it must also be useful.

“The skills that see the most engagement are not just advertising,” he explained. “They take a step further towards connecting with consumers. They give people a reason to come back, because consumers know they can get the answers they’re looking for.”

For brands like Patrón and Campbell’s, getting consumers to drink more tequila and consume more chicken soup isn’t the only goal, said Charles Golvin, a research director for Gartner.

“They’re also trying to establish themselves as the voice of authority or curator across the broader product category that they serve,” he said. “It’s not just about selling Patrón tequila, it’s about being your mixologist expert. It’s not about selling Campbell’s soup, it’s about being your epicurean guide.”

A focus group of one

With the emergence of Alexa touchscreen devices like Echo Show and the new Echo Spot, brands also need to prepare for a voice+ world where results can be seen as well as heard, said Jonathan Patrizio, head of technical advisory at Mobiquity, a digital agency that developed Nestlé’s GoodNes recipe skill.

Using GoodNes on the Echo Show, home chefs can not only hear step-by-step instructions on how to make Baked Pesto Chicken or Korean Beef Bulgogi, but also see them displayed alongside images. Recipe users can also view the images via a GoodNes visual guide on their laptop’s or tablet’s browser.

“It’s a much more frictionless and natural way of interacting,” Patrizio said. “And if a brand can understand how to play in that domain, they’ve gained a great advantage over their competitors.”

But perhaps the most valuable thing brands glean from voice skills is data. Smart brands are building analytics into their skills and using the data to help drive new products and revenue streams.

“You can learn a lot from the things customers say,” said Hedges. “If Tide learns someone is asking about a specific stain and fabric combination, and it’s not one they’ve encountered before, maybe a new product comes out of that. With voice, it’s almost like a focus group of one.”

A key reason for building a voice skill is to gather data on customer usage and intent, said Patrizio.

“We built analytics into the GoodNes skill, and this lets Nestlé monitor Skill usage in aggregate since the developer doesn’t have access to the actual spoken recording,” he said. “For example, ‘Alexa, ask GoodNes to browse recipes’ is mapped to an intent, and we can track how many people used that intent, or how many times a single user requested this specific intent.”
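The aggregate intent analytics Patrizio describes amount to counting which intent each utterance mapped to, overall and per user, without ever touching the audio. A sketch, with illustrative intent names:

```python
# Sketch of aggregate intent analytics: the voice platform reports the
# intent each utterance mapped to (never the recording), and the brand
# counts intents overall and per user. Intent names are illustrative.
from collections import Counter, defaultdict

def intent_stats(events):
    """events: iterable of (user_id, intent) pairs.
    Returns (overall Counter, per-user dict of Counters)."""
    overall = Counter()
    per_user = defaultdict(Counter)
    for user, intent in events:
        overall[intent] += 1
        per_user[user][intent] += 1
    return overall, per_user

events = [("u1", "BrowseRecipes"), ("u2", "BrowseRecipes"), ("u1", "BrowseRecipes")]
overall, per_user = intent_stats(events)
```

This is the same "how many people used that intent, and how many times per user" breakdown the quote describes, just made concrete.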

Analytics can also reveal if the skill is working as the brand hoped it would. At this early stage, that’s not always the case.

Adam Marchick, CEO and co-founder of analytics company VoiceLabs, says that only 30 to 50 percent of conversational interactions are successful.

“It’s like we’re in year two of building web pages,” noted Marchick. “But right now, just giving brands conversational understanding—where they can actually see different voice paths and what’s working and what’s not—is a big step forward.”

The conversation is just beginning

Brands have been forced to react to similar technological upheavals before—notably with the shift to web and then to mobile. This time, though, they’re being more deliberate about it, said Joel Evans, co-founder and vp, digital transformation at Mobiquity.

“In the dot-com days websites were more like glorified brochures. We saw something similar happen when companies started doing mobile apps—they were just a check-off item,” he said. “Thankfully we’re not seeing that in the skills universe. Brands have realized it’s got to be the right experience when it actually gets out there.”

The next few years will see a huge acceleration of the technologies driving computer-human interaction—like artificial intelligence, natural language processing, chatbots and augmented reality. The voice apps we hear (and sometimes see) today may be nothing like the ones we encounter tomorrow. Smart brands are preparing for that now.

“Right now we’re creating the horse and carriage of voice technology,” said Patrón’s Parker. “Give it another 18 to 24 months, and we’ll be building Maseratis.”

Source: adweek.com; 10 Oct 2017

Consumers Have More Time Thanks to Technology, and Marketers Have More Opportunities to Fill It

Trends like ride sharing and IoT have seemed to lengthen the day


The day is now apparently 30 percent longer than it used to be.

Recently, I was taking an Uber while speaking to a colleague on the phone and simultaneously switching between Instagram, news headlines and texting a partner in Singapore on WhatsApp. Ten years ago, these activities did not and could not coexist; today, I can do them all simultaneously.

Most individuals love to think that time is scarce. There are only 24 hours in a day, right? Well, maybe not.

In Activate’s Tech and Media Outlook 2016, there is an interesting statistic derived from data at the Bureau of Labor Statistics, Nielsen and other leading sources suggesting that the average U.S. employed adult’s daily activity is equivalent to 31 hours and 28 minutes. In other words, the day is now apparently 30 percent longer than it used to be—and this will only continue to increase with the rapidly advancing trends of Internet of Things, the normalization of the “sharing economy,” autonomous vehicles, wearables, automation and artificial intelligence.

Time is a fascinating component of humanity: It must be used every day and is a resource one cannot hoard. But what if you challenged the notion that time is actually finite and scarce and instead began to look at the potential for its emerging abundance due to these trends? The winners in this scenario will be those marketers who can fill this new time—these new opportunities—in meaningful ways. More time does not equal less busy.

In 1955, Cyril Parkinson wrote an essay in which he stated, “Work expands so as to fill the time available for its completion.” This notion, which is known as Parkinson’s Law, is simple human nature. The same can be said about how we live our lives outside of work as well.

Mobile devices are the easiest way to grasp the notion of maximizing time as they provide a more frictionless way to read more books, to listen to more music, to send more emails, and to speak to more friends during moments that were previously not as convenient. Similarly, the rise in autonomous and shared vehicles provides consumers with time to enjoy more of what they want while giving smart brands and marketers opportunities to engage more deeply with them. For example, to Google, these self-driving vehicles are merely an accessory to one’s mobile device and enable more searches.

But what about other industries that may not be so obvious? In August 2016, Morgan Stanley published a piece of research titled Shared Autonomous Mobility: Potential Growth Opportunity for Beverage and Restaurant Firms.

Recognizing the serious social, public health and safety implications of such a study, Morgan Stanley noted that the introduction of ride-sharing services has also coincided with a reduction in DUI arrests in certain U.S. cities. San Diego, for example, a city with one of the highest DUI rates per capita, saw a 14 percent decrease in arrests between 2011 and 2013, when Uber began its operations. Conversely, the study also points out that when the city of Austin, Texas, temporarily banned Uber, there was a reduction in alcohol sales.

From a consumer and public safety perspective, the implications of the sharing economy in this instance are immense. And the impact for brands and marketers in it is real as well.

At the time of this study, the total global alcohol market was roughly $1.5 trillion. It was calculated that the incremental growth opportunity presented by shared mobility increased the total addressable marketplace by over $31 billion over 10 years, based on a +1 increase in drink consumption per month. This is immense growth for a change in behaviour that would otherwise be barely noticeable.

But this does not occur only in the sharing economy. Advances in IoT, wearables, automation and AI will all continue to increase the ease of multitasking for consumers, while the world recalibrates to these new norms with the perception that nothing feels different and everyone continues to believe they are stressed for time.

Marketing only succeeds in these extra moments when it provides increased value or native utility that meets the consumer’s expectations. In his book The End of Advertising, Andrew Essex articulates this as the need to move away from producing “the thing that interrupts the thing.”

This is why Taco Bell’s Cinco de Mayo lens on Snapchat was viewed 224 million times in one day last year; why consumers have such an affinity toward Citi Bike in New York City; or why SSGA’s Fearless Girl has had over 2.3 billion social media impressions.

“Time is money” is an adage that Benjamin Franklin is credited with saying, and, in an era of time abundance, the marketers who navigate this moment properly will be the true winners.

Keith Grossman is the global CRO at Bloomberg Media and has been part of two teams that have previously won Adweek Project Isaac Awards.

Source: adweek.com; 11 Sep 2017

The Next Generation of Emoji Will Be Based on Your Facial Expressions

There’s no faking your feelings with these social icons.

An app called Polygram uses AI to automatically capture your responses to friends’ photos and videos.

A new app is trying to make it simpler to react to photos and videos that your friends post online—it uses AI to capture your facial expressions and automatically translate them into a range of emoji faces.

Polygram, which is free and available only for the iPhone for now, is a social app that lets you share things like photos, videos, and messages. Unlike on, say, Facebook, though, where you have a small range of pre-set reactions to choose from beyond clicking a little thumbs-up icon, Polygram uses a neural network that runs locally on the phone to figure out if you’re smiling, frowning, bored, embarrassed, surprised, and more.

Marcin Kmiec, one of Polygram’s cofounders, says the app’s AI works by capturing your face with the front-facing camera on the phone and analysing sequences of images as quickly as possible, rather than just looking at specific points on the face like your pupils and nose. This is done directly on the phone, using the iPhone’s graphics processing unit, he says.
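The sequence-based approach Kmiec describes can be illustrated with a minimal sketch. This is not Polygram's actual code; the labels, probabilities, and window size are hypothetical, and the per-frame classifier is simulated. The point is simply that smoothing predictions over a run of frames, rather than trusting any single snapshot, suppresses one-frame glitches such as blinks.

```python
import numpy as np

# Hypothetical expression labels a per-frame classifier might emit.
LABELS = ["smile", "frown", "surprise", "neutral"]

def aggregate_window(frame_probs, window=5):
    """Smooth per-frame class probabilities over a sliding window.

    frame_probs: (n_frames, n_classes) array of softmax outputs.
    Returns one label per frame once the window is full, chosen by
    averaging probabilities across the window and taking the argmax.
    """
    labels = []
    for i in range(window - 1, len(frame_probs)):
        mean_p = frame_probs[i - window + 1 : i + 1].mean(axis=0)
        labels.append(LABELS[int(mean_p.argmax())])
    return labels

# Simulated classifier output: a strong "smile" signal on every frame
# except one spurious "surprise" frame (e.g. a blink or motion blur).
probs = np.full((8, 4), 0.05)
probs[:, 0] = 0.85
probs[3] = [0.05, 0.05, 0.85, 0.05]

print(aggregate_window(probs))  # the glitch frame is voted down
```

Averaged over any five-frame window, the lone outlier frame cannot outvote its neighbours, so the reported expression stays stable—one plausible reason to analyse sequences rather than isolated frames.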

When you look at a post in the app (for now the posts seem to consist of a suspicious amount of luxury vacation spots, fancy cars, and women in tight clothing), you see a small yellow emoji on the bottom of the display, its expression changing along with your real one. There’s a slight delay—20 milliseconds, which is just barely noticeable—between what you’re expressing on your face and what shows up in the app. The app records your response (or responses, if your expression changes a few times) in a little log of emoji on the side of the screen, along with those of others who’ve already looked at the same post.

The app is clearly meant to appeal to those who really care about how they’re perceived on social media: users can see a tally of the emoji reactions to each photo or video they post to the app, as well as details about who looked at the post, how long they looked at it, and where they’re located. This might be helpful for some mega-users, but could turn off those who are more wary about how their activity is tracked, even when it’s anonymized.

And, as many app makers know, it’s hard to succeed in social media; for every Instagram or Snapchat there are countless others that fail to catch on. (Remember Secret? Or Path? Or Yik Yak? Or Google+?) Polygram’s founders say they’re concentrating on using the technology in their own app for now, but they also think it could be useful in other kinds of apps, like telemedicine, where it could be used to gauge a patient’s reaction to a doctor or nurse, for instance. Eventually, they say, they may release software tools that let other developers come up with their own applications for the technology.

Source: technologyreview.com; 28 August 2017

Facebook and Apple Are About to Take AR Mainstream. Here’s How Marketers Are Gearing Up

The UN, Ikea and the PGA Tour hone their augmented-reality chops

Apple’s ARKit platform at launch will be used by major brands like Ikea.

This past weekend in New York, the United Nations created a Facebook Live filter for World Humanitarian Day that let users overlay their real-time clips with augmented reality, particularly scrolling copy that told stories about civilians who have been affected by conflict. In Times Square, AR-enhanced videos aired on one of the iconic, commercial intersection’s large billboards. The endeavour was powered by Facebook’s 4-month-old AR system, dubbed Camera Effects Studio, which is getting the attention of brand marketers.

“For us, Facebook is an amazing platform to develop AR on because people are inherently using it already,” said Craig Elimeliah, managing director of creative technology at VML, the UN’s agency. “It includes Instagram as well. It includes Live and regular camera—so the sheer scale is unbelievable.”

While AR is still exploratory territory for marketers and media companies, its pixelated push to the mainstream has gotten a series of boosts this year from some of the biggest digital players. Snapchat—with its wacky filters and other virtual overlays—has continued to be popular among teens (even if Wall Street doesn’t like its pace). Apple, which has long been seen as a potential AR game changer due to the popularity of its iPhone and iPad, seems primed to give AR the turbocharge it needs to attract older demographics. When the Cupertino, Calif.-based company releases its iOS 11 mobile operating system in September, hundreds of millions of Apple-device owners will have augmented reality at their fingertips with a set of features called ARKit.

“Apple and Facebook will make augmented reality an everyday reality,” said David Deal, a digital marketing consultant. “We’ll see plenty of hit and miss with AR as we did when Apple opened up the iPhone to app developers, but ultimately both Apple and Facebook are in the best position to steamroll Snapchat with AR.”

Ikea, which will be one of the first major brands on Apple’s AR platform at launch, is developing an app that allows customers to see what furniture and other household items would look like in a three-dimensional view inside their homes. Ikea also plans to introduce new products in the AR app before they hit store shelves.

An AR visit to a Japanese-Canadian internment camp aims to inspire empathy. Jam3

Other brands and their agency partners are working on prototypes and developing spatial storytelling by layering detailed imagery, pertinent information and other brand-centric assets into physical spaces. For instance, PGA Tour has tapped Possible Mobile to develop an immersive rendering of 3-D golf course models in ARKit that should go live in the next six months. It would seem the initiative is generally driven by the sport of golf’s recent problems courting millennials. But Luis Goicouria, svp of digital platforms and media strategy at PGA Tour, contended that the 3-D experience is being built for all demographics.

“It really isn’t a gimmick designed to appeal to a single age group,” Goicouria said. “Apple is notable because their installed base is so large and the platform so consistent that it allows us to bring something to a large group very quickly, and that gives us immediate feedback from our fans on what is working.”

Apple’s ARKit should be ripe for innovative storytelling. With that in mind, Toronto-based digital production firm Jam3 is using the platform to create a historical and educational narrative app called East of the Rockies in collaboration with Joy Kogawa, an author and former internee at a Japanese-Canadian internment camp. The experiential app will illuminate aspects of detainee life in three chapters on the user’s smartphone screen, with each new episode triggered as users virtually “walk around” the environment and get close to different locations. “[It will] create a lot of empathy in ways that I didn’t think was possible in a digital medium,” noted Jason Legge, producer at Jam3.

The possibilities will only become more self-evident as augmented reality grows. The number of users is expected to jump by 36 percent between now and 2020, when 54.4 million Americans will regularly engage with AR, according to eMarketer.

Source: adweek.com; 20 August 2017

Here’s What You Need to Know About Voice AI, the Next Frontier of Brand Marketing

67 million voice-assisted devices will be in use in the U.S. by 2019

Soon enough, your breakfast bar could be your search bar. Your lamp could be how you shop for lightbulbs. Your Chevy or Ford might be your vehicle for finding a YouTube video, like the classic SNL skit of Chevy Chase’s send-up of President Gerald Ford, to make a long drive less tedious. And you won’t have to lift a finger—all you’ll need to do is turn toward one of those inanimate objects and say something. Welcome to a future where your voice is the main signal for the elaborate data grid known as your life.

Two decades ago when Amazon and Google were founded, only a seer could have predicted that those companies would eventually start turning the physical world into a vast, voice-activated interface. Artificial intelligence-powered voice is what perhaps makes Amazon and Google the real duopoly to watch (sorry, Facebook), as their smart speakers—2-year-old Amazon Echo and 8-month-old Google Home—are gaining traction. Forty-five million voice-assisted devices are now in use in the U.S., according to eMarketer, and that number will rise to 67 million by 2019. Amazon Echo, which utilizes the ecommerce giant’s voice artificial intelligence called Alexa, owns roughly 70 percent of the smart speaker market, per eMarketer.

“Our vision is that customers will be able to access Alexa whenever and wherever they want,” says Steve Rabuchin, vp of Amazon Alexa. “That means customers may be able to talk to their cars, refrigerators, thermostats, lamps and all kinds of devices in and outside their homes.”

While brand marketers are coming to grips with a consumer landscape where touch points mutate into listening points, search marketing pros are laying important groundwork by focusing on what can be done with Amazon Echo and Google Home (the latter of which employs a voice AI system called Assistant). With voice replacing fingertips, search is ground zero right now when it comes to brands.

But how will paid search figure into it all?

Gummi Hafsteinsson, Google Assistant product lead, says that, for now, the goal is to create a personalized user experience that “can answer questions, manage tasks, help get things done and also have some fun with music and more. We’re starting with creating this experience and haven’t shared details on advertising within the Assistant up to this point.”

While Hafsteinsson declined further comment about ads, agencies are now preparing for them. “In the near term, [organic search] is going to be the way to get your brands represented for Google Home,” says 360i president Jared Belsky, who points to comScore data that forecasts 50 percent of all search will be via voice tech by 2020. “Then ultimately, the ads auction will follow. You’ll be bidding to get your brand at the top of searches. I believe that’s the way it will go. Think about it—it has to.”

Jeremy Lockhorn, vp of emerging media at SapientRazorfish, remarks, “The specificity of voice search combined with what any of the platforms are already able to surmise about your specific context [such as current location] should ultimately result in more personalized results—and for advertisers, more narrowly targeted.”

Brands—which are accustomed to being relatively satisfied when showing up in the top five query results on desktops and phones—should brace for a new search reality, Belsky warns, where they may have to worry about being in either the first or second voice slot or else risk not being heard at all. “There’s going to be a battle for shelf space, and each slot should theoretically be more expensive,” he says. “It’s the same amount of interest funnelling into a smaller landscape.”

Scott Linzer, vp of owned media at iCrossing, adds that consumers may not “find it an inviting or valuable-enough experience to listen to two, three or more search results.”

Marketer interest in voice search is downright palpable in some circles. Fresh off a tour of client visits in Miami, Dallas, Chicago, Washington, D.C., and St. Louis, 360i’s Belsky reports that “every CMO, every vp of marketing and, especially, every ecommerce client is asking about this subject first and foremost. And they have three questions. ‘What should I do to prepare for when voice is the driver of ecommerce?’ The second one is, ‘What content do I have to think about to increase my chances to be the preferred answer with these devices?’ And, ‘Will all my search budget one day migrate onto these devices?’ There’s not obvious answers to any of these questions. Being early to all of this means you get the spoils.”

National Public Radio may not seem like an obvious media brand when it comes to being an early AI adopter, but the 47-year-old news organization has moved quickly in the space. Its news, music and storytelling app, NPR One, is infused with machine learning, offering listeners curated options based on their demonstrated preferences. And the organization extended the reach of NPR One’s content by inking a deal with Amazon Alexa in February. Since then, the phrase, “Alexa, play NPR News Now,” has been regularly heard in 400,000 homes that use the service. Additionally, NPR One has functionalities that its corporate sponsors have appreciated for some time; in particular, letting brands deliver extended audio messaging on mobile when consumers want to hear more.

“They can press a button on the phone to hear more about what [carmaker] Kia is doing around innovation,” says Meg Goldthwaite, NPR’s marketing chief. Is voice-enabled digital advertising—where the listener asks for more Kia content and perhaps even fills out a test-drive form—right around the bend for NPR? “It’s absolutely possible,” she says.

Goldthwaite is probably onto something. Last year, IBM launched Watson Ads, which lets viewers “talk” with a brand’s ad and request additional info. Toyota, Campbell Soup and Unilever have tested the units, often averaging between one and two minutes of engagement, per Big Blue. “We have already begun to see that consumers are spending more time with these cognitive ads than with other digital ads,” says Carrie Seifer, CRO for the IBM Watson content and IoT platform.

Imagine being able to talk to video ads via Amazon Echo Show, a smart screen that comes juiced with Alexa voice functionalities. Costing $230, the device shipped in late June and provides a glimpse into next-gen households of the 2020s with talking screens, holograms, appliances and vehicles that Amazon exec Rabuchin alluded to.

“Voice is a productivity tool,” remarks Linda Boff, CMO of GE. Her company provides an intriguing business-to-business example of how AI-based utilities will soon help enormously expensive products like locomotives, jet engines and turbines sell themselves. For instance, GE has developed a prototype that lets an otherwise insentient locomotive send a voice message to a repair technician describing what needs to be fixed. It’s part of GE’s Digital Twin program, which creates 3-D representations of industrial assets and can process information gathered from individual machines globally to better inform decisions. It’s essentially machine-based crowdsourcing for when trains need to fill up on digital feedback instead of diesel fuel.

“The twin can call you and tell you, ‘I have an issue with my rotor,’” explains Colin Parris, vp, GE Software Research, who reveals that his brand’s voice AI features will roll out widely in 2018. “It can provide basic information that a service rep would want. Like, ‘What was the last journey you took? How many miles did you travel?’”

General Electric’s Digital Twin program has turbines and trains learning how to speak to their customers.

Staples also plans to flip the switch on its b-to-b voice AI initiative early next year. In partnership with IBM Watson, the retailer’s Easy Button—a fixture in its TV spots—will add an intelligent, voice-activated ordering service, which already has been tested in recent months by office managers with enterprise clients.

The business supplies chain is practically providing a step-by-step tutorial on how to build such an Echo-like device. Teaming with conversational design and software company Layer, Staples first built an AI-anchored chat app and carefully analysed the conversations.

“It helped us narrow down our use cases and create more robust experiences around the core challenges we were trying to solve,” says Ian Goodwin, head of Staples’ applied innovation team. “Once we got a really good idea of what our customers were asking on chat channels, then we built the voice experience with the Easy Button. It was really a gradual ramp-up rather than just going out and doing it.”

Indeed, voice AI probably shouldn’t be rushed to market. One company that understands that is Trunk Club, a Nordstrom-owned men’s clothier that recently rejected an offer from Amazon to be a fashion partner for Echo. Justin Hughes, Trunk Club’s vp of product development, isn’t AI-averse—he’s hoping to use voice activation in the next 18 months to spur his company’s subscriptions-based sales. But the timing with Amazon wasn’t right for his brand.

“If you are going to purchase between $300 and $1,000 of clothes, we don’t want it to be a weird experience—we want it to be something you return to often,” Hughes says. “There is so much imperfection in voice AI right now; it can be clunky.”

The vp also pointed to an elephant in the room—data ownership—when it comes to retailers partnering with Amazon or Google. “We just don’t want to give them all of our talk tracks,” Hughes says.

What’s more, hackers in the future might find a way to siphon off data collected from various “listening points” in homes and offices. Just last spring, Burger King caught considerable heat from privacy advocates after its television spot—which, in spite of the brouhaha, won a Cannes Grand Prix—hacked Google Home devices around the country.

The last thing voice-focused marketers want is Uncle Sam on the case. “As in-home, voice-controlled AI technology becomes even more prevalent and evolves in terms of substance—more capable of offering real answers to real questions—marketers will need to be increasingly careful to properly follow FTC disclosure and advertising guidelines,” notes advertising lawyer Ronald Camhi.

And since voice AI could greatly impact search-driven commerce, it’d probably be wise for Amazon and Google to encourage industry best practices. Then again, they might want to actually form a larger circle that also includes Facebook, Apple, Samsung and Microsoft. Facebook last week purchased AI start up Ozlo and is rumoured to be developing speaker technology of its own. Apple has star power in Siri, and Samsung in late July debuted voice capabilities for its assistant, Bixby. Also, Microsoft will continue to market its AI assistant, Cortana, in hopes of getting a significant piece of voice real estate.

Says Abhisht Arora, general manager of search and AI marketing at Microsoft, “Our focus is to go beyond voice search to create an assistant that has your back.”

A Rube Goldberg apparatus for the marketing history books

What do you get when you jerry-rig six Google Homes, six Echo Dots, six laptops, six soundproof boxes and four extension cords? You get an apparatus that’d make Back to the Future’s Doc Brown proud.

Along with a bottle of Puerto Rican rum for the technicians, those are all of the ingredients that were poured into 360i’s VSM, or Voice Search Monitor, which is the brainchild of Mike Dobbs, the agency’s long-time vp of SEO. This makeshift system is designed to give clients like Norwegian Cruise Line an edge as smart speakers increasingly influence consumers’ purchase decisions.

VSM asks Google Home and the Echo Dot (Echo’s little sister device) thousands of questions around the clock, and whether each query is answered is automatically recorded in an Excel spreadsheet. An early round of testing found that in the travel category, Google answered 72 percent of questions while Amazon responded to 13 percent. In a second round covering finance, Google answered 68 percent of questions while Amazon answered roughly 14 percent.
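The kind of tally VSM produces can be sketched in a few lines. This is an illustration only: the field names and sample rows below are invented for the example, not 360i's actual spreadsheet schema. The idea is just to log one row per query per assistant with an answered/unanswered flag, then compute each assistant's answer rate per category.

```python
import csv
import io

# Hypothetical log in the spirit of VSM's spreadsheet: one row per query
# per assistant, with a yes/no "answered" flag.
LOG = """assistant,category,query,answered
google,travel,cheapest flight to miami,yes
google,travel,best cruise line for families,yes
amazon,travel,cheapest flight to miami,no
amazon,travel,best cruise line for families,yes
"""

def answer_rates(log_csv):
    """Return {assistant: fraction of logged queries it answered}."""
    answered, total = {}, {}
    for row in csv.DictReader(io.StringIO(log_csv)):
        a = row["assistant"]
        total[a] = total.get(a, 0) + 1
        answered[a] = answered.get(a, 0) + (row["answered"] == "yes")
    return {a: answered[a] / total[a] for a in total}

print(answer_rates(LOG))  # {'google': 1.0, 'amazon': 0.5}
```

Run against thousands of real queries instead of four sample rows, the same aggregation yields category-level percentages like the 72-versus-13 travel split the agency reported, and the unanswered rows are exactly the "white space" Dobbs describes.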

Dobbs thinks unanswered questions present brands with an opportunity to become the search result with targeted, voice-minded digital content. “That’s one interesting thing that we believe is going to be white space that marketers need to explore,” he says. “They can raise their hands to take conversations on when major systems don’t have the data sources or depths to [provide an answer].”

Right now, his team is focused on speakers. But how long before they have to pivot and figure out how voice-powered refrigerators and other appliances impact clients’ needs?

“I don’t think it’s 2025—I honestly think it will be in the next two to five years,” Dobbs predicts. “Those technologies are not as elegant as they need to be right now. But, yeah, in five years—we’ll be there.”

Source: adweek.com; 6 Aug 2017