The Next Generation of Emoji Will Be Based on Your Facial Expressions

There’s no faking your feelings with these social icons.

An app called Polygram uses AI to automatically capture your responses to friends’ photos and videos.

A new app aims to make it simpler to react to photos and videos that your friends post online—it uses AI to capture your facial expressions and automatically translate them into a range of emoji faces.

Polygram, which is free and available only for the iPhone for now, is a social app that lets you share things like photos, videos, and messages. Unlike on, say, Facebook, where beyond clicking a little thumbs-up icon you have only a small range of pre-set reactions to choose from, Polygram uses a neural network that runs locally on the phone to detect whether you’re smiling, frowning, bored, embarrassed, or surprised, among other expressions.

Marcin Kmiec, one of Polygram’s cofounders, says the app’s AI works by capturing your face with the front-facing camera on the phone and analysing sequences of images as quickly as possible, rather than just looking at specific points on the face like your pupils and nose. This is done directly on the phone, using the iPhone’s graphics processing unit, he says.
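Polygram hasn’t published details of its model, but the pipeline Kmiec describes—scoring each frame, then pooling scores over a short sequence rather than trusting any single frame—can be sketched in a few lines. Everything below (the label set, the scores, the function names) is illustrative, not Polygram’s actual code:

```python
import math

# Hypothetical label set; a real model would cover many more expressions.
LABELS = ("smiling", "frowning", "surprised")

def softmax(logits):
    """Turn one frame's raw expression scores into probabilities."""
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def classify_sequence(frame_logits):
    """Average per-frame probabilities over a short window of frames,
    then pick the dominant expression -- smoothing across the sequence
    instead of reacting to any single noisy frame."""
    probs = [softmax(f) for f in frame_logits]
    avg = [sum(p[i] for p in probs) / len(probs) for i in range(len(LABELS))]
    return LABELS[max(range(len(LABELS)), key=avg.__getitem__)]

# Three consecutive frames, each leaning toward "smiling".
frames = [(2.0, 0.1, 0.3), (1.5, 0.2, 0.9), (1.8, 0.0, 0.4)]
print(classify_sequence(frames))  # smiling
```

On a phone, the per-frame scores would come from a neural network running on the GPU, as Kmiec describes; the sequence-level smoothing is what keeps the on-screen emoji from flickering between expressions.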

When you look at a post in the app (for now the posts seem to consist of a suspicious number of luxury vacation spots, fancy cars, and women in tight clothing), you see a small yellow emoji at the bottom of the display, its expression changing along with your real one. There’s a slight delay—20 milliseconds, which is just barely noticeable—between what you’re expressing on your face and what shows up in the app. The app records your response (or responses, if your expression changes a few times) in a little log of emoji on the side of the screen, along with those of others who’ve already looked at the same post.

The app is clearly meant to appeal to those who really care about how they’re perceived on social media: users can see a tally of the emoji reactions to each photo or video they post to the app, as well as details about who looked at the post, how long they looked at it, and where they’re located. This might be helpful for some mega-users, but could turn off those who are more wary about how their activity is tracked, even when it’s anonymized.

And, as many app makers know, it’s hard to succeed in social media; for every Instagram or Snapchat there are countless ones that fail to catch on. (Remember Secret? Or Path? Or Yik Yak? Or Google+?) Polygram’s founders say they’re concentrating on using the technology in their own app for now, but they also think it could be useful in other kinds of apps, like telemedicine, where it could be used to gauge a patient’s reaction to a doctor or nurse, for instance. Eventually, they say, they may release software tools that let other developers come up with their own applications for the technology.

Source:; 28 August 2017

Facebook and Apple Are About to Take AR Mainstream. Here’s How Marketers Are Gearing Up

The UN, Ikea and the PGA Tour hone their augmented-reality chops

Apple’s ARKit platform at launch will be used by major brands like Ikea.

This past weekend in New York, the United Nations created a Facebook Live filter for World Humanitarian Day that let users overlay their real-time clips with augmented reality, particularly scrolling copy that told stories about civilians who have been affected by conflict. In Times Square, AR-enhanced videos aired on one of the iconic, commercial intersection’s large billboards. The endeavour was powered by Facebook’s 4-month-old AR system, dubbed Camera Effects Studio, which is getting the attention of brand marketers.

“For us, Facebook is an amazing platform to develop AR on because people are inherently using it already,” said Craig Elimeliah, managing director of creative technology at VML, the UN’s agency. “It includes Instagram as well. It includes Live and regular camera—so the sheer scale is unbelievable.”

While AR is still exploratory territory for marketers and media companies, its pixelated push to the mainstream has gotten a series of boosts this year from some of the biggest digital players. Snapchat—with its wacky filters and other virtual overlays—has continued to be popular among teens (even if Wall Street doesn’t like its pace). Apple, which has long been seen as a potential AR game changer due to the popularity of its iPhone and iPad, seems primed to give AR the turbocharge it needs to attract older demographics. When the Cupertino, Calif.-based company releases its iOS 11 mobile operating system in September, hundreds of millions of Apple-device owners will have augmented reality at their fingertips with a set of features called ARKit.

“Apple and Facebook will make augmented reality an everyday reality,” said David Deal, a digital marketing consultant. “We’ll see plenty of hit and miss with AR as we did when Apple opened up the iPhone to app developers, but ultimately both Apple and Facebook are in the best position to steamroll Snapchat with AR.”

Ikea, which will be one of the first major brands on Apple’s AR platform at launch, is developing an app that allows customers to see what furniture and other household items would look like in a three-dimensional view inside their homes. Ikea also plans to introduce new products in the AR app before they hit store shelves.

An AR visit to a Japanese-Canadian internment camp aims to inspire empathy. Jam3

Other brands and their agency partners are working on prototypes and developing spatial storytelling by layering detailed imagery, pertinent information and other brand-centric assets into physical spaces. For instance, the PGA Tour has tapped Possible Mobile to develop an immersive rendering of 3-D golf-course models in ARKit that should go live in the next six months. The initiative would seem to be driven largely by golf’s recent trouble courting millennials, but Luis Goicouria, svp of digital platforms and media strategy at the PGA Tour, contended that the 3-D experience is being built for all demographics.

“It really isn’t a gimmick designed to appeal to a single age group,” Goicouria said. “Apple is notable because their installed base is so large and the platform so consistent that it allows us to bring something to a large group very quickly, and that gives us immediate feedback from our fans on what is working.”

Apple’s ARKit should be ripe for innovative storytelling. With that in mind, Toronto-based digital production firm Jam3 is using the platform to create a historical and educational narrative app called East of the Rockies in collaboration with Joy Kogawa, an author and former internee at a Japanese-Canadian internment camp. The experiential app will illuminate aspects of detainee life in three chapters on the user’s smartphone screen, with each new episode triggered as users virtually “walk around” the environment and get close to different locations. “[It will] create a lot of empathy in ways that I didn’t think was possible in a digital medium,” noted Jason Legge, producer at Jam3.

The possibilities will only become more apparent as augmented reality grows. The number of AR users is expected to jump by 36 percent between now and 2020, when 54.4 million Americans will regularly engage with AR, according to eMarketer.

Source:; 20 August 2017

Here’s What You Need to Know About Voice AI, the Next Frontier of Brand Marketing

67 million voice-assisted devices will be in use in the U.S. by 2019

Soon enough, your breakfast bar could be your search bar. Your lamp could be how you shop for lightbulbs. Your Chevy or Ford might be your vehicle for finding a YouTube video, like the classic SNL skit of Chevy Chase’s send-up of President Gerald Ford, to make a long drive less tedious. And you won’t have to lift a finger—all you’ll need to do is turn toward one of those inanimate objects and say something. Welcome to a future where your voice is the main signal for the elaborate data grid known as your life.

Two decades ago, when Amazon and Google were founded, only a seer could have predicted that those companies would eventually start turning the physical world into a vast, voice-activated interface. AI-powered voice is perhaps what makes Amazon and Google the real duopoly to watch (sorry, Facebook), as their smart speakers—the 2-year-old Amazon Echo and the 8-month-old Google Home—are gaining traction. Forty-five million voice-assisted devices are now in use in the U.S., according to eMarketer, and that number will rise to 67 million by 2019. Amazon Echo, which utilizes the ecommerce giant’s voice artificial intelligence, called Alexa, owns roughly 70 percent of the smart-speaker market, per eMarketer.

“Our vision is that customers will be able to access Alexa whenever and wherever they want,” says Steve Rabuchin, vp of Amazon Alexa. “That means customers may be able to talk to their cars, refrigerators, thermostats, lamps and all kinds of devices in and outside their homes.”

While brand marketers are coming to grips with a consumer landscape where touch points mutate into listening points, search marketing pros are laying important groundwork by focusing on what can be done with Amazon Echo and Google Home (the latter of which employs a voice AI system called Assistant). With voice replacing fingertips, search is ground zero right now when it comes to brands.

But how will paid search figure into it all?

Gummi Hafsteinsson, Google Assistant product lead, says that, for now, the goal is to create a personalized user experience that “can answer questions, manage tasks, help get things done and also have some fun with music and more. We’re starting with creating this experience and haven’t shared details on advertising within the Assistant up to this point.”

While Hafsteinsson declined further comment about ads, agencies are now preparing for them. “In the near term, [organic search] is going to be the way to get your brands represented for Google Home,” says 360i president Jared Belsky, who points to comScore data that forecasts 50 percent of all search will be via voice tech by 2020. “Then ultimately, the ads auction will follow. You’ll be bidding to get your brand at the top of searches. I believe that’s the way it will go. Think about it—it has to.”

Jeremy Lockhorn, vp of emerging media at SapientRazorfish, remarks, “The specificity of voice search combined with what any of the platforms are already able to surmise about your specific context [such as current location] should ultimately result in more personalized results—and for advertisers, more narrowly targeted.”

Brands—which are accustomed to being relatively satisfied when showing up in the top five query results on desktops and phones—should brace for a new search reality, Belsky warns, where they may have to worry about being in either the first or second voice slot or else risk not being heard at all. “There’s going to be a battle for shelf space, and each slot should theoretically be more expensive,” he says. “It’s the same amount of interest funnelling into a smaller landscape.”

Scott Linzer, vp of owned media at iCrossing, adds that consumers may not “find it an inviting or valuable-enough experience to listen to two, three or more search results.”

Marketer interest in voice search is downright palpable in some circles. Fresh off a tour of client visits in Miami, Dallas, Chicago, Washington, D.C., and St. Louis, 360i’s Belsky reports that “every CMO, every vp of marketing and, especially, every ecommerce client is asking about this subject first and foremost. And they have three questions. ‘What should I do to prepare for when voice is the driver of ecommerce?’ The second one is, ‘What content do I have to think about to increase my chances to be the preferred answer with these devices?’ And, ‘Will all my search budget one day migrate onto these devices?’ There’s not obvious answers to any of these questions. Being early to all of this means you get the spoils.”

National Public Radio may not seem like an obvious media brand when it comes to being an early AI adopter, but the 47-year-old news organization has moved quickly in the space. Its news, music and storytelling app, NPR One, is evidently infused with machine learning, offering listeners curated options based on their shown preferences. And the organization extended the reach of NPR One’s content by inking a deal with Amazon Alexa in February. Since then, the phrase, “Alexa, play NPR News Now,” has been regularly heard in 400,000 homes that use the service. Additionally, NPR One has functionalities that its corporate sponsors have appreciated for some time; in particular, letting brands get extended audio mobile messaging when consumers want to hear more.

“They can press a button on the phone to hear more about what [carmaker] Kia is doing around innovation,” says Meg Goldthwaite, NPR’s marketing chief. Is voice-enabled digital advertising—where the listener asks for more Kia content and perhaps even fills out a test-drive form—right around the bend for NPR? “It’s absolutely possible,” she says.

Goldthwaite is probably onto something. Last year, IBM launched Watson Ads, which lets viewers “talk” with a brand’s ad and request additional info. Toyota, Campbell Soup and Unilever have tested the units, often averaging between one and two minutes of engagement, per Big Blue. “We have already begun to see that consumers are spending more time with these cognitive ads than with other digital ads,” says Carrie Seifer, CRO for the IBM Watson content and IoT platform.

Imagine being able to talk to video ads via Amazon Echo Show, a smart screen that comes juiced with Alexa voice functionalities. Costing $230, the device shipped in late June and provides a glimpse into next-gen households of the 2020s with talking screens, holograms, appliances and vehicles that Amazon exec Rabuchin alluded to.

“Voice is a productivity tool,” remarks Linda Boff, CMO of GE. Her company provides an intriguing business-to-business example of how AI-based utilities will soon help enormously expensive products like locomotives, jet engines and turbines sell themselves. For instance, GE has developed a prototype that lets an otherwise insentient locomotive send a voice message to a repair technician describing what needs to be fixed. It’s part of GE’s Digital Twin program, which creates 3-D representations of industrial assets and can process information gathered from individual machines globally to better inform decisions. It’s essentially machine-based crowdsourcing for when trains need to fill up on digital feedback instead of diesel fuel.

“The twin can call you and tell you, ‘I have an issue with my rotor,’” explains Colin Parris, vp, GE Software Research, who reveals that his brand’s voice AI features will roll out widely in 2018. “It can provide basic information that a service rep would want. Like, ‘What was the last journey you took? How many miles did you travel?’”

General Electric’s Digital Twin program has turbines and trains learning how to speak to their customers.

Staples also plans to flip the switch on its b-to-b voice AI initiative early next year. In partnership with IBM Watson, the retailer’s Easy Button—a fixture in its TV spots—will add an intelligent, voice-activated ordering service, which already has been tested in recent months by office managers with enterprise clients.

The business supplies chain is practically providing a step-by-step tutorial on how to build such an Echo-like device. Teaming with conversational design and software company Layer, Staples first built an AI-anchored chat app and carefully analysed the conversations.

“It helped us narrow down our use cases and create more robust experiences around the core challenges we were trying to solve,” says Ian Goodwin, head of Staples’ applied innovation team. “Once we got a really good idea of what our customers were asking on chat channels, then we built the voice experience with the Easy Button. It was really a gradual ramp-up rather than just going out and doing it.”

Indeed, voice AI probably shouldn’t be rushed to market. One company that understands that is Trunk Club, a Nordstrom-owned men’s clothier that recently rejected an offer from Amazon to be a fashion partner for Echo. Justin Hughes, Trunk Club’s vp of product development, isn’t AI-averse—he’s hoping to use voice activation in the next 18 months to spur his company’s subscriptions-based sales. But the timing with Amazon wasn’t right for his brand.

“If you are going to purchase between $300 and $1,000 of clothes, we don’t want it to be a weird experience—we want it to be something you return to often,” Hughes says. “There is so much imperfection in voice AI right now; it can be clunky.”

The vp also pointed to an elephant in the room—data ownership—when it comes to retailers partnering with Amazon or Google. “We just don’t want to give them all of our talk tracks,” Hughes says.

What’s more, hackers in the future might find a way to siphon off data collected from various “listening points” in homes and offices. Just last spring, Burger King caught considerable heat from privacy advocates after its television spot—which, in spite of the brouhaha, won a Cannes Grand Prix—hacked Google Home devices around the country.

The last thing voice-focused marketers want is Uncle Sam on the case. “As in-home, voice-controlled AI technology becomes even more prevalent and evolves in terms of substance—more capable of offering real answers to real questions—marketers will need to be increasingly careful to properly follow FTC disclosure and advertising guidelines,” notes advertising lawyer Ronald Camhi.

And since voice AI could greatly impact search-driven commerce, it’d probably be wise for Amazon and Google to encourage industry best practices. Then again, they might want to form a larger circle that also includes Facebook, Apple, Samsung and Microsoft. Facebook last week purchased AI startup Ozlo and is rumoured to be developing speaker technology of its own. Apple has star power in Siri, and Samsung in late July debuted voice capabilities for its assistant, Bixby. Also, Microsoft will continue to market its AI assistant, Cortana, in hopes of getting a significant piece of voice real estate.

Says Abhisht Arora, general manager of search and AI marketing at Microsoft, “Our focus is to go beyond voice search to create an assistant that has your back.”

A Rube Goldberg apparatus for the marketing history books

What do you get when you jerry-rig six Google Homes, six Echo Dots, six laptops, six soundproof boxes and four extension cords? You get an apparatus that’d make Back to the Future’s Doc Brown proud.

Along with a bottle of Puerto Rican rum for the technicians, those are all of the ingredients that were poured into 360i’s VSM, or Voice Search Monitor, which is the brainchild of Mike Dobbs, the agency’s long-time vp of SEO. This makeshift system is designed to give clients like Norwegian Cruise Line an edge as smart speakers increasingly influence consumers’ purchase decisions.

VSM asks Google Home and the Echo Dot (the Echo’s little-sister device) thousands of questions around the clock, and whether each query gets answered is automatically recorded in an Excel spreadsheet. An early round of testing found that for the travel category, Google answered 72 percent of questions while Amazon responded to 13 percent of the queries. In a second round of testing, on finance, Google answered 68 percent of questions while Amazon answered roughly 14 percent.
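360i hasn’t described VSM’s internals beyond the spreadsheet, but the core bookkeeping—log which assistant answered which query, then compute an answer rate per assistant—is simple to sketch. The function and the toy log below are illustrative, with counts chosen to mirror the travel-category figures quoted above:

```python
from collections import Counter

def answer_rates(results):
    """results: iterable of (assistant, answered) pairs, one per query.
    Returns the fraction of its queries each assistant answered."""
    asked = Counter(assistant for assistant, _ in results)
    answered = Counter(assistant for assistant, ok in results if ok)
    return {a: answered[a] / asked[a] for a in asked}

# Toy log of 100 travel questions per device, mirroring the quoted results.
log = ([("google_home", True)] * 72 + [("google_home", False)] * 28
       + [("echo_dot", True)] * 13 + [("echo_dot", False)] * 87)
print(answer_rates(log))  # {'google_home': 0.72, 'echo_dot': 0.13}
```

The unanswered queries are the interesting output: each one is a gap where, as Dobbs argues below the tooling level, a brand could supply the missing content.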

Dobbs thinks unanswered questions present brands with an opportunity to become the search result with targeted, voice-minded digital content. “That’s one interesting thing that we believe is going to be white space that marketers need to explore,” he says. “They can raise their hands to take conversations on when major systems don’t have the data sources or depths to [provide an answer].”

Right now, his team is focused on speakers. But how long before they have to pivot and figure out how voice-powered refrigerators and other appliances impact clients’ needs?

“I don’t think it’s 2025—I honestly think it will be in the next two to five years,” Dobbs predicts. “Those technologies are not as elegant as they need to be right now. But, yeah, in five years—we’ll be there.”

Source:; 6 Aug 2017

Amazon India Gets RBI Approval To Launch E-Wallet Services

After Flipkart and Snapdeal, Amazon is set to enter the digital payments market in India.

Amazon India said on Thursday that it has received Reserve Bank of India (RBI) approval to operate a Pre-Paid Instrument (PPI), or its own digital wallet, giving the Seattle-based e-commerce giant an opportunity to grab a part of the growing but increasingly crowded digital-payments pie.

The development was first reported by Medianama.
“We are pleased to receive our PPI license from the RBI. Our focus is providing customers a convenient and trusted cashless payments experience.”
Sriram Jagannathan, VP Payments, Amazon India.

In March, the RBI issued draft guidelines suggesting higher capital requirement for entities offering PPIs and tougher know your customer (KYC) norms for customers using these services. According to the draft proposals, all entities seeking fresh approvals to launch PPIs will need to have an audited net worth of Rs 25 crore, which must be maintained at all times.

The RBI also proposed to tweak an earlier provision where you could hold up to Rs 10,000 in a ‘minimum information’ PPI account. PPI issuers will now have to convert every ‘minimum information’ account into a fully KYC compliant account within 60 days. In the interim, customers can hold up to Rs 20,000 in the account.

In his statement, Jagannathan said that the company hopes the regulator will continue with the existing simpler KYC norms to ensure that the use-case for wallets does not diminish.

“RBI is in the process of finalizing the guidelines for PPIs. We look forward to seeing a continuation of the low limit wallet dispensation with simplified KYC and authentication,” said Jagannathan.

Amazon has been inching towards an entry into digital payments for the last few months.

In December, Amazon launched its Pay Balance service to encourage cashless transactions; customers can fund their prepaid balance using internet banking or a credit or debit card. However, this service was restricted to transactions on Amazon.

Last February, Amazon acquired the Noida-based payments solution provider EMVANTAGE Payments Pvt Ltd for an undisclosed amount. The same month, it also roped in former Citibank executive Sriraman Jagannathan to head its payments business.

Amazon had applied for a wallet license last year in the name of Amazon Online Distribution Services Private Limited. The license was issued to Amazon on March 22 and is valid till March 31, 2022.

The approval for Amazon’s e-wallet comes at a time when all major e-tailers, Flipkart, Paytm and Snapdeal, have launched their own wallets. The volume of transactions through PPIs has also risen, particularly in the months after demonetisation. According to RBI data, the value of payments through PPIs rose from Rs 1,320 crore in November to Rs 2,150 crore in March.

Amazon venturing into the digital wallet space could provide some competition to Paytm, said Harish HV, partner at Grant Thornton.
“This will not only boost their sales, but can give good competition to Paytm which has been ruling the wallets game in India, if they plan to run it as an independent business. How serious is Amazon about the wallet business remains to be seen.”
Harish HV, Partner, Grant Thornton

PPIs are one part of the broader digital-payments space, which also includes newly launched services built on platforms like the Unified Payments Interface (UPI). The government-promoted BHIM is one such service. BharatQR, a QR-code-based payment service, is another competitor to PPIs. According to a recent report by advisory firm IMAP, the volume of payments on digital platforms in India could hit $500 billion by 2020.

Source:; 12 April 2017

Google Thinks It Has Cracked the VR Adoption Problem

It’s launching a high-end wireless headset and new software improvements that might finally make you want to try virtual reality.

For most consumers, virtual reality is still a technology of the future.

Google hopes that by making the virtual world more convenient and accessible, more people will want to dive in.

This was the overarching message at the company’s annual developer conference this week in Mountain View, California, where executives like Clay Bavor, who leads virtual and augmented reality efforts, laid out what’s coming next for its Daydream VR platform—including powerful wireless headsets that can track your head position and orientation without special external sensors, and software changes that encourage users to spend more time in VR while sharing what they’re doing with others and not missing out on other things they may want to know about.

Altogether, the updates, coupled with Google’s clout as a leader in many technology spaces (search, Web browsing, mobile, to name a few) could mark a huge change in the visibility and uptake of virtual reality over time. And if it doesn’t work, it could be a huge setback for virtual reality—if not the end of it entirely.

Google has already put in a lot of work and money in hopes of bringing virtual-reality technology to the mainstream, through efforts like Google Cardboard—a foldable VR viewer that works with a smartphone—and, more recently, the more capable but still phone-dependent Daydream VR platform.

Yet while over 10 million of the Cardboard viewers have shipped since it was released in 2014, and there are about 150 apps out for the Google-made Daydream headset that shipped late last year, consumer adoption has been slow going. About 10 million headsets shipped globally last year, according to IDC, a market researcher, which is just a tiny fraction of the 1.5 billion smartphones that shipped in the same time frame.

There are lots of reasons why people aren’t buying into the technology. It’s isolating, and there aren’t a ton of things to do. All of the high-end headsets need to be connected to a computer or gaming console, and it’s annoying to feel cords flying around on your back and shoulders when you’re trying to forget about actual reality and explore, say, Mars in VR. And it’s expensive—a typical headset and its required computing platform will cost you anywhere from about $750 to well over $1,000, depending on what you’re buying.

Google is chipping away at both the clunkiness and the cost by working with chipmaker Qualcomm on a reference design for a standalone wireless VR headset, and the company says that HTC and Lenovo are working with Google to build headsets along these lines.

The first of these headsets is expected to be available later this year, and Mike Jazayeri, Daydream’s director of product management, expects they will be priced similarly to desktop-connected VR devices today, minus the price of a PC—so, probably around $600 to $800.

“There’s just less friction. Put the headset on and you’re ready to go,” Jazayeri said.

I got to try an older prototype of one of these headsets this week, watching a short scene from the Star Wars film Rogue One to show off another innovation Google will roll out—a software tool called Seurat that can show desktop-computer-quality graphics on mobile VR devices by simplifying a given scene, computation-wise. The headset was fairly comfortable, with a wheel-style adjustment on the back of my head, and the imagery looked impressively crisp, even when I spun around or kneeled down on the ground to see the reflections better on the shiny virtual floor. When I moved too far in one direction or another, the world around me darkened to let me know I shouldn’t go any farther.

Google’s wireless plan could be a big deal for the VR industry. Several wireless virtual-reality headsets that seek to marry high-end visuals and head tracking with an untethered design have been shown off, but none has come out yet, and the wireless headsets that are available, like Google’s existing Daydream View and Samsung’s Gear VR, aren’t nearly as capable and won’t work without a really good smartphone.

Google is also making its software simpler and more comfortable. For instance, Daydream will add a dashboard so you can see Android notifications and settings without leaving VR, said Jazayeri. And he said it will start letting you share a two-dimensional view on, say, a TV screen of what you’re seeing in your VR headset with other people in the room—an effort to make the technology feel less isolating for those who are using it and more inclusive of those who aren’t.

Google hopes that these moves, plus others like a group 360-video-watching experience that YouTube will launch later this year and a plan to launch Google’s Chrome browser in virtual reality, could make users more interested in trying virtual reality and make it feel more familiar, too.

Source:; 18 May 2017

Google Lens Is A Peek Into The Future Of Computing

Squint and you can see how Google plans to bypass the Search box–and the screen entirely.

Just minutes into Google I/O–the company’s biggest event of the year–CEO Sundar Pichai announced what could be the future of Google as you know it: Google Lens.

Google Lens is an AI-powered interface that’s coming to Google Photos and Google Assistant. It’s “a set of vision-based computing abilities that can understand what you’re looking at and help you take action based upon that information,” as Pichai put it during his keynote today.

What does that mean in practice? Using computer image recognition–which Pichai reminded us is currently better than that of humans–Google Lens can recognize what’s in your camera’s view, and actually do something meaningful with that information, rather than just tagging your friends–which is how Facebook uses image recognition.


Pichai showed off three examples. In one, a camera aimed at a flower identified that flower, in what appeared to be a Google reverse image search in real time. In the second, a camera aimed at a Wi-Fi router’s SKU–a long list of numbers and a barcode, which would take time to type–automatically snagged the login information and connected to the internet. And in the third, a camera aimed around a street full of shops and cafes pulled up reviews of each restaurant, placing an interface element directly over each facade in your field of view.

Later in the keynote, another presenter came on stage to show how, with a tap inside Google Assistant, Google Lens could translate a Japanese menu and pull up an image of the dish–in what looks like a riff on the technology Google acquired with Word Lens.

Alone, each of these ideas is a bit of a novelty. But all built into one platform–whether that’s a smartphone or an AR headset in the future–you can see how Google is imagining breaking free of the Search box to be even more integrated with our lives, living intimately in front of our retinas.

“We are beginning to understand video and images,” says Pichai, casually. “All of Google was built because we started to understand webpages. So the fact that we can understand images and videos has profound impact on our core vision.”

Source:; 17 May 2017

Facebook Is Working on Technology That Lets You Type and Control VR Devices With Your Mind

Could a ‘brain mouse’ be coming?

Think Facebook’s plans for virtual and augmented reality push the boundaries of how we interact? The social network is now working on technology that lets you type with your mind.

The company revealed it’s working on a “brain-to-computer interface” that will let humans potentially type five times faster with their mind than they currently can with their fingers. The innovation is part of a secretive sector of Facebook called Building 8, a unit that’s devoted to “moonshot” projects that are often as expensive as they are ambitious.

The projects were unveiled today at Facebook’s F8 developer conference in San Jose, Calif., by Regina Dugan, who heads up Building 8. With Facebook’s mind-reading technology, people will be able to wear non-invasive sensors that will hopefully allow them to type at a rate of 100 words per minute by decoding neural activity devoted to speech. Dugan said the technology could be used to help the disabled, but it could also become a way to input thoughts and commands directly into virtual reality and augmented reality devices. On stage, she showed a video of the technology being used to help a woman with ALS type at a rate of eight words per minute.
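The speed figures quoted above imply a baseline worth spelling out. A minimal sketch of the arithmetic, using only the numbers from the talk (deriving the finger-typing baseline from the "five times faster" claim is my own inference, not something Dugan stated directly):

```python
# Figures quoted in the F8 talk; the finger-typing baseline is derived,
# not quoted, so treat it as an inference.
brain_wpm = 100   # target typing rate for the neural interface
speedup = 5       # "five times faster" than typing with fingers

finger_wpm = brain_wpm / speedup
print(finger_wpm)  # ~20 words per minute assumed for phone typing

als_demo_wpm = 8   # rate shown in the on-stage ALS demo
print(brain_wpm / als_demo_wpm)  # demo runs at ~1/12 of the target rate
```

In other words, the on-stage demonstration sits at less than a tenth of the promised rate, which gives a sense of how far the research still has to go.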

“It sounds impossible, but it’s closer than you realize,” she told thousands of developers during the final talk of the two-day event. “And it’s just the kind of fluid, human-computer interface needed for AR. Even something as simple as a yes-no brain click would fundamentally change our capability. A brain mouse for AR.”

According to Dugan, the human brain moves far faster than anyone can talk, which creates limits to how we communicate. She explained that the mind is capable of producing around 1 terabit of information per second—roughly the same amount of data as streaming 40 high-definition movies every second. Speech, however, runs at a rate of about 100 bits per second, or about the same bandwidth as an ’80s internet modem.
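Dugan’s two figures can be put side by side to show the gap she is describing. A quick sanity check of the ratio (the numbers are hers as quoted in the talk; the unit conversion is my assumption):

```python
# Bandwidth figures as quoted in Dugan's F8 talk.
BRAIN_BITS_PER_SEC = 1e12   # ~1 terabit per second
SPEECH_BITS_PER_SEC = 100   # ~100 bits per second

ratio = BRAIN_BITS_PER_SEC / SPEECH_BITS_PER_SEC
print(f"speech carries roughly 1/{ratio:.0e} of the quoted brain rate")
# For scale: '80s dial-up modems ran at roughly 300-2,400 bits per second,
# so 100 bits/s is indeed in that neighbourhood.
```

A ten-billion-fold gap between what the brain produces and what speech can transmit is the "lousy compression algorithm" Dugan refers to in the next paragraph.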

“Speech is essentially a compression algorithm and a lousy one at best,” she said. “That’s why we love great writers and poets, because they’re just a little bit better at compressing the fullness of a thought into words.”

Along with the mind-typing tech, Facebook is also working on a way to let people hear through their skin. The idea is a kind of artificial cochlea: a “haptic vocabulary” in which people wear something on their sleeve and learn to understand words from patterns of vibration in the arm.

Prior to joining Facebook last year, Dugan led Google’s Advanced Technologies and Projects Lab and before that ran the U.S. military’s R&D lab DARPA. She said Building 8 is modelled after DARPA, while building products “that recognize we are both mind and body, that our world is both digital and physical” and that “seek to connect us with the power and possibility of what’s new while honouring the intimacy of what’s timeless.”

“Your brain contains more information than what a word sounds like or how it is spelled,” she said. “It also contains semantic information that tells us what those words mean … Understanding semantics means that one day you may be able to choose to share your thoughts independent of language. English, Spanish or Mandarin may become the same.”

Facebook isn’t the only technology company investing in mind-reading. Last month, The Wall Street Journal reported that Elon Musk is creating his own brain-computer interface with a new venture called Neuralink, which is developing a way to implant brain electrodes to upload and download thoughts.

Source: 19 Apr 2017

Just Eat: Voice Technology Will Make Your Phone as Good as a Human

Voice input and artificial intelligence are close to offering users an experience as seamless as visiting a restaurant, Just Eat’s chief product and technology officer has said.

Fernando Fanton was speaking to an audience at 4YFN, the start-up spin-off of Mobile World Congress in Barcelona. He described Just Eat not as an app but a platform whose objective is to “reduce friction” and “open many more situations” for people to order food online.

He said: “One of the things we’re really excited about that we think will make a difference is voice interface.

“The problem of having to understand a particular way of communicating will fade away, and the experience will be as natural as going into a restaurant and asking a person, what should I eat today? We are not far away from that reality.”

He predicted that “material improvements” in voice interfaces would arrive this year or next – but added that the business was also looking at technological avenues including augmented and virtual reality.

Fanton used his speech to announce the launch of a start-up accelerator programme, running for 12 weeks from April, which aims to find partners to improve the business environment in a number of ways.

“We need to offer our restaurant partners an open ecosystem,” he said. “It’s not just about ecommerce, making a restaurant more profitable, but it’s about the entire world around us – what can we do to take care of food waste properly?”

Fanton acknowledged that this approach could lead to Just Eat bolstering its own potential rivals: “The true competition in this space is in the future – it hasn’t been invented yet.”

Source: 27 Feb 2017