We have grown accustomed to asking virtual assistants such as Siri and Alexa to perform small tasks and provide basic information for us.
But if the CEO of a Samsung-backed startup has his way, "artificial humans" will become your teachers, physicians, financial advisors, and perhaps your closest friends. It's a polarizing concept that became the most talked-about topic at the CES tech show in Las Vegas this week.
For now, the idea is mostly speculation. But pulling back the curtain on the current technology raises questions about how this innovation would actually play out in real life, and whether it's a future we would want.
Star Labs, a Samsung-backed innovation lab, showed off its AI-powered avatars, called Neons, in videos displayed on giant TVs at CES. Each appears at human scale: one is a yoga instructor who can help you perfect your downward-facing dog; another is a local news anchor who can deliver news tailored to your interests in your preferred language; a third, a financial advisor, can help you get your retirement plan on track.
The videos attempted to depict how, in the future, someone might interact with such realistic avatars, or even form a relationship with one. Yet the technology on display today was wonky: a prototype of just a single functioning Neon, riddled with delays. Her expressions and gestures were far less believable in real time, and were controlled from a nearby device by the company's CEO, Pranav Mistry. Onlookers were allowed to ask questions, but her responses too often missed the mark. Asked to name her favorite CES gadget, for example, she replied: "Las Vegas."
Neon's CES debut came on the heels of a mysterious social media push leading up to the conference, sparking rumors and generating buzz about the coming of the next big thing in AI. Shortly after the debut, however, it became obvious that the hype was overblown. Mistry admits the technology still needs work ("It's just a baby right now"), but his vision is for "the digital species" to one day be everywhere: in your favorite chat apps, at home, or in shops. Rather than tapping buttons on a fast-food kiosk, you might have a normal conversation with a realistic-looking human AI.
If properly executed, the creations, addressed by names such as Frank and Hanna (rather than "Hey Google"), may offer an intriguing yet uncomfortable glimpse of what human-like AI lifeforms could mean for our future.
“The marketing rhetoric around the Neons is quite extreme at a time when AI generates lots of confusion and anxiety [with topics such as] humans replacing machines, AI ethics issues and deep fakes,” said Thomas Husson, a principal analyst at Forrester Research. “But if they’re able to successfully express emotions, they would help enhance interactions between consumers and brands, and more broadly humanize technology.”
It’s a tall order for a firm that Mistry says has been working on Neon for only four months. Its core technology, a blend of behavioral neural networks and algorithms, has clear limitations, but Mistry said it will soon be able to support its own original content, expressions, emotions, movements and eventually memory.
“Right now, Neon doesn’t have any intelligence per se,” he said. “They are behaving intelligently, but they don’t have the concept of learning or memory. [Eventually], she will remember that you like pizza or something you’re reading.”
Despite Samsung's backing, Neon is not connected to any Samsung devices or to its Bixby voice assistant. A spokesperson for Star Labs told CNN Business that Samsung knew only a few details of the concept ahead of its CES debut.
Neon plans to launch a business model later this year but hasn't settled on one yet. Mistry said a subscription service is one option, and the company is also working to secure business partnerships.
There is no doubt the idea of a "digital species" is controversial. Big names in tech, including Elon Musk and Bill Gates, have warned against developing strong artificial intelligence. Gates has called AI both "promising and dangerous."
Typically, these concerns revolve around what's known as artificial general intelligence, or AI that can, for the most part, do the things a human can do.
"As demonstrated by Neon, we are still very far from a commercially ready AGI solution," said Lian Jye Su, a principal analyst at ABI Research. "The best AI nowadays are narrow [ones] that perform singular tasks very well, such as the camera AI in our smartphones, the defect inspection camera AI on an assembly line, and the facial recognition AI in payment terminals." Su added that we should "always question the intention and financial rationale behind attempts to make artificial general intelligence a reality."
Other firms are developing AI that can converse better with us, but without a human-like interface. Google showed off Duplex, which allows AI to make human-like phone calls, two years ago, while Microsoft is increasingly expanding its Cortana platform to be more conversational. Mistry said he is aware of the concerns surrounding human-like AI.
“There’s always good and bad [sides] of any technology and how we use it,” he said. “That applies to not only AI, but any technology. We believe that it’s our human responsibility, and this generation’s responsibility, that … if we [build something] today, we want to ensure that from the ground up from the architecture level, from the design level, that it’s not misused in a wrong place.”
Neon's concept also arrives at a time when companies like Facebook (FB), Google (GOOG) and Amazon (AMZN) are trying to restore customer trust after a string of privacy controversies. In 2019, both Amazon and Apple came under fire for using third-party contractors to listen to and transcribe user requests made through their voice assistants. Putting a human-like AI into your home, one that knows your food, behavior or financial habits, raises questions about where that intimate data could end up.
“Our future can come without compromising our privacy,” Mistry said. “And that is what we are designing — an architecture [that makes sure] any interaction between you and your Neon or you and any Neon, no one has, including me, as a CEO of this company, access to that information.”
At this stage, a Neon remains a simulated human assistant that aims merely to provide intelligent, human-like responses.
"But potential implications, such as if such an avatar was embodied into a humanoid robot or could have a true conversation with you, will generate more discussions about AI ethics and regulation," Husson said.