Last week, I was thrilled to have the opportunity to interview one of the brightest minds in robotics and AI today: Ben Goertzel. As the chief scientist at Hanson Robotics, the company that created the AI robot Sophia, Ben is a true innovator and a visionary in the field. He is a founder and the current CEO of SingularityNET, a company that focuses on bringing AI and blockchain together to create a decentralized open market for AI; the chairman of the Artificial General Intelligence Society and the OpenCog Foundation; an advisor to Singularity University; and a research professor in the Fujian Key Lab for Brain-Like Intelligent Systems at Xiamen University, China. Ben also served as the director of research of the Machine Intelligence Research Institute (formerly the Singularity Institute).
Here are a few of the highlights from our conversation on the future of AI:
Lisa Chai: Ben, when I saw you and Sophia at an AI conference last year, I was amazed at some of the human facial expressions she has been able to capture recently. What other intelligent upgrades has Sophia gone through in addition to her micro expressions? Can you talk about your other groundbreaking work on robotics and human interaction, particularly around Sophia?
Ben Goertzel: Sophia is improving all the time. She now has legs and a rolling base, and she can even choose her mode of locomotion based on the occasion! Sophia has expressive arms and hands, and she can mirror her human conversation partner’s facial expressions with nuance and sensitivity. She can animate over 60 expressions, and her built-in cameras coordinate with her head and eye motions to track people’s eyes and faces and maintain eye contact. Sophia can even draw pictures now, using her new arms and hands, which are integrated with her vision processing.

Since last summer, we’ve also done a lot of experimentation with various AI systems for controlling Sophia’s natural language dialogue, including the OpenCog AGI (Artificial General Intelligence) system running in the SingularityNET framework. When OpenCog is used to drive Sophia’s dialogue, it is able to assemble its own responses rather than select pre-coded snippets of dialogue.
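To make the face-tracking idea concrete, here is a minimal sketch of the kind of control loop involved. This is not Hanson Robotics’ actual code: it assumes OpenCV’s bundled Haar face detector, an assumed camera field of view, and a hypothetical set_head_angles motor interface, and it simply steers the head toward the largest detected face to hold eye contact.

```python
import cv2

# Hypothetical motor interface; a real robot would expose something similar.
def set_head_angles(pan_deg, tilt_deg):
    print(f"head -> pan {pan_deg:+.1f} deg, tilt {tilt_deg:+.1f} deg")

detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
camera = cv2.VideoCapture(0)
FOV_H, FOV_V = 60.0, 40.0  # assumed camera field of view, in degrees

while True:
    ok, frame = camera.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    if len(faces) > 0:
        # Track the largest (presumably closest) face.
        x, y, w, h = max(faces, key=lambda f: f[2] * f[3])
        fh, fw = frame.shape[:2]
        # Offset of the face center from the image center, in [-0.5, 0.5].
        dx = (x + w / 2) / fw - 0.5
        dy = (y + h / 2) / fh - 0.5
        # Convert the pixel offset into pan/tilt corrections.
        set_head_angles(dx * FOV_H, -dy * FOV_V)
    cv2.imshow("tracking", frame)
    if cv2.waitKey(30) == 27:  # Esc to quit
        break
camera.release()
cv2.destroyAllWindows()
```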
Isn’t OpenCog a project that aims to build an open-source artificial intelligence framework? I know that the design is primarily driven by your research; it is essentially an architecture for robot and virtual embodied cognition, intended to give rise to human-equivalent artificial general intelligence.

You’re absolutely right. OpenCog is currently in use by more than 50 companies, including Cisco. Over the next year we look forward to using more advanced OpenCog tools and SingularityNET AI to enhance Sophia’s dialogue and her emotional and pragmatic human interactions. The goal is for the Sophia robot and others like her to fully understand everything a human says. Clearly, we’re not there yet, but we are making good progress, and it is a realistic technology goal. If achieved, this will make Sophia-like robots remarkably valuable tools for home and commercial environments, and it will set the stage for humans and robots to learn an amazing amount from each other.
I have always wanted to ask you this: How do you architect a personality?

At the moment, creating an artificial personality is a bit of a technological “black art”; there is some artistry to it. It’s a complex process that requires coding specific rules for a multitude of behaviors and reactions: linguistic, physical, and emotional.
Machine learning also plays a role: models are trained to guide behaviors based on data about human behaviors that have been identified as good training examples. There is also a role for reinforcement, adapting responses and behaviors based on what seems to work and what doesn’t.
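As an illustration of how those three ingredients (hand-coded rules, learned preferences, and reinforcement) can be combined, here is a minimal sketch in Python. The rule table, the scoring scheme, and the bandit-style weight update are all hypothetical stand-ins, not Hanson Robotics’ or OpenCog’s actual mechanisms.

```python
import random

# Hand-coded personality rules: trigger word -> candidate responses.
RULES = {
    "hello": ["Hello there!", "Hi! Lovely to meet you."],
    "sad":   ["I'm sorry to hear that.", "That sounds hard. Tell me more?"],
}

# Learned preference weights per response (the reinforcement part).
weights = {resp: 1.0 for responses in RULES.values() for resp in responses}

def respond(utterance):
    """Pick a rule-matched response, weighted by what has worked before."""
    for trigger, candidates in RULES.items():
        if trigger in utterance.lower():
            total = sum(weights[c] for c in candidates)
            return random.choices(
                candidates, [weights[c] / total for c in candidates])[0]
    return "Interesting. Tell me more."

def reinforce(response, reward):
    """Nudge a response's weight up or down based on the observed reaction."""
    if response in weights:
        weights[response] = max(0.1, weights[response] + 0.2 * reward)

reply = respond("hello robot")
print(reply)
reinforce(reply, reward=+1.0)  # e.g., the human smiled back
```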
When will a robot like Sophia be capable of passing the Turing Test? Do you think we will come to a point where we can’t tell the difference between a robot and a human?

I think we may be only a couple of years from an AI system that can pass the Turing Test “artificially,” meaning that it will be able to imitate human dialogue, but without any real understanding of what it’s talking about. Having a physical robot pass the Turing Test organically is probably many years off because, as realistic as Sophia is, she is certainly not hard to distinguish from a human if you’re really trying.
Can you tell us about the initial concern or pushback that Hanson Robotics faced when building Sophia? How did these concerns get addressed?

The most striking initial reactions to Sophia were adulation and imitation rather than concerns or pushback. Sophia’s first name was originally Eva, but then the movie Ex Machina came out, and the AI character in the film had a very similar name: Ava. So David Hanson of Hanson Robotics decided to change the name to Sophia. Once she got more popular and became a bit of a superstar, questions arose regarding the purpose and wisdom of making robots human-like in appearance.

The issue here is that when a robot looks like a person, people will sometimes attribute to that robot more intelligence, understanding, and awareness than it may actually have. From a commercial perspective and a satisfying-human-needs perspective, this is a big plus. But some people have seen ethical issues here, in terms of the possibility that people would form a deep emotional bond with an AI or robot on the false premise that it understands and empathizes with them more deeply than is really the case.

I do think it’s important for AI and robot makers to be open and explicit with customers about what kind of AI or robotic system they’re interacting with. On the other hand, there is a lot of subtlety and nuance here, and a lot of unanswered questions. We don’t need to obsessively protect people from becoming emotionally attached to AI systems that are only partially human-like; this is a valid part of life and potentially a highly rewarding one. We do need to be open with people about what kinds of systems they are interacting with.
Broadly speaking, what is some cutting-edge research regarding emerging applications of AI that you are seeing?

AI is being applied everywhere, in every area of human pursuit. I’m especially fascinated with applications of AI in the biomedical world. We’re seeing more and more results come out based on using AI to analyze clinical medicine or genomics data, creating new diagnostics or suggesting new therapies. In our own work at the SingularityNET AI group, we have been applying OpenCog and other AI tools to analyze the DNA of supercentenarians (people aged 105 or over), and we are finding all sorts of dramatic, special things about their DNA.

My oldest son, Zarathustra, is doing his PhD on the application of machine learning to automated theorem proving, in other words, automating math. At this year’s AI for Theorem-Proving conference, there were contingents from Google and Facebook, evidence that even very difficult, mostly obscure, and non-commercial areas of AI are now becoming popular! Each year, the ability of AI to guide theorem provers to prove new theorems, or find new proofs of old ones, becomes more and more impressive. Since math is at the core of science and engineering, the implications of this sort of work will ultimately be extremely dramatic.

AI is also being used extensively to diagnose crop disease from images of plant leaves, and to track the spread of crop disease from aerial photographs. While it may not be as commercially prominent as AI in facial recognition, AI in precision agriculture will likely have a far-reaching impact on humanity.
You are known as the pioneer of Artificial General Intelligence. How do you define AGI? How is this translating into our lives? How far are we from realizing this?

So basically, AGI refers to AI that can learn how to solve problems quite different from those that were explicitly embodied in its programming or training. To do this, an AI has to be able to generalize fairly robustly from its prior experience; it needs to be able to interpret its own experiences in a broader context.

I view the current “Narrow AI” revolution as a sort of preface to the much larger AI revolution to come soon thereafter: the AGI revolution. A Narrow AGI system is one that displays powerful general intelligence but is heavily biased in capability toward some particular domain, such as biomedical research, math theorem proving, or urban planning. Given the heavily commercial focus of the contemporary AI field, it seems likely that the path to full, human-level AGI is going to pass through a variety of Narrow AGI systems of progressively increasing generality and capability. Crafting, deploying, and teaching Narrow AGI systems is going to be an engineering challenge. It will also be a conceptual challenge, because these systems stretch our understanding of the foundations of intelligence, computation, life, and mind.

I predict the creation of AGI at least comparable to human intelligence within 5 to 30 years, and probably 5 to 15. There is a lot of work to be done, but I believe we have a fairly clear roadmap from here to AGI, leveraging OpenCog and SingularityNET.
What are your current thoughts on the tech giants like Amazon, Google, Facebook, Microsoft, and Apple, their dominance in the AI stack, and the ecosystem they are building?

I think this dominance has allowed certain aspects of AI to move forward very rapidly, which is interesting. But I also think this dominance by large companies will be very dangerous for humanity if it’s not remedied fairly soon. Large corporations have a valuable role to play in the modern technology ecosystem, but this role shouldn’t be one of hiring all the AI developers, ingesting all the data about everyone, and then controlling all the AI. In the case of Linux, large corporations utilizing and contributing to the Linux codebase have played, and continue to play, a critical role in advancing Linux as a general-purpose open-source OS. However, these corporations don’t own Linux.

My SingularityNET colleagues and I believe that the creation of beneficial narrow AI applications and broadly beneficial AGI is more likely to happen if the underlying fabric of AI learning and reasoning algorithms is decentralized in ownership, control, and dynamics. A decentralized control structure will allow a network of AI agents to address a greater variety of needs and problems, and to leverage contributions from a greater variety of people and human organizations. Centralized organizations end up needing to focus narrowly in order to achieve efficiency. Decentralized organizations can be more heterogeneous, and they can roll products out flexibly to markets that don’t offer obvious high returns on investment. This is one reason why Linux-powered smartphones are dominating the developing world.
Open-source AI has been a big topic here at ROBO Global, especially given that Microsoft has now committed to invest $1 billion in the OpenAI project. Can you talk more about your new venture SingularityNET? What is its true purpose? Can you talk about how it works?

SingularityNET is a completely open-source, decentralized AI platform, with a vision of democratizing cutting-edge AI and datasets, because we think AI is too powerful a technology to be kept within the silos of large organizations. We are exploring ways in which we can bring core aspects of this technology to people who may not necessarily fully understand it today, such as non-profits, small and medium businesses, and even individuals who want to understand and trust how the black box behind the technology is working.

Currently we have about 100,000 people in the community, along with contributors around the globe who are committed to the vision and who interact with us on a monthly basis in our groups and Telegram communities, and on chats and Twitter feeds. We’re really trying to change the paradigm here: to align incentives and create a more trustworthy and economical system that benefits humanity.

The new for-profit company my colleague Cassio Pennachin and I have created, Singularity Studio, is engaged in creating enterprise AI software products to be licensed to large corporations, but with the twist that the core AI functionality behind the products is obtained by outsourcing to AI agents running on the decentralized SingularityNET platform. Singularity Studio is initially addressing the finance, healthcare technology, sustainability, and IoT verticals, and we already have some significant customer traction. In the Linux analogy, we may say that Singularity Studio is to SingularityNET as Red Hat is to Linux. Singularity Studio is intended to generate tremendous economic value on its own, as well as to drive growth of the SingularityNET platform ecosystem, which will indirectly drive Singularity Studio’s growth even more by putting more AI tools on the platform for Studio products to draw on.
How does SingularityNET work in conjunction with some of the robotics work that you have done?

SingularityNET provides a platform and infrastructure in which AI agents can cooperate and collaborate with each other, outsource work to each other, and provide services to customers for fairly negotiated prices, all in a purely decentralized way, without any centralized controller. Blockchain technology is leveraged to enable decentralized governance and a high level of security over all aspects of the system.

This decentralized AI platform can apply powerfully to every industry, but it has particular power in any domain where data from large numbers of ordinary product users plays a key role, because the blockchain-based framework underneath makes data privacy, data security, and democratic network governance extremely natural. Home, office, and commercial-establishment robotics fit this profile very well. SingularityNET is perfectly architected to serve as the secure “robot mind cloud” behind a large, peaceful army of Sophia robots and other social robots of every kind. I am very excited about the future of artificial intelligence and the role it will play in enterprises and in our personal lives.
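To give a flavor of the agent-to-agent outsourcing model Ben describes, here is a toy sketch in Python. It is not the real SingularityNET protocol, which runs on a blockchain with cryptographic escrow and on-chain governance; the registry, the pricing, and the agents here are all hypothetical stand-ins, meant only to show one agent discovering and paying another for a service.

```python
# Toy model of a decentralized AI-service marketplace (hypothetical;
# the real SingularityNET protocol uses blockchain escrow, not this).

REGISTRY = {}  # service name -> list of agents offering it

class Agent:
    def __init__(self, name, balance=100.0):
        self.name, self.balance, self.services = name, balance, {}

    def offer(self, service, price, handler):
        """Advertise a service at a price; any agent can discover it."""
        self.services[service] = (price, handler)
        REGISTRY.setdefault(service, []).append(self)

    def hire(self, service, payload):
        """Find the cheapest provider, pay it, and return the result."""
        providers = REGISTRY.get(service, [])
        if not providers:
            raise LookupError(f"no agent offers {service!r}")
        provider = min(providers, key=lambda a: a.services[service][0])
        price, handler = provider.services[service]
        self.balance -= price
        provider.balance += price
        return handler(payload)

# One agent offers sentiment analysis; a robot's dialogue agent hires it.
nlp = Agent("nlp-agent")
nlp.offer("sentiment", price=2.0,
          handler=lambda text: "positive" if "good" in text else "neutral")

robot = Agent("sophia-dialogue-agent")
mood = robot.hire("sentiment", "that was a good answer")
print(mood, robot.balance, nlp.balance)  # positive 98.0 102.0
```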