When we think of artificial intelligence, many of us jump to dystopian visions from science fiction, hellscapes such as The Matrix, Black Mirror, and The Terminator. But it doesn't have to be that way. Two leading experts think there is more reason for optimism than for dread, though they expect some bumps along the road.
Kai-Fu Lee is a former head of Microsoft Research Asia and of Google China. He is currently chairman and CEO of Sinovation Ventures, a venture capital firm managing over $3 billion, roughly 70 percent of it in AI-related investments. Lee is also the author of the 2018 book AI Superpowers and co-author, with science fiction novelist Stanley Chan (Chen Qiufan), of the 2021 book AI 2041: Ten Visions for Our Future.
Yoky Matsuoka is a co-founder of Google X, a former CTO of Google Nest, and a former executive at Apple, Twitter, and elsewhere. She is now the founder and CEO of Yohana, an AI-powered, human-in-the-loop assistant service that describes itself as a wellness company working with families to promote health and well-being. Lee and Matsuoka spoke with WIRED global editorial director Gideon Lichfield at the RE:WIRED conference.
Lee thinks AI could be a major boon to health care, though he also sees potential stumbling blocks. Consider an AI program that helps 5 percent of patients but harms 3 percent. AI practitioners might call that a good thing, because it helps more people than it hurts. But doctors will see it differently, because those 3 percent are patients who might not have been misdiagnosed by human doctors. The two camps will therefore need to learn to work together. He doesn't view this as a weakness, really, but as a point of friction that must be overcome.
People think of AI as a black box, Lee says: the computer makes a decision based on thousands of features, and we don't know what they are or how it reached its conclusion. It is very difficult for us to trust something like that. Lee would love to see AI built so it can explain, in human terms, perhaps the top three factors behind its decision. "As human beings, I think we should stop saying, 'Well, explain the black box, or we will not use it!'" Lee says. Instead, he suggests we ask AI to "explain itself clearly and concisely, at a level no worse than a human explaining his or her own decision-making process. If we change the benchmark to that, then I think it's doable."
Matsuoka sees potential for AI in caregiving, too. She mentioned her parents, who are advanced in years and in declining health. As an only child, she wants to help care for them while also respecting their privacy and independence. She says she and her parents would welcome sensors that confirm they are safe each day. With their permission, she could receive some of that data, so that she would be alerted if one of them fell and could call their caregiver. She says she wants to build a world where sensors and people work together to predict and prevent bad things from happening. For example, sensors might indicate that one of her parents is moving differently than usual, or that something in the house is broken and could be dangerous.