"Turing Award winner" Yao Qizhi talks with Stuart Russell: Machines should not be assumed to be fair, as they may try to change human behavior.
Russell believes that ChatGPT and GPT-4 are not "answering" questions, because they do not understand the world. In his view, the most pressing concern at present is the seemingly unconstrained competition among technology companies.
On June 10th, at the 2023 Beijing Zhiyuan Conference, Turing Award winner and Chinese Academy of Sciences academician Yao Qizhi and Stuart Russell, a professor at the University of California, Berkeley, held an expert dialogue on AI safety and alignment.
Yao Qizhi believes that before considering how to control AI, humans must first solve their own problems. The present moment is an important window for AI technology: before AGI is created and an arms race begins, we urgently need to reach a consensus and work together to build an AI governance structure.
"How can we prevent humans from producing powerful AI machines, so as not to sacrifice others to achieve their personal goals?" In the dialogue, Yao Qizhi said, "We should not assume that machines are fair. Machines may try to change human behavior, or more accurately, the owners of machines may want to change the behavior of others, and this intention is hidden behind some complex programs."
Russell said at the meeting that artificial general intelligence (AGI) has not yet been achieved, and that large language models are just one piece of the puzzle. It is not yet clear what the final puzzle will look like or what pieces are still missing.
He said that ChatGPT and GPT-4 do not "answer" questions; they do not understand the world.
"Russell believes that" artificial intelligence should be a science. In a sense, we understand how the structures we build are related to the properties we want them to have. Just like we make airplanes, airplanes have physical shapes and engines, etc. We can show the relationship between this and the properties we want, which is to stay in the air:
Currently, especially in the field of large language models, it is not such a science. We don't know why it brings wealth. It exists and we don't even know what properties it has. Of course, we cannot connect these with what happens internally, because we do not understand what happens internally. Therefore, in this sense, artificial intelligence is a deeper science.
Russell pointed out that the most important concern comes from the seemingly unconstrained competition among technology companies. They have already stated that they will not stop developing ever more powerful systems, regardless of the risks. Humanity may lose control of the world and of its own future, just as gorillas lost control of their future because of humans.