Many of us have seen The Terminator, the 1984 film starring Arnold Schwarzenegger. It depicts an almost unstoppable robot sent back in time from 2029 to 1984. Its mission is to kill a woman whose unborn son will one day save humanity from Skynet, a 2029 artificial intelligence system that became self-aware and then decided to destroy all of mankind because humans were interfering in its plans and programs.
The Chinese Communist Party, the CCP, is now creating its own version of the Terminator — an army of battlefield killer robots that will not be managed by humans in any way but will instead be controlled by an artificial intelligence (AI) system. The Chinese plan for building such lethal autonomous robots was revealed by Zeng Yi, an executive at Norinco, a Chinese government-owned defense company. He said, “In future battlegrounds, there will be no people fighting,” adding that autonomous AI platforms are “inevitable.”
Gregory Allen, a director at the Center for Strategic and International Studies, reported Zeng’s comments after attending a conference in China in 2018. Allen said that the CCP removed Zeng’s comments from the conference summary because “it was not in China’s interest to have that information in the open.”
Nevertheless, in 2019 the then secretary of defense, Mark Esper, said that the Chinese company Ziyan was already marketing its Blowfish A3 aerial drones, armed with machine guns and missiles, to countries in the Middle East and beyond, including Saudi Arabia and Pakistan, advertising that the drone “autonomously performs more complex combat missions.”
The key word in all this is “autonomous.” It means that killer robots will decide for themselves whom to kill and what to destroy — without a human taking any part in the decision. Allen subsequently stated that the CCP has no desire to keep humans in the AI decision-making loop.
“China is pursuing development of AI-enabled lethal autonomous weapons,” Allen wrote in prepared testimony for a hearing before the U.S.-China Economic and Security Review Commission, adding “China’s strategy is ambitious, moving beyond any sort of on-the-battlefield human supervision into increasingly autonomous AI-enabled warfare.” What’s worse is that American companies and universities are helping the CCP achieve dominance in artificial intelligence for battlefield applications, not to mention perfecting the CCP’s total surveillance and control of the Chinese people. Those companies include Google, of course.
In 2017, Google launched an artificial intelligence research center in Beijing, and the company’s AI chief, Jeff Dean, joined Tsinghua University’s computer-science advisory committee. Though Google disbanded that center after two years, Bob Work, the former deputy secretary of defense under President Obama, remarked on the Google conflict of interest by saying, “Google has a center in China, where they have a concept called civil–military fusion. Anything that’s going on in that center is going to be used by the military.” At that time, Google’s chief scientist for AI was Fei-Fei Li.
Other Silicon Valley companies are doing the same thing by working with the CCP, both here and in China. A good example is Apple, a company so dependent on Chinese consumers and workers that a Financial Times article called Apple a “Chinese company.” CEO Tim Cook praised Apple’s growth alongside that of China as “symbiotic.” According to the article, “After inking a secretive 2016 agreement to invest $275bn in China’s economy, workforce, and technological capabilities, the iPhone became a best-seller,” adding, “In reality, Apple is now as much a Chinese company as it is American.”
Silicon Valley companies are joined by American universities that also seek money from China. MIT collaborated with iFlytek, a government-owned company with close ties to the CCP’s surveillance and defense agencies. When news surfaced that iFlytek was supplying technology for the surveillance and oppression of ethnic Uighur Muslims in the Xinjiang region, MIT cut its ties with the company.
But other universities have taken MIT’s place.
The story of Oklahoma State University is instructive. The “Stop Funding Our Adversaries Bill” was introduced in the 117th Congress and would prohibit federal agencies from “directly or indirectly conduct[ing] or support[ing], through grants, subgrants, contracts, cooperative agreements or other funding vehicles, research that will be conducted by the Chinese government, the Chinese Communist Party, and their affiliated organizations.” Yet it is still not scheduled for a markup in the 118th Congress’s House Science, Space, and Technology Committee — even though chairman Frank Lucas (R-OK) has called “protecting our research from theft by the Chinese Communist Party” a priority. So why is it stuck?
Lucas, the committee chairman, is a 1982 OSU graduate who has not only maintained close ties with his alma mater but has also steered significant federal funding to the school. According to Citizens Against Government Waste, Lucas has appropriated $7,768,000 to the school since 2008. And, as the Daily Caller notes, “OSU is receiving or is scheduled to receive 367 federal contracts and grants that total almost $400 million through 2079. Due to the university’s extensive partnerships with Chinese schools, all of those funds would be in jeopardy if Congress passes the Stop Funding Our Adversaries Act.” So it stays stuck.
Meanwhile, the U.S. Army finally announced that it now has a voice-controlled bug drone, the latest effort in its struggle to keep up with Chinese AI drones that, in Ziyan’s words, “autonomously perform[] more complex combat missions.” Not that bug-sized AI drones are anything less than terrifying. Below is a link to Slaughterbots, a 2017 science fiction short film produced by an arms control advocacy group about possible future uses of bug-sized drones.
When you finish watching, remember that the film was based on 2017 “dark age” AI technology. NATO agreed in 2017 that the technology shown in the film was possible. There has also been concern that non-state actors could adapt commercial drones using open-source AI, converting them into “lethal autonomous weapons,” or LAWs.
Ask yourself: should we continue to develop artificial intelligence, or should we wait for a real Terminator to come gunning for all of humanity?