By Aaron Bry

Christopher Nolan’s blockbuster movie “Oppenheimer” has reignited the public discourse surrounding the United States’ use of an atomic bomb on Japan at the end of World War II.

One theme the movie explores is that Robert Oppenheimer grasped the full implications of the technology he worked so diligently to develop only after it had been used as a weapon.

Palantir CEO Alexander Karp likens Oppenheimer’s realizations to today’s debate surrounding artificial intelligence: AI developers may only realize the technology’s potential harm once it’s too late.

“We have now arrived at a similar crossroads in the science of computing, a crossroads that connects engineering and ethics, where we will again have to choose whether to proceed with the development of a technology whose power and potential we do not yet fully apprehend,” Karp writes in a New York Times opinion piece. 

Artificial intelligence (AI) is integrated into the myriad technological systems that exist in the world around us, and eventually one could be combined with an advanced weapons system; it is not impossible to imagine that AI would be able to deploy lethal weapons. MARKUS WINKLER/UNSPLASH

Palantir CEO On AI Guardrails: Karp argues that while AI and machine learning technologies are indeed incredibly powerful, calls to halt the development of the tech are unwarranted.

Like OpenAI CEO Sam Altman did in testimony before Congress, Karp argues for a regulatory framework that builds “guardrails” and ensures the technology is developed in a positive manner.

AI is integrated into the myriad technological systems that exist in the world around us, and eventually one could be combined with an advanced weapons system, Karp said, adding that it is not impossible to imagine that AI would be able to deploy lethal weapons.

A precedent exists for tech workers objecting to their companies’ technology being used for weapons. In 2019, Microsoft employees spoke out against a defense contract with the U.S. military.

Google employees also wrote a letter condemning a Department of Defense contract in 2018: “Building this technology to assist the U.S. government in military surveillance — and potentially lethal outcomes — is not acceptable.”

Throughout the vast corporate history of the United States, it’s been common for business leaders to seek less regulation from the government. The fact that two of the most influential leaders in the AI space, Altman and Karp, are arguing for more regulation should be telling.

In the tech world, AI has been a buzzword for nearly two decades. But the rise and popularity of OpenAI’s ChatGPT, an AI-powered chatbot, has rekindled the wonder surrounding AI for investors and the general public alike.


Stocks like Microsoft Corp, which is an investor in OpenAI, and Nvidia Corp, which designs the semiconductor chips necessary for computers that run AI programs, have seen incredible rallies in 2023. Microsoft’s stock is up around 40% year-to-date, while Nvidia’s is up a staggering 214%.

Produced in association with Benzinga