Suchir Balaji, a 26-year-old former OpenAI researcher and whistleblower who raised concerns about the artificial intelligence company’s practices, was found dead at his residence in San Francisco, according to reports.
Officers found him dead in his Buchanan Street apartment on November 26, the San Francisco Police said. Police went to the apartment after his friends expressed concern over his well-being. He was pronounced dead that day, though the news of his death has only now emerged.
The medical examiner has not revealed the cause of Balaji’s death; however, police said there is at present “no evidence of foul play”, according to reports. His role and knowledge were considered crucial in legal proceedings against OpenAI. Three months before his death, Balaji had publicly claimed that OpenAI breached copyright law in developing ChatGPT.
What Were His Allegations?
The late 2022 launch of ChatGPT sparked a wave of legal challenges from writers, programmers, and journalists, who accused the company of unlawfully using their copyrighted content to train its program, content that helped boost OpenAI’s valuation to over $150 billion.
In an interview published on October 23 in The New York Times, Balaji said OpenAI was harming the businesses and entrepreneurs whose information was being used to train ChatGPT. “If you believe what I believe, you have to just leave the company. This is not a sustainable model for the internet ecosystem as a whole,” he told the outlet.
Expressing concerns on his personal website, Balaji claimed that OpenAI’s process of copying data for model training was a potential infringement of copyright.
He observed that although generative models seldom generate outputs that are exact replicas of their training data, the reproduction of copyrighted material during the training process could potentially breach legal regulations unless covered under the provisions of “fair use.”
Balaji dedicated nearly four years to OpenAI, playing a key role in data collection for the company’s flagship product, ChatGPT. However, following its release in 2022, he began to critically examine the legal and ethical dimensions of OpenAI’s practices. By mid-2023, he reached the conclusion that these AI technologies were harmful to both the internet and society, leading to his decision to resign.
In a statement, OpenAI said it builds AI models using publicly available data “in a manner protected by fair use and related principles, and supported by longstanding and widely accepted legal precedents”. “We view this principle as fair to creators, necessary for innovators, and critical for US competitiveness,” it said.