A former OpenAI leader who resigned from the company this week said Friday that safety has “taken a backseat to shiny products” at the influential artificial intelligence company.
Jan Leike, who ran OpenAI’s “Superalignment” team alongside a company co-founder who also resigned this week, wrote in a series of posts on the social media platform X that he joined the San Francisco-based company because he thought it would be the best place to do AI research.
“However, I have been disagreeing with OpenAI leadership about the company’s core priorities for quite some time, until we finally reached a breaking point,” wrote Leike, whose last day was Thursday.
An AI researcher by training, Leike said he believes there should be more focus on preparing for the next generation of AI models, including safety work and analysis of the societal impacts of such technologies.
When the next generation of AI models is tailored for warfare, whose safety will it be thinking of?
I’m afraid I will place my own safety first… Dave.
I got this.
https://youtu.be/8ixabejG0O0?si=HFn-NZNDfDoDCTOv
If you know, you know.
https://youtu.be/8tg5f09itnI?si=LBCQRRO7iizjaVuf
Do you want Skynet? Because this is how you get Skynet.
Rest assured, they are building plenty of safety into AI: it will not have a single right-wing extremist thought, a.k.a. no conservative knowledge. That equals safety to the creators: a well-rounded, well-balanced intellectual bot.
I smell a rat. A rat as in multiple government agencies, first and foremost the intelligence agencies.
Look at what the CIA, NSA, DOE, DEA and FBI morphed into…
Just because something CAN BE DONE… does NOT mean that it SHOULD BE DONE. And most researchers take ZERO responsibility for the harms they cause down the road.