
OpenAI to give the AI Safety Institute early access to its new model


We’re at a point where we can’t fully trust AI companies, but we should give credit to the ones making an effort. While OpenAI certainly has its secrets, the AI startup is at least making moves to earn public trust. According to a new report, OpenAI has vowed to give the AI Safety Institute early access to its next model.

Different countries are setting up bodies dedicated to analyzing AI and mitigating its potential dangers. In the U.S., that body is the AI Safety Institute, housed within the National Institute of Standards and Technology (NIST), and it’s charged with making sure AI doesn’t cause harm to people. It’s a relatively new institution, but it’s positioned to be a pivotal entity in the AI industry going forward.

OpenAI will give the U.S. AI Safety Institute early access to its latest model

OpenAI will give the AI Safety Institute early access to its next foundation model. That’s a pretty big step. One of the main things governmental bodies want is for AI companies to be transparent with them and clue them in on what they’re working on. It’s entirely possible for a company to release a product that causes real damage, and sometimes that damage is hard or impossible to walk back.

In a post from OpenAI CEO Sam Altman, we learned that the company is giving the institute access to its next foundation model. However, the post didn’t say which model, so we don’t know whether OpenAI is providing early access to a game-changing model or to one meant for smaller applications.

The post also commits only to the next model, so we don’t know whether the company will provide early access to future models as well. In any case, it’s still a great step to take.

The company is taking other steps

In the post, Altman also stated that the company will dedicate 20% of its computing resources to safety efforts across the company. That’s good to hear, given the recent drama surrounding OpenAI’s safety work: there were reports of the company diverting resources away from its safety team. OpenAI has since formed a new safety team internally.