Stanford Researchers Measure the Transparency of Popular AI Models

Stanford researchers used the Foundation Model Transparency Index to measure how transparent ten of today's most prominent AI language models are.


Stanford Ranks AI Models According to Transparency

Using the index, the researchers evaluated each model against 100 criteria. These include disclosure of the sources of training data, hardware information, the labor involved in training the models, and other details. The criteria also cover downstream indicators: disclosure of how a model is used after it is released.

The study revealed that Meta's Llama 2 is the most transparent of the models evaluated, with a score of 53 percent. Surprisingly, OpenAI's GPT-4 ranked only third at 47 percent, while Google's PaLM tied with Anthropic's Claude 2 at 37 percent.

Investment in AI has poured in this year especially. Along with that surge comes the threat of decreasing transparency in the AI industry, which led the team to investigate. "There are some fairly consequential decisions that are being made about the construction of these models, which are not being shared," said Percy Liang, who leads Stanford's Center for Research on Foundation Models.

The Importance of Transparency in the AI Industry

AI's great power comes with great responsibility. Many companies have been integrating AI into their services, and it is important for them to have a clear idea of how these systems work before tying their business to them. For one, many AI companies have been sued by artists, media companies, and others over alleged unauthorized use of copyrighted works.

However, some companies would rather keep details secret to hold off the competition. The AI industry has quickly become a competitive landscape, with big tech companies each building their own models. The fast-paced evolution of AI could lead to even less transparency in the future.

The Stanford researchers called on AI firms to become more transparent about the capabilities of their models. Users, researchers, and regulators have a right to fully understand the limitations and possible implications of such models.

© 2024 iTech Post All rights reserved. Do not reproduce without permission.
