Speaking at an MIT event, OpenAI CEO Sam Altman stated that the company is not currently developing GPT-5 — welcome news for the experts who have been calling for a pause on AI development. Instead, the focus is on improving the capabilities of GPT-4, the latest version of the Generative Pre-trained Transformer model.
Altman also expressed support for ensuring that AI models are safe and aligned with human values, but he argued that the open letter calling for a pause lacked technical nuance about where exactly development should stop.
“An earlier version of the letter claims we are training GPT-5 right now. We are not, and won’t for some time. So in that sense, it was sort of silly. We are doing things on top of GPT-4 that I think have all sorts of safety issues that we need to address,” said Altman.
GPT-4 represents a notable advancement over its predecessor, GPT-3, which was released in 2020. GPT-3 had an impressive 175 billion parameters, making it one of the largest language models of its time. OpenAI has not disclosed the parameter count for GPT-4, though outside estimates have put it at around one trillion.
OpenAI is a prominent AI research lab whose GPT models power applications such as language translation, chatbots, and content generation. The safety and ethical concerns surrounding these large language models, however, remain a topic of ongoing debate. Even if GPT-5 is not on the horizon, continued development of GPT-4 and systems built on top of it will undoubtedly raise further questions about the safety and ethical implications of such models. Altman’s remarks suggest that OpenAI is aware of these concerns and is working to address them.