Recently, some users have raised concerns that GPT-4 has become less capable. Many have reported a decline in the quality of the model's answers, particularly in code generation: according to user feedback, code produced by GPT-4 now often contains errors.
What is the reason for this perceived decline in GPT-4’s performance? One possible explanation is a psychological effect. Some industry insiders speculate that people’s expectations have increased after the initial period of surprise and excitement.
Another theory is that plugins may be polluting the model's context: the extra prompt text that plugins inject could act as noise and degrade the quality of answers.
OpenAI developer advocate Logan Kilpatrick has responded to these concerns. He stated that the model itself has remained static since GPT-4's release on March 14, and that there is no indication it has been contaminated by large amounts of external data.
However, Kilpatrick also acknowledged that the model is inherently non-deterministic and may produce inconsistent answers to the same prompt.