Since November 2023, users have reported that ChatGPT (3.5 & 4) seems to be getting "lazier," producing lackluster responses or refusing to comply with tasks it previously had no problem completing. OpenAI has acknowledged the problem and claims not to know why it's happening, so if you start to receive disappointing responses from ChatGPT, you may have to experiment more than usual with your prompts. This article explains what could be causing it and how to work around it.
ChatGPT is a large language model developed by OpenAI and released in November 2022. It uses deep learning techniques to perform natural language processing (NLP) so it can generate human-like responses to text-based inputs (i.e., prompts). ChatGPT in particular comprises over 175 billion parameters (weights loosely analogous to artificial neurons) and was trained on around 500 billion pieces of text from the internet, including web pages, articles, blogs, books, and more.
The GPT in ChatGPT stands for “Generative Pre-trained Transformer,” which is a type of AI model used for NLP tasks. Essentially, it allows ChatGPT to generate contextually relevant and coherent human-like text based on its large and diverse corpus of training materials.
There are currently two versions of ChatGPT: GPT-3.5 and GPT-4.
ChatGPT-3.5, the model behind ChatGPT's original November 2022 release, is based on OpenAI's earlier GPT-3 model. The cutoff date for its training data is September 2021, meaning it does not have access to any knowledge/internet materials created after that date. Unlike its successor model and AI counterparts like Bing Chat and Google Bard, GPT-3.5 also cannot actively search the internet to find additional sources of information outside of its training data. GPT-3.5 is freely available for anyone to use; all you have to do is make an account and start exploring!
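Beyond the chat interface, GPT-3.5 can also be queried programmatically. The sketch below is a minimal, non-authoritative example using OpenAI's Python library; the model name ("gpt-3.5-turbo"), the client interface, and the sample prompt are assumptions based on the library's documented chat API, and a paid API key is required for the call itself to succeed.

```python
import os

# Hypothetical sketch of sending a prompt to GPT-3.5 via OpenAI's Python
# library (`pip install openai`). The model name is an assumption.

def build_messages(prompt):
    """Package a user prompt in the chat-message format the API expects."""
    return [{"role": "user", "content": prompt}]

def ask_gpt(prompt):
    # Imported here so the rest of the sketch runs without the package installed.
    from openai import OpenAI
    client = OpenAI()  # reads the OPENAI_API_KEY environment variable
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=build_messages(prompt),
    )
    return response.choices[0].message.content

# Only attempt a live call if an API key is actually configured.
if __name__ == "__main__" and os.environ.get("OPENAI_API_KEY"):
    print(ask_gpt("Summarize the rule against perpetuities in one sentence."))
```

Note that each call is independent: the API has no memory of prior prompts unless you resend the earlier messages in the `messages` list yourself.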
ChatGPT-4 was officially released on March 14, 2023, as a vastly improved version of its predecessor, accessible to the public for $20 a month. While OpenAI has not released the official number, it’s rumored that GPT-4 comprises over 1.7 trillion parameters and is estimated to be 10 times more advanced than GPT-3.5. The information it provides is more accurate and precise than GPT-3.5’s, making hallucinations (fabricated "facts" generated as outputs) less likely.
Some of GPT-4's capabilities include:
GPT-4 has seen significant improvement in its legal analysis and reasoning capabilities when compared to GPT-3.5. It's even passed the LSAT and the Uniform Bar Exam (UBE) with flying colors. While GPT-3.5’s UBE score was in the bottom 10%, GPT-4 scored around the top 10% of test takers, passing not only the multiple-choice section but the essays and performance test as well. In just four years, GPT has gone from 0% on the Multistate Bar Exam (MBE) to 76% with GPT-4. It received a 297 UBE score overall, exceeding the highest passing threshold in the country, 273 in Arizona.
While it’s a known problem that most text-based GenAI models struggle to understand nuance and context within human language, GPT-4 has shown a great understanding of both the English language and, more importantly, complex “legalese.”
It should be noted that while GPT-4's UBE score is certainly impressive, its success can be partially attributed to the fact that citations to primary sources are typically not required. ChatGPT is good at summarizing the law, especially known statutes, but it still struggles to provide accurate case and statute citations, which are often crucial to primary legal research.
While GPT-4's understanding of the law and its many complexities has rapidly advanced, it still poses little threat to the work human lawyers do. It does, however, show how valuable a resource these GenAI tools can be to those in the legal profession.
As Suzanne McGee eloquently states:
“Clients hire an attorney for the attorney’s knowledge, experience, and ability to interpret and apply legal precedent. While AI may be able to identify relevant statutes, regulations, and case law, there is a human aspect of law practice that provides guidance to clients that AI is not likely to replace. If properly developed, AI can be another means by which attorneys can increase their productivity, and obtain optimal results for clients.”
It's important to understand the strengths and limitations of ChatGPT to ensure you get the most out of it.
So, can ChatGPT be trusted as an accurate source of information? Absolutely not.
ChatGPT is a generative tool that primarily functions by predicting words in a sequence based on your prompt. It has an extremely limited ability to check whether its responses are factual or true to real life. Despite how human-like its responses may appear, it has a limited understanding of the context and facts relevant to a given prompt, and the information it provides should never be blindly trusted.
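This word-by-word prediction can be illustrated with a toy sketch. The tiny "bigram table" below is a made-up stand-in for a real model's billions of learned parameters, and the greedy word selection is a drastic simplification of how ChatGPT actually samples text; the point is only that generation is repeated next-word prediction, not fact lookup.

```python
# Toy illustration (not ChatGPT's actual mechanism): a language model
# generates text by repeatedly choosing a likely next word given the
# words so far. This hand-written bigram table stands in for the
# probabilities a real model learns from its training data.

bigram_probs = {
    "the": {"court": 0.6, "law": 0.4},
    "court": {"ruled": 0.7, "held": 0.3},
    "ruled": {"that": 1.0},
}

def predict_next(word):
    """Return the most probable next word, or None if the word is unknown."""
    options = bigram_probs.get(word)
    if not options:
        return None
    return max(options, key=options.get)

def generate(prompt_word, max_words=3):
    """Greedily extend a one-word prompt, one predicted word at a time."""
    words = [prompt_word]
    for _ in range(max_words):
        nxt = predict_next(words[-1])
        if nxt is None:
            break
        words.append(nxt)
    return " ".join(words)

print(generate("the"))  # "the court ruled that"
```

Notice that nothing in this process checks whether "the court ruled that..." is true; the output is fluent because each word is statistically plausible, which is exactly why fluency should never be mistaken for accuracy.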
It’s important to remember that ChatGPT’s training data has a cutoff date of September 2021, meaning it cannot provide trustworthy or accurate information on events after that date.
ChatGPT has also been known to perpetuate existing stereotypes and biases in some of its responses. Since it was trained on a large corpus of internet data that it uses to respond to prompts, the biases present in its training data are often reflected in its responses. OpenAI has noted that ChatGPT tends to favor Western viewpoints and is most effective when communicating in English, which can lead to a Western bias in its responses. Users must be sure to critically assess any content that could teach or reinforce potential biases and stereotypes.
Be aware that most of your user and conversation data is retained by OpenAI, and you should practice extreme caution when using ChatGPT for scenarios involving personal information. ChatGPT collects your email address, the device you’re using, your IP address and location, and any public/private information you may include in your prompts. Never input sensitive information you would not want publicly accessible (that's not to say that OpenAI will make that information directly public, but it will be further integrated into ChatGPT's training data).
When asked if ChatGPT can be used as a trusted source of information, it had the following to say: