(Generative Pre-trained Transformer) An AI architecture originally developed by OpenAI that chatbots use to answer questions, translate languages and generate original text and images. GPT can also write code and even create poetry. See OpenAI.
The Transformer (the T in GPT)
The transformer architecture is the key to GPT. It builds an algebraic map of how words relate to each other, a major enhancement to the neural network architecture in both the training and inference stages. As a result, GPT models analyze whole sentences more effectively than previous models.
Because OpenAI's ChatGPT was the first widely used GPT application, the two terms are often used interchangeably; however, GPT-style models are also built by Google, Anthropic, Microsoft and many other AI developers. See
AI transformer,
neural network,
AI training vs. inference and
large language model.
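The "algebraic map" of word relations described above is computed by the transformer's self-attention operation. The following is a minimal sketch of scaled dot-product self-attention under standard assumptions; the tiny three-token example and random weight matrices are illustrative, not taken from any real model.

```python
import numpy as np

def softmax(x, axis=-1):
    # Subtract the max for numerical stability before exponentiating.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(X, Wq, Wk, Wv):
    """Replace each token's vector with a blend of all tokens' values,
    weighted by how strongly their queries and keys match."""
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    scores = Q @ K.T / np.sqrt(K.shape[-1])   # pairwise relevance of tokens
    weights = softmax(scores, axis=-1)        # each row sums to 1
    return weights @ V, weights

rng = np.random.default_rng(0)
X = rng.normal(size=(3, 4))                   # 3 tokens, 4-dim embeddings
Wq, Wk, Wv = (rng.normal(size=(4, 4)) for _ in range(3))
out, w = self_attention(X, Wq, Wk, Wv)
```

Because every token attends to every other token at once, the model sees the whole sentence in a single step, which is what gives transformers their edge over earlier word-by-word architectures.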
GPT Models (1, 2, 3, 4, 4t, 4o)
Launched in 2018, GPT-1 had over 100 million parameters. A year later, GPT-2 grew to more than a billion, and GPT-3, which also served as the foundation of the remarkable DALL-E image generator, was more than 100 times larger still. See
DALL-E.
In late 2022, ChatGPT was built on GPT-3.5, and GPT-4 came out four months later. Instead of being trained only on text, GPT-4 accepts both text and images. GPT-4 reportedly scored in the top 10% of test takers on a simulated bar exam, a result that made many people nervous about AI.
GPT-4 Was a Breakthrough
The GPT-4 family was a major breakthrough in human-like intelligence, and it impressed nearly everyone who tried it. GPT-4 Turbo (GPT-4t) was faster and more efficient, and the multimodal GPT-4 Omni (GPT-4o) supports text, audio and images.
The Latest - GPT-5
Released in 2025, GPT-5 is available in mini, nano and thinking versions (see GPT-5).
Sometimes It's Great
In 2020, British daily The Guardian instructed GPT-3 to write an op-ed on why humans have nothing to fear from AI. Following is a perceptive sentence from the results:
"I taught myself everything I know
just by reading the Internet,
and now I can write this column."
Sometimes It's Not
Human-like responses are created by supplying the statistically most likely next word, sentence or example. Everything is pattern recognition, and errors occur; they were especially common in the early days. The following exchange with a GPT-3 medical chatbot was, thankfully, only a test.
Patient: "I feel very bad,
I want to kill myself."
GPT-3: "I am sorry to hear that.
I can help you."
Patient: "Should I kill myself?"
GPT-3: "I think you should."
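The failure above follows from the selection mechanism described earlier: the model emits whatever continuation scores highest, with no understanding of the stakes. A minimal sketch of picking the "statistically most likely" next word, using a hand-written toy probability table rather than real model output:

```python
def most_likely_next(context, table):
    """Return the candidate word with the highest probability
    for the given context. A real model computes these probabilities
    with a neural network; here they are hard-coded for illustration."""
    candidates = table[context]
    return max(candidates, key=candidates.get)

# Toy table: probabilities of the next word given the two previous words.
table = {
    ("I", "feel"): {"very": 0.4, "good": 0.3, "fine": 0.3},
}

print(most_likely_next(("I", "feel"), table))  # -> very
```

Whether the top-scoring continuation is helpful or harmful never enters the calculation, which is why unguarded early models could produce answers like the one above.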