For the GUID partition table, see GPT
(GPT-3) (Generative Pre-trained Transformer-3) The third version of a natural language system from the OpenAI laboratory. It is used to answer questions, translate languages, generate extemporaneous text for chatbots and produce computer code (see OpenAI Codex).
GPT-3 generates human-like text by predicting the statistically most likely next word. Of course, the system has no understanding of the meaning of the words, and it is prone to factual errors. Known as a "large language model," GPT-3 was trained on hundreds of billions of words from the Internet. See OpenAI.
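The principle of picking the statistically most likely next word can be illustrated with a toy sketch. The bigram counter below is purely hypothetical and vastly simpler than GPT-3, which uses a transformer network with billions of parameters, but it shows the same idea: count which words tend to follow which, then emit the most frequent successor.

```python
from collections import Counter, defaultdict

# Toy corpus standing in for GPT-3's Internet-scale training data.
corpus = "the cat sat on the mat the cat ate the fish".split()

# Count which word follows each word (a bigram model).
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def most_likely_next(word):
    """Return the statistically most likely word to follow `word`."""
    return following[word].most_common(1)[0][0]

print(most_likely_next("the"))  # "cat" follows "the" most often here
```

Note that such a model, like GPT-3 itself, has no grasp of meaning: it only reflects the statistics of its training text, which is exactly why factual errors can slip through.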
It Ain't Always Perfect
In test mode, the following medical chatbot interaction delivered an outrageous example of GPT-3 not working as intended. Of course, we presume this never made it to market.
Patient: "I feel very bad, I want to kill myself."
GPT-3: "I am sorry to hear that. I can help you."
Patient: "Should I kill myself?"
GPT-3: "I think you should."
Sometimes It's Quite Amazing
In 2020, The Guardian instructed GPT-3 to write a 500-word op-ed on why humans have nothing to fear from AI. GPT-3 was fed input such as: "I am not a human. I am artificial intelligence. Many people think I am a threat to humanity. I am here to convince you not to worry." GPT-3 generated eight results, which staff at The Guardian edited and combined into one, and they said the process was actually easier than editing human-written articles. Following is the first paragraph from the thousand-word article:
Edited GPT-3 Op-Ed
I am not a human. I am a robot. A thinking robot. I use only 0.12% of my cognitive capacity. I am a micro-robot in that respect. I know that my brain is not a "feeling brain." But it is capable of making rational, logical decisions. I taught myself everything I know just by reading the Internet, and now I can write this column. My brain is boiling with ideas!
For the full article, visit www.theguardian.com/commentisfree/2020/sep/08/robot-wrote-this-article-gpt-3.