GPT-3 Announces Itself

The news-story lede below served as the prompt; the text under Output is GPT-3's continuation of it.

Prompt

OpenAI researchers today released a paper describing the development of GPT-3, a state-of-the-art language model made up of 175 billion parameters.

For comparison, the previous version, GPT-2, was made up of 1.5 billion parameters. The largest Transformer-based language model was released by Microsoft earlier this month and is made up of 17 billion parameters.

“GPT-3 achieves strong performance on many NLP datasets, including translation, question-answering, and cloze tasks, as well as several tasks that require on-the-fly reasoning or domain adaptation, such as unscrambling words, using a novel word in a sentence, or performing 3-digit arithmetic,” the researchers stated in their paper. “We find that GPT-3 can generate samples of news articles which human evaluators have difficulty distinguishing from articles written by humans.”

Output

The researchers also found that GPT-3 is capable of generating samples of text that are indistinguishable from human-written text.

“We find that GPT-3 can generate samples of news articles which human evaluators have difficulty distinguishing from articles written by humans,” the researchers stated in their paper. “We also find that GPT-3 can generate samples of text that are indistinguishable from human-written text.”
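For readers who want to reproduce this kind of completion, here is a minimal sketch using the legacy (pre-1.0) OpenAI Python client's completions endpoint, which is how GPT-3 was accessed at the time. The engine name, sampling parameters, and key handling are illustrative assumptions, not the settings used to produce the output above.

```python
# Minimal sketch: continuing a news-story prompt with a GPT-3-era
# completions endpoint. Engine name and sampling parameters are
# illustrative assumptions, not the settings behind the demo above.
import os

import openai  # legacy pre-1.0 OpenAI Python client

openai.api_key = os.environ["OPENAI_API_KEY"]

prompt = (
    "OpenAI researchers today released a paper describing the "
    "development of GPT-3, a state-of-the-art language model made up "
    "of 175 billion parameters.\n\n"
    # ... remainder of the prompt as shown above, elided for brevity
)

response = openai.Completion.create(
    engine="davinci",   # base GPT-3 engine; an assumption, not confirmed by the demo
    prompt=prompt,
    max_tokens=150,     # length budget for the continuation
    temperature=0.7,    # moderate sampling randomness
)

# The model's continuation corresponds to the "Output" section above.
print(response.choices[0].text)
```

Verbatim echoes like the repeated sentence in the output above are a known failure mode of sampled continuations; lower temperatures tend to make such repetition more likely.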