I continue my AI poetry generation experiments with OpenAI's 2020 GPT-3, which is 116× larger, and much more powerful, than the 2019 GPT-2. GPT-3, however, is not merely a quantitative tweak yielding "GPT-2 but better": it is qualitatively different, exhibiting eerie runtime learning capabilities allowing even the raw model, with zero finetuning, to "meta-learn" many textual tasks purely by example or instruction. One does not train or program GPT-3 in a normal way, but one engages in dialogue and writes prompts.

Experimenting through the OpenAI Beta API in June 2020, I find that GPT-3 does not just match my finetuned GPT-2-1.5b-poetry for poem-writing quality, but exceeds it, while being versatile in handling poetry, Tom Swifty puns, science fiction, dialogue like Turing's Turing-test dialogue, literary style parodies… As the pièce de résistance, I recreate Stanislaw Lem's Cyberiad's "Trurl's Electronic Bard" poetry using GPT-3. (Along the way, I document instances of how the BPE text encoding unnecessarily damages GPT-3's performance on a variety of tasks, how to best elicit the highest-quality responses, common errors people make in using GPT-3, and test out GPT-3's improvements in NN weak points like logic or commonsense knowledge.)
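The BPE complaint above can be made concrete with a toy example: a byte-pair-encoded model receives merged subword tokens rather than individual characters, so tasks depending on letter-level structure (puns, rhymes, spelling) are harder than they look. Below is a minimal sketch of greedy BPE merging with a made-up merge table; it is illustrative only and is not GPT-3's actual tokenizer or vocabulary.

```python
def bpe_tokenize(word, merges):
    """Greedily apply a learned merge list, in priority order,
    to a sequence of characters (toy BPE, no byte handling)."""
    tokens = list(word)
    for pair in merges:
        i = 0
        while i < len(tokens) - 1:
            if (tokens[i], tokens[i + 1]) == pair:
                # Merge the matching adjacent pair into one token.
                tokens[i:i + 2] = [tokens[i] + tokens[i + 1]]
            else:
                i += 1
    return tokens

# Hypothetical merge table, as if learned from a corpus.
merges = [("e", "r"), ("er", "s"), ("p", "o"), ("po", "e")]

print(bpe_tokenize("poets", merges))  # → ['poe', 't', 's']
```

Here "poets" reaches the model as `['poe', 't', 's']`, so the individual letters p-o-e-t-s are never directly visible to it; GPT-3's real vocabulary of tens of thousands of merges has the same character-blindness at a much larger scale.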