I am a mathematician and artificial intelligence researcher at Booz Allen Hamilton and EleutherAI who specializes in natural language processing, deep learning theory, and AI ethics.

I am also a student in the College of Computing at the Georgia Institute of Technology, where I am pursuing a Master of Science in Computer Science, and an organizer of the AI Village, a community of hackers and data scientists working to educate the world about the use and abuse of artificial intelligence in security and privacy.


    • My papers "GPT-NeoX-20B: An Open-Source Autoregressive Language Model," "What Language Model to Train if You Have One Million GPU Hours?," and "You reap what you sow: On the challenges of Bias evaluation under multi-lingual settings" were accepted to the ACL Workshop on Challenges & Perspectives in Creating Large Language Models. The preprints are available on OpenReview. (April 2022)

    • EleutherAI has released GPT-NeoX-20B, a 20-billion-parameter language model that is freely and publicly available. (February 2022)

    • My paper "Multitask Prompted Training Enables Zero-Shot Task Generalization" was accepted to ICLR 2022! Check out the preprint on arXiv and try the model out on Hugging Face. (January 2022)

    • My paper "Neural Language Models are Effective Plagiarists" is now available on arXiv. (January 2022)


M.S. in Computer Science, Georgia Institute of Technology, expected 2022

B.S. in Mathematics with honors, University of Chicago, 2016

B.A. in Philosophy, University of Chicago, 2016