Homework 3: Large language model (LLM) prompting
Due 2025-04-14, 11:59pm. Instructions last updated 2025-03-24.
Learning objectives
After completing this assignment, students will be able to:
- Prompt LLMs programmatically with templates (parameterized)
- Demonstrate the difference between zero-shot, few-shot, and chain-of-thought prompting
- Engineer and test different prompts
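To make the first two objectives concrete, here is a minimal sketch of a parameterized prompt template that produces either a zero-shot or a few-shot prompt depending on whether in-context examples are filled in. The task, template wording, and function names are illustrative, not part of the assignment:

```python
# A parameterized template: {examples} and {review} are slots to fill.
TEMPLATE = (
    "Classify the sentiment of this review as positive or negative.\n"
    "{examples}"
    "Review: {review}\n"
    "Sentiment:"
)

# In-context demonstrations used only for the few-shot variant.
FEW_SHOT_EXAMPLES = (
    "Review: I loved every minute of it.\nSentiment: positive\n"
    "Review: A complete waste of time.\nSentiment: negative\n"
)

def build_prompt(review: str, few_shot: bool = False) -> str:
    """Fill the template; leaving the examples slot empty yields a zero-shot prompt."""
    examples = FEW_SHOT_EXAMPLES if few_shot else ""
    return TEMPLATE.format(examples=examples, review=review)

print(build_prompt("The plot dragged badly."))                 # zero-shot
print(build_prompt("The plot dragged badly.", few_shot=True))  # few-shot
```

A chain-of-thought variant follows the same pattern: the demonstrations would include intermediate reasoning steps before each answer rather than the answer alone.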
Overview
In this assignment, you will explore different prompting techniques for OpenAI LLMs. You will fill in a Jupyter notebook hosted on the Pitt CRCD to run your code.

To get started, click on the class nbgitpuller link and edit the template notebook, hw3_template.ipynb. You can run it on the standard CPU server; no GPU is needed.
OpenAI account setup
Until the class OpenAI account is available, you will need to use your own account with its free credits. Sign up for an OpenAI account here and learn how to create an API key here. The OpenAI API is paid; however, this homework will stay well under the free $5 credit given to each account. Be careful not to exhaust your free OpenAI credits while testing; you can check your usage on this page here. To avoid exhausting your credits quickly, avoid re-running cells after you have completed an exercise.
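One simple way to avoid re-spending credits when you re-run cells is to cache responses on disk, so each distinct prompt hits the API only once. This is an optional sketch, not part of the template: the cache directory and function names are illustrative, and `call_fn` stands in for whatever function actually calls the OpenAI API.

```python
import hashlib
import json
import os

CACHE_DIR = "llm_cache"  # illustrative path; any writable directory works

def cached_call(prompt: str, call_fn) -> str:
    """Return a cached response for `prompt`, invoking `call_fn` (e.g. an
    OpenAI API wrapper) only on a cache miss, so re-running a cell is free."""
    os.makedirs(CACHE_DIR, exist_ok=True)
    key = hashlib.sha256(prompt.encode("utf-8")).hexdigest()
    path = os.path.join(CACHE_DIR, key + ".json")
    if os.path.exists(path):
        with open(path) as f:
            return json.load(f)["response"]
    response = call_fn(prompt)
    with open(path, "w") as f:
        json.dump({"prompt": prompt, "response": response}, f)
    return response
```

Wrapping your API calls this way also makes your results reproducible across notebook restarts, since the cached responses persist on disk.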
Deliverables
- Your code: the Jupyter notebook you modified from the template. Submit:
- your .ipynb file
- a .html export of your notebook. To get a .html version, click File > Save and Export Notebook As… > HTML from within JupyterLab.
- A PDF report with answers to the questions provided in the template notebook. Please name your report hw3_{your pitt email id}.pdf. No need to include @pitt.edu; just use the email ID before that part. For example: hw3_mmyoder.pdf. Make sure to include the following additional information:
- any additional resources, references, or web pages you've consulted
- any person with whom you’ve discussed the assignment and describe the nature of your discussions
- any generative AI tool used, and how it was used
- any unresolved issues or problems
Please submit all of this material on Canvas. We will grade your report and may look over your code.
Background readings
The following optional readings are good references for LLM prompting:
- Language Models are Few-Shot Learners. Tom B. Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared Kaplan, et al. arXiv 2020.
- Pre-train, Prompt, and Predict: A Systematic Survey of Prompting Methods in Natural Language Processing. Pengfei Liu, Weizhe Yuan, Jinlan Fu, Zhengbao Jiang, Hiroaki Hayashi, Graham Neubig. ACM Computing Surveys 2021.
- Best practices for prompt engineering with OpenAI API. Jessica Shieh. OpenAI 2023.
- Training language models to follow instructions with human feedback. Long Ouyang, Jeff Wu, Xu Jiang, Diogo Almeida, Carroll L. Wainwright, et al. arXiv 2022.
- Chain-of-Thought Prompting Elicits Reasoning in Large Language Models. Jason Wei, Xuezhi Wang, Dale Schuurmans, Maarten Bosma, brian ichter, Fei Xia, Ed H. Chi, Quoc V Le, Denny Zhou. NeurIPS 2022.
Acknowledgments
This assignment is based on a homework assignment designed by Mark Yatskar and provided by Lorraine Li.