What Should Data Science Education Do with Large Language Models? (2307.02792v2)
Abstract: The rapid advance of LLMs such as ChatGPT is transforming data science and statistics. These state-of-the-art tools can streamline complex analytical processes and, in doing so, reshape the role of the data scientist. We argue that LLMs are shifting data scientists' focus from hands-on coding, data wrangling, and standard analyses to assessing and managing analyses performed by automated AI systems. This evolution of roles is reminiscent of the transition from software engineer to product manager. We illustrate the transition with concrete data science case studies that use LLMs. These developments call for a meaningful evolution in data science education: pedagogy must place greater emphasis on cultivating diverse skill sets among students, such as LLM-informed creativity, critical thinking, and AI-guided programming. LLMs can also serve in the classroom as interactive teaching and learning tools that support personalized education. This paper discusses the opportunities, resources, and open challenges for each of these directions. As with any transformative technology, integrating LLMs into education calls for careful consideration. Although LLMs can perform repetitive tasks efficiently, their role is to supplement human intelligence and creativity, not to replace it. The new era of data science education should therefore balance the benefits of LLMs with the cultivation of complementary human expertise and innovation. In conclusion, the rise of LLMs heralds a transformative period for data science and its education. This paper seeks to shed light on the emerging trends, potential opportunities, and challenges accompanying this paradigm shift, and to spark further discourse and investigation into this exciting, largely uncharted territory.
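As a minimal sketch of the AI-guided programming workflow described above, in which the data scientist delegates a routine analysis to an LLM and then reviews the result before acting on it, the Python snippet below asks a chat model to draft a standard analysis and returns the generated code for human inspection. It assumes the OpenAI Python client (version 1.0 or later) with an API key configured in the environment; the model name, file name, and column name are illustrative placeholders, not choices prescribed by the paper.

```python
# Minimal sketch of "AI-guided programming": delegate a routine analysis to an
# LLM, then review the generated code before running it.
# Assumes the OpenAI Python client (>= 1.0) is installed and OPENAI_API_KEY is
# set. The model, file name, and target column below are hypothetical.
from openai import OpenAI

client = OpenAI()

prompt = (
    "You are a data analysis assistant. Write pandas code that loads "
    "'heart.csv', reports missing values per column, and fits a logistic "
    "regression predicting the 'HeartDisease' column. Return only code."
)

response = client.chat.completions.create(
    model="gpt-4o",  # illustrative choice; any chat-capable model works
    messages=[{"role": "user", "content": prompt}],
)

generated_code = response.choices[0].message.content
print(generated_code)  # the data scientist reviews this before executing it
```

In this division of labor the repetitive drafting is automated, while judging whether the proposed analysis is appropriate, correct, and responsibly applied remains a human task, which is precisely the skill set the paper argues education should emphasize.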