In early winter 2022, more than 150 students filed into the venerable Abbott Hall at The Lawrenceville School in Lawrenceville, New Jersey, and took their places at long tables surrounded by white oak paneling and terrazzo flooring. As a final assessment for a U.S. history class, students evaluated an essay crafted by ChatGPT on the relative importance of causes of the Civil War. In preparation, ChatGPT had been fed five primary and secondary sources and prompted to craft an essay in response. The initial AI essay was startlingly good: completed in less than a minute and rich in specific detail. Faculty then edited the essay to introduce some basic errors.
Students used the Lawrenceville History Department writing rubric to identify and explain errors and rewrote each section with corrected or new information. A final question asked students to provide an overall evaluation of the quality of the AI-generated writing. It was fascinating to witness students process these tasks.
Admittedly, we grossly underestimated the time required to evaluate the essay. Many students worked the entire two hours, and some argued they would have preferred to just write an essay themselves.
Most students realized that AI-generated text would not be sufficient to earn the desired grade, and AI was not a panacea. This was complicated by the faculty’s realization that AI-generated text, even at this initial stage, was going to present challenges in the assessment of reading and writing.
By midwinter, Lawrenceville had crafted a schoolwide AI use policy and organized a joint faculty-student AI committee to explore options for improving critical thinking, protecting mental health, and encouraging creative expression, while accepting what generative AI could do. Like many secondary schools and higher education institutions, Lawrenceville initially authorized faculty members to monitor AI usage within their own classrooms and courses. As student use and the capability of AI tools expanded, academic deans managed a plethora of academic dishonesty cases. Across departments, the focus was mostly reactive and punitive as faculty turned to Digiexam, the Chrome extension Draftback, and Turnitin in the hope of safeguarding academic honesty.
We grappled with basic questions regarding the academic work itself. How could we conclusively determine what was student work? If AI could do it, was the work still worth doing? These issues were compounded by research showing that paid subscription services produce text that more readily evades AI-detection tools.
New Academic Pathways
This fall, as we enter our third year with AI in education, the technology has continued to gain prominence at unprecedented levels, infiltrating almost every layer of work and organizational structure across campuses. Our focus at Lawrenceville has evolved to one that is more proactive, reflective, and positive.
In 2023, Lawrenceville offered a course titled AI Applications and Ethics as a natural outgrowth of our mission statement, which emphasizes collaborative academics, promotes curiosity, and allows students to grow inside and outside the classroom. Open to students in their third or fourth year, the course helped students develop an understanding of the history of AI and how generative AI works and provided opportunities to discuss ethical implications. Students evaluated AI policy documents and delved into principles of fairness, accountability, and transparency. They were challenged to debate the responsible use of AI in case studies ranging from issues of human rights to sustainable development and education. The overall goal was to equip students with the ethical, creative, and technical foundations to critically evaluate AI.
Over 11 weeks, essential questions included:
- History of AI: Why are we, as a society, building AI?
- AI and machine learning: How does AI learn and adapt?
- Advanced technology (deep learning and neural networks): Can we understand and explain how AI makes decisions?
- Lack of transparency (explainability): Are responsible design and implementation of AI systems possible?
- Bias and discrimination (fairness): Is it possible to prevent or minimize bias in algorithms? How do we prevent learning algorithms from acquiring the biases embedded in their training data?
- Privacy and copyright: Should we have to give up privacy to get technological innovation and efficiency?
- AI and education: Does AI make us more creative through navigating, learning by example, and being able to focus on abstraction, or does it make us less intelligent through thoughtless delegation?
- Human-centeredness: Where do humans fit in? How do we align the aims of autonomous AI systems with our own? Who is responsible when AI causes suffering?
- Global context of AI governance: Can AI be effectively regulated?
- Future of AI: Does AI pose an existential threat to humanity?
The course design required vigilant reading of AI newsletters and technology authors, which gave students a chance to actively participate in the design and progression of the course.
Lawrenceville took the approach that an ethical lens was more relevant and purposeful than any focus on coding or building large language models, and this turned out to be quite prescient. Because the technology associated with AI is progressing at such a torrid pace, academic work must be grounded in deeper issues and essential questions. The student evaluations of the course were excellent and cited the content as relevant, purposeful, and joyful.
Coursework for the 2024–2025 academic year at Lawrenceville will include a capstone course on AI. This course is structured around invited speakers and a final course project that targets a campus need.
Communicating our evolving approach to AI has required close coordination with public affairs, alumni affairs, administration, and many other departments across campus. Reflective thinking on goals, limitations, and ethics is constant. Regular presentations to campus departments, alumni groups, campus leadership, and at parents’ weekends have been an essential component.
Possibilities Abound
Linking mission-driven learning with real-life problem-solving at Lawrenceville has helped build awareness of the social and ethical considerations of AI, digital literacy, and civil discourse. One of the most distinctive approaches has been for students to develop and test use cases for practical applications of AI tools on campus.
After meeting with foreign language, history, English, dining services, sports medicine, public safety, alumni engagement, finance, and communications departments, we worked to establish current and projected data needs and explore analytical goals within and across departments. The goal was to formulate feedback loops linking academic and operational realms on campus while creating real and purposeful collaborative opportunities for students. This active learning framework mirrors new technology evaluation and implementation in corporate environments. Students can see and evaluate the limitations of AI; amplification of bias; regulatory and organizational compliance; individual privacy; and a wide range of data collection, storage, and processing issues.
Through a variety of vehicles, including senior projects, independent study, and volunteer work, Lawrenceville has created opportunities for students to build and test large language models, evaluate anonymized datasets, explore predictive analytics, and streamline workflow in unexpected ways across campus. These individual or small-group projects have helped turn timely, actionable data into insights and established future campus service opportunities for students.
AI integration requires finding balance for educators, administrators, and students — all with varying levels of interest and experience — to grow as learners while accounting for the limitations of AI, including data privacy and copyright violations, error rates, and hallucinations. In addition, the gap between the transparency AI tools promise and the explainability they actually deliver may limit widespread acceptance of AI-generated materials and products.
Our call to action as independent school leaders is to provide opportunities for students that move education from transactional to transformational. AI can be our catalyst in that process.