Data Privacy
Data privacy encompasses safeguarding personal data from unauthorized access, use, and disclosure. Modern data privacy laws protect individuals’ rights, and organizations such as schools are required to comply with them. When incorporating generative AI in schools, it is vital to prioritize data privacy, ownership, and transparency through responsible data management, clear usage policies, and open dialogue about the technology’s strengths and weaknesses.
Discussion Points
- What is our school’s data privacy policy? How can we effectively implement data privacy measures and policies to protect students and faculty?
- How will we monitor and maintain compliance with data privacy laws and regulations while integrating generative AI in our school?
- What procedures will ensure transparent communication about AI’s capabilities, limitations, and data usage?
Humanity (Being Human in the Age of AI)
It is important to bear in mind that AI tools are not human and therefore lack the ability to think. While individuals can delegate control to AI tools, they cannot absolve themselves of responsibility for the accuracy of the information those tools produce; it is essential to verify that this information is true. Knowing when and where to outsource cognitive work to such tools can allow people to focus on their uniquely human abilities, such as creativity, imagination, critical thinking, ethical conduct, and reasoning. Technology cannot replace human connection, and relationships with students are more important than ever.
Discussion Points
- How can we strike a balance between leveraging AI tools to enhance our capabilities and ensuring that we maintain active engagement in critical thinking and decision-making processes?
- In what ways can we as educators integrate AI tools into our teaching practices while still prioritizing and fostering meaningful human connections and relationships with our students?
- How can we develop guidelines or best practices for using AI tools that promote ethical use, encourage human creativity and imagination, and ensure accountability for the accuracy and truthfulness of the information generated?
Algorithmic Bias
Algorithms are sets of instructions designed by people that determine how a computational system reads, collects, processes, and analyzes data to generate outputs. When embedded in a governmental or business process, the algorithms behind an application can take on an air of authority. Yet the design of applications sometimes incorporates biases that produce decisions or outputs discriminating against people who are not from the dominant culture, so a process that is supposed to be fair generates unfair results. Critical analysis of an application may reveal bias in the data used to train an AI agent such as ChatGPT or in the instructions encoded in the application itself.
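As a classroom-style illustration, the sketch below uses invented data and a deliberately simplified "learning" rule (not drawn from any real system) to show how a decision procedure derived from historically skewed records can reproduce the bias encoded in those records:

```python
# A simplified, hypothetical illustration: a "neutral" screening rule learned
# from historically skewed admissions decisions reproduces the bias in that data.

# Invented past decisions: applicants from "Group A" were historically favored,
# so the records themselves encode that preference.
training_data = [
    {"group": "A", "score": 70, "admitted": True},
    {"group": "A", "score": 65, "admitted": True},
    {"group": "B", "score": 80, "admitted": False},
    {"group": "B", "score": 75, "admitted": False},
]

def learn_threshold(records, group):
    """Learn the lowest score that was admitted for a given group.
    If no one from the group was ever admitted, the learned bar becomes
    unattainably high -- historical exclusion turned into a future rule."""
    admitted_scores = [r["score"] for r in records
                       if r["group"] == group and r["admitted"]]
    return min(admitted_scores) if admitted_scores else float("inf")

thresholds = {g: learn_threshold(training_data, g) for g in ("A", "B")}

# Two new applicants with identical scores receive different outcomes,
# not because of merit, but because the rule inherited past bias.
for applicant in ({"group": "A", "score": 68}, {"group": "B", "score": 68}):
    decision = applicant["score"] >= thresholds[applicant["group"]]
    print(applicant["group"], "admitted" if decision else "rejected")
```

Walking through why two applicants with identical scores receive different outcomes can make the connection between skewed training data and biased outputs concrete for students.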
Discussion Points
- How might the efficiency gained by creating a procedure or algorithm be weighed against the harm its implementation may cause?
- How might the creators of applications such as ChatGPT be held accountable for their selection of data used to train the AI agent?
- How can educators show students the consequences of different design decisions in creating applications that people use every day?
- How can we guide our students not only to recognize these blind spots but also to incorporate a greater multiplicity of viewpoints in their scholarship?
Acknowledgement and Ownership
Proper citation of content from a generative AI tool includes describing the prompt and naming the specific tool, its developer, the date the content was generated, and the URL. Citations should appear both in the text and in a bibliography.
The specific citation style will determine how these elements should be formatted. A helpful electronic resource with details on APA, MLA, and Chicago-style formatting can be found here: https://www.scribbr.com/ai-tools/chatgpt-citations/.
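For instance, an APA-style reference entry for ChatGPT generally takes a form along these lines (the version date shown is only illustrative): OpenAI. (2023). ChatGPT (Mar 14 version) [Large language model]. https://chat.openai.com/chat. The corresponding in-text citation would be (OpenAI, 2023), with the prompt described in the body of the paper.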
Discussion Points
- A component of effective research is finding multiple sources that provide evidence to support a claim. What are some ways teachers can encourage students to use generative AI tools alongside non-AI databases and other sources?
- What might one do if a generative AI tool creates text that cannot be supported using information from anywhere else?
- How might someone integrate research from generative AI into written work in a way that goes beyond just paraphrasing the text or material it generates?