Building Human-Centered Dispositions for an AI-Ready School
This session invites participants to reimagine AI integration through the habits of mind that make learning deeply human: curiosity, empathy, ethical discernment, and reflective practice. Building on Dr. Jordan Mroziak’s work in AI policy and literacy, community engagement, and design thinking, we will explore practical routines and protocols that nurture these dispositions—helping classrooms and teams grow not only more capable with AI, but also more attuned to one another.
Transcript
Welcome today.
We are so thrilled to have all of you here with us.
We have a very exciting guest.
So we've been talking a lot about AI for a long time, and the great thing is when we get to bring different voices into that conversation, and we have a very big treat for you all today.
So we have Dr. Jordan Mroziak, and he is really doing some interesting work.
His background is in AI and learning design and social imagination.
Um, he has worked at some incredible places.
So he was the director of the EdSAFE AI Alliance.
He's done some work with Carnegie Mellon.
He's, uh, done some interesting things for Stanford.
He is a Fulbright, uh, specialist for Slovenia.
Just really an interesting global perspective, and, you know, he's been at the helm of all of these massive organizations.
So, Jordan, welcome.
Thank you so much for being here with us today.
Yeah, of course.
Thanks for the invitation.
A delight to be here.
Uh, it's a beautiful, uh, sunny day.
So, uh, very excited to, to dig into the conversation.
Hopefully we'll field some questions as they emerge, but give us some sense of where we are right now with AI and pedagogy in schools, and kind of what we're looking at there.
Yeah.
And feel free to also tell, uh, the audience more about your background too, and there's so much there.
Um, so any of that that you wanna unpack, um, welcome to do that.
And attendees, as you are joining us here, please say hello in the chat, introduce yourselves, and we'll have a back channel there going too.
Yeah, no.
So, again, delighted to be here. My background is actually in arts education originally. I spent a number of years doing both formal and informal teaching and learning in the aesthetic education space. I've worked in higher education, teaching a history of hip-hop class, in addition to things in some other nerdy buckets.
Well, you know, graduate pedagogy classes and things like that.
So I had spent time there, and then found my way to doing community outreach and engagement work at Carnegie Mellon University, where I was working alongside schools across the country, really in kind of a long-format instructional coach capacity. Those schools might have been in rural West Virginia, or Utah. They also may have been in Atlanta, working alongside educators at Georgia Tech.
So kind of a smattering of locations, but I became really interested in what teaching and learning is, broadly, what schools do especially well, and where we might put an emphasis now, given the interest in artificial intelligence. How do we teach for these human dispositions, despite the shininess of a technology that is indeed just a tool? And so, to that end, how do we keep centering our kind of collective humanity, and building community in these spaces, knowing that we're in a different moment with AI right now?
Um, if it's okay, I can share my screen.
Uh, we'll do a bit of a, a walkthrough.
But, I'm sorry, Dr. Cross, is there anything else that you wanted me to touch on there? No, that sounds good.
Just so attendees know: we have a small group today, and this is going to be interactive. So if you're somewhere you can turn on your camera and you want to join in, if you have any questions or things as we're going into it, feel free to let us know.
You can also, again, engage in the chat.
I'll keep an eye on that too.
Yeah.
I am fond of mentioning in most of my sessions that I grew up the youngest of four boys, and the youngest by a bit of a distance, nine years between me and my closest sibling, which is to say that I'm quite comfortable being interrupted or having things thrown at me.
Um, and so as things come up for you today, there's questions that, uh, emerge for you or anything like that, please, uh, let's engage in the conversation.
You can either drop them in the chat or come off mute, whatever is most helpful to you as part of the journey.
Um, so let's, perfect.
Uh, is everyone able to see that? We're good on screen sharing? On mine it's still on the little part where it says "Present."
Are you seeing it all the way in the presentation mode? Oh, no, sorry.
Yeah, it's still on the screen where it looks like you're ready to click on Present.
Oh, it, how's that? Still the same.
Um, It's, it's doing the same thing.
I tell you, Zoom has been funky with its settings recently for this. But I tell you what, if you get off of that share button, maybe we can see it from kind of the presenter view, and we can see if that works.
Sure, No problem.
Once stop share.
Yeah.
Or it could be selecting a different option, you know. Maybe get that open, and then you might want to go ahead and hit full screen, or share and then open up Zoom.
Try it that way.
That's what ended up working for me yesterday.
Sorry.
No worries.
You're speaking to an all-tech group, so if anyone empathizes, it's these folks here.
Yeah, Yeah.
Tech works until you're, uh, ready to give an important presentation, then it's done.
Amen.
Everyone, if this is the worst thing that happens today, I'll take it as a success, right? So, all right.
We can see those.
You can see that.
Yep.
So we'll just keep it like this for now.
That's okay.
Sounds great.
Perfect.
Yeah, so this journey started for me, as I was mentioning, in this arts space. I went to school for music, and saw such an incredible focus on technique, which was really lovely to see, but also acknowledging that whenever we make art, or we engage in arts learning, it's not simply technique; it's actually developing habits of mind. And most often people are like, oh, you're an artist, you must be creative, right? There's this line drawn to this thing that we call creativity, even though really, in my experience personally, that was never being taught directly, right? It was kind of this afterthought of what might emerge in a classroom space. But we were doing things like teaching skills and practicing skills, and teaching pieces and phrasing, a lot of things that amounted to technical knowledge, but perhaps not spaces for what it actually means to do this thing, to enact being a musician or an artist.
And so that's where I became really interested in this, and followed a line of research and thought on pedagogy: why do human dispositions matter, and what does human learning look like in a moment of AI? As we think about this AI technology advancing, we should really maintain an emphasis and necessary focus on these human dispositions and habits of mind, and how we might want to define them. I'd really love for the emphasis to be placed on prioritizing things that have traditionally not been easy to measure.
And maybe that's because we haven't spent time really thinking deeply about how to measure them. But think about how we cultivate curiosity, or creativity, or ensure that education remains in this really purposeful, human-centered place in what many are perceiving as an AI-driven environment, right? So at the end of the day, that all sounds lovely and philosophical, but for me it really does come down to this: AI integration is not only technical. AI education is about culture and ethics; the AI itself is a tool. And so we're looking at some of the ways that shows up for us inside of this space, and hopefully we'll dig into some questions about how it shows up in the pedagogy.
But I am mindful, as a piece of tone setting coming into this work, and those who know me closest would know this, of course, that I tend to lead with a couple of guiding points. These are just three themes that I keep in mind for myself, and I think it's important for us to keep them in mind too.
One is that the questions we ask shape the world we build. And so if we approach this only asking, how are we going to be more efficient, right? Those questions establish our goals and values, who we are, and what we value. Questions about technology and innovation can be about efficiency, but they might also be about how we use technology to help, right? How do the people most affected by technology feel safe, feel valued? So I think those questions become really important for us in this moment of a very ill-defined thing we call artificial intelligence.
Um, the next two are very much related to me.
I found it attributed to Laurie Anderson, but I'm not sure that that's entirely true: if we think technology can solve our problems, we don't understand the problems, and we don't understand the technology. As a lot of folks know, as a technology leader I've spent my life around technology. I feed off of it, I'm curious about it, but I also know that technology is implemented by people who have values and practices of their own, and it's designed by people who have values as well. And so the way in which technology takes shape, and the way that it's implemented, are really about solving problems that are human problems. Okay? That's not to say that it can't be of assistance, but it is certainly not going to be a pure cure-all, right? It's not going to magically solve anything for us. We are still very much involved in this.
And then the last one is long; I won't read it in full. But I very much appreciate Dr. Winnie's book on AI, one of my favorites in this area, for really understanding this "intelligence" and its limitations, questions of bias. But not just the limitations: how do those questions shape what we count, right? It's not simply pointing out flaws, but acknowledging the flaws in the way in which it works, so as to be able to generate pro-social implementations of the technology too.
So I just wanted to place these here for us to hold onto as we move through today. Now, when we think about the importance of human dispositions, we're understanding the role they play, right? We know that human dispositions are central to our values and how we see the world, but they're especially essential now, in the environment we find ourselves in. One of the first sets of dispositions I was taught about in arts education was in aesthetic education: the capacities for perceiving and learning.
And one of those was comfort with ambiguity. I come back to that a lot, because one thing education doesn't always do well is build out a place for sitting in a question: how are we thinking critically, but also, very much, how are we developing a comfort with even the big questions? And so I think that these mindsets are absolutely essential. We may not be teaching directly for them at times, but they're still being taught through the way in which we embody classroom processes, the walkthrough of a place, the setup of a place. All of those go into shaping that work, and how we show up in it.
And so dispositions are these essential traits. They guide behavior: how we engage technology and each other in cultural context.
Now, I want to jump very quickly and briefly through this. There are any number of frameworks about this, but there are two that I wanted to put alongside one another, perhaps for conversation, knowing that they feed back on one another and exist for different reasons. One is the CASEL framework, right? A very well-respected one around social-emotional learning. And even if we just look at the first few competencies on that slide, we see these elements, right? Relationship skills, social awareness, self-management, responsible decision-making.
We can also look at something that is very much aligned, a sort of contemporary-slash-future-facing element, and this is the OECD Learning Compass 2030. You can dig into that; there's a lot of writing on it currently. But just acknowledging that Learning Compass, which looks at learning through the lens of every learner developing and having their own compass, with the learner at the center there. And so these frameworks: regardless of whether we're looking through the broader learning lens, or something very specifically focused on the kind of internal work that you understand CASEL to be.
Obviously there are systems at play here, but something that calls my attention, specifically within the CASEL framework, is that all schools are shaping the work of social-emotional learning, right? All classrooms are doing the work of social-emotional learning. We might be doing it with intention, in a sort of proactive or productive way, or we can be doing it to the dismissal of some of that, right? In my mind, the stereotype of classrooms with students sitting in rows facing forward, not speaking, is shaping an understanding of what social-emotional learning is, even if we don't name it as such. And so the invisible nature of the impact here, I think, is really significant when we're thinking about how these things show up in teaching and learning spaces, and also in the education ecosystem as a whole building, as a director or directors. So I would place an emphasis on some of those questions, even though these are not necessarily the tested areas that we encounter. They're still very real, though, for who our students are, who our educators are, who our administrators are; they bring a level of values to that space.
So, while I fully appreciate that there are any number of framings of dispositions, each with a research basis, I did want to put together just a brief table for us to consider here: what these dispositions are, with a very brief explanation for us to at least consider as a surface pass; where they're being pulled from, with regard to the research and writing; but specifically, for our purposes, how they intersect with the world of artificial intelligence, and how we can be schools that are AI-ready but still very much human-centered in their orientation, right? So I want to give you a moment just to review here, and I won't read this to you specifically, but we find ourselves in this moment with a hyper-focus on AI.
But really, where the attention sits is generative artificial intelligence, right? Generative artificial intelligence being AI that generates materials or content. Not all AI is generative artificial intelligence, right? And so I often refer to it, as others have as well, as this backpack word, though increasingly I think of it more as, you know, your checked-luggage kind of word, right? We're trying to shove as much as we possibly can into it. And so it winds up taking on all this meaning that we don't really have a great way of parsing out, and may not intend it to. And so when we think about how artificial intelligence is showing up in teaching and learning, it's largely this moment of generative artificial intelligence for us. Things like ChatGPT, or you may be familiar with the recent announcement of Sora, this update that makes incredibly high-quality, creepy video, right? Hyperrealistic video based upon text prompts.
So what we're looking at here is that, when we think about curiosity, this is the intersection where AI prompts inquiry, hopefully beyond default AI outputs, right? What we mean by that is that we should never simply trust the solution when we think about artificial intelligence and generative AI in teaching and learning moments. It is a word calculator, in many ways, right? And not to be reductionist about it, I think you can take a couple of different views of it, but at the end of the day, it is simply using and implementing incredible levels of mathematics to generate plausible-sounding text. But the text that it generates is not inherently true. What the system is doing is calculating, statistically, what the next phrases or words might be, based on the process of creating tokens.
And so, thinking about the way I try to emphasize these things when I'm teaching educators about artificial intelligence, or making recommendations, or developing some of these literacy frameworks that we're looking at, I'm mindful that even something like a sense of criticality, when we approach the tool, is at its core teaching a level of curiosity, right? We shouldn't simply accept, but rather critique, engage with, continue to prompt.
The researcher famous for establishing the TPACK framework, which maps the ebb and flow of the intersections of technology, pedagogy, and content knowledge in teaching and learning spaces, had a great way of phrasing it. Sometimes we call the generations by artificial intelligence that are not true "hallucinations." It's actually much more effective for us to think of it this way: everything that AI generates is a hallucination; it's just that sometimes it's factual.
So I think that as we encounter this technology, it is really important for us to remain curious about it through a level of criticality.
When we think about empathy in that space, obviously these systems have human impact. And so as we engage in this process and remain critical, we really do need to create habits of empathic capability, so that we can evaluate the impacts, the outputs, the systems, the decisions, the processes, and the policies that we put in place, because they actually push up against what it means to be human, and create impact for the people that we engage with.
When we think about creativity, we do know that AI has this incredible ability to elevate creativity in us. It might give us food for thought, right? But we also know, because it is deeply statistical, based upon the emergent research that we're seeing, that it's actually flattening creativity as well. And so, while it might give us this ability to create faster or more rapidly in environments where there's not an inherent level of creativity established, we also know, because of how the model works, that the top end of that creativity is being leveled off as well. And so, to the extent that we're seeing this, we know that it's having this impact. So it's on us, as we teach toward creativity, to build practices that both leverage the tool, but also invoke what it means to be human as a creative element as well. Joy and purpose.
AI should, as we know, serve human flourishing, not remove the things that bring us the most joy. And so I'm mindful of one of those early memes from the past few years: I wanted AI to do my chores so that I could have more free time for art, not the other way around, right? And so this thing that we have access to should be providing a space for us to be more human, to make more of our time when it does provide efficiencies, and to facilitate human connection.
That ethical discernment question is very real for us. With artificial intelligence in particular, we know for a fact that we don't have great insight into the algorithms. We don't know the Instagram algorithm, we don't know how the Facebook feed is sorted, we don't know how ChatGPT creates what it does. And so that creates black boxes, where we very much need to be mindful of the decisions that we're making, and aware that there are these ethical trade-offs: efficiency, at what cost, right? And then lastly, something that I think is really important, which we may have time to get to later on, is that reflection is needed to evaluate AI's role and bias, right? As a metacognitive act, reflective practice for teachers, for learners, for ourselves as administrators and technology leaders: what's our own practice in accepting that, yes, the tool may have its purposes, but at the end of the day, what kinds of questions am I asking of the system, of the policies, and the rest of it? This is actually taken from MIT research, an article from the summer, and I will make sure that all of this is part of the package.
So we can come back and look at some of those in different depths when we're done. But it was emphasizing the fact that humans can still do many things that AI cannot, and they use this acronym, EPOCH, to look at this space. And so while AI may be able to imitate output convincingly by creating the next word through tokenization, we also know that, as human beings, as people who work in systems of education, these are all things that are significantly important to us in the work that we do, and perhaps, arguably, what brought us to work inside of this space originally, right? The fact that we were connecting with other people, providing presence, that there was a level of hope and vision for the work that we were doing, a level of creative imagination behind the work, and then, of course, being familiar with other people.
So these kinds of dispositions are not simply an opt-in. Yes, they exist, but in many ways they matter now more than ever, or should be highlighted more for ourselves, as we think about how we navigate decisions about artificial intelligence inside of these spaces. As an educator, what might it look like to build empathy into a piece of an assessment, right? What might it look like to value leadership, or a sense of vision, or creativity more deliberately? As technology leaders, what does it mean to lead with a sense of ethics? Not to say that we don't already do that, but certainly now, through the lens of a space where people perceive this as being offloaded onto their technology, leveraging these things and centering them in the conversation. I think these are really deep questions that we can look towards, that might guide our practice and create a compass for us moving forward.
You know, just to provide a bit of grounding for how this looks when cultivated: none of this is necessarily new, right? We have been saying for a hundred years that learning is relational, experiential, and social; that education itself is a practice of freedom, or should be; and that we create systems that either allow that to happen or diminish that possibility. And then, of course, all the way through to things like Project Zero and creating cultures of thinking, where we're looking at classroom routines, the language that shows up inside our classroom spaces, and the opportunities to reinforce some of these habits.
So as we look at the cultivation of these pathways, we know that it happens at various levels of the system, right? And we know that the different levels of the system have different groups that are impacted by those decisions, right? And so, of course, modeling by educators: the disposition comes before the tech, right? How are we showing up in ways that allow this to occur, with a level of meaning, and being deliberate about it? To that end, how are we embedding practices or habits that leverage questioning, right? Or inquiry, active critique, and reflection. But then also, through the lens of environmental design: what does the classroom actually look like? What's the culture? What is being valued inside of this place? What are the school-wide elements? What's the technology environment? All of these signal which things are valued, right? Those are all parts of design that we have a role in, regardless of the level of the system that we participate in.
And then lastly, I am fond of acknowledging that what we name, we nurture. The things that we collect data on, reading tests, English tests, math tests: when we name it, we nurture it, we care about it, right? That signals, to ourselves, to schools, to students, and also to educators, that those are the things we're putting a spotlight on. What might it mean to acknowledge that, and perhaps shift some of that spotlight? Because I think, as educators and those in education, we are capable of walking into that, right? Like, we can do the work; it has been happening. We are certainly not new to having things thrust upon us, as educators and those who work in teaching and learning spaces and education broadly. But what does it mean, in our own practice, in whatever system we participate in, to name some of these things more as things that we care about? That in our policy for education technology, we're inviting the questions that are really important: critique, the selection process, whatever it looks like for us.
And then, just acknowledging some of these pedagogical shifts or moves for us: thinking routines, See-Think-Wonder being a very common one; human-centered problem framing, where, as technology leaders, we know we need to do this on a regular basis. We know we're experts, but how are we centering users in that problem solving, whoever they happen to be inside of that system? Creating values reflections as part of our meetings, as part of our discussions. For ourselves, metacognitive prompts, which I think are much more useful in spaces of small-group learning, but useful, again, to call attention to the ways in which we are processing this kind of work. And then, very much in this moment, something that we bring to this conversation: how do we celebrate process, not just product, right? How are we thinking about the journey that it takes to get to a place, whether that's a technology policy, or a student project in a classroom space? How are we honoring the fact that process is important, not just as part of an academic to-do for the sake of innovation, but that some of the things we're looking at in our thinking on dispositions deserve to be a very deep part of that consideration as well?
You know? Uh, well, let me, uh, pause there.
I know that I've been talking for, for quite a bit, tossing a lot of ideas out there.
I did just want to ask if there are any questions that come up, any pushback, things that you're seeing in the field that caught your attention. Is there anything that comes to mind for you as part of the discussion so far? Use the chat, or come off mute.
No, that was actually great. Something that I'm... oh, I'm sorry.
Please.
Hey.
Hey, Jordan, before you jump back in, do you mind, um, would you go to your audio button in the bottom left corner? I wanna see if we can make just a small adjustment.
There's the little thing where you can pop it out, looks like a little up arrow, like a little caret, and there's the audio settings.
That first one.
Yes.
Would you mind opening that up? And then Yeah, of course.
It should default to the audio. Scroll down to Audio Profile and check out the Zoom background noise removal, and make sure that that dropdown is on High. The background speech, there's a little dropdown there. Is that one on?
Um, I just wanted to see if we could just tweak your, your audio for us.
We're just getting a little bit of interference.
Oh, I'm sorry.
Yeah, yeah, of course.
No, no, it was on medium, but I put it on high right now, so maybe that's better.
That's better.
Yeah.
Yeah, that's clear too.
Thank you.
Sorry About that.
Yeah, no worries.
I'll just start again then; we can run the 30 minutes back now that you can hear me.
No, it's helpful. And as we have a discussion too, I know you've got a couple more things on here, but I think our audience is also very intrigued by the concept of AI literacy. And I know in one of your positions, you've done some contract work with Stanford, looking at AI literacy and ethics as a learning designer. Can you tell us, just off the cuff, a little bit about that piece of it and how it might relate to K-12 schools? Yeah, of course.
Let me, uh, I'm going to jump out of share. I'm going to open up another screen real quick that I think will help us with that.
Great.
Yeah.
Yeah.
So part of the work that I've done, I've had the fortune, uh, uh, partnering with some, uh, some really great groups over the past few years.
Um, but one of the ones that I'm keen to center, um, actually in our conversation, here we go.
This again, y'all able to see the website? Yes.
Perfect.
Yeah.
And so this literacy element, and how it connects for me: this is from Digital Promise, and you're familiar with Digital Promise's work.
We Love those guys.
Yes.
Yeah.
Yeah.
And so one of the authors here, Dr. Pati, is a very frequent collaborator of mine. She's fantastic; I would highly recommend her work and the broader work of Digital Promise's teams.
She has spoken to the Atlas audience as well. She was in the cohort under mine in graduate school. So, very fabulous, just a wealth of knowledge.
She's fantastic.
Yeah, The a absolutely lovely human being, um, and deeply thoughtful in her work, um, but was instrumental in, in creating this.
So as a team who assembled distance, uh, I can't speak highly enough about it, but when I think about this and kind of the space of space of ethics, you know, um, without belaboring, it just, uh, very briefly for the people here and for those who are going to watch this later on, you know, the, the way in which this is, is that there's a set of core values at the center inside of that yellow circle, right? And so no matter what, as part of, we think about like the literacy elements, human judgment, keeping our humanity at the center, and some of the things that I'm talking about, like dispositions, I think are, are, are essential in that element when we talk about bubble, um, and that centering justice, that bubble that's at the center there.
But as we build out from there and think about these modes of engagement, there's a way that we understand AI, there's a way that we evaluate it, and there's a way that we use it, right? And so those are kind of like the three pillars of engagement, um, that we find ourselves in when thinking about artificial intelligence at this moment.
Color coded, uh, at the top part are those types of usage: interaction, creation, and problem solving.
Um, interaction is, for me, uh, perhaps the most insidious one, or the most invisible one.
But it's like, you know, if you're scrolling on Instagram, Facebook, uh, if you're watching Netflix, listening to Spotify, whatever it winds up being, right? The algorithm is paying attention to what you're doing.
If you're skipping a song, if you're liking a song, if you're adding it to a playlist, if you like a movie that you watch on Netflix, if it sees that you just binged three seasons of Law & Order, right? Like, it is watching these things.
And that is going to be the algorithm that leads you to something personalized.
So your interaction with it is very real, regardless of whether we can see that or not.
The creation element is this generative moment that we find ourselves in, where we can take a text prompt and generate audio, video, uh, images, or text. Um, you know, so from a pedagogical sense, things like MagicSchool or Brisk, or those types of things, where we can, um, prompt for a rubric, right? Uh, for it to help us create a lesson plan and whatnot.
Or even if you're thinking about, uh, NotebookLM, where you can put text in and it creates a sort of, like, podcast out of it for people to listen to. Um, valuable tools, uh, that we can think of leveraging inside of this space.
And then problem solving feels a bit more complex to me, but it's like the, the usage of an AI tool with a, uh, more complex, deeper outcome to it.
So perhaps there's some element of generative AI too, but perhaps there's also machine learning, something along those lines.
Um, visual learning models, uh, that type of thing.
The literacy practices that run along the bottom there, the sort of purple ones, from algorithmic thinking all the way through on the other side to information and disinformation, um, are for me kind of like umbrella terms for the ways in which we see this manifest, um, in classroom spaces, uh, and teaching and learning spaces more deliberately.
But very importantly inside of that is this ethics and impact one, right? Uh, and so I, I like to call attention to the fact that these human dispositions we're trying to design for and teach for are all very much, like, embedded in and complementary to everything here, right? We're not simply teaching AI for the sake of someone being able to build their own language model.
We are teaching AI literacy that runs deeper than that, because we know for a fact that we're in a space where, uh, lemme see if I can dig further down. Stop the share.
There's another element to this that I think is also really valuable for us to, to look at on, While you're getting this pulled up.
Um, there's a podcast episode on Talking Technology with Atlas, where, I believe, um, they sat down with, uh, both Pati and also, uh, Jeremy, the executive director of Digital Promise.
I don't think that episode has come out yet.
So be on the lookout for that, uh, coming soon.
It's, it's in the pipeline.
Oh, that's amazing.
I think it overlaps really well with this, and they'll have much better insight into it. But, um, again, I, I respect their work so much.
This is one of the things that I think is important for us to acknowledge here: while the tool is new, while the technology is new, as we're thinking about AI literacy, it is an evolution of things that have been in the kind of, like, education ecosystem and ether for a number of years now, right? Um, it's a recombination of those, and it might manifest differently because, you know, now we're looking at ChatGPT, but at its core, we are still pulling from concepts around data literacy.
We're still trying to leverage computational thinking.
We're still looking at media literacy and digital citizenship.
It's just that these things now overlap in like new and exciting ways, um, but also like complicated ways that we have to think through when we think about things like, uh, privacy, transparency, you know, student data, teacher data, right? Like, so it's a, it's a recombination of some of the elements that have existed, um, in my mind for, for a, a longer time now.
But we're being asked to sort of refocus our attention on this space and, and connect AI literacy to, to all these practices, uh, in, you know, an ecosystem where, uh, you know, AI didn't go through a procurement process.
We know that for a fact, right? Like, it, it's in our schools, whether we bought it or not.
Um, but to that end, like, what are we doing to, to actually make sure that it's being used well and understood well, not just for students, right? The conversation is hyperfocused on whether students are using it to cheat, right, or whatnot.
Um, but for ourselves, to acknowledge that, uh, administrators need to understand the technology as well.
Why would I not necessarily just randomly use ChatGPT to generate an email? There's a pretty good reason behind that.
Uh, right? And so developing these kinds of, uh, techniques and understandings at multiple levels, um, comes back to some of what we were talking about earlier, when we're thinking about this place of, like, how do we navigate these tensions, right? Um, and what's the space around thinking of AI as a shortcut, and, well, what does it actually mean to, to build toward a place where we can design learning and, and policy that leverages some of those, those dispositional responses?
Yes.
Um, lastly, uh, before I just, like, open it up for questions, something that feels really important to me is, like, this is a quote from this article.
Again, I will place all these in a document for you to have access to.
Um, but these are three big takeaways about what we can learn from a history of education technology.
Uh, and so, you know, the first quote regularly reminds students and teachers that anything we try is at best a guess, right? We do not have best practices right now; we don't have enough evidence, it's emerging, but we do have a hundred or so years of education research to draw on broadly.
So we can start from a place of, of knowing, uh, but be open to this experimentation.
Second, schools need to examine their students and the particular experiments they wanna conduct.
Some parts of each curriculum might invite playfulness and, and bold new efforts while others deserve more caution.
And then third, when teachers do launch new experiments, they should recognize that their own local assessment will happen much faster than rigorous science, right? Like, you are the experts on the ground, in the field, witnessing the thing happening in real time.
Um, and so while I would love to have a treasure trove of research from MIT, Stanford, and other high-level universities inside of this space to support best practices, we remain curious and critical for the moment.
Um, and so my own practice inside of, inside of this space is to acknowledge that, uh, incredible claims require, uh, incredible evidence.
So as we go into this space, listening to the rhetoric and the discussion around it, we should not simply accept it, uh, but be, uh, questioning humans, uh, modeling some of that curiosity, uh, we hope other people in the system have around this thing, um, without seeing it as a, uh, cure-all for many of the issues that we find in, uh, education systems right now.
So I know this was a, this was a sprint, not a marathon.
Um, but I'm hopeful that some of this insight is, is useful to you as you are, uh, developing your own posture in, uh, your leadership roles. To that end, happy to answer any questions, share resources, uh, and, and point to those types of things that, uh, are helpful in shaping that.
I really appreciate all of your time, uh, and happy to kind of revisit anything and go back and focus on it, but all of this will be, uh, shared as well.
Awesome.
Well, thank you again for being here.
Um, attendees, feel free to come off mute and share any questions or thoughts that you have.
You can also drop those in the chat while they're having some think time.
I do have a question that I've seen come up a lot recently as I'm going out into schools and I'm talking to educators.
A lot of them are wanting more resources about the environmental impact of AI.
And as we're talking about this kind of through an ethical lens, Jordan, do you know of any particular, uh, resources that are addressing those? And audience, if you have those, um, again, feel free to join the conversation or to drop 'em in the chat. But anybody know of, you know, some good studies or anything along those lines to help people start to have those conversations with students? Yeah, that space is kind of emergent, uh, and it is, not shockingly, pretty difficult to get straight answers from some of the larger tech companies on their impact, right? Um, but there's a couple of, uh, very big articles that have kind of demystified it.
We know, because of the sheer energy these require. Um, I think that some of the more sensationalist pieces have tried to reduce it or simplify it: every time you prompt, it's a bottle of water. It's, like, not quite that straightforward, right? The kind of prompt, what you're asking for, you know, those are all real factors in there.
Um, but we know for a fact, uh, from research, uh, I think there was an article just this past week about, like, you know, uh, without being quite so alarmist, electricity prices going up and how we're navigating that as regular citizens, when a vast majority of that electricity, uh, being used is data centers and those types of things.
So, um, happy to, to provide some, uh, some resources there just to point people to that conversation.
But environmental impact is real, and those questions around how we're using it are pretty, pretty important for students to grapple with as we, uh, think about the role of this thing in our, in our everyday life.
Especially knowing that just yesterday OpenAI, uh, released their web browser, right? So, Yes.
Yes.
Have y'all seen that? Has anyone downloaded it or played with it, audience? Yeah, it's, uh, a generative AI web browser.
Like, it's got some pretty big concerns around security, from what I understand from some of the early reading that I've done, uh, right.
But, like, the more that the thing permeates our systems, uh, the greater the impact, right, uh, that we're gonna have.
So, yes.
Yeah, it's a Mac-based browser.
If you haven't checked it out, it's not available for PC, which I thought was an interesting, interesting one.
Um, thank you.
And I think it's really good, too, when we're talking about those ethical conversations, to talk about 'em holistically. You know, I feel like it's very easy to demonize a new technology but not to have perspective. And again, it does have an environmental impact, so talk about those real things, but then also: are you binging Netflix for 12 hours straight on the weekends, right? Like, that has an environmental impact too.
And so having conversations with the students about all of tech. Uh, we've got a great question in the chat from Ling. She wants to know: how do you address the discomfort of adults, both the parents and the teachers, around the SEL component, such as being overdependent, increased impulsivity, et cetera?
Yeah, I mean, I think that's an interesting question.
Um, I am, uh, mindful that, regardless of what the end impact of an initiative or a framework is, whether that's, you know, putting an AI literacy initiative into our schools or talking about incorporating SEL into our schools regardless,
I think that, uh, the process that has to be undergone, uh, in our journey there is making sure that parents, teachers, administrators, and students are part of the conversations.
Yes.
Not necessarily that they have some type of, like, end control, right? But at the same time, like, providing a space to be heard, reconciling with those questions, right? Not just, like, making a space for them to ask questions and calling it a day.
Like, what is genuine dialogue and what is genuine discourse? How do we model those types of things in, in our practice, right? Because at the end of the day, I'm thinking back to, like, any number of one-to-one device initiatives that have happened over time, right? If the end result is that you wanna impact a student, but you don't bring parents and caregivers into that conversation about why their child is or isn't bringing home an iPad or, uh, a Chromebook, you are leaving out an enormous piece of the puzzle, uh, around how this piece of technology or tool or framework is going to have an impact in those learners' lives.
And so, regardless of where that like end goal is, I do think that we have to acknowledge that there's many kind of like spokes in that ecosystem.
And providing a space for learning and dialogue for each one of them is important.
Thank you.
Ling.
I have another resource to also address that.
So the Rhythm Project, if you are not familiar, it's headed up by Dr. Allison Lee.
Fantastic.
Another person, like y'all need to go follow her on LinkedIn, check out the stuff that the Rhythm project is putting out.
Um, she started out in big tech and has transitioned over to this nonprofit.
And the Rhythm Project has put out this thing, uh, called the Sparks Toolkit.
And it is a set of downloadable cards, and you can actually have this interaction with your students.
But, Ling, it could also be interesting to do this maybe at, like, a parent coffee night. You'll have to look at the cards and check 'em out and see, but as you're trying to build that comfort, it really, uh, lets you look at different scenario-based, uh, inquiries and answer those from a student perspective.
So again, really, the target audience of this was students, but I'm wondering if you might could also use them, showcasing how your students use them, in other environments to bring in, um, your different constituent groups, like your parents and your teachers too.
I just used it.
Uh, I think it's, I'm looking, I'm trying to see. I'm opening the link right now to make sure it's the same thing.
Sorry.
There's a, the rhythm project is fantastic, highly recommend, uh, their work.
But I just used this, uh, very recently, working with educators, um, on, like: does this, uh, scenario, um, increase human connection, does it decrease human connection, or does it depend? It's all kind of, like, AI-centered.
Yes.
Right? Yes.
Yeah.
That's, that's theirs.
That's it.
Yeah.
Um, and I love that, right? Like, it provides us a space to like enter into a conversation knowing that AI hits all these different scenarios.
Um, but it puts us in a posture of dialogue and, and, uh, debate, not in a bad way, right? But, like, debate in, in a healthy way, where it's acknowledging that, like, yeah, these are real-life scenarios.
Um, how do we actually do this in a way that feels meaningful? Uh, and how do we get different perspectives? How do we invite different perspectives into a space to then, as a kind of ecosystem, render new decisions? Yes.
So y'all, I'm gonna put one more kind of related link to that.
So an adult version of this that focuses on data privacy, cybersecurity, and some of those types of threads around AI implementation.
Um, it's called the AI Turing Trials.
And this is a resource through nine.
We actually have a, a version of these, uh, co-branded with Atlas, but for right now, you can go and get 'em directly from nine on their website.
Again, you can download 'em, but I highly encourage you to check those out with your senior administrative team to spark some really great conversations and let us know some feedback on, on how that goes.
Um, Benny, Tom, Jason, you've all got your cameras on.
Anybody want to jump in? We've got a few more minutes.
Any final questions or thoughts you'd like to share? I'm, I'm hoping a question comes out of this.
These are just some thoughts that I've had, and I've been trying to formulate a question out of them for 15 minutes.
Um, you know, and I'm sure I'm in some of the, the same space and, and readiness that other schools are in, but we have, you know, we have teachers that have been experimenting with AI.
We don't have a formal AI adoption policy right now.
Um, and we haven't had a ton of professional development for faculty on the use of AI, the ethics of using AI.
Um, I'm facilitating a group of faculty right now, and the main recommendation that's come out of it is: we don't wanna move too fast.
You know, at least this group doesn't. You know, it's a self-selected group that wanted to be part of this, and they don't really wanna move too far or too far ahead with AI, because they're not comfortable with it.
They want, you know, a year's worth of professional development.
On the other hand, I've got pressure from parents and students who are using it and want us to use it.
And, you know, we're geared up to pull the trigger on Google Gemini because, you know, we're a Google Workspace school.
So, um, you know, for what it's worth, Gemini is supposedly building in those guardrails.
So that it's appropriate for, you know, we're just a high school, so we're looking at it for, you know, NotebookLM and so on.
So the question is, I mean, and obviously we want to do the, um, uh, you know, we want to do the professional development.
We, we have new leadership and academics, so hopefully they'll, they'll free up some time.
I mean, that's always a problem.
I, you know, every technology person will tell you, whether it's academic-based or, you know, operational, getting any sort of, um, professional development time is really difficult.
But is it worthwhile to say, hey, yes, okay, as a school we're gonna push ahead with the quote official adoption of Gemini, you know? And, and a lot of this is equity. You know, a lot of our kids have their parents' credit cards, and they can get the pro version of ChatGPT or whatever.
Um, but, you know, for digital equity: all right, hey, we have this. But, you know what, yes, the school's turning it on, we're quote adopting it, but it, it's still in the experimental phase, because teachers still need to learn alongside the students.
And, you know, do you think that's an okay way to do it? Or to say, you know what, folks? Sorry, we really need to get more people on board, um, through professional development, and we wanna, you know, push it off here.
I'm getting, I'm getting pulled in both directions, so I'm just looking for anybody here to say, nah, Tom, you know, stand fast.
You know what? Yeah.
You know, if you could put it off a year, try and get more faculty on board.
So just 2 cents from someone outside of traditional systems.
But I'm currently working with a few districts this year, probably like five or six on kind of a small cohort based model.
'Cause they're grappling with a similar question, right? Um, so they're more or less a coalition of the willing, like 12 to 15 people from different districts, who are interested and invested and willing to play and experiment, um, but who are also being put in a position.
Um, we're working through, very specifically, like, what does this mean for our policy at the district level, um, and making sure that they have input into it.
Um, and then also making sure that anything that is done is vetted through careful consideration, right?
So that it's not simply buying a tool for the sake of appearing innovative, right? But that it is actually, like, how does this align with our vision, our identity, of who our learners are, who we want our graduates to be?
Um, so I think, like, starting small is a totally reasonable approach, and in many ways it kind of acts as a bit of, like, you know, dipping a toe in to see how we temper the environment. Yeah.
This kind of thing.
And then also acknowledging that later on, as part of the journey, districts are going to have, uh, you know, Brisk coming in, MagicSchool AI coming in, Google coming in, and doing kind of, like, a set of learnings about the specific tools.
Um, I don't wanna voice strong opinions about any one of them specifically, but the thing that I appreciate, uh, well, with the exception of, of Google, though I have no reason behind it, is, like, I really think we should be focusing on bringing in education tools, not technology companies, right? Like, an organization, if you're familiar with PlayLab AI, really great group, uh, doing fantastic work, but they centered education safety around student data.
And they did that as a nonprofit. Now, I will go on record as saying I would never want, in my experience, like, ChatGPT in a classroom space, right? Yeah.
It's just, it's not education technology.
That data goes somewhere, you know what I mean? Like those, there's a lot of issues behind that.
Um, and so I'm just mindful that for what it's worth, those are kind of my initial considerations.
But again, we have others in the room and they also have a deep wisdom, uh, on that, so.
Sure.
Anyone else have thoughts for Tom? Well, I, I work at kind of a different school. Kind of, uh, the, the part of the school I teach in is grades five through nine, and I'm teaching a course in AI to eighth graders.
That's a full academic course.
And so it was really helpful today to see your work.
I think it was with Stanford, right? I think I had trouble hearing at that point, but that one slide kind of outlined how AI could be integrated, if I'm getting the right idea from it.
Um, and so my challenge is similar to, um, what was just said, about just kind of how to bring some of the things I'm doing in my class to the broader audience, um, and to get those conversations started with the faculty as, as a larger group.
Um, because, you know, I mean, as was said before, that technology's out there. Some students have it; all of our students have access to it to a certain degree. The question, um, you know, is whether students are using it appropriately, whether they're thinking through how they're using it, being critical about how they're using it and what those effects might be.
You know, I think those are all things that need to be addressed in some way.
Um, and I teach like such a small group of students and it's, it's such an amazing class and it's a really fun class too.
But, um, but I wish that I had everyone's, you know, um, all, all the students as part of the discussions we're having.
Um, so it's, it's, it's challenging and, and I know it's, you know, nascent technology even though it's been out for a number of years, but just to, to be able to get that broad adoption is challenging.
So, and any other advice or thoughts around that is always helpful.
Ashley, I know that you're getting ready to close up shop for the, for the hour, but here's what I'd like to remind you all: not only are your students already using it, but my guess is a majority of your faculty are using it, um, as shadow AI, without, um, disclosing that particular use.
Um, I've had the opportunity of being at many schools and have shocked the administrators in the room when surveying the faculty that I'm working with.
Um, the point in this particular case is: AI is here, and we need to begin addressing it at some level, um, in a very, very measured and thoughtful way.
Thank you Vinny.
And you are right, we are at the top of the hour, so I do wanna start to close this up.
Final wrap up here.
Um, Vinny's school, Kincaid: check out their policy if you need some help with getting started. And by the way, if you don't have anything written down saying what students can do or what the appropriate use of AI might look like at your school, that's a governance issue, and you're really opening yourself up to a lot of risk.
We've even seen schools sued by parents where the students used AI and the parents simply said, well you didn't tell 'em that they couldn't.
And so having some type of policy in place, um, is definitely something to to consider.
Vinny's policy has a fabulous Creative Commons license attached to it, so make sure that you are attributing them if you do use their work.
But I dropped a link to that in the chat.
I also wanted to let you know we have a partnership with Ed3 DAO, and we have put out three brand-new courses.
They are based in pedagogy: Project-Based Learning and AI, Media Literacy and Critical Thinking in the Age of AI, and Design Thinking and Artificial Intelligence.
So definitely go check those out.
Um, some more resources were shared in the chat.
Jordan dropped a link to Digital Promise.
So thank you so much y'all.
We are out of time for today, but we hope to see you back at another Atlas event soon, and we'll definitely continue the conversation.
Jordan, thank you so much for being here with us.
It has been an honor to have you.
It was a lot of fun.
Uh, so thank you very much for sharing, and of course keep up the great work. Thanks so much for having me.
Great to see everyone.
Appreciate Your time.
Alright, bye everybody.
Y'all take care.
Takeaways
- AI and Human Values
Technology should be shaped by human values and designed to solve human problems, serving human flourishing rather than being a "cure-all".
- The Role of Dispositions
Human dispositions like curiosity, empathy, creativity, joy/purpose, ethical discernment, and reflection are essential for navigating AI. These human qualities cannot be replicated by AI.
- Generative AI Criticality
Generative AI tools (like ChatGPT or Gemini) function as "word calculators" that generate statistically plausible text. Users should approach all outputs with criticality and curiosity, as everything generated by AI could be considered a hallucination.
- AI Literacy Frameworks
AI literacy should be seen as an evolution of existing literacies (like data, media, and computational thinking) and must center on core human values and centering justice. Understanding and evaluating AI, along with ethical impact, are key pillars.
- Implementing AI Policy
Schools should not delay addressing AI, as both students and faculty are already using it. When implementing an official AI adoption, starting small with an experimental approach and ensuring all stakeholders are part of the conversation is recommended.