Skill Development and Ethical Considerations in an AI-Driven Future
In this keynote panel from the 2024 ATLIS Annual Conference in Reno, Nevada, panelists discussed the importance of developing AI literacy and ethical understanding alongside traditional skills to prepare students for the future job market. They suggested that independent schools implement guard rails around AI to address ethical concerns and to equip students with the skills needed to become curators of AI in their day-to-day lives. They also highlighted the likelihood of AI affecting many job types over the next decade, underscoring the necessity of preparing students to navigate this changing landscape.
Resources
- 9ine, specialists in data protection, tech, and cybersecurity
- Mission Impact Academy (MIA), online education platform
- Tarja Stephens on LinkedIn
- Mid-Pacific Institute
- Paul Turnbull on LinkedIn
- Pandata, AI design and development
- Cal Al-Dhubaib on LinkedIn
- The Future of Jobs Report 2023, World Economic Forum
- The State of Generative AI in the Enterprise, quarterly reports from Deloitte
- Brandeis Marshall, data equity strategist
- Durable Skills, website
Transcript
Narrator 00:02
Welcome to Talking Technology with ATLIS, the show that plugs you into the important topics and trends for technology leaders, all through a unique independent school lens. We'll hear stories from technology directors and other special guests from the independent school community and provide you with focused learning and deep-dive topics. And now, please welcome your host, Christina Lewellen.
Christina Lewellen 00:25
Hello, everyone. Welcome to Talking Technology with ATLIS. This is Christina Lewellen, and I'm the Executive Director of the Association of Technology Leaders in Independent Schools.
Bill Stites 00:34
And I'm Bill Stites, the Director of Technology for Montclair Kimberley Academy in Montclair, New Jersey.
Hiram Cuevas 00:39
And I'm Hiram Cuevas, Director of Information Systems and Academic Technology at St. Christopher's School in Richmond, Virginia.
Christina Lewellen 00:46
Hello, Bill and Hiram. We are recording live from the stage at the ATLIS 2024 Annual Conference in Reno, Nevada, and we've had an incredible couple of days. What I want to do real quickly is frame up the unbelievable day two panel that we had. Rather than a single keynote, we had, in our day two morning opening session, a panel about AI and how it's shaping the workforce. I know that so many of the topics at this conference have focused on the conversations that are happening with AI, but this one sort of lifted it up to a higher level. What did you guys think of it?
Bill Stites 01:23
The perspective I thought it brought was one that I don't think we necessarily always talk about in schools. When this has come up, a lot of the conversations that we have had in schools, and that others have had in their schools, have been very reactive. And I think what we were able to explore in the opening today was how you can begin to think about structuring the conversations that you're having at your schools in a very positive light, and in a way that's also very thoughtful and connected to mission.
Hiram Cuevas 01:58
I was really excited to hear that, because the way we've been discussing AI at our school is emphasizing that the use of AI is now a life skill, right? It is not something that you can kind of hope you get to at some point in the education. And I made it very clear at one of our board meetings that we are doing our boys a disservice if we do not expose them to proper uses of AI, because every industry is being disrupted. And it was so clear from the panel, and they reinforced this over and over and over again: to say no occupation is safe is a little difficult to absorb. But really, it's an opportunity as well. It's not that you're going to disappear. It's just that you're going to evolve into this new role, in this new position, with all the new occupations that are going to develop from it.
Christina Lewellen 02:52
Yeah, we're gonna get right to it, folks, because this conversation is worth every moment that you invest in it. The panelists, all three of them, are just worldly and so intelligent, and they have so much experience in the corporate world. So to have their perspective brought into a conversation about AI, and specifically what independent schools can do to shape our learners and make sure they're ready for the workforce of tomorrow, it was just really inspiring. I could see a lot of wide eyes, I could see a lot of nodding heads. And so, listeners, we really hope that you enjoy this conversation, because I know I'm still buzzing with it, and it was earlier today. So everybody, enjoy this conversation around AI in the workforce with some pretty incredible panelists. All right, our sponsor, our supporter of this event, is 9ine. So I'd love to welcome Mark Orchison to the stage; he's going to tell you about what we're about to do.
Mark Orchison 03:44
Yeah, welcome. Good morning. My name is Mark. I'm the founder and CEO of 9ine. 9ine provides solutions to schools globally in the areas of privacy, cybersecurity, and tech management. We have our digital platform, the 9ine app, which supports technology leaders within education on managing privacy, compliance, and other sorts of risks and challenges around managing technology within education. With the session this morning, I know that each of the panelists will introduce themselves as part of the discussion, but I want to take a moment to give you some context around why we've gathered this panel today. Where many sessions at this conference dig into the details of managing the immediate opportunities and challenges presented by AI, this panel of experts will zoom out to explore what exactly we're preparing our students for: a workplace that has already been significantly and irrevocably shaped by AI. Drawing on a panel of corporate executives and industry experts, this session will remind us why technology leaders must embrace their role in shaping the AI trajectory in their schools in order to prepare students for the workplace that awaits them. This lineup of panelists will explore our imperative to embrace AI literacy, and remind us why the decisions we make about AI today will undoubtedly affect our students' ability to compete in the marketplace of tomorrow. It's my pleasure, as founder and CEO of 9ine, to sponsor the session, and to bring back to the stage, well, she's already here, Christina.
Christina Lewellen 05:09
Thank you. Thank you. And I'll invite my panelists to come on up; let's get this party started. One of the reasons that my team and I wanted to put together this discussion is because, as I've been visiting events and speaking to heads of school, sometimes business officers, tech leaders, and even faculty, one of the pieces that I haven't been hearing much about is the workforce. And I was thinking to myself, but I am the workforce, meaning ATLIS's staff. We use AI all the time. We experiment with tools, we look for productivity hacks and ways that we can use AI to leverage the already brilliant minds that we have on our staff. And I wanted to just bring together a few experts so that we could talk about and remind everyone that it's not just about academic integrity, it's not just about whether AI is cheating, but that it is also that we are preparing our students for this workforce that will, trust me, expect them to be AI literate. So let's go down the line here, and Tarja, do you want to start and introduce yourself? Tell everybody why you're here and what your background is.
Tarja Stephens 06:17
Sounds great. Hello, everyone. Good morning. My name is Tarja Stephens. I'm a co-founder of MIA, Mission Impact Academy. We are an online education platform, really focusing on providing AI training programs for companies, executives, teams, individuals, and we focus on non-technical talent. What does that look like? And in addition to that, we are looking into not only the skills gap, but the gender gap. So our mission is to empower 1 million women with new AI skilling opportunities, knowing that it will be a diverse future. So super excited to be here, and looking forward to this conversation.
Christina Lewellen 06:54
You have a lot of fans in this audience, so everybody follow her on LinkedIn. She has great content. And Paul, another great friend of mine here.
Paul Turnbull 07:02
Morning, everybody, it's good to see you. I am the ninth president of Mid-Pacific Institute. We're a preschool through 12th grade day school in Honolulu, Hawaii. I know there's a couple of others from Hawaii out there, so Aloha. That was a really heartfelt applause. One of the nice things about being in Hawaii is that you not only have weather that doesn't fluctuate quite like it does in Reno, but we have a really tight-knit group of individuals when we think about different schools around town. Mid-Pacific is known pretty much for its artistic endeavors; we have a pretty robust School of the Arts. We also have a very robust immersive technology program. So we try to take the two ideas, of engineers who see things in right angles and artists who see things in spherical shapes, and try to put them together and see if we can all get along. This is my 11th year at Mid-Pacific. I'm just really glad to be here.
Christina Lewellen 07:59
Thank you for being here. I'm sorry that you had to come someplace cold. And this is very fitting: joining us virtually is Cal, and I'm going to turn it over to Cal. But I have to say this is fitting because I think during our planning meeting, he was wandering through the streets of Manhattan and showing us some very interesting dogs and characters along the way. So hey, Cal, thanks for joining us.
Cal Al-Dhubaib 08:20
Hey, guys, thanks so much for having me. My travel plans were a victim of the eclipse that you all heard about. But I'm a data scientist by training, and for the last eight years, I've headed up a data science consulting practice. We actually help folks design and build machine learning and AI solutions in high-risk, mission-critical settings. We've worked in education, health care, energy, and defense. I'm now head of AI and data science at Further, which acquired Pandata, my firm, just a few weeks ago. Really excited to dive into this conversation. We've been helping our clients, especially over the last 12 months, start to grapple with issues of workforce, getting up to speed, dealing with topics like AI literacy and AI risk management, and I'm just really glad that I was able to participate virtually.
Christina Lewellen 09:08
All right, so we're gonna start at the high level, and I'm going to ask you kind of around the thesis of why we're here today. How do you all envision AI transforming the nature of work over the next, let's give it a timeframe and say, a decade? What are the skills you think we're going to need for the workplace of, really, today, but tomorrow?
Tarja Stephens 09:29
Wonderful, if I can start. What a question. I know it's a topic for every one of us, and why I say it is: the future of work is here, it has arrived. And why I'm very excited about it is that today is the day that we can really start shaping how it's going to be changing our work, our companies. So yes, we talk about productivity, we talk about automation, we talk about the soft skills, human skills, but really what we are excited to see is how now it's time to understand where the tasks are going. Not that jobs are going to be replaced, but also, what is our role working in the workflow, and really understanding the responsibilities that are now coming into developing these strategies. For example, I love to share, I don't know if you saw it last week, the nine tech companies that formed a coalition, IBM, Google, Intel, led by Cisco. Together, it's 1.7 million employees and 805 billion in annual revenue. They came together and are now exploring the same question that you just asked: how is this going to evolve their workforce, but also everybody else's? So it's an exciting time. But I don't know if that answered it, other than we are exploring it, and there's going to be a lot of changes coming.
Christina Lewellen 10:49
Right? Absolutely.
Paul Turnbull 10:50
So at the K-12 level, ultimately, we have a tech vision at Mid-Pacific, and the vision's driving question is: how can technology enhance the human experience rather than drive it? And if everything that we do is about that one guiding question, enhancing the human experience, then you can start unpacking that a little bit. And you know, when you think about workers, and how we sometimes, as we get older, fear failure, we are assessed at certain levels, except for the ATLIS exam, folks, you're going to do great, no failure, all the things that we are traditionally sort of set up to both covet and fear turn, then, I think, in the next 10 years, with AI's support and collaboration and partnership, into essentially intellectual curiosity unleashed. If we can get students to the ability, or to the stage, where they are simply looking at how to be more curious, knowing that, whether it's an individual or a consortium of companies or a consortium of individuals out there in other companies who can help them, they also have this amazing collaborator in generative AI. And it's already happening; there are a couple of studies that just came out showing that generative AI is a little more persuasive in some ways than human beings who are experts in that subject. So if you can actually get to this place where you're not afraid to ask questions, then you get to the really good stuff. And the really good stuff, the bilateral piece, is how to be a good employee. But ultimately, if you're a good person with high levels of intellectual curiosity, and you're not afraid to fail, that is a wide open door to success.
Cal Al-Dhubaib 12:30
I'll pick up from there and say that one of the fascinating observations, from someone who's been building machine learning and AI systems for quite some time, is if you were excited about AI somewhere from five to 10 years ago, you were probably a technologist. In order to do things with machine learning, you needed to know how to program, you needed to know math, and you needed to be very deeply technical. With the advent of tools like ChatGPT, and other software platforms that have AI capabilities embedded into them, now the same capabilities are available to the general public. And we've been helping our clients start to figure out what sorts of training they need to get their workforce up to speed, and we've broken it down into three levels. The first is AI safety. This is acknowledging the fact that, just as we're able to be more productive with AI, folks with bad intentions can be more productive with AI. And so there's a need to update safety training on things like phishing scams that might come in the form of a synthesized voice, or more sophisticated emails that look like they're coming from a real person. The second area of training is AI literacy. This is a little bit more broad. This is really important at the executive level, and for users whose workflows are being impacted the most. This is developing the common sense of when to trust AI, how it might break, the types of unintended consequences that are associated with using AI and machine learning. And it's really interesting, I was at an event just a few weeks ago, and the analogy that I heard is like, if you treat it like baking, you know, you take a recipe and you tweak the amount of flour, you tweak the amount of salt, you get a cake still, but it might taste a little bit different. And with AI, you tweak the flour, you tweak the salt, you tweak the ingredients a little bit, and you end up with a chicken. And it's that unpredictability that we want to try to address with this.
Why do AI systems break, and when do you trust them? And then the last thing is AI readiness. This is acknowledging that there's AI meant for specific purposes or roles, like content creation, or workflow optimization, or AI for, insert almost any task. And this is the broadest part of upskilling that we're seeing starting to happen. The areas where it's most developed now, at least because it's been around the longest, are with respect to content creation, copy editing, personalized marketing. But we're starting to see a lot of courses and resources pop up. Just for example, I was looking at Coursera recently, and there's a ChatGPT for data analysis, and there's a GitHub Copilot for developers. And so we're starting to see the emergence of these very role- and user-specific trainings. And that's AI readiness.
Christina Lewellen 15:17
That's awesome. And that leads me to a question that I have for Paul, which is: can you tell us a little bit about how your school has put guide rails in place around AI?
Paul Turnbull 15:29
Let's go back about a year and a bit. When OpenAI first released ChatGPT, it was the end of November. I walked across the road from my office to our tech guys and started talking about it. And so within about two weeks, we had essentially stood up the beginning of a conversation around parameters in terms of policy and research. After that, we talked about putting together an AI advisory council. The great thing about being at Mid-Pacific is that we have lots of intellectually curious folks, as I had already mentioned. But in this particular case, because of our network, we were able to pull together this council of individuals who represent Microsoft, NVIDIA, two different universities, and then, essentially, a lidar scanning firm. Putting them all together, they helped us come up with four potential avenues of consideration. Those in particular are research, governance, curriculum, and community. So for us in particular, what we're looking at is making sure that we, one, pay attention to the research. And if you think about any school, we're all in schools, if you think about the faculty and the fact that teachers are supposed to be slightly myopic, right? We want them to focus on the relationship between themselves, the curriculum, and the students. If there's this massive, overwhelming surge, somebody yesterday was saying, you know, like a tsunami, right? If you live in Hawaii, first of all, you don't want to use that particular paradigm. But if there's this overwhelming amount of information, people tend to shut down. So the first thing that the council does, and then what we do internally, is to say, we'll filter enough for you so that you can see some of the things that we're offering you. Same kind of thing that ATLIS is doing right now with the AI hub that we saw on the screen earlier. So research: we'll hand you some information that will help you, by division, elementary, middle, high school, and then by department.
And then from the curriculum standpoint, we put together a number of individuals. We've got probably two dozen teachers now who are all working together, again, elementary, middle, and high school, and then separated horizontally by discipline and department. They're using tag boards and a variety of other sort of community-based endeavors, so that we can all see, with transparency, what they're trying, what they're not doing well, what they are doing well. And then we start getting into the community side. It's things like this, it's bringing people in, it's having the ability to really understand, it's like a horse race right now, right? If it's not GPT, then it's Claude or somebody else, and everybody's going to be back and forth for a little bit. Our job at the council is to try to say, here's where we think things are now, maybe don't dive into that right now; here's where we think it's going to go, let's put our eggs in that basket just for about five months. And the more that we can start to identify those lanes, then for the teachers, things start to slow down. Then after that, where we ended up was: essentially now we've trained 100% of our faculty on how to use AI, what it looks like, what it feels like, what generative AI does and does not do. And on our operations side, for the folks who are in support offices, right down to our landscapers, and all the way to athletic trainers and therapists, all kinds of things that will help them improve the efficiencies of their office. And now we've started to branch into middle management, where we are using evaluations, we're setting up an evaluation sort of system for our leaders. And we're using AI to take, perhaps this sounds familiar to you, right? You put out a survey, there's a Likert scale, that's easy. But as soon as there are the comments, people sometimes will receive comments, and the first thing they do is try to figure out the voice. Who said that to me? Why did that person say it?
So we're now using AI to really take all that data and squeeze it down into very simple tactics, so that we can say, here's what you're doing well, here's where you can improve, here's the next step. All of these things together help us, and that doesn't even get into the fact that we've changed our policies and our AUP and our handbook and our guides and all those things. It's really about how people are working with it.
Christina Lewellen 19:49
It's really incredible. So that leads me to circling back to one of the questions I sort of tacked on to the earlier question, which is: this is how everything is happening at Mid-Pacific to address it, but Cal, or Tarja too, what are the skills that we need to be looking at for the workforce? Like, is he nailing it, right? Obviously, this is very early, and I'm sure it'll change 10 times over. But is that right on point, in your opinion? Is it addressing the skills shift that we're seeing in the workforce?
Tarja Stephens 20:20
Absolutely, if I can jump in. The skills of the future couldn't be a hotter topic. Yeah, we talk about AI, the technical skills, but also now the soft skills; we actually like to call them human skills, durable skills, 21st century leadership skills. Those become so important. The Future of Jobs Report by the World Economic Forum came out: what are the most in-demand skills of the future? Communication, relationship building, empathy. These are the things that are now seen on those reports, on top of that. And I couldn't agree more: as we see the automation coming, how we can really set every employee up for success, the personalized training programs, the AI readiness, those are so important. What we have seen now, in really getting the workforce ready, is understanding what are the things that I will be keeping with me. When I have those meetings, I have the knowledge, and I have the data for the speed skilling, and I'm really making those speed decisions as well, based on the data that I have. I do want to address as well the understanding of how skill-based organizations are now coming into the topics for executives; Cal, that's something that you may want to address as well. But really, what are the skill badges? How do I validate those skills that we are now getting, by doing, by learning, by case studies, by experience? That opens up so many more opportunities for hiring, and really that alternative pathway. So I come from Finland, that's where my name is from, and there are so many different layers of alternative pathways now to jobs. We actually have an apprenticeship in Miami, where I live; we are developing an AI apprenticeship. What does that mean for someone who's most likely not able to go the traditional learning route? So I'm very excited to be exploring all these alternative pathways as well, that are really focusing on the skills, that are skill-based.
Cal Al-Dhubaib 22:15
I loved your perspectives. You know, tying the two points here together, the one thing we're seeing with AI councils, especially at organizations that are doing it well, is they're adopting an agile mindset. I encourage everyone to be open to this: okay, you go through a little bit of pain in coming up with your first AI policy and what that needs to look like, and what are some areas where your workforce does need upskilling. Then what we're seeing happen is, as new capabilities evolve, as the, you know, horse race, so to speak, of which AI tool is better than the other continues, we're getting folks constantly asking, well, hey, is this piece of software in policy? Is it out of policy? Hey, we have this really weird outcome over here, so how do we want to adapt to it? And this brings me to highlight, you know, the real investment in adopting AI tools, in using AI tools, isn't just the software alone. It's in having individuals that are trained to use it well, but also in having the right oversight in place, so that when weird things happen, when out-of-policy exceptions happen, there's a known individual with the capacity and availability to be able to intervene, especially if it's a mission-critical application. We oftentimes help clients build things that might inform clinical decision making, and in that case, you want to have somebody that you can pick up the phone and immediately get in touch with. But in some cases, where, hey, it's a tool that's not working as expected, you want to have some sort of process where someone can review these things. I want to highlight that the real investment here isn't just tech; it's humans that can use the tech well, and the right oversight resources to make sure that you are handling and adapting your policy over time.
Christina Lewellen 24:06
Cal, if we could stay with you for just a minute, I'd like you to, if you wouldn't mind, share with me your thoughts about what the most significant ethical considerations are. You know, especially, what should companies address when it comes to AI? And how do ethical concerns manifest themselves in AI applications, in terms of what you're seeing with your clients?
Cal Al-Dhubaib 24:29
So there's no shortage of examples of machine learning and AI going wrong over the last three to five years. I think at this point, most folks have seen the Google Gemini images that created historically inaccurate but ethnically diverse individuals, which created a little bit of a challenge, or controversy. But even before then, we've had situations with self-driving cars getting into accidents that a human wouldn't have, or, in the situation of Apple, for example, when they released their credit card, it offered lesser credit limits to women in the same household as men, all else being equal. And it's not because the individuals designing these models have malicious intent. It's that machine learning and AI suffer from unintended consequences more often than other disciplines. I like to describe this as a function of AI being software that does two things: it's recognizing patterns, and it's reacting to those patterns. And so with that, you get one guarantee: you're going to be wrong some amount of the time. It's all about pattern matching. And so the examples that are used to train these models can perpetuate biases unintentionally. Sometimes it can be quite complex to try to de-bias or normalize the data in a way where you're removing or dealing with undesired biases or harmful representations. And that's the root cause of a lot of these potential ethical concerns with the use of AI. And that was even before we got to generative AI. Today, we have a whole host of other issues emerging related to copyright and the definitions of fair use. There are cases now being handled in courts; The New York Times versus OpenAI is one to watch, for example. And what we're seeing our clients ask questions about is, hey, if they do use OpenAI, or they use software application X, and there isn't an indemnity clause protecting that client in place, might they be held liable for something they produce off of that software?
And these are all the types of conversations that unfold, particularly with the risk management sub-council within the AI council. And it's very addressable. But it's important to understand, when you're looking at using AI for any function, to try to imagine: well, what are the ways in which it could be wrong? How much might that cost me? What does that look like? What's the likelihood of that happening? And what's the severity of the event? And use that to inform your policies and procedures around how you want to use these tools.
Paul Turnbull 27:08
Cal, I couldn't agree more. And especially when you're talking about students, elementary through high school. Using a golf metaphor, right, if you tee off and you hit an amazing drive, if you're off by half a degree at the tee, you're off by about 30 yards when the ball finally gets to where it's going. That's how I see working with elementary and middle school students: we have to get it right early. I mentioned intellectual curiosity at the beginning, but part of that intellectual curiosity has to also be about, in the old world it was fact checking, right? And now it's about whether or not you are recognizing a hallucination, or whether or not you are actually going back to learn more about the thing that was just offered to you. And human nature is just a little bit lazy; we get used to having something tell us that this is right, and after a little while, you kind of stop. So the work for us in the K-12 sphere is really to embed that idea of the second and third check within our faculty, and then it hopefully really sinks in with our kids. Because the ethics are absolutely there.
Tarja Stephens 28:10
Couldn't agree more, absolutely. And what we see now as well is the collaboration: really, not to do this in a silo, but to communicate and work together. And that's what we see now with these partnerships, with companies coming together, sharing knowledge, and really doing this together, globally, locally, government, public and private sector.
Christina Lewellen 28:31
That's incredible. What types of jobs do y'all expect will be most affected by AI? Recognizing that I'm not going to hold you to this prediction in even six months, let alone two years, five years. But do you have a guess for right now?
Tarja Stephens 28:46
If I can jump in: the International Monetary Fund said 60% of the jobs right now are exposed to the technology. Last year, Goldman Sachs came out with this: of all the jobs that are going to be exposed, 80% are held by women. So those are very concerning. And at the same time, well, what can we do? Because then again, a World Economic Forum report is saying 85% of future jobs have not even been created yet. So the disruption is happening. And rather than coming from the fear, I always want to come with the opportunity. I would say all of us are going to be affected; every one of our jobs is going to be somehow affected. But let's find the positivity. What are the roles, what are the tasks that are going to be automated? How can I now augment myself? Like we said, the knowledge, the information, the creativity that comes now through my work. So rather than saying they are taking over, again, how can we augment our work with the new tools? But customer service, absolutely; finance, those are the big roles; content creation, we see those; and legal. And again, those are based on what the reports are saying. So there is going to be disruption; until when, we don't know. I really believe, for example, with last week's announcement of these nine companies coming together, we are all exploring it right now: what are the tasks, what are the roles, individually? And I think it's very important to say that this is the peak year for any executive in this business world. We can't delegate this to anyone. We are accountable right now. How can we build these strategies? How can we support everyone, from the mailroom to the boardroom? Every one of us has to learn this, but it's so custom for each company; there is not a one-size-fits-all model. And that's why we believe it's so important, with your current employees, to really inspire them, find those early adopters, early innovators, bring them in, and really start the conversation at the department level, with everyone who is now part of it.
Cal Al-Dhubaib 30:40
I have a funny question to ask, and I'm gonna need some participation from my panel. But show of hands in the audience: who's overwhelmed in their role today?
Christina Lewellen 30:51
They laughed at you, Cal. And I saw a lot of hands. Yeah, those hands are probably tired from having been at the slot machine and playing cards all night, but they still went up.
Cal Al-Dhubaib 31:03
Okay, well, I've tried this social experiment in many different settings. This was my first time in front of this many educators, and I generally see the same thing happen. You know, I want to contrast this with this fear of AI coming for jobs when we're trying to get humans to do more and more with fewer people. And one of the interesting things that Tarja said, which I really can't stress enough, is that it's not jobs but skills that are exposed. And so what does exposure mean? Exposure means that maybe some function or some part of what you were doing is now going to be altered, not replaced, but altered, by workflows and tools available to us that might incorporate generative AI or even traditional discriminative AI. And there's a report I read recently, Deloitte's State of Generative AI in the Enterprise; they just came out with it in January. And while 70% of the 2,000 or so individuals that were polled said they expect significant return on investment within the next one to three years as a function of Gen AI, fewer than 30% of them actually had a concrete talent plan in place today. And in addition to some of the jobs that are being exposed, we're starting to get early clues as to what might be the new functions of jobs that are growing as a result of Gen AI. I talked a little earlier today about the need to have humans monitoring the outputs of these tools, humans that oversee the policies and enforce how we use Gen AI. And you know, even if you think about this in the context of a learning aid, or a tool where there's some student interaction happening with Gen AI, this has to be facilitated. It has to be moderated; there has to be ongoing testing and oversight. And who's doing that job today? That might be our first clue to the new jobs being created in the general workforce.
But on the tech side, there is a huge shortage. For example, there's talent needed to curate data at the quality and scale required to keep building and improving these Gen AI systems, and there's a shortage of talent in those roles today.
Paul Turnbull 33:34
I would play off of both of those things, right? When you're talking about younger learners who will be future employees, future workers: in schools, we have either a portrait of a graduate or learner profiles. At Mid-Pacific, our learner profile really focuses on nine traits, habits, or dispositions. And if we look at those dispositions, that's what makes us human, and what makes us human is a little unalienable, right? And I stole that from Brandeis Marshall; that is not a me thing. Brandeis Marshall was great; she came out and basically said very early on, think about the things that are un-AI-able: very complex creative tasks, conflict resolution. If you push AI enough, it'll either give you a hallucination or just say, no, I can't. Things like contextual understanding. We're talking together all about this, but rather than talking at the Fortune 500 level about numbers, we're talking about the dispositions of human beings. So we can get our students to a place where they're going to navigate the ebb and flow of jobs. Jobs that are automatable will most likely go away, or certainly be drastically affected, right? Other things that are routinized will be drastically affected. That's already happening on campuses, which is why, you know, you've got a number of hands all saying, yeah, I'm a little bit overwhelmed. I bet we'd have the same number of hands or more if I were to ask how many feel like your IT departments are understaffed. Actually, just for fun: how many people feel like you're understaffed?
Christina Lewellen 35:13
Also some hands? Okay.
Paul Turnbull 35:15
So I totally recognize that a little knowledge is a dangerous thing where generative AI is concerned. You're gonna have tons of heads of school who are starting to do the old, good, you know, AI is here, we don't need another body, we'll just drop these bots in and we'll get an agent sort of spun up, and it'll all be great. Well, that may never actually be true. But we're in this really weird place where we think it might be true. So let's keep pulling back to Tarja's point about not what we're going to lose, but what's additive: what will we gain? Right? Most industrial revolutions have shown that employment has actually increased by one and a half times. Not, you know, the lamplighters are no longer with us, what are we going to do? So that's our responsibility in education: to focus on the human aspects, and then absolutely embed and integrate AI as a tool, through the disciplines and through the ages.
Christina Lewellen 36:15
For all three of you: can you tell me how you are using AI? We're talking about the workforce in general, but one of my favorite little habits lately is, when I come across a new AI tool, I go find its privacy and data policies, and I put them into a bot to summarize the red flags, so that I don't have to read the whole policy before I start using the tool. How are y'all using AI in your lives?
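[Editor's note: a minimal sketch of the habit Christina describes, scripting the "summarize the red flags" step. The prompt wording, the red-flag categories, and the `llm` callable are all assumptions for illustration; the actual model call is left as a stub so you can wire in whatever chat API you use.]

```python
# Hypothetical sketch: ask an LLM to surface only the red flags in an AI
# tool's privacy policy before adopting the tool. Everything here except
# the general idea is an assumption, not Christina's actual workflow.

def build_red_flag_prompt(policy_text: str, max_chars: int = 12000) -> str:
    """Build a prompt that asks only for privacy red flags."""
    excerpt = policy_text[:max_chars]  # keep the prompt within a context budget
    return (
        "You are reviewing a privacy and data policy for a school.\n"
        "List only the red flags: data sold or shared with third parties, "
        "training on user content, retention longer than needed, weak "
        "age/consent language. Quote the relevant clause for each.\n\n"
        f"POLICY TEXT:\n{excerpt}"
    )

def summarize_red_flags(policy_text: str, llm=None) -> str:
    """Send the prompt to an LLM; `llm` is any callable str -> str."""
    prompt = build_red_flag_prompt(policy_text)
    if llm is None:
        return prompt  # no model wired up: return the prompt for inspection
    return llm(prompt)
```

In practice, `llm` would wrap your vendor's chat endpoint; the sketch only shows how to constrain the request so the model reports risks instead of a generic summary.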
Tarja Stephens 36:41
We just finished our two-week AI productivity course. So we had a six-week AI leadership program with 70 participants from over 20 countries, and now we just finished another one that is all about productivity. We are exposed to tools every day through our MIA programs. And what I'm doing as a co-founder, it's next to me every day, all day. So I am using it in strategy, building marketing plans right now, really challenging the team together. I know the knowledge, but I feel like with AI now I'm able to really scale as a new startup. However, what I also want to share is what we do with our team: every month we have an AI tools demo hour. We have a lot of freelancers, a lot of contractors, fractional team members, and they use so many tools. So as a company, we invite them to share, three minutes each, and they go over what tools they're using and why. As a co-founder, now I'm able to be exposed, and I really see where they're putting the data; the data is so important that I need to start seeing the guardrails, so where can we put our data in? So that is amazing. I recommend everyone ask, because you will be surprised which of your team members are using AI; have that visibility and also start the conversation: why are you using this, why do you feel it's beneficial, let's look at those privacy things. So it's not only me using it; we are encouraging a lot of knowledge sharing within our teams, and those demo hours, for example, are a great way to really know what the new tools are, speed-skilling, as we call it. And I want to add: AI is not only a skill set, it's a mindset. You really have a lot of unlearning to do, especially me coming from the corporate world, a lot of unlearning to do to have that open-mindedness of where can we use it.
Paul Turnbull 38:26
I would totally agree with that. I was just using that same mindset yesterday, talking about how the greatest skill of the remainder of the 21st century will be the ability to learn, unlearn, and relearn. And that unlearning, if you're thinking about generative AI, is actually very helpful, because it's this idea that, okay, maybe I wasn't totally wrong, and I'm not totally responsible for unlearning and relearning the whole thing. I don't have to throw out the old cookbook and try to figure out this new recipe. We use it for efficiencies, same thing; for visioning; a little bit as a provocateur as well, you know, how do I really push the boundaries of my thinking a little bit; and for evaluations, we talked about that. In a way, you can make an evaluation a little more anonymous, and you can do it in the aggregate. So the way that we've been working with it: imagine we are all at my school, and we're all responsible for certain outcomes and deliverables. The way in which we deliver those things, however, is really important, because we work with human beings. So we're measuring the actual discourse and the behaviors: the way we listen to each other, the way we lean into a difficult conversation. And gathering qualitative data back, without it hurting if you haven't grown the thick skin to really take that information in, is very helpful when you have a summative report that simply says, in a very constructive way: here's what you're doing really well, but let's tweak this; or here's something you're completely missing, and here are the five other people who are in your same place. Now again, right, we're connecting individuals. So we're doing that for efficiency and productivity. But again, because we're schools, and we're human-based organizations, it's trying to find ways that AI can help us recognize where some tools are missing, and then we're just sort of devising those tools as we go.
Christina Lewellen 40:21
That's really cool. And Cal, I mean, you kind of don't count, because, like, you do AI, you created AI. But how do you use it?
Cal Al-Dhubaib 40:27
Ironically, we were the ones that needed to learn how to use it the most. But I found, as somebody who produces a lot of content, that I actually spend quite a bit of time using tools like ChatGPT to synthesize findings. So now I'll go to a conference, attend multiple sessions, jot down bullets and notes into whatever note-taking app I happen to be using at the time, and then I'll take session descriptions and presentations, if they're available, and combine them with my notes. And I'll do a creative process with ChatGPT to identify, hey, what are the top 10 things that I might want to tell my, insert audience here, and I'll give it very detailed, descriptive notes about the audience I'm trying to reach. So I use it almost as a brainstorming tool to find, out of my own ideas and what I'm getting exposed to, the things I want to highlight the most. And then once I go and write content and bring it back, I actually get it to critique my content and say, what are the flaws, how might I be able to improve this? And I take it sometimes, and I leave it sometimes, but it helps me produce higher-quality content faster. As I've started to settle into my new role, I was actually really impressed that quite a few other teams are already finding safe ways to incorporate Gen AI into their practices. One of my favorites was the individual who's helping with recruiting. She stitches together the different interviewers who are part of a job interviewing process. And because there are so many new roles opening up, I had to hire an AI strategist just last week, and put together the job description when there aren't really a lot of good job descriptions yet for that specific role. Imagine having to come up with interview questions from scratch each and every time. This HR specialist, she has knowledge of what types of questions are fair and how they align to our interviewing practice.
And she's actually using Gen AI to prep the interviewers with questions that will help them gauge the candidate in a fair way, per the requirements of the role. So I thought that was a really unique example of using AI that didn't replace someone's job; it allowed them to deliver higher-quality output in less time, in a way that helped us improve the overall hiring process. Because they were a specialist and an expert in the area, they were able to do it in a safe way, and they were able to curate what made sense and what didn't.
Christina Lewellen 43:01
Awesome. All right, who's got the mic? Let's get some audience questions. We have a couple of minutes; obviously, there's a lot of brainpower up here hanging out with me. So I see my staff running toward a raised hand.
Tarja Stephens 43:13
And while we wait, I'm gonna say: we have one session called My Day with AI, and it's a show-and-tell where you sit and watch somebody show you, from the morning to the evening, how they can save 20 hours a week. So show-and-tell is a great one, from email automation all the way through. It's called My Day with AI. Love it. Go ahead.
43:33
Just thinking about how we can prepare students for the new workforce: one of the things I've been trying to focus on is how to get our students to be curators of AI and machine learning models, versus just users of those tools. And I'm wondering if there are any strategies that you've used to kind of help prep students for that, or if you know of any tools that are actually lowering the bar for getting into the ML space. I know it costs a lot of money to train a model like ChatGPT, but in the ML space there's still a huge scope for those very specific, domain-specific machine learning applications. And whether anyone has some, like, lowering-the-bar examples of how students can get into that without having to dive into all of the Python and the libraries and scikit-learn, that kind of stuff.
Cal Al-Dhubaib 44:27
I'll take a quick stab at this. This reminds me of a familiar exercise we did with our clients in the past when introducing ML and AI literacy. I'm a big fan of the idea of show them how it breaks, show them how it fails. Actually work into the exercise a challenge for them to get the AI tool, whether it's ChatGPT or insert model of your choice, to a point where it says, oh, I don't quite know that, or says something that isn't quite right, and make it an exercise for them to figure out where those boundaries are. And I find that individuals who are more aware of how AI can fail tend to be more wary of just blindly accepting answers, and they end up being more effective curators just by nature of learning how these tools break.
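[Editor's note: a rough sketch of how the "show them how it breaks" exercise could be scripted for a class: run a list of probe prompts through a model and flag which answers hit a boundary. The keyword-based refusal check is a naive illustrative heuristic, not anything Cal describes, and `ask` stands in for whatever model interface you actually have.]

```python
# Hypothetical classroom exercise: probe an AI tool's boundaries and record
# which prompts produce a refusal or an "I don't know". The marker list is
# a crude assumption; real refusals vary widely by model.

REFUSAL_MARKERS = (
    "i don't know", "i can't", "i cannot", "i'm not sure",
    "as an ai", "i do not have",
)

def hits_boundary(answer: str) -> bool:
    """Crude keyword check for whether a response looks like a refusal."""
    lowered = answer.lower()
    return any(marker in lowered for marker in REFUSAL_MARKERS)

def run_probes(probes, ask):
    """Run each probe through `ask` (str -> str); return (prompt, answer, hit)."""
    results = []
    for prompt in probes:
        answer = ask(prompt)
        results.append((prompt, answer, hits_boundary(answer)))
    return results
```

Students could then compare which of their probes produced boundary hits and discuss why, which is the curation skill the exercise is meant to build.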
Tarja Stephens 45:18
And if I can add, I love what you just said. In addition to that, so many companies would love to be part of these kinds of demonstration projects. We call them mentoring companies: a lot of AI startups would love to see a collaboration where the students are learning it and they are coming in. So that's another way as well. Now that this is so new, as much as students and educators are asking, what do I teach, companies are asking how we can really come together on real scenarios and case studies and collaborate on those. So that's the employer and company side. A lot of AI startups are asking, what would be a great case study that I could be part of, as a mentoring company?
Paul Turnbull 46:01
From the student perspective, you know, playing off of both Tarja and Cal: the idea of voice. As students, if it's in the humanities and you're putting together something that's narrative, there's the ability to work with generative AI and realize that it's not going to give you everything that you need; ChatGPT is a really good example of a very locked-in kind of voice. And so the more that students can almost argue with it, and say, that's not what I was trying to say, and really get to this place where it's a collaborator and nothing more, that's the first piece. And then in the higher-level computer science classes and that kind of thing, we're encouraging students, and we have a couple of teachers who are working with students now, especially where GPTs are concerned, on how they can start to develop their own, so that they're more curators and creators than they are consumers. But it's really quite a range, especially right now.
46:54
I'm really into curriculum, and I've found that the curriculum that currently exists is really knowledge- and understanding-based. I'm wondering if you have found or created, like, a discrete set of skills that we need to acquire to develop AI literacy.
Cal Al-Dhubaib 47:09
So do you mean,
Paul Turnbull 47:10
If we were to look at the old, traditional Bloom's taxonomy, are you talking about discrete skills that are above the bottom levels of knowledge and understanding?
47:19
Not necessarily. Like, the IB has ATL skills; it's a set of skills that we're explicitly teaching our learners. I think we all have an idea of what those skills are, of being critical thinkers, but has that list been created somewhere?
Paul Turnbull 47:32
So from the education standpoint, I'll just jump in really quickly. I don't think there would necessarily be a separate, disparate AI-skills, higher-order-thinking protocol, other than asking teachers to really take the critical thinking skills, the higher-order thinking skills that exist out there across curricula, and find ways that generative AI can be embedded into those. So that ultimately we're working far beyond application, and we're working into evaluation of a variety of different skill sets and creating different products and different thoughts. It's all about networking at this point, honestly.
Tarja Stephens 48:12
I'm going to check the website, but there is a nonprofit that has focused on durable skills. And again, they don't call them soft skills but durable skills, and they have a big wheel of 100 durable skills based on, I don't want to misstate the number, but millions of job postings, and what are they asking for. Within that there are 10 categories, and under those, subcategories, and they're all about leadership, and what does that mean underneath: you know, communication, empathy, critical thinking, creativity. With my team, we always refer to that. There are a lot of frameworks, a lot of different ways, but we have found that to be a great kind of roadmap for us, because it is data-based, and it's 100 durable skills in this big wheel, 10 categories, and then subcategories under each.
48:59
Put it in the show notes.
Cal Al-Dhubaib 49:02
I've been perplexed by individuals who have had no background in STEM and then take to AI really well; they very quickly become effective at prompting and using Gen AI tools. And then there are individuals who also come from a similar background and seem to struggle with it. And I've been discussing with peers: what is the pattern, what's the differentiator, what are the common skills that help folks get it? And there's a lot of traditional training that has nothing to do with Gen AI that you can incorporate into competencies around Gen AI. Just a couple of examples would be logic, quantitative reasoning, mathematical or statistical thinking: basically strengthening the skills that allow individuals to recognize when something is a probability, and instead of thinking of things as yes or no, it's, well, 90% it's going to be this and 10% it's going to be that, and how do I want to deal with that? And so statistical and quantitative thinking, combined with logic and philosophy, tend to be really, really strong foundations for those who didn't go the STEM route to be able to excel as far as AI literacy goes.
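[Editor's note: a toy worked example of the probabilistic thinking Cal describes: treat a model's output not as a yes/no fact but as "90% this, 10% that", and weigh the cost of each branch before acting. All the numbers are invented for illustration.]

```python
# Toy decision-under-uncertainty example. A model's answer is right with
# probability p_correct; accepting it blindly costs nothing when it's right
# and `cost_if_wrong` when it isn't. Compare against a fixed review cost.

def expected_cost(p_correct: float, cost_if_right: float, cost_if_wrong: float) -> float:
    """Expected cost of accepting an answer that is right with prob p_correct."""
    return p_correct * cost_if_right + (1 - p_correct) * cost_if_wrong

# Blindly accepting a 90%-confident answer, where an error costs 100:
blind = expected_cost(0.9, cost_if_right=0.0, cost_if_wrong=100.0)  # ~10.0
reviewed = 5.0  # a human check costs 5 regardless of the answer
# Reviewing wins whenever its cost is below the expected error cost.
```

The point of the exercise is the habit of mind: quantify "90% this, 10% that" and let the comparison, not the model's confidence, drive the decision.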
Christina Lewellen 50:18
Let's do one more. And then we'll wrap it up
50:22
Just to piggyback, Cal, on what you just shared about some of the skills that wouldn't necessarily lend themselves to feeling like someone can use AI really well: I think our schools love to talk about some of the things you all talked about earlier, about empathy and all these other kinds of soft, human skills. And so they feel as though they don't need to focus on building some of these tech skills or AI skills; let's just focus on the stuff we've traditionally focused on, and students can succeed in any career. How do we address that, to make sure that we're teaching those traditional skills but also trying to find ways to incorporate skills that would better prepare our kids for what's coming up ahead?
Tarja Stephens 51:05
If I can jump in: we call this lean learning, what you mentioned. And I think, Cal, you mentioned at the beginning, like, the AI foundation from the mailroom to the boardroom: everybody in the workforce has to understand it, and also the technical aspects of it, and understanding how to really explain complex things in a simple manner. What we see is that what we taught in November is, now in April, already old. So the speed-skilling, the lean learning, like testing time: now OpenAI is coming back with Sora, where with five words you are able to generate a one-minute video. So that is why I think it's so important that students are able to understand that what you learned six months ago, in this new world of technology, is probably going to be modified; it's new, it's upgraded. And the mindset of, I need to be curious all the time, a hunger for learning these new things. But absolutely, I think it's so critical, the technology and understanding it in a lean manner. I come from the lean world, lean methodologies. So lean learning is just-in-time, understanding that it's going to be upgraded and really change what you have been learning. So I don't know if that answered it, but absolutely, that's so valuable and so needed.
Paul Turnbull 52:20
I would totally agree with that. Ultimately, again, right, it's training, training, training, training in terms of faculty, because a lot of it's a fear-based kind of thing, where if I don't know, then I'm going to be wrong. It's having them understand that there's no right or wrong; you just have to get your hands dirty and try and try. The other thing is that education tends to be a pendulum, right? We tend to swing back and forth: oh, we understand this, now we're moving on. This is not something we're going to move on from, so it's the ability for teachers to understand that I may try this one thing now, and it doesn't work, it doesn't feel good, but I should try it again in another couple of months, because it may produce a different kind of result. And then where the students are concerned, it's absolutely having them understand that, whether it's the arts, humanities, STEM, or anything in between, there are applications that will work for you, whether you are just a consumer or whether you want to be a creator, a content creator. That's the one thing that we've been working with our faculty on; it's just try and try and try. And I can say, from a sentiment standpoint, at the very beginning of our school year we asked folks, are you going to use it, you know, monthly, weekly, or never, sort of thing. We were close to about 46% of our faculty saying not at all; that's dropped down to just a little under a third now, which is really great. And the folks who were using it maybe once a month have now moved to weekly, and a bunch of the ones using it weekly have moved to daily. So now, when you're walking around the campus, we've got student projects that are visible and demonstrations of learning that are visible. Whether it's art or music, English language development, or math, across the board, AI projects are popping up.
And what I like about it is, again, it's the juxtaposition: what the student was trying to accomplish on one side, you know, what's the prompt, what's the question; the generative AI results on the other side; and in the middle is that Venn diagram of the student correcting what they got from generative AI. And when you go back three months later and you ask the same student to do the same thing again, they can articulate for you the steps they would change. It's not rote learning, remember, regurgitate, forget; it really is a deeper sense of learning.
Christina Lewellen 54:40
That's really cool. And Cal, do you or your cat want to have a final word?
Cal Al-Dhubaib 54:47
I think he's ready for lunch. Great points all around. As far as foundations go, there's a lot of conflicting talk today. You know, you hear the CEO of Nvidia saying that there will be no need for coding 10 or 20 years from now, because Gen AI will have advanced to X. I don't know if that's a bet worth making against preparing our students for STEM readiness and for roles that will likely need them to have some sort of quantitative reasoning and a base level of AI literacy. So just to kind of push back on that point of, we'll use what we already have, and then they can dive deeper into how to build it post high school: that doesn't sit well with me. Not everybody needs to know how to build a car to use a car, but everyone does need to learn basic maintenance, so that if their car does break down on the side of the road, they're not left stranded. And I would say that we should use the same type of reasoning when it comes to equipping students for an AI-powered future.
Christina Lewellen 56:02
Cal and Paul and Tarja, thank you guys so much. This has been an incredible conversation. I'm so grateful that you took your time to spend a little bit with us here at ATLIS. Everybody, please give them a warm round of applause. Thank you very much; we are done for today. You did great.
Narrator 56:19
This has been Talking Technology with ATLIS, produced by the Association of Technology Leaders in Independent Schools. For more information about ATLIS and ATLIS membership, please visit theatlis.org. If you enjoyed this discussion, please subscribe, leave a review, and share this podcast with your colleagues in the independent school community. Thank you for listening.