
The future of educational technology

Far from being a disruptive force in education, artificial intelligence will have a bright future in the classroom of tomorrow, says one education expert.
[Image: Hand reaching through a tablet for books. How will AI be used in the classroom? | Stocksy/Yaroslav Danylchenko]

Dan Schwartz is a cognitive psychologist and dean of the Stanford Graduate School of Education.

He says that artificial intelligence is a different beast, but he is optimistic about its future in education. “It’s going to change stuff. It’s really an exciting time,” he says. Schwartz imagines a world not where AI is the teacher, but where human students learn by teaching AI chatbots key concepts. It’s called the Protégé Effect, Schwartz says, providing host Russ Altman a glimpse of the future of education on this episode of Stanford Engineering’s The Future of Everything podcast.


Transcript

[00:00:00] Dan Schwartz: You know, the tough question for me is, should you let the kid use ChatGPT during the test? Right? And we had this argument over calculators, right? And finally they came up with ways to ask questions where it was okay if the kids had calculators. Because the calculator was doing the routine stuff and that's not really what you cared about. What you cared about was, could the kid be innovative? Could they take another, a second approach to solve a problem? Things like that.

[00:00:33] Russ Altman: This is Stanford Engineering's The Future of Everything, and I'm your host, Russ Altman. If you're enjoying The Future of Everything podcast, please hit the follow button in the app that you're listening to now. This will guarantee that you never miss an episode. 

[00:00:46] Today, Dan Schwartz will tell us how AI is impacting education. He studies educational technology and he finds that there's a lot of promise and a lot of worries about how we're going to use AI in the classroom. It's the future of educational technology. Before we get started, please remember to follow the show in the app that you listen to. You'll be alerted to all of our episodes and it'll make sure that you never miss the future of anything.

[00:01:16] You know, the rise of AI has been on people's minds ever since the release of ChatGPT. Especially the powerful one that started to do things that were scary good. We've seen people using it in business, in sports, in entertainment, and definitely in education. When it comes to education, however, there are some fundamental questions: Are we teaching students how to use AI, or are we teaching students? How do we assess them? Can teachers grade papers with AI? Can students write papers with AI? Why is anybody doing anything? Why don't we just have the AI talk to itself all day? These are real questions that come up in AI.

[00:01:55] Fortunately, we're going to be talking to Dan Schwartz, who's a professor of education and a dean of the School of Education at Stanford University about how AI is impacting education.

[00:02:06] Dan, the release of ChatGPT has had an impact all over the world, people are using it in all kinds of ways. And clearly one of the areas that AI, especially generative AI has made impact is in education. Students are clearly using it, teachers are thinking about using it or using it. You're the Dean of Education at Stanford. What's your take on the situation right now for AI in education? 

[00:02:33] Dan Schwartz: Okay, so lots of answers to that, but, but, you know, the thing I've enjoyed the most is, uh, showing it to people and watching their reaction. So I'm a cognitive psychologist. I study creativity, learning, what it means to understand. And you show this to people and you just see them go, oh my lord.

[00:02:53] And then the next thing you see is they begin to say, uh, what's left for humans? Like what's left? And then they sort of say, wait a minute, will there be any jobs? And then finally they sort of say, oh my goodness, education needs to change. And as a dean who raises money for a school, this is the best thing to ever happen. No, whether it's good or bad, it doesn't matter. Everybody realizes it's going to change stuff. And so it's really an exciting time.

[00:03:22] Russ Altman: So that is really good news. I have to say going into this and I have to reveal a bias. I have often wondered if technology has any place in a classroom. And I think it's because I was, uh, I was injured as a youth.

[00:03:37] This is in the 1970s when some teachers tried to put a computer program in front of me and I was a pretty motivated student and I worked with this computer for about six minutes, and I should say, I'm not an anti-computer person. I literally spent all my time writing algorithms and doing computation work. But I just felt as a youth that I wanted to have a teacher in front of me, a human telling me things. Uh, and so that is clearly not the direction, I hear you laughing. So talk to me about the appropriate way to think about computers. Because I really have a big negative reaction to the idea of anything standing between me and a teacher.

[00:04:18] Dan Schwartz: You must have had very good teachers. 

[00:04:19] Russ Altman: I might have. 

[00:04:19] Dan Schwartz: So Russ, you sound like someone who doesn't play video games. 

[00:04:23] Russ Altman: I do not play video games. 

[00:04:24] Dan Schwartz: So there's this world out there where people can experience things they could never experience, uh, directly. And no teacher can deliver this immersive experience of you in the Amazon searching for anthropological artifacts. There's also something called social media that people use. 

[00:04:43] Russ Altman: I've heard about this. 

[00:04:43] Dan Schwartz: Yeah. Yeah. 

[00:04:44] Russ Altman: I think we disseminate the show using it. 

[00:04:46] Dan Schwartz: So back in the day. 

[00:04:47] Russ Altman: Okay. So I'm a dinosaur. 

[00:04:49] Dan Schwartz: Uh, back in the day, you got the Apple II maybe, and it's about 64K, maybe. It's got a big floppy drive and it takes all its CPU power to draw a picture of a two plus two on the screen. So I think things have changed a little bit, Russ. But I appreciate your desire to be connected to teachers. I don't think we're replacing them.

[00:05:14] Russ Altman: I'm not going to give you a lecture about teaching. But I will say this one sentence that was reverberating through my brain when I was getting ready for our interview, which was: when I'm in a classroom, and this has been since I've been in third grade, I am watching the teacher, trying to understand how they think about the information and how they struggle with it to, like, understand it and then try to relay it to me.

[00:05:34] And so it is, that's where I'm learning. I'm, it's not even what they're saying. It's they're painting a picture for their cognitive model of what they're talking about. And that's what I'm trying to pull out to this day. And so that's why I have such a negative reaction to anything standing between me and this other human who has a model that is more advanced than mine about the material that we're struggling with and I just, I'm trying to download that model. 

[00:06:01] Dan Schwartz: Wow. You're, you are a cognitive psychologist, Russ? 

[00:06:03] Russ Altman: I don't know. 

[00:06:05] Dan Schwartz: Like I had a buddy who sort of became a Nobel laureate. And he talked about how he loved to take apart cars, and I'd say, I love to watch you take apart cars, just to figure out what you're doing. No, so I think, let's separate this. There's the part where you think the interaction with the teacher is important. I don't know that you need it eight hours a day. You know, that's an awful lot of interaction. I'm not sure I want to be with my mom and dad for eight hours a day trying to figure out their thinking. So you don't need it all the time.

[00:06:34] On the other side, you know, we can do creative things with the computers. So for example, I wrote a program where students learn by teaching a computer agent. And so they're trying to figure out how to get the agent to think the way it should in the domain. Turns out it's highly motivating. The kids learn a lot. The problem was the technology quickly became obsolete. Because after kids used it for a couple of days, they no longer needed it, 'cause they'd figured out sort of how to do the kind of reasoning that we wanted them to teach the agent to do.

[00:07:06] Russ Altman: That's exactly what I was talking about before, about my relationship with my teacher. And you just flipped it, but it's the same idea, which is that there's a cognitive model that you're trying to transfer. And by doing that transfer, you get in, you introspect on it and you understand what it is that you're thinking about. 

[00:07:22] Dan Schwartz: I think that's right. You know, so the concern is the computer does all the work, right? And so I'm just sitting there pressing a button that isn't relevant to the domain I'm trying to learn. But you know, uh, one of the things computers are really good at, like as good as casinos, is motivation. So some computer programs, they gamify it. I'm not sure that's a great use of it. Because you, you know, you try and you learn to just beat the game for the reward. 

[00:07:49] Russ Altman: Right.

[00:07:49] Dan Schwartz: As opposed to learn the content. But things like having, teaching an intelligent agent how to think. There's something called the protégé effect, which is you'll try harder to learn the content to teach your agent than you will to prepare for a test. Right? So we can make the computer pretty social.

[00:08:08] Russ Altman: Okay. So you are clearly a technology optimist in education. And in addition to the amazing fundraising and like, there's so many questions to be answered. What I think a lot of people are worried about is, are we at risk of losing a generation? We've already lost a few generations of students, some people argue, because of the pandemic and the terrible impact it had, especially on, uh, on people who weren't privileged in society and in their education.

[00:08:34] Are we about to enter yet another shock to the system where, because of the ease of having essays written and having, and grading papers, that we really don't serve a generation of students well? Or do you think that's an overhyped, unlikely-to-happen thing?

[00:08:51] Dan Schwartz: No, it's a good question. You know, part of this is people's view about cheating, you know? And so it's too easy for students to do certain things. But there's another response that I want to hang on to. I want to ask you, Russ, are you using it? You teach.

[00:09:07] Russ Altman: Yeah. 

[00:09:07] Dan Schwartz: Are you like putting in all sorts of rules to prevent students from cheating, or are you saying, use it, do whatever you can. I'm going to outsmart your technique anyway.

[00:09:17] Russ Altman: It's a little bit more on the latter. So we, uh, I teach an ethics class, which is a writing class. And we allow ChatGPT because the, my fellow instructor and I decided, and this was the quote, we want to be part of the future, not part of the past. So we said to the students, 

[00:09:33] Dan Schwartz: Sorry, The Future of Everything, Russ.

[00:09:34] Russ Altman: Thank you. Thank you. Thank you. And thanks for the plug. So, uh, we allow it. We asked them to tell us what prompt they used and to show us the initial output that they got from that prompt. And then we, of course, have them hand in the final thing. And we instruct the TAs and ourselves, when we grade that we're grading the final product with or without a declaration of whether ChatGPT is used.

[00:09:56] We do have engineers as TAs, which means that they did a careful analysis. Students who used ChatGPT, and I don't think this is a surprise, got slightly lower grades, but spent substantially less time on the assignment. So if you're a busy student, you might say, I will make that trade-off, because the grades weren't a ton worse. It was like two points out of a hundred, like from a ninety to an eighty-eight, and they completed it in like half the time.

[00:10:25] Dan Schwartz: Uh, do you think they learned less? 

[00:10:28] Russ Altman: So we don't know. We don't know. And, uh, the evaluation of learning is something that I'm looking to you for, Dan. Uh, how do I tell? So, um, so we do try to use it. But we are stressed out. We have seen cases where people say they used ChatGPT but tried to mislead us in how they used it. They said, I only used it for copy editing, but it was clear that they did more than copy editing with it. And so at the edges, there are some challenges. But in the end, we said motivated students who want to learn will use it as a tool and they'll learn. And the students who we have failed to motivate, and it is our failure, you could argue, they're just going to do whatever they do, and we're not going to be able to really impact that trajectory very much.

[00:11:12] Dan Schwartz: Yeah, you know, you sort of see the same thing with video, video-based lectures. So I'm online. I've got this lecture. Do I really want to sit and listen to the whole thing? Not really. I'm going to skim forward to find the information. I skim back. I'm probably going to end up doing the minimum amount if it's not a great lecture. 

[00:11:29] Russ Altman: Yeah.

[00:11:29] Dan Schwartz: So I'm not sure this is a ChatGPT phenomenon. It's just, it's sort of an enabler. I think the challenge is thinking of the right assignment. So like, you can grade things on novelty and appropriateness. So, are they novel? You know, if they use ChatGPT like everybody else, they won't be novel. They'll all produce the same thing.

[00:11:48] Russ Altman: It's incredibly, yes. So, um, the most common type of, uh, moral theory is called common morality. And it turns out that ChatGPT does pretty well at that one because there's so many examples that it has seen. And it's terrible at Kant. Deontology, it really can't do. Okay, so let me.

[00:12:07] Dan Schwartz: So let me get back to your question. 

[00:12:09] Russ Altman: Yeah. 

[00:12:09] Dan Schwartz: So here's what I see going on right now. There, there are like, uh, big industry conferences. Because they're going to, they're producing the technology that schools can adopt. Right? And there's a lot of money there. And twenty years ago there were zero unicorns in ed tech, and I think last year there were about fifty-four companies with billion-dollar valuations. So this is a big change. So what are they doing? They're basically creating things to do stuff to students, right?

[00:12:42] So maybe they're marketing to the teachers, but it's, you know, it's, I'll make a tutor that, uh, is more efficient at delivering information to the students. Or, I will make a program that can correct their math very quickly. And so what's happening is the industry is sort of using the AI in the way that nobody else uses it.

[00:13:04] Because everybody who's got this tool wants to create stuff, right? Like, uh, my brother. It's my birthday, what does he do? He has ChatGPT write me a poem about Dan Schwartz at Stanford. What he doesn't know is that there's a lot of Dan Schwartzes, and so evidently I wear colorful ties. But this is what everybody wants to do. They want to create with it. Meanwhile, the field is trying to push towards efficiency. Can we get the kids done faster? Can we get them through the curriculum faster? Can we correct them faster? In which case the kids are going to optimize for being really efficient, right? As opposed to just trying to be creative, innovative, use it for deeper kinds of things. So this is my big fear.

[00:13:42] Russ Altman: And so you're watching these companies and I'm guessing that they don't always ask your opinion about what's, what would you tell, so let's say a, one of these unicorn billion dollar or more companies comes to you and says, we want to do this right. We want to use the best educational research to create AI that can bring education to people who might otherwise not have quality education. What would you tell them? 

[00:14:04] Dan Schwartz: So this is a challenge, right? This is something we're actively trying to solve. So we've created a Stanford accelerator for learning to kind of figure out how to do this. 'Cause I've been in this ed tech position for quite a while. And the companies come in and they say, we really want your opinion. And then they present what they're doing. And I go, uh, have you ever thought of, and they go, wait, let me finish. And this goes on for fifty-five minutes. Where they're telling me what they want to do. And I'm trying to say, you know, if you just did this. And the way it ends is I say to them, look, you, if you do these three things, I'll consider being an advisor.

[00:14:42] Russ Altman: Right.

[00:14:42] Dan Schwartz: They never come back. 

[00:14:45] Russ Altman: So the message you're sending them is just not in their worldview. 

[00:14:50] Dan Schwartz: It's because they have a vision. Everybody wants to start their own school. 

[00:14:53] Russ Altman: Yeah. 

[00:14:53] Dan Schwartz: They have their vision of what it should be and they're urgent to get it done. And you know, it's a startup mentality. So trying to figure out how can we educate them? You know, I think we know a lot about how people learn that, uh, that we didn't know twenty years ago when they went to school. And the AI, you know, one of the things it can do is implement some of these theories of learning in ways that don't exist in textbooks and things like that.

[00:15:17] So that's the big hope. And the question is, how can you take advantage of industry? You know, education is a public good, but they still buy all their products. And so going through those companies is one way to sort of bring a positive revolution. But again, I'm a little worried that the companies are sort of optimizing for local minima.

[00:15:41] Russ Altman: Yeah. 

[00:15:41] Dan Schwartz: You know, to accommodate the current schools and things like that. 

[00:15:44] Russ Altman: Should we take, so what, should we take solace in the teachers? So many of us are fans of teachers, grammar school teachers, middle school teachers, high school teachers, and many of these folks are incredibly dedicated. Will they be a final, um, uh, a final filter that looks at these, uh, educational technologies and says, absolutely not. Or, yeah, we'll use that, but we're going to use that in a way that makes sense for my way of teaching. Or are they not in a position to make those kinds of, what you could call courageous decisions, about kind of modifying the use of these tools to make them as good as possible, uh, on the ground?

[00:16:21] Dan Schwartz: So it's pretty interesting. The surveys I've seen, uh, sort of over the last year, the different groups do different surveys. It, it sort of, if I take the average, about sixty percent of K-12 teachers are using GenAI, right? And about thirty percent of the kids. If I go to the college level, about thirty percent of the faculty are using GenAI in teaching and about eighty percent of the kids are using it. So I do think in the pre-K to 12 space, the teachers are making decisions. They do a lot of curriculum. There are, so a great application is, um, project-based learning. So project-based learning is a lot of fun. Kids learn a lot. They sort of develop a passion, a certain depth. As opposed to just mastering sort of the requirements. But it's really hard to manage. You know, when I was a high school teacher, I had a hundred and thirty kids, right?

[00:17:11] If all of them have a separate project, I have to help plan them and make them, you know, learning-goal appropriate. So the GenAI can help me do that. It can help me, uh, have the kids use it to help them design a successful project. Uh, it can help me with a dashboard that helps manage them, hitting their milestones, things like that.

[00:17:31] And there, you know, the teacher is like, I can do something I just couldn't do before.

[00:17:35] Russ Altman: Yeah. Yeah. 

[00:17:36] Dan Schwartz: It's different than the model where you put the kids in the back of the room who finished early and say, go use the computer. I think, you know, most schools, kids are carrying computers in classes. So it's a little different. It's more integrated than it used to be. 

[00:17:52] Russ Altman: This is the Future of Everything with Russ Altman. More with Dan Schwartz, next.

[00:18:06] Welcome back to The Future of Everything. I'm Russ Altman and I'm speaking with Dan Schwartz, professor of education at Stanford University. 

[00:18:12] In the last segment, Dan told us about AI, education, some of the promises and some of the pitfalls that he's looking at on the ground, thinking about how to educate the next generation.

[00:18:23] In this segment, I'm going to ask him about assessment, grading. How do we do that with AI and how do we make sure it goes well? I'm also going to ask him about physical activity; it turns out physicality is an important part of learning.

[00:18:39] I want to get a little bit more detailed, Dan, in this next segment, and I want to start off with assessment, grading. I know you've thought about this a lot. People are worried that, um, AI is going to start doing all the grading. Everybody knows that a high school teacher with a couple of big classes can spend their entire weekend grading essays. It is so tempting just to feed that into ChatGPT and say, hey, how good is this essay? How should we think about, maybe worry about, but maybe just think about, assessment in education in the future?

[00:19:11] Dan Schwartz: Yeah, this was, uh, you remember the MOOCs? 

[00:19:14] Russ Altman: Yes. 

[00:19:14] Dan Schwartz: Massive open online courses. And, uh, you're hoping you have ten thousand students, and then you gotta grade the papers for ten thousand students. So what do you do? You give a multiple-choice test, which can be machine coded, right? So, so I think that's always there. I'm going to take it a slightly different direction, which is, uh, I'm interacting with a computer system and while I'm interacting with it, it can be constantly assessing in real time, right?

[00:19:41] And so there's a field that's sometimes called educational data mining or learning analytics. And there's thousands of people who are working on, how do I get informative signal out of students' interactions. Like, are they trying to game the system? Are they reflecting? And so forth. So this is something the computer can do pretty well, right?

[00:20:02] It can sort of track what students are doing, assess, and then ideally deliver the right piece of instruction at the moment. So you could use the assessments to give people a grade, but really the more important thing is, can you use the assessments to make instructional decisions? So I think this is a big area of advancement, but here's my concern.

[00:20:25] We've gotten very good at assessing things that are objectively right and wrong. Like did you remember the right word? Did you get two plus two correct? For most of the things we care about now, they're strategic and heuristic, which means there's not a guaranteed right answer. And so what you really want to do is assess students' choices for what to do. So for example, uh, creativity, for the most part, it's a large set of strategies. Right? There's a bunch of strategies that help you be creative. The question is, do the students choose to do that or do they take the safe route? 'Cause creativity is a risk, right? 'Cause you're not sure.

[00:21:02] So I think this is where the field needs to go. Is being willing to say that certain kinds of choices about learning are better than others. Uh, and it's a, it becomes more of an ethical question now. Instead of saying two plus two equals four, there's no ethics to it. 

[00:21:16] Russ Altman: Are you going to be able to convince non-educators who hold purse strings, let's call them the government, that these kinds of assessments are important and need to be included? Because my sense is that when it filters up to boards of education or elected leaders, a lot of that stuff goes out of the window. And they just want to know how good are they at reading comprehension and can they do enough math to be competitive with, you know, country X?

[00:21:43] Dan Schwartz: Yeah. Yeah. So different assessments serve different purposes. Like the big year end tests that kids take, those aren't to inform the instruction of that child. They're not even for that teacher. They're for school districts to decide are our policies working. And so it's really a different kind of assessment than me as a teacher trying to decide what should I give the kid next. So I think it's going to vary. You know, the tough question for me is should you let the kid use ChatGPT during the test? Right?

[00:22:14] And we had this argument over calculators, right? And finally they came up with ways to ask questions where it was okay if the kids had calculators. Because the calculator was doing the routine stuff. And that's not really what you cared about. What you cared about was, could the kid be innovative? Could they take a, another, a second approach to solve a problem? 

[00:22:34] Russ Altman: Yeah. 

[00:22:34] Dan Schwartz: Things like that. 

[00:22:34] Russ Altman: We, so I teach another class, a programming class, where the students write programs, and we have switched, um, we've actually downgraded the value. So as you know very well, just as background: ChatGPT can also write computer code, essentially. And so a lot of coding now is kind of done for you and you don't need to do it. We are trying to make sure that they understand the algorithms that we ask them to code. And so what we're doing is we're downgrading the amount of points you get for working code.

[00:23:04] You still get some, but we're upgrading the quiz about how the algorithm works. Do you understand exactly why this happened the way it did? Why is this data structure a good choice or a bad choice? And so it's forcing us, and you could have argued that we should have done this twenty years ago in the same class, but this is making it a more urgent issue, because if we don't, people can just get an automatic piece of code. They can run it. It'll work. They have no understanding of what happened. And so it's really a positive. It's putting more of a burden on us to figure out why the heck did we have them write this code in the first place? 

[00:23:39] Dan Schwartz: No, this was my point. It makes you sort of rethink what is valuable to learn. And you stop doing what was easy to grade. So I have an interesting one. This is a little nerdy. 

[00:23:51] Russ Altman: Okay. I love it. I love it. 

[00:23:52] Dan Schwartz: I teach the intro PhD statistics course in education. And lots of students say, I took statistics, right? And I'm sort of like, well, that's great. Let me ask you one question. And I say, I'm going to email you a question and you'll have five minutes to respond. You let me know when you're ready for it. And I ask them, uh, this is just for you, Russ. But why is the tail of the t-distribution fat in small sample sizes? And what I get back usually is, because they're small sample sizes.

[00:24:24] Russ Altman: Right. Or because it's the T distribution. 

[00:24:27] Dan Schwartz: Or it's, yes, even better. And then I come back and I sort of say, well, have you ever heard of the standard error? And I begin to get at the conceptual stuff, right? And, uh, I suspect if I gave it, uh, so there are ways to get conceptual questions that are really important. But you know, being able to prompt or write R code, you know, that's a good thing. You want them to learn the skills as well. 

[00:24:50] Russ Altman: Exactly. 
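[Editor's note: Dan's quiz question has a concrete numerical answer. The sketch below, an editorial addition rather than anything from the episode, uses SciPy to show the fat tails he describes: at small degrees of freedom the t-distribution puts far more probability past 3 than the normal does, because the standard error must itself be estimated from the small sample, and the gap closes as the sample grows.]

```python
# Tail probability P(T > 3) for the t-distribution at several degrees of
# freedom, compared with the standard normal. Small df => fat tails,
# because the estimated standard error adds uncertainty.
from scipy.stats import t, norm

x = 3.0  # a point out in the tail
for df in (2, 5, 30, 1000):
    tail = t.sf(x, df)  # survival function: P(T > x) with df degrees of freedom
    print(f"df={df:4d}  P(T > 3) = {tail:.4f}")
print(f"normal    P(Z > 3) = {norm.sf(x):.4f}")
```

As the degrees of freedom grow, the tail probability converges to the normal's, which is exactly the "standard error" point Dan steers students toward.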

[00:24:51] Dan Schwartz: So I don't know, you know, when the calculator showed up, there's a big debate, right? What should students learn? Can they use the calculator? The apocryphal solution was you had to learn the regular math and the calculator now. You just had to learn twice as much. And so maybe that's what it's going to be. 

[00:25:08] Russ Altman: And that's a very likely transitional strategy and then we'll see where we end up. Okay. In the final few minutes, I, this seems like it's unrelated to AI, but I bet it's not. You've done a lot of work on physical activity and learning. You've even been on a paper recently where you talk about having a walk during a teaching session and whether you get better outcomes than if you were just standing or sitting. So tell me about that interest and tell me if it has anything to do with today's topic. 

[00:25:37] Dan Schwartz: I can make the bridge. I can do it, Russ. Right. So we did some studies. Um, I've done a lot of it. It's called embodiment where, yeah, there was, I got clued into this where, uh, I was asking people about why, about gears. And I say, you know, you have three gears in a line, and you turn the gear on the left clockwise. What does the gear on the right do? Far right. And I'd watch them, and they'd go like this with their hands. They'd model with their hands. And then I was sort of like, well, what's the basis of this? And I'd say well why? And they say because this one's turning that way that one, I go but why. And in the end, they just bottom out. They just show me their hands. They didn't say things like one molecule displaces another. 

[00:26:20] Russ Altman: Right. 

[00:26:21] Dan Schwartz: So that sort of clued me in. 

[00:26:22] Russ Altman: This pinky is going up and this other pinky is going down. 

[00:26:26] Dan Schwartz: Yes. 

[00:26:26] Russ Altman: What don't you understand about that? 

[00:26:28] Dan Schwartz: Pretty much. Well, it was nonverbal. 

[00:26:31] Russ Altman: Yeah. 

[00:26:31] Dan Schwartz: So we went on, you know, we discovered that the basis for negative numbers, right, is actually perceptual symmetry. And we did some neuro stuff. And so the question is sort of how does this perceptual apparatus, which some people, we're just loaded with perception, right? The brain's just one giant perceiver. So how do you get that going? So part of the embodiment is my ability to take action, right? And so this is where we started, right? Right now, the AI feels very verbal, very abstract. Even the video generation, it's amazing, but it's pretty passive for me. So enter virtual worlds; they're still working on the form factor where I can move my hand in space.

[00:27:16] Russ Altman: Yeah. 

[00:27:17] Dan Schwartz: And something will happen in the environment in response to that. You know, I think medicine has, you know, really been working on haptics so surgeons can practice. Uh, there was a great guy who made a virtual world for different congenital heart defects, and you could go in and practice surgery and see what would happen to the blood flow. So I think, uh, that embodiment where you get to bring all your senses to bear, it's not just words, but it's everything, can really do a lot for learning, for engagement, uh, not just physical skills.

[00:27:49] Russ Altman: So that's a challenge to, I'm hearing a challenge to AI, which is, as an educator, you know that this physicality can be a critical part of learning. And by the way, would this be a surprise? I mean, we've been on earth evolving for several hundred million years. And, uh, you would be surprised if our ability to manipulate and look at three-dimensional situations wasn't critical to learning, and yet that's not what AI is doing right now. So this is a clear challenge to AI among other things.

[00:28:17] Dan Schwartz: Right. So, uh, I have a colleague, Renate Fruchter. And, uh, she teaches architecture, and she has students make a blueprint for the building, right? And then she feeds the blueprint to a CAD system that creates the building. She then takes the building and puts it into a physics engine, it can basically render the building and make walls so you can't move through them, and it has gravity and things like that.

[00:28:42] She then puts the, uh, original student who designed the building in a wheelchair and has them try to navigate through that environment. At which point they sort of understand, oh this is why you need so much space so they can turn around, so they can navigate near the door. I am sure that is an incredibly compelling experience that allows them to be generative about all their future designs.

[00:29:03] So yeah, this is a challenge and part of the co-mingling of the AI and the virtual worlds, I think this is a big challenge. It's computationally very heavy, but it will open the door for lots of ways of teaching that you just couldn't do before. 

[00:29:17] Russ Altman: Thanks to Dan Schwartz. That was the future of educational technology.

[00:29:21] You've been listening to The Future of Everything and I'm Russ Altman. You know what? We have an archive with more than 250 back episodes of The Future of Everything. So you have instant access to a wide array of discussions that can keep you entertained and informed. Also, remember to rate, review, and follow. I care deeply about that request. 

[00:29:41] And also, if you want to follow me, you can follow me on X @RBAltman, and you can follow Stanford Engineering @StanfordENG.