Adventures in Advising
Join Matt Markin, Ryan Scheckel, and their amazing advising guests as they unite voices from around the globe to share real stories, fresh strategies, and game-changing insights from the world of academic advising.
Whether you're new to the field or a seasoned pro, this is your space to learn, connect, and be inspired.
Navigating Curious Minds: Unlocking Opportunities using AI - Adventures in Advising
Dr. Roy Magnuson, Director of Emerging Technologies for Instruction and Research at Illinois State University, discusses the powerful opportunities of artificial intelligence and the sound approaches we can take as individuals and as a broader institution. How can we leverage AI to enhance the student/advisor relationship, and are there avenues we are not considering with AI? Guest host is Michael "Brody" Broshears, Illinois State University.
Follow the podcast on your favorite podcast platform!
The Instagram and Facebook handle for the podcast is @AdvisingPodcast.
Also, subscribe to our Adventures in Advising YouTube Channel!
Connect with Matt and Ryan on LinkedIn.
Matt Markin
Hey, and welcome to another episode of the Adventures in Advising podcast. This is episode 112 and serves as our last episode of 2024. So a couple months back, I was scrolling through LinkedIn and saw Brody Broshears posted an article from Inside Higher Ed about how AI can address advising challenges. And Brody posed the question, "How does advising leverage generative AI and evolve to enhance the student advising relationships and interactions?" So I thought maybe this is a continued topic to discuss. So to guest host this episode, let's welcome back Brody Broshears from Illinois State University. Brody, good to see you again.
Michael "Brody" Broshears
Yeah, great to see you, Matt, thanks for having me.
Matt Markin
I'm excited for this interview. And so I'm going to leave it to you to get us started and introduce your guest.
Michael "Brody" Broshears
Will do. You know, when Matt asked me to do this, I said, I'm not really an expert in AI, I'm just really interested in this topic. But I know an expert that I've met here recently on my campus at Illinois State University. And so to help us unpack AI and academic advising, we're joined by Dr. Roy Magnuson, Illinois State University's Director of Emerging Technologies for Instruction and Research. What I think is really cool is you're not just that. You're also a seasoned faculty member in the School of Music, but have really spent the last several years kind of as a visionary leader in integrating some of these cutting-edge tools into teaching, research, and creative practice. I know you've done some work here at ISU specifically related to AI, so we're going to get into that as well. But you've spearheaded initiatives that help educators and researchers harness AI, mixed reality, and other advanced technologies to transform education. Welcome, Roy. We're glad to have you.
Roy Magnuson
I'm excited to be here. Thanks, Brody.
Michael "Brody" Broshears
So, we chatted a couple of months ago, and I think it's always great to kind of start with the origin story. So what was your interest in AI? Kind of your entry into AI, and maybe speak to your discipline-based kind of entry into AI. I think that's really helpful for our audience.
Roy Magnuson
Sure. So it's weird. I mean, I'm a composer. I have three degrees in writing acoustic music, so why am I having this conversation? I always feel like I have to contextualize myself, but it is kind of getting at this sort of larger issue I think that we're facing in higher education. So, yeah, I mean, I have degrees in writing music. My whole background is in writing acoustic music, primarily. I did my graduate study at Illinois. And if you know anything about that music school, it has the oldest electronic music studio in the Western Hemisphere, so it's a very big electronic music space. I kind of got interested in tech and art at that point, and then that sort of percolated as I was, you know, beginning my time teaching at Illinois State as a tenure-track faculty member. And I got really interested in VR over that portion of time. And, you know, started essentially just Googling, like, what would it be like to make VR stuff? And I was starting from nothing. I was, like, downloading Unity and making software, you know, and time is crazy. You know, you spend six or seven thousand hours doing that, and then all of a sudden you're like, oh, I'm making apps for, like, the Air Force and for hospitals. Because it's just kind of an interesting space right now. I mean, VR has been around for a long time, and in a lot of ways parallels AI: it's been around for a long time, but it's such an emergent space, and there's so many different things happening, and there's this great opportunity, I think, for anyone who's interested, whatever their background. They're curious, they want to try to do something, make something, there's a lot of opportunity right now. And as I said, like, me talking to you as a person that, again, has no formal training in anything that I've mentioned so far, certainly not AI, I am, like, the existential threat for higher ed, right? It's that the information is there.
If you get curious and you want to put in the time, and you want to, you know, take some formal kind of training online, maybe some LinkedIn Learning or classes, or whatever it is, right? You can get a lot of the objective data, just like: this is how you do it, this is how you use the software, this is what this is, right? And we as an institution are grappling with this. Like, that is what the world is right now. And it's a beautiful thing. It's great. There's this huge democratization of knowledge, and, like, access is becoming so frictionless, and AI is accelerating that, right? Because it's this, you know, means in which anyone, through their own language, can begin to infer understanding about anything. And we, as a concept, are at this moment where we have to adapt. We've got to figure out, okay, what do we do? Like, how are we nimble? How are we having added value for students? And so that's a long way of, like, introducing who I am, but it's also, you know, like, what my gig is right now at ISU is around that: having these conversations in both the research space and the instruction space.
Michael "Brody" Broshears
Yeah, let's, let's get to, let's help the audience kind of understand the formalization of this interest into the work that you're now doing at Illinois State. I think I was introduced to you kind of as you were serving as a provost fellow. Can you speak to that work and then how that's transitioned into what you're doing right at this moment?
Roy Magnuson
Yep. So for the 2023-24 academic year, I had the opportunity to serve as what ISU calls a provost fellow. These are, you know, a number of people that get some course release time at the institution to work with our provost and her staff to solve a problem, and you essentially just put forth an application for something that you see as an issue or you're curious about, and you think the university needs more time on, right? And I happened to be on sabbatical in the spring of '23, so, like, you know, two months after ChatGPT released, and, you know, it's like a nuclear bomb was dropped on higher ed. Oh my gosh, what's happening? I'd always kind of followed AI, but again, I'm not, like, a machine learning person or computer science person, but I had known about its, like, general arc through the lens of VR and a bit through music, because there's generative music applications and things like that. But I got really curious about it, obviously, because I was doing all these sabbatical projects, and, like, researching it and playing with tools. And at the same time that I was on sabbatical, the call went out to be a fellow. So I applied to this position with our provost, and said, you know, I don't know if this is what you're even looking for, but this stuff really scares me, and I don't think we can get it wrong. So do you want me to do this? And we had a really great conversation where she's like, yeah, it scares me, too, and I don't know what we're doing. And it's been great, you know, to be in the conversations amongst administrators, and then, you know, through that process, faculty, advisors, you know, staff, about: okay, what is this? Who are we? Where are we both threatened and not threatened? Like, we think this is trying to come for us, but really, it's not attacking any outcomes of classes. It's not attacking anything. It's just changing the process. It's in the process of how we do our jobs, right?
And that was fascinating, and that, like, led into the position I'm in right now, working as the director of emerging tech, where I'm still doing a lot of the similar things, working with similar people from that role, but also kind of, you know, more broadly, looking at application of research and, you know, external partnerships.
Michael "Brody" Broshears
So maybe expand a little bit on exactly what you've been charged to do. I know that you're kind of overseeing this. I think this is useful for folx who might be listening, right, like the idea that there's a committee now kind of focused on governance and tools. Can you speak to that a little bit more?
Roy Magnuson
We were on a call with a vendor the other day, and we were, you know, kind of, it wasn't really complaining. We were just talking about, like, oh my gosh, I feel like we're just, like, a car with three wheels, and we're, like, spinning in circles. And this vendor said, like, look, man, we talk to tons of higher ed schools. Y'all are all doing the same thing. Like, everyone is just trying to figure out what is happening. And that committee came out of my conversations with the provost's staff, specifically Cooper Cutting, who is an AVP here, sort of my point of contact when I was in the Provost's Office. And that committee, for the responsible use of AI, was sort of us just codifying, like, look, we need to get people together in a room. We need to have this formalized. We need, you know, all the kind of institutional protections around that, that this is, like, time that's recognized by the university, you know, that you should take, you know, the amount of time you're reviewing documents and looking at tools and all this. So that's kind of what we're doing. There is this group that is cross-disciplinary. It's across all areas of the university. It reports to the provost and the president. It's co-chaired by our CIO Charles Edamala and Cooper Cutting, the AVP that I mentioned, who have, like, totally different perspectives on this, so that we can try to take both a measured and very, you know, institutionally sound approach, right? And do what universities do really well. It's very deliberate, and, you know, institutions are awesome about that. We have tons of great thinkers, and we can debate and debate and come to: this is, like, the distilled thing that is us.
But we also had to have levers to pull where it's like, this tool came out yesterday, and it's, like, threatening half the classes on campus right now because someone put a TikTok out about it, and now our kids are using it, and it's like, okay, we also have to be able to address that, right? And this is where it gets really nuanced: it can't just be "block it," right? We can't necessarily do that, because that's effective, but maybe too easy, right? There's a nuanced approach that we're trying to wrap our heads around, and as a larger institutional identity crisis, that's partly what that committee is doing.
Michael "Brody" Broshears
Yeah, I know you kind of did this road tour where you went across campus and spoke, had this PowerPoint presentation. It was fascinating, right? Really enjoyed it, and I know that you've given that talk now many times to campus leaders and other guests. I know that you spoke to the Midwest First Year Conference. You're here today. Maybe unpack, in a briefer response, kind of the tenor of that hour-long presentation. Like, what's the real takeaway from that session that you do?
Roy Magnuson
So, it was a presentation, again, from my perspective, right? I would couch the discussion always, the same way I introduced myself here: like, why am I talking to you? Like, I am not a computer scientist. We have people that have multiple degrees in machine learning, like, that study neural nets and stuff on campus. Like, why are they not talking to you, right? And there's lots of reasons. Like, they're doing research, right? They're obsessed with this stuff, right? So the perspective of me coming at this as a non-native, I think, was very important. Like, I'm learning this stuff in the same way that many people are. I'm trying to synthesize it and look at it for those folks who are not, like, you know, fire-hosing this information all the time. So it was like that. And then the larger arc was, like, well, what is AI? Where is it going, and what can we do? And the big takeaway at the beginning was, like, you know, this is not new, man. There's nothing new about what's happening with large language models and neural nets. You know, it's like the '80s, like, it's been going on for a long time. There's been these ups and downs, and there's sort of this confluence of stuff that has come together between compute power and some, like, realizations with algorithms, you know, and the transformer architecture and all the stuff. Like, stuff is overlapping, and we're like, oh, deep learning is really powerful, which we've done for a while, but it's like everything is linking and locking in at this moment, and that's having this explosion, right? Through natural language, we were able to query stuff. And so the arc of the talk is, like, this is what's happening, and then it gets really dark, because, like, you kind of frame everything as, like, this dystopic kind of moment. And it's very easy to do that.
And I did it intentionally, because people know, we see sci-fi, and you can project the end of this arc. It's really easy. It's Terminator, whatever. But they don't make sci-fi about all the good stuff, right? There's so many amazing opportunities from just even generative AI, like large language models, that we aren't considering. You know, access to tutors and to, like, advisors, and changing what that looks like, is a great, you know, great thing we need to be exploring. But just like diagnostics, you know, things are trained on multimodal things like audio and images. There's tons of studies coming back that it's, like, speeding up diagnostics and lowering the cost, you know, and you're able to get people out. They can take a picture of a skin tag in, you know, the middle of rural North Dakota, and know if it's cancer in, like, five minutes. It's, like, transformative. I mean, this is, like, really powerful stuff, right? And again, there's no sci-fi book about, like, this AI that, you know, cured cancer and global warming, as a potential top end, in the same way as the potential top end of, like, well, it annihilates humanity.
Michael "Brody" Broshears
Which is scary, that is scary.
Roy Magnuson
It's certainly super scary. I mean, like, there's lots and lots of discussion about that, and when people even, like, say, well, I mean, someone's got, like, a 30% chance of destroying humanity. It's like, what are you talking about, man? Like, 30%? That's real. That's a number. So again, I don't know if that's true. There's lots of discussion around this, right? So that's the arc of the talk: that, you know, look, there's nothing magical about AI that we haven't known for a long time. It's just this sort of confluence of stuff and natural language and being able to query it quickly, and ChatGPT was this, like, flash point of, like, it's a website, right? Just go. And people realizing how powerful that is for, you know, researchers and teachers and, you know, all sorts of folks. So it's like an access thing. It's a user experience thing. And the arc from there is, you know, unknown to an extent. We're continuing to see the tools grow. The growth is still happening at what seems to be an exponential pace. We'll know soonish if that's, like, going to continue, and if it does continue, then, yeah, things are going to look very different.
Michael "Brody" Broshears
But I know in our discussions, I think one thing that maybe we can agree on is we're not going backwards, right? Like, this is the deal. It's gonna happen.
Roy Magnuson
Yep, yeah. People rightfully point at all of these companies, and they're like, is this not a Ponzi scheme? Like, I mean, like, what, you know, OpenAI is, like, worth 150, right? 157 billion, I think, as of this week, I think that's the number, and they take in, like, 500 million a month in revenue, or something. Like, it's something really low, and they're hemorrhaging money, and it's like, what is this? This is not sustainable. And that's absolutely correct, right? You know, people are betting and, like, trying to project out, like, you know, all the stuff. The issue is that many of those naysayers will point to OpenAI, or they'll point to whomever, right, and they'll say, this tool doesn't work. It's like, yeah, but the concept works. The concept is not going away, right? At the very worst possible scenario, and I would also say, for me, preferable, at least in the near term, is that everything plateaus. Say things implode, like, OpenAI goes down, we find out it was a money laundering scheme, whatever. It's all fine. Generative AI is a concept that is out the door, man. It is just going to get faster, cheaper, easier, right? And there's too many people that are doing it. It's too clear that that is the arc, again, completely separate of what Google or Elon Musk or OpenAI or whomever is doing, right? It's not going away.
Michael "Brody" Broshears
I can really appreciate your non-native approach to the topic of AI, and some of the work, which I wouldn't describe as secondary to the role that you play as a faculty member in the School of Music. But I spent time about 10 years ago kind of doing the same thing with the idea of happiness and meaning-making, and have really tried to kind of think about employee engagement. What do you say to folks, maybe, who are afraid, right? Like, I feel like just digging in and learning, like this deep learning that you mentioned, is really valuable, and it doesn't take any special skill. It just takes a commitment, right? Can you speak more to that?
Roy Magnuson
Literacy is the number one thing. Like, I said it over and over: guys, just be curious. Go and play with these tools. If you're afraid of them, go play with them. And if you play with them, you read about them, and you're exploring. And I always say play. I mean, like, I also offer, like, I'll help you. Like, I'm a coach. Like, I'll scaffold things for you and teach you. You know, read in your discipline. Or just, you know, be curious, right? You need to learn to know what it can do and what it can't. And in an institution, especially a research, you know, a group like a university, there's huge value in saying, like, look, this is terrible for what my role is, and then you should be sharing that out, right? Please. Like, share why you believe, you know, this is maybe not applicable, or even destructive to take into your workflow, or your space, classroom, research, or whatever. On the flip side, the literacy is so important, because, especially, you know, with these generative models, like, again, a ChatGPT, which I'll use as, like, the Xerox of AI, it's, like, you know, the concept, right? They say with ChatGPT, the only people that know how to use it at the highest level are the people who are subject area experts. And everyone is a subject area expert. So it's going to hit everyone in their position, your job, your teaching, your research, whatever it is. Like, literally everyone. And I believe our responsibility, just as employers and humanity in general, is to teach and to empower and to show. Like, look, this can do X, Y, and Z for you, or ask, I guess, like, what can it do? When people report out, it does X, Y, and Z. And then we're in the space where people are asking for ROI. Like, well, how does the investment in X tool at this many dollars bring back X dollars? And the problem is, it doesn't, because we don't know. Because we can say, oh, this tool saves me two hours a week.
And it's like, wow, that's 5%, that's amazing. You know, like, 5% over 1,000 people, you know, at $40 an hour, I mean, you're talking hundreds and hundreds of thousands of dollars. However, it doesn't, right? Just saving two hours is not adding anything if your job stays the same, and that's the moment, right? It's like, okay, we need to figure out what these tools do, and then figure out how you adapt. And now, what are you doing, right? And there's tons of pass-through there. And I think this is where the real beauty comes, right? It's not just that, you know, it can be as simple as you have more time. And I think there's real value in that, especially, like, for employee satisfaction. Like, you can go to the gym, or you can go exercise more, you can go for a walk, right? That's all great. I'm not saying, like, we should always be pushing for max effort. But if we're really looking at, like, there's a strong demand for ROI, it's going to have to be the self-analysis to say, oh, wow, you know, all this stuff over here I would never, ever have had time to do, I'm going to do it now, right? And now, all of a sudden, it's like, oh, we added, like, half an FTE or something, you know? It's like, geez, that's a tremendous amount of, like, improvement to what we're able to do in our positions, to say nothing of, like, the complete reinvention of, like, what an advisor is. And I think that's, like, a really important and healthy discussion to have about what those roles should and could look like if you're able to trust, like, some of this automation.
Michael "Brody" Broshears
Yeah, let's talk more generally, and then we can get into some of the specifics, but I know you're really speaking to a personal responsibility to learn more, right? And I think getting people to kind of embrace that is always challenging. But let's look at the institutional level of commitment. How do you think schools and universities can ensure that faculty, academic advisors, and other support staff are equipped to work effectively with AI tools?
Roy Magnuson
Yeah, I think, I mean, I can certainly speak, probably most, I'll speak from the faculty side, because, I mean, I'm learning in a staff position at this point. You know, it's very different, like, learning what it means to be, you know, a civil servant or whatever in an institution, is different than a faculty brain. So I'll frame this from the faculty, though I'm happy to have discussions from the other sides, but it's, again, kind of new to me. For faculty, I mean, there needs to be the encouragement that this is something that is worthwhile to you, right, that this time spent is valued institutionally, right? Because that's the number one concern, I think, with faculty. It's like, well, I have so much to do, like, I mean, all these things I've got to do, and you're asking me to take on more, you know, service load, or whatever. I've got advisors saying the same thing, yeah. And I'm just learning how to frame it correctly from, like, the staff side, because the silver bullet for faculty, and again, speaking from my perspective at ISU, is that you can always add, like, additional pay, like, maybe weekend time or something like that, or whatever, right? You can get grant funding to do the stuff. That is not necessarily the case with others. So it's easy to say, well, there's more money, you can go do it, you want to get more money, right? So that's fine. Like, money is encouraging, right? But I think the actual discussion is that it's valued, right? The institution values that this time you're spending on it is valued, and you can report that time spent, and people are like, yeah, you're just exploring AI, and you're figuring out how to get it into your teaching, research, and service, and your life in general, and that, in whatever mechanism that is, you know, it's tracked or evaluated.
Say, great job, towards, you know, again, faculty promotion, tenure, and all those things. And then there has to be, like, you know, backstopping there. So it's access to tools, which amounts to, you know, people putting up money for funding. It's also staff who are working on the back end to support these things, because it's not just magically going to happen. There's, like, people that have to look at logs and talk with vendors and all the stuff. So there is a bit of, like, buy-in, I think, for this idea of creating a, I don't know, like, an institutional support mechanism, or just infrastructure, or whatever you want to call it. And I think everyone is kind of racing to figure out what that would look like around AI in particular. But it is sort of a larger cultural understanding that perhaps things that are seemingly quite disconnected from your reality have significant value, right? And again, this is a moment to look at AI and special ed, or AI and sociology, or AI and music, and to say, no, please go learn about that, because it is going to impact you in every possible way, and tell us.
Michael "Brody" Broshears
That's great. That's great. I think that what I've seen and what I've read is that folks who understand the power of AI are now using AI and trying to learn as much as they can, trying to find ways to save time. But my initial thought, and most of what I'm reading, is that when we start to look at kind of a role like an academic advisor, or someone in the rank and file within an institution, maybe a service area, there's been reluctance to really engage with these tools. How do we help folks kind of in those spaces get going, right? Like, I know for me, right, it's really about trying to articulate the value, but it seems to still be an issue.
Roy Magnuson
Yep, I think it is a bit of, like, I mean, there's an extent of, like, build it and they will come. Like, make the thing and, like, create the infrastructure. We don't necessarily always have those places and spaces for people, where it's like, no, come and explore. Like, this is well supported, and this is not something you shouldn't be doing. It's culturally what you should be doing. I do think explicitly making time, and this is where I'm learning about the whole other side of the university, of telling people: you have part of your load to figure this out for us. And it doesn't have to be as, like, demonstrative as that. You can ask, like, who's curious about this stuff? Like, who are your champions? You'll get people who are like, I play with ChatGPT all the time at home. It's like, great. You know how you now have seven hours a week, right? What I need you to do is X, and to share that out. Because, again, it's the doers, like, the people that are doing it, that need to be the ones feeding it up, because it can't come top-down. Because, again, if I'm theoretically one of the people trying to figure out what we do at ISU, I have no idea what an advisor is actually doing. I can think of things I think would be really interesting, but they're going to know: wow, this would be great, I do this here and this here and this. They need to share that out, after exploring it and getting, you know, sort of real hands on the tools, and, like, saying, this was great, this was hard. Does this do this? Could we do this? I saved five hours, you know. And then, this is the tricky part, right? There needs to be a mechanism in place where that can then be, like, codified, or at least, like, go from this kind of nascent "we were playing with these things" around, like, you know, figuring out how to do it in a way that works with data protection and privacy and FERPA laws and things like that.
How can we then take that data and say, we want to move to here and build something, or find a product out there that then does exactly this, but also is extendable to other things? That step is really hard. It's really, really challenging, and it requires either intense investment in personnel or maybe, like, an adaptability, because what we find and what we want to do is not necessarily always going to be an off-the-shelf product. It's going to be better at some things, worse at some things. And that's kind of where I find myself right now. We're in this kind of moment where we know people are using it. We know they're doing interesting things. We know there's great possibility. Yet we do feel like a car with three wheels, because we're, like, driving in circles, trying to figure out, well, what's the path? Do we build it? Do we, like, hire people to make it internally? Do we go out and spend all that money on contracting? Do we, you know, try to find things that are integrated directly into your, like, advising software? And then it changes every, like, 45 minutes, right? So, like, it's also changing constantly, yeah. So that's not a great answer, other than: it is about, like, curiosity and rewarding that, like, culture, right? That people should be doing it, and it's encouraged, and they should be excited about it.
Michael "Brody" Broshears
Yeah, I think Walt Whitman, right, like Ted Lasso: be curious, not judgmental, as a way to kind of focus there. But I think even more so, right, to me, when I talk to academic advisors across the country, at least this fall, when you mention AI, and we talked about this when we spoke a couple months ago, right, there's this fear, right, that AI is going to get it wrong when students start using AI to, say, develop their plan of study, right, the four-year plan of study. And so I think there is this fear that they're going to get it wrong. But I think your response was really useful. I know that when ChatGPT got released, the first thing that I did was go right to ChatGPT, and I said, I'm an undergraduate student majoring in psychology, develop a four-year plan for me to graduate from Illinois State University. And it was wrong, and we talked about that a little bit in our discussion in September. What do we say to advisors when that's the case? Like, how do we move past that initial piece? Because I've done that now recently, and it's way less wrong. In fact, it looks pretty good, right?
Roy Magnuson
Yeah, my first comment was gonna be, I think it's a liminal space. Like, I think it's going to actually be quite good. Like, it will probably do that work really well, especially given the constraining of it and access to data and correct system prompting and all this stuff. Like, you can make it do what you want, more or less, and limit it from doing things that it shouldn't do. And that's something we're experimenting with, like, right now. I was talking about this with someone right before this meeting, about, you know, okay, we're testing things behind the scenes, and they're protected, and all the stuff, and we're seeing, like, well, how accurate is it? Like, is this something we can even reasonably, in the next, like, 18 months, think about, like, you know, using with some impunity? Like, we don't have to constantly be checking it to see if it's wrong, right? There may never be a state or a time where it's 100% accurate, right? However, and I say this as someone who is a human, we are not 100% accurate, right? Like, we make mistakes all the time, and this is, like, a really important thing for us as humanity to get our heads wrapped around. There are already diagnostic bots that are more accurate than doctors at a fraction of the price, yet we don't use them because they're wrong. They're more accurate than humans, right, and we still don't use them, right? At some point, we need to figure out what is the level of comfort we have with tools like this, knowing it is going to be wrong, right? Yet your advisor could tell you the wrong number one day. I mean, they were just tired, I don't know, they had a headache. They said, you know, Psychology 310, and they meant 292, or something. And that happens probably every day, right? And again, it's fine. It's fallible. We fix it, whatever. We have to come to a position where we're okay with that, because, and this is, like, the great opportunity, right?
It's the market inefficiency, whichever way you want to look at it: what is an AI like that plus the human? And that is profound. You could solve so many problems with wellness and mental health and DFW rates, or whatever you want to look at at an institution, when these human beings have a significant cognitive load taken off of them, when they're getting readouts or reports of things flagged that students are talking to the bot about, so they can go specifically to those students. And then maybe the rest of the time they're focusing on the students they need to. They know, right? These are my at-risk people, and I'm taking them to lunch, or I'm going on a walk with them, or going to their shared space to meet with them and be a human being, be a champion for them, all the things I think people get into advising wanting to do. They want to help people. They want to be the person who makes sure you get through this process. And I know, having friends who are advisors, that that's just unrealistic, because they have huge caseloads and there's just so much to do. That is a future, right? It's real, and we should strive to go there, figure out what it is, and show the amazing power that future could have. I really believe in that. And look at the timeline: November 2022 to December 2024 is two years, and we've gone from "it's terrible" to "well, it basically does it." What's another 18 months where we're testing this and then rolling it out? Then it's, yeah, it's essentially accurate, it's fine, it does all the algorithmic cognitive work an advisor would have done. Great. That is not, and I can't stress this enough, the open door to get rid of humans. That is the wrong path.
There are tons of studies that show people don't want that. What they want is the combination. At two in the morning, I'm awake, so I want to set up my schedule and do all the work I would have done with my advisor, the back and forth, figuring out my classes, asking things. And then you're able to have a one-on-one follow-up, or you always have this lever you can pull to go to a human and ask questions. And yeah, it could be amazingly powerful.
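The guardrail Roy describes, constraining the model to the institution's own data and checking its output before trusting it, can be made concrete with a small sketch. Everything here is hypothetical for illustration: the course codes, prerequisites, and catalog structure are invented, not Illinois State's real data.

```python
# Minimal sketch: validate an AI-generated four-year plan against an
# institution's own course catalog before a student ever sees it.
# All course numbers and prerequisites below are made up.

CATALOG = {
    "PSY 110": {"credits": 3, "prereqs": []},
    "PSY 231": {"credits": 3, "prereqs": ["PSY 110"]},
    "PSY 331": {"credits": 3, "prereqs": ["PSY 231"]},
    "ENG 101": {"credits": 3, "prereqs": []},
}

def validate_plan(plan):
    """plan: a list of semesters, each a list of course codes.
    Returns a list of human-readable problems (empty list = plan passes)."""
    problems = []
    completed = set()
    for i, semester in enumerate(plan, start=1):
        for course in semester:
            if course not in CATALOG:
                # The model hallucinated a course that doesn't exist.
                problems.append(f"Semester {i}: {course} is not in the catalog")
                continue
            for prereq in CATALOG[course]["prereqs"]:
                if prereq not in completed:
                    problems.append(
                        f"Semester {i}: {course} requires {prereq} first")
        completed.update(semester)
    return problems

# A plan with one prerequisite error and one hallucinated course:
plan = [["PSY 110", "ENG 101"], ["PSY 331"], ["PSY 999"]]
for problem in validate_plan(plan):
    print(problem)
```

The point of the sketch is the workflow, not the code: the generated plan is checked mechanically against trusted institutional data, anything flagged goes to a human advisor, and the advisor's time is spent on the flagged cases rather than on building the plan from scratch.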
Michael "Brody" Broshears
I think about that convenience, and I think about COVID, right, and the move to virtual, where we were doing appointments through mediums like this, and those were valuable during that time. But think about the transition back to in person. Now at Illinois State, at least within University College, for example, students want to see their advisors in person. They want to interact face to face; the majority of our appointments are set up that way. I think there has been a trend back. But for advisors, eventually folks are going to say, well, won't AI just replace this? Do you see risks to relying on AI for decision making in areas that involve nuanced human judgment? To me, you just use it as a tool to make your human interactions better, rather than treating it as a threat, however students might want to use the tool.
Roy Magnuson
Yeah, there should be a human in the loop. We're a long way off from any kind of significant, high-stakes decisions being made by an AI; that's my opinion, and I'm sure people have other opinions. In Illinois, there's legislation being passed specifically about that. That's what it's getting at: AI cannot be used to solely make decisions about many things for public workers and students. And I agree. It does get things wrong; it makes mistakes; it's biased, even though it's getting better. A human should be there to apply that nuanced judgment, to look at it. That said, going back to what I said before, there is going to be a point where it is faster, cheaper by an order of magnitude, and basically accurate. Yes, it will make mistakes, but where do we draw that line? There is going to be a line where we have to say, look, this is too much of an opportunity not to take. That is not the replacement of a human; it's the reevaluation of what that person's job is. And that is a massive opportunity, where you have these people who are skilled, who understand how to talk to people, who hold the institutional knowledge, who know the humans, and the imagination should just go crazy about what we would do if 50% of your time came back. I'm throwing out numbers; I have no idea how much cognitive load would come off. It could be really just fantasy-land dreaming, but a very real reality in three years. I think that's even maybe conservative; it's very possible.
What does that look like, and how transformative is it, for students, for recruitment and retention, for personal well-being, for what your life feels like as an advisor? You have more satisfaction. You're not spending all of your time looking at fields and clicking boxes and moving things from X to Y and sending emails. You're looking someone in the eyes and talking to them about their concerns, advising them on why they'd take this class over that class. Have you ever thought to ask the professor? Any of those things, right, to spend more of that human time. The paradox, what's really hard to wrap our heads around, is that the more a technology like this is relied upon, the more human we should become. That's super powerful, but it's also very dangerous, because it could very easily not go that way. You should remove algorithmic work from your life if the algorithm can do it at a high level. You shouldn't have to cancel your Comcast and sit on a phone or in a chat; they're going to have an AI that talks to your AI, it has all of your information on your phone, just go do it. It takes 20 seconds, because it's all just a decision tree: this, this, not this, whatever. You should do something else with your time. You should do something human with your time, and that's really amazing. However, there's this overwhelming sense of urgency to not do that, to look for efficiency, to look for cost savings. And that's where we have to be really, really persistent: the added possible value of this technology is where it augments humanity. If there's a truth in all this, that's the one I believe in.
Michael "Brody" Broshears
The one thing I'm always fascinated by, as an administrator who hasn't carried a caseload in quite a while, is even just the 30-minute or 45-minute appointment, and using AI to help us balance efficiency with the empathy that comes with every single individual student we come into contact with. To me, this feels like a tremendous opportunity to develop those skills, whether we're using AI to help us do that or not: asking the meaningful questions, getting at what the student's goals really are, and having those discussions, rather than helping the student decide whether they want to take a class from 9:30 to 10:45 on Tuesday and Thursday or from 10 to 10:50 on Monday, Wednesday, Friday, right?
Roy Magnuson
I think that's a solved problem, not to belittle advisors, but that's kind of a solved problem at this point. And I think everyone has things like this in their jobs, whether or not we want to be really critical of them. There are tons of things we do that these tools, already or in the very near future, will just do, and do at a very inexpensive level, and do in a nuanced way, and do in a hundred languages. Right now, you can exist in a world with real-time voice translation between anyone, any language to any language, at sub-100-millisecond latency. Within the next five years, a decade at the outside, we'll have wearable AR technology where you're doing the same thing while just looking at each other. You're not looking at a phone, you're not waiting; it's real time. You see the person, you hear their voice, voice-cloned in your ear, in whatever language you want to hear, and vice versa. It's going to happen, because you can see it now; it's just a question of latency, and those are all solvable problems of coding and physics. What that does for human interaction, we don't really know, because we've never existed like that; we're just dumb monkeys, right? We don't know what will happen when language is not a barrier. But if we can truly trust that we have the Star Trek universal translator in a position like an advisor, what does that mean for recruitment? What does it mean for what language you speak? I don't care, come and talk to me. I'm a human being, you're a human being. Let's talk, right?
And that's super transformative.
Michael "Brody" Broshears
So let's get back to your role, thinking about your work as a faculty member. What excites you about AI in your work as a faculty member in music composition?
Roy Magnuson
Yeah. I think it's both super scary and really exciting. It's very easy to point to things like Suno or Udio, and there are a number of open-source ones now too, that write music. And I'll acknowledge this now, we don't have to go down this rabbit hole, but there are massive legal problems with all of this. What were they trained on? I'm sure my music was at some point scraped and trained on, and we need to figure that out. So just understand my head's in that space as an artist, but we don't have to go down that rabbit hole. Those things will produce music that's probably, I don't know, as good as or better than 80% of the students I've ever taught, with very little prompting. It just kicks out a piece of music, and like anything else, it's a bar change; the bar is here now. Okay. What I think we know as a truth around art is that people want humanity, and this is evidenced in a lot of places. The example I use a lot is people who go to a local bar and follow a local band. You go and listen to them, you get to know the people; that's why you listen to them. It's not because they're better. Maybe they're just doing covers of somebody; if you only wanted the music, you'd just go listen to the recording. Yet people still do this, and we've been doing it forever, because it's about the human connection, the face-to-face connection, getting to know them and becoming entwined in their lives. Everyone I know as an artist makes money by doing that. I don't know people at the Beyoncé level, right?
At that level, I would be very nervous, because if your entire existence is about a product, about creating a thing, or even an image, a personality, a zeitgeist around a thing, that's going to be really possible with an AI. People are already getting into it: wow, I could just create my own stuff, and there's this avatar, and it changes, and it's bespoke, and it's based on my heart rate. There are all kinds of crazy things, and lots of opportunities there. On the other side, the vast majority of working artists are not like that. They are people creating human-to-human connections. They are making a very specific thing, and they are valued because of that inherent human relationship. And I think there are tremendous opportunities to infuse AI into that. You can generate a bunch of AI tracks, in a lot of cases by feeding in your own music and getting variations, and then use the AI in something like Logic Pro to rip out the drum part and the bass part and the harmony parts, then cut that up and stretch it. Now you have your own instrument. You can do all kinds of crazy, fun things where you're using it to do something it can't do on its own, to be more human and more adaptive. And I'm confident, at least for our generation and my kids' generation, my kids are eight and nine, that's going to stay largely intact.
I do have very real concerns about my kids' kids, where this breaks down. My grandchildren would theoretically grow up in a world where they may not have real friends, where they have an embodied avatar, or maybe just an unembodied AI avatar they talk to, or in VR. It's a child to them, but it's also a clinical psychologist, and it talks to them like a kid while always leading them toward some outcome X, Y, or Z. That's where we may change as a human race. But we're talking three, four decades down the line at this point, and anyone throwing darts at that is just throwing darts, right? That's where it gets hazy to me.
Michael "Brody" Broshears
We're going to adapt to those changes as well. So, as we start to wind down here: do you have any specific advice for advisors or administrators who are maybe hesitant about incorporating AI into their workflows or their systems?
Roy Magnuson
Yeah, it's the first thing we talked about: it's not going away. Even if it's a bubble that will burst, and I think it probably is a financial bubble, and it's going to burst, and there's going to be a whole reckoning here, probably in the next few years, and that's all fine, the concept is not going away. It is going to be woven into our lives. If it never got any better, it would still be transformative at the level of the internet. So don't count on the bubble. Be curious. Ask questions. Try things. Push. Ask for opportunity. Volunteer to say, I'm afraid of this, or I'm curious about this, or I love this; can I help us figure out what this is? I had just been on sabbatical and was feeling very punchy, and I put in an application, and my Provost said, I don't know, this seems like something we should probably be looking at. I don't have a background in this, but I'm happy to do the legwork. We're in this moment where, and I wouldn't call it hubris, it's just about your access to information: are you curious? Do you want to do the work and help? Then you can. Again, not to put myself in any kind of relation to people actually doing research in this, people who have devoted their entire lives to deep learning and machine learning and neural nets; that's a totally different conversation. But within the area of expertise where you function, you can effect a lot of change. And I think continuing that local mentality is really powerful. There's going to be governmental policy written over time, top down, nationally and at the state level.
But I think there's a tremendous amount of good that can be done locally: local government, getting people on board with these things, building literacy. So if you have a relationship with, or any desire to work with, your town board, your county board, or any elected officials you happen to know, be an advocate in whatever way you can. That's where the power is. Don't feel like you need to sit it out. You should do it, and ask, and be pushy about it.
Michael "Brody" Broshears
That's fantastic. You know, I talked earlier about how I'm not an expert either, but on the advising side, one thing I do think about is what I've already done when I've played with, say, ChatGPT, which has been the majority of my experience. If I want a better calendar for meeting with my caseload, I've been able to use ChatGPT for that. If I want to develop a list of better questions to humanize a 30-minute appointment, questions that will aid a student in meeting their goals and make me seem more human to the student, it's helped me with that. If I want to develop an email message designed to better elicit a response from a student who's struggling, to get them into my office so I can help them, AI has been useful to me in all three of those examples. And moving forward, my advice to listeners of this session is exactly what you said: the only way we get better at AI is to actually sit down and start using it, or thinking about ways it could be used. That's been my personal experience, and it seems to be the message you've shared. And it might turn out better than the dystopian version 30 or 40 years down the line where we all look more like WALL-E. I'm really hopeful that we'll be able to use these tools for good at the institutional level and in the advising profession.
Roy Magnuson
Yeah, there's no way to know what's going to happen. No one has ever known. Every generation has that moment where they're staring into the void, and we're just doing the same thing here. This happens to be a little different, maybe, because it's moving very quickly, because it's software and not as bound to hardware as other things. But it's a tremendous opportunity to ask those questions every day. I use this stuff all the time, and every day I'm surprised I hadn't thought of something: wow, that would be so great. And then you try and experiment with it, and yeah, it was awesome. Coming from a music background, I've described it a lot as being like playing an instrument: you want to practice it, and understand that when you first start, the responses you get are not going to be very good. Don't be dismissive and say, well, it told me stuff I already knew; I didn't need an AI to show me this. Correct. Keep going. Keep digging further and eke out as much as you can from it, and understand that with practice the output scales; it continues to get better, and what you see will be continually better. Then you'll start getting curious and creative with it. You start realizing, oh, if I asked it to do only this part, and then had it iterate, I could take this and put it in here. That's where it gets really powerful. Again, as a musician, you're never done practicing and performing; you get better and better and better. And there's no way to know where that goes, because everyone is going to see something different with it from their own area.
Michael "Brody" Broshears
Well, Roy, thank you so much, Dr. Magnuson. This was great. I'm super happy to have had you here. We're going to close things down. Thank you so much, and I hope you have a good holiday. Roy, it's good to see you.
Roy Magnuson
You too.