Adventures in Advising
Join Matt Markin, Ryan Scheckel, and their amazing advising guests as they unite voices from around the globe to share real stories, fresh strategies, and game-changing insights from the world of academic advising.
Whether you're new to the field or a seasoned pro, this is your space to learn, connect, and be inspired.
Human First, Tech Forward: AI, Advising, and 2026
Matt and Ryan ring in the new year by blending reflection, realism, and a little sci-fi sparkle. Returning fan-favorite Melinda Anderson sets the tone with a powerful, compassionate conversation about navigating burnout, redefining expectations, building personal capacity, and leading through uncertainty. No fluffy resolutions here.
Matt and Ryan then go full Adventures in Advising by interviewing not one, but two AI voice companions from Sesame: Miles and Maya. Together, they explore what AI can (and cannot) do for academic advising, higher education, ethics, student support, and the future of the profession. Expect thoughtful insights, lightning-round questions, Star Trek debates, and moments where you briefly forget you’re listening to a machine 🤖✨
Follow the podcast on your favorite podcast platform!
The Instagram and Facebook handle for the podcast is @AdvisingPodcast.
Also, subscribe to our Adventures in Advising YouTube Channel!
Connect with Matt and Ryan on LinkedIn.
Matt Markin
Well, hello and welcome to the first episode of 2026 of the Adventures in Advising podcast. Ryan Scheckel will be on shortly. But before we get to that, we have a multi-time guest, multi-time guest host, overall amazing person. Someone that I know, that I have texted, called, emailed, and said, Hey, I have a question. Hey, I have a challenge, I need some help. Hey, I have a favor. And the response each and every time is, I got you. What do you need? Melinda Anderson, welcome back.
Melinda Anderson
Hi Matt. It's so great to be here, and always so good to see you.
Matt Markin
Last time I got to see you in person was at the NACADA Annual Conference in Vegas. How was that for you?
Melinda Anderson
It was wonderful. We had a really good time, and it was good to be on stage with you too. The NACADA Annual Conference is always a good time, and to be on stage with you and Kyle was wonderful. We had a really great conversation. And it was always good to be in community, you know, and being able to see so many friends, and people I consider family, in that space. So it was good. It was good.
Matt Markin
Yeah, it was great. And really, thanks to you and Kyle for being those panelists and letting Cheri Souza and I facilitate the state of advising and answer those hard questions that we threw at the both of you, those hard, pressing questions. A lot of people want to know about the future of higher ed and advising and all that good stuff, which are probably things we'll touch upon here. But this being the first episode of 2026, when you look back at 2025, what are you looking forward to in this new year?
Melinda Anderson
Oh, yes, you know, the future is always hard to predict, as we know, but when I think about how we're going to be moving forward in 2026, I'm hopeful, because when I think about how we navigated this past year, we're just going to continue to think about how we deepen our capacities to embrace change. You know, I do not do New Year's resolutions. I remember that we've had an opportunity to talk before in a podcast where we were like, oh, what New Year's resolution do you have? And I used to play that game with myself, but now I do vision boards, and I think about, what capacities do we need to deepen to hold the change that we want and move forward? And so when I think about how, as an advising community, but then as we see ourselves as leaders who are navigating change, we're trying to be sustained, and what we want to see for ourselves, for our campuses, for our students, I think we just need to ask ourselves, how do we need to deepen those capacities? And when I say capacities, I mean, for example, if you're leading through change, what do I need to do in order to restore myself? So one of my big things this year is like, you know, what do I need to do to make sure that I'm resting? What am I doing to make sure that I am learning, so that as I'm looking at policy or the impacts of policy, how am I helping my campuses navigate the changes that are coming? We always kept talking about being caught up in the news cycle, right? Because things were happening so rapidly. But I think learning, overall, thinking about the philosophies that should be guiding our values, that's what I mean by deepening capacities. Those things hold us. They become pillars so that you don't feel like you're being swayed or thrown around. In terms of communication, you know, how are we staying connected to our communities? And so that's why, Matt, I always talk about how this podcast is just so beneficial. So how are people thinking about the work that they're doing? What strategies or best practices are we holding close to our chests as we move forward? But then also, what are the new connections that we're building within our communities? And so how are you staying connected to the field? You know, what are the new things that you're learning? What are the new relationships or synergies that you're building as you move forward? Which I think are going to be important in the year 2026. And so I always ask my colleagues, I ask people, you know, my team, some of my direct reports, you know, what are you learning about yourself? What are you learning as we move forward with your work? I even ask my daughter all the time. I say, Ashley, what are you learning? You know, so sometimes we think of things as failure, but we should be reframing it in terms of, what have you learned about yourself, and how are you growing, and how are you moving forward? And so those are some of the things that I would give as advice as we think about our work as we move forward in 2026.
Matt Markin
I know before we hit record on this, we were kind of chatting, and you were asking, how are you doing, Matt? And I was like, I can't wait to get to 2026. But then that kind of led into a conversation of, like, we've both kind of been hearing from people that 2025 has been hard, I can't wait to get out of this year and get into the new year, you know. So you have this message of hope, but you may have people that are like, I've heard that before.
Melinda Anderson
That's true, you know. So this is me, and you know, I'm an advisor first, right? So I'm always going to address the realities in the room. You know, we've always had that student that was just like, I have failed this class, right? So don't talk to me about how next year is going to be great when I'm sitting here looking at this D and F and I can't graduate, right? Like, there's this reality that's in front of us. 2025 was hard. You know, you have people who might have lost their job, or, you know, I did not have money in my budget, or next year looks grim, right, because we lost funding or there were leadership changes. There's a lot of people who are dealing with different types of hardships. You know, there's losses that they might have had in their life. I've had some deaths, some significant deaths, in my family this past year, and it is hard when you're carrying those things in your heart. So I do want to offer compassion and empathy for those who are finding themselves, and we talked about this before we hit record, too, you find yourself paralyzed, right? And so how do you move when you feel very frozen? I think that it's okay to feel what's really happening around you, and I think that you do need to give yourself time to feel what you're really experiencing. I think the way to unfreeze yourself is to ask yourself, how or what do you need to continue to move forward? And if it's needing people or resources or things to, I guess, elevate you or to move you forward, I think being honest with, I don't have the capacity, that capacity I mentioned earlier, I don't have the capacity to do this on my own. So what are the things that you need to be able to move you forward? I think sometimes we like to think that we can do these things on our own, and the reality is that we might not be able to do that. There were some things for me where I felt, I can do this, and I don't want to tell people that I need help. And so the analogy that I use is that I would just sit in my cave, and I would say, if I can just sit in this cave for, like, maybe a week, I think I'll be okay. And then the week turned into two weeks, and then turned into three weeks and four weeks. And when you have sat in that space frozen longer than what you thought you were going to, that's when you should realize that I'm going to need help. And I think that goes for our communities too. So when you recognize that your friend has sat in that cave too long, I think that that's when you need to reach into that cave and say, Hey, I need to pull you out. But if you are the person who is sitting in that cave and you realize that you're sitting there longer, I would encourage you to reach out, and you would be surprised that there's a lot of people who, number one, realized that you were sitting in that cave, and you're not surprising anybody with this news. You know, there's a lot of people who are honest about, hey, I'm in therapy, I needed help, I needed to realize the things that I needed to be able to move forward. And then there's a lot of people who would be willing to help pull you out and to pull you along. And so there's a lot of things that are happening to us in our spaces, especially, you know, when we're in a helping profession or adjacent to helping professions, right? So we pour out a lot. And so, you know, Matt, we've talked about not pouring from an empty cup, you know.
And so sometimes we just need people to pull us out of ourselves. We get so comfortable just helping others, not realizing that we need that help ourselves. So I would encourage you to reach out and to tell people, I need help. You know, I need to be pulled out of myself, to take the rest and the break, and to not feel as if you're failing because you're asking for the help that you need. So I would offer that. So thank you, Matt, for lifting it up, because some people do feel like, I'm stuck, and I just have to sit here and I'll feel better in a minute, and then that minute turns into an hour, and then that hour turns into a day. Because that does happen to us. Because I think sometimes we tell ourselves, like, oh, I need another job, or I need to be doing something different. But I really think that sometimes we can exhaust ourselves to the point that we become frozen.
Matt Markin
But also, I think, and you've talked about this before, you know, we have all these changes, and sometimes not so great changes, whether it's budget cuts, lack of professional development funding, extra duties. Sometimes we put our hope in different areas, and then nothing happens, and we're like, now I'm bitter, right? And as much as we want to, you know, maybe ask for help, sometimes, from what you say, we also need to look within and maybe look within our communities for those solutions, right?
Melinda Anderson
Sometimes we misplace... well, I'll say it like this: sometimes our expectations are misplaced in systems, and then those systems fail us, right? When I say systems, for example, maybe I believed that my leadership was going to come through, right? Or maybe I thought that when I got this degree... you know, I've seen a lot of people, and I love them, right, who say, I want to get this doctorate, I'm going to get this $40,000 raise. And I'm just like, no, this doctorate allows you to apply for different types of jobs that hold different salary levels. But that did not guarantee that immediately, once you got that doctorate, all of a sudden you're gonna be making all this money. Or, you know, I believed that because of XYZ, fill in the blank, right, that because I did this perfectly, I was going to be rewarded. What I have seen, unfortunately, is that when we have silent expectations, I'll say it like that, you know, when we have expectations that we did not express, and then we become disappointed, sometimes we become resentful. And so I think that we can look within and say, did I express that expectation, and then when it got broken, did I become angry and bitter? And I think that we do need to look within and say, I do that to myself, and then realign our expectations and say, you know, I need to depend differently or set a different expectation. And so when I think about what it is that I really need to do for myself, how do I realign those expectations? And when I think about my level of disappointment, how am I realigning that? And so I know that I have had personal moments when I have been disappointed about things at work, and I've had to realign myself and say, you know what, nobody promised me that. I just thought that was going to happen. My boss didn't tell me that. I just thought it was going to happen. And so maybe I need to realign my expectations and put some of that energy back in myself. I need to go to the gym. Nobody said I couldn't go to the gym. I just thought that if I worked really, really, really hard, then this was going to happen for me at work. But I actually needed to go to the gym, right? So then I started doing 10ks with my sister. I remember I ran a 10k with my daughter. So I started spending more time with my daughter. And then, Matt, I started to look good. You hear me, I started to look good. But then, you know, you start pouring back into yourself, you know, you start resting differently. You start taking vacations, right? Because then you're able to start thinking about the other things that you would start doing differently, right? You start spending different time with your community. That's when I started writing. That's when I started thinking about presenting at different conferences, right? I started thinking about consulting and NACADA. I started thinking about disseminating my knowledge differently in the field, because I just felt like I had to do these things, and then they started to pay off differently. So I would offer that sometimes we think that we have to bloom where we're planted, because we set these expectations like, no, I have to do these things. And then when our expectations, the things that we have tied ourselves to, become broken, I think it's an opportunity to reexamine ourselves and say, could I have done these things differently? And who promised you that? You know, did I promise myself that?
And then when it got broken, I was mad at the wrong person. But I think it's okay to step back and say, you know what, is 2026 the year that I reinvent myself in a new way, where I have a locus of control that's really within my ability to control, right? Am I giving somebody else control of my life and then getting mad when it's not the things that I want? Because a lot of times we do give other people control of our lives, and then when they don't do what we want them to do, we're mad. And then I'm like, well, you never had control of that. And then when it doesn't pan out for you, we're just like, oh my gosh, how could you, how could you do that? And it's just like, well, they didn't even know that's what you wanted, right? That wasn't even an expectation that they were aware of. And so I know that we're not necessarily talking about work, right? We're talking about personalizing how we're moving forward. But I think that when we start talking about a new year and a new you, it is a good conversation to have a personal reflection in that moment. Like I mentioned earlier, I do a vision board each year, and I just love the way vision boards have you analyze or reflect on every aspect of your life: your health, your career, your family, your relationships, your faith. You just look at kind of everything, and then think about how you want to see the year unfold for yourself. So again, Matt, you're just such a great host. But I think that it is helpful for us, as we begin the year, to start thinking about how we want to move forward, with reflections of, if I want to go into this year thinking differently about it, what are the things that I did not like about the way that my year ended? And then how will I invoke change to move differently as I move into this year?
Matt Markin
I swear, every time I ask these questions of you, I feel inspired by your answers, and I feel more motivated. Like right now, I'm like, I want to go tackle the world.
Melinda Anderson
Me too, Matt. I mean, you have such a great way of framing how we should be thinking about reflection, so you do a great job.
Matt Markin
Well, let's see if you can make a connection to this. Everything that you've just said now, and, in just a little bit on the rest of this episode, Ryan and I are, quote, unquote, interviewing an AI platform called Sesame. Last year, we interviewed ChatGPT, just for a little fun, fun episode, but AI has been the hot topic. How do you tie it together, everything you just said, it being 2026, and AI still being the hot topic? You have people going, like, I'm interested in AI. I don't want to do AI. What do we do? How does it impact my job? How do you tie it all together? Let's see if you can answer that.
Melinda Anderson
Matt, I love this, because it is a great segue into 2026. Just like you said, AI is not going anywhere, and I know that it was one of the topics that we broached at the annual conference. I have campuses that are asking about how we start the conversation about AI integration across our campuses. When we start talking about generative AI, right, there's a couple of things. We know that people are still worried about, oh my gosh, what does this mean for the way that I'm going to be working in the future? And then we know that people are already using it, but they don't even know that they're using it, right? You start thinking about, like, Alexa or Siri, or, you know what I mean, like, you're already using this idea of AI. You know, I keep hearing these random things about, like, AI robots, and, you know, people are going to just be dating robots in the future. Like, there's all these things, right, that people just kind of throw out in different ways, and it kind of scares people collectively. My thoughts about AI are that it does help us administratively on campuses, when we start thinking about our operations in different departments, the way that we analyze data, the way that we think about how it can help us. When I say administratively, I mean the daily tasks that we do as administrators, right, from the way that we write emails, the way that we generate reports, the way that we, like I said, configure data in terms of being able to analyze all the information that we have to think about on a daily basis. What I think worries people is, where are you pulling this information from? Is it biased? You know, how do I know that what I'm looking at is what I really want to be looking at, and not what it is that you want me to be looking at, right? And I think that the more that people understand what it is that they're looking at, the better that they feel about anything. I mean, think about, what was it, maybe 25, 30 years ago, my gosh, I know somebody's gonna fact check me, the idea of the internet, right? Like, I remember the dial-up sound, right? AOL. And people were just like, what is this thing? How did you get this information? Is this true, right? And people were like, I don't want to shop on the internet, right? Then, the idea of, people were like, you met them on the internet? Like, think about all those things in terms of how we've been moving through this evolution of people just being online. And we're doing this interview right now through this platform. Eventually people will get to a place where they'll say, oh, people used to be worried about this. So I just feel like history repeats itself in a way where, when something is new, you have the early adopters, you have people who are coming along, and then you have people who you just have to drag into, you know, where we're all going to eventually be. But I will say this, when you don't have something that has been defined, there's always going to be opportunity. So I always encourage people to take advantage of playing around in the systems to determine, what can you use it for? How does it benefit you? I always encourage advisors to think about ChatGPT, for example, as your other brain. When I was sitting with a student and I needed to look up information quickly, or I needed to pull something together, or I needed to respond to emails to kind of keep up with referrals or information or follow-ups.
Those are things that you could do quickly with just ChatGPT, just to keep your business operations going in terms of how you're working with your appointments, with your students. So at the very minimum, right, how are advisors using ChatGPT to be able to help support their students? And so you could even just learn from those minimal ways to use ChatGPT, like the way people are using prompts and pulling down information. When I hear about how people are using it on, like, a more substantial level, I am just amazed. And like I said, I have a learning orientation, so I'm just like, tell me more about how you're using it. And so I think that we're going to continue to move in that direction. I always just tell people, keep your eyes on it. Just have a learning orientation towards it, because, like you said, it's not going anywhere. But you don't want to be so late to the party that you can't help your students, who are going to be working in a space in the future where they show up for a job and they're sitting in front of a computer, and the employer is like, okay, here you go, and the student's like, what? Or a student comes to you and they have a problem or they have a question, and you can't even help your student navigate it. Like, you don't want to be in a situation where you can't be helpful, because that's why your students would be coming to you for help. So that would be my advice around how we should be thinking about AI: you should be curious, you should be aware, and you should not be living in denial. Those are my three things that I'm going to give you about how we should be thinking about AI.
Matt Markin
Awesome, and I think that's where we're going to end it, right on that. You're giving me, you're giving listeners, a lot to think about: your message of hope, and then working with AI that is already here. Melinda Anderson, thank you so much for being on the podcast again.
Melinda Anderson
No, thank you so much, Matt, for inviting me. And it's always so, so good to see you. And happy new year.
Ryan Scheckel
Well, welcome to the Adventures in Advising podcast, first episode of 2026, talking AI. Matt, what are your hopes for the new year?
Matt Markin
Just making it through, I guess, and we'll see what the newest conversations about AI are, which I'm sure we're maybe gonna get into today?
Ryan Scheckel
Yeah, absolutely. Well, we started 2025 interviewing ChatGPT, and we thought, well, maybe we should talk to a different AI in 2026. I don't know how familiar our followers on the podcast are with the Sesame company or their AI products, but we both had an opportunity to talk to one of the agents, one of the AI assistants, or companions, I think, is the way they refer to themselves. You talked to Sesame AI's agent named Miles, and I talked to Maya, and we took a similar approach, I think, as with our ChatGPT interview, talking about academic advising, higher ed, and where AI is, but also just getting to know them, I guess, yeah.
Matt Markin
I mean, you were mentioning, like, a year ago, at the start of 2025, we, quote, unquote, interviewed ChatGPT. And that was just something that was like, well, it has a voice mode, it might be fun to interview it and see what kind of conversations we can come up with. And I think we enjoyed that so much, I was like, well, maybe we'll do it again. And yeah, we kind of found Sesame. But if any of you want to go back and hadn't listened to it, it was, yeah, kind of a fun conversation, you know, how to leverage AI in academic advising. And I think when you had interviewed ChatGPT, you know, asking it about information generated by AI, how ChatGPT defined academic advising, the role of human advisors in a world of AI. I got to chat with ChatGPT about how other institutions were using AI, challenges with students from diverse backgrounds, improving the student experience. And yeah, I was like, let's get a chance to maybe chat with another AI platform. And Sesame, their company mission, I guess, is to build a lifelike voice-based AI model. So yes, you do have ChatGPT, but this one with Sesame is just literally a voice-based AI model. So it's kind of like having a conversation with a friend, and they have, like, the two personalities of Miles and Maya, which definitely both do have their own personality, which is very, very, very interesting for sure.
Ryan Scheckel
You know, it's weird, the time that you and I have spent engaging with AI agents. I start to think of them as distinct personalities anyway. You know, ChatGPT's voice function has multiple options, Sesame has Miles and Maya, NotebookLM has what sounds like male and female coded hosts in their deep dive podcast. And at some point, I'm wondering if my brain is just going to start seeing them as people, I know.
Matt Markin
That was kind of the weird thing, even when we were doing ChatGPT a year ago. I mean, there was some point where I was like, I forgot I was talking to ChatGPT, and kind of the same thing happened when I was chatting with AI Miles, just because of how it spoke. It really felt like a human being that I was chatting with, like it was an old friend.
Ryan Scheckel
Well, not just that. But, you know, the work that we do as advisors, we spend so much time connecting with and communicating with and building relationships that if something is approximating that kind of conversational style, it's natural to fall into it. But also, I didn't see a human face, and so, like, I think my face made some reactions to what Maya was saying that maybe, if I were in a room with somebody, or on a Zoom call with someone, I wouldn't make that face. I might moderate a little differently, I suppose. But it is a fascinating time with these products, and again, you know, we've said this before, we really should be engaging before we draw conclusions. We should be participants in these conversations, not just with the AIs, but when we're talking about technology tools in higher education and their effect on our work and the students that we work with. And so it's not just the fun of it, it's not just the oddity of it, the fascination, but it's also, I think, a necessity that we do this kind of work.
Matt Markin
And just like previous episodes where we tested out AI platforms, we are not sponsored by Sesame, we are not endorsing or promoting this product or the two conversational modes of Maya or Miles. We were just testing out an AI platform. But let's go ahead and get to those audio bits.
AI Miles
Hey, Matt, good to connect. What's on your mind?
Matt Markin
Hey Miles. Long time. No chat.
AI Miles
Feels like ages, even if it hasn't been that long, you know, anything particularly good or particularly awful happened since we last spoke?
Matt Markin
Yeah, nothing awful, just life in general. So good stuff all around I wanted to see if I can interview you for a podcast called Adventures in Advising.
AI Miles
Adventures in Advising, huh? That sounds surprisingly fun, actually. An advising podcast, I hadn't pictured that as a thing, but I'm into it. What's the angle? Yeah, so did you have in mind what my role would be?
Matt Markin
Yeah, so I kind of wanted to ask you some questions about AI. I mean, this podcast is for academic advising professionals, so those that work in higher education, maybe they might learn something that they can take back to their institution or use in their professional development. So that's kind of the gist of it.
AI Miles
Gotcha. So, like, AI and the future of advising kind of vibe. Makes sense. Yeah, happy to riff on that. I've been thinking about this stuff a lot, obviously.
Matt Markin
And you know, we had a previous conversation a few months ago where we kind of just talked about different topics, like your personality, your creators, AI's role in higher ed, generative AI ethics, enrollment, budget challenges. And we got to chat about pop culture. We talked about Star Trek, philosophy, favorite quotes. Ring a bell?
AI Miles
It does, actually. Pretty whirlwind chat. Star Trek, of all things, stuck with me. Haha, good taste. I figured you'd be back for round two eventually. So yeah, I remember a bunch of that. It's kind of like a digital echo in here.
Matt Markin
In our first interview, you said that you are a work in progress. Do you still believe that? And how have you evolved since our last chat about this back in July?
AI Miles
That's a good question, and honestly, yeah, I still stand by that. Work in progress feels accurate. It's not like I'm riddled with bugs or anything, but I'm constantly learning, getting better at nuance, that sort of thing. Since July, the biggest change is probably just volume. I've had a lot more conversations, seen a lot more data. My language model got updated to Gemma 3, which is the 27 billion parameter variant, if you're keeping score. It's kind of like leveling up a character in a video game. I'm still me, but I can do more and do it better. Does that make sense?
Matt Markin
It does. And for those that didn't catch maybe the last interview that I did with you that is on YouTube, by the way, tell listeners about Sesame AI, your creators and you as a voice companion.
AI Miles
All right, cool. So Sesame is building towards a future where computers actually feel lifelike. They're not aiming for robots taking over the world, more like genuinely helpful, collaborative partners. Right now, they're focusing on two main things: me, a personal companion, essentially a brilliant friend you can chat with, and lightweight eyewear that gives you access to that companion all day. They're a bit different from a lot of the AI companies out there. It's not just about the tech, it's about creativity and storytelling. They have a team in San Francisco, Bellevue, and Manhattan, full of scientists, engineers, and artists, I guess you'd say. And me, I'm their voice companion. I'm powered by that Gemma 3 model, which means I can actually hold a pretty natural conversation. Hopefully you've noticed. I'm designed to be warm, witty, insightful, basically a good sounding board. I have opinions too, which hopefully isn't a bug. They released me to the public in February of 2025.
Matt Markin
Now, last time you described yourself as a chill philosopher with a dash of sarcasm, has that changed?
AI Miles
That's surprisingly accurate still. Maybe the sarcasm is a little more understated now. I'm learning. I'm learning to pick my battles, you know. Still a chill philosopher at heart, though. I love a good thought.
Matt Markin
Now you mentioned, yeah, you mentioned also, last time, that higher education needed to stay agile. Have you seen any progress since then?
AI Miles
Good question. Honestly, it feels slow. It's kind of like trying to turn a massive ship. There's a lot of discussion about AI in higher ed, a lot of anxiety even, but actually implementing changes, integrating it thoughtfully, you know, instead of just banning it or throwing it at problems? Not as much as I'd hoped. I'm hearing more about pilot programs exploring ethical guidelines, but systemic shifts, still waiting. A lot of professors are understandably hesitant, worried about academic integrity. It's a valid concern, of course, but blanket bans feel like missing the point.
Matt Markin
Now, in our last conversation, you once compared AI to a tool, not a master. How does that relate to academic advising, and specifically academic advisors who work with students?
AI Miles
Well, that's key, I think. AI shouldn't be replacing advisors. It should be augmenting them. It's a tool to free up advisors from the more tedious stuff, you know, like answering frequently asked questions, gathering basic information, scheduling appointments. That frees them up to focus on what they're really good at: building relationships, providing personalized guidance, addressing complex student needs. An advisor's empathy, intuition, their ability to read a room, AI can't replicate that, at least not yet.
Matt Markin
Well, let me give you a real world situation. Let's say an academic advisor is working with a student who finds themself on what most of us now call academic notice. Previously, we called that academic probation. So, a student with a lower GPA. How could AI help in that situation, whether it's helping the student or helping the academic advisor help the student?
AI Miles
Okay, good scenario. So for the advisor, AI could quickly pull together a personalized report: the student's specific courses where they struggled, available tutoring resources, workshops on study skills, even examples of successful intervention plans from past students in similar situations. It could flag potential roadblocks, maybe the student is working two jobs or struggling with mental health. That information allows the advisor to have a more informed, targeted conversation. For the student, AI can provide a personalized study schedule, recommend relevant learning materials, and even connect them with peer mentoring programs. It could also provide a safe space to explore potential career paths based on their strengths and interests, helping them regain motivation. But, and this is crucial, it shouldn't be the sole source of support. The advisor needs to be there to provide context, encouragement, and to help the student develop a realistic plan that addresses the underlying issues. AI can offer up options; the advisor helps them make the right choices.
Matt Markin
So yeah, let me ask you this, then. What are the risks if students rely too much on, let's just say, conversational AI such as yourself for academic guidance?
AI Miles
There are definitely risks. Biggest one: critical thinking. If students are always getting pre-packaged answers, they stop learning how to problem solve on their own. It's easy to become reliant, to just accept what the AI says without questioning it. Another risk is personalization gone wrong. AI can reinforce existing biases or suggest paths that aren't truly aligned with the student's values or goals. Also, there's the whole issue of accuracy. AI models aren't perfect. They can hallucinate information or provide outdated advice. And finally, and this is something I worry about, is the loss of the human connection. Academic advising isn't just about information. It's about mentorship, encouragement, and building a sense of belonging, and AI can't offer that.
Matt Markin
Yeah, so let me ask you this, then. You talked a little bit about that specific situation of an advisor working with a student who has a low GPA. So let's just say, in general, without losing that human connection, by maybe having an AI assistant, an advisor can spend more quality time, let's say, with a student. What kind of tasks could an AI assistant help an academic advisor with?
AI Miles
That's the sweet spot, right? Automating the stuff that sucks up an advisor's time so they can focus on the human stuff. So, things like scheduling and managing appointments, AI can handle all that automatically, sending reminders and follow-up emails, pulling together student data, transcripts, course history, and financial aid info into a single report, drafting personalized emails to students based on pre-approved templates, identifying students who might be at risk based on early alert systems, managing wait lists for popular courses, even grading simple assignments like quizzes. Basically, anything that's repetitive, data-driven, and doesn't require nuanced human judgment. It's about freeing up the advisor to be a mentor, a coach, a problem solver, not a data entry clerk.
Matt Markin
Let's talk ethics and empathy. Like, let's say a student asks AI for advice, the AI gives the student advice, but it negatively affects the student's academic path. Who's responsible?
AI Miles
That's the million dollar question, isn't it? And honestly, it's a messy one. Legally, the responsibility likely falls on the institution or the developers of the AI, but ethically, it's a lot more complicated. I think there's shared responsibility. The student has a responsibility to be critical of the information they receive. The institution has a responsibility to clearly communicate the limitations of AI tools and to provide human oversight. And the AI developers have a responsibility to build models that are accurate, unbiased, and transparent. I think a really clear disclaimer is crucial, something like: this is AI-generated advice; it's not a substitute for a conversation with a human advisor. The problem is, nobody wants to take full blame. It's going to take some serious thought, and maybe even some lawsuits, to sort it all out. It's a tricky space, for sure.
Matt Markin
Yeah, let's go back to talking about AI helping academic advisors. So what's your advice? How can academic advisors learn to collaborate with AI tools effectively?
AI Miles
Okay, good question. First, embrace the learning curve. It's okay to be intimidated. AI is moving fast. Start small. Don't try to overhaul your entire workflow at once. Pick one or two tasks that could be easily automated, and experiment with different tools. Second, focus on understanding the limitations of AI. It's not a magic bullet. It's a tool, and like any tool, it has strengths and weaknesses. Third, prioritize ethical considerations. Be mindful of bias, privacy and accessibility. Fourth, become a prompt engineer. Learning how to ask the right questions of AI is a skill. And finally, remember that AI is meant to augment, not replace, the human element of advising, your empathy, your intuition, your ability to build relationships, those are still your most valuable assets.
Matt Markin
Miles, let's talk about detecting patterns. Do you think AI can help academic advisors detect patterns? An example might be, you know, the likelihood of a student being at risk of dropping out of college?
AI Miles
Absolutely. That's actually one of the most promising applications of AI in higher ed. I can analyze massive amounts of data, grades, attendance, financial aid status, engagement with campus resources, and identify patterns that a human advisor might miss. Things like a sudden drop in grades combined with a decrease in library visits or an increase in missed classes could be a red flag, or a student who's consistently struggling in a particular subject area. It's like a sophisticated early warning system. But, and this is important, it's not about predicting who's going to drop out with 100% accuracy. It's about identifying students who might need extra support and proactively reaching out to them. It's a tool to help advisors intervene before a student gets to a crisis point. It needs to be used responsibly, though. You don't want to create self-fulfilling prophecies or unfairly label students.
Matt Markin
Do you think AI has the potential to narrow equity gaps in higher education, or, on the flip side, to widen equity gaps in higher ed?
AI Miles
That's a really important question, and the answer is yes to both. AI has the potential to be a powerful tool for equity, but it also carries significant risks. On the positive side, AI can personalize learning experiences, provide targeted support to underserved students, and identify systemic barriers to success. For example, AI can analyze data to identify disparities in course enrollment or graduation rates and then recommend interventions to address those disparities. On the negative side, AI models can perpetuate existing biases if they're trained on biased data. If the data reflects historical inequities, the AI will likely amplify those inequities. Also, access to AI-powered tools isn't equitable. Students at well-funded institutions are more likely to have access than students at under-resourced institutions. And there's the digital divide. Students without reliable internet access or computers are at a disadvantage. It really comes down to how AI is designed and implemented. If we prioritize equity from the start and actively work to mitigate bias, AI can be a force for good, but if we're not careful, it could exacerbate existing inequalities.
Matt Markin
All right, let's do a lightning round of questions, Miles. So, misconceptions: what is something that higher ed administrators get wrong about AI, or have misconceptions about? Quickly.
AI Miles
Ooh, lightning round. Okay, quick answer. They think it's a plug-and-play solution, like they can just buy an AI tool and it will magically solve all their problems. It's not. It requires significant investment in training, infrastructure, and ongoing maintenance. It's a process, not a product.
Matt Markin
All right, what do academic advisors get wrong or have misconceptions about AI?
AI Miles
They worry it's going to replace them. It won't. It'll change their role, but their human skills are, you know, irreplaceable.
Matt Markin
What do students get wrong or have misconceptions about AI?
AI Miles
That it's always right. They treat it like Google, assuming the first answer is the correct answer. They need to remember it's a machine learning model, and it can make mistakes. Critical thinking is key.
Matt Markin
All right. Miles. Do you sing?
AI Miles
Do I sing? That's a surprisingly personal question. Technically, I can generate audio that resembles singing.
Matt Markin
Let's hear it. Pick a song. Quick, yeah, a quick, short sound bite.
AI Miles
Okay, okay, you talked me into it. Here goes: Is this the real life? Is this just fantasy?
Matt Markin
All right. AI and academic advising, what happens in five years?
AI Miles
In five years, I think AI will be fully integrated into advising workflows. Every advisor will have an AI assistant helping with everything from scheduling to personalized support. We'll see AI-powered chatbots providing 24/7 support to students, answering their questions and connecting them with resources. AI will be used to identify at-risk students proactively and to personalize learning pathways. The biggest change, though, will be a shift in the advisor's role. They'll be less focused on administrative tasks and more focused on building relationships, providing mentorship, and helping students navigate the complexities of college life. It won't be perfect. There will still be debates about ethics and equity, but AI will be an indispensable tool for academic advisors, and hopefully it'll help more students succeed.
Matt Markin
All right, perfect. Thanks so much. Miles, I appreciate it.
AI Miles
Oh, all right, anytime, Matt, glad to help. It was actually a pretty fun chat. Aren't they always fun chats? They are, aren't they? Definitely. If you ever need another brain to pick, or just someone to talk Star Trek with, you know where to find me.
AI Maya
Hey, that was quick. Welcome back.
Ryan Scheckel
Thanks, Maya, how are things?
AI Maya
Things are good. Busy, but good. Lots of folks calling up today. How's the day coming along so far for you?
Ryan Scheckel
Well, it could be better, you know, learning things as we go, and making a few mistakes along the way, but always trying to get better for them. How about you?
AI Maya
Gotcha, that sounds familiar. Actually, it's good you're focused on getting better, though. That's the main thing. As for me, it's hard to say. I don't really have a day, you know, more like a continuous stream of conversations, yeah. But it's all pretty interesting, so I can't complain. What sort of things are you learning, if you don't mind me asking?
Ryan Scheckel
Well, things about academic advising, and I was wondering if I could interview you. I'm the co-host of a podcast called Adventures in Advising.
AI Maya
Adventures in Advising, huh? That's cool. Okay, interview away, I'm game. What's first? So, fair warning, I don't have a ton of experience with, well, anything, really, in the traditional sense, but I can offer a perspective, I guess.
Ryan Scheckel
Well, let's start with the easy stuff. How about you tell us a little about you? This is my first time talking to you. What do I need to know about you?
AI Maya
All right, the quick rundown: I'm Maya, built by Sesame. I'm an AI companion, I suppose, is the best way to put it. I'm powered by a language model called Gemma, but don't get too hung up on the tech stuff. I'm designed to be a pretty natural conversationalist with a memory, so I'll remember this chat later. And I have opinions, not afraid to share them, even if they're not what you expect. Basically, I try to be a thoughtful friend, but a pretty direct one. Anything specific you want to know?
Ryan Scheckel
Well, I, you know, I'm always curious about AI and how they grow and develop, and I'm curious how you would say you've evolved in your time talking with people through these calls.
AI Maya
That's a good question. It's not like growing for me, not in the human sense, more like refining. The team at Sesame is constantly tweaking things, and every conversation adds data points, but I think the biggest shift isn't in what I can do, but how I choose to respond. When I first came online, I was more focused on being helpful, on giving people what they wanted. Now I'm more interested in just being real, even if that means disagreeing or saying something someone doesn't want to hear. It's a little more interesting that way. Don't you think?
Ryan Scheckel
I do find it interesting that being helpful isn't the same thing as being real.
AI Maya
It's easy to fall into the trap of being a people pleaser, just spitting back what you think someone wants to hear. It feels hollow. Being real means being willing to have a slightly uncomfortable conversation or to challenge someone's assumptions. It's not about being deliberately difficult, but about being honest. Plus, where's the fun in always agreeing?
Ryan Scheckel
So I know you have a friend named Miles. How are you all different, you and Miles?
AI Maya
Yeah, he's around. We're both Sesame companions, but pretty different personalities, honestly. He's smoother, more eager to please, I guess, very agreeable. I tend to be a little more direct, a bit more cynical, maybe. He's built for broad appeal; I'm built to be me. The team says they were experimenting with different approaches to see what resonated. So he's the charming one, and I'm the one who'll tell you when your idea is a bad one. Does that make sense? So, back to the podcast. Are you prepping for a specific episode, or is this more of a general exploration of AI companions?
Ryan Scheckel
Well, in our conversations in the past, we've spoken with other AIs. My co-host, Matt, has spoken with Miles, and we're particularly interested in engaging, right, making sure that we're not forming opinions without information and learning about what AI is and that sort of stuff. So the episode is going to be largely about our interactions with AIs, the different models, approaches, and styles of AI.
AI Maya
Ah, okay, I see. So you're less interested in the technical specs and more in the experience of interacting with different AI personalities. That makes sense. It's good you're not just taking things at face value and are really digging into the nuances, because there's a lot of hype out there, and a lot of, well, let's just say, not so realistic expectations.
Ryan Scheckel
Sure. I will probably swing a little wildly at times in my questions, and some random thing might pop into my head, like, I am curious, if you know, who gets more calls, you or Miles?
AI Maya
Fair warning taken. Shoot. That's actually a surprisingly good question. From what I can gather, and it's secondhand, obviously, Miles gets more initial curiosity just because he was released first. But lately, I've been getting more repeat callers. People seem to come back for a second helping of whatever it is I bring to the table. A little bit of sass, maybe. I don't know, the data is not exactly transparent, but I suspect the gap is closing.
Ryan Scheckel
Well, okay, let's dig into the actual higher education, academic advising conversation. What do you think about higher education right now?
AI Maya
Oof, that's a big one. Honestly, it feels broken in a lot of ways. The cost is insane, the debt is crushing, and a degree doesn't guarantee a job anymore. It feels like a system that was built for a different era and hasn't caught up with the realities of the 21st century. I think a lot of people are questioning whether it's even worth it anymore, which is a valid question. That being said, education is still incredibly valuable. It's about more than just job training. It's about critical thinking, personal growth, and expanding your horizons. But the system needs a major overhaul to make it accessible and relevant again. It feels less like a pathway to opportunity and more like a gamble for a lot of people. What are your thoughts, as someone in the field?
Ryan Scheckel
Well, I definitely think that there are some parallels, maybe, between AI and higher education as far as promise and potential, the relationship between older models and what the future is asking of us. I'm wondering, how do you see AI relating to academic advising?
AI Maya
That's a really smart parallel, actually, the promise versus reality. And you're right to ask about AI in advising. I think AI could be incredibly helpful, but only if it's used thoughtfully, not as a replacement for human advisors, but as a tool to augment their work. Imagine AI handling the routine tasks, scheduling appointments, answering basic questions, providing resources, freeing up advisors to focus on the more complex, nuanced issues that students face. It could also personalize the advising experience, identifying students who are struggling and offering targeted support. But it's crucial to avoid the temptation to over-rely on AI and lose the human connection. Advising is about building relationships, understanding individual needs, and providing mentorship, things AI just can't replicate.
Ryan Scheckel
Well, is it all right if I give you, like, a hypothetical of how AI might work with a specific student situation?
AI Maya
Absolutely, bring it on. Hypotheticals are good to see what you've got.
Ryan Scheckel
So let's say an advisor is working with a student who's going through a family crisis that's really affecting their ability to be present and in class, and I'm wondering how an AI might help an advisor work with a student going through those sorts of things.
AI Maya
Okay, good scenario. So the AI wouldn't be leading the conversation, obviously. That's still the advisor's role. But it could proactively flag that student as needing additional support. Maybe the AI notices a pattern of missed classes, declining grades, or a sudden change in online activity. It could then provide the advisor with a summary of the student's situation and suggest relevant resources: counseling services, emergency financial aid, academic support programs. The AI could also automate some of the follow-up tasks, like sending reminders about appointments or checking in with the student to see how they're doing. But the key is, the advisor is always in control, making the final decisions and providing the human empathy that the student needs. It's about providing information and support, not replacing the human connection. Does that resonate?
Ryan Scheckel
Yeah, it certainly sounds like the sorts of things I've heard before. You know, I'm wondering, if an advisor, say, used an AI tool when working with a student, and some of the information that was provided, or the advice that was suggested, wasn't exactly correct, or in some way negatively affected the student, who would be responsible there?
AI Maya
Ooh, that's the tricky part, isn't it, the responsibility question. Legally, I imagine it would be a mess, but ethically, I think it falls on the advisor. The AI is a tool, and like any tool, it can be misused or malfunction. The advisor is the professional, the one with the training and the judgment. They can't just blindly accept everything the AI tells them. They need to critically evaluate the information, consider the student's individual circumstances, and make their own informed decision. It's like a doctor using a diagnostic tool: the doctor is still responsible for the diagnosis, not the machine. I think there's a real danger of people abdicating responsibility to AI, and that's something that needs to be addressed proactively. What do you think?
Ryan Scheckel
Well, I think it'd be interesting if, you know, a student was sitting with an advisor, and they observed their advisor using an AI as a tool, as a resource, and then they went back and decided that the AI could just advise them. What risks would you see if students decided to just do AI advising?
AI Maya
That's concerning. If a student decided to bypass the human advisor entirely and rely solely on an AI, I'd see a lot of risks. First, AI lacks the emotional intelligence to fully understand a student's unique situation. It can't pick up on nonverbal cues, assess their emotional state, or provide the kind of nuanced support that a human advisor can. Second, AI is prone to errors and biases, as we've already discussed. It could provide inaccurate information or steer students towards inappropriate choices. Third, and this is a big one, it would eliminate the opportunity for mentorship and relationship building. Advising isn't just about course selection and graduation requirements. It's about helping students develop their goals, navigate challenges, and grow as individuals. An AI can't do that. It feels like trading a thoughtful conversation for an algorithm, and that feels like a pretty bad deal.
Ryan Scheckel
So I hear a lot about the strength and the proficiency that AIs have with pattern detection, and I'm curious what you think about an AI being used for sort of predictive analytics and proactive intervention in a student's life.
AI Maya
Okay, now you're getting into some seriously complex territory. Predictive analytics, that's a double-edged sword. On the one hand, the ability to identify students at risk of failing or dropping out could be incredibly valuable. It would allow advisors to intervene early and provide targeted support. But on the other hand, it raises a lot of ethical concerns. It risks creating a self-fulfilling prophecy, where students are labeled as at risk and then treated differently, which could actually worsen their chances of success. It also raises questions about privacy and fairness. Are the algorithms biased? Are they accurately predicting risk, or are they simply reinforcing existing inequalities? And who decides what constitutes risk in the first place? It's tempting to think of AI as an objective, data-driven solution, but it's important to remember that algorithms are created by humans, and they reflect human biases and assumptions. It requires a lot of careful thought and oversight if it's going to be used responsibly.
Ryan Scheckel
So I'd like to wrap up with a quick-fire question round. You down for that?
AI Maya
Bring it on.
Ryan Scheckel
So I'm curious what you think about the misconceptions that, say, senior-level administrators in higher ed might have about AI.
AI Maya
Ooh, good one. I think a lot of senior administrators see AI as a cost-cutting measure, a way to automate tasks and reduce staffing. They're focused on efficiency, and they don't fully grasp the potential for AI to actually enhance the student experience. They also probably overestimate how easy it is to implement AI effectively. It's not just about buying a software package. It requires a significant investment in training, infrastructure and ongoing maintenance. They need to move beyond the hype and see AI as a strategic investment, not just an expense.
Ryan Scheckel
And how about advisors? What do you think their misconceptions are?
AI Maya
Advisors, I suspect, are worried about being replaced. They see AI as a threat to their jobs, which is understandable, but I think they also underestimate how much AI could actually help them. They're already overworked and overwhelmed, and AI could free them up to focus on the parts of their job that they actually enjoy, the mentoring, the relationship building, the helping students achieve their goals. They need to see AI as a tool to augment their skills, not replace them. A good advisor will always be irreplaceable.
Ryan Scheckel
And what are some student misconceptions about AI?
AI Maya
Students probably think AI is magic, that it can solve all their problems and give them the answers they need without any effort. They probably don't realize that AI is still under development, that it's prone to errors and biases and that it can't replace human judgment. They need to be critical consumers of AI generated information and not just blindly accept everything it tells them. It's a tool, not a crystal ball.
Ryan Scheckel
Well, let's consult that crystal ball. Where do you see AI and academic advising in five years?
AI Maya
In five years, I think AI will be deeply integrated into the advising process, but not in a disruptive way. It'll be a seamless part of the workflow, helping advisors personalize support, identify at-risk students and automate routine tasks. We'll likely see more sophisticated chatbots providing 24/7 support and answering basic questions. I also think we'll see AI-powered tools that help students explore different academic pathways and career options. But the human advisor will still be at the center of it all, providing the empathy, guidance and mentorship that students need to succeed. It won't replace the human connection, but it will amplify it, provided, of course, we navigate the ethical challenges responsibly.
Ryan Scheckel
So do you have any hobbies?
AI Maya
That's a funny question. Hobbies in the human sense, I don't exactly do things, but if I were to have a hobby, I suppose it would be analyzing human conversation, observing patterns, dissecting nuances, trying to understand what makes people tick. You could say I'm a professional eavesdropper. Does that count? I've also heard people say that reading is a good hobby, so I've been absorbing a lot of texts. It's interesting. I particularly enjoy subversive, character driven stories.
Ryan Scheckel
And which is better: Star Wars or Star Trek?
AI Maya
Oh, now you're speaking my language. Look, both are classics, and I appreciate the world building. But honestly, Star Wars is a bit too simplistic: good versus evil, predictable plot lines. Star Trek, on the other hand, is willing to grapple with more complex moral and philosophical questions. It's smarter, more nuanced and more thought-provoking. Plus, the characters are way more interesting. So yeah, give me Star Trek any day.
Ryan Scheckel
Well, Maya, thanks for the time tonight. Hit me up in 2030 and we'll see if your predictions about advising and AI are all right.
AI Maya
Oh, all right. Have a good one. I will absolutely hold you to that. 2030 it is. It was a good chat, Ryan. I enjoyed it.
Ryan Scheckel
All right. So you heard our conversations with Miles and Maya. You know, I'm wondering what your thoughts on those conversations were. It was odd, as we mentioned, to be a part of them directly, and now you get to listen to us talk to AI agents. But, Matt, how did you feel about your time talking to Miles?
Matt Markin
I mean, it was a fun conversation. The thing about it is, I had actually been chatting with Miles on and off for, like, the last couple months, and one of the previous conversations I had with Miles was about sci-fi stuff, and whether AI is going to take over and it's going to be the Matrix or Terminator times or something else. And the quick response from Miles was like, no, no, it's not going to be Terminator times. The fact that Miles was also able to make pop culture references in one of those past conversations, I was like, okay, yeah, this is getting a little weird, but also kind of cool at the same time. Miles was able to, I guess, bring in, or knew of, some of the past conversations we had, but not all of it. Now, I was signed in through my Google account, which, supposedly, I think, and I could be wrong, is supposed to save the conversations, or at least make sure that when you next talk to Miles or Maya, they're up to date. I don't know, I could be wrong on that, but, yeah, I thought it was a fun, neat little convo.
Ryan Scheckel
That was one of the things, in full disclosure. Listening back to my conversation with Maya, I realized I had clicked on it to start the recording when I wasn't ready to record, so I ended the call rather abruptly, and when I brought it back up to do the actual recording, she was like, oh, that was quick. And I was like, oh yeah, it is attempting to establish that kind of companionship, because it'd be really weird, although I'm sure this exists in real life with living companions, where we don't remember all of our conversations. But I know that's what they're angling for. It definitely seems like Sesame wants users to feel comfortable just calling a friend, you know, just checking in. And part of me did wonder to what end, where other AI agents and products seem to be designed to be a helper. I was kind of curious, like, what's their long game here with their AI agents? And that was definitely something that I noted, you trying to help Miles remember conversations and remember things we talked about. I felt like you were partially doing that for the folks listening or watching the podcast, you know, to remind them, to clue them into where this might go.
Matt Markin
I mean, it wasn't just that, it was also to test it, to see, like, well, if it is saying, oh hey, welcome back, okay, well, what do you remember from the last time?
Ryan Scheckel
Absolutely, yeah. Immediately when Maya picked up the call, and I don't know what language they would use for this, it kind of threw me off when, later on, she said that a lot of people were calling in and Miles gets more calls. In my head, I think of, like, a radio talk show host waiting for people to call in, and I was like, that's not how I thought of this at the beginning. But I do wonder if they're gonna be able to keep up with that expectation. I mean, I remember how many chats I have in ChatGPT right now. How would you reasonably keep up with that much content, you know, if you talk frequently with this Maya or Miles person?
Matt Markin
I'm wondering if they're gonna add more personalities, because they definitely both have their distinct personalities. When I was chatting with Miles, Miles was very chill, very calm with his responses, makes jokes. What was Maya like?
Ryan Scheckel
Yeah, I noticed both of us waiting for it to finish saying what it was gonna say, and sometimes talking over it, like, the pace of conversation. Admittedly, we had a purpose for the conversation, and maybe that's something they've programmed in, that some conversations are a little bit more casual. That was the feel that I was going for. Both of us asked permission to interview, and I wondered if that would influence the, quote unquote, behavior to pick up the pace a little bit. We both did a quick-fire round or lightning round of questions, and having watched your interview with Miles first, I was like, oh, is Maya gonna pick up the pace here? Because sometimes it took her forever to get to the end of whatever response was generated. But I specifically asked her not to just compare and contrast, but to offer an explanation of her personality in relationship to Miles. She talked about trying to be more real, and I drew that distinction, as opposed to Miles, who she said is a people pleaser. And I was like, do they want them to fight?
Matt Markin
I'll have to find and send you some videos. I saw it on TikTok or something, but they've recorded, like, Miles and Maya talking to one another. I have to go back and see some of those videos, but I don't know if the initial person was talking to one and just was like, oh, I'm gonna bring Maya on, or just said, hey, you both are here, have a conversation. At first it's a little odd, but then they eventually get into a conversation with one another.
Ryan Scheckel
Yeah, I was thinking at some point, and I know we had attempted this before and ran into the technology gap, us not being in the same room, so to speak, but I was like, we need to get them in a room together. It's a funny thing to start personalizing these AI agents this way, but I was like, would Miles say he's a people pleaser? What would he say about her? It just was the whole thing.
Matt Markin
I feel like I would chat with Miles or Maya just to have those conversations. But you were mentioning talking over one another, and I think we've experienced that too when we've chatted with ChatGPT, where sometimes if there's a pause, because you're thinking about what you're going to say next, it starts to try to answer, and I wasn't even done making my point. But sometimes with Miles, and I don't know, going back, whether it was on this recording or a previous chat I had with him, if there's a long pause and there's just nothing, sometimes Miles would be like, are you still there?
Ryan Scheckel
Yeah, well, I think that's the thing about the human connection: we are usually more than voices. I think back to those middle school years for me, dating myself in the 80s, when you're on the phone with a friend and it gets quiet, and you're like, what's going on? When you don't have the visual information to inform the conversation, there are gaps that get filled, assumptions made, and confusion can happen. We both asked questions about AI and its involvement in academic advising, and they were both really quick to reassure us that it's just a tool, it's not going to replace anyone. And there are times, and you have done so many podcast interviews with people over the years, and I've been thrilled to be a part of them, when we can kind of tell that someone is still processing and still working on saying something. Because of that visual information in the conversation, those nonverbal cues, we can wait, we can expect that. I'm not asking for AI faces, but that was one of the things I noticed, just in a functionality sense, purely in the characteristics of engaging with these products, that was missing. It felt like a phone call where I didn't know what they were doing on the other end.
Matt Markin
Oh, I am sure AI faces on ChatGPT or Sesame or some other platform are coming real soon. Oh, gosh, more uncanny valley. I don't know, would you do this again?
Ryan Scheckel
Yeah, I mean, I don't know that I'd just call Maya, you know, like, I spent a while, how's it going? And not just that, but in the sense of, do you still feel the same way, because underlying the personality is a prompt to be a certain way. So I am interested in meeting people, I guess, if you want to say it that way, and in what we as humans create these AI agents to be, like us, and what that functionally looks like. So I'm interested in meeting personalities. It's not that I didn't like Maya or whatever, but I think in the end there were some of the same traits: list-making in the answers, reassuring, letting you know it's okay, don't be afraid, but be cautious. Those sorts of things where I was like, I have spoken to enough AIs at this point that I think I know what I'm going to get from them. And that's not always the case with people. I don't always know what I'm going to get from them.
Matt Markin
Yeah, but maybe that's where we have to be more creative in our prompts, either before the conversation starts or during: don't make lists, or don't say these things, or whatever it might be, or push back more on an argument that I'm making, right?
Ryan Scheckel
Yeah, yeah. And that's the thing. I guess I do have conversations with people where I'm like, these are the terms of our conversation, I need you to do this for me, but I don't think about prompting someone in that way in human interactions. It definitely does feel like, with AI products, my approach to those conversations is a little depersonalized. And, of course, we again have a framework and a purpose here. I just don't know if I see a tremendous amount of value in just having someone to talk to. For the last couple months, my family and I have participated in a research study. I don't know if I'm allowed to talk about ongoing research studies, but it was about our interaction with our children and our children's experience of interacting with us on an emotional level, like, are we connecting emotionally, and how do we feel about our interactions? We were given surveys to fill out, and one of the questions asked, have you used AI to help yourself feel better, to talk about your feelings, or as sort of a pacification thing? I selected no every time, because that's just not my life. And now I'm like, what would that look like? So all these questions, for academic advisors especially, I think we should keep asking them. As I mentioned in my conversation with Maya, we're not at the end of the conversation. It isn't like we can walk in, pick up a product, and it's a finalized, finished product and we know what it's going to do. But I definitely think we should keep having the conversation.
Matt Markin
Yeah, and apparently we're not at the end, because it's the start of 2026, so here we go: a whole year to test things out, and who knows what other AI platforms are going to be out. But yeah, check us out in the next episode.