Adventures in Advising

AI Reviews the Podcast: A Deep Dive Using Google NotebookLM

Matt Markin and Ryan Scheckel Season 1 Episode 157

In this meta installment of Adventures in Advising, Matt and Ryan toss two wildly different previous podcast episodes into the digital cauldron of Google NotebookLM to see what kind of AI-brewed concoction bubbles up. Unpack the AI-generated “deep dive,” marvel at its narrative gymnastics, side-eye its dramatic metaphors, and wrestle with questions that higher ed is still learning to ask. Can advising frameworks tame the chaos of GenAI? Can trust survive the algorithmic whirlwinds? And should we be concerned when robots create podcasts about podcasts?

Follow the podcast on your favorite podcast platform!

The Instagram and Facebook handle for the podcast is @AdvisingPodcast.

Also, subscribe to our Adventures in Advising YouTube Channel!

Connect with Matt and Ryan on LinkedIn.

Ryan Scheckel  0:00  
All right, welcome back to Adventures in Advising, the podcast about academic advising. You know, it was about a year ago or so that Matt and I interviewed and talked through this AI stuff, maybe for the first time. I think that was the first time, wasn't it, Matt?

Matt Markin  0:32  
Yeah, the two of us doing it, yeah, and we actually had a little fun. We've had quite a few episodes, but yeah, last year we interviewed SantaGPT, the Santa voice mode in ChatGPT, and we, quote, unquote, interviewed regular ChatGPT about academic advising, sure. But yeah, we've had various episodes about AI platforms, including NotebookLM.

Ryan Scheckel  0:59  
Yeah, and we thought it might be interesting to revisit, given what we know and what we've learned over the year, talking about AI, talking to AI, thinking through that. We thought we might test the product again, so to speak, and give it a couple of episodes of the podcast to develop its deep dive podcast about them. That's a little bit meta for me, a podcast about podcasts, but it did provide an interesting way to think about not only what we're doing here, but what AI products can do to help academic advisors.

Matt Markin  1:38  
Yeah, and if you're interested in checking out that original episode, that was back in April of 2025, so episode 124. I guess this is a good point to say, just like we did on the last one: we are not sponsored by Google. We're not endorsing or promoting NotebookLM. This is just an episode created purely to have some fun, test out an AI platform, and maybe spark some conversation. Absolutely. And if you're like, what is NotebookLM? So just to give a little bit of context: with Google's NotebookLM, you can turn material, so, if you're a student, course materials, or maybe as an advisor, articles, into a kind of personalized learning companion. You can upload articles for different readings, lecture slides, notes, research sources, YouTube links, podcast episodes, and generate summaries, study guides, flashcards, and overviews, but also, like Ryan was saying, an audio AI podcast episode. And since mid-2025, they also have an extra thing, a video overview, where you can generate a narrated, slideshow-style presentation.

Ryan Scheckel  2:55  
Yeah, it's like your friend who can talk through any of the sources that you give it, to think about and to learn about and to evaluate, and the deep dive is what they choose to call the podcast output of NotebookLM.

Matt Markin  3:19  
So for this, we decided, let's pick two random episodes on different topics. So we ended up picking episode 92 of this podcast, titled "Reducing Barriers Through Connection and Engagement." That one had Dr. Dana Zahorik from Fox Valley Technical College, and Dana talked about the creation of their CARES purpose-driven advising model and kind of what that means to educators and the student experience. And then we also fed NotebookLM the podcast link for episode 136, titled "Trust, Tech, and Teaching: Integrity at Stake." And that was with Dr. Jamie Cawthra, who's a lecturer at University College London, and that one focused on trust and its role in higher ed staff training around generative AI. So two very different topics, and we kind of wanted to see, well, what does this deep dive, this AI podcast through NotebookLM, say about each of these episodes and the topics, and also, will they somehow connect the two? So let's go ahead and play it. It's a little over 15 minutes, and then we'll come back and kind of deep dive into the deep dive, I guess.

Notebook LM Voice 2  4:35  
Welcome to the deep dive. Today we're looking at this massive dual challenge facing modern higher education. It really is. The sources we're analyzing show that institutions just can't expect students to show up college ready anymore. I mean, that whole model is basically dead, right?

Notebook LM Voice 1  4:52  
The mandate now is for the institution itself to become student ready.

Notebook LM Voice 2  4:56  
Which means fundamentally redesigning everything to meet learners where they are, and all of this is happening at the same time as this, you know, radical technological change from generative AI.

Notebook LM Voice 1  5:07  
It's essentially a two-front war for the modern university, and our mission today is to give you a kind of blueprint for navigating both of those fronts. Okay, so what's the plan? Well, on one front, you need these practical, systematic models for proactive support, for retention, making sure students actually stay and succeed. And on the other, on the other, you have this core philosophical challenge, mostly driven by AI, that questions the very nature of academic integrity, the foundation of trust, really. So we're going to synthesize material that gives us a model for that robust support, and also explores the conditions you need to foster honesty in this, well, this complex new reality.

Notebook LM Voice 2  5:48  
And for that, we have some really powerful insights today, from Dr. Dana Zahorik on systematic retention using models like CARES, and then from Dr. Jamie Cawthra on the dizzying dynamics of trust and Gen AI.

Notebook LM Voice 1  6:02  
And the goal is to connect them.

Notebook LM Voice 2  6:04  
We want to show you how institutional structure directly impacts student support, and how that philosophical alignment that trust impacts academic integrity.

Notebook LM Voice 1  6:12  
So let's start with that foundational shift, right, this move to being student ready. 

Notebook LM Voice 2  6:17  
Dr. Zahorik's philosophy on this is so personal. It's really relatable. She shares this story about how she was a high achieving student in high school, a 3.8 GPA, third in her class. All the signs of being college ready, right? But she walks into her first college semester and earns a disastrous 2.0 GPA.

Notebook LM Voice 1  6:33  
And that's the perfect illustration of what happens with the, you know, the hidden curriculum of college. She wasn't deficient, she was just using high school habits.

Notebook LM Voice 2  6:42  
Like studying the night before, cramming.

Notebook LM Voice 1  6:45  
Reading chapters right before class, and those habits are just totally inadequate for college-level work. But the critical moment wasn't her failure, it was the intervention. Her faculty advisor actually stepped in, asked how she studied, not just if she studied, precisely, and that advisor helped her enroll in a college success class. She went from that 2.0 all the way back up to a 3.65 the very next semester. That turnaround is a testament to early, intentional support.

Notebook LM Voice 2  7:13  
So if the problem isn't that students aren't smart enough, but that they just don't have the right tools, what does being student ready actually look like in practice?

Notebook LM Voice 1  7:22  
Well, it means moving away from that old gatekeeping mentality. Historically, institutions assumed that certain metrics, you know, high scores on the ACT or placement tests, guaranteed readiness. And if you didn't meet them, the message was, you're not ready, so you can't come in. The student-ready model completely flips that. It says, okay, you're here now, and we are going to provide tailored support to help you get the skills you need. So things like actual study skills, yeah, teaching things like the Cornell note-taking system, time management techniques, and even those soft skills that are so critical, like how to draft a professional, respectful email to your professor when you have a problem.

Notebook LM Voice 2  8:02  
To see this model in action, we have to look at where Dr. Zahorik developed this, which is Fox Valley Technical College, FVTC. And that context is so important, it really is. It's a two-year public college in Wisconsin, and the demographic they're dealing with: the average student age is 24.

Notebook LM Voice 1  8:17  
So we're not talking about traditional 18 year olds straight from high school.

Notebook LM Voice 2  8:21  
No, these are non-traditional learners. Most are part-time. They're juggling full-time jobs, families, just complex lives. They face these huge logistical and financial barriers that have nothing to do with their academic potential.

Notebook LM Voice 1  8:34  
And the institutional buy-in here is key. This wasn't just some small pilot program. The college's board approved a million-dollar budget for a new counselor and three more advisors. Wow. Yeah, to significantly reduce that advisor-to-student ratio. And that investment, that's the statement. It shows they see proactive advising not as a chore, but as their primary retention strategy. And all that led to the framework. We need to dig into the CARES model.

Notebook LM Voice 2  9:03  
So this is where we get into the nuts and bolts. Let's walk through that CARES acronym, right?

Notebook LM Voice 1  9:08  
So the C is for connect and engage, and it's all about early access. In this model, advising starts at the point of inquiry, the very first moment someone reaches out. They have a policy of no dead ends for student help. They make sure students know who their assigned person is right away. The whole goal is approachability.

Notebook LM Voice 2  9:25  
I loved the example they gave the Corgi postcard. Yes, an advisor named Lindsay Fox sends out these casual welcome postcards with a picture of her Corgi on them. It sounds so small.

Notebook LM Voice 1  9:35  
But it's huge. That kind of personalization is crucial for a student who might otherwise feel completely lost in a big institution. It immediately addresses that fear and anonymity that causes so many students to drop out. Okay, so that's C. What about A? So A is for access, group planning, and exploration, and this involves a tool called the Master Advising Plan, or MAP. It's optional, but they found that the most successful students complete it. And what does it do? The MAP asks students to rate their certainty about their career choice on a scale of one to 10, and if a student rates their certainty as anything less than a 10, it triggers an intervention.

Notebook LM Voice 2  10:12  
That's a really clever triage mechanism. It uses their own self reflection to generate the need for support, instead of, you know, waiting for a bad grade to signal a problem.

Notebook LM Voice 1  10:21  
Precisely. If they say seven out of 10, advisors proactively refer them to job shadowing or connect them with faculty advisors who serve as career mentors. The faculty aren't just teaching. They're integral to career exploration.

Notebook LM Voice 2  10:34  
Let's talk about the R, reducing barriers and utilizing referrals. I get the feeling this goes way beyond the academic.

Notebook LM Voice 1  10:40  
Oh, it absolutely does. The MAP includes this checklist on factors affecting college success, and we're talking about basic needs here, like transportation issues, access to reliable child care for single parents, housing stability. Completing this checklist automatically flags students who need case management, not just course-selection help. It acknowledges that, you know, you can't focus on calculus if you don't know who's watching your child.

Notebook LM Voice 2  11:04  
And the policy changes they made, I mean, they seem almost revolutionary. You mentioned a $700 book stipend. How does that work?

Notebook LM Voice 1  11:11  
It challenges a massive traditional barrier. Most financial aid gets disbursed after the semester starts, which means students have to somehow float the cost of textbooks for weeks or just go without.

Notebook LM Voice 2  11:21  
Which puts them behind from day one.

Notebook LM Voice 1  11:24  
FVTC changed the policy to provide a $700 stipend several weeks before the semester begins. That immediate cash reduces so much financial pressure right when they need it most. It's the institution challenging its own bureaucracy for the sake of the student experience.

Notebook LM Voice 2  11:40  
That is such a powerful demonstration of being student ready. But I do have to ask: this level of personalized care, the corgi postcards, the case management, it sounds incredibly resource-intensive. Can a model like CARES truly scale to a massive public university with, say, 50,000 students?

Notebook LM Voice 1  11:58  
That is the friction point, yeah. And the source material suggests that while the intensity might have to change, the philosophy has to scale. The structure, you know, the MAP, the communication calendar, that early connection, those are scalable systems. The personalization then relies more on technology and intentional training, maybe, rather than just small numbers. But the core idea has to remain: the institution has to show it cares enough to see the student as a whole person. So what comes next in the acronym? That brings us to E, engaging early. Two weeks before the semester even starts, advisors are sending out notes just asking students what they need help with. And they also created something called Tech Fest. Tech Fest, yeah, it's an in-person event to help students troubleshoot all the common tech issues, logging into the learning system, dealing with online testing software. Addressing that tech hurdle early just dramatically reduces anxiety. And finally, the S. S is for student engagement and completion. This is a systematic communication calendar using different methods. They found texting works especially well to stay connected all semester, and they use a check-in, check-out model with an advisor for the whole registration process so students feel supported from start to finish.

Notebook LM Voice 2  13:08  
It's not just here's your schedule. Good luck. Not at all. And what I found most inspiring was the emphasis on collaboration. It's not just the advising department. The source cited a case where a maintenance staff member assisted a non English speaking student with a crucial issue, an issue that might have otherwise led to that student just leaving.

Notebook LM Voice 1  13:26  
Right, the CARES model requires the entire institutional culture to shift. It's a truly holistic support system.

Notebook LM Voice 2  13:34  
But as we established, these very human centric systems are now running headlong into this radical challenge of generative AI.

Notebook LM Voice 1  13:42  
How does this culture of trust and support hold up when the very nature of academic honesty is being challenged? And for that, we have to turn to Dr. Jamie Cawthra's research.

Notebook LM Voice 2  13:53  
And his focus is on trust in higher education. 

Notebook LM Voice 1  13:57  
Dr. Cawthra's work, which includes research at places like the University of York and Maastricht University, really forces us to redefine what we mean by trust in an academic setting. He notes that true trust doesn't rely on oversight.

Notebook LM Voice 2  14:10  
It's not about checking security cameras or demanding a receipt.

Notebook LM Voice 1  14:13  
No, it relies on shared values. When your values align, that oversight becomes secondary because there's a mutual expectation of ethical behavior.

Notebook LM Voice 2  14:22  
So if you apply that to a campus, it means students trust staff if they feel there's a genuine stake in their well being, not just in their tuition check.

Notebook LM Voice 1  14:30  
And conversely, staff trust students if they perceive a keen sense of integrity and curiosity, a real desire to learn, not just get a degree as a rubber stamp. And Cawthra points out that in those close, one-on-one relationships like advising, clashes in values are much more damaging.

Notebook LM Voice 2  14:46  
And that fragility has just been magnified by generative AI.

Notebook LM Voice 1  14:51  
Dr. Cawthra, he lectures with University College London's Arena team, and he's deeply involved in training staff on Gen AI. And he noted the sheer dizzying speed of the change. How fast are we talking? He mentioned having to completely rewrite training slides he'd made in September, less than a year later. The technology is just moving faster than any policy could possibly keep up with.

Notebook LM Voice 2  15:14  
And students are all over it. The Higher Education Policy Institute UK survey confirmed that 92% of students have at least tried using Gen AI tools. But here's the kicker. This is the critical ambiguity, right?

Notebook LM Voice 1  15:25  
Yes, only about two-thirds of those students could agree on a single appropriate use, which was using the tool to help explain and understand concepts. So you have this widespread adoption coupled with genuine confusion about the rules, and that, as Cawthra puts it, creates a powder keg risk.

Notebook LM Voice 2  15:42  
And this leads right into the security myth that so many teachers hold.

Notebook LM Voice 1  15:45  
The belief that they can reliably spot AI-generated content. The reality is they can't. Gen AI detectors do not work. They're unreliable, and they often flag predictable writing, which disproportionately hurts students who have English as an additional language.

Notebook LM Voice 2  16:02  
This is where it gets really interesting, and frankly, a bit ironic. The source material cited an example where the US Declaration of Independence was put through a common AI detector. It was rated as two-thirds Gen AI, the foundational document of American democracy flagged as plagiarism.

Notebook LM Voice 1  16:21  
That finding should send shivers through every assessment committee. It just proves that trying to detect AI use with technology is futile, and it actively risks penalizing honest students.

Notebook LM Voice 2  16:32  
So that default institutional response, trying to constantly redesign assessments to stay one step ahead of AI.

Notebook LM Voice 1  16:38  
It's Sisyphean. It's exhausting. It's ineffective because the AI's capabilities change too quickly.

Notebook LM Voice 2  16:43  
So if detection is failing and redesigning assessments is impossible, where is the leverage point for maintaining integrity, for fostering that shared value system?

Notebook LM Voice 1  16:51  
For that, we turn to fraud deterrence literature, specifically something called the fraud triangle. It's a great framework for academic misconduct. It says you need three things for fraud to happen: pressure, opportunity, and rationalization. Okay, break those down. Pressure is the external stress: rising cost of living, high workloads, over-assessment. Opportunity is easy access to Gen AI in online learning environments, and that's pretty much impossible to remove in the modern age. So that leaves, that leaves rationalization as the key leverage point. Rationalization is the story a person has to tell themselves that cheating is all right, or that it's justified. And this massive uncertainty among students about what's okay to do with AI, is summarizing cheating, is proofreading okay, creates a huge void in the rationalization process.

Notebook LM Voice 2  17:40  
So the solution isn't to block the tools. It's to redefine the boundaries of those shared values. 

Notebook LM Voice 1  17:46  
The only effective way to address that rationalization void is through clear, open, honest dialog with students about the expectations and, crucially, the why behind them.

Notebook LM Voice 2  17:56  
It has to go beyond just a line in the syllabus.

Notebook LM Voice 1  17:59  
It has to it must address what value are we expressing by asking you to do this task in this particular way? If an institution can't clearly explain why honesty matters in a specific context, students will rationalize using the AI.

Notebook LM Voice 2  18:15  
And if we bring these two big ideas together, the CARES model and the fraud triangle, you see this fascinating interplay.

Notebook LM Voice 1  18:21  
You do. The proactive CARES model is structurally designed to address two legs of that triangle. How so? Well, the support services in CARES actively work to reduce pressure by offering financial aid early and childcare resources, and they decrease opportunity by providing skills training and Tech Fests, which removes excuses for falling behind. But this new Gen AI challenge, it demands that institutions triple down on the shared values to address the rationalization side directly.

Notebook LM Voice 2  18:49  
It all comes back to continuous, proactive engagement. It's the polar opposite of passive oversight. Dr. Zahorik had this powerful quote from 2008 that is just so relevant today. What was it? Advising students is similar to tending a fire. The fire goes out when unattended. That proactive engagement, whether it's a corgi postcard or a transparent talk about AI ethics, that's what keeps the fire of integrity and success burning.

Notebook LM Voice 1  19:15  
And that's the final synthesis: institutions have to continuously ask students what they need, not just make assumptions, to ensure that all their resources, whether it's a model like CARES or a new Gen AI policy, are actually useful. That continuous feedback loop is what reinforces the trust you need to reduce rationalization and foster shared values.

Notebook LM Voice 2  19:35  
So we've covered the practical strategies for institutional support and the philosophical shift you need to maintain integrity in this rapidly changing environment. You really have a blueprint now for systemic change, for moving from passive expectations to a proactive partnership.

Notebook LM Voice 1  19:48  
And here is a final provocative thought for you to consider: if a university successfully implements a truly student-ready model, built on transparent shared values, one that radically reduces that external pressure and those logistical barriers, like the CARES model tries to do, does that inherently make the assessment integrity challenge from Gen AI easier to manage? Or are these two distinct battles destined to remain separate forever?

Notebook LM Voice 2  20:14  
Something to think about. How much does institutional support truly influence a student's decision making when it comes to academic integrity, ponder that connection and keep applying what you've learned to your own work. Thanks for diving deep with us.

Ryan Scheckel  20:27  
So that was a lot to listen to. You know, I don't know if NotebookLM has a length setting where you can choose how long or how short, but if you made it all the way through, congratulations. That might be the first time you've listened to two robots talk about a podcast on a podcast that they made.

Matt Markin  20:48  
Maybe next time, we'll feed it like a five-minute audio clip and then see if it does another 15-minute deep dive.

Ryan Scheckel  20:56  
Well, you know, one of the things that I found interesting was just structurally, the approach of the podcast. You know, having done a few now myself, and the way that my brain conceives of our approach, I definitely saw the model: let's give you the general introduction to the topic that we're going to be covering in the episode, and a little bit of banter, you know, a little bit of fun. And then, you know, let's tie this up at the end. I definitely knew when I was listening to their summaries of the episodes, and started, obviously in our preparation for this, to listen for, okay, well, where are they going to make the two meet? And I found the way that they did that pretty interesting.

Matt Markin  21:45  
Yeah, I mean, if you're just kind of passively listening to it, the transition from them talking about Dana's episode and the CARES model to going into Jamie and talking about trust and AI, it very much seemed like a seamless transition. It was just essentially, like, one sentence: now, let's go and talk about this. I mean, again, like we mentioned before we played this, it was two very distinct topics that really have nothing to do with one another, but they were able to, in a way, make it work.

Ryan Scheckel  22:18  
Yeah, I guess I was also, I mean, I don't know if my expectations were too low, but last time we did this, it was PDFs, it was journal articles that we fed it. And so I was kind of curious if it would get distracted by the fact that hosts were a part of the podcast. And I found, I guess I was proud of it, if that's the right way to think of it, that they weren't like, and then there was this Matt person and this Ryan person. You know, it really did a good job of staying focused on what I would think of as, you know, identify the data points and make sure you represent the data points, and us as hosts weren't a part of their deep dive. Which makes sense, that we are not the content, even though we were part of the content that it received.

Matt Markin  23:09  
Yeah, we were just the ones to kind of guide the conversation. And yeah, the fact that they totally ignored us is absolutely fine. I'm not offended by that.

Ryan Scheckel  23:17  
Yeah, I will say I wrote down the sentence, the transition sentence, because that was something I was waiting for. And whichever voice it was said, these very human-centric systems are running headlong into artificial intelligence technologies. And I was like, you know, sometimes when interacting with AIs, I feel like the prompt or the expectation is only part of the equation. It's only part of the algorithm. And there was some language used in the deep dive episode that was generated that was, I would say, pitched. You know, I don't know if I would say hyperbolic or extreme. But they started with, like, a war-on-two-fronts kind of metaphor. And then these two different things, human focused and then technology focused, headlong, like there was this competition or this conflict. I found that kind of interesting.

Matt Markin  24:22  
And then, I mean, towards the end, when they kind of also tried to tie both topics together, and then, you know, tried to say it was like a continuous feedback loop where you're really trying to ask, what are the student needs, can students trust you, and all of that. So, yeah, I mean, I found it interesting. I thought it was a unique way for it to try to connect both the topics. But yeah, this was just a way for us to, like, let's throw a couple random links, a couple random podcast episodes, and see what it does. But I thought overall, structurally, it went well. I mean, if I was a student and it was like, I don't want to listen to this episode again, but I want to get a refresh of it, I thought for them to go one by one on each episode and what they talked about, you know, and then listening back to the actual episodes, I was like, pretty accurate.

Ryan Scheckel  25:13  
So Matt, you generated that podcast. Do you remember what the prompt exactly was? Did you ask for them to find a synthesis of the two?

Matt Markin  25:25  
I don't think I did. I think I literally just uploaded, copied and pasted both the links, and then hit the generate audio button.

Ryan Scheckel  25:34  
So, you didn't provide any additional prompting for what the product should be. Interesting. Well, I'm kind of curious, from your read of it. What I found, also, I think worth actually picking apart: if I were to have done this, you know, the summaries of the episodes, I'd listened to the episodes, I was part of one of them, like, I knew what we had talked about. And it was good to get that refresher in the deep dive when the hosts summarized the episodes. But that synthesis was of interest to me, and their claim was that the CARES model addressed two sides of the fraud triangle that Jamie talked about. I found it so strange to talk about it like this: they found essentially a way of mitigating the rise of untrustworthy activity with AI using the CARES model that Dana had developed at Fox Valley Technical, and I was like, I think I might have to look at that a little bit more to see if that really makes sense. Like, sure, they didn't spend a lot of time justifying that synthesis, that claim that the CARES model addressed two of the three sides of the fraud triangle. So it actually made me want to go back and be like, does that even line up? Does that make sense?

Matt Markin  27:04  
But this could maybe be somewhere we might be more creative, where it's like, let's throw in two random things and see if AI finds a connection, and then maybe we explore it more and be like, oh, that makes sense, or no, that totally does not make sense. But maybe it leads to some out-of-the-box thinking.

Ryan Scheckel  27:23  
Yeah, absolutely. I'll reveal a little bit for our followers of the podcast: one of my CliftonStrengths, StrengthsQuest, themes is connectedness. My brain always tries to find connections between two things, even if they're not specifically related, seeing patterns maybe where there aren't patterns. And so when they said that, I was like, dang it, I didn't think of it that way. Yeah. So maybe it was a little pride that got me there.

Matt Markin  27:54  
I mean, I would imagine, like, we know students do use this. Academic advisors, advising professionals, do you think this is something that they could use?

Ryan Scheckel  28:03  
Oh, absolutely. You know, I do have that voice of caution in my head saying it may have skipped some things in an attempt to take a 45-to-60-minute podcast episode and distill it down into a 15-minute deep dive. It may have breezed past something that really is important simply because it was trying to be economical with the content or whatever. So I think if I were a student, I would want to be careful to make sure it didn't miss something important. And if I were an advisor using it, I would absolutely want to make sure that I was following along with whatever, quote unquote, document it was fed, or documents, just to make sure that there weren't any gaps that just happened to be missing. And again, in its attempt to reduce, you know, people talk all the time about being reductionist and how oversimplifying things is where we can end up making mistakes. So I think that's one of the concerns. But to be honest, again, in full disclosure, when the deep dive hosts took CARES and paired it with the fraud triangle that Jamie had talked about, I had the same feeling in my head as when I was at an annual conference, and Peter Hagen was sort of presenting his ideas of the continuum in the narratives we tell about higher ed. And he had this sort of, you know, straight-line, sort of polar approach to understanding them. And then he combined them and made a matrix. And in my brain, I was like, that's amazing. And I had a similar feeling when it took CARES and the fraud triangle and paired them up. And I thought, that's, I think that's valuable, yeah.

Matt Markin  29:51  
I mean, and off the topic of the audio overview, the fact that NotebookLM also has, like, the video kind of overview. So if you're more of a visual person and maybe want more of a kind of slideshow-esque type of viewing, yeah, now you have that with the video. We also used this with just the general link to our Buzzsprout website for the podcast, and it generated a short video overview that, if you go to our YouTube channel, is now our trailer video that's there. And it was just something cool. To me, anything that's new, let's play around with it and see what's up. But, yeah, I guess that gets us kind of towards the end of this episode. Any last words for this?

Ryan Scheckel  30:42  
Well, you know, if you've never checked out NotebookLM before, and you've been wondering about a way of interacting with content, whether it's the scholarship of academic advising or your own sort of documents, whether it's your websites or reports or other things, I think it could be really fascinating to give it to NotebookLM and say, what do you see here? And you might have something revealed to you that you had never considered.

Matt Markin  31:12  
Great way to end it.

Podcasts we love

Check out these other fine podcasts recommended by us, not an algorithm.


The Pickup Meeting

Michael "Brody" Broshears and Kevin Thomas