Adventures in Advising
Join Matt Markin, Ryan Scheckel, and their amazing advising guests as they unite voices from around the globe to share real stories, fresh strategies, and game-changing insights from the world of academic advising.
Whether you're new to the field or a seasoned pro, this is your space to learn, connect, and be inspired.
Trust, Tech, and Teaching: Integrity at Stake? - Adventures in Advising
Dr. Jamie Cawthra, lecturer at University College London, focuses on trust and its role in higher education, as well as staff training around generative AI. The discussion explores the challenges and opportunities presented by gen AI in education, including its rapid development, the difficulties in detecting AI-generated content, and the need for clear communication and shared understanding between students and staff regarding appropriate AI usage to foster trust and academic integrity.
Follow the podcast on your favorite podcast platform!
The Instagram and Facebook handle for the podcast is @AdvisingPodcast
Also, subscribe to our Adventures in Advising YouTube Channel!
Connect with Matt and Ryan on LinkedIn.
Matt Markin: [00:00:00] Hey, and welcome back to the Adventures in Advising Podcast, where we share knowledge, best practices, and our advising stories. Hey, this is Matt Markin and hosting with me as always is Ryan Scheckel. Ryan, what does today's episode have to do with London?
Ryan Scheckel: Oh I can't wait to share our guest's perspective and then reconnect some of our excellent experiences from the UCA conference, but really talk about the things that we have in common.
Ryan Scheckel: The questions and the possibilities, maybe the challenges we're facing. Looking forward to today's conversation.
Matt Markin: That's a good segue. Let's go and introduce our guest, and that's Dr. Jamie Cawthra. Jamie is a lecturer with University College London's Arena team, their version of a Center for Teaching and learning.
Matt Markin: Jamie leads staff training around generative [00:01:00] AI. His academic background is in philosophy. Prior to joining UCL in early 2024, Jamie taught and researched data ethics, business ethics, and inclusive academic integrity at York, Hull, and Bloomsbury Institute. Jamie's current research interests include trust and its role in higher education.
Matt Markin: His current non-research interests include the large quantities of baby waterfowl in his local park and their role in making him go, "Aww." So Jamie Cawthra, welcome to the podcast.
Jamie Cawthra: Hello, thank you so much for having me. Lovely to be here. I guess we really should start
Jamie Cawthra: Ah, so it's springtime, lovely springtime in the UK. I have an affinity for birds of all kinds anyway, but at the moment there is just a sort of cavalcade of goslings, ducklings, cootlings, greblings, whatever you want. It's there, it's small, it's fluffy, it's very sweet. So if anyone's traveling to the UK this time of year, by all means, hit up all of the local parks, find the first pond you can, [00:02:00] and just start throwing frozen peas or breadcrumbs or something into the water and see what comes to you.
Matt Markin: So Jamie, tell us more about you. Now, we heard from your formal bio, but what's your origin story? What's your journey like?
Jamie Cawthra: I think the first ever teaching that I did was when I was about 14, 15, which was teaching in this little drama school, just on a kind of a volunteer basis.
Jamie Cawthra: It was glorified babysitting, really, looking back on it. But it had some structured lessons, exercises, things like that. So by the time it came round to doing a PhD much, much later on, it was the teaching side that I was really excited to try doing, so I jumped into teaching as soon as I could.
Jamie Cawthra: Absolutely loved it, which didn't come as a huge surprise. And yeah, just kept focusing on that pretty much for the rest of my career. We'll find out if that was a wise move in the long run, but I'm certainly enjoying it.
Ryan Scheckel: So where you're at now certainly there's little hooks in the bio that Matt read.
Ryan Scheckel: But I [00:03:00] imagine we'll spend some time talking about AI. But I wanna start with the question of trust, in the sense that I think most people have a concept of what trust means, but I don't know how it gets operationalized in an academic or professional sense. Can you share a little bit more
Ryan Scheckel: about what the construct of trust looks like, at least in your perspective and your experience with it?
Jamie Cawthra: I'm very glad you asked, Ryan, although I'm sure you will regret it at some point in the near future. Jump in when this starts getting boring. Trust as a kind of concept.
Jamie Cawthra: Yeah. It shows up in basically any kind of walk of life where there's some kind of interaction between different people or different groups of people. So you see it in the business world, you talk about trust in businesses; we see it in interpersonal relationships; we also see it in government, charity.
Jamie Cawthra: Yeah. Basically, anytime people have to work with one another, people start talking about trust. So I was involved in a research project between the University of York and Maastricht University back in 2019, where what we were [00:04:00] really interested in was a trade-off between when health data is useful and when it's scary. So people share their health data, and they want to be able to have a good guarantee that the data that they share is gonna be well looked after.
Jamie Cawthra: But the more restrictions there are on the way that we share important data for research, like health data, the less value it actually has as a kind of object of research. And that kind of really came to a head not long after we started our 2019 project, when suddenly there was a global pandemic and sharing of health data on an international level, in order to better research countermeasures, became very important.
Jamie Cawthra: So part of the project was talking about the fact that people often draw on trust as a way to, yeah, ease those kinds of relationships, to make people feel more secure that something like data is being used appropriately. But we thought that what people typically describe wasn't actually what we would consider trust.
Jamie Cawthra: So people talk about things like [00:05:00] oversight, right? Make sure that you trust us and how we use your data, because we'll have a watchdog analyzing how we use your data. We think, or we thought (I still think; I dunno about my coworkers, I haven't caught up with 'em recently), that didn't really describe a trusting relationship.
Jamie Cawthra: Compare: if I ask you to take my tenner and run down to the shop and get me, I dunno, I was gonna say something like beer, but I should probably be more healthy and set a good role model. I ask you to go get me two heads of lettuce from the shop, and I give you a tenner, and I say, yeah, but also I'm gonna check the CCTV in the shop and I'm gonna check the receipt when you come back to make sure you gave me the right amount of change.
Jamie Cawthra: Right? That's the kind of oversight mechanism that we see in other places. But that speaks to me not trusting you to go down to the shop. Actually, what trusting you would look like is me giving you the tenner and going, all right, I'll see you in five minutes. So when we're talking about what trust actually consists in, it's that kind of relationship.
Jamie Cawthra: And the key point that we came to, the key nature of this kind of analysis of [00:06:00] trust that we were performing, was that it involves shared values, right? So I might ask you to run down to the shop and get me something, and I might think that you'll do it because it's in your interest to do it. You want me to give you more tasks in the future?
Jamie Cawthra: You wanna be able to siphon off 50p out of every tenner that I give you, whatever. That's not trust. Rather, what we are looking for is something where I give you the things to do, I give you the money, because I think that you are genuinely interested in me having what I want from that interaction, right?
Jamie Cawthra: You actually want me to eat healthy, so you want me to get the lettuce. Or if you go to the hairdresser, they'll give you a haircut, they'll give you a good haircut to the best of their abilities, but that's more because if they didn't, you wouldn't return, you wouldn't keep giving them your business. And actually to get to the point where you go to the hairdresser because you think that they genuinely have a keen interest in you looking really good,
Jamie Cawthra: that might be after years and years of going to the same hairdresser and building up a close interpersonal relationship. So that's the background of what we really meant by [00:07:00] talking about trust and why we got interested in it. But I think the application in education falls out of that quite neatly.
Jamie Cawthra: Students and staff want to know that the other group is interested, invested in the same kinds of things as they are. So students trust staff if they feel like the staff have a genuine kind of stake in that student's wellbeing and performance. Whereas staff trust students if they think that they're entering education with a keen sense of integrity, with a curiosity, a hunger for knowledge, rather than just to get a piece of paper rubber-stamped for a degree.
Jamie Cawthra: So that kind of shared values is a really powerful force to bring trust about in education. And this isn't the only definition of trust by any stretch of the imagination, but what you often find when people start talking about trust is that they assume that there is a shared understanding of what trust means, which we thought was actually pretty absent from a lot of the other places that you see the concept mentioned.
Matt Markin: So out of curiosity, where did [00:08:00] this interest in trust come from, in terms of it being a research topic or interest?
Jamie Cawthra: I guess there's two answers to that, so I'll do the worse answer first. The worse answer is that I was a recent postdoc, there was a research project running, and hey, I could be interested in trust. But the better answer is that a lot of my previous research drew on kind of notions of value.
Jamie Cawthra: My PhD thesis looked at philosophy of fiction, in particular around how people engage with it and appreciate certain kinds, particularly weird fiction. So I've always had a kind of interest in the way that people express their values, and the way that people's values enter their dealings with other people, and the way that they look at the world around them and the way that they learn.
Jamie Cawthra: So in some fictions, for example, people refuse to engage imaginatively because those works of fiction kind of breach particular principles that they hold. So this, in philosophy, and again, stop me before this turns into a [00:09:00] philosophy seminar, is called imaginative resistance, right? Or at least this is one of a family of things that fall under imaginative resistance.
Jamie Cawthra: So there's a lovely example from a philosopher called Kendall Walton, where we just have a short story that says: Giselda was right to kill her baby; after all, it was born a girl. So hopefully that raises a couple of hackles, just hearing a story like that, which kind of illustrates the point.
Jamie Cawthra: You don't have any trouble imagining that there's somebody called Giselda. You don't have any trouble imagining that Giselda has a baby who is a girl, but there is trouble imagining that Giselda is right to kill her baby because it is a girl. So you can compare a story like: Giselda was right to kill her baby;
Jamie Cawthra: after all, it was a changeling. And then suddenly it completely changes the whole way that you feel when you hear the story. So there are works of fiction that people resist imagining, and some philosophers say that it's because they can't imagine things like that. Some people say it's because they won't imagine things like that.
Jamie Cawthra: [00:10:00] Like, you refuse to. And it's not really clear which the answer is, but either way, it shows that the values that you carry do impact the way that you engage with things, and if it impacts the way that you engage with fiction... I was gonna say it impacts the way you deal with the world, but then I realized how unbelievably obvious a statement that was in the first place.
Ryan Scheckel: The idea that our values inform our interaction with other perspectives is worth restating. I think oftentimes, when we are in the process of examining what might be thought of as our advising philosophy, or our own values and how they influence our practice, it is worth spending time in that reflective space and saying, do I have any spots where I refuse to engage with a reality, or a fictional reality?
Ryan Scheckel: That's one of the benefits of engaging multiple perspectives, and the safe option that fiction or story provides, [00:11:00] even if they are someone else's story. It is always something we encourage here. So in your background, looking at the way people engage with fiction and your research there, have you found any corollaries with how people interact in the work that they do?
Ryan Scheckel: Particularly in education spaces or personal tutoring and advising spaces.
Jamie Cawthra: So I think in cases like advising, this is where that kind of relationship of values really comes to the fore, because first of all, it's the sort of space where trust, that value congruence, that sharing of values, is more important almost than anywhere else in education, right?
Jamie Cawthra: That sense that you can really trust this person who you've been put into this relationship with as a tutee, that you know that you can take problems to them, and that when they respond to you, they'll do it out of a genuine sense of interest in your flourishing, your kind of wellbeing. That's really huge.
Jamie Cawthra: However, it's also a [00:12:00] space where there is a risk, where relationships of trust can break down, because it's a space where your values interact a lot more than they might interact in the classroom. It's very possible to go for a healthy debate or discussion. And I know when you last had some people on from UCL, you had our Vice Provost, Catherine Woods, talking about our initiative, Disagreeing Well.
Jamie Cawthra: And that's a space where clashes in values can be fed into a healthy engagement and a healthy discussion in the classroom. Whereas when it comes to one-to-one relationships between staff and students, like you find in tutoring and advising, clashes in values have a much greater risk of spilling out into the rest of the relationship.
Jamie Cawthra: It's harder to distance yourself from somebody's values in the same way as in a kind of structured or formal debate, versus when you are having, yeah, these close, almost low-level relationships, right? There's much more bringing of yourself to them than you find [00:13:00] elsewhere.
Jamie Cawthra: I'm not sure if that's something that's been formally established in research, but I should give the disclaimer that tutoring and advising isn't one of my primary responsibilities or an area that I'm particularly well read around, I'm afraid.
Matt Markin: So I guess to segue off of that, because you were mentioning it.
Matt Markin: Maybe advising and personal tutoring aren't necessarily what you're working on now, but maybe we could segue into what your role is. So tell us about the Arena team, because I know in your bio it says it's a version of a Center for Teaching and Learning. But some people may be like, I don't even know what that is.
Matt Markin: So what's the Arena team? What are your responsibilities?
Jamie Cawthra: So the Arena team is an example of something I've seen in a couple of places I've worked, actually, including Bloomsbury Institute London, where a name has been chosen that's exciting and interesting rather than necessarily particularly informative.
Jamie Cawthra: So Bloomsbury Institute London used to be called the London School of Business and Management, which, while yeah, very clear, is very dry. Arena used to be, I believe, the Centre for the Advancement of Learning and [00:14:00] Teaching, although this was a decent time before I joined the staff, so I might have got that one slightly inaccurate.
Jamie Cawthra: And if you ever have Pete Fitch back on the show, don't tell him that I said anything. So the Arena team is part of a slightly wider body at UCL called HEDS, which stands for the Higher Education Development and Support Institute, which is deeply pleasing to me because it means the director of HEDS can legitimately claim to be head of HEDS.
Jamie Cawthra: But HEDS combines Arena, for staff development in kind of teaching and learning and looking at improvements that can be made to our approach to teaching and learning, as well as... I'll tell you what, I'll get onto that in a second. I'll explain the other half 'cause it's much, much quicker and simpler.
Jamie Cawthra: The other half of HEDS is our careers service. So these are the people who put on events around employability for students, who do things like one-to-one mock interviews, CV development, all sorts of different things, all related to employability. But part of how employability gets expressed at universities is through particular attention [00:15:00] to skills, capacities, ways of doing things that get addressed through curricula.
Jamie Cawthra: So there was a weird kind of disconnect between the careers team and the people who work a lot with teaching and learning, which is the Arena team. So maybe the easiest way to get a picture of what the Arena team does is to talk about the three hats that I have as part of Arena.
Jamie Cawthra: So the first one, and the most straightforward, is that Arena runs professional accreditation and staff development for people at UCL. So normally that has been teaching and learning staff and academic staff, but it's increasingly extending into professional services as well, which is actually really nice.
Jamie Cawthra: We're getting a lot more perspectives from that enormous side of what universities are all about. So the chief route for staff development that we do is Advance HE fellowships. UCL is accredited to run its own kind of slightly weird version of the Advance HE fellowship scheme, which we call the [00:16:00] Arena Fellowship.
Jamie Cawthra: So, for any listeners who aren't familiar with Advance HE, who they are, what a fellowship is: they are, I believe, a charitable body, I'm not gonna fact-check myself in real time, who do support and development for teaching and learning, broadly speaking. Part of that is these fellowships, which are supposed to offer a chance for people teaching at university to get some kind of formal recognition of the fact that they pay serious attention to their teaching.
Jamie Cawthra: Rather than just having teaching as a little thing that they dip into every now and then, not really a big part of what they do or why they get up in the morning. So we have four different levels of fellowship. There is Associate Fellowship, which is for people who have some sort of niche or specific role, or are starting out on their career in supporting or delivering teaching and learning.
Jamie Cawthra: So you often see postgraduate teaching assistants, as well as professional services staff who don't engage directly in teaching but do important work to support [00:17:00] it behind the scenes. There's Fellowship, which is the big one, which is usually people roughly at module-leading level, people who are actually engaging with teaching and learning in a variety of different ways.
Jamie Cawthra: There's Senior Fellowship, which is for people who are in some way influencing or impacting the teaching practice of other people around them. And then there's Principal Fellowship, which is just: you're the business. You've been doing this a long time; you are in some sort of leadership position or strategic position around teaching and learning.
Jamie Cawthra: So that's a big chunk of what we do at Arena: enabling staff to actually demonstrate that they do a lot of stuff, a lot of important stuff, through, yeah, a formal qualification, a recognized thing. And it's always puzzled me slightly that in the UK, at least, there's no specific qualification that's required to teach in higher education, right?
Jamie Cawthra: So you can do a PGCHE, which is a qualification for teaching at higher education level, but it's not something that's always, or even [00:18:00] often, required for a position which involves teaching responsibilities in the UK. So the other kind of flip side of fellowships is that they enable people to dedicate some time to sinking into pedagogy, pedagogic theory, and, yeah, matching that up to the practice that they do and showing that it's done in a conscientious and informed way.
Jamie Cawthra: Which segues in an uncanny way into the second stripe of what Arena does, which is more active support and professional development through things like workshops. So we each have different areas for which we are responsible. I lead, or rather I co-lead, on a program called Arena for Lecturers on Probation, which is even more confusing because it makes it sound like they're all in trouble.
Jamie Cawthra: But actually it's in the sense of probationary staff who are still in the probationary period of their contract, before they've completed that and fully joined. So we do an eight-session program covering kind of the UCL approach to teaching, welcoming people to a community, setting them up with other people who've recently joined,
Jamie Cawthra: but also guiding them through the process of applying for fellowship, the [00:19:00] FHEA, Fellowship of the Higher Education Academy, which confusingly is run by Advance HE, previously known as the Higher Education Academy. So there's all that. Of the other strands that I lead or co-lead on, one is a kind of dissemination of practice through what we call micro-CPD, which is these really short videos and short blog posts that just go: hey, I'm an educator somewhere
Jamie Cawthra: within UCL, and I've done something really cool, and here's how you can do it too. So it's for lecturers, by lecturers, talking about good practice. The other one, which I'm sure we'll talk about plenty, so I'll just mention it for now, is on staff training and development around generative AI. Ooh, those ghostly words finally emerge.
Jamie Cawthra: And one of the most kind of fun and diverse and weird bits of it is that all of the academic members of the Arena team are assigned to a different one of UCL's 11 faculties. So we are liaisons with those faculties, and we are there as a kind of resource for them to use. So partly to help with approaches to teaching and learning.
Jamie Cawthra: If a faculty is [00:20:00] thinking about how best to set up peer dialogue, right, then we might help and advise on different ways that they can do that. If they're getting feedback from students that something like feedback is not hitting all the notes that they wanted it to, then we might talk with members of staff about different approaches to providing feedback.
Ryan Scheckel: I was gonna make the joke that people often ask me what happened with my hair, and I tell them it came from all the different hats I was being asked to wear. You're talking about how Arena has so many hats, or you're involved with so many hats, and we've also been talking about hairdressers, and I was like, the confluence of these metaphors for understanding our lives is fascinating.
Ryan Scheckel: But you mentioned generative AI, so let's talk a little bit about that. What's your experience been, especially in your role working with educators in a higher education setting, and the context, or the opportunity, or challenges that AI presents?
Jamie Cawthra: Absolutely, yeah. Okay. Maybe a good place to start, then, is how I've actually ended up in the generative AI kind of [00:21:00] sphere, because everything I've told you about
Jamie Cawthra: kind of my history doesn't really indicate at any point that artificial intelligence might be lurking around the corner. But when the kind of ChatGPT wave broke, which was back in, was it late 2022, early 2023, I was at that point the academic lead for academic integrity, which is a slightly ugly title,
Jamie Cawthra: but one that kind of saw me having to pay a lot of attention to it at that point. At the institute I was working at, we had recently lost our digital pedagogy advisor, our learning technologist. So there was a bit of a skills vacuum around that kind of area. So when this stuff first started coming out, (a) I was exposed to talking to lots of students about it by nature of the role that I was doing,
Jamie Cawthra: but also (b) we were all trying to dig deep and upskill in digital pedagogy a lot more anyway. So that, combined with a bit of a research background in data ethics, meant that when I landed in the Arena team at UCL, [00:22:00] one of the first requests that I got from the faculty which I was assigned to support, which is the medical sciences,
Jamie Cawthra: was: could you come along to one of our kind of teaching away days and talk a bit about generative AI in education, in assessment specifically? And this didn't strike me in any way as a kind of unprecedented or unusual thing to request. It had been at the front of my mind for quite a while before then.
Jamie Cawthra: So I wrote a few kind of little workshops for a couple of people to deliver for this session, and went along and gave them. And then I came back to what in my memory is a room full of people with open jaws staring, but in practice was probably actually just one of the assistant directors of the team going, oh, if you've done that, would you mind doing that a little bit more,
Jamie Cawthra: and continuing to do these kinds of workshops. So I did. So currently at UCL I have an interesting role, which is leading on, yeah, workshops that do staff development around these kinds of topics. And that's given me a weird vantage point that I feel like a lot [00:23:00] of people have missed out on.
Jamie Cawthra: Which is that step back from the concerns that teachers have around: how do I best use this in my teaching? How do I best help students grow skills in this area? How do I address generative AI literacy, whatever that phrase actually means? I'm yet to hear a convincing definition of it. I actually get to go one level further up and go: oh my God, lecturers are one of the most kind of time-poor groups of people that I'm aware of. And there's always a little bit more shoved onto the plate, always another straw being placed onto the camel's back. And now people who are subject experts, who dedicated often pretty much a lifetime, a career at least, to becoming real experts in a very niche, specific field, are being asked to upskill in a radical way in a technology which has upended the way that
Jamie Cawthra: teaching is delivered and the way that students engage with teaching that is delivered. So there's that kind of extra glimpse behind the curtains of [00:24:00] how people react to generative AI. And I think the big picture is always one of just kind of puzzlement, confusion. There are scattered optimists, scattered people who either were already well versed in this technology and saw these kinds of changes coming,
Jamie Cawthra: or people who've found the time, the dedication, to jump in and actually really familiarize themselves with this kind of technology. And then for the rest of the people, I think there's a lot of kind of feeling of isolation, almost: oh my God, now there's this thing that's completely tripped me up,
Jamie Cawthra: that's prevented me from doing my job the way I'm accustomed to and comfortable doing it. And I know that some people will come in and go, good, everything needs a shakeup. But I think that is a tiny bit cynical. Shakeups are good, but not because they, yeah, drive people to distraction. You see a lot of this in the spaces where people will share things in a more candid way.
Jamie Cawthra: So I'm thinking of things like internet [00:25:00] forums, subreddits, spaces like that, where you see people working in teaching and learning at all levels of education just burying their heads in their hands and going, oh my God, what the hell is this thing doing? It's a worry. So I get the privilege of a glimpse behind the curtains at the larger scale of these kinds of worries.
Jamie Cawthra: And it does color the way that I engage with generative AI myself, and it certainly colors the way that I approach training and upskilling staff. Of course, outside of my perspective, there's also the more kind of massive strategic picture stuff. Like how exactly should universities be engaging with things like tech companies?
Jamie Cawthra: Should universities be paying for these sorts of tools? Does that condone things that we ought not be condoning? Does it send a message to students that we ought not be sending? And those are things I'm more than happy to talk about, but that would be more speculative because they're above my ken, as we might say.
Matt Markin: And I'm sure Ryan and I have opinions on that as well, based on some past episodes and interviews that we've [00:26:00] done. Can you share any examples of how gen AI is being used, maybe responsibly and innovatively, at UCL?
Jamie Cawthra: Yeah, absolutely. So we see a lot of spaces where people are, and this kind of links back to how I was describing our colleagues in the Higher Education Development and Support Institute,
Jamie Cawthra: you can see why they called it HEDS eventually, and their work on employability. So there's an interest in upskilling students to use tools like this effectively. And that's obviously particularly relevant for spaces where these kinds of tools might end up acting autonomously. So being based in, or at least liaising with, our Faculty of Medical Sciences, that's a space where there is scope for autonomous AI to be relieving workloads, particularly around patient management.
Jamie Cawthra: But doing so might then start bringing in ethical risks, or even, on a more practical level, just risks that the technology doesn't quite do what it ought to do. So there have been quite a few members [00:27:00] of teaching staff within the Faculty of Medical Sciences who've really jumped at this and grabbed it with both hands.
Jamie Cawthra: There's been one item of assessment, for example, where students will write grant applications using generative AI before critiquing them. I know that this is a sort of assessment format that has been used in quite a few different spaces, and it's a decent and solid one, because it not only scaffolds skills in
Jamie Cawthra: that sort of work, in what a good grant application might look like, but also helps students learn what AI can and can't do effectively. So I do think the kind of low-level onboarding of things like that, these sorts of approaches, are a nice and effective way to start bringing in the kinds of generative AI skills which let students get a bit of a leg up.
Jamie Cawthra: The key issue here, I think, is that if a skill can be done with only generative AI, then it's no longer something that offers any kind of reliable source [00:28:00] of employment, or possibly even joy, for students. I know a lot of people who are passionate artists but who, now that everything's so washed out with AI art, have lost some passion for the work that they do.
Jamie Cawthra: So trying to fold everything back into generative AI and create assessments where students use generative AI exclusively, I think, is potentially a bit of a bridge too far at the moment. The focus ought still to be on critical and responsible use, rather than just the most effective use.
Jamie Cawthra: And yeah, I'm sure your very clued-in listeners will have followed research around the world about how generative AI can impact upon our own critical thinking capacities and also, more scarily I think, our own sort of internal biases. So in these early days, when a lot of these sorts of studies haven't been fully peer reviewed or the full impact hasn't really been ascertained, I think it's really important to engage with generative AI [00:29:00] cautiously. I don't necessarily mean pessimistically.
Jamie Cawthra: I think there's lots of space for it to do good things. But we should be treating this as, I dunno, almost a volatile technology. It has such a propensity for rapid change, but also for misuse, that I think the best way to do credit to students, and the way that I've seen people doing so around UCL, which is deeply pleasing to me, is by making the key learning outcome a healthy respect and a knowledge of how to apply responsible and critical principles while engaging with this kind of tool.
Ryan Scheckel: So somebody in your role at an institution like UCL, having that confluence of perspectives, is unique in some ways, because if you are, for example, a teaching faculty member, a member of staff, instructional staff, you come, as you said, from your expertise point of view and intersect with AI that way.
Ryan Scheckel: And I'm curious, from where you're sitting in your institutional structure, what are you hearing [00:30:00] about students and their use of AI? Are you engaged with student voices or asking students about it? Are you hearing it from others? The first time I heard a conversation about AI that I found interesting was people conducting focus groups with students around AI use, and I'm wondering what maybe you've heard just from the spot you're at.
Jamie Cawthra: Yeah, so if there's one thing that makes me sad about having a role within Arena, it's that we don't get to work directly with students enough. Arena's very staff-focused.
Jamie Cawthra: A lot of our sessions are exclusively for staff, and that makes me very sad. However, there is an element of the Higher Education Development and Support Institute, HEDS, I'm just gonna call it HEDS from now on, I'm gonna assume that everyone remembers what that stands for, as part of the Arena team, which is the Change Makers initiative.
Jamie Cawthra: So Change Makers is run by my colleague Fiona Wilkie, who sets up small amounts of funding for projects which [00:31:00] involve direct collaboration between staff and students, encouraging co-creation and meaningful student-staff partnerships. And there has been a real flurry of interest in running projects like these on all different sorts of aspects of how generative AI works within universities.
Jamie Cawthra: So within research, within teaching and learning, within kind of pastoral care, even, and it shocks me that students would ever be interested in this, within the administrative applications of it, so how it's used to organize things and autonomously sort through data. As far as student perspectives go, I think perhaps the most telling work has come out of the Higher Education Policy Institute, which is a research institute in the UK that has been running annual student surveys.
Jamie Cawthra: It's, to my knowledge, the largest-scale survey that has been running talking to students about their attitudes towards generative AI in the UK, and the most telling statistics came out of the early [00:32:00] 2025 survey, which revealed that there had been a huge spike in student adoption of generative AI.
Jamie Cawthra: So it went up from, oh, this is where all my bluster about being able to remember figures runs out very quickly. I can't remember what it started out as. It was somewhere around the sixties or seventies. And then it spiked in this latest 2025 survey to 92% of students saying that they had tried using generative AI tools, and it went up from the mid-range forties or fifties into the eighties for students who'd used tools like this in assessment conditions.
Jamie Cawthra: Now, what the survey didn't capture, and I would've loved for it to capture, was whether these uses were ongoing, whether this was just something that they dabbled in or something that they were now making part of their regular working schedules. But also whether the use in assessments was above board, whether it was acknowledged, whether it was used in a way continuous with academic integrity policies.
Jamie Cawthra: And I think that would be [00:33:00] great to see for a variety of different reasons. But above all, that kind of data shows that adoption has spiked hugely among students, and it's not a huge surprise. Generative AI companies are marketing to students all of the time. Recently OpenAI and Google have been doing free trials for students during assessment periods.
Jamie Cawthra: I think that's quite a pointed time to start bringing in free trials. They are trying to build more of a user base, and students are quite an obvious candidate for that user base. And I don't think it will have escaped the attention of organizations like this that the applications for students include ones that aren't actually good for their learning.
Jamie Cawthra: Sorry, I keep knocking my thing on the microphone. I hope that people at home aren't looking around to see what's knocking at the window.
Jamie Cawthra: That statistic about the spike in student adoption is weird enough on its own, but combined with another statistic it becomes quite worrying. That is the number of students who actually agree on what an [00:34:00] appropriate use of generative AI is in teaching and learning. So while 92% of students have used generative AI, only about two thirds of students could actually agree on a single appropriate way to use it, and that use was to help explain and understand concepts.
Jamie Cawthra: So to use it as a bit of a tutor. And still a third of students were going, oh, I don't think you should be doing that. So that we have lots of students using it, but fewer students agreeing on how it should be used, I think is a little bit of a powder keg, right? It risks people misusing generative AI in all sorts of ways, because there is no clear consensus on how it should be used.
Jamie Cawthra: This is something, God, you can tell when I'm actually more familiar with the literature on something, suddenly I'm a lot more confident talking about stuff. There was a piece that came out of, I think it was Deakin University in Australia, talking about where the line is drawn.
Jamie Cawthra: Running focus groups with both students and staff on where they would consider the [00:35:00] line to be. And in fact, they didn't talk about the line in any of the setup for the project. They didn't come in with that as a kind of metaphor to keep using, but rather it cropped up again and again in these focus groups.
Jamie Cawthra: People going, I dunno where the line is, I dunno where to draw the line. And yeah, it became the title of the paper by the end of it. So we've got a weird situation for students where often they're being encouraged to use these tools, right? There's been a real swing in universities away from, oh, I'm worried about this, we should ban it, to, okay, this is here to stay; to do the best for our students, we need to be helping them understand and use it, rather than just going, ignore all of that in the corner over there.
Jamie Cawthra: But nevertheless, there is still not a clear understanding of what appropriate generative AI use looks like for students or staff, and that makes it challenging to actually come to a conclusion on what responsible generative AI use looks like for anyone.
Jamie Cawthra: [00:36:00] That's maybe a topic that we'll come onto later on. The question was about student perspectives. In the UK at least, it looks like student perspectives are enthusiastic, but perhaps a little unsure about where exactly good applications for this technology exist and how they can stay on the right side of academic integrity.
Jamie Cawthra: That falls out as well in places like acknowledging uses of generative AI in assessments. There is still a real problem with convincing students that no, this isn't a trap, right? We want to know in order that we can give you appropriate feedback, not so that we can trick you into revealing that you've used it. But because there is uncertainty about what appropriate use looks like, a lot of students will just play it safe and go, no, I didn't. I didn't touch it.
Matt Markin: Yeah, depends on who you ask. That line is somewhere; it's over here, it's there. Maybe there is no line. Everyone's got an opinion on it. And yeah, it is one where AI is here to stay, but it's like anyone's sandbox right now.
Matt Markin: Do you [00:37:00] think, with how fast it's developing, that might impact the Arena team, or your own role, developing or changing?
Jamie Cawthra: Certainly. The first university teaching I did was in formal logic, and I did not have to check through my slides every time I delivered it to go, okay, is modus ponens still valid? Do we still follow that?
Jamie Cawthra: Whereas obviously there's been such a rate of change that, of the first UCL-wide workshop I wrote on developing generative AI literacy, maybe two slides still survive. And that was in September last year, so not even a full year later it's had to be completely changed and revised.
Jamie Cawthra: It's a challenge, and I think the biggest challenge then comes back in with assessment. It makes me sad that assessments and academic integrity are, I think, the two biggest places where ink is spilled and people get stressed out at universities, [00:38:00] because it's so much bigger than that, right?
Jamie Cawthra: There's a lot more to responsible use than just academic integrity, and there's a lot more to academic integrity than just people misusing generative AI. Nevertheless, this is, I think, a place where the sheer rate of development of gen AI tools leads to the biggest problems. You still get a lot of the items from the early days of generative AI that lodged in people's consciousness and never really left.
Jamie Cawthra: So you still get a few people who go, oh yeah, you can always spot when people have used gen AI, 'cause they don't have any citations. And it's, oh God, I'm really sorry, Dad, to be the one to tell you this: not only can it now do citations, it can genuinely synthesize sources. So it's not even a case of looking for dodgy citations anymore.
Jamie Cawthra: Tools like Deep Research have shown that it can actually use citations properly. So the sorts of tips and tricks that people used to have to detect generative AI writing have dropped by the wayside. People [00:39:00] don't always realize that this is the case, and I think that this is hugely problematic, because it opens a doorway to bias, to reading a piece of work and going, oh, I can tell that this is generative AI. Actually, gen AI detectors, and I'm sure this won't be news to anybody listening, but it's worth repeating: they don't work.
Jamie Cawthra: They only try and detect: is this sentence structured in the most normal way possible? Are the words being used the most predictable, on-average words in this case? 'Cause that's how generative AI works. But it's also the case that people will write in very predictable, formulaic ways if they have a lower level of confidence in the language that they write in.
Jamie Cawthra: My vocabulary in any other language that I have smatterings of, and I'm not bilingual by any stretch of the imagination, anything I write in another language will look incredibly dull and will be picked up by a generative AI detector. And I'll go, yeah, this probably is. But so will documents [00:40:00] like the Declaration of Independence, which got fed through a gen AI detector and it went, yeah, this is probably about two thirds generative AI.
Jamie Cawthra: It's all just about that pattern. So when generative AI writing patterns are reminiscent of writing patterns that we see from students who have English as an additional language, it means that actually teachers are much, much worse at spotting gen AI than they think they are. And this is, again, something that studies have confirmed: we are not as good.
Jamie Cawthra: We produce lots of false positives when trying to identify generative AI-written stuff. I'm gonna start just saying gen AI; I'm not gonna use its government name. Yeah, it's tough, and these kinds of tips and tricks got spread around: look for where there are no citations.
Jamie Cawthra: Look for em dashes, look for when it says delve. I use em dashes in my writing all the time. I'm genuinely worried that I'm just gonna get pulled in front of someone and accused of being gen AI at some point. So yeah, the rate of change means that it's really difficult for people to stay on top of what assessments ought to look like [00:41:00] today.
Jamie Cawthra: It tends to fall onto people who have a false level of confidence that they can actually detect generative AI, but it also means that people will try and design it away. They'll try and create assessments that they feel like AI can't do, and even if they do manage to hit upon something which gen AI can't do consistently, that's no guarantee it's gonna stay that way for a term, let alone long enough for the effort people go to, to really be worthwhile.
Jamie Cawthra: I've started describing this process of trying to stay one step ahead of gen AI as just Sisyphean. It's Sisyphus rolling the boulder up the hill, and then every time he gets to the top, AI advances, and now the boulder's rolled straight back down to the bottom again. I think that's a lot more exhausting than many other ideas that I've heard of around developing new ways of running assessments in a gen AI age.
Ryan Scheckel: To bring a little bit of the conversation full circle: we started talking about the concept of trust, and you discussed how having shared values [00:42:00] seems to be a through line, at least in our best understanding of what trust is. But then there was also the conversation about what those shared values lead to in action.
Ryan Scheckel: And I'm curious if you see the Venn diagram closing at all between the conversation around AI and your understandings of trust. Is there a way for us to trust each other in our uses of AI? Is there a way for institutions to trust students, and so on? Can trust and AI operate in the same space?
Jamie Cawthra: I think perhaps trust is something of an antidote to some of the stuff that's happening with generative AI. So at the moment we have a situation where obviously large numbers of students have used generative AI in inappropriate ways to cheat, and that's, yeah, I was gonna call it an open secret, but actually I think it's just open at this point. And [00:43:00] that lowers the level of trust that students get from staff, right? There's less of this kind of shared value of discovery and knowledge and putting work in.
Jamie Cawthra: But then conversely, because staff tend to overestimate their ability to spot AI, students are often, I think, worried that staff might not be as aware of how they've used AI as they ought to be, and are jumping at shadows, and are worried that they're just gonna get accused of misconduct just for having a ChatGPT window open in a lecture. That, I think, has undermined trust, probably to the greatest extent of anything I've seen in my academic career. Even on occasions in the past where there have been strikes at universities, a lot of students will still understand, okay, you're striking because there are these values that you need to uphold.
Jamie Cawthra: And often we saw students supporting strikers based on their sharing of those values, which is really nice. But when [00:44:00] there is this kind of suspicion, and I'm not gonna call it a witch hunt, 'cause it's not that targeted, right? It's much too fumbling and confused to be any sort of decent witch hunt.
Jamie Cawthra: Then we start seeing trust undermined. So I think being able to feel like we do trust one another is a route forwards. AI is undetectable, right? There is no longer a kind of assessment design that guarantees that AI hasn't been used inappropriately, besides one where students are invigilated, and even then it's not impossible to cheat in invigilated situations.
Jamie Cawthra: It's happened plenty of times in the past. So the problem is that people are still stuffed into this mind frame of, every assessment must be secure, because otherwise there's no value to the assessment. And security is an unachievable dream for one thing, because there is no secure assessment, like I said. But [00:45:00] it also then drives that trust wedge further, I think, to be like, oh, my assessment was unsecured, therefore my students will have cheated.
Jamie Cawthra: Therefore, I need to watch out for which ones of them have cheated. It starts driving students and staff further and further apart, whereas actually being able to develop trust as a practice is, I think, a way to help people. Perhaps the best way. I'm fumbling here, because in my mind is a picture, which you may or may not have seen, which is the fraud triangle.
Jamie Cawthra: So this is from the field of fraud deterrence, and it's something that comes up occasionally in the academic misconduct literature. Like the fire triangle, which has heat and oxygen and fuel, and you need all of those for fire to happen. The fraud triangle is used by organizations to support whistleblowers or halt corruption, but it's applicable in academic misconduct stuff as well.
Jamie Cawthra: On one side of the triangle, you have pressure, right? People end up doing things that they don't want to do because they feel too much pressure. And in the UK, certainly, cost of living has [00:46:00] risen significantly. University is expensive. A lot more students are working than used to in the past.
Jamie Cawthra: And there's always the ghost of over-assessment happening in the background. So students are certainly under a lot of pressure. Another side of the triangle is opportunity, right? People need to have the opportunity to actually commit fraud, to commit academic misconduct. Without the opportunity, they don't, 'cause they can't.
Jamie Cawthra: Opportunity is something that we cannot remove except by doing secure, in-person, invigilated assessment. And there's always gonna be a limited scope to do in-person, secure, invigilated assessment. And that's before we even start talking about things like distance and online learning. Or even, and I'm looking desperately around for wood to touch, the possibility of another future pandemic where we all have to stay inside again and assessments have to run online some more.
Jamie Cawthra: So the opportunity side we have a limited amount of control over. But the last side of the triangle, and I think the most important, is rationalization. People have to be able to tell themselves that what they're doing is all right for them to do. And sometimes that's because, well, take Jean Valjean, right?
Jamie Cawthra: It's, I'm stealing bread because I need to feed my family. So it's a rationalized thing, and arguably rational. But people rationalize because they go, okay, I'm gonna use it in this way, and I haven't been told not to use it in this way. Or, I'm gonna use it in this way, and I think that's an appropriate way to use it.
Jamie Cawthra: But like I said, in a remarkable case of things coming back round in my rambling: only two thirds of students can agree on what is an appropriate way to use this kind of stuff in the first place. There is a huge gaping void at the moment on that rationalization side of the triangle. The only way to address that void, or at least I think the only effective way, is to communicate with students, to have clear, open, honest, frank dialogue around expectations for how generative AI should be used, but most importantly, why those expectations exist. What is the value that is expressed [00:48:00] by this task being done in this particular way? And I think this goes deeper than just the assessment criteria.
Jamie Cawthra: It goes deeper than just the learning outcomes, and it goes more to the heart of, and I can't believe I'm saying this phrase, what university is for, right? What is higher education for? What are we doing for our students? It's only when we are clear on what our values are, as well as what is valuable about what we do, that we can ever hope to have that kind of reciprocity of values with students.
Jamie Cawthra: And I think staff and students often do fundamentally share the same values. It's just a matter of working together, co-creating, to figure out how those values are best enacted when we use a tool like generative AI. And I'll climb down from my soapbox for a second before anyone asks me practical questions about how this should happen.
Matt Markin: I trust that listeners have found this to be a fascinating [00:49:00] conversation. I wish we had more time, and maybe there's a part two at some point down the line. But you've given a lot of listeners questions to really consider and think about, and this is gonna be an ongoing topic. Jamie, thank you so much for being on the podcast today.
Jamie Cawthra: Thank you so much for having me. This has been lovely. Normally people have shut me up by now.
Matt Markin: No, we wanna hear you. Keep talking.
Jamie Cawthra: Thank you both.