The sky's the limit with AI (or not)
The last couple of weeks have been quite intense work-wise, so I'm a little late in blogging again. Mea culpa! Here's the reason, though.
It's not often I discuss my job, let alone voice any concerns about it, but the intensity I mentioned was generated by worries about students using AI (Artificial Intelligence) to write their essays. Have any of you had experience of this? It really is becoming quite a challenging issue.
Don't get me wrong. I'm not against AI as a tool. It can be incredibly useful, and I use it myself to give me ideas for materials I can use for classes. All I have to do is ask a question with suitable prompts and the AI program will give me suggestions for a lesson plan, for exercises, and even sample texts to use for evaluation purposes, but the thing is, I never actually use what it gives me. I just use the ideas.
The first problem with AI tools for teachers and students is that AI is clever, but not clever enough. The ideas it can produce are generic and offer nothing original – for obvious reasons. After all, it can only produce what it has drawn from the mass of data it has absorbed from elsewhere. The second problem is that students don't seem to have understood these limitations, and believe that if they ask ChatGPT (the most popular tool in academia) to give them an essay that answers a specific question, they can use the seemingly perfect answer it produces and the teachers won't notice. A big mistake on their part.
I won't go into the circumstances in which this issue has arisen recently, but suffice it to say it became obvious that a few students were doing just what I've described, and what's worse, they were doing so in a test situation. Their essays were strangely fault-free, their paragraphs were all the same length, and the arguments in their texts reflected little to no personal thought or experience, only very general ideas. Now, as a native-speaking writer myself, I know it's impossible to write a 350-word text in half an hour without making a single typo or error; nor will I produce sentences all of the same length, or have the ideal number of five sentences in a paragraph. Only superhumans (or AI) can do that.
When I write this blog, for instance, I make heaps of silly mistakes, and I have to edit it numerous times before I've eliminated them all. My sentences are often far too long and I naturally ramble, so I have to cut out words, shorten sentences and improve on what I've written. It all takes a substantial amount of time, and even then, I nearly always end up with a typo or two that Koos points out to me; I haven't even noticed them.
So, to add to the unlikelihood of the fault-free writing, the third problem is that we cannot actually prove they've used AI. You see, if I give ChatGPT the same question four times, I'll get four different essays, so it will never be flagged as plagiarism. As I said, it's clever, but not so clever that we can't sense its use in the style and content.
But what do we do about it? If we're trying to test students' ability to write at a certain level, and AI is doing the work for them, it's a serious matter. These kids are neither exercising, nor proving, any educational skill or academic level. The only solution I can think of is that schools and colleges will have to revert to controlled exam conditions with students using paper and pens instead of independent computers. But that would really be turning the clock back, wouldn't it?
Fortunately for me, the problem isn't mine to overcome; I'm just one of the assessors. But it never feels good accusing a student of effectively cheating when you don't have the evidence they've done so. What if they were really just that accurate? It's incredibly unlikely, but not impossible.
In the end, however, the sky might appear to be the limit when it comes to AI, but to me it feels as if it's another nail in the coffin of real education, the kind of education where students use critical thinking and argue a point based on their own observations or research; that is, unless we can teach them to use it as just a support rather than a replacement. What do you think? I'd be interested to hear of any experience you've had with AI.
Anyway, on the same, but slightly different subject, here are some photos of the stunning skyscapes (or limits) we've been having between the rain showers, as well as a couple of pretty spring village scenes, and Zoe, of course (for Rebecca).
Published on April 27, 2024 09:52