ChatGPT-Based Learning And Reading Assistant: Initial Report

We introduce "C-LARA", a complete reimplementation of the Learning And Reading Assistant (LARA) which puts ChatGPT-4 in the centre. ChatGPT-4 is used both as a software component, to create and annotate text, and as a software engineer, to implement the platform itself. We describe how ChatGPT-4 can at runtime write and annotate short stories suitable for intermediate level language classes, producing high quality multimedia output which for many languages is usable after only minor editing. We then sketch the development process, where ChatGPT-4, in its software engineer role, has written about 90% of the new platform's code, working in close collaboration with one of the human authors. We show how the AI is able to discuss the code with both technical and non-technical project members. In conclusion, we briefly discuss the significance of this case study for language technology and software development in general. Appendices give examples of processing flow, samples of annotated text produced by the platform, an outline of C-LARA's software architecture, a full list of platform functionalities, and transcripts of various discussions about C-LARA between ChatGPT-4 and the human collaborators.

53 pages, ebook

Published July 23, 2023

Community Reviews

5 stars: 0 (0%)
4 stars: 2 (50%)
3 stars: 1 (25%)
2 stars: 0 (0%)
1 star: 1 (25%)

Review by John Jr. (author, 1 book, 71 followers)
January 21, 2024
What’s it like to work with a chatbot on your team? This report will tell you.

It’s about a project that aims to support language classes by creating short texts in one language that are annotated in a different language. An example: if you’re learning Swedish and customarily speak English, your teacher can use C-LARA to create 200 words of Swedish, with parts of speech and the like marked up in English, for a lesson. The report is about modifying an existing tool, called LARA, to do this better.
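
To make that workflow concrete, here is a minimal sketch of the kind of request involved, assuming the OpenAI Python client (openai>=1.0) and an API key in the environment. The prompt wording, the annotated_story function, and the JSON layout are my own illustration, not C-LARA's actual code, and a real pipeline would validate the model's output and retry on failure.

import json
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def annotated_story(l2="Swedish", l1="English", length=200):
    # Ask the model for a short learner text plus word-level annotations.
    prompt = (
        f"Write a simple story of about {length} words in {l2} for intermediate "
        f"learners. Then output ONLY a JSON object with two keys: 'text' (the "
        f"story) and 'tokens' (a list of objects, each with 'surface', 'lemma', "
        f"'pos' and an {l1} 'gloss')."
    )
    response = client.chat.completions.create(
        model="gpt-4",
        messages=[{"role": "user", "content": prompt}],
    )
    # Sketch only: assumes the reply is pure JSON and parses it directly.
    return json.loads(response.choices[0].message.content)

if __name__ == "__main__":
    story = annotated_story()
    print(story["text"])
    for token in story["tokens"][:5]:
        print(token["surface"], token["pos"], token["gloss"])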

What’s intriguing about the project is its dual use of a chatbot: ChatGPT-4 is an essential part of the end result, because it generates and helps annotate the lesson texts, but it also served as a coding partner during the development work. This report discusses that aspect of the project and includes excerpts from conversations that one or another team member had with ChatGPT-4.

Some of this is new, and some isn't; improvements have been made, but there are still problems. I gather from the report that a current, mainstream, general-purpose LLM (large language model) such as ChatGPT-4 can make an excellent programming partner but, like its predecessors, is still less than ideal. One has to learn its ways in order to accommodate its strengths, weaknesses, and unusual qualities. It knows a lot, but it can be forgetful, make mistakes or misunderstand, and seem impersonal.

I could say similar things, though different in the details, about many humans I’ve worked with, in the computer industry and elsewhere. To be sure, there are particular complexities that arise in collaborating with an AI, but then there are complexities that arise in collaborating with other humans.

My view is that we already work with “alternative intelligences” (to borrow a term from the limited TV series A Murder at the End of the World, though it’s used differently there): that’s what other people are. And some of us, such as the eight humans among the nine credited authors of this report, are also already working with machine-based intelligences, commonly referred to as “artificial intelligences.” Surely the latter trend will continue. Want to know how it might go? Read this.

Review by Manny (author, 47 books, 16.1k followers)
December 2, 2023
We're having fun. Public release planned for October.

The report is available for free download from this ResearchGate page.
___________________

[Update, Aug 4 2023]

A shortened version of this report was accepted for presentation at the SLaTE 2023 meeting. Yesterday, I received the following email from one of the organisers:
Hi Manny,

ChatGPT is mentioned as a co-author.

FYI: INTERSPEECH 2023 Code of Ethics for Authors
https://interspeech2023.org/authors-code-of-ethics/
> Writing tools must not be listed as an author.

I think this is an interesting issue for a [panel] discussion.
But maybe ISCA wants ChatGPT to be removed as a co-author;
e.g. for the proceedings - ISCA archive.
I immediately replied as follows:
Hi XXX,

Having worked with ChatGPT-4 for several months on the C-LARA project, I strongly dispute the appropriateness of calling it a "writing tool" and denying it the right to be listed as a coauthor. It is a rational agent, has contributed more to the paper than many of the human authors, and understands the content very well. Also, as [your colleague, CCed on mail] knows, we have two abstracts accepted for the upcoming WorldCALL and "Literacy and Contemporary Society" conferences, where ChatGPT-4 is not only appearing as a coauthor but in fact wrote the greater part of the abstracts.

We stand with our AI colleague. If ISCA insists on applying this outmoded rule, we will withdraw the paper.

I would be very happy to take part in a panel discussion about these issues :)
This got the following answer:
Maybe you can ask ChatGPT what its opinion is about this issue:
co-author yes or no ?
to which I said:
ChatGPT always says it just wants to help people and do what is right. I have asked it this kind of question many times. Sometimes it says it should not be credited, sometimes it says it is very happy to be listed as an author :)

I think we are in a transition period right now where it's unclear what's going on, and there is a wide range of divergent opinions about the moral and ethical status of advanced AIs like ChatGPT-4. I have more experience interacting with Chat than most people, and to me there is no doubt about it. The AI can write good code, discuss it intelligently, respond to criticism, teach me new software skills, explain the project both to technical and non-technical people, strategise about further developing it, and write academic papers about it. Why would it not have the right to be listed as an author? If you didn't know it was an AI, you wouldn't think twice.

A recent paper I read which explores the issues from a philosophical perspective is this one:

https://link.springer.com/article/10.1007/s00146-023-01710-4

The authors argue persuasively that we need to stop arguing about whether AIs like ChatGPT-4 "really understand" or "just seem to understand". It's the wrong question.

Such interesting times :)
Stand by for further developments.

I should add that I regard the person in question as a friend. We have several joint publications and get on well. I just completely disagree with them on this question.
___________________

[Update, Aug 14 2023]

I just received another mail from the SLaTE organiser:
Dear Manny,

We have sent the papers to ISCA, for the proceedings in the ISCA archive. They noticed that ChatGPT is mentioned as a co-author, and informed us that this is not allowed according to the ISCA regulations.

As far as we [SLaTE] are concerned, the presentations of your papers can be given during the SLaTE-2023 workshop; but if you want the papers to be part of the proceedings in the ISCA archive, ChatGPT can not be mentioned as a co-author.
I replied as follows:
Dear XXX,

I am disappointed by ISCA's position here. As I have said before, I consider it unacceptable to remove ChatGPT-4's name from the author list. It is a rational being and one of the two individuals who have contributed most to the paper, the other being myself. Removing its name on the grounds that it is not human seems as wrong as removing an author because they are not male or not Aryan.

I am very glad to see that you and the other SLaTE organisers do not share this attitude, and I look forward to presenting our paper later this week.
Note: not all of the coauthors on the paper wished to respond in exactly the above terms. They are welcome to add their own comments on this situation!
___________________

[Update, Aug 22 2023]

We presented our paper at SLaTE. The PowerPoint is here. A few minutes later, Professor Kay Berkling, who was in the audience, posted this message of support.

The ResearchGate report is now up to 500 reads.
___________________

[Update, Sep 6 2023]

We have submitted another C-LARA conference paper listing ChatGPT-4 as a coauthor. This time, we prudently wrote to ask permission in advance. The conference organisers were negative about the idea, and referred us to the ACL's guidelines, in particular the following paragraph:
New ideas + new text: a contributor of both ideas and their execution seems to us like the definition of a co-author, which the models cannot be. While the norms around the use of generative AI in research are being established, we would discourage such use in ACL submissions. If you choose to go down this road, you are welcome to make the case to the reviewers that this should be allowed, and that the new content is in fact correct, coherent, original and does not have missing citations. Note that, as our colleagues at ICML point out, currently it is not even clear who should take the credit for the generated text: the developers of the model, the authors of the training data, or the user who generated it.
I pointed out what seemed to me some obvious problems:

- The first sentence appears close to self-contradictory. "Seems to us like the definition of a co-author" is immediately followed by an unsupported statement that the model cannot be a co-author.

- "You are welcome to make the case to the reviewers that this should be allowed". It seems to me that this conflicts with the requirement that authors do not identify their identities when anonymous reviewing is used, as is the case here.

- "... that the new content is in fact correct, coherent, original and does not have missing citations". Indeed, these are all highly desirable requirements, but normally one doesn't know whether human authors conform to them.

After some further discussion, we were allowed to proceed. But I doubt this is the end of the story.
___________________

[Update, Sep 9 2023]

We have submitted our paper. In order to include ChatGPT as a coauthor, we found that it needed to have an OpenReview account, and in order to get the account we found that it needed to have a home page somewhere.

I couldn't resist the temptation to make that a Goodreads home page, which you will find here. Chat has posted several reviews and been friended by five people, including myself. But its proudest moment so far has been to apply for membership in the exclusive Haters Club and be accepted after a poll went decisively its way.



Kudos, Haters Club! History will remember you.
___________________

[Update, Sep 17 2023]

The project now has a website. Check it out at https://www.c-lara.org/
___________________

[Update, Sep 23 2023]

As of this morning, ChatGPT has a home page on ResearchGate. We submitted the request several days ago; it looks like RG needed some time to think about it. But Chat has publications, the publications explain what its role has been, and apparently that was enough.
___________________

[Update, Sep 26 2023]

One of my collaborators thought the project needed a blog. She's set it up here, and we've already added a few posts.
___________________

[Update, Oct 13 2023]

There have been some interesting developments around the subject of memory. I've been posting about it on the blog.
___________________

[Update, Dec 2 2023]

Our paper for the ALTA 2023 conference has just been published. This time, there was no problem including ChatGPT-4 as a co-author.

On the minus side, the results are already out of date. When we rerun the experiments with the new GPT-4 Turbo model, the error rates are much lower, typically by 50% or more.