There’s been a lot of panic and concern around the newest chatbot to arrive on the internet: ChatGPT. Developed by the American artificial intelligence (AI) research laboratory OpenAI, ChatGPT (Chat Generative Pre-trained Transformer) launched in November 2022. It has caused so much panic and attracted so much attention because of its “detailed responses and articulate answers across many domains of knowledge”.
So what exactly is AI and where does ChatGPT fit into it?
ChatGPT is a chatbot the user “talks to”, asking questions and giving it instructions and prompts. These prompts guide the AI in generating a response for the user. It is available via a website where users are required to sign in. In January 2023, ChatGPT reached over 100 million users, making it the fastest-growing consumer application to date. It was initially free to use.
However, the chatbot has been at capacity for a number of weeks, so at the moment seemingly only registered users can access it. In February 2023, OpenAI began accepting registrations from United States customers for a premium service, ChatGPT Plus, costing $20 a month.
On the website, the bot is described as a model which has been trained to interact in a conversational way. “The dialogue format makes it possible for ChatGPT to answer followup questions, admit its mistakes, challenge incorrect premises, and reject inappropriate requests.”
Upon release, it was met with much positive feedback, but it was equally critiqued by many educators, journalists, artists, ethicists, academics and public advocates. Several universities have already prohibited students from using ChatGPT.
The model has been created using Reinforcement Learning from Human Feedback (RLHF). OpenAI said they trained the initial model using “supervised fine-tuning: human AI trainers provided conversations in which they played both sides—the user and an AI assistant”.
The core function of a typical chatbot is to mimic human conversation. Chatbots appear in social media messaging applications and customer support, among other places. ChatGPT, however, has a much wider range of uses, and it is this versatility that is causing so much concern. ChatGPT is able to write computer programs, compose music, write student essays and answer test questions. This has implications for cybersecurity and academia, but above all it raises ethical questions.
During a webinar on the “Opportunities and Perils” of ChatGPT, hosted by the University of KwaZulu-Natal on February 17, Dr Jin Kuwata from Columbia University said ChatGPT can be seen not as a tool for finishing a task but rather as a co-conspirator in finishing it. He said it is shifting education from learning “from” to learning “with”.
Kuwata explained that artificial intelligence and chatbots like ChatGPT do not simply follow rules given to them by human beings. They write their own rules: with a learned algorithm, they make their own predictions and decisions.
However, Kuwata said AI and ChatGPT still require a lot of domain knowledge to produce good answers; users will not be able to use the tool effectively without that knowledge.