ChatGPT: Why the human-like AI chatbot suddenly has everyone talking
ChatGPT has been taking the internet by storm – Copyright Canva
By Luke Hurst • Updated: 15/12/2022
Long promised by science fiction, an artificial intelligence that can talk to you in natural language and answer almost any question you might have is here.
ChatGPT has been taking social media by storm over the past week, with users showcasing the diverse ways the tool can be used.
In just five days, it racked up over a million users, a milestone that took Facebook (whose parent company is now Meta) 10 months and streaming platform Netflix three years to reach.
Developed by the AI research company OpenAI, whose backers have included Microsoft and Elon Musk, the chat tool uses the company’s GPT-3 (Generative Pre-trained Transformer 3) technology to let users talk to the AI about almost anything.
Trained on a massive data set, it is one of the most powerful language-processing models ever created, able to respond in different styles and even in different languages.
What sets it apart from previous AI chat tools is how naturally it responds: if you didn’t know an AI was behind it, the conversation could easily be mistaken for a chat with a real human.
Beyond basic conversation, people have been showcasing how it can take on their jobs or tasks for them, using it to help write articles and academic papers, draft entire job applications, and even write code.
It is currently free to test out – you just need to sign up with an email and phone number – although OpenAI says it does review conversations “to improve our systems” and may use your conversations for AI training.
How does ChatGPT work?
ChatGPT is just the latest release from OpenAI’s catalogue of AI products, which includes DALL·E 2 for image creation.
It is built on a transformer, a type of machine learning model that processes sequential data such as natural-language text. Loosely inspired by the human brain, it uses interconnected ‘neurons’ that learn to identify patterns in data and make predictions about what should come next.
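To make the idea of "predicting what should come next" concrete, the snippet below shows next-word prediction with GPT-2, an earlier, openly available OpenAI model that is often used for demonstrations. This is purely an illustrative sketch: the model behind ChatGPT is far larger and is not publicly downloadable, so GPT-2 stands in here.

```python
# Illustrative only: GPT-2 is an earlier, openly available OpenAI model,
# used here as a stand-in because the model behind ChatGPT cannot be downloaded.
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

prompt = "An AI chatbot is"
inputs = tokenizer(prompt, return_tensors="pt")

# The model repeatedly predicts the most likely next token (word-piece),
# extending the prompt one step at a time.
outputs = model.generate(
    **inputs,
    max_new_tokens=20,
    do_sample=False,
    pad_token_id=tokenizer.eos_token_id,
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```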
The model was trained on huge amounts of data from the Internet, including conversations, and then refined using a machine learning technique called Reinforcement Learning from Human Feedback (RLHF), in which human trainers wrote example conversations, playing both the AI chatbot and the user.
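As a rough sketch of how that first, human-written stage of training can work, the example below fine-tunes a small open model on a couple of made-up demonstration dialogues. Everything here (the model choice, the dialogues and the training settings) is a hypothetical placeholder rather than OpenAI's actual setup, and the later reward-model and reinforcement-learning stages of RLHF are omitted.

```python
# Illustrative sketch of the supervised stage of RLHF-style training:
# fine-tuning a language model on dialogues written by human trainers.
# Model, data and settings are hypothetical placeholders, not OpenAI's setup.
import torch
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          Trainer, TrainingArguments)

tokenizer = AutoTokenizer.from_pretrained("gpt2")
tokenizer.pad_token = tokenizer.eos_token
model = AutoModelForCausalLM.from_pretrained("gpt2")

# Tiny, made-up demonstration dialogues in which a human trainer wrote
# both the user's question and the assistant's ideal reply.
demonstrations = [
    "User: What is the capital of France?\nAssistant: The capital of France is Paris.",
    "User: Explain photosynthesis in one sentence.\n"
    "Assistant: Photosynthesis is how plants turn sunlight, water and CO2 into energy.",
]

class DemoDataset(torch.utils.data.Dataset):
    def __init__(self, texts):
        self.enc = tokenizer(texts, truncation=True, padding="max_length",
                             max_length=64, return_tensors="pt")
    def __len__(self):
        return self.enc["input_ids"].size(0)
    def __getitem__(self, i):
        item = {k: v[i] for k, v in self.enc.items()}
        # Causal language modelling: the model learns to reproduce the trainers'
        # text (a real setup would mask padding tokens out of the loss).
        item["labels"] = item["input_ids"].clone()
        return item

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="sft-demo", num_train_epochs=1,
                           per_device_train_batch_size=2),
    train_dataset=DemoDataset(demonstrations),
)
trainer.train()  # a separate reward model and reinforcement-learning step would follow
```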
Because of this method of learning, ChatGPT’s answers can come across as natural-sounding and human-like. And the bot is not just parroting text it has learned. According to Mike Sharples, a professor of Educational Technology at the Open University in the UK, OpenAI’s language model is “creating an internal representation, not just at the surface text, but of the ideas and concepts behind it”.
How can I use ChatGPT?
Because it was trained on such a massive data set, it can respond to a wide array of questions and carry out a variety of tasks. Within a week of ChatGPT’s launch, OpenAI’s co-founder and CEO, Sam Altman, tweeted that more than a million people had already used it.
Since its launch, users have been taking to social media to showcase its capabilities, which include helping with code, writing essays and even drafting job applications.
Out-of-date information
There are limits to what it can do and what it knows. Because it was trained on a fixed data set and is not connected to the Internet, its knowledge has a cut-off point, which is currently the end of 2021.
It therefore cannot keep users up to date on current events, so journalists and analysts don’t need to hang up their mouse and keyboard just yet.
ChatGPT can also simply be wrong, and it even appears to trip up on some basic maths questions.
Some websites and services have banned the use of ChatGPT, among them Stack Overflow, a question-and-answer site for programmers.
The site moderators justified the temporary ban: “The primary problem is that while the answers which ChatGPT produces have a high rate of being incorrect, they typically look like they might be good and the answers are very easy to produce”.
In its ChatGPT FAQ, OpenAI acknowledges that the tool can occasionally produce “harmful instructions or biased content” and that it can write “plausible-sounding but incorrect or nonsensical answers”.
It can also answer a question phrased one way while struggling with the same question phrased slightly differently, and instead of asking the user to clarify an ambiguous question, it tends to guess what the user intended.
The company recommends checking whether its responses are accurate, and users can give feedback on individual answers with a thumbs up or a thumbs down.