ChatGPT and the inevitable rise of artificial intelligence in everyday life

In this article, we will discuss aspects of this technology and explain how to use it ethically and critically. However, many questions arise: What actually is ChatGPT? How does it work? What are its risks and challenges?

Leonardo Foletto
Anna Bentes
Alessandra Maia

Digital communication began 2023 with a wave of excitement following the launch of ChatGPT, a chatbot that can answer practically any type of question formulated by users. Created by OpenAI and funded by Microsoft to the tune of US$11 billion, the tool was made available for anyone to use, free of charge, on November 30, 2022. In just a week it gained its first million users, and by January 2023 it had reached 100 million. This makes ChatGPT the fastest-growing technology in history (so far). By comparison, TikTok took nine months to reach the same number of users, while Instagram took two and a half years.

What actually is ChatGPT?

To begin with, ChatGPT is technically a chatbot, meaning a program that tries to simulate a human conversation using a Natural Language Processing (NLP) model. This model is a set of techniques and principles that allow a machine to communicate with a human in the most “natural” way possible. For example, the voice assistants Alexa (Amazon) and Siri (Apple) also use NLP. However, ChatGPT is able to simulate human communication in a more sophisticated way than these virtual assistants.

The ChatGPT that we know was created from an algorithm model called GPT-3, developed by OpenAI, which taps a database to process countless words and create billions of different sentences. Its database is made up of different web pages, books, news stories and encyclopedias, among other sources. When someone enters the system (using a web browser or mobile device) and writes any type of instruction (a question or command, which in computer language is called a prompt), the system generates an answer based on the patterns it learned from those billions of pages during training, and the answer will always be unique, different for each case. Although it draws on a huge volume of information from the internet, it is limited to the data that existed during its training period, which ended in 2021. That is, it will not be able to accurately answer questions about facts that happened after this date. Thus, ChatGPT is part of a generation of artificial intelligence technologies called “generative,” which are capable of generating new texts, images and other things by processing different databases.
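The prompt-and-answer flow described above can be sketched in code. The fragment below builds the JSON body that OpenAI's chat-completion endpoint expects (the request is constructed but not sent, since actually calling the service also requires an API key and a network connection; the model name is just an example):

```python
import json

def build_chat_request(prompt: str, model: str = "gpt-3.5-turbo") -> str:
    """Package a user prompt as the JSON body a chat-completion
    endpoint expects: a model identifier plus a list of messages,
    each with a role ("user" here) and the text content."""
    payload = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }
    return json.dumps(payload)

# The prompt is free-form natural language: a question or a command.
body = build_chat_request("Summarize the history of chatbots in one sentence.")
print(body)
```

In a real application this string would be POSTed to the provider's API, and the generated answer would come back in the response body.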

The GPT-3 model also stands out from other similar technologies in its enormous capacity for “machine learning,” that is, improving from data rather than from explicit programming. In practice, interactions with users help refine the model so that its text answers become more coherent, accurate and understandable: the more questions the AI tool is asked, the faster it improves its output.

Here are some practical uses of ChatGPT, based on the GPT-3 model, in the academic world alone:

  1. Organization of rough ideas into a coherent flow of text;
  2. Initial survey of texts on a given topic;
  3. Summarization of a theme in different topics;
  4. Summarization of a book’s main ideas and the key concepts in a subject;
  5. Formatting of articles and references.

OpenAI has been working on different GPT versions since 2018 and it plans to launch GPT-4 in 2023, which will increase its processing and learning capacity. However, there is much debate about how this and other artificial intelligence tools could develop to a dangerous point and act in ways that we cannot foresee, as in post-apocalyptic science fiction works.

Errors and challenges

The answers formulated by ChatGPT are readable, informative and coherent texts, but they are still answers with many flaws and limitations. As an NLP model, what it truly “learns” is the statistical patterns of language, that is, the probabilities with which words occur together in ways consistent with human communication. So although it seems convincing and believable, in practice the algorithm does not understand the meaning of what it is generating, and this is one of the sensitive points surrounding its use.
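A toy example makes this idea of “patterns and probabilities of word occurrence” concrete. The bigram model below (a drastically simplified sketch, nothing like GPT-3’s actual scale or architecture) just counts which word follows which in a training text and then predicts the most frequent successor, with no notion of meaning at all:

```python
from collections import Counter, defaultdict

def train_bigrams(text: str) -> dict:
    """Count, for each word, how often each following word occurs."""
    words = text.lower().split()
    follows = defaultdict(Counter)
    for w1, w2 in zip(words, words[1:]):
        follows[w1][w2] += 1
    return follows

def most_likely_next(model: dict, word: str) -> str:
    """Return the statistically most frequent successor of `word`."""
    return model[word.lower()].most_common(1)[0][0]

corpus = "the cat sat on the mat and the cat ate"
model = train_bigrams(corpus)
# "cat" follows "the" twice, "mat" only once, so "cat" is predicted.
print(most_likely_next(model, "the"))
```

The model produces plausible-looking continuations purely from frequency counts; it has no idea what a cat or a mat is, which is the same limitation, at vastly greater scale, behind ChatGPT’s factual errors.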

Over the last few weeks, many factual errors generated by ChatGPT have been identified. In everyday use, people may not bother to check the accuracy and veracity of the answers given by this AI tool, and this could contribute to the dissemination of false, incorrect or imprecise information on social networks.

Without losing coherence, ChatGPT can also hallucinate. Yes, hallucinate! As explained by researcher Diogo Cortiz, machine hallucination is a technical term for when a machine gives a confident answer that has no justification in its training data.

What will happen to education?

One of the main points of tension following the launch of ChatGPT is its effects on education and science. The tool looks set to facilitate text production and aid information research. However, unlike the scientific production system, it is not concerned with citing its sources and references. How are we going to deal with the use of this type of tool in tests and assignments? There is no definitive answer, but much debate about the possible implications for education. Accordingly, the School of Communication, Media and Information (FGV ECMI) will debate this topic on March 22, in a webinar called Artificial Intelligence in the Classroom.

*The opinions expressed in this article are the sole responsibility of the author(s) and do not necessarily reflect the institutional position of FGV.