By Sheeva Azma
It’s my job to explain complex science and technology. So, I decided to challenge myself: can I explain how ChatGPT works? I’m an MIT graduate, after all – I should be able to understand and explain anything involving technology, right?

The difficult part is that lately, I don’t have the patience for long explanations and diagrams – life has been complicated enough. So I just decided to write the ChatGPT explainer that I would find interesting.
I guess I could have asked ChatGPT to explain itself – an introspective task for something that isn’t really human – but I don’t actually use ChatGPT. It would have taken a lot less time…but would the result be as good? I’m skeptical, though I wouldn’t really know: every time I’ve tried to use ChatGPT, it’s been over capacity.
Maybe I could have fed it a bunch of parameters – instructions like, “write an easy-to-understand, conversational explainer on ChatGPT that compares the technology to language processing in the human brain in less than 1,000 words” – but its training data stops at 2021, and I’ve heard that it makes up references.
Also, unlike me, ChatGPT is not a neuroscientist who knows how the brain produces language. I studied language processing as an undergrad at MIT, and worked on language research in grad school. I also taught people about neurolinguistics in various K-12 and university contexts.
I know that the way the human brain produces language and the way artificial intelligence models try to produce language are not the same at all. In theory, AI aims to do what the brain does, but in practice, it falls far short. The whole point of AI is to model the human brain, but the brain is so beautifully complex that we haven’t come close to fully understanding it. So AI researchers build models that approximate what we think the brain does.
The human brain has a “language module” developed through experience
Reading about ChatGPT, I was surprised that it basically does the same thing as the human brain: it takes language inputs and learns from those to produce meaningful sentences and paragraphs. The main difference is that the human brain does this over a period of decades. When we are growing up, our brains are in a “critical period” for language. That means that there’s a certain timeframe in which our brain’s language module develops. We listen to people talking, and try to replicate what they are saying. When we’re teeny tiny babies, we’re not so great at this, but over time, with practice, we begin to speak the languages in which we are immersed.
The critical period for language closes sometime in adolescence, and after that point, it’s much harder to learn new languages to the level of fluency we could reach if we learned them during the critical period. If you think of our ability to speak and understand others as a “language module,” you can say that the module is hard-wired into the brain by our late teen years through all of our language experiences.
ChatGPT was trained using machine learning with human feedback (analogous to how we learn language)
Unlike the human brain, which has its own developmental mechanisms, ChatGPT was created by the developers at OpenAI. Like humans, though, ChatGPT learns from input, and can also improve its language understanding over time. It was trained on a subset of data gleaned from text databases on the internet, though the information it uses stops at 2021.
As for the language learning process, ChatGPT uses machine learning, a form of artificial intelligence that uses data and algorithms to “imitate the way that humans learn,” as the IBM Machine Learning page explains.
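To make that idea concrete, here is a toy sketch in Python of what “learning from language data” can look like. Everything in it – the three-sentence “training set” and the simple word-counting approach – is invented for illustration and is nothing like ChatGPT’s actual inner workings, but it shows the basic pattern: feed in example text, extract statistics, and use them to predict what comes next.

```python
from collections import Counter, defaultdict

# A tiny "training set" of example sentences (purely illustrative).
corpus = [
    "the brain learns language from experience",
    "the model learns language from data",
    "the brain learns from examples",
]

# Count which word tends to follow which -- a toy stand-in for "learning from data."
next_word_counts = defaultdict(Counter)
for sentence in corpus:
    words = sentence.split()
    for current_word, next_word in zip(words, words[1:]):
        next_word_counts[current_word][next_word] += 1

def predict_next(word):
    """Return the most frequent follower of `word` seen in the training data."""
    followers = next_word_counts.get(word)
    return followers.most_common(1)[0][0] if followers else None

print(predict_next("the"))     # -> 'brain' (seen twice, vs. 'model' once)
print(predict_next("learns"))  # -> 'language'
```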
More specifically, ChatGPT was trained using an approach called Reinforcement Learning from Human Feedback (RLHF). Reinforcement learning is a type of machine learning in which an algorithm learns to solve a multi-step problem by trial and error. It practices making a sequence of decisions, receives rewards or penalties for the actions it takes, and adjusts its behavior to maximize the total reward.
If that approach doesn’t sound like it’s about language at all, it’s not – on its own, it’s a general-purpose learning method, and similar algorithms have been used in self-driving cars. In ChatGPT’s case, it was used to shape how the chatbot responds to the text prompts it receives.
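Here is what that trial-and-error loop can look like in code. This is a bare-bones, textbook-style sketch in Python – the toy task (an agent learning to walk toward a goal on a short track) and every number in it are made up for illustration, and it bears no resemblance to the scale of OpenAI’s actual training setup – but it shows the core cycle described above: try an action, get a reward or penalty, and update so that future choices maximize the total reward.

```python
import random

# Toy reinforcement learning: an agent on a short 1-D track learns, by trial and
# error, to walk right toward a goal position that pays a reward.
N_STATES = 5          # positions 0..4; position 4 is the goal
ACTIONS = [-1, +1]    # step left or step right
q_table = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

alpha, gamma, epsilon = 0.5, 0.9, 0.2   # learning rate, discount factor, exploration rate

for episode in range(200):
    state = 0
    while state != N_STATES - 1:
        # Trial and error: usually pick the best-known action, sometimes explore at random.
        if random.random() < epsilon:
            action = random.choice(ACTIONS)
        else:
            action = max(ACTIONS, key=lambda a: q_table[(state, a)])
        next_state = min(max(state + action, 0), N_STATES - 1)
        # Reward for reaching the goal, a small penalty for every other step.
        reward = 10.0 if next_state == N_STATES - 1 else -1.0
        # Update the value estimate so that future choices maximize total reward.
        best_next = max(q_table[(next_state, a)] for a in ACTIONS)
        q_table[(state, action)] += alpha * (reward + gamma * best_next - q_table[(state, action)])
        state = next_state

# After training, the learned policy from every non-goal position is simply "step right" (+1).
print([max(ACTIONS, key=lambda a: q_table[(s, a)]) for s in range(N_STATES - 1)])
```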
The “human feedback” part of RLHF involves humans rating the responses the chatbot provides, so that it can get smarter over time. To learn more about the nitty-gritty details of the ChatGPT algorithm, check out this informative blog, which has tons of technical details!
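And here is a loose sketch of how human ratings can plug into that loop as the reward signal. The candidate answers and scores below are entirely made up; the point is just that averaging people’s ratings turns “this answer was better” into a number that the reinforcement learning step can try to maximize.

```python
# Hypothetical human ratings (1-5 scale) for three candidate chatbot answers.
candidate_answers = {
    "answer_a": [5, 4, 5],
    "answer_b": [2, 3, 2],
    "answer_c": [4, 4, 3],
}

def reward(answer_id):
    """Average human rating, used as the reward for producing this answer."""
    ratings = candidate_answers[answer_id]
    return sum(ratings) / len(ratings)

# Training nudges the model toward the kinds of answers that earn higher rewards.
best = max(candidate_answers, key=reward)
print(best, round(reward(best), 2))   # -> answer_a 4.67
```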
So, that was my ChatGPT explainer. Not too bad for a neuroscientist! Here at Fancy Comma, we continue to write our blogs ourselves without ChatGPT, but I still think it’s super cool to learn about. The future of AI seems very exciting indeed.