LLMs: The Power Behind ChatGPT

There has been so much buzz around the latest AI trend: ChatGPT! From design, language, coding, and writing to telling jokes, ChatGPT seems to be able to do it all. But how does it manage that? That’s what we’re exploring today, and hopefully we’ll make all that complexity a little easier to understand.

Large Language Models (LLMs)

Some people HATE predictive text with the passion of a thousand suns, but I love it. It really makes chatting easier, and once your phone gets acquainted with the way you talk and write, mistakes rarely happen! But how is your phone able to understand you so well when you don’t even know yourself that well? Well, it’s all about LLMs. These AI tools can read, summarize, translate, and predict upcoming words in a way that resembles how humans talk and write. All of this comes as a result of training on massive amounts of data.
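To get a feel for what “predicting the next word” means, here’s a deliberately tiny sketch: a bigram model that just counts which word follows which in some training text. This is a toy of my own, nothing like a real LLM (which learns patterns across billions of words with a neural network), but the core idea – predict the likely next word from training data – is the same.

```python
from collections import defaultdict, Counter

def train_bigrams(text):
    # Count which word follows which across the training text.
    words = text.lower().split()
    model = defaultdict(Counter)
    for prev, nxt in zip(words, words[1:]):
        model[prev][nxt] += 1
    return model

def predict_next(model, word):
    # Suggest the most frequent follower of `word`, like a phone keyboard.
    followers = model.get(word.lower())
    if not followers:
        return None
    return followers.most_common(1)[0][0]

corpus = "i want food and i want coffee and i want food"
model = train_bigrams(corpus)
print(predict_next(model, "want"))  # "food" follows "want" most often here
```

Your phone’s keyboard does something far more sophisticated, but this is the kernel of it: the more text the model sees, the better its guesses get.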

Transformers
No, not the ones you’re thinking of! In AI, a transformer is a deep learning model built around the ‘self-attention’ mechanism. It works with sequences, transforming one sequence into another. One very familiar place transformers show up is translation tech.

Transformers consist of two parts: an encoder (for the input sequence) and a decoder (for the output sequence). So, if we want to translate a simple sentence like “I want food” from English to Spanish, we’d get “Yo quiero comida”. Almost any LLM could come up with that result because the word order in English and Spanish is the same. However, if you want to translate the same sentence from English to Korean, you’re unlikely to get a correct translation unless you use a transformer-based LLM. That’s because the sentence order (sequence) in Korean is subject + object + verb.

Now, in technical terms, what basically happens is that the encoder reads the words in the input sentence and works out which words are relevant to each other. The decoder then uses these encodings, in context, to generate the output sequence one word at a time.

As we said, transformers use the ‘self-attention’ mechanism. This means they can weigh every word in the input sequence against every other word, which gives each word in the sentence (sequence) meaning from its context. You know, like humans do!
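The idea above can be sketched in a few lines. This is a bare-bones, toy version of scaled dot-product self-attention in plain Python: each word’s output vector becomes a weighted mix of every word in the sequence, with the weights coming from how similar the words are. Note the simplifying assumption: real transformers learn separate query/key/value projection matrices and work on large tensors; here each word’s query, key, and value are just its embedding, and the embeddings are made-up numbers.

```python
import math

def softmax(xs):
    # Numerically stable softmax: turns raw scores into weights that sum to 1.
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

def self_attention(embeddings):
    # Toy scaled dot-product self-attention over a list of word vectors.
    d = len(embeddings[0])
    output = []
    for q in embeddings:
        # Score this word against every word in the sequence (dot product).
        scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d)
                  for k in embeddings]
        weights = softmax(scores)
        # The output for this word is a weighted mix of all the word vectors.
        mixed = [sum(w * v[i] for w, v in zip(weights, embeddings))
                 for i in range(d)]
        output.append(mixed)
    return output

# Made-up 2-dimensional "embeddings" for the three words in "I want food".
vecs = [[1.0, 0.0], [0.5, 0.5], [0.0, 1.0]]
contextual = self_attention(vecs)
```

After this step, each word’s vector carries a bit of every other word in it – that’s the “context” we keep talking about.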

GPT-3
GPT-3 (Generative Pre-trained Transformer 3) is a third-generation transformer. Your own human-equivalent artificial assistant, so to speak! The term GPT-3 has become more popular among us mere mortals since the launch of ChatGPT – your personal assistant with seemingly infinite knowledge. I think that sums up GPT-3 quite nicely, as so far we don’t know where its limitations lie.

LLMs are all a result of training; what makes GPT-3 different is that it was trained on an incredible amount of data from many sources, informal as some of them may be, so that it would be able to write and understand like humans. Reinforcement Learning from Human Feedback (RLHF) was also used in its training, to teach it what humans expect to hear when they ask a question.
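One small piece of RLHF can be shown concretely. During reward-model training, humans are shown two answers and pick the one they prefer; the model is then nudged so the preferred answer gets a higher reward score. A common way to express that nudge is a Bradley-Terry-style loss (this snippet is my own illustrative sketch of that formula, not code from any actual RLHF system):

```python
import math

def preference_loss(reward_chosen, reward_rejected):
    # Bradley-Terry style loss: -log(sigmoid(r_chosen - r_rejected)).
    # Small when the model already scores the human-preferred answer
    # higher; large when it scores the preferred answer lower.
    diff = reward_chosen - reward_rejected
    return -math.log(1.0 / (1.0 + math.exp(-diff)))

# Model agrees with the human: preferred answer scored higher -> small loss.
agree = preference_loss(2.0, 0.0)
# Model disagrees: preferred answer scored lower -> large loss.
disagree = preference_loss(0.0, 2.0)
print(agree < disagree)
```

Minimizing this loss over many human comparisons is what teaches the reward model taste – and that reward model is then used to steer the language model’s answers.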

So, it can pick up on human hints, sarcasm, and jokes, and at times it seems to go above and beyond human intelligence. So, has Will Smith’s prophecy actually come true, and will AI replace humans? That’s a topic for another day – stay tuned!
