Jun 26, 2023
The Schillace Laws - Part 4: Limitations and Opportunities of Using LLMs
Lukasz C.
The third and last group of laws, Limitations and Opportunities of Using LLMs, helps us reflect on the differences between how a model and the human brain operate.
The Menu
Our last menu portion consists of:
Ask Smart to Get Smart.
Beware "pareidolia of consciousness"; the model can be used against itself.
8. Ask Smart to Get Smart.
We often get fooled by the "smartness" of LLMs. They have vast knowledge and an amazing ability to mimic the human train of thought, but it's not real "human" smartness. Ask a smart person a short, simple question about a topic they know well and you will get an exhaustive answer. That's not the case with models, which lack the kind of energy or "ego" that drives a person to show off their knowledge.
That's why, to get comprehensive responses, you need to frame your questions accordingly and ask in a way that triggers the model to show what it knows. Better yet, tell the model how smart you want it to be and then ask your question in that context, as in the sketch below.
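To make that concrete, here is a minimal sketch of the idea, assuming the 2023-era OpenAI Python SDK and a gpt-3.5-turbo chat model; the ask helper, the persona text, and the sample question are mine, purely for illustration:

```python
import openai  # pip install openai

openai.api_key = "YOUR_API_KEY"  # assumption: key supplied via env or config

def ask(messages):
    """Send a chat request and return the model's reply text."""
    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=messages,
    )
    return response.choices[0].message.content

# A plain, unframed question - often yields a short, shallow answer.
plain = ask([
    {"role": "user", "content": "What is dependency injection?"},
])

# The same question, but we first tell the model how smart it should be.
smart = ask([
    {"role": "system", "content": (
        "You are a principal software architect with 20 years of experience. "
        "Answer thoroughly, covering trade-offs and concrete examples."
    )},
    {"role": "user", "content": "What is dependency injection?"},
])

print(plain)
print(smart)
```

The only difference between the two calls is the framing, yet the second one typically comes back noticeably more "expert".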
This rule was especially noticeable with GPT models prior to GPT-3.5, where it was easy to get this kind of response:
Since the arrival of GPT-3.5, it's not that easy to make the model seem dumb:
I believe the culprit behind this burst of ego and very human-like bragging is a concept called "reinforcement learning from human feedback", or RLHF, which is basically a process of applying human preferences to the tasks performed by the LLM.
Fig.1 - Steps of RLHF.
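For the curious, the reward-model step in the middle of that pipeline boils down to a simple pairwise comparison: human raters pick the better of two answers, and the reward model is trained to score the preferred answer higher. Here is a minimal sketch of that loss, assuming the standard pairwise formulation (the function name and the numbers are mine, for illustration only):

```python
import math

def reward_model_loss(reward_chosen: float, reward_rejected: float) -> float:
    """Pairwise preference loss used when training the reward model:
    loss = -log(sigmoid(r_chosen - r_rejected)), so the loss shrinks
    when the human-preferred answer gets the higher score."""
    margin = reward_chosen - reward_rejected
    return -math.log(1.0 / (1.0 + math.exp(-margin)))

# Reward model already agrees with the human raters -> small loss.
print(reward_model_loss(2.0, -1.0))   # ~0.05
# Reward model disagrees -> large loss, pushing its scores to flip.
print(reward_model_loss(-1.0, 2.0))   # ~3.05
```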
9. Beware "pareidolia of consciousness"; the model can be used against itself.
Despite our tendency to assign a mind to LLMs, they're merely programs mimicking human-like output, not conscious entities. This lack of consciousness can be harnessed for self-verification. Unlike a human developer who needs others to check their code, LLMs can effectively test their own outputs.
Take a scenario where a model maps user input to predefined categories. If incorrect mappings occur, even after few-shot learning, the model can be instructed to re-check its own work. For example:
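Here is a minimal sketch of what that could look like, again assuming the 2023-era OpenAI Python SDK; the classify and verify helpers, the category list, and the prompts are hypothetical, not taken from the original post:

```python
import openai  # pip install openai

openai.api_key = "YOUR_API_KEY"  # assumption: key supplied via env or config

CATEGORIES = ["billing", "technical issue", "feature request", "other"]  # example categories

def classify(user_input: str) -> str:
    """First pass: map the user input to one of the predefined categories."""
    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[
            {"role": "system", "content": (
                f"Map the user's message to exactly one of these categories: {CATEGORIES}. "
                "Reply with the category name only."
            )},
            {"role": "user", "content": user_input},
        ],
    )
    return response.choices[0].message.content.strip()

def verify(user_input: str, category: str) -> str:
    """Second pass: ask the model to re-check its own mapping and fix it if needed."""
    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[
            {"role": "system", "content": (
                f"A previous step mapped the user's message to the category '{category}'. "
                f"Re-check that mapping against the allowed categories {CATEGORIES}. "
                "If it is correct, repeat it; otherwise reply with the corrected category only."
            )},
            {"role": "user", "content": user_input},
        ],
    )
    return response.choices[0].message.content.strip()

message = "I was charged twice for my subscription this month."
first_guess = classify(message)
checked = verify(message, first_guess)
print(first_guess, "->", checked)
```

The second call simply gives the model a fresh chance to catch its own mistake, with no human in the loop.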
Summary
Of course, this doesn't give us 100% confidence that we will get what we want. We always need to take that into account when working with LLMs, both during the design phase and the development phase. As Peter Parker's uncle said, with great power comes great responsibility. So the power to easily add brain-like capabilities to our software comes with the responsibility to handle all the uncertainty and non-determinism that come with a human-like mechanism.
Thank you for joining us on the journey of "The Schillace Laws" series. If you're interested in GPT integration or any other software development projects, collaborate with us at AppStream Studio.
Let's create something remarkable together! Contact us today at contact@appstream.studio or connect with us on LinkedIn.
Stay safe and happy coding!
The Schillace Laws - Part 1: Taming the Beast of Large Language Models
The Schillace Laws - Part 2: Relationships Between Code And Models
The Schillace Laws - Part 3: Trade-offs And Challenges When Using LLMs