Jun 26, 2023

The Schillace Laws - Part 4: Limitations and Opportunities of Using LLMs

Lukasz C.


The third and last group of laws, Limitations and Opportunities of Using LLMs, helps us reflect on how the way a model operates differs from the way the human brain does.

The Menu

Our last menu portion consists of:

  1. Ask Smart to Get Smart.

  2. Beware "pareidolia of consciousness"; the model can be used against itself.

8. Ask Smart to Get Smart.

We often get fooled by the "smartness" of LLMs. They do have vast knowledge and an amazing ability to mimic the human train of thought, but it's not real "human" smartness. When you ask a smart person a short, simple question on a topic they are familiar with, you get an exhaustive answer. That's not the case with models: they lack the kind of energy, or "ego", that drives a person to show off their knowledge.

That's why to get comprehensive responses, you need to frame your questions accordingly. For example:

Short question: "Tell me about AI." 
Short response: "AI, or artificial intelligence, is a field of computer science."

Smart question: "Could you provide a detailed overview of the history, types, and implications of artificial intelligence?" 
Smart response: "[A detailed response covering the history, types, and implications of AI]"

In other words, to get smart answers from a model, you need to ask questions in a way that triggers them. Better yet, tell the model how smart you want it to be and then ask your question in that context.
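
As a sketch of what that framing could look like in code (the openAI client and its Complete method are hypothetical stand-ins, the same ones used in the mapping example later in this post):

var prompt = new StringBuilder();

// First tell the model how smart it should be...
prompt.AppendLine("You are a senior AI researcher explaining concepts to software engineers in depth.");

// ...then ask the question in that context.
prompt.AppendLine("Could you provide a detailed overview of the history, types, and implications of artificial intelligence?");

var completion = openAI.Complete(prompt.ToString());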

This rule was especially noticeable with GPT models prior to GPT-3.5, where it was easy to get this kind of response:

Question: What is heavier, 1 kg of feathers or 1 kg of iron? 
Response: 1 kg of iron is heavier than 1 kg of feathers.

Since the arrival of GPT-3.5, it's not that easy to make the model seem dumb:

Question: What is heavier, 1 kg of feathers or 1 kg of iron? 
Response: They are the same weight. However, a kg of feathers is much larger than a kg of iron due to the difference in density. This can be used to introduce the concept of density.

I believe the mechanism responsible for this burst of ego and very human-like bragging is reinforcement learning from human feedback, or RLHF, which is basically a process of applying a human perspective to tasks performed by the LLM.

Fig.1 - Steps of RLHF.
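
RLHF happens at training time, so there is nothing for application code to call, but a toy sketch can make the idea concrete. Everything below is purely illustrative: the "reward model" simply memorizes human preference scores instead of learning them, and picking the highest-scored answer stands in for the actual reinforcement learning step:

// Stage 1 (supervised fine-tuning on human demonstrations) is skipped here.

// Stage 2: humans score candidate answers; a reward model captures those
// preferences. (Toy version: a dictionary that memorizes the scores.)
var rewardModel = new Dictionary<string, double>
{
    ["1 kg of iron is heavier."] = 0.1,
    ["They weigh the same; feathers just take up more space."] = 0.9,
};

// Stage 3: the model is optimized to produce answers the reward model
// rates highly. (Toy version: simply pick the best-scoring candidate.)
var best = rewardModel.OrderByDescending(kv => kv.Value).First().Key;
Console.WriteLine(best); // "They weigh the same; ..."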

9. Beware "pareidolia of consciousness"; the model can be used against itself.

Despite our tendency to assign a mind to LLMs, they're merely programs mimicking human-like output, not conscious entities. This lack of consciousness can be harnessed for self-verification. Unlike a human developer who needs others to check their code, LLMs can effectively test their own outputs.

Take a scenario where a model maps user input to predefined categories. If incorrect mappings occur, even after few-shot learning, the model can be instructed to re-check its own work. For example:

var propertyTypes = new[] { "house", "apartment", "condo", "land", "multifamily" };

var prompt = new StringBuilder();
AppendExamples(prompt); // few-shot learning examples
prompt.AppendLine("User's input: {{input}}. Map it to the correct category.");

var completion = openAI.Complete(prompt.ToString());

// If the model returned something outside the allowed set, ask it to
// re-check its own work.
if (!propertyTypes.Contains(completion))
{
    prompt.Clear();
    AppendExamples(prompt); // few-shot learning examples
    prompt.AppendLine($"{completion} is not one of: {string.Join(", ", propertyTypes)}. Map {{input}} again.");
    completion = openAI.Complete(prompt.ToString());
}
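
To keep a stubborn model from looping, the one-shot re-check above can be extended into a bounded retry; here is a sketch, again using the same hypothetical openAI client:

// Re-ask the model until the mapping is valid, but give up after a few
// attempts and fall back to deterministic error handling.
const int maxAttempts = 3;
var attempt = 0;

while (!propertyTypes.Contains(completion) && attempt++ < maxAttempts)
{
    prompt.Clear();
    AppendExamples(prompt); // few-shot learning examples
    prompt.AppendLine($"{completion} is not one of: {string.Join(", ", propertyTypes)}. Map {{input}} again.");
    completion = openAI.Complete(prompt.ToString());
}

if (!propertyTypes.Contains(completion))
{
    throw new InvalidOperationException($"Model could not map the input to a known property type: {completion}");
}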

Summary

Of course, this doesn't give us 100% confidence that we'll get what we want. We always need to take that into account when working with LLMs, both during the design phase and the development phase. As Peter Parker's uncle said, "with great power comes great responsibility". So the power to easily add brain-like capabilities to our software comes with the responsibility to handle all the uncertainty and indeterminism that come with such a human-like mechanism.

Thank you for joining us on the journey of "The Schillace Laws" series. If you're interested in GPT integration or any other software development projects, collaborate with us at AppStream Studio.

Let's create something remarkable together! Contact us today at contact@appstream.studio or connect with us on LinkedIn.

Stay safe and happy coding!

