
Jul 25, 2023

The Schillace Laws - Part 2: Relationships Between Code And Models


Lukasz C.


The first group of laws, Relationships Between Code and Models, is vital when working with AI orchestrators like Semantic Kernel. Balancing conventional code and LLMs within a software system is a crucial skill for software developers to master as AI becomes more prevalent in software engineering.

The Menu

We have three items on the menu here:

  1. Don't write code if the model can do it; the model will get better, but the code won't.

  2. Code is for syntax and process; models are for semantics and intent.

  3. Text is the universal wire protocol.

Some are more straightforward than others, so let's focus on the first two as our main course.


1. Don't write code if the model can do it; the model will get better, but the code won't

The general premise of the first law is quite simple: when adding any feature to your application, ask yourself whether an LLM can do it. If so, let the model handle it. The result doesn't have to be perfect or super-precise; we have multiple means to guide the model and to verify and validate its output (as covered by other Schillace Laws).

However, the decision between code and model is rarely straightforward. Often we need both, and the question becomes where to draw the line.

2. Code is for syntax and process; models are for semantics and intent

The second law is closely connected to the first: when deciding where to draw that line, keep in mind that models are for semantics and intent, while conventional code handles the rest.

Example: Send email to Peter on Wednesday to remind him of a meeting about AI

To better understand these laws and visualize the relationship, let's consider implementing a small LLM-powered app that sends email reminders about meetings to my teammates whenever I ask for one in chat.

The simplest thing to do would be to prepare a prompt:

Analyze the following request and provide a JSON object with the following fields:
- recipient
- send_date
- topic

Request: "<user request>"

Our model's response would be:

{ 
  "recipient": "Peter", 
  "send_date": "Wednesday", 
  "topic": "Reminder of meeting about AI" 
}

A few months ago, I would have considered this response amazing. GPT-3.5 did a great job of extracting crucial information from the request and structuring it in a format that our conventional code can work with.
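
As a concrete illustration, here is a minimal C# sketch of that call-and-parse step. The `complete` delegate stands in for whichever LLM client you use (OpenAI SDK, Semantic Kernel, etc.), and the `EmailRequest` record simply mirrors the JSON shape we asked for; both names are mine for illustration, not part of any particular SDK.

using System;
using System.Text.Json;
using System.Text.Json.Serialization;
using System.Threading.Tasks;

// Mirrors the JSON object we asked the model to produce.
public record EmailRequest(
    [property: JsonPropertyName("recipient")] string Recipient,
    [property: JsonPropertyName("send_date")] string SendDate,
    [property: JsonPropertyName("topic")] string Topic);

public static class ReminderParser
{
    // `complete` is a stand-in for your LLM client of choice.
    public static async Task<EmailRequest?> ExtractAsync(
        string userRequest, Func<string, Task<string>> complete)
    {
        var prompt =
            "Analyze the following request and provide a JSON object " +
            "with the following fields:\n- recipient\n- send_date\n- topic\n\n" +
            $"Request: \"{userRequest}\"";

        var response = await complete(prompt);

        try
        {
            return JsonSerializer.Deserialize<EmailRequest>(response);
        }
        catch (JsonException)
        {
            // The model didn't return valid JSON; treat it as a failed
            // extraction and ask the user (or the model) to try again.
            return null;
        }
    }
}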

What is left to do is:

  • Parse the send_date field and replace it with the actual date and time.

  • Find the recipient in our user database with the first or last name "Peter."

  • Send an email.

If any of these tasks is not possible (e.g., the date is hard to parse, or "Peter" is not enough to identify a single recipient), we can show an error message, providing meaningful feedback and asking the user to be more precise.
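
Kept in conventional code, the first two of those steps might look something like this (the user-store shape and the null-means-ambiguous convention are illustrative assumptions):

using System;
using System.Collections.Generic;
using System.Linq;

public static class ReminderResolver
{
    // Turn a relative day name like "Wednesday" into the next matching date;
    // fall back to a regular date parse otherwise.
    public static DateTime? ResolveSendDate(string sendDate, DateTime today)
    {
        if (Enum.TryParse<DayOfWeek>(sendDate, ignoreCase: true, out var day))
        {
            var daysAhead = ((int)day - (int)today.DayOfWeek + 7) % 7;
            return today.AddDays(daysAhead == 0 ? 7 : daysAhead);
        }
        return DateTime.TryParse(sendDate, out var parsed) ? parsed : null;
    }

    // Accept the name only if it identifies exactly one user; anything else
    // is ambiguous and should be reported back to the requester.
    public static string? ResolveRecipient(
        string name, IReadOnlyList<(string First, string Last, string Email)> users)
    {
        var matches = users
            .Where(u => u.First.Equals(name, StringComparison.OrdinalIgnoreCase)
                     || u.Last.Equals(name, StringComparison.OrdinalIgnoreCase))
            .ToList();
        return matches.Count == 1 ? matches[0].Email : null;
    }
}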

Now let's see how we can improve the design while keeping in mind the two laws: "Don't write code if the model can do it" and "Use models for semantics and intent."

Firstly, I think we could do a better job of understanding the user's intent regarding the date. If the model can handle it, why bother with code? Secondly, if we could make the model determine the exact recipient of the email, implementing it in code would be breaking the first law, right?

The new prompt could look something like this:

Analyze the user request to send an email and generate a JSON object with the following fields: recipient, send_date, and topic.
Available recipients are: 
- John Kowalski 
- Peter Parker 
- Andy Warhol 
- Anne Shirley
- Philip K. Dick 
- Jack Sparrow
- Adam Smith 
  
Today is: June 12, 2023. Use the same format for the `send_date` field.

Example: "Send an email to John on Tuesday to remind him of a meeting about C#" 

JSON Output: 
{ 
  "recipient": "John Kowalski", 
  "send_date": "June 13, 2023", 
  "topic": "Meeting about C#" 
} 

User Request: 
"<user request>" 
  
JSON Output:

This time our response is:

{ 
  "recipient": "Peter Parker", 
  "send_date": "June 14, 2023", 
  "topic": "Meeting about AI" 
}
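
Of course, the date and the team roster wouldn't be hardcoded in the prompt; in a real app the template would be assembled at runtime. A minimal sketch, using C# 11 raw string literals and assuming the recipient list comes from your user store:

using System;
using System.Collections.Generic;
using System.Linq;

public static class PromptBuilder
{
    public static string BuildEmailPrompt(
        string userRequest, IEnumerable<string> recipients, DateTime today)
    {
        var names = string.Join("\n", recipients.Select(r => $"- {r}"));

        // $$""" makes single braces literal, so the JSON example survives;
        // interpolation holes use double braces instead.
        return $$"""
            Analyze the user request to send an email and generate a JSON object
            with the following fields: recipient, send_date, and topic.
            Available recipients are:
            {{names}}

            Today is: {{today:MMMM d, yyyy}}. Use the same format for the `send_date` field.

            Example: "Send an email to John on Tuesday to remind him of a meeting about C#"
            JSON Output:
            { "recipient": "John Kowalski", "send_date": "June 13, 2023", "topic": "Meeting about C#" }

            User Request:
            "{{userRequest}}"

            JSON Output:
            """;
    }
}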

Now our conventional code has much less to do:

  • No need to interpret user-defined date formats.

  • No need to guess if the recipient is mentioned by name or surname.

So, all that's left to do is to write some code to schedule an email to be sent to Peter Parker on June 14, 2023.
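
Here is a toy sketch of that last step; in production you would hand the job to a durable scheduler (Quartz.NET, Hangfire, or a database table plus a worker) rather than an in-process delay, and `sendEmail` stands in for your actual mail-sending code:

using System;
using System.Threading.Tasks;

public static class EmailScheduler
{
    public static async Task ScheduleAsync(
        string recipientEmail, string topic, DateTime sendAt,
        Func<string, string, Task> sendEmail)
    {
        var delay = sendAt - DateTime.Now;
        if (delay > TimeSpan.Zero)
            await Task.Delay(delay); // in-process only; not durable!

        await sendEmail(recipientEmail, $"Reminder: {topic}");
    }
}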

While the date and topic may not always be 100% accurate, the system's performance improves as the model evolves. That's why it's crucial for us, as developers, to rely on the model to handle these aspects and resist our "DIY" impulse.

Dessert

3. Text is the universal wire protocol

A wire protocol, or more precisely, a communication protocol, is a set of rules that enables the exchange of information between two or more entities. In software development, we often use various protocols and data formats to facilitate communication between services and system components. These protocols are typically well-structured, well-described, and well-defined, with tools available to parse, validate, and generate messages.

However, when it comes to LLMs, things work a bit differently. LLMs can not only parse natural-language text but also understand its intent. Translating that text into a structured, machine-readable format can therefore lose information, so it is essential to avoid such lossy translation whenever possible.

To illustrate this, let's revisit our previous example:

Original text: "Send email to Peter on Wednesday to remind him of a meeting about AI"

Converted JSON:

{ 
  "recipient": "Peter Parker", 
  "send_date": "June 14, 2023", 
  "topic": "Meeting about AI" 
}

It is evident that we lose information during the translation. Even if we enhance the JSON with more fields, such as "activity", and ask the LLM to fill them, we still cannot be certain that we have captured the full intent of the original request:

{ 
  "activity": "sendEmail", 
  "recipient": "Peter Parker", 
  "send_date": "June 14, 2023", 
  "topic": "Meeting about AI" 
}

No matter how detailed the JSON becomes, or how many fields we fill in, there is always a risk of missing part of the original message's intent. Therefore, when passing the message on to another LLM or semantic service, the best way to preserve both data and intent is to transmit the original message itself, or at least an LLM-generated prompt that retains the sense of the request.
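
In code, one way to honor this rule is to keep the original text in the envelope that travels between services, treating the structured fields as a convenience rather than the message itself. A sketch, reusing the hypothetical `EmailRequest` record from earlier:

// The structured projection helps conventional code; the original text
// preserves the full intent for any downstream LLM or semantic service.
public record SemanticMessage(
    string OriginalText,      // "Send email to Peter on Wednesday to remind him..."
    EmailRequest? Extracted); // lossy, machine-friendly view of the same request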

Next Steps

Thank you for reading this blog post! We appreciate your time and hope you found it useful.

In the upcoming parts of this series, we will delve deeper into The Schillace Laws, specifically focusing on the group titled "Trade-offs And Challenges When Using LLMs." We will continue to explore the intricacies of working with LLMs in software engineering.

In the meantime, we encourage you to contribute to the open-source .NET AI orchestrator Semantic Kernel or check out this free course on building AI-powered apps by John Maeda and Sam Schillace.

We would love to hear your thoughts and answer any questions you may have. Feel free to reach out to us via email at contact@appstream.studio or connect with us on LinkedIn.

Stay tuned and happy coding!

