Prompt Engineering
Stepping back for a moment, Large Language Models aren't all that complex. They are text completion engines, and their responses are, by and large, not surprising.
If I enter the following prompt
> A long time ago in a galaxy far, far away
I bet you can guess, approximately, what the LLM will respond with:
< , there was a great disturbance in the Force.
It even popped in a comma to make the sentence grammatically correct.
At a Temperature of 0.5:
< , a great war raged between the Rebel Alliance and the evil Galactic Empire.
And a Temperature of 1.5:
< It was a time of great turmoil in the galaxy. The evil Empire, led by the
Sith Lord Darth Vader, had seized control and was ruling with an iron fist.
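What does Temperature actually do? Conceptually, it rescales the model's next-token scores (logits) before they are turned into probabilities: low temperatures sharpen the distribution toward the most likely token, high temperatures flatten it so unlikely tokens get picked more often. Here is a minimal sketch with made-up logits for four candidate tokens; the numbers are illustrative, not from a real model:

```python
import math

def softmax_with_temperature(logits, temperature):
    """Divide logits by temperature, then normalize into probabilities."""
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical logits for four candidate next tokens.
logits = [4.0, 3.0, 2.0, 1.0]

cool = softmax_with_temperature(logits, 0.5)   # sharper: top token dominates
warm = softmax_with_temperature(logits, 1.5)   # flatter: more variety

print([round(p, 3) for p in cool])
print([round(p, 3) for p in warm])
```

At 0.5 the top token takes most of the probability mass, which is why the completion stays close to the famous line; at 1.5 the mass spreads out, and the model wanders into more elaborate continuations.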
The responses are predictable, albeit different. And resubmitting the same input, at the same settings, will produce a different yet still similar response:
< The galaxy was in a state of unrest as various factions vied for power and control.
The Jedi Order, once a peaceful and revered group of guardians, had all but disappeared,
driven into hiding by the malevolent Sith.
This is interesting, but it runs counter to the software development norms of the past. A basic premise of functional programming is that given the same inputs, we can expect the exact same output, as with a mathematical function. The reliability and scalability of systems are founded on this basic premise.
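The root of the difference is sampling. A toy illustration, using an assumed next-token distribution rather than a real model: picking the single most likely token (effectively Temperature 0) is a pure function, while drawing from the distribution is not.

```python
import random

# Toy next-token distribution (assumed, not from a real model).
tokens = [", there was", ", a great war raged", "It was a time"]
probs  = [0.6, 0.3, 0.1]

def greedy(tokens, probs):
    # Temperature -> 0: always pick the most likely token (deterministic).
    return tokens[probs.index(max(probs))]

def sample(tokens, probs):
    # Positive temperature: draw from the distribution (non-deterministic).
    return random.choices(tokens, weights=probs, k=1)[0]

print(greedy(tokens, probs))                        # identical every run
print({sample(tokens, probs) for _ in range(20)})   # varies run to run
```

Same inputs, different outputs; the function-like guarantee only returns when sampling is removed.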
So LLM applications operate under different norms, but as the examples above show, we can still mold the outputs and keep the technology under control.
Prompt engineering
Prompt engineering is the process of getting what we want from LLMs by controlling or directing the inputs to the model. We have some levers at our disposal, which include:
- what input is used
- how specific the input is, including injecting obvious markers such as "a galaxy far, far away"
- how we structure the input
- how we delimit sections of the input
- what instructions we give
- what examples we include
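Several of these levers compose naturally into a single template: instructions up front, delimited sections, and worked examples before the real input. A sketch of that shape, where the instruction, delimiters, and example are all hypothetical choices rather than a prescribed format:

```python
# A hypothetical prompt template combining instructions, delimited
# sections, and few-shot examples -- the point is the layered structure.
def build_prompt(instruction, examples, user_input):
    example_block = "\n".join(
        f"Input: {inp}\nOutput: {out}" for inp, out in examples
    )
    return (
        f"{instruction}\n\n"
        f"### Examples ###\n{example_block}\n\n"
        f"### Input ###\n{user_input}\n\n"
        f"### Output ###"
    )

prompt = build_prompt(
    instruction="Continue the opening line in the style of a space opera.",
    examples=[
        ("Call me", " Ishmael. Some years ago, never mind how long precisely"),
    ],
    user_input="A long time ago in a galaxy far, far away",
)
print(prompt)
```

The `###` delimiters are one convention among many; what matters is that the model can tell the instructions, the examples, and the actual input apart.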