The future is not prompt engineered

Since generative AI hit the mainstream, those close to it have considered prompt engineering a valuable skill. Some, at times, have labelled it the future’s most sought-after skill.

This perspective gets less airtime now that Adobe, Google, Apple, and other companies have shifted the use of AI toward more consumer-friendly interfaces. However, it’s still an oft-used term… so it’s worth pausing to consider where prompt engineering is useful and, more specifically, where it won’t be.

Let’s have a look at the dwindling relevance of prompt engineering and why its emergence into the general zeitgeist was inevitably short-lived. Along the way, we’ll look at some of the deficiencies that make prompt engineering appealing, while also asserting that, in most situations, it’s an inefficient use of a powerful tool: a usage that puts more work on humans at a time when we can offload that work onto machines.

Even before the advent of more intuitive interfaces, when AI was little more than a chat box offering minimal instruction or aid to the user, its focus on natural language input had already solidified it as the most intuitive and adaptable computer interface users had ever experienced.

When interacting with these interfaces, “prompting” describes the process of posing a request to an AI, and “prompt engineering” is the art of constructing that request carefully, so as to elicit the exact response desired.

In most incarnations, this process occurs within a text-based chat box, but with multi-modal advances, it now extends to interaction using audio and images as well. This, combined with improved interfaces, means the intuitiveness of AI is even more evident now than it was previously, regardless of where we think AI interfaces can still be improved.

Users can interact with these models in almost any language, with any level of jargon (or none at all). They can talk around a subject whose name they don’t recall, or ask specifics based on an existing level of expertise. The relational mapping of AI neural nets means the model adapts to you in a way that no interface has ever been able to before. And multi-modality extends this to individuals who aren’t literate in the written word or aren’t able to use keyboards.

In this way, AI systems reduce the knowledge and complexity placed on the human operator.

Yet in a surprisingly contradictory turn, prompt engineering has arisen to teach us how to adapt to AI systems.

How can a specialised prompt engineering skillset be the future for a technology whose most significant consumer offering is to reduce the need for specialised skillsets?

Prompt engineering’s relevance

It should be noted here that developers of products that incorporate AI tooling will always need to pay special attention to prompt structure, and that bad actors (or curious ones) will continue to misuse AI through prompt injection. Both of these could fall under the category of “prompt engineering” and will remain very relevant.

Those developing products that will use AI must design for the asynchronous nature of the product’s use, rather than simply considering it during implementation. A game designer, for example, might build in a connection to AI for generating character dialog on the fly, but they will not be present when the user is playing the game months or years later. They therefore have to carefully engineer how the prompts will be constructed, to ensure that whenever they execute, they always produce an appropriate result without the need for review.
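
By way of illustration, here’s a minimal sketch, in Python, of what that kind of up-front prompt engineering might look like. The game, the function, and the constraints are all hypothetical; the point is that the guardrails have to live inside the prompt template itself, because no human will be present to review what it produces.

# A hypothetical dialog-prompt template for a game that generates character
# lines at runtime: the constraints are engineered into the prompt because
# no human will review the output when it executes, months after release.

def build_dialog_prompt(character: str, mood: str, scene: str, player_line: str) -> str:
    """Construct a tightly constrained dialog prompt for unsupervised runtime use."""
    return (
        f"You are {character}, currently feeling {mood}, in this scene: {scene}.\n"
        f'The player says: "{player_line}"\n'
        "Reply in character with one or two sentences of spoken dialog only. "
        "Never mention being an AI, never break character, and never reference "
        "events outside the scene. If the player's line is abusive or off-topic, "
        "deflect in character rather than engaging."
    )

# This exact template will run unsupervised, long after the designer has moved on.
print(build_dialog_prompt("Mira the blacksmith", "wary", "a village under curfew", "Who do you work for?"))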

Building things that will use AI, however, has very different needs from using AI to build things.

This article focusses on the latter: the use of AI tools from a consumer perspective, where the output doesn’t include an ongoing live link to AI, but rather AI is simply used in the initial construction process.

Prompting types compared

From 2022, two main categories of generative AI gained popularity…

  • Large Language Models (LLMs), whose offering was to parse language and generate relevant output, and…
  • Generative Adversarial Networks (GANs) and Diffusion Models, whose offering was to generate imagery.

For LLMs, while careful prompting can certainly produce better results, their most fundamental purpose is already perfectly aligned with removing the need for technical expertise and adapting to your communication.

An example LLM prompt:

Pretend you’re a modern-day physicist. Tell me about quantum physics. Include multiple perspectives on any currently debated theories.

For image generation models, however, prompt engineering provided more value. This was (and for some still is) primarily due to a technical interface masquerading as a basic text prompt: one that required the user to have specific expertise in the syntax and semantics needed to produce the desired output.

An example image generation prompt:

aeroplane::4, cloudy sky::1, water colour --ar 16:9

In this simplified prompt example, designed for Midjourney, the double-colon and number suffixes indicate the emphasis the AI should place on each element of the image, and --ar 16:9 tells Midjourney what aspect ratio the image should be created at.

Prompts also get far more complicated and detailed than these, yet even these basic examples demonstrate that language models and image models have come with different strengths. Image generators, for example, began by offering visual output at the expense of learning new syntax, while large language models excelled at adapting to and parsing the native languages we already knew.

It’s no surprise, then, that the need for prompt engineering arose to help bridge this gap, as well as to mitigate other quirks. But it’s also no surprise that multi-modal models eventually arose, solving this initial discrepancy by combining the two modes.

An example image prompt now, using more modern tools:

Create a water colour painting of a close-up plane against a cloudy sky.

Isn’t terminology important?

You might argue that no matter the advancement, knowledge of domain-specific terminology is still necessary for good results. For instance, understanding terms like “shallow depth of field” or “knolling” would allow you to specify what you’re imagining more clearly.

Succinct and well-established terms do help reduce ambiguity, but because AI models map relationships between words and phrases, asking for a very blurry background or a top-down view of items laid out in an organised fashion will still produce results comparable to ‘shallow depth of field’ and ‘knolling’.

As the contextual and memory capabilities of AI improve (and are better capitalised on by product developers), AI responses could also inform you of the terminology or effects you might have intended, or should consider, and then prompt you with an opportunity to clarify for more refined results.
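
As a rough sketch of what that clarify-then-refine flow could look like, here’s an example using the OpenAI Python SDK (any chat-style API would do); the model choice and prompt wording are illustrative assumptions, not a recipe.

# The model surfaces the terminology the user may have meant, then invites
# clarification, rather than demanding the user arrive with the right jargon.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

rough_request = "a photo where the background is really blurry and the subject stands out"

response = client.chat.completions.create(
    model="gpt-4o",  # illustrative model choice
    messages=[{
        "role": "user",
        "content": (
            "Restate this image request using precise photographic terminology, "
            f"then list anything I should clarify before generating: {rough_request}"
        ),
    }],
)
print(response.choices[0].message.content)  # e.g. it may suggest 'shallow depth of field'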

Ambiguity still remains

Even as AI’s ability to interpret us improves, however, we’re still left with the necessary acknowledgement that most human communication, as accessible as it can be, is vague. Even to other humans.

Programming languages, and other technical languages (like maths, scientific notation, and formal logic), were designed to help solve this. The standardisation and rigidity of these languages allow for the eradication of ambiguity, and better facilitate the design and construction of complex systems.

So no matter how perfect AI models get, if they’re based on natural language (or any other form of accessible human communication), ambiguity will persist.

It won’t then be the fault of the technology; it will be reflective of our communication.

Even as AI improves, therefore, careful clarification is still necessary to address the ambiguities that more compute or better algorithms simply will not overcome.

The best solution?

Prompt engineering’s answer to this is the perfectly crafted prompt. A prompt that draws the perfect response, however, requires a human to put in the work to define a question that perfectly elicits that response. This drives prompt construction back toward technical expertise, and it requires significant knowledge-work from a human in order to reduce the amount of work done by a machine.

That’s a horrible misuse of resources, and it goes against the primary consumer offering that AI tools give us: the ability to reduce the knowledge and complexity placed on the human operator.

This is a valid ask for simple prompts like “Describe the most common sailing knots” or “What are the capital cities of every country on the planet”. These kinds of prompts rely on an easily manageable level of specificity from the user in order to elicit fairly deterministic answers.

More complex tasks, however, that capitalise on the generative power of AI more clearly, might represent many decisions and assumptions wrapped into one prompt, such as “Build me an app that does A, B, & C”.

…But how should A work? What platform should the app be built on? What should happen when B contradicts C? Should it be tailored for novices or experts? What colour scheme should it use?

A complex task like this requires a potentially endless list of specifications for the AI to produce the correct, deterministic result. The model therefore needs to either make assumptions, or the user needs to write a prompt that takes on so much of that work it bears more resemblance to an SRS (software requirements specification) document than to the accessible, informal input that AI facilitates.

And even when working with human devs, this system design can sometimes constitute a significant portion of the overall work.

Note: It may be easy to think we’ve strayed into developer implementations of AI here. We haven’t. Developers are also consumers of other AI tools while developing their products.

More importantly, however, from moderate tasks to the most complex, prompt engineering is unrealistically reliant on the assumption that the user knows the exact output they want to receive in the first place, which is often not the case.

As tasks increase in complexity, therefore, it’s important to ask…
Why would we even need the output of an AI model to be right on the first go?

Prompting can be Agile or Waterfall

In the past, engineering teams spent decades working with Waterfall approaches to product development before switching to more Agile-oriented approaches. Debatably, Agile can be better, not because it’s quicker than Waterfall, but because it doesn’t assume the right answer from the beginning. Instead, Agile embraces trials, changes, and pivoting during implementation, so that it can reach a more useful result, rather than just a pre-planned one.

Agile, however, can be slower. Humans take time to do the work, and doing, testing, changing, and redoing takes longer than simply building to a comprehensive plan outlined at the start.

But even so, iterating to the right goal slowly is almost always more useful than investing significantly at the planning stage, with less information, and then building something that may ultimately miss the mark.

And yet… prompt engineering seems to forget this.

When enlisting AI as part of the process, prompt engineering seems to ask us to front-load the work in the planning phase: to invest more time in definition and specifications up-front (when work is done by humans), so that we can make the implementation phase more efficient (when work is done by AI).

Seems rather inefficient, right? And almost inhumane.

Even if just considering efficiency, why would we go back to a Waterfall approach just when the resource cost of iterating in the implementation phase plummets to zero?

What’s wrong with asking a slightly malformed question and getting an incorrect result, if the cost of trying again is negligible? Agile’s main deficiency is evaporating, so is now really the time to err against it?

AI tools don’t require days to make a change, don’t charge by the hour, and don’t get frustrated by changing product requirements. So why take on the responsibility of creating the perfect brief when you can iterate to perfection alongside the machine?
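
Here’s a minimal sketch of that iteration loop, again assuming the OpenAI Python SDK; the task and model are placeholders, and the loop itself is the point: each cheap, machine-side revision replaces hours of human-side specification.

from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# Start with a rough, informal request; no up-front spec required.
messages = [{"role": "user", "content": "Draft a landing page headline for a budgeting app."}]

while True:
    response = client.chat.completions.create(model="gpt-4o", messages=messages)
    draft = response.choices[0].message.content
    print(f"\nCurrent draft:\n{draft}")
    feedback = input("Refinement (leave blank to accept): ").strip()
    if not feedback:
        break
    # Keep the whole exchange in context so each revision builds on the last.
    messages.append({"role": "assistant", "content": draft})
    messages.append({"role": "user", "content": feedback})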

Your future is not in prompt engineering

Prompt engineering started as a necessary requirement of more limited AI tooling that placed the onus of quality on consumers. So it’s understandable that it’s entered the zeitgeist as far as it has.

The ongoing power of these tools, however, lies in how they adapt to us, not the other way around. And even if they’re limited now, as others have put it quite succinctly, “The capability of generative AI is currently the worst it will ever be”.

By extension, the need for prompt engineering is currently the highest it will ever be. So for consumers, prompt engineering is not the future. And if you’re a developer or a designer, you’re still a consumer most of the time.


