
Prompt Engineering

The practice of crafting inputs to AI models to elicit better, more accurate, or more useful outputs.

Prompt engineering is the skill of writing inputs — prompts — that guide a language model to produce better outputs. Because LLMs are highly sensitive to how a question is phrased, small changes in wording can dramatically change the quality, format, and accuracy of responses.

Good prompt engineering involves being specific about the desired output format, providing context, giving examples (few-shot prompting), specifying the persona or tone, and breaking complex tasks into steps. Advanced techniques like chain-of-thought prompting ask the model to reason step-by-step before answering, which significantly improves accuracy on complex tasks.

Simple example: "Tell me about Paris" yields a vague, generic answer. "Write a 3-paragraph summary of Paris for a first-time tourist, focusing on top attractions, local food, and transport tips" yields a specific, useful one. Same model, very different output.

Core Techniques

  • Zero-shot — ask directly without examples
  • Few-shot — provide 2–5 examples before the actual task
  • Chain-of-thought — instruct to reason step by step
  • Role prompting — assign a persona ("You are an expert lawyer...")
  • System prompts — set persistent context and constraints
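The techniques above can be sketched as plain string templates. This is a minimal illustration, not a library API; the helper names and the example task are hypothetical:

```python
# Illustrative prompt-building helpers for the core techniques.
# No model is called here; only the prompt strings are constructed.

def zero_shot(task: str) -> str:
    """Zero-shot: ask directly, with no examples."""
    return task

def few_shot(task: str, examples: list[tuple[str, str]]) -> str:
    """Few-shot: prepend a handful of input/output examples before the task."""
    shots = "\n".join(f"Input: {q}\nOutput: {a}" for q, a in examples)
    return f"{shots}\nInput: {task}\nOutput:"

def chain_of_thought(task: str) -> str:
    """Chain-of-thought: instruct the model to reason before answering."""
    return f"{task}\nLet's think step by step."

def role_prompt(persona: str, task: str) -> str:
    """Role prompting: assign a persona before stating the task."""
    return f"You are {persona}.\n{task}"

def with_system(system: str, user: str) -> list[dict]:
    """System prompt: persistent context sent alongside the user message
    (shown here in the common chat-messages shape)."""
    return [
        {"role": "system", "content": system},
        {"role": "user", "content": user},
    ]

# Example: a few-shot sentiment-classification prompt.
prompt = few_shot(
    "The service was slow but the food was great.",
    examples=[
        ("I loved every minute of it.", "positive"),
        ("Total waste of money.", "negative"),
    ],
)
print(prompt)
```

The pattern in `few_shot` is the key one: the examples teach the model the expected format, and ending the prompt with `Output:` invites it to complete the pattern rather than chat about it.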

Prompt engineering is increasingly a core skill for developers, writers, marketers, and researchers. As models improve and support longer context windows, the emphasis shifts from clever tricks to clear, structured communication. The best prompt engineers treat the model as a very capable but literal collaborator.
