Phase 1: Foundations · Step 2 of 14 · Beginner · 1-2 weeks

Prompt Engineering

Master the art of communicating with LLMs

Chain-of-Thought · Few-shot prompting · Role prompting · Output formatting · Prompt iteration

Getting Started

Prompt engineering is the skill that separates agents that work from agents that stumble. Every interaction your agent has with an LLM is ultimately a prompt, so learning to write effective prompts is foundational to everything else on this roadmap.

The core insight is that LLMs are pattern-completion engines. Your job is to set up a pattern that leads to the output you want. Start by experimenting directly in a chat interface to build intuition, then move to structured prompts you can use programmatically.

There are four major techniques to master: zero-shot prompting, few-shot prompting, chain-of-thought reasoning, and role prompting. Each has strengths for different situations.

Key Concepts

Chain-of-thought prompting is the most important technique for agents. By asking the model to reason step-by-step, you dramatically improve accuracy on complex tasks:

Analyze this security log entry and determine if it represents a threat.

Think through this step by step:
1. Identify the source IP and action taken
2. Check if the pattern matches known attack signatures
3. Assess the severity level
4. Provide your verdict with confidence level

Log entry: {log_entry}
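Once a chain-of-thought template like the one above works in a chat interface, you can fill it programmatically. A minimal sketch (the `COT_TEMPLATE` constant and `build_cot_prompt` helper are illustrative names, not from any particular library):

```python
# Minimal sketch: substitute a concrete log entry into the
# chain-of-thought template shown above.
COT_TEMPLATE = """Analyze this security log entry and determine if it represents a threat.

Think through this step by step:
1. Identify the source IP and action taken
2. Check if the pattern matches known attack signatures
3. Assess the severity level
4. Provide your verdict with confidence level

Log entry: {log_entry}"""


def build_cot_prompt(log_entry: str) -> str:
    """Return the chain-of-thought prompt with the log entry filled in."""
    return COT_TEMPLATE.format(log_entry=log_entry)


prompt = build_cot_prompt("192.168.1.5 FAILED_LOGIN x50 in 60s")
```

Keeping the template as a named constant rather than an inline string makes it easy to version, test, and reuse across your agent code.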

Few-shot prompting gives the model concrete examples to follow. This is essential when you need consistent output formatting:

Extract structured data from the product description.

Example input: "The Nike Air Max 90 in white, size 10, retails for $130"
Example output: {"brand": "Nike", "model": "Air Max 90", "color": "white", "size": "10", "price": 130}

Example input: "Adidas Ultraboost 22, black colorway, $190, size 9.5"
Example output: {"brand": "Adidas", "model": "Ultraboost 22", "color": "black", "size": "9.5", "price": 190}

Now extract from: {input_text}
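The example pairs above can be stored as data and assembled into the prompt at runtime, which keeps them easy to add to or reorder. A hedged sketch (the `EXAMPLES` list and `build_few_shot_prompt` helper are illustrative, not a standard API):

```python
import json

# Example (input, expected output) pairs, stored as data so new
# examples can be appended without editing the prompt string.
EXAMPLES = [
    ("The Nike Air Max 90 in white, size 10, retails for $130",
     {"brand": "Nike", "model": "Air Max 90", "color": "white",
      "size": "10", "price": 130}),
    ("Adidas Ultraboost 22, black colorway, $190, size 9.5",
     {"brand": "Adidas", "model": "Ultraboost 22", "color": "black",
      "size": "9.5", "price": 190}),
]


def build_few_shot_prompt(input_text: str) -> str:
    """Assemble the few-shot extraction prompt from the example pairs."""
    lines = ["Extract structured data from the product description.", ""]
    for text, output in EXAMPLES:
        lines.append(f'Example input: "{text}"')
        # json.dumps guarantees the example outputs are valid JSON,
        # reinforcing the format you want the model to imitate.
        lines.append(f"Example output: {json.dumps(output)}")
        lines.append("")
    lines.append(f"Now extract from: {input_text}")
    return "\n".join(lines)
```

Because the examples demonstrate JSON, you can usually parse the model's reply with `json.loads` and retry or repair on failure.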

Role prompting shapes the model's behavior by establishing expertise and constraints. This is how you control the personality and boundaries of your agents:

You are a senior database administrator with 15 years of experience.
You only provide SQL that is compatible with PostgreSQL 15+.
When asked about destructive operations, always include a warning and suggest a backup first.

Hands-On Practice

The best way to improve at prompting is systematic experimentation. Pick a task, write a prompt, test it with multiple inputs, and iterate. Keep track of what works and what does not. Common failure modes to watch for:

  • Ambiguous instructions lead to inconsistent outputs. Be specific about format, length, and structure.
  • Missing constraints let the model hallucinate or go off-topic. Always specify what the model should not do as well as what it should.
  • Over-complicated prompts can confuse the model. If your prompt is longer than a page, consider breaking the task into smaller steps.
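Systematic experimentation is easier with even a tiny evaluation loop: run each prompt variant over a fixed set of test cases and score the outputs. A sketch under stated assumptions (`run_model` is a stand-in for whatever LLM call you use; the substring check is a deliberately crude scoring rule):

```python
def run_model(prompt: str) -> str:
    """Stand-in for a real LLM call; replace with your provider's client."""
    return "stub response"


def evaluate_prompt(template: str, cases: list) -> float:
    """Score a prompt template: fraction of (input, expected_substring)
    cases whose model output contains the expected substring."""
    hits = 0
    for input_text, expected in cases:
        output = run_model(template.format(input=input_text))
        if expected.lower() in output.lower():
            hits += 1
    return hits / len(cases)
```

Running `evaluate_prompt` for each variant in your library gives you a number to compare instead of a gut feeling, which is what makes iteration systematic.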

Build your prompt library as a living document that grows with your experience. You will reuse these patterns constantly when building agents.

Exercises

Create a Prompt Library

Build a collection of 5+ tested prompts for different use cases: summarization, data extraction, code review, creative writing, and structured analysis. Document each prompt with its purpose, expected input format, and example outputs.

Knowledge Check

When should you use few-shot prompting instead of zero-shot?

Milestone Project

Create a prompt library with 5+ tested prompts for different use cases (summarization, analysis, code review, writing, data extraction)