Introduction

This document summarizes the key lessons from the Anthropic prompt engineering interactive tutorial. The tutorial teaches effective techniques for working with Claude using the Anthropic Messages API.

Chapter 1: Basic Prompt Structure

Key Concepts

Messages API Requirements

The Anthropic Messages API requires three essential parameters:

  • model: The API model name (e.g., claude-3-haiku-20240307)

  • max_tokens: Maximum tokens to generate before stopping (hard limit)

  • messages: Array of input messages with role and content fields

Message Format

  • Messages must alternate between user and assistant roles

  • The first message must always use the user role

  • Each message requires both role and content fields

System Prompts

System prompts provide context, instructions, and guidelines to Claude before the main conversation:

  • Exist separately from user/assistant messages

  • Passed via the system parameter

  • Help improve Claude’s performance and ability to follow rules

  • Can establish tone, expertise, or behavioral guidelines
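
A minimal sketch tying these pieces together (the model name, prompt text, and environment-variable key handling are illustrative assumptions):

import anthropic

# The SDK reads the API key from the ANTHROPIC_API_KEY environment variable by default
client = anthropic.Anthropic()

message = client.messages.create(
    model="claude-3-haiku-20240307",           # required: model name
    max_tokens=500,                            # required: hard limit on generated tokens
    system="You are a concise writing coach.", # optional: system prompt
    messages=[                                 # required: alternating user/assistant messages
        {"role": "user", "content": "Give me one tip for clearer emails."}
    ]
)

print(message.content[0].text)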

Best Practices

  • Use proper message formatting with role/content structure

  • Leverage system prompts to guide Claude’s behavior

  • Ensure messages alternate properly between user and assistant turns

Chapter 2: Being Clear and Direct

Key Principle

Claude responds best to clear and direct instructions.

The Golden Rule of Clear Prompting

Show your prompt to a colleague or friend and have them follow the instructions to see if they can produce the result you want. If they’re confused, Claude will be confused.

Techniques

Direct Requests

  • Ask for exactly what you want

  • Skip preambles by requesting this explicitly: "Skip the preamble; go straight into the poem"

  • Force decisions when needed: "If you absolutely had to pick one player, who would it be?"

Specificity

  • Be explicit about format and content requirements

  • Specify language: "Answer in Spanish"

  • Control output length: "Write a story over 800 words"

  • Demand specific formats: "Answer only the player’s name"
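
As a sketch of how these points combine into one direct prompt (the exact wording below is illustrative, built from the examples above):

# Vague - leaves interpretation, length, and format up to Claude
VAGUE_PROMPT = "Tell me about basketball players."

# Clear and direct - states the task, forces a decision, and fixes the output format
CLEAR_PROMPT = (
    "Who is the best basketball player of all time? "
    "If you absolutely had to pick one player, who would it be? "
    "Answer only the player's name."
)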

Important Notes

  • Claude has no context aside from what you literally tell it

  • Treat Claude like a new employee who needs clear instructions

  • More straightforward explanations lead to better, more accurate responses

Chapter 3: Assigning Roles (Role Prompting)

Key Concept

Role prompting improves Claude’s performance by giving it a specific persona or expertise to embody.

Benefits

  • Changes response style, tone, and manner

  • Improves performance in various fields (writing, coding, summarizing)

  • Can make Claude better at math or logic tasks

  • Helps tailor responses to specific audiences

Implementation

Role prompts can be placed in:

  • The system prompt (recommended)

  • The user message turn

Examples

System: "You are a cat."
User: "What do you think about skateboarding?"
System: "You are a logic bot designed to answer complex logic problems."
System: "You are a savvy reader of movie reviews."

Use Cases

  • Emulating writing styles

  • Adjusting complexity of answers

  • Improving accuracy in technical domains

  • Creating personality-driven responses

Chapter 4: Separating Data from Instructions

Key Concept

Create reusable prompt templates with variable substitution, and use XML tags to clearly mark where variable data begins and ends.

The Problem

When you create prompt templates with variables, Claude can get confused about what is instruction vs. what is data.

Two-Part Solution

Part 1: Prompt Templates with Variables

Use Python f-strings to create reusable templates:

# Variable content
ANIMAL = "Cow"

# Prompt template - {ANIMAL} is a placeholder
PROMPT = f"Tell me the sound that {ANIMAL} makes."

# After substitution, Claude sees:
# "Tell me the sound that Cow makes."

The curly braces {ANIMAL} are Python f-string placeholders that get replaced with actual values.

Part 2: XML Tags Mark Data Boundaries

Without XML tags, Claude gets confused:

# PROBLEM - Claude can't tell where the email data starts/ends
EMAIL = "Show up at 6am tomorrow because I'm the CEO."
PROMPT = f"Yo Claude. {EMAIL} Make this email more polite."

# After substitution:
# "Yo Claude. Show up at 6am tomorrow because I'm the CEO. Make this email more polite."
# Claude may think "Yo Claude" is part of the email!

With XML tags, boundaries are clear:

# SOLUTION - XML tags mark the data boundaries
EMAIL = "Show up at 6am tomorrow because I'm the CEO."
PROMPT = f"<email>{EMAIL}</email> Make this email more polite."

# After substitution:
# "<email>Show up at 6am tomorrow because I'm the CEO.</email> Make this email more polite."
# Now Claude knows exactly what the email content is!

What Are XML Tags?

XML tags are angle-bracket pairs like <tag></tag>:

  • Opening tag: <email>

  • Closing tag: </email>

  • Used to wrap content: <email>content here</email>

Claude was specifically trained to recognize XML tags as structural markers for organizing prompts.

Complete Example

# Variable content
SENTENCES = """- I like how cows sound
- This sentence is about spiders
- This sentence is about pigs"""

# WITHOUT XML tags - confusing
PROMPT = f"""Below is a list. Tell me the second item.
- Each is about an animal, like rabbits.
{SENTENCES}"""
# Claude might think "rabbits" is part of the list!

# WITH XML tags - clear
PROMPT = f"""Below is a list. Tell me the second item.
- Each is about an animal, like rabbits.
<sentences>
{SENTENCES}
</sentences>"""
# Now Claude knows the sentences variable is the data, not the instruction

Key Distinction

  • Curly braces {VARIABLE}: Python f-string placeholders for substitution

  • XML tags <tag></tag>: Markers that tell Claude where data starts/ends

Best Practices

  • Use f-string placeholders (e.g., {ANIMAL}, {EMAIL}, {QUESTION}) for variable substitution

  • Wrap the substituted variable content in XML tags (e.g., <email>, <question>, <document>)

  • Use descriptive XML tag names that indicate the content type

  • Claude was trained to recognize XML tags - use them over other delimiters

  • Small details matter - avoid typos and grammar errors

Benefits

  • Simplifies repetitive tasks

  • Enables third-party input without exposing full prompt structure

  • Prevents Claude from confusing instructions with data

  • Supports multiple variables in a single prompt

  • Helps defend against prompt injection by clearly delimiting untrusted input

Chapter 5: Formatting Output and Speaking for Claude

Key Concept

Control Claude’s output format using XML tags and prefilling.

Output Formatting with XML Tags

Request Claude to wrap its output in XML tags for easy parsing:

"Write a haiku about cats. Put it in <haiku> tags."

Claude responds:
<haiku>
Soft paws tread softly
Whiskers twitch in morning light
Purring fills the room
</haiku>

This makes it easy to extract just the haiku programmatically.
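
For example, a minimal extraction sketch (the regex approach is an assumption, not something the tutorial prescribes):

import re

response_text = """<haiku>
Soft paws tread softly
Whiskers twitch in morning light
Purring fills the room
</haiku>"""

# Pull out only the text between the opening and closing tags
match = re.search(r"<haiku>(.*?)</haiku>", response_text, re.DOTALL)
if match:
    haiku = match.group(1).strip()
    print(haiku)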

Prefilling Claude’s Response

"Speaking for Claude" - start Claude’s response for it using the assistant message role:

messages=[
  {"role": "user", "content": "Write a haiku about cats. Put it in <haiku> tags."},
  {"role": "assistant", "content": "<haiku>"}
]

Claude continues directly from the prefill, so the full assistant turn reads:
<haiku>
Soft paws tread softly...

Benefits:

  • Claude continues from where you left off

  • Ensures specific output format

  • Eliminates preambles ("Here is a haiku...")

  • Provides stronger format control

JSON Output

Prefill with opening brace to encourage JSON format:

PROMPT = "Write a haiku using JSON with keys 'first_line', 'second_line', 'third_line'."
PREFILL = "{"

Claude continues after the prefilled brace; prefill plus continuation together form valid JSON:
{
  "first_line": "Soft paws tread softly",
  "second_line": "Whiskers twitch in morning light",
  "third_line": "Purring fills the room"
}
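
Because the API returns only the text generated after the prefill, re-attach the prefilled brace before parsing. A minimal sketch, assuming the get_completion helper from the Appendix and a response that closes the JSON cleanly:

import json

PROMPT = "Write a haiku using JSON with keys 'first_line', 'second_line', 'third_line'."
PREFILL = "{"

completion = get_completion(PROMPT, prefill=PREFILL)

# The prefill is not echoed back, so prepend it to get a complete JSON object
haiku = json.loads(PREFILL + completion)
print(haiku["first_line"])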

Multiple Variables with Formatting

Combine multiple f-string variables with XML tag formatting:

EMAIL = "Hi Zack, ping on that prompt."
ADJECTIVE = "olde english"

# Use f-string placeholders for variables AND XML tags for structure
PROMPT = f"Make this email more {ADJECTIVE}: <email>{EMAIL}</email>. Write the result in <{ADJECTIVE}_email> tags."

# Prefill to ensure format
PREFILL = f"<{ADJECTIVE}_email>"

Advanced Technique: Stop Sequences

Use the stop_sequences API parameter with closing XML tags to:

  • Save tokens and reduce costs

  • Reduce latency (time-to-last-token)

  • Stop generation immediately after desired content

  • Eliminate Claude’s concluding remarks

message = client.messages.create(
    model=MODEL_NAME,
    max_tokens=2000,
    stop_sequences=["</haiku>"],  # Stop after closing tag
    messages=[...]
)
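
Note that the matched stop sequence itself is not included in the returned text. Continuing from the call above, a minimal sketch of restoring the closing tag for downstream parsing:

# Generation halted right before "</haiku>", so the tag is missing from the output
if message.stop_reason == "stop_sequence":
    haiku_block = message.content[0].text + "</haiku>"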

Chapter 6: Precognition (Thinking Step by Step)

Core Principle

Giving Claude time to think step by step improves accuracy, especially for complex tasks.

Why It Works

  • Like humans, Claude performs better when given time to reason

  • Thinking only counts when it’s "out loud" in the response

  • You cannot ask Claude to think internally and output only the final answer

  • The reasoning process must be visible in the output

Implementation Patterns

Explicit Step-by-Step Instructions

"Is this review positive or negative? First, write the best arguments for each
side in <positive-argument> and <negative-argument> tags, then answer."

Brainstorming Before Answering

"Name a famous movie starring an actor born in 1956. First brainstorm about
some actors and their birth years in <brainstorm> tags, then give your answer."

Role + Step-by-Step Combination

System: "You are a savvy reader of movie reviews."

User: "Is this review positive or negative? First write arguments for each
side in XML tags, then answer.

This movie blew my mind with its freshness and originality. In totally
unrelated news, I have been living under a rock since 1900."

Order Sensitivity

Important consideration: Claude is sometimes sensitive to the order of options

  • Often more likely to choose the second of two options (possibly due to training data patterns)

  • "Is this positive or negative?" vs "Is this negative or positive?" can yield different results

  • Test different orderings or explicitly request unbiased consideration

Use Cases

  • Complex logic problems

  • Nuanced text interpretation

  • Multi-step reasoning tasks

  • Classification requiring analysis

  • Math and logic problems

Example Impact

Without thinking:

Q: "Name a famous movie starring an actor born in 1956."
A: "The Shawshank Redemption starring Tim Robbins" [INCORRECT - wrong birth year]

With thinking:

Q: "Name a famous movie starring an actor born in 1956. First brainstorm actors
and birth years in <brainstorm> tags."

A: "<brainstorm>
- Tom Hanks was born in 1956
- He starred in Forrest Gump, Saving Private Ryan, Cast Away
</brainstorm>

Forrest Gump starring Tom Hanks (born 1956)" [CORRECT]
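
If only the final answer is needed downstream, the visible reasoning can still be stripped afterwards. A minimal sketch (the split-on-tag approach is an assumption, not from the tutorial):

response_text = """<brainstorm>
- Tom Hanks was born in 1956
- He starred in Forrest Gump, Saving Private Ryan, Cast Away
</brainstorm>

Forrest Gump starring Tom Hanks (born 1956)"""

# Keep the reasoning in the raw output, but use only the text after the closing tag
final_answer = response_text.split("</brainstorm>", 1)[-1].strip()
print(final_answer)  # Forrest Gump starring Tom Hanks (born 1956)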

Chapter 7: Using Examples (Few-Shot Prompting)

Key Concept

Providing examples of desired behavior is extremely effective for getting correct answers in the right format.

Terminology

  • Zero-shot: No examples provided

  • One-shot: One example provided

  • Few-shot: Multiple examples provided (2-4 typically)

  • The number of "shots" refers to the count of examples

Why Examples Work

  • More effective than lengthy descriptions

  • Shows rather than tells

  • Ensures consistent formatting

  • Establishes tone and style quickly

  • Claude extrapolates patterns from examples

Example Pattern: Tone Control

Without examples - formal and robotic:

Q: "Will Santa bring me presents on Christmas?"
A: "Santa Claus is a folkloric figure. The tradition involves..."

With examples - warm and child-appropriate:

"Please complete the conversation by writing the next line, speaking as 'A'.

Q: Is the tooth fairy real?
A: Of course, sweetie. Wrap up your tooth and put it under your pillow tonight.
   There might be something waiting for you in the morning.

Q: Will Santa bring me presents on Christmas?"

Claude responds:

A: Absolutely! Make sure to leave out some cookies and milk for Santa on
   Christmas Eve, and he'll bring presents for good boys and girls like you!

Example Pattern: Format Extraction

Provide examples showing the exact output format:

Text: "Dr. Liam Patel is a neurosurgeon. Olivia Chen is an architect."
<individuals>
1. Dr. Liam Patel [NEUROSURGEON]
2. Olivia Chen [ARCHITECT]
</individuals>

Text: "Chef Oliver Hamilton runs Green Plate. Elizabeth Chen is a librarian."
<individuals>
1. Oliver Hamilton [CHEF]
2. Elizabeth Chen [LIBRARIAN]
</individuals>

Now extract from this text: "Laura Simmons is a farmer. Kevin Alvarez teaches dance."

Claude extrapolates the pattern and responds correctly.

Best Practices

  • Use 2-4 examples for best results

  • Keep examples consistent in format

  • Show the exact output structure you want

  • Examples are often easier than writing detailed format instructions

  • Combine with prefilling for maximum control

Combining with Other Techniques

# Few-shot examples + XML tags + prefilling
user_email = "I can't log into my account."  # placeholder variable content

PROMPT = f"""Classify emails into categories.

Example:
Email: <email>My product is broken.</email>
Category: (B) Broken or defective item

Now classify:
Email: <email>{user_email}</email>
Category:"""

PREFILL = "("  # Start the category format

Chapter 8: Avoiding Hallucinations

Key Concept

Claude can generate plausible-sounding but incorrect information ("hallucinations"). Provide source material and request citations to improve accuracy.

What Are Hallucinations?

  • Confident-sounding but factually incorrect statements

  • Made-up facts, dates, names, or events

  • Misremembering or conflating information

  • More likely when Claude lacks source material

Mitigation Techniques

Provide Source Material

Give Claude relevant documents or context to reference:

"Here is a document about our product:
<document>
[Actual product documentation here]
</document>

Based on this document, answer: What is the warranty period?"

  • Include facts directly in the prompt

  • Use XML tags to separate source material from instructions

  • Ask Claude to cite or quote from provided sources

Request Citations

Ask Claude to reference specific parts of provided text:

"Answer based on the document provided in <document> tags. Quote relevant
passages to support your answer."

Use Step-by-Step Verification

Combine thinking step-by-step with fact-checking:

"First, identify relevant facts from the document in <facts> tags. Then,
provide your answer with citations to specific facts."

Long-Form Content

For factual long-form content:

  • Break into smaller, verifiable chunks

  • Request Claude to work from provided sources

  • Ask for specific evidence or examples from the source material

  • Have Claude note when it’s unsure or lacks information

Best Practices

  • Always provide source material when accuracy is critical

  • Use XML tags to separate context from instructions

  • Request citations or quotes from sources

  • Combine with role prompting ("You are a careful fact-checker")

  • Combine with step-by-step thinking

  • Verify critical information independently

  • Ask Claude to note limitations or uncertainties

Example Approach

System: "You are a careful analyst who only makes claims supported by provided sources."

User: "Here is a research paper:
<paper>
[Paper content]
</paper>

Summarize the key findings. For each finding, cite the specific section of
the paper. If something is unclear or not stated in the paper, explicitly
say so."

Common Patterns and Combinations

Effective Pattern Combinations

Combining multiple techniques yields the best results. Here are proven patterns:

Classification Task Pattern

System: "You are an expert email classifier."

User: "Classify emails into categories:
(A) Pre-sale question
(B) Broken or defective item
(C) Billing question
(D) Other

Examples:
<email>My product arrived broken.</email>
<thinking>This clearly describes a defective item.</thinking>
<answer>B</answer>

<email>How long does shipping take?</email>
<thinking>This is asking about service before purchase.</thinking>
<answer>A</answer>

Now classify:
<email>{user_email}</email>"

Assistant (prefill): "<thinking>"

Combines: Role prompting + Few-shot examples + XML tags + Step-by-step thinking + Prefilling

Extraction and Formatting Pattern

System: "You are an expert at extracting structured information."

User: "Extract names and occupations from text.

Example:
Text: <text>Dr. Sarah leads our research. Mike Chen manages the team.</text>
<individuals>
1. Dr. Sarah [RESEARCHER]
2. Mike Chen [MANAGER]
</individuals>

Now extract from:
<text>{user_text}</text>"

Assistant (prefill): "<individuals>"

Combines: Role prompting + Few-shot examples + XML tags + Prefilling

Complex Analysis Pattern

System: "You are a senior financial analyst."

User: "Analyze this earnings report:
<report>{earnings_data}</report>

Use this process:
1. First, identify key financial metrics in <metrics> tags
2. Then, evaluate trends and concerns in <analysis> tags
3. Finally, provide recommendations in <recommendations> tags"

Combines: Role prompting + XML tags + Step-by-step thinking + Structured output

Content Rewriting Pattern

System: "You are an expert business writer."

User: "Rewrite emails to be more professional.

Example:
<original>hey can u send that thing</original>
<professional>
Good morning,

Could you please send the document we discussed? Thank you for your assistance.

Best regards
</professional>

Now rewrite:
<original>{user_email}</original>"

Assistant (prefill): "<professional>"

Combines: Role prompting + Few-shot examples + XML tags + Prefilling

Key Takeaways

Essential Principles

  1. Be Clear and Direct: Claude responds best to straightforward instructions - if a human would be confused, Claude will be too

  2. Use Structure: XML tags organize prompts, separate data from instructions, and mark output boundaries

  3. Provide Context: Role prompting and system prompts guide Claude’s expertise and response style

  4. Show Examples: Few-shot prompting is more effective than lengthy descriptions

  5. Allow Thinking: Step-by-step reasoning improves accuracy - thinking must be visible in the output

  6. Format Output: Prefilling and XML tags provide strong control over response format

  7. Separate Data from Instructions: Use f-string placeholders for variables and XML tags for boundaries

  8. Prevent Hallucinations: Provide source material, request citations, and ask for evidence

Technical Best Practices

API Usage

  • Use proper message structure with alternating user/assistant roles

  • Set temperature=0.0 for consistent, deterministic results

  • Use max_tokens appropriately (it’s a hard stop, may cut off mid-sentence)

  • Leverage system prompts for persistent context and guidelines

  • Consider stop_sequences to save tokens and reduce latency

Prompt Engineering

  • Test prompts with colleagues using the Golden Rule

  • Small details matter - avoid typos and grammar errors

  • Claude is sensitive to patterns and quality of input

  • Combine multiple techniques for complex tasks

  • Iterate and refine based on results

Variables and XML Tags

  • F-string placeholders ({VARIABLE}): For Python variable substitution

  • XML tags (<tag></tag>): For marking data boundaries that Claude understands

  • Claude was specifically trained to recognize XML tags

  • Use descriptive tag names (e.g., <email>, <document>, <thinking>)

  • Closing tags can trigger stop_sequences to save tokens

Output Control

  • Prefill assistant responses to ensure format and skip preambles

  • Use explicit instructions for format requirements

  • Request specific structures (JSON, XML, lists, tables)

  • Combine prefilling with examples for maximum control

  • Start XML tags in prefill to ensure Claude continues correctly

Advanced Techniques

Prompt Templates

Create reusable, maintainable prompt structures:

  • Use Python f-strings or .format() for variable substitution

  • Wrap all user input in XML tags to prevent confusion

  • Keep template structure clear with comments

  • Support multiple variables when needed

  • Test templates with edge cases
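
A minimal template sketch that follows these points (the function and tag names are illustrative):

def build_polite_rewrite_prompt(email_text: str) -> str:
    """Wrap untrusted email text in XML tags and return the full prompt."""
    return f"<email>{email_text}</email> Make this email more polite."

# The same template works for any email
prompt = build_polite_rewrite_prompt("Show up at 6am tomorrow because I'm the CEO.")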

Optimization Strategies

  • Use stop_sequences with closing XML tags to reduce token usage

  • Prefill responses to skip preambles and jump to content

  • Combine role prompting with few-shot examples for efficiency

  • Request thinking when accuracy matters more than brevity

  • Cache frequently used system prompts and examples

Error Prevention

  • Wrap user input in XML tags to reduce the risk of prompt injection

  • Be explicit about what NOT to do when relevant

  • Provide format examples rather than verbose format descriptions

  • Test with edge cases, empty inputs, and adversarial inputs

  • Use system prompts to set behavioral boundaries

Practical Applications

Email Classification System

Problem: Categorize support emails into predefined categories

Solution: Combine role prompting + few-shot examples + step-by-step thinking + XML tags + prefilling

System: "You are an email classification expert."

User: "Classify emails into these categories:
(A) Pre-sale question
(B) Broken or defective item
(C) Billing question
(D) Other

Examples:
<email>My Mixmaster4000 is making strange noises and smells like burning plastic.</email>
<thinking>This describes a defective product needing replacement.</thinking>
<answer>B</answer>

<email>Can I use the Mixmaster4000 to mix paint?</email>
<thinking>This is asking about product capabilities before purchase.</thinking>
<answer>A</answer>

Now classify:
<email>{user_email_text}</email>"

Assistant (prefill): "<thinking>"

Why this works:

  • Role prompting sets expectations

  • Examples show correct format and reasoning

  • XML tags separate each email clearly

  • Step-by-step thinking improves accuracy

  • Prefilling ensures Claude shows its reasoning

Data Extraction from Unstructured Text

Problem: Extract names and roles from narrative text consistently

Solution: Few-shot examples + XML tags + prefilling

User: "Extract names and occupations from text.

<text>Dr. Lisa Wong leads the research team. Marcus Johnson manages operations.</text>
<individuals>
1. Dr. Lisa Wong [RESEARCHER]
2. Marcus Johnson [OPERATIONS MANAGER]
</individuals>

<text>{user_text}</text>"

Assistant (prefill): "<individuals>"

Tone and Style Adjustment

Problem: Rewrite content in a specific style while preserving meaning

Solution: Role prompting + few-shot examples + XML input/output separation

System: "You are an expert business communications writer."

User: "Rewrite emails to be more professional.

Example:
<original>yo can u send me that file asap thx</original>
<professional>Good morning,

Could you please send me the file at your earliest convenience? Thank you for your assistance.

Best regards</professional>

Now rewrite:
<original>{user_email}</original>"

Assistant (prefill): "<professional>"

Complex Document Analysis

Problem: Analyze nuanced content requiring multi-step reasoning

Solution: Role prompting + source material + step-by-step thinking + structured output

System: "You are a senior financial analyst with expertise in earnings reports."

User: "Analyze this earnings report:
<report>{earnings_data}</report>

Process:
1. Identify key metrics in <metrics> tags
2. Evaluate trends and concerns in <analysis> tags
3. Provide recommendations in <recommendations> tags

Base all analysis on data from the report. Cite specific figures."

Assistant (prefill): "<metrics>"

Common Pitfalls and Solutions

Pitfall 1: Vague Instructions

Problem: "Make this better" or "Improve this code"

Why it fails: Claude doesn’t know what "better" means in your context

Solution: Be specific about improvements

BAD:  "Make this email better"

GOOD: "Make this email more professional by:
1. Removing casual language and slang
2. Adding formal greetings and closings
3. Organizing into clear paragraphs
4. Using complete sentences"

Pitfall 2: Unclear Data Boundaries

Problem: Mixing instructions and variable data without separation

# BAD - No clear boundaries
EMAIL = "Send me the report now!"
PROMPT = f"Rewrite this email nicely: {EMAIL} but keep it short"
# Claude might think "but keep it short" is part of the email

Solution: Always wrap variable content in XML tags

# GOOD - Clear boundaries
EMAIL = "Send me the report now!"
PROMPT = f"Rewrite this email nicely: <email>{EMAIL}</email> Keep the rewrite short."

Pitfall 3: Expecting Internal Reasoning

Problem: "Think about this carefully and give me the answer" - asking Claude to think but not show its work

Why it fails: Claude can only "think" when the thinking is visible in the output

Solution: Explicitly request visible thinking in tags

BAD:  "Think carefully about this math problem and give the answer"

GOOD: "First show your step-by-step work in <work> tags, then give the final
       answer in <answer> tags"

Pitfall 4: Inconsistent Examples

Problem: Providing few-shot examples with varying formats

# BAD - Inconsistent format
Example 1: "Input: X, Output: Y"
Example 2: "Q: X\nA: Y"
Example 3: "X => Y"

Solution: Ensure all examples follow the exact same structure

# GOOD - Consistent format
Example 1: <input>X</input><output>Y</output>
Example 2: <input>X</input><output>Y</output>
Example 3: <input>X</input><output>Y</output>

Pitfall 5: Ignoring Order Effects

Problem: Not considering that Claude may prefer options presented later

Why it matters: "Is this positive or negative?" vs "Is this negative or positive?" can yield different answers

Solution: Test different orderings or explicitly request unbiased consideration

BETTER: "Is this review positive or negative? Consider both possibilities
         equally before deciding. Show your reasoning for each side first."

Pitfall 6: Skipping System Prompts

Problem: Putting all context and instructions in user messages

Why it’s suboptimal: System prompts provide persistent context and can improve adherence to guidelines

Solution: Use system prompts for role, tone, and behavioral guidelines

# SUBOPTIMAL
User: "You are an expert analyst. Please analyze this data carefully and professionally..."

# BETTER
System: "You are an expert data analyst. Always show your reasoning and cite specific data points."
User: "Analyze this data: <data>{data}</data>"

Pitfall 7: Over-Complexity

Problem: Trying to accomplish too much in a single prompt

Why it fails: Complex multi-stage tasks can confuse Claude or produce inconsistent results

Solution: Break into multiple sequential prompts (prompt chaining)

# BAD - Too much at once
"Read this document, extract key points, summarize them, translate to Spanish,
format as a presentation, and create discussion questions"

# GOOD - Sequential steps
Prompt 1: "Extract key points from: <document>{doc}</document>"
Prompt 2: "Summarize these points: <points>{extracted_points}</points>"
Prompt 3: "Translate to Spanish: <summary>{summary}</summary>"
... etc
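
A minimal sketch of that chain, assuming the get_completion helper from the Appendix (the document text is a placeholder):

doc = "..."  # the source document

# Step 1: extract key points
points = get_completion(f"Extract key points from: <document>{doc}</document>")

# Step 2: feed the previous output into the next prompt
summary = get_completion(f"Summarize these points: <points>{points}</points>")

# Step 3: translate the summary
spanish_summary = get_completion(f"Translate to Spanish: <summary>{summary}</summary>")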

Appendix: API Reference

Basic Message Structure

import anthropic

client = anthropic.Anthropic(api_key=API_KEY)

message = client.messages.create(
    model="claude-3-haiku-20240307",
    max_tokens=2000,
    temperature=0.0,
    system="Your system prompt here",
    messages=[
        {"role": "user", "content": "Your prompt here"},
        {"role": "assistant", "content": "Optional prefill text"}
    ],
    stop_sequences=["</tag>"]  # Optional
)

response = message.content[0].text

Key Parameters

  • model (required): Model identifier

    • claude-3-haiku-20240307 - Fast, cost-effective

    • claude-3-sonnet-20240229 - Balanced performance

    • claude-3-opus-20240229 - Most capable

  • max_tokens (required): Maximum response length in tokens

    • Hard stop - may cut off mid-sentence

    • Typical range: 1000-4000 for most tasks

  • temperature (optional): Randomness/creativity level

    • 0.0 - Deterministic, consistent (recommended for learning)

    • 1.0 - More creative and varied

    • Default: 1.0

  • system (optional but recommended): System prompt for persistent context

  • messages (required): List of message objects

    • Must alternate user and assistant roles

    • Must start with user role

    • Each has role and content fields

  • stop_sequences (optional): List of strings that stop generation

    • Useful with XML closing tags to save tokens

    • Example: ["</answer>", "</thinking>"]

Helper Function Example

def get_completion(prompt: str, system_prompt="", prefill=""):
    message = client.messages.create(
        model="claude-3-haiku-20240307",
        max_tokens=2000,
        temperature=0.0,
        system=system_prompt,
        messages=[
            {"role": "user", "content": prompt},
            {"role": "assistant", "content": prefill}
        ]
    )
    return message.content[0].text

# Usage
response = get_completion(
    prompt="What is 2+2?",
    system_prompt="You are a math tutor.",
    prefill="Let me calculate:"
)

Recommended Defaults

  • Model: claude-3-haiku-20240307 (fast, cost-effective, great for learning)

  • Temperature: 0.0 (deterministic, easier to debug)

  • Max_tokens: 2000-4000 (adjust based on expected response length)

  • System prompt: Use for role and guidelines

  • Prefill: Use to control output format

Resources

Tutorial Information

  • Tutorial Repository: Interactive Jupyter notebooks with exercises

  • API Key: Sign up at https://console.anthropic.com/

  • Python SDK: Install with pip install anthropic

  • Alternative: Static tutorial answer key available via Google Sheets

Additional Resources

  • Anthropic Console for API key management and usage monitoring

  • Community examples and best practices

  • Model comparison and pricing information

  • Rate limits and optimization guidelines

Conclusion

Effective prompt engineering with Claude combines clarity, structure, and context. The key techniques covered in this tutorial work together synergistically:

  • Clear instructions eliminate ambiguity

  • XML tags provide structure and separate data from instructions

  • Role prompting establishes expertise and tone

  • Few-shot examples show desired behavior

  • Step-by-step thinking improves accuracy

  • Output formatting ensures consistent, parseable results

Remember the Golden Rule

If a human would be confused by your instructions, Claude will be too.

Start simple, test thoroughly, and iterate based on results.

Combining Techniques

The most powerful prompts often combine multiple techniques:

  1. System prompt with role and behavioral guidelines

  2. Few-shot examples in the user message showing desired format

  3. XML tags for clear structure and data boundaries

  4. Step-by-step instructions for complex reasoning

  5. Prefilling to control output format

Next Steps

  • Practice with the interactive exercises in the tutorial notebooks

  • Experiment with different technique combinations

  • Develop your own prompt engineering style

  • Test with real-world use cases from your domain

  • Iterate and refine based on Claude’s responses

The techniques in this guide provide a strong foundation, but prompt engineering is both an art and a science. The best way to improve is through experimentation and practice.