The old way
- ✗ Your prompt worked great in testing.
- ✗ Then you upgraded the model.
- ✗ Then your client changed the use case.
- ✗ Then it started hallucinating on edge cases.
- ✗ Then you added "IMPORTANT:" for the fourth time.
- ✗ Then you had no idea why it worked Monday and broke Tuesday.
The DSPy way
- ✓ You write a Python class.
- ✓ DSPy writes the prompt.
- ✓ You swap models with one line.
- ✓ You run an optimizer — prompts improve automatically.
- ✓ You write tests like normal code.
- ✓ You ship with confidence.
DSPy is a framework for programming LLMs, not prompting them.
When you prompt an LLM, you write instructions in English and hope the model interprets them the way you intended. When you program with DSPy, you declare what you want — inputs, outputs, types, constraints — and let the framework figure out how to ask for it. You write Python. DSPy writes the prompts. And when you need better prompts, you run an optimizer and DSPy rewrites them automatically based on examples that actually work.
It's the difference between manually tuning a SQL query every time your data changes versus writing a proper ORM. One is maintenance. The other is engineering.
```python
prompt = f"""You are a
helpful AI assistant.
Analyze this startup pitch.
Respond in JSON format.
IMPORTANT: Only valid JSON.
SERIOUSLY: Just the JSON.
Pitch: {pitch}"""
```

```python
class AnalyzeStartup(dspy.Signature):
    """Analyze a startup pitch."""
    pitch: str = dspy.InputField()
    score: int = dspy.OutputField()
    strengths: list[str] = dspy.OutputField()
    weaknesses: list[str] = dspy.OutputField()
    verdict: str = dspy.OutputField()
```

See it happen.
Press Compile. Watch DSPy turn a Python class into a prompt.
YOUR CODE
```python
class AnalyzeStartup(dspy.Signature):
    """Analyze a startup pitch."""
    pitch: str = dspy.InputField()
    viability_score: int = dspy.OutputField()
    strengths: list[str] = dspy.OutputField()
    weaknesses: list[str] = dspy.OutputField()
    verdict: str = dspy.OutputField()
```
DSPy ENGINE → THE PROMPT
This book is for you if...
You've been hacking with the OpenAI API
You can make an LLM do things. Your prompts work — until they don't. You've copy-pasted "respond only in JSON" into six different projects and you're starting to suspect there's a better way.
You're building something real
You need reliability, testability, and the ability to swap models without rewriting everything. You want LLM code that behaves like normal code — versioned, tested, debuggable.
You're tired of prompt archaeology
You've spent hours tweaking wording, adding emphatic ALL-CAPS instructions, and still can't explain why a two-word change broke your pipeline. There's a better mental model — and this book teaches it.
What's Inside
Every chapter builds something real — not a toy, not a demo.
About the Author
Ali Raza
I built production DSPy systems before writing this book. I've hit every error message, debugged every silent failure, and rewritten every prompt that “worked in testing.” This book is the guide I wish I'd had.
Software engineer shipping AI tools at scale. Previously built developer platforms and open-source tools used by thousands.
Stop writing prompts. Start writing code.
One-time purchase. Instant access. No subscription.
Buy once. Every future update is yours, free.
$24.99 launch price — goes to $34 after early access ends
7-day money-back guarantee. Not what you expected? Email me within 7 days for a full refund, no questions asked.
Questions? ali@aliirz.com