Session 8 · Wednesday, April 1, 2026

Trinity Agent Design

The Generative Graph: What Could Be

WHO · WHAT · WHAT IF ← You Are Here

RAG Pipelines · Agent Reasoning · Context Engineering

Special Guest

Luke Steele

Creative world-building & the architecture of imagination

The Artist

ARIA Award-winning musician, frontman of Empire of the Sun. Known for Walking on a Dream, We Are the People, Alive. One of the most visually inventive artists in modern music.

The Method

Build the world first — mythology, characters, visual universe — then let the music emerge from that substrate. The context came before the content.

Build the world. The output follows. Luke doesn't write songs and then dress them up — he designs a universe and lets the music emerge from it. That's exactly what a Trinity Agent does: build the graph, and the generative layer creates from it.

Group Projects

Wrapping Up Group Projects

Finish line check — what ships, what doesn't

Status Check

  • ☑ What's built and working?
  • ☑ What's in progress?
  • ☐ What's blocked or stuck?
  • ☐ What do you need help with?

Finish Line

  • ➤ Scope down to what's deliverable
  • ➤ Assign owners for remaining tasks
  • ➤ Set a deadline for final integration
  • ➤ Plan your presentation story

Quality over ambition. A polished demo of one feature beats a half-working prototype of five. Decide what to cut today so you can ship something great.

Core Concept

What Is RAG?

Retrieval-Augmented Generation

Instead of relying on what the AI memorized during training, retrieve relevant facts from your knowledge graph and feed them into the prompt. The AI generates responses grounded in your data.

Query
Retrieve from KG
Augment Prompt
Generate Response

Without RAG

"Tell me about Nashville real estate trends" → Generic answer from 2024 training data. Potentially wrong. No sources.

With RAG

"Tell me about Nashville real estate trends" → Pulls YOUR 200 triples about Nashville properties, zoning, permits. Cites specific entities. Grounded.

Lewis et al. (2020): RAG models "combine the best of both worlds — the parametric memory of a pre-trained model and the non-parametric memory of a retrieval index." Your knowledge graph IS that retrieval index.
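The Query → Retrieve → Augment → Generate loop can be sketched in a few lines. This is an illustrative toy, not VanderBot's implementation: the triple store is hard-coded, retrieval is a naive keyword match rather than vector search, and the final LLM call is omitted.

```javascript
// Minimal RAG sketch. All names here are illustrative assumptions,
// not a real VanderBot API.
const triples = [
  { s: "Nashville", p: "permits", o: "1 ADU per residential lot" },
  { s: "ADU", p: "averageCost", o: "$150K-$280K" },
  { s: "Nashville", p: "hasClimate", o: "humid subtropical" },
];

// 1. Retrieve: pull triples that share a word with the query.
function retrieve(query) {
  const words = query.toLowerCase().split(/\s+/);
  return triples.filter(t =>
    words.some(w => `${t.s} ${t.p} ${t.o}`.toLowerCase().includes(w)));
}

// 2. Augment: splice the retrieved facts into the prompt.
function augment(query, facts) {
  const lines = facts.map(t => `- ${t.s} ${t.p} ${t.o}`).join("\n");
  return `Answer using ONLY these facts:\n${lines}\n\nQuestion: ${query}`;
}

// 3. Generate: this prompt would now go to the LLM, grounded in YOUR data.
const query = "What does Nashville allow for ADU construction?";
const prompt = augment(query, retrieve(query));
console.log(prompt);
```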

RAG Architecture

How your knowledge graph powers AI responses

User Query
Embed Query
→ Convert text to vector
Vector Search
→ Find semantically similar nodes
Top-K Results
→ Select best matching triples
LLM Context Window
→ System prompt + retrieved facts + query
Grounded Response

Key insight: "Vector search" means the system doesn't need exact keyword matches. It understands meaning. A query about "housing affordability" will find triples about "median home price" and "income levels" even if those exact words aren't used.
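A toy version of the vector-search step, with hand-made 3-dimensional "embeddings". Real systems use learned embeddings with hundreds of dimensions; these vectors are invented purely for illustration.

```javascript
// Toy vector search: cosine similarity over invented 3-d "embeddings".
const nodes = [
  { text: "median home price", vec: [0.9, 0.8, 0.1] },
  { text: "income levels",     vec: [0.8, 0.7, 0.2] },
  { text: "zoning permits",    vec: [0.1, 0.2, 0.9] },
];

function cosine(a, b) {
  const dot  = a.reduce((sum, x, i) => sum + x * b[i], 0);
  const norm = v => Math.sqrt(v.reduce((sum, x) => sum + x * x, 0));
  return dot / (norm(a) * norm(b));
}

// Top-K: score every node against the query vector, keep the best K.
function topK(queryVec, k) {
  return nodes
    .map(n => ({ ...n, score: cosine(queryVec, n.vec) }))
    .sort((a, b) => b.score - a.score)
    .slice(0, k);
}

// A query about "housing affordability" embeds near price and income,
// even though it shares no keywords with either node.
console.log(topK([0.85, 0.75, 0.15], 2).map(r => r.text));
// → ["median home price", "income levels"]
```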

Agent Pattern

The ReAct Pattern

Reasoning + Acting

Agents that interleave thinking and doing outperform those that only reason or only act. The loop: Think → Act → Observe → Think again.

🧠
THINK
Reason about goal
ACT
Take an action
👁
OBSERVE
See the result

// ReAct loop in practice
Thought: The student asked about Nashville zoning laws.
        I need to check their knowledge graph first.
Action: query_graph("Nashville zoning regulations")
Observe: Found 12 triples about R1, R2, commercial zones.
Thought: Good data, but missing recent 2025 changes.
        Let me check if they have any gaps flagged.
Action: query_graph("unknown zoning changes 2025")
Observe: They flagged this as a Known Unknown.
Thought: Perfect. I'll answer with what they have and
        highlight the gap they already identified.
Response: "Based on your graph, here's what we know..."

Yao et al. (2022): "ReAct prompting outperforms chain-of-thought reasoning alone by 33% on knowledge-intensive tasks." Reasoning without action is speculation. Action without reasoning is random.
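A minimal sketch of the loop's control flow. `queryGraph()` is a hypothetical stand-in for a real graph tool, and the "reasoning" is hard-coded where an LLM would actually decide.

```javascript
// ReAct loop skeleton: THINK, ACT, OBSERVE, repeat until done.
function queryGraph(q) {
  // Stand-in tool with one canned answer (assumed data).
  const kb = { zoning: "Found 12 triples about R1, R2, commercial zones." };
  return kb[q] ?? "no results";
}

function reactLoop(goal, maxSteps = 3) {
  const trace = [];
  for (let step = 0; step < maxSteps; step++) {
    const thought = `I need graph data about: ${goal}`;      // THINK
    const observation = queryGraph(goal);                    // ACT
    trace.push({ thought, action: `query_graph("${goal}")`, observation });
    if (observation !== "no results") break;                 // OBSERVE: done?
  }
  return trace;
}

console.log(reactLoop("zoning"));
```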

Critical Skill

Context Engineering

The art of giving AI the right information at the right time

📌

Static Context

System prompts, persona instructions, rules. Set once, rarely changes. "You are a real estate analysis agent focused on Nashville metro."

🔍

Dynamic Context

Retrieved from the knowledge graph per query. This is RAG. Different questions pull different triples. The context adapts to what the user needs right now.

🗨

Emergent Context

Conversation history, user preferences learned over time, patterns that emerge from interaction. The context the AI discovers by talking to you.

Your knowledge graph is a context engine. Every triple you created in the midterm is now fuel for dynamic context. The more structured your graph, the better your agent's context, the better its output. Quality in, quality out.
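One way the three layers could combine into a single prompt. The data shapes and helper names below are illustrative assumptions, not VanderBot's actual model.

```javascript
// Assembling static, dynamic, and emergent context into one prompt.

// Static context: set once, rarely changes.
const staticContext =
  "You are a real estate analysis agent focused on Nashville metro.";

// Dynamic context: retrieved from the knowledge graph per query (RAG).
function dynamicContext(query, graph) {
  return graph.filter(t => query.toLowerCase().includes(t.s.toLowerCase()));
}

// Emergent context: whatever the conversation has accumulated so far.
function buildPrompt(query, graph, history) {
  const facts = dynamicContext(query, graph)
    .map(t => `- ${t.s} ${t.p} ${t.o}`).join("\n");
  const recent = history.slice(-2).join("\n"); // last two turns
  return `${staticContext}\n\nFacts:\n${facts}\n\nHistory:\n${recent}\n\nUser: ${query}`;
}

const graph = [{ s: "ADU", p: "averageCost", o: "$150K-$280K" }];
const prompt = buildPrompt("What does an ADU cost?", graph,
                           ["User asked about zoning."]);
console.log(prompt);
```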

The Vision

The Trinity Agent

An AI that traverses all three graph layers

Trinity Agent

👥

Knows WHO

Queries the Social Graph. Who is asking? What are their relationships? Who else should be involved? What community context matters?

📚

Knows WHAT

Queries the Knowledge Graph. What facts are verified? What triples are relevant? What sources back this up? Where are the gaps?

Imagines WHAT IF

Generates new possibilities. What connections haven't been made? What could emerge from combining these facts? What should we explore next?

A Trinity Agent doesn't just answer questions — it synthesizes across layers. "Given who you are, what you know, and what's possible — here's what you should consider."

VanderBot IS a Trinity Agent

It's been doing this all along. Now you see the architecture.

Every conversation with VanderBot follows the Trinity pattern:

// When a student sends a message to VanderBot...

// 1. WHO Layer — Social Graph
const student = lookupStudent(phoneNumber)
const pod = student.pod   // "BackyardOne"
const role = student.role  // "Product Lead"

// 2. WHAT Layer — Knowledge Graph
const triples = queryGraph(student.project, message)
// Returns: 15 relevant triples about ADU permits

// 3. WHAT IF Layer — Generative
const response = generate({
  persona: "You know this student's role and team",
  context: triples,  // RAG: grounded in their data
  query: message,
  mode: "creative_synthesis" // Generate new ideas
})

👥
WHO

Personalized to YOU

📚
WHAT

Grounded in YOUR graph

WHAT IF

Generates NEW insights

Workshop

Live Demo

Making Your Knowledge Graph Generate

Open VanderBot. Ask it questions that require graph traversal and creative synthesis. Notice how the quality of your graph determines the quality of the output.

Level 1: Simple Retrieval

"What do we know about [key entity in your project]?"
Tests: Does the graph have the data? Can the agent find it?

Level 2: Cross-Entity Reasoning

"How does [Entity A] relate to [Entity B] in ways we haven't explored?"
Tests: Can the agent traverse edges and find non-obvious paths?

Level 3: Generative Synthesis

"Based on everything in our graph, what's the biggest opportunity we're missing?"
Tests: Can the agent reason across the entire graph and generate new insight?

Pay attention to the gap. Teams with 200+ well-structured triples will get dramatically richer responses than teams with 50 messy ones. The graph is the input. The AI is the amplifier. Garbage in, garbage out. Gold in, diamonds out.

Prompt Engineering for Agents

Writing prompts that leverage graph context

1. Persona Instructions (WHO)

"You are a strategic advisor for the BackyardOne team.
The student you're talking to is Sarah, the Product Lead.
Her team includes 4 members focused on ADU development."

2. Knowledge Grounding (WHAT)

"Here are the relevant facts from the knowledge graph:
- Nashville permits 1 ADU per residential lot (R1-R4)
- Average ADU construction cost: $150K-$280K
- Metro Council passed Bill BL2024-123 allowing ADUs
Only use these facts. If asked about something not here,
say 'This isn't in your knowledge graph yet.'"

3. Generative Constraints (WHAT IF)

"When generating new ideas, always:
- Ground them in existing graph data
- Flag confidence level (high/medium/low)
- Suggest what new triples would verify the idea
- Connect back to the team's stated goals"

Prompt engineering isn't tricks — it's structured context. The knowledge graph gives you that structure. Every triple is a potential fact to ground a prompt.
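The three sections above can be composed mechanically. `buildAgentPrompt()` is a hypothetical helper, shown only to make the structure concrete:

```javascript
// Composing persona (WHO), grounding (WHAT), and constraints (WHAT IF)
// into one system prompt. Not a real VanderBot function.
function buildAgentPrompt({ persona, triples, constraints }) {
  const grounding = [
    "Here are the relevant facts from the knowledge graph:",
    ...triples.map(t => `- ${t}`),
    "Only use these facts. If asked about something not here,",
    "say 'This isn't in your knowledge graph yet.'",
  ].join("\n");
  return [persona, grounding, constraints].join("\n\n");
}

const prompt = buildAgentPrompt({
  persona: "You are a strategic advisor for the BackyardOne team.",
  triples: ["Nashville permits 1 ADU per residential lot (R1-R4)"],
  constraints:
    "When generating new ideas, ground them in existing graph data " +
    "and flag confidence level (high/medium/low).",
});
console.log(prompt);
```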

Agent Architecture Patterns

Four patterns. Choose based on the problem.

🔧 Tool Use

Agent has access to external tools: APIs, databases, calculators. It decides when and which tool to call.

Best for: structured tasks

💡 Chain of Thought

Agent reasons step-by-step before answering. Shows its work. Better for complex, multi-step problems.

Best for: reasoning tasks

👥 Multi-Agent

Multiple specialized agents collaborate. One researches, one critiques, one synthesizes. Division of cognitive labor.

Best for: complex systems

🗘 Graph Traversal

Agent navigates the knowledge graph: follows edges, discovers paths, finds connections no single query would reveal.

Best for: discovery tasks

In practice, you combine patterns. VanderBot uses all four: Tool Use (queries Neo4j), Chain of Thought (reasons about context), Multi-Agent (different modes for different tasks), and Graph Traversal (follows relationship edges). Your Agent Logic Map should identify which patterns your design uses.
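The Graph Traversal pattern is, at its core, classic pathfinding. A breadth-first search over a toy adjacency list (invented data) finds the connection path between two entities:

```javascript
// Graph Traversal as breadth-first search: discover the path linking
// two entities that no single lookup would reveal.
const edges = {
  "Nashville":  ["ADU Policy"],
  "ADU Policy": ["R1 Zoning"],
  "R1 Zoning":  ["1 ADU per lot"],
};

function findPath(start, goal) {
  const queue = [[start]];           // each entry is a path so far
  const seen  = new Set([start]);
  while (queue.length > 0) {
    const path = queue.shift();
    const node = path[path.length - 1];
    if (node === goal) return path;
    for (const next of edges[node] ?? []) {
      if (!seen.has(next)) {
        seen.add(next);
        queue.push([...path, next]);
      }
    }
  }
  return null;                       // no connection in the graph
}

console.log(findPath("Nashville", "1 ADU per lot"));
// → ["Nashville", "ADU Policy", "R1 Zoning", "1 ADU per lot"]
```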

Emergence

The Generative Graph Emerges

When agents create new nodes and edges, the graph evolves

Before: Static Graph

[Nashville] —has→ [ADU Policy]
[ADU Policy] —requires→ [R1 Zoning]
[R1 Zoning] —allows→ [1 ADU per lot]

4 nodes, 3 edges. Static.

After: Generative Graph

[Nashville] —has→ [ADU Policy]
[ADU Policy] —requires→ [R1 Zoning]
[R1 Zoning] —allows→ [1 ADU per lot]
[Agent] —generates→ [Insight: R2 gap]
[Insight: R2 gap] —suggests→ [Research Task]
[Agent] —discovers→ [Hidden Link]

8 nodes, 6 edges. Growing.

New Node Types in the Generative Graph

GENERATED_CONTENT · INSIGHT · HYPOTHESIS · CREATIVE_SYNTHESIS · RESEARCH_TASK · EMERGENT_CONNECTION

This is emergence. The graph starts generating its own growth. Human input + AI synthesis = a knowledge system that evolves faster than either could alone. The WHAT IF layer doesn't replace WHO and WHAT — it activates them.
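Treated as code, the generative graph is simply a mutable structure that agents append to as well as read. A sketch with illustrative shapes:

```javascript
// The generative graph as a mutable structure: agents write back
// typed insight nodes so humans can audit them later.
const graph = {
  nodes: ["Nashville", "ADU Policy", "R1 Zoning", "1 ADU per lot"],
  edges: [
    ["Nashville", "has", "ADU Policy"],
    ["ADU Policy", "requires", "R1 Zoning"],
    ["R1 Zoning", "allows", "1 ADU per lot"],
  ],
};

function addInsight(g, text, suggests) {
  const node = `INSIGHT: ${text}`;
  for (const n of ["Agent", node, suggests]) {
    if (!g.nodes.includes(n)) g.nodes.push(n);  // add only missing nodes
  }
  g.edges.push(["Agent", "generates", node], [node, "suggests", suggests]);
  return g;
}

addInsight(graph, "R2 gap", "RESEARCH_TASK: check R2 zoning changes");
console.log(graph.nodes.length, graph.edges.length); // 7 5 (the graph grew)
```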

Core Thesis

Why Context Changes Everything

Same AI + Different Context = Completely Different Output

Input + Context → Output

"Analyze ADU opportunity" + no context (raw GPT) → Generic 5-paragraph essay about ADUs nationally

"Analyze ADU opportunity" + Knowledge Graph (RAG) → Nashville-specific analysis citing your 200 triples

"Analyze ADU opportunity" + KG + Social + History → Personalized strategy for YOUR team, citing gaps you flagged, connecting to partners in your network

The Knowledge Graph IS Your Competitive Moat

Everyone has access to GPT-4, Claude, Gemini. The model is a commodity.
Your proprietary knowledge graph — the structured understanding of YOUR domain — is not.

For your analysis paper: Pick one question relevant to your project. Show how the same question produces three different outputs with three different levels of context. Argue why context engineering matters more than model selection.

Key Takeaways

Six insights from today's session

📚

RAG Grounds AI in Reality

Retrieval-Augmented Generation connects LLMs to your knowledge graph. Far less hallucination — responses are grounded and sourced.

🔄

ReAct = Think + Do

The best agents interleave reasoning and acting. Think, act, observe, repeat. Pure reasoning without action is speculation.

🎯

Context > Model

The same AI with different context gives completely different outputs. Context engineering is the most important AI skill.

🗘

Trinity = WHO + WHAT + WHAT IF

The Trinity Agent traverses all three layers: social context, verified knowledge, and generative possibility.

🚀

Graphs Generate Growth

When agents create new nodes, the graph evolves. Human input + AI synthesis = emergent intelligence greater than either alone.

🏆

Your KG Is Your Moat

Everyone has GPT-4. Nobody else has your structured knowledge graph. The data, not the model, is the competitive advantage.

Before You Go

Experience Survey

Help us make this course better — takes 2 minutes

📱

Text Buddy Now

Open your WhatsApp conversation with VanderBot and send:

survey

Buddy will walk you through 5 quick questions about your experience in the course so far.

2 minutes
Quick and conversational

Real-time feedback
Your answers shape the course

This is also a live demo of agent-driven data collection — the graph learns from you.
