
Agentic Accelerators

Location: inception_core/accelerators/

The accelerators layer provides two categories of reusable, runnable templates:

  • patterns_agentic/ — The 7 canonical AI agent design patterns, each as a standalone, executable Python file
  • patterns_solution/ — Enterprise security and integration reference patterns

Teams use these as starting points — copy, extend, and wire into your domain recipe without rebuilding from scratch.


Agentic Patterns (patterns_agentic/)

Pattern 1 — Augmented LLM (Foundation)

File: agent_with_augmented_llm.py

The base pattern every other pattern is built from. An LLM enhanced with three capabilities:

  • Retrieval — access relevant knowledge from vector stores or databases
  • Tools — execute actions, query APIs, run computations
  • Memory — retain context and state across turns

Retrieval ──┐
            ├──→ LLM (augmented) ──→ Memory
Tools ──────┘

When to use: Any agent that needs to know things it wasn't trained on, or take actions in external systems.
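
The shape of the pattern can be sketched in a few lines. This is a concept-only illustration with a keyword lookup standing in for vector retrieval and a trivial tool registry; the names (`augmented_call`, `KNOWLEDGE`, `word_count`) are illustrative, not the accelerator's API:

```python
# Toy knowledge base standing in for a vector store or database.
KNOWLEDGE = {"refund policy": "Refunds are issued within 14 days."}

def retrieve(query: str) -> str:
    """Toy retrieval: keyword lookup in place of vector similarity search."""
    return next((v for k, v in KNOWLEDGE.items() if k in query.lower()), "")

# Tool registry: callables the "LLM" can invoke.
TOOLS = {"word_count": lambda text: str(len(text.split()))}

# Per-thread memory: thread_id -> list of prior user turns.
MEMORY = {}

def augmented_call(thread_id: str, user_input: str) -> str:
    history = MEMORY.setdefault(thread_id, [])
    context = retrieve(user_input)
    # A real agent sends history + retrieved context + tool schemas to the
    # model; here the "LLM" just returns context, or falls back to a tool.
    answer = context or TOOLS["word_count"](user_input)
    history.append(user_input)
    return answer
```

The three augmentations map directly onto the diagram above: `retrieve` feeds in, `TOOLS` acts out, and `MEMORY` persists across turns.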


Pattern 2 — Prompt Chaining

File: agent_with_prompt_chaining.py

Decomposes a complex task into sequential LLM steps. Each call processes the output of the previous one. Quality gates (programmatic checks) can sit between steps and trigger retries or early termination.

Input → LLM₁ → Gate → LLM₂ → Output
                 Fail → retry / abort

When to use: Multi-stage document generation, structured data extraction pipelines, translation + review flows.
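
A minimal sketch of the chain-with-gate loop, with plain functions standing in for the two LLM calls (`step_extract`, `gate`, and `step_summarize` are illustrative names, not the shipped file's API):

```python
def step_extract(text: str) -> str:
    return text.strip().lower()          # stand-in for LLM call 1

def gate(draft: str) -> bool:
    return len(draft) > 0                # programmatic quality check

def step_summarize(draft: str) -> str:
    return draft.split(".")[0]           # stand-in for LLM call 2

def run_chain(text: str, max_retries: int = 2) -> str:
    # Retry the first step until the gate passes, then continue the chain.
    for _ in range(max_retries + 1):
        draft = step_extract(text)
        if gate(draft):
            return step_summarize(draft)
    raise RuntimeError("chain aborted: gate never passed")
```

The gate sits between the calls exactly as in the diagram: pass continues the chain, fail triggers a retry, and exhausting retries aborts early.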


Pattern 3 — Routing

File: agent_with_routing.py

Classifies the input, then dispatches to a specialized handler with an optimized prompt for that category. Built with LangGraph conditional edges.

Input → Classifier → Route A → LLM_A → Output
                  → Route B → LLM_B → Output
                  → Route C → LLM_C → Output

When to use: Multi-intent chatbots, tiered support triage, domain-specific agent fleets where different inputs need different expertise.
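
The dispatch logic reduces to a classifier plus a handler table. In the shipped file the same branching is expressed as LangGraph conditional edges; the sketch below uses plain Python, and the category names and prompts are made up for illustration:

```python
def classify(query: str) -> str:
    """Stand-in for an LLM classification call."""
    if "invoice" in query:
        return "billing"
    if "error" in query or "crash" in query:
        return "technical"
    return "general"

# Each route gets its own specialized prompt / handler.
HANDLERS = {
    "billing":   lambda q: f"[billing prompt] {q}",
    "technical": lambda q: f"[technical prompt] {q}",
    "general":   lambda q: f"[general prompt] {q}",
}

def route(query: str) -> str:
    return HANDLERS[classify(query)](query)
```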


Pattern 4 — Parallelization

File: agent_with_parallelization.py

Two sub-patterns:

  • Sectioning — split a large task into independent subtasks and run them concurrently
  • Voting — run the same task N times and merge results for confidence or quality

Input → ┌─ LLM₁ ─┐
        ├─ LLM₂ ─┤ → Merge → Output
        └─ LLM₃ ─┘

When to use: Long document summarization, parallel research on multiple topics, reliability-critical responses that benefit from consensus.
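
Both sub-patterns are sketched below with a thread pool and a stubbed per-section "LLM" call; the function names and the first-word "summary" are illustrative only:

```python
from concurrent.futures import ThreadPoolExecutor
from collections import Counter

def summarize_section(section: str) -> str:
    return section.split()[0]            # stand-in for an LLM call

def sectioning(sections):
    """Sectioning: run independent subtasks concurrently, then merge."""
    with ThreadPoolExecutor() as pool:
        parts = list(pool.map(summarize_section, sections))
    return " ".join(parts)               # merge step

def voting(task, n: int = 3) -> str:
    """Voting: run the same task n times and keep the majority answer."""
    with ThreadPoolExecutor() as pool:
        votes = list(pool.map(lambda _: task(), range(n)))
    return Counter(votes).most_common(1)[0][0]
```

With real model calls the concurrency win comes from overlapping network latency, so a thread pool (rather than a process pool) is usually the right tool.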


Pattern 5 — Orchestrator-Workers

File: agent_with_orchestrator_workers.py

A central orchestrator LLM dynamically decides what subtasks are needed, delegates to specialized worker agents, and synthesizes their results into a final response.

         Orchestrator
        /      |      \
  Worker₁  Worker₂  Worker₃
        \      |      /
         Synthesized output

When to use: Complex multi-step research tasks, code generation with review and test steps, any workflow where task decomposition should be dynamic rather than hardcoded.
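
The control flow can be sketched as a planner that emits subtasks at run time, a worker table, and a synthesis step. Here the orchestrator "LLM" is a fixed `plan` function and the worker outputs are stubs, purely for illustration:

```python
# Specialized workers the orchestrator can delegate to.
WORKERS = {
    "search":  lambda topic: f"notes on {topic}",
    "outline": lambda topic: f"outline for {topic}",
}

def plan(task: str):
    # A real orchestrator LLM would emit this plan dynamically;
    # here it is scripted logic standing in for the model.
    subtasks = [("search", task)]
    if "report" in task:
        subtasks.append(("outline", task))
    return subtasks

def orchestrate(task: str) -> str:
    results = [WORKERS[name](arg) for name, arg in plan(task)]
    return " | ".join(results)           # synthesis step
```

The key difference from Pattern 2 is that the subtask list is decided per input, not hardcoded into the chain.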


Pattern 6 — Evaluator-Optimizer

File: agent_with_evaluator_optimizer.py

One LLM generates output; a second LLM evaluates it against quality criteria and returns structured feedback. The generator retries with that feedback until the criteria are met or the maximum number of attempts is reached.

Input → Generator → Evaluator → pass → Output
              ↑         |
              └─feedback─┘ (retry)

When to use: Content quality control, code correctness loops, structured data validation, anywhere you need iterative refinement.
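
The generate-evaluate-retry loop looks like this in miniature. The generator and evaluator are stubs (uppercasing and an "ends with !" check stand in for real model calls and real criteria):

```python
def generate(task, feedback):
    """Stand-in generator: incorporates evaluator feedback on retries."""
    draft = task.upper()
    if feedback:
        draft += "!"                      # "apply" the feedback
    return draft

def evaluate(draft):
    """Stand-in evaluator: returns (passed, structured feedback)."""
    if draft.endswith("!"):
        return True, ""
    return False, "add emphasis"

def refine(task: str, max_attempts: int = 3) -> str:
    feedback = None
    for _ in range(max_attempts):
        draft = generate(task, feedback)
        ok, feedback = evaluate(draft)
        if ok:
            return draft
    return draft                          # best effort after max attempts
```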


Pattern 7 — Autonomous Agent

File: basic_agent.py

The LLM dynamically directs its own process — choosing which tools to call, planning steps, and reacting to tool results in a loop until the task is complete.

while not done:
    Input → LLM (decides) → Tools (execute) → feedback

Needs guardrails

This is the most powerful and hardest pattern to control. Always attach OCI Guardrails and human-in-the-loop approval for high-stakes tool execution.
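
A stripped-down sketch of the loop with a guardrail hook in front of tool execution. The "LLM" decisions are scripted, and the allow-list policy is a toy stand-in for OCI Guardrails plus human-in-the-loop approval:

```python
TOOLS = {"add": lambda a, b: a + b, "delete_db": lambda: "boom"}
ALLOWED = {"add"}                         # toy guardrail policy

def guardrail(tool: str) -> None:
    """Block tools outside the policy before they ever execute."""
    if tool not in ALLOWED:
        raise PermissionError(f"tool '{tool}' requires human approval")

def run_agent(goal):
    observations = []
    while True:                           # the autonomous loop
        if observations:                  # the "LLM" decides it is done
            return observations[-1]
        tool, args = "add", goal          # the "LLM" picks a tool + args
        guardrail(tool)                   # checked on every iteration
        observations.append(TOOLS[tool](*args))
```

The important structural point: the guardrail check sits inside the loop, so every tool decision the model makes is vetted, not just the first one.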


Memory Patterns

Short-Term Memory (agent_with_st_memory.py) — Thread-based in-memory checkpointing using LangGraph's built-in MemorySaver. Conversations persist within a process session.

Long-Term Memory (agent_with_lt_memory.py) — Oracle ADB-backed OracleDBSaver checkpointer. Conversations survive process restarts. Resume any session by thread ID.

Memory Store (agent_with_memory_store.py) — Oracle ADB-backed OracleStore virtual filesystem. Supports:

  • File read/write with hierarchical namespacing
  • Keyword and vector similarity search
  • Multi-agent shared access — every agent in the fleet sees the same store
  • Skills storage backend — skills are versioned files, lazy-loaded on demand
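
The store's shape can be illustrated with a dict-backed toy: hierarchical namespacing, keyword search, and a single shared instance. The real OracleStore adds vector similarity search and Oracle ADB persistence; `TinyStore` and its methods are invented here purely to show the concept:

```python
class TinyStore:
    """Concept-only stand-in for a shared, namespaced file store."""
    def __init__(self):
        self._files = {}                  # (namespace..., name) -> body

    def put(self, namespace, name, body):
        self._files[namespace + (name,)] = body

    def get(self, namespace, name):
        return self._files.get(namespace + (name,))

    def search(self, keyword):
        """Keyword search in place of vector similarity."""
        return [key for key, body in self._files.items() if keyword in body]

store = TinyStore()                       # shared by every agent in the fleet
store.put(("skills", "v1"), "summarize.md", "How to summarize reports")
```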

Memory-Aware Deep Research Agent

The flagship pattern — a batteries-included harness combining all of the above:

Component        Implementation
---------------  ---------------------------------
Core reasoning   LangGraph state machine
LLM inference    OCI GenAI (Cohere / LLaMA)
Tracing          LangFuse callback handler
Thread memory    OracleDBSaver checkpointer
Shared memory    OracleStore virtual filesystem
Skills           Lazy-loaded from memory store
Subagent fleet   Parallel agents sharing the store

Configuration:

# OCI credentials
OCI_CONFIG_FILE=~/.oci/config
CONFIG_PROFILE=DEFAULT
OCI_REGION=us-chicago-1
OCI_COMPARTMENT_ID=ocid1.compartment...

# LLM
OCI_GENAI_ENDPOINT=https://inference.generativeai...
OCI_GENAI_MODEL_ID=cohere.command-r-plus
OCI_EMBEDDING_MODEL=cohere.embed-english-v3.0

# Oracle ADB
ADB_USER=ADMIN
ADB_PASSWORD=<secret>
ADB_DSN=mydb_high
ADB_WALLET_LOCATION=./wallet
ADB_WALLET_PASSWORD=<secret>

# LangFuse
LANGFUSE_SECRET_KEY=sk-...
LANGFUSE_PUBLIC_KEY=pk-...
LANGFUSE_BASE_URL=https://langfuse.your-domain.com
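
Because the harness fails in confusing ways when a variable is missing, a fail-fast check at startup is worth the few lines. This is a suggested sketch (not part of the shipped code) covering a subset of the variables above:

```python
import os

# A few of the required settings from the .env example above.
REQUIRED = ["OCI_COMPARTMENT_ID", "OCI_GENAI_ENDPOINT", "OCI_GENAI_MODEL_ID"]

def check_config(env=None):
    """Return the names of required settings that are missing or empty."""
    env = os.environ if env is None else env
    return [name for name in REQUIRED if not env.get(name)]
```

Call it once at startup and raise if the returned list is non-empty, before any agent code runs.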

Solution Patterns (patterns_solution/)

User Impersonation (user_impersonation_adb/, user_impersonation_oic/)

Allows agents to query databases as if they were the end user, without requiring per-user DB credentials.

How it works:

  1. Agent fetches IAM group membership for the authenticated user
  2. Populates ADMIN.session_role table with (user_id, session_id, role)
  3. Oracle VPD reads the session role at query time and applies row-level filters
User → SSO → IAM groups → session_role table → VPD → filtered query result
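
Step 2 above amounts to one INSERT before the query runs. The sketch below shows the shape with a fake cursor that records statements instead of talking to Oracle ADB; `record_session_role`, `FakeCursor`, and the first-group role-selection rule are illustrative, not the shipped pattern's code:

```python
def record_session_role(cursor, user_id, session_id, groups):
    """Write the (user_id, session_id, role) row that VPD will read."""
    role = groups[0] if groups else "PUBLIC"   # effective role for the session
    cursor.execute(
        "INSERT INTO ADMIN.session_role (user_id, session_id, role) "
        "VALUES (:1, :2, :3)",
        (user_id, session_id, role),
    )
    return role

class FakeCursor:
    """Records executed statements for inspection (no real DB needed)."""
    def __init__(self):
        self.calls = []
    def execute(self, sql, params):
        self.calls.append((sql, params))

cur = FakeCursor()
granted = record_session_role(cur, "u1", "sess-42", ["FINANCE_ANALYST"])
```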

End-to-End IAM Identity Propagation (identity_propagation_adb/, user_e2e_token_exchange_adb/)

True per-user DB token flow — no shared service credentials:

  1. User authenticates via IDCS SSO → receives accessToken
  2. MCP server exchanges SSO token with IAM endpoint → receives short-lived DB token
  3. DB connection opened with IAM DB token — Oracle logs the actual end-user identity
  4. VPD enforces role-based access using native IAM identity
IDCS SSO → accessToken → IAM Token Exchange → DB token → Oracle ADB
                                                     Audit trail (end user)
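
The exchange sequence can be modeled as two small functions with the HTTP and DB calls injected, so the flow itself is testable. The endpoint payload fields and the injected callables here are illustrative; the real IAM token-exchange contract and driver API differ:

```python
def db_token_for_user(access_token, exchange):
    """Step 2: swap the IDCS SSO access token for a short-lived DB token."""
    response = exchange({"grant_type": "token-exchange",
                         "subject_token": access_token})
    return response["db_token"]

def connect_as_user(access_token, exchange, connect):
    # Step 3: the connection carries the IAM DB token, so Oracle audits
    # the actual end user rather than a shared service account.
    return connect(token=db_token_for_user(access_token, exchange))
```

Injecting `exchange` and `connect` keeps the sequencing logic separate from the HTTP client and DB driver, which also makes the flow easy to unit-test.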

OIC Identity Propagation (user_e2e_token_exchange_oic/, user_impersonation_oic/)

The same patterns adapted for Oracle Integration Cloud (OIC), where OIC workflows serve as the write-back channel to EBS.

A2A Service Invocation (a2a_invoke_services/)

Agent-to-agent call patterns using the A2A protocol over localhost HTTP ports. Enables orchestrator agents to spawn and coordinate specialized subagents in the same deployment.

Async Fusion Agent Invocation (async_invoke_fusion_agents/)

Asynchronous invocation of Oracle Fusion Studio agents from custom LangGraph workflows. Bridges the platform to native Oracle Enterprise AI managed agents.


Running pattern examples

From the inception_core/accelerators/patterns_agentic/ directory:

# Activate your environment
source .venv/bin/activate

# Run any pattern
python -m src.agents.patterns.agent_with_augmented_llm
python -m src.agents.patterns.agent_with_routing
python -m src.agents.patterns.agent_with_orchestrator_workers

# Run the full test suite
bash tests/test_run.sh

Most examples require .env to be populated with OCI credentials and (for memory patterns) Oracle ADB connectivity.


Which pattern should I use?

Not sure which of the 7 patterns fits your problem? Use this decision guide:

Your task looks like...                                   Start with
--------------------------------------------------------  -------------------------------
Single-turn Q&A with tool use                             Pattern 1 — Augmented LLM
Multi-step document generation or transformation          Pattern 2 — Prompt Chaining
Different input types need different specialists          Pattern 3 — Routing
Independent subtasks that can run at the same time        Pattern 4 — Parallelization
Complex, unpredictable task that needs dynamic planning   Pattern 5 — Orchestrator-Workers
Output quality is the goal; correctness must be verified  Pattern 6 — Evaluator-Optimizer
Open-ended research or tool-use with no fixed steps       Pattern 7 — Autonomous Agent
Any pattern that needs to survive a process restart       Add Pattern — Long-Term Memory
Multiple agents sharing knowledge                         Add Pattern — Memory Store

Anti-patterns (what not to do)

  • Building on OCI Wayflow (Oracle's managed workflow service)
    Don't: plug in LangGraph memory primitives — state management is incompatible.
    Instead: use Wayflow-native state management, or move off Wayflow to LangGraph entirely.

  • Integrating with Oracle Fusion Studio / Enterprise AI Projects
    Don't: build a custom LangGraph agent that reimplements its logic.
    Instead: use the async_invoke_fusion_agents pattern to call Fusion Studio agents from LangGraph.

  • Sharing credentials across agents
    Don't: use a shared service account in every agent.
    Instead: use IAM identity propagation — each agent call carries the end user's own token.

  • Deploying Pattern 7 (Autonomous Agent) without safety controls
    Don't: ship without a guardrail policy.
    Instead: always attach the OCIGuardrails handler and enable HITL for any tool that mutates data.

  • Hardcoding model IDs in agent files
    Don't: scatter model = "cohere.command-r-plus" across agent files.
    Instead: import LLMFactory.get_chat_model() — change the model in one env var.