AI Core Libraries¶
Location: inception_core/libs/
The core libraries are the shared foundation every other module in the platform builds on. They provide consistent, tested primitives for LLM invocation, content governance, observability, and data access — so you never re-implement these from scratch in a recipe or application.
Package structure¶
inception_core/libs/src/
├── env.py                            ← Shared environment loader
├── auth/
│   └── token_factory/                ← Credential translation library (see Token Factory page)
│       ├── factory.py                ← TokenFactory.from_env() + get_token()
│       └── _factory_support.py       ← JWT assertion, UPST exchange, Vault reads
├── llm/
│   ├── llmfactory.py                 ← Unified LLM provider abstraction
│   ├── image_analysis.py             ← Multimodal image processing
│   ├── pdf_processing.py             ← PDF extraction and chunking
│   └── video_analysis.py             ← Video frame analysis
├── metro/
│   ├── guardrails/
│   │   ├── oci_guardrails.py         ← OCI Guardrails integration
│   │   └── oci_guardrails_handler.py ← LangChain-compatible handler
│   ├── auditing/                     ← LangFuse audit integration
│   └── logging/
│       ├── agentic_logging.py        ← Structured agent logging
│       └── oci_logginghandler.py     ← OCI Logging sink handler
└── utils/
    ├── db_conn_utils.py              ← Oracle ADB connection helpers
    └── upload_research_dataset.py    ← Dataset ingestion utility
Token Factory
The auth/token_factory module is documented on its own page: Token Factory.
It translates incoming IDCS tokens into Oracle DB tokens, OCI service signers, or OAuth access tokens — used by the MCP servers and E2E identity propagation patterns.
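In brief, the calling pattern looks like the sketch below. The import path and the get_token() arguments are assumptions inferred from the package tree above; see the Token Factory page for the real signatures.
from src.auth.token_factory.factory import TokenFactory  # path assumed from the tree above
# Build the factory from environment configuration
factory = TokenFactory.from_env()
# Exchange the incoming IDCS token for a downstream credential;
# the argument shape depends on the target (DB token, signer, or OAuth token)
token = factory.get_token()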
LLM Factory (llm/llmfactory.py)¶
The LLM factory is the single abstraction layer across all supported model providers. It lets you swap models without changing agent code.
Supported providers:
- OCI GenAI (Cohere, Meta LLaMA, and other OCI-hosted models)
- OpenAI-compatible endpoints (includes OCI GenAI via OpenAI SDK adapter)
- Any provider that conforms to the LangChain chat model interface
Why it matters: Agents and recipes import the factory, not a provider directly. When your team switches from one model to another — for cost, capability, or compliance reasons — you change one config value, not every agent file.
Usage:
from langchain_core.messages import HumanMessage
from src.llm.llmfactory import LLMFactory

# Returns a LangChain-compatible chat model
llm = LLMFactory.get_chat_model()

# Use directly in a LangGraph node
response = llm.invoke([HumanMessage(content="Summarise this invoice")])

# Or bind tools before passing to LangGraph
# (execute_sql and upload_file are your own tool functions, defined elsewhere)
llm_with_tools = llm.bind_tools([execute_sql, upload_file])
Configuration:
OCI_GENAI_ENDPOINT=https://inference.generativeai.us-chicago-1.oci.oraclecloud.com
OCI_GENAI_MODEL_ID=cohere.command-r-plus
OCI_EMBEDDING_MODEL=cohere.embed-english-v3.0
OCI_COMPARTMENT_ID=ocid1.compartment.oc1...<your-compartment>
Multimodal Utilities (llm/)¶
Image Analysis (image_analysis.py)¶
Wraps OCI GenAI vision-capable models for image understanding tasks. Useful for document classification, receipt scanning, diagram parsing, and invoice image extraction.
from src.llm.image_analysis import analyze_image
result = analyze_image(
image_path="invoice_scan.png",
prompt="Extract vendor name, invoice date, and total amount"
)
# Returns structured dict with extracted fields
PDF Processing (pdf_processing.py)¶
Provides document extraction and chunking utilities for PDFs. Designed for RAG pipelines, knowledge base ingestion, and document-grounded agent tools.
from src.llm.pdf_processing import extract_pdf_chunks
chunks = extract_pdf_chunks("contract.pdf", chunk_size=512, overlap=64)
# Returns list of text chunks ready for embedding and vector storage
Video Analysis (video_analysis.py)¶
Frame-level analysis utilities for video content. Applicable to field service recording analysis, training data workflows, and multimedia content classification.
from src.llm.video_analysis import analyze_video_frames
summary = analyze_video_frames(
video_path="field_recording.mp4",
sample_rate=5, # analyze every 5th frame
prompt="Describe what the technician is doing"
)
Governance Library (metro/)¶
The metro package groups all compliance, safety, and observability primitives under one roof.
OCI Guardrails (guardrails/oci_guardrails.py)¶
Wraps the OCI Generative AI Guardrails service to enforce:
- Content safety — detect and block harmful content in agent inputs and outputs
- Topic filtering — restrict agents to approved subject domains
- Policy enforcement — apply custom content policies without writing prompt-level rules
Usage pattern:
from metro.guardrails.oci_guardrails import OCIGuardrails

# config carries your Guardrails endpoint and policy settings
guardrails = OCIGuardrails(config)
result = guardrails.check(user_input)
if result.blocked:
    return result.reason  # inside your handler: short-circuit with the block reason
Guardrails Handler (guardrails/oci_guardrails_handler.py)¶
A LangChain-compatible callback handler that integrates guardrail checks into the agent execution loop automatically. Drop it into any LangGraph or LangChain agent with zero changes to tool or prompt code.
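A minimal sketch of attaching it, assuming the class is named OCIGuardrailsHandler and follows the standard LangChain callbacks pattern (the name and constructor arguments are inferred from the module name, not confirmed signatures):
from metro.guardrails.oci_guardrails_handler import OCIGuardrailsHandler

# Handler name and constructor arguments are assumptions
handler = OCIGuardrailsHandler(config)
# LangChain's standard callbacks hook screens every model call automatically
response = llm.invoke(messages, config={"callbacks": [handler]})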
Auditing (auditing/)¶
LangFuse integration hooks that emit structured trace events for every agent step:
- Prompts sent and received
- Tool calls and results
- Token usage per step
- Latency and error signals
These traces are queryable in the LangFuse dashboard for debugging, cost analysis, and quality evaluation.
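For orientation, the wiring resembles LangFuse's stock LangChain callback, shown below. The exact helper names inside metro/auditing are not reproduced here, so treat this as a sketch of the pattern rather than the module's API:
from langfuse.callback import CallbackHandler  # upstream LangFuse import (v2-style)

# Reads LANGFUSE_PUBLIC_KEY / LANGFUSE_SECRET_KEY / LANGFUSE_HOST from the environment
audit_handler = CallbackHandler()
# Each agent step is traced: prompts, tool calls, token usage, latency
response = llm.invoke(messages, config={"callbacks": [audit_handler]})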
Structured Logging (logging/)¶
agentic_logging.py — Provides a structured JSON logger pre-configured for agentic workloads. Fields include agent ID, session thread, tool name, and execution stage.
oci_logginghandler.py — A Python logging.Handler subclass that ships logs to OCI Logging. Supports alerting rules, retention policies, and dashboards via OCI Log Analytics.
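A short sketch of the intended wiring; the class name below is an assumption inferred from the filename, not a confirmed API:
import logging

from metro.logging.oci_logginghandler import OCILoggingHandler  # class name assumed

logger = logging.getLogger("agents.invoice")
# Ship records to OCI Logging; constructor arguments are assumptions
logger.addHandler(OCILoggingHandler())
logger.info(
    "tool_call_complete",
    extra={"agent_id": "inv-01", "tool": "execute_sql", "stage": "act"},
)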
Utility Helpers (utils/)¶
Database Connection Utils (db_conn_utils.py)¶
Oracle ADB (Autonomous Database) connection helpers supporting:
- Wallet-based mTLS connections
- IAM token-based authentication
- Connection pooling and lifecycle management
Used by checkpointer patterns, memory store backends, and MCP ADW tools.
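A hedged sketch of typical usage, assuming a get_connection-style helper (the actual function names live in db_conn_utils.py):
from src.utils.db_conn_utils import get_connection  # helper name assumed

# Wallet-based mTLS by default; IAM token auth is the alternative path
with get_connection() as conn:
    cur = conn.cursor()
    cur.execute("SELECT 1 FROM dual")
    print(cur.fetchone())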
Dataset Upload (upload_research_dataset.py)¶
Utility for ingesting research datasets into OCI Object Storage or Oracle ADB. Supports chunking, embedding generation, and vector store population — the setup step for RAG-capable agents.
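As illustration only (the function name and parameters below are assumptions, not the module's confirmed interface):
from src.utils.upload_research_dataset import upload_dataset  # name assumed

# Chunk, embed, and load a local corpus into the configured vector store
upload_dataset(source_dir="./papers", chunk_size=512)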
Installation¶
The library is installed as a local editable package:
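# Run from the repository root; the path follows the package layout above
pip install -e inception_core/libs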
Testing¶
What is covered:
| Test area | What is verified |
|---|---|
| Guardrails | Blocked content returns a reason; approved content passes through cleanly |
| LLM invocation | LLM factory returns a callable model; invoke returns a non-empty response |
| OCI Logging handler | Log records are serialized and sent without error; log level filtering works |
| DB connection utils | Wallet-based connection succeeds; connection pool lifecycle is clean |
| PDF processing | Extracted text matches known fixture; chunk boundaries are correct |
Run a specific test file to isolate failures:
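# Example invocation; the test path is illustrative, adjust to the actual tests/ layout
pytest inception_core/libs/tests/test_guardrails.py -v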