OpenAI Codex Integration
Learn how to connect TraceMem's MCP server to OpenAI Codex and ChatGPT, enabling AI assistants to track decisions, evaluate policies, and request approvals while maintaining data minimization and security best practices.
Overview
OpenAI supports MCP (Model Context Protocol) servers in its Codex tooling and publishes general MCP guidance. This integration enables you to:
- Track decisions as AI assistants work on code changes and architecture
- Evaluate policies to ensure compliance before actions are taken
- Request approvals when policies require human judgment
- Minimize data transfer by optimizing tool schemas and relying on TraceMem's redaction
- Maintain audit trails for all AI-driven decisions and changes
This is the lowest-effort, highest-leverage path because your TraceMem MCP server already exists and works everywhere OpenAI supports MCP.
Prerequisites
Before setting up the integration, ensure you have:
- A TraceMem account - Sign up at app.tracemem.com
- An Agent API key - Generate one from your TraceMem project settings
- OpenAI API access - Access to OpenAI Codex or ChatGPT with MCP support
- MCP support - Your OpenAI environment must support MCP servers
Adding TraceMem MCP Server
Step 1: Locate MCP Configuration
The MCP configuration location depends on your OpenAI setup:
- Codex: Configuration file typically at ~/.config/codex/mcp.json or a project-specific .codex/mcp.json
- ChatGPT: Settings → MCP Servers (if available in your environment)
- Custom Integration: Follow OpenAI's MCP documentation for your specific setup
Step 2: Add TraceMem Server Configuration
Add the TraceMem MCP server to your configuration:
{
"mcpServers": {
"tracemem": {
"url": "https://mcp.tracemem.com",
"auth": {
"type": "header",
"header": "Authorization",
"value": "Agent YOUR_API_KEY_HERE"
}
}
}
}
Important: Replace YOUR_API_KEY_HERE with your actual TraceMem Agent API key.
Step 3: Restart Your Environment
Restart your OpenAI Codex or ChatGPT environment to load the new MCP server configuration.
Step 4: Verify Connection
Test the connection using the capabilities_get tool (see "Verifying Connectivity" below).
Starter Config Snippet
Here's a complete starter configuration you can copy and customize:
{
"mcpServers": {
"tracemem": {
"url": "https://mcp.tracemem.com",
"auth": {
"type": "header",
"header": "Authorization",
"value": "Agent YOUR_API_KEY_HERE"
},
"description": "TraceMem MCP server for decision tracking and governance"
}
}
}
Using Environment Variables
For better security, use environment variables for your API key:
{
"mcpServers": {
"tracemem": {
"url": "https://mcp.tracemem.com",
"auth": {
"type": "header",
"header": "Authorization",
"value": "Agent ${TRACEMEM_API_KEY}"
}
}
}
}
Then set the environment variable:
export TRACEMEM_API_KEY="your-api-key-here"
Verifying Connectivity
Use the capabilities_get tool (TraceMem's "doctor" tool) to verify your setup immediately after configuration.
Quick Verification
Ask your AI assistant:
"Run capabilities_get to verify TraceMem connectivity"
Or directly call the tool:
# Example verification request
{
"method": "tools/call",
"params": {
"name": "capabilities_get",
"arguments": {}
}
}
Expected Response
A successful connection returns:
{
"agent_id": "agent_...",
"name": "Your Agent Name",
"permissions": {
"data_products": [...],
"policies": [...]
}
}
If you see this response, your TraceMem MCP server is properly configured and accessible.
Understanding Approvals
OpenAI's MCP guidance describes an approval step before data is shared with a connector or remote MCP server. This is a critical security feature that TraceMem fully supports.
Why Approvals Matter
Approvals in TraceMem serve several important purposes:
- Human Judgment: When policies cannot automatically allow an action, human judgment is required
- Exception Handling: Approvals capture who explicitly allowed an exception and why
- Audit Trail: Every approval becomes part of the immutable decision trace
- Compliance: Approvals provide evidence for regulatory and compliance requirements
When Approvals Are Required
Approvals are required when:
- Policy evaluation returns requires_exception: The policy cannot automatically allow the action
- Automation mode is propose: The decision requires explicit human approval
- Data Product writes: Some data products require approval by definition
- Sensitive operations: Operations involving sensitive data or high-risk changes
Approval Workflow
- Policy Evaluation: AI assistant evaluates a policy using decision_evaluate
- Exception Detected: Policy returns requires_exception
- Approval Request: AI uses decision_request_approval to request human approval
- Human Review: Approval is delivered via the configured route (Slack, email, webhook)
- Decision: Human approves or rejects
- Proceed or Abort: AI proceeds if approved, aborts if rejected
Example Approval Flow
# 1. Evaluate policy
policy_result = decision_evaluate(
decision_id=decision_id,
policy_id="discount_cap_v1",
inputs={"proposed_discount": 0.25}
)
# 2. Check if approval needed
if policy_result["outcome"] == "requires_exception":
# 3. Request approval
approval = decision_request_approval(
decision_id=decision_id,
title="High-Value Discount Approval",
message="Customer requesting 25% discount. Policy requires exception."
)
# 4. Wait for approval (poll decision_get)
# 5. Proceed if approved, abort if rejected
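Step 4 can be a simple polling loop. The sketch below is illustrative only: it assumes decision_get returns a field reflecting the approval outcome (the approval_status field name is hypothetical; check the tool schema reported by capabilities_get for the exact shape).
import time

def wait_for_approval(decision_id, timeout_s=600, interval_s=15):
    # Poll decision_get until the approval resolves or the timeout expires.
    deadline = time.time() + timeout_s
    while time.time() < deadline:
        decision = decision_get(decision_id=decision_id)
        status = decision.get("approval_status")  # hypothetical field name
        if status in ("approved", "rejected"):
            return status
        time.sleep(interval_s)
    return "timed_out"

# Proceed only on an explicit approval; abort otherwise.
if wait_for_approval(decision_id) != "approved":
    raise RuntimeError("Approval not granted; aborting")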
Why Minimization Matters
Data minimization is crucial when working with remote MCP servers. OpenAI's approval system requires reviewing data before it's shared, making minimization essential for both security and efficiency.
TraceMem's Minimization Features
TraceMem provides several built-in minimization features:
- Automatic Secret Redaction (see the sketch after this list)
  - Keys matching patterns like token, secret, password, api_key are automatically redacted
  - Values are replaced with [redacted] before storage
- Size Limits
  - Long strings (>1000 characters) are truncated
  - Arrays are limited to 100 items
  - Recursion depth is limited to 10 levels
- Purpose-Bound Data Access
  - Data products require explicit purposes
  - Only necessary data is exposed
  - Access is logged and auditable
- Optimized Tool Schemas
  - Tool schemas are designed for minimal data transfer
  - Only essential fields are included
  - Redaction is applied automatically
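To make these rules concrete, here is a minimal client-side sketch of roughly what the minimization does. It is illustrative only: TraceMem applies the real redaction and limits server-side, and the pattern list and thresholds below simply mirror the values listed above.
import re

# Key patterns and limits mirror the documented behavior above (illustrative only).
SECRET_KEY_PATTERN = re.compile(r"token|secret|password|api_key", re.IGNORECASE)

def minimize(value, depth=0):
    if depth > 10:  # recursion depth limited to 10 levels
        return "[truncated]"
    if isinstance(value, dict):
        return {
            key: "[redacted]" if SECRET_KEY_PATTERN.search(key) else minimize(item, depth + 1)
            for key, item in value.items()
        }
    if isinstance(value, list):
        return [minimize(item, depth + 1) for item in value[:100]]  # arrays capped at 100 items
    if isinstance(value, str) and len(value) > 1000:
        return value[:1000]  # long strings truncated
    return value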
Best Practices for Minimization
- Use Structured Intents
  - Use specific intents like code.refactor.execute instead of free-form descriptions
  - This reduces the amount of context needed
- Minimize Metadata
  - Only include essential metadata in decision_create
  - Avoid including full file contents or large data structures
- Use Purpose-Bound Queries
  - When using decision_read, query only what you need
  - Use specific filters and limits
- Leverage Automatic Redaction
  - Trust TraceMem's automatic secret redaction
  - Don't manually redact unless necessary
- Review Before Sharing
  - Review decision context before requesting approval
  - Ensure no sensitive data is included
Example: Minimized Decision Creation
# Good: Minimal, structured metadata
decision_create(
intent="code.refactor.execute",
automation_mode="propose",
metadata={
"component": "auth-service",
"change_type": "refactor",
"files_affected": 3
}
)
# Avoid: Including large data structures
decision_create(
intent="code.refactor.execute",
automation_mode="propose",
metadata={
"full_file_contents": "...", # Too large
"entire_codebase": "...", # Unnecessary
"all_secrets": "..." # Will be redacted, but better to avoid
}
)
Tool Optimization for OpenAI
TraceMem's tool schemas are optimized for minimal data transfer. Here's how to use them effectively:
Decision Tools
- decision_create: Minimal metadata, structured intents
- decision_add_context: Incremental context, not bulk data
- decision_read: Specific queries with filters and limits (see the sketch after this list)
- decision_write: Targeted mutations, not full replacements
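For example, a targeted read might look like the call below. The filters and limit argument names are assumptions for illustration, not a documented schema; confirm the exact shape against the tool schema exposed via capabilities_get.
# Hedged sketch: read only recent decisions for one component, capped at 10 results.
recent = decision_read(
    filters={"intent": "code.refactor.execute", "component": "user-service"},  # assumed argument
    limit=10  # assumed argument
)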
Policy Tools
- decision_evaluate: Only essential inputs, not full data sets
- decision_request_approval: Clear, concise messages
Discovery Tools
- capabilities_get: Lightweight capability discovery
- products_list: Summary information only
- product_get: Detailed info only when needed (see the sketch after this list)
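A minimization-friendly discovery flow starts with the lightweight calls and fetches detail only on demand. The sketch below assumes products_list returns entries with id and name fields and that product_get takes a product_id argument; these shapes are assumptions, so verify them against your capabilities_get output.
# Hedged sketch: discover cheaply, fetch detail only when a specific product is needed.
caps = capabilities_get()       # lightweight "doctor" check
products = products_list()      # summary information only
for product in products:
    if product.get("name") == "customer_orders":          # hypothetical product name
        detail = product_get(product_id=product["id"])    # assumed argument and field names
        break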
Complete Workflow Example
Here's a complete example of using TraceMem with OpenAI Codex:
# 1. Create decision envelope
decision = decision_create(
intent="code.refactor.execute",
automation_mode="propose",
metadata={
"component": "user-service",
"change_type": "refactor"
}
)
decision_id = decision["decision_id"]
# 2. Add context as work progresses
decision_add_context(
decision_id=decision_id,
data={
"step": "extracting_auth_logic",
"rationale": "Improving testability"
}
)
# 3. Evaluate policy if needed
policy_result = decision_evaluate(
decision_id=decision_id,
policy_id="code_change_policy_v1",
inputs={"change_type": "refactor", "component": "user-service"}
)
# 4. Request approval if required
if policy_result["outcome"] == "requires_exception":
decision_request_approval(
decision_id=decision_id,
title="Refactoring Approval Required",
message="Refactoring user-service authentication logic"
)
# Wait for approval (poll decision_get)
# ...
# 5. Close decision when complete
decision_close(
decision_id=decision_id,
outcome="commit",
summary="Successfully refactored authentication logic"
)
Security Best Practices
- Protect API Keys
  - Use environment variables
  - Never commit keys to version control
  - Rotate keys regularly
- Review Approvals
  - Always review approval requests before approving
  - Understand what data will be accessed
  - Verify the action is appropriate
- Minimize Data Transfer
  - Use structured intents
  - Include only essential metadata
  - Leverage automatic redaction
- Monitor Usage
  - Review decision traces regularly
  - Monitor API key usage
  - Audit approvals and exceptions
- Use Purpose-Bound Access
  - Always specify appropriate purposes
  - Use data products with minimal permissions
  - Review data product access regularly
Troubleshooting
Connection Issues
If you're unable to connect to TraceMem:
- Verify API Key: Ensure your API key is correct and active
- Check Network: Verify you can reach https://mcp.tracemem.com (see the sketch after this list)
- Review Configuration: Check MCP server configuration syntax
- Test Endpoint: Use capabilities_get to test connectivity
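To rule out network problems independently of your MCP client, a quick reachability check from Python is enough; any HTTP response, even an error status, shows the endpoint is reachable, and authentication plus tool access are then confirmed with capabilities_get. This sketch assumes the requests package is installed.
import requests

# Reachability check only; the specific status code is not meaningful here.
response = requests.get("https://mcp.tracemem.com", timeout=10)
print("Reached mcp.tracemem.com, HTTP status:", response.status_code)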
Tools Not Available
If TraceMem tools don't appear:
- Restart Environment: Fully restart your OpenAI environment
- Check Configuration: Verify MCP server configuration
- Review Logs: Check for error messages in logs
- Verify Permissions: Ensure your API key has necessary permissions
Approval Issues
If approvals aren't working:
- Check Approval Routes: Verify approval routes are configured
- Review Policy Evaluation: Ensure policies are returning requires_exception when needed
- Check Integration: Verify Slack/email/webhook integration is working
- Review Logs: Check TraceMem logs for approval delivery issues
Minimization Concerns
If you're concerned about data minimization:
- Review Metadata: Check what metadata is being included
- Use Filters: Apply filters and limits to queries
- Leverage Redaction: Trust automatic secret redaction
- Review Before Approval: Always review data before approving