OpenAI Codex Integration

Learn how to connect TraceMem's MCP server to OpenAI Codex and ChatGPT, enabling AI assistants to track decisions, evaluate policies, and request approvals while maintaining data minimization and security best practices.

Overview

OpenAI supports MCP (Model Context Protocol) servers in its Codex tooling and covers them in its general MCP guidance. This integration enables you to:

  • Track decisions as AI assistants work on code changes and architecture
  • Evaluate policies to ensure compliance before actions are taken
  • Request approvals when policies require human judgment
  • Minimize data transfer by optimizing tool schemas and relying on TraceMem's redaction
  • Maintain audit trails for all AI-driven decisions and changes

This is the lowest-effort, highest-leverage path because your TraceMem MCP server already exists and works everywhere OpenAI supports MCP.

Prerequisites

Before setting up the integration, ensure you have:

  1. A TraceMem account - Sign up at app.tracemem.com
  2. An Agent API key - Generate one from your TraceMem project settings
  3. OpenAI API access - Access to OpenAI Codex or ChatGPT with MCP support
  4. MCP support - Your OpenAI environment must support MCP servers

Adding TraceMem MCP Server

Step 1: Locate MCP Configuration

The MCP configuration location depends on your OpenAI setup:

  • Codex: Configuration file typically at ~/.config/codex/mcp.json or project-specific .codex/mcp.json
  • ChatGPT: Settings → MCP Servers (if available in your environment)
  • Custom Integration: Follow OpenAI's MCP documentation for your specific setup

Step 2: Add TraceMem Server Configuration

Add the TraceMem MCP server to your configuration:

json
{
  "mcpServers": {
    "tracemem": {
      "url": "https://mcp.tracemem.com",
      "auth": {
        "type": "header",
        "header": "Authorization",
        "value": "Agent YOUR_API_KEY_HERE"
      }
    }
  }
}

Important: Replace YOUR_API_KEY_HERE with your actual TraceMem Agent API key.

Step 3: Restart Your Environment

Restart your OpenAI Codex or ChatGPT environment to load the new MCP server configuration.

Step 4: Verify Connection

Test the connection using the capabilities_get tool (see "Verifying Connectivity" below).

Starter Config Snippet

Here's a complete starter configuration you can copy and customize:

json
{
  "mcpServers": {
    "tracemem": {
      "url": "https://mcp.tracemem.com",
      "auth": {
        "type": "header",
        "header": "Authorization",
        "value": "Agent YOUR_API_KEY_HERE"
      },
      "description": "TraceMem MCP server for decision tracking and governance"
    }
  }
}

Using Environment Variables

For better security, use environment variables for your API key:

json
{
  "mcpServers": {
    "tracemem": {
      "url": "https://mcp.tracemem.com",
      "auth": {
        "type": "header",
        "header": "Authorization",
        "value": "Agent ${TRACEMEM_API_KEY}"
      }
    }
  }
}

Then set the environment variable:

bash
export TRACEMEM_API_KEY="your-api-key-here"

Verifying Connectivity

Use the capabilities_get tool (TraceMem's "doctor" tool) to verify your setup immediately after configuration.

Quick Verification

Ask your AI assistant:

text
"Run capabilities_get to verify TraceMem connectivity"

Or directly call the tool:

json
{
  "method": "tools/call",
  "params": {
    "name": "capabilities_get",
    "arguments": {}
  }
}
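If you want to script the check yourself, the payload above can be built and posted with a small helper. This is a sketch assuming the server accepts JSON-RPC 2.0 over HTTP POST with the `Agent` authorization header shown earlier; the exact endpoint path and transport details may differ in your deployment:

```python
import json
import urllib.request

def build_capabilities_call(request_id=1):
    """Build a JSON-RPC 2.0 payload for the capabilities_get tool call."""
    return {
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {"name": "capabilities_get", "arguments": {}},
    }

def post_mcp_call(url, api_key, payload):
    """POST the payload with the Agent auth header; return parsed JSON."""
    req = urllib.request.Request(
        url,
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Agent {api_key}",
        },
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read().decode("utf-8"))
```

Building the payload separately from posting it lets you log or review exactly what will be sent before any data leaves your environment.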

Expected Response

A successful connection returns:

json
{
  "agent_id": "agent_...",
  "name": "Your Agent Name",
  "permissions": {
    "data_products": [...],
    "policies": [...]
  }
}

If you see this response, your TraceMem MCP server is properly configured and accessible.

Understanding Approvals

OpenAI's MCP guidance describes an approval step that runs before data is shared with a connector or remote MCP server. This is a critical security feature that TraceMem fully supports.

Why Approvals Matter

Approvals in TraceMem serve several important purposes:

  1. Human Judgment: When policies cannot automatically allow an action, human judgment is required
  2. Exception Handling: Approvals capture who explicitly allowed an exception and why
  3. Audit Trail: Every approval becomes part of the immutable decision trace
  4. Compliance: Approvals provide evidence for regulatory and compliance requirements

When Approvals Are Required

Approvals are required when:

  • Policy evaluation returns requires_exception: The policy cannot automatically allow the action
  • Automation mode is propose: The decision requires explicit human approval
  • Data Product writes: Some data products require approval by definition
  • Sensitive operations: Operations involving sensitive data or high-risk changes
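The first three conditions above can be collapsed into a small predicate. This is an illustrative sketch, not part of the TraceMem API; the sensitive-operations case is omitted here because it is expressed through your policies rather than a client-side flag:

```python
def requires_approval(policy_outcome, automation_mode,
                      product_requires_approval=False):
    """Return True when human approval is needed before proceeding.

    Mirrors the rules above: a requires_exception policy outcome,
    propose automation mode, or a data product that mandates approval.
    """
    if policy_outcome == "requires_exception":
        return True
    if automation_mode == "propose":
        return True
    if product_requires_approval:
        return True
    return False
```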

Approval Workflow

  1. Policy Evaluation: AI assistant evaluates a policy using decision_evaluate
  2. Exception Detected: Policy returns requires_exception
  3. Approval Request: AI uses decision_request_approval to request human approval
  4. Human Review: Approval is delivered via configured route (Slack, email, webhook)
  5. Decision: Human approves or rejects
  6. Proceed or Abort: AI proceeds if approved, aborts if rejected

Example Approval Flow

python
# 1. Evaluate policy
policy_result = decision_evaluate(
    decision_id=decision_id,
    policy_id="discount_cap_v1",
    inputs={"proposed_discount": 0.25}
)

# 2. Check if approval needed
if policy_result["outcome"] == "requires_exception":
    # 3. Request approval
    approval = decision_request_approval(
        decision_id=decision_id,
        title="High-Value Discount Approval",
        message="Customer requesting 25% discount. Policy requires exception."
    )
    
    # 4. Wait for approval (poll decision_get)
    # 5. Proceed if approved, abort if rejected
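Step 4 above leaves the polling loop as a comment. A minimal sketch of that wait, assuming `decision_get` returns a dict with an `approval_status` field (the field name is an assumption for illustration):

```python
import time

def wait_for_approval(decision_get, decision_id, timeout_s=300, interval_s=5):
    """Poll decision_get until the approval resolves or the timeout expires.

    decision_get is passed in so the loop works with any client.
    Returns "approved", "rejected", or "timeout".
    """
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        decision = decision_get(decision_id=decision_id)
        status = decision.get("approval_status")  # assumed field name
        if status in ("approved", "rejected"):
            return status
        time.sleep(interval_s)  # back off between polls
    return "timeout"
```

Treating a timeout as a distinct outcome (rather than an error) lets the assistant abort cleanly and record the unresolved approval in the decision trace.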

Why Minimization Matters

Data minimization is crucial when working with remote MCP servers. OpenAI's approval system requires reviewing data before it's shared, making minimization essential for both security and efficiency.

TraceMem's Minimization Features

TraceMem provides several built-in minimization features:

  1. Automatic Secret Redaction

    • Keys matching patterns like token, secret, password, api_key are automatically redacted
    • Values replaced with [redacted] before storage
  2. Size Limits

    • Long strings (>1000 characters) are truncated
    • Arrays limited to 100 items
    • Recursion depth limited to 10 levels
  3. Purpose-Bound Data Access

    • Data products require explicit purposes
    • Only necessary data is exposed
    • Access is logged and auditable
  4. Optimized Tool Schemas

    • Tool schemas are designed for minimal data transfer
    • Only essential fields are included
    • Redaction applied automatically
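The limits above can be illustrated with a client-side sketch of the same minimization pass. TraceMem applies its own redaction server-side; the patterns and thresholds here simply mirror the numbers listed above:

```python
SECRET_KEY_PATTERNS = ("token", "secret", "password", "api_key")
MAX_STRING = 1000   # characters before truncation
MAX_ITEMS = 100     # array items kept
MAX_DEPTH = 10      # recursion levels

def minimize(value, depth=0):
    """Redact secret-looking keys and enforce size limits, recursively."""
    if depth >= MAX_DEPTH:
        return "[truncated: depth]"
    if isinstance(value, dict):
        out = {}
        for key, val in value.items():
            if any(pat in key.lower() for pat in SECRET_KEY_PATTERNS):
                out[key] = "[redacted]"  # never forward the raw value
            else:
                out[key] = minimize(val, depth + 1)
        return out
    if isinstance(value, list):
        return [minimize(v, depth + 1) for v in value[:MAX_ITEMS]]
    if isinstance(value, str) and len(value) > MAX_STRING:
        return value[:MAX_STRING] + "[truncated]"
    return value
```

Running a pass like this before `decision_create` or `decision_add_context` keeps payloads small even before server-side redaction sees them.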

Best Practices for Minimization

  1. Use Structured Intents

    • Use specific intents like code.refactor.execute instead of free-form descriptions
    • This reduces the amount of context needed
  2. Minimize Metadata

    • Only include essential metadata in decision_create
    • Avoid including full file contents or large data structures
  3. Use Purpose-Bound Queries

    • When using decision_read, query only what you need
    • Use specific filters and limits
  4. Leverage Automatic Redaction

    • Trust TraceMem's automatic secret redaction
    • Don't manually redact unless necessary
  5. Review Before Sharing

    • Review decision context before requesting approval
    • Ensure no sensitive data is included

Example: Minimized Decision Creation

python
# Good: Minimal, structured metadata
decision_create(
    intent="code.refactor.execute",
    automation_mode="propose",
    metadata={
        "component": "auth-service",
        "change_type": "refactor",
        "files_affected": 3
    }
)

# Avoid: Including large data structures
decision_create(
    intent="code.refactor.execute",
    automation_mode="propose",
    metadata={
        "full_file_contents": "...",  # Too large
        "entire_codebase": "...",      # Unnecessary
        "all_secrets": "..."           # Will be redacted, but better to avoid
    }
)

Tool Optimization for OpenAI

TraceMem's tool schemas are optimized for minimal data transfer. Here's how to use them effectively:

Decision Tools

  • decision_create: Minimal metadata, structured intents
  • decision_add_context: Incremental context, not bulk data
  • decision_read: Specific queries with filters and limits
  • decision_write: Targeted mutations, not full replacements

Policy Tools

  • decision_evaluate: Only essential inputs, not full data sets
  • decision_request_approval: Clear, concise messages

Discovery Tools

  • capabilities_get: Lightweight capability discovery
  • products_list: Summary information only
  • product_get: Detailed info only when needed
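The list-then-fetch pattern above (summaries from products_list, details from product_get only on demand) can be sketched with a small cache. The class and its constructor arguments are illustrative stand-ins for the real tool calls:

```python
class ProductCatalog:
    """Fetch product summaries once; resolve details lazily with a cache."""

    def __init__(self, products_list, product_get):
        self._list = products_list
        self._get = product_get
        self._summaries = None
        self._details = {}

    def summaries(self):
        if self._summaries is None:
            self._summaries = self._list()   # one lightweight call
        return self._summaries

    def detail(self, product_id):
        if product_id not in self._details:  # detailed call only when needed
            self._details[product_id] = self._get(product_id=product_id)
        return self._details[product_id]
```

Caching both levels means repeated questions about the same product never trigger repeated transfers of the detailed schema.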

Complete Workflow Example

Here's a complete example of using TraceMem with OpenAI Codex:

python
# 1. Create decision envelope
decision = decision_create(
    intent="code.refactor.execute",
    automation_mode="propose",
    metadata={
        "component": "user-service",
        "change_type": "refactor"
    }
)
decision_id = decision["decision_id"]

# 2. Add context as work progresses
decision_add_context(
    decision_id=decision_id,
    data={
        "step": "extracting_auth_logic",
        "rationale": "Improving testability"
    }
)

# 3. Evaluate policy if needed
policy_result = decision_evaluate(
    decision_id=decision_id,
    policy_id="code_change_policy_v1",
    inputs={"change_type": "refactor", "component": "user-service"}
)

# 4. Request approval if required
if policy_result["outcome"] == "requires_exception":
    decision_request_approval(
        decision_id=decision_id,
        title="Refactoring Approval Required",
        message="Refactoring user-service authentication logic"
    )
    
    # Wait for approval (poll decision_get)
    # ...

# 5. Close decision when complete
decision_close(
    decision_id=decision_id,
    outcome="commit",
    summary="Successfully refactored authentication logic"
)

Security Best Practices

  1. Protect API Keys

    • Use environment variables
    • Never commit keys to version control
    • Rotate keys regularly
  2. Review Approvals

    • Always review approval requests before approving
    • Understand what data will be accessed
    • Verify the action is appropriate
  3. Minimize Data Transfer

    • Use structured intents
    • Include only essential metadata
    • Leverage automatic redaction
  4. Monitor Usage

    • Review decision traces regularly
    • Monitor API key usage
    • Audit approvals and exceptions
  5. Use Purpose-Bound Access

    • Always specify appropriate purposes
    • Use data products with minimal permissions
    • Review data product access regularly
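The first practice above (keys from the environment, never hardcoded) can be enforced with a tiny loader. `TRACEMEM_API_KEY` matches the variable used earlier in this guide; the helper itself is illustrative:

```python
import os

def load_api_key(var_name="TRACEMEM_API_KEY"):
    """Read the Agent API key from the environment; fail loudly if missing."""
    key = os.environ.get(var_name)
    if not key:
        raise RuntimeError(
            f"{var_name} is not set; export it instead of hardcoding the key"
        )
    return key
```

Failing loudly at startup is safer than a silent empty key, which would otherwise surface later as a confusing authentication error.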

Troubleshooting

Connection Issues

If you're unable to connect to TraceMem:

  1. Verify API Key: Ensure your API key is correct and active
  2. Check Network: Verify you can reach https://mcp.tracemem.com
  3. Review Configuration: Check MCP server configuration syntax
  4. Test Endpoint: Use capabilities_get to test connectivity

Tools Not Available

If TraceMem tools don't appear:

  1. Restart Environment: Fully restart your OpenAI environment
  2. Check Configuration: Verify MCP server configuration
  3. Review Logs: Check for error messages in logs
  4. Verify Permissions: Ensure your API key has necessary permissions

Approval Issues

If approvals aren't working:

  1. Check Approval Routes: Verify approval routes are configured
  2. Review Policy Evaluation: Ensure policies are returning requires_exception when needed
  3. Check Integration: Verify Slack/email/webhook integration is working
  4. Review Logs: Check TraceMem logs for approval delivery issues

Minimization Concerns

If you're concerned about data minimization:

  1. Review Metadata: Check what metadata is being included
  2. Use Filters: Apply filters and limits to queries
  3. Leverage Redaction: Trust automatic secret redaction
  4. Review Before Approval: Always review data before approving

TraceMem is trace-native infrastructure for AI agents