author     Mehmet Samet Duman <yongdohyun@projecttick.org>  2026-04-05 17:37:54 +0300
committer  Mehmet Samet Duman <yongdohyun@projecttick.org>  2026-04-05 17:37:54 +0300
commit     32f5f761bc8e960293b4f4feaf973dd0da26d0f8 (patch)
tree       8d0436fdd093d5255c3b75e45f9741882b22e2e4 /docs/handbook/ofborg
parent     64f4ddfa97c19f371fe1847b20bd26803f0a25d5 (diff)
NOISSUE Project Tick Handbook is Released!
Assisted-by: Claude:Opus-4.6-High
Signed-off-by: Mehmet Samet Duman <yongdohyun@projecttick.org>
Diffstat (limited to 'docs/handbook/ofborg')
-rw-r--r--  docs/handbook/ofborg/amqp-infrastructure.md  631
-rw-r--r--  docs/handbook/ofborg/architecture.md         814
-rw-r--r--  docs/handbook/ofborg/build-executor.md       657
-rw-r--r--  docs/handbook/ofborg/building.md             530
-rw-r--r--  docs/handbook/ofborg/code-style.md           332
-rw-r--r--  docs/handbook/ofborg/configuration.md        472
-rw-r--r--  docs/handbook/ofborg/contributing.md         326
-rw-r--r--  docs/handbook/ofborg/data-flow.md            346
-rw-r--r--  docs/handbook/ofborg/deployment.md           413
-rw-r--r--  docs/handbook/ofborg/evaluation-system.md    602
-rw-r--r--  docs/handbook/ofborg/github-integration.md   603
-rw-r--r--  docs/handbook/ofborg/message-system.md       731
-rw-r--r--  docs/handbook/ofborg/overview.md             571
-rw-r--r--  docs/handbook/ofborg/webhook-receiver.md     470
14 files changed, 7498 insertions, 0 deletions
diff --git a/docs/handbook/ofborg/amqp-infrastructure.md b/docs/handbook/ofborg/amqp-infrastructure.md
new file mode 100644
index 0000000000..4575da966a
--- /dev/null
+++ b/docs/handbook/ofborg/amqp-infrastructure.md
@@ -0,0 +1,631 @@
+# Tickborg — AMQP Infrastructure
+
+## Overview
+
+Tickborg uses **AMQP 0-9-1** (RabbitMQ) as the message bus connecting all
+services. The Rust crate `lapin` (v4.3.0) provides the low-level protocol
+client. Two abstraction layers — `easyamqp` and `easylapin` — provide
+higher-level APIs for declaring exchanges, binding queues, and running worker
+consumers.
+
+---
+
+## Key Source Files
+
+| File | Purpose |
+|------|---------|
+| `tickborg/src/easyamqp.rs` | Config types, traits, exchange/queue declarations |
+| `tickborg/src/easylapin.rs` | `lapin`-based implementations of the traits |
+| `tickborg/src/worker.rs` | `SimpleWorker` trait, `Action` enum |
+| `tickborg/src/notifyworker.rs` | `SimpleNotifyWorker`, `NotificationReceiver` |
+| `tickborg/src/config.rs` | `RabbitMqConfig` |
+
+---
+
+## Connection Configuration
+
+### `RabbitMqConfig`
+
+```rust
+// config.rs
+#[derive(Deserialize, Debug)]
+pub struct RabbitMqConfig {
+ pub ssl: bool,
+ pub host: String,
+ pub vhost: Option<String>,
+ pub username: String,
+ pub password_file: PathBuf,
+}
+```
+
+### Connection URI Construction
+
+```rust
+// easylapin.rs
+pub async fn from_config(cfg: &RabbitMqConfig) -> Result<lapin::Connection, lapin::Error> {
+ let password = std::fs::read_to_string(&cfg.password_file)
+ .expect("Failed to read RabbitMQ password file")
+ .trim()
+ .to_owned();
+
+ let vhost = cfg.vhost
+ .as_deref()
+ .unwrap_or("/")
+ .to_owned();
+
+ let scheme = if cfg.ssl { "amqps" } else { "amqp" };
+ let uri = format!(
+ "{scheme}://{user}:{pass}@{host}/{vhost}",
+ user = urlencoding::encode(&cfg.username),
+ pass = urlencoding::encode(&password),
+ host = cfg.host,
+ vhost = urlencoding::encode(&vhost),
+ );
+
+ lapin::Connection::connect(
+ &uri,
+ lapin::ConnectionProperties::default()
+ .with_tokio()
+ .with_default_executor(8),
+ ).await
+}
+```
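
Percent-encoding matters here: the default vhost `/` must become `%2F`, or the URI would end in a malformed `…//`. The following standalone sketch mirrors the construction above with a minimal hand-rolled encoder in place of the `urlencoding` crate (`encode` and `amqp_uri` are illustrative helpers, not tickborg APIs):

```rust
// Minimal percent-encoder for RFC 3986 unreserved characters, standing
// in for urlencoding::encode. Illustrative only.
fn encode(s: &str) -> String {
    s.bytes()
        .map(|b| match b {
            b'a'..=b'z' | b'A'..=b'Z' | b'0'..=b'9' | b'-' | b'_' | b'.' | b'~' => {
                (b as char).to_string()
            }
            _ => format!("%{b:02X}"),
        })
        .collect()
}

// Mirrors the URI construction in from_config() above. The host is
// interpolated verbatim so it may carry a port.
fn amqp_uri(ssl: bool, host: &str, vhost: Option<&str>, user: &str, pass: &str) -> String {
    let scheme = if ssl { "amqps" } else { "amqp" };
    format!(
        "{scheme}://{}:{}@{host}/{}",
        encode(user),
        encode(pass),
        encode(vhost.unwrap_or("/")),
    )
}

fn main() {
    let uri = amqp_uri(true, "mq.example.org:5671", None, "tickborg", "s3cret!");
    assert_eq!(uri, "amqps://tickborg:s3cret%21@mq.example.org:5671/%2F");
    println!("{uri}");
}
```

Note that the absent vhost collapses to `/` and is then encoded, which matches the `unwrap_or("/")` in `from_config`.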
+
+---
+
+## Exchange and Queue Configuration Types
+
+### `ExchangeType`
+
+```rust
+#[derive(Clone, Debug)]
+pub enum ExchangeType {
+ Topic,
+ Fanout,
+ Headers,
+ Direct,
+ Custom(String),
+}
+
+impl ExchangeType {
+ fn as_str(&self) -> &str {
+ match self {
+ ExchangeType::Topic => "topic",
+ ExchangeType::Fanout => "fanout",
+ ExchangeType::Headers => "headers",
+ ExchangeType::Direct => "direct",
+ ExchangeType::Custom(s) => s.as_ref(),
+ }
+ }
+}
+```
+
+### `ExchangeConfig`
+
+```rust
+#[derive(Clone, Debug)]
+pub struct ExchangeConfig {
+ pub exchange_name: String,
+ pub exchange_type: ExchangeType,
+ pub passive: bool,
+ pub durable: bool,
+ pub exclusive: bool,
+ pub auto_delete: bool,
+ pub no_wait: bool,
+ pub internal: bool,
+}
+
+impl Default for ExchangeConfig {
+ fn default() -> Self {
+ ExchangeConfig {
+ exchange_name: String::new(),
+ exchange_type: ExchangeType::Topic,
+ passive: false,
+ durable: true,
+ exclusive: false,
+ auto_delete: false,
+ no_wait: false,
+ internal: false,
+ }
+ }
+}
+```
+
+### `QueueConfig`
+
+```rust
+#[derive(Clone, Debug)]
+pub struct QueueConfig {
+ pub queue_name: String,
+ pub passive: bool,
+ pub durable: bool,
+ pub exclusive: bool,
+ pub auto_delete: bool,
+ pub no_wait: bool,
+}
+
+impl Default for QueueConfig {
+ fn default() -> Self {
+ QueueConfig {
+ queue_name: String::new(),
+ passive: false,
+ durable: true,
+ exclusive: false,
+ auto_delete: false,
+ no_wait: false,
+ }
+ }
+}
+```
+
+### `BindQueueConfig`
+
+```rust
+#[derive(Clone, Debug)]
+pub struct BindQueueConfig {
+ pub queue_name: String,
+ pub exchange_name: String,
+ pub routing_key: Option<String>,
+ pub no_wait: bool,
+ pub headers: Option<Vec<(String, String)>>,
+}
+```
+
+### `ConsumeConfig`
+
+```rust
+#[derive(Clone, Debug)]
+pub struct ConsumeConfig {
+ pub queue: String,
+ pub consumer_tag: String,
+ pub no_local: bool,
+ pub no_ack: bool,
+ pub no_wait: bool,
+ pub exclusive: bool,
+}
+```
+
+---
+
+## The `ChannelExt` Trait
+
+```rust
+// easyamqp.rs
+pub trait ChannelExt {
+ fn declare_exchange(
+ &mut self,
+ config: ExchangeConfig,
+ ) -> impl Future<Output = Result<(), String>>;
+
+ fn declare_queue(
+ &mut self,
+ config: QueueConfig,
+ ) -> impl Future<Output = Result<(), String>>;
+
+ fn bind_queue(
+ &mut self,
+ config: BindQueueConfig,
+ ) -> impl Future<Output = Result<(), String>>;
+}
+```
+
+### `lapin` Implementation
+
+```rust
+// easylapin.rs
+impl ChannelExt for lapin::Channel {
+ async fn declare_exchange(&mut self, config: ExchangeConfig) -> Result<(), String> {
+ let opts = ExchangeDeclareOptions {
+ passive: config.passive,
+ durable: config.durable,
+ auto_delete: config.auto_delete,
+ internal: config.internal,
+ nowait: config.no_wait,
+ };
+ self.exchange_declare(
+ &config.exchange_name,
+ lapin::ExchangeKind::Custom(
+ config.exchange_type.as_str().to_owned()
+ ),
+ opts,
+ FieldTable::default(),
+ ).await
+ .map_err(|e| format!("Failed to declare exchange: {e}"))?;
+ Ok(())
+ }
+
+ async fn declare_queue(&mut self, config: QueueConfig) -> Result<(), String> {
+ let opts = QueueDeclareOptions {
+ passive: config.passive,
+ durable: config.durable,
+ exclusive: config.exclusive,
+ auto_delete: config.auto_delete,
+ nowait: config.no_wait,
+ };
+ self.queue_declare(
+ &config.queue_name,
+ opts,
+ FieldTable::default(),
+ ).await
+ .map_err(|e| format!("Failed to declare queue: {e}"))?;
+ Ok(())
+ }
+
+ async fn bind_queue(&mut self, config: BindQueueConfig) -> Result<(), String> {
+ let routing_key = config.routing_key
+ .as_deref()
+ .unwrap_or("#");
+
+ let mut headers = FieldTable::default();
+ if let Some(hdr_vec) = &config.headers {
+ for (k, v) in hdr_vec {
+ headers.insert(
+ k.clone().into(),
+ AMQPValue::LongString(v.clone().into()),
+ );
+ }
+ }
+
+ self.queue_bind(
+ &config.queue_name,
+ &config.exchange_name,
+ routing_key,
+ QueueBindOptions { nowait: config.no_wait },
+ headers,
+ ).await
+ .map_err(|e| format!("Failed to bind queue: {e}"))?;
+ Ok(())
+ }
+}
+```
+
+---
+
+## The `ConsumerExt` Trait
+
+```rust
+// easyamqp.rs
+pub trait ConsumerExt {
+ fn consume<W: worker::SimpleWorker + 'static>(
+ &mut self,
+ worker: W,
+ config: ConsumeConfig,
+ ) -> impl Future<Output = Result<(), String>>;
+}
+```
+
+Three implementations exist in `easylapin.rs`:
+
+### 1. `Channel` — Simple Workers
+
+```rust
+impl ConsumerExt for lapin::Channel {
+ async fn consume<W: worker::SimpleWorker + 'static>(
+ &mut self,
+ mut worker: W,
+ config: ConsumeConfig,
+ ) -> Result<(), String> {
+ let mut consumer = self.basic_consume(
+ &config.queue,
+ &config.consumer_tag,
+ BasicConsumeOptions {
+ no_local: config.no_local,
+ no_ack: config.no_ack,
+ exclusive: config.exclusive,
+ nowait: config.no_wait,
+ },
+ FieldTable::default(),
+ ).await
+ .map_err(|e| format!("Failed to start consumer: {e}"))?;
+
+ // Message processing loop
+ while let Some(delivery) = consumer.next().await {
+ let delivery = delivery
+ .map_err(|e| format!("Consumer error: {e}"))?;
+
+ // Decode the message
+ let job = match worker.msg_to_job(
+ &delivery.routing_key,
+ &delivery.exchange,
+ &delivery.data,
+ ).await {
+ Ok(job) => job,
+ Err(err) => {
+ tracing::error!("Failed to decode message: {}", err);
+ delivery.ack(BasicAckOptions::default()).await
+ .map_err(|e| format!("Failed to ack: {e}"))?;
+ continue;
+ }
+ };
+
+ // Process the job
+ let actions = worker.consumer(&job).await;
+
+ // Execute resulting actions
+ for action in actions {
+ action_deliver(&self, &delivery, action).await?;
+ }
+ }
+ Ok(())
+ }
+}
+```
+
+### 2. `WorkerChannel` — Workers on a Dedicated Channel
+
+```rust
+pub struct WorkerChannel {
+ pub channel: lapin::Channel,
+ pub prefetch_count: u16,
+}
+
+impl ConsumerExt for WorkerChannel {
+ async fn consume<W: worker::SimpleWorker + 'static>(
+ &mut self,
+ worker: W,
+ config: ConsumeConfig,
+ ) -> Result<(), String> {
+ // Set QoS (prefetch count)
+ self.channel.basic_qos(
+ self.prefetch_count,
+ BasicQosOptions::default(),
+ ).await
+ .map_err(|e| format!("Failed to set QoS: {e}"))?;
+
+ // Delegate to Channel implementation
+ self.channel.consume(worker, config).await
+ }
+}
+```
+
+### 3. `NotifyChannel` — Notify Workers
+
+```rust
+pub struct NotifyChannel {
+ pub channel: lapin::Channel,
+}
+
+impl NotifyChannel {
+ pub async fn consume<W: notifyworker::SimpleNotifyWorker + 'static>(
+ &mut self,
+ mut worker: W,
+ config: ConsumeConfig,
+ ) -> Result<(), String> {
+ // Similar to Channel but creates a ChannelNotificationReceiver
+ // that allows the worker to report progress back to AMQP
+ let mut consumer = self.channel.basic_consume(/* ... */).await?;
+
+ while let Some(delivery) = consumer.next().await {
+ let delivery = delivery?;
+ let receiver = ChannelNotificationReceiver {
+ channel: self.channel.clone(),
+ delivery: &delivery,
+ };
+
+ let job = worker.msg_to_job(/* ... */).await?;
+
+ // Unlike SimpleWorker, the notify worker returns no actions:
+ // it pushes Ack/Nack/Publish through the receiver as it runs.
+ worker.consumer(&job, &receiver).await;
+ }
+ Ok(())
+ }
+}
+```
+
+---
+
+## Action Delivery
+
+```rust
+// easylapin.rs
+async fn action_deliver(
+ channel: &lapin::Channel,
+ delivery: &lapin::message::Delivery,
+ action: worker::Action,
+) -> Result<(), String> {
+ match action {
+ worker::Action::Ack => {
+ delivery.ack(BasicAckOptions::default()).await
+ .map_err(|e| format!("Failed to ack: {e}"))?;
+ }
+ worker::Action::NackRequeue => {
+ delivery.nack(BasicNackOptions {
+ requeue: true,
+ ..Default::default()
+ }).await
+ .map_err(|e| format!("Failed to nack: {e}"))?;
+ }
+ worker::Action::NackDump => {
+ delivery.nack(BasicNackOptions {
+ requeue: false,
+ ..Default::default()
+ }).await
+ .map_err(|e| format!("Failed to nack-dump: {e}"))?;
+ }
+ worker::Action::Publish(msg) => {
+ channel.basic_publish(
+ msg.exchange.as_deref().unwrap_or(""),
+ msg.routing_key.as_deref().unwrap_or(""),
+ BasicPublishOptions::default(),
+ &msg.content,
+ BasicProperties::default()
+ .with_delivery_mode(2), // persistent
+ ).await
+ .map_err(|e| format!("Failed to publish: {e}"))?;
+ }
+ }
+ Ok(())
+}
+```
+
+---
+
+## Notification Receiver
+
+```rust
+// easylapin.rs
+pub struct ChannelNotificationReceiver<'a> {
+ channel: lapin::Channel,
+ delivery: &'a lapin::message::Delivery,
+}
+
+impl<'a> notifyworker::NotificationReceiver for ChannelNotificationReceiver<'a> {
+ async fn tell(&mut self, action: worker::Action) {
+ if let Err(e) = action_deliver(&self.channel, self.delivery, action).await {
+ tracing::error!("Failed to deliver notification action: {}", e);
+ }
+ }
+}
+```
+
+Used by `BuildWorker` (which implements `SimpleNotifyWorker`) to publish
+incremental log messages while a build is in progress, without waiting for the
+build to complete.
+
+---
+
+## Exchange Topology
+
+### Declarations
+
+Every binary declares its own required exchanges/queues at startup.
+Here is the complete topology used across the system:
+
+| Exchange | Type | Purpose |
+|----------|------|---------|
+| `github-events` | Topic | GitHub webhooks → routing by event type |
+| `build-jobs` | Fanout | Evaluation → builders |
+| `build-results` | Fanout | Builder results → poster + stats |
+| `logs` | Topic | Build log lines → collector |
+| `stats` | Fanout | Metrics events → stats collector |
+
+### Queue Bindings
+
+| Queue | Exchange | Routing Key | Consumer |
+|-------|----------|-------------|----------|
+| `mass-rebuild-check-inputs` | `github-events` | `pull_request.*` | EvaluationFilterWorker |
+| `mass-rebuild-check-jobs` | _(direct publish)_ | — | EvaluationWorker |
+| `build-inputs-{identity}` | `build-jobs` | — | BuildWorker |
+| `build-results` | `build-results` | — | GitHubCommentPoster |
+| `build-logs` | `logs` | `logs.*` | LogMessageCollector |
+| `comment-jobs` | `github-events` | `issue_comment.*` | GitHubCommentWorker |
+| `push-jobs` | `github-events` | `push.*` | PushFilterWorker |
+| `stats-events` | `stats` | — | StatCollectorWorker |
+
+### Topic Routing Keys
+
+For the `github-events` exchange, the routing key follows the pattern:
+
+```
+{event_type}.{owner}/{repo}
+```
+
+Examples:
+- `pull_request.{owner}/{repo}`
+- `issue_comment.{owner}/{repo}`
+- `push.{owner}/{repo}`
+
+Since `{owner}/{repo}` contains no `.` separator, it counts as a single
+topic word, so bindings such as `pull_request.*` match events from every
+repository.
+
+For the `logs` exchange:
+- `logs.{build_id}` — Each build's log lines are tagged with the build ID
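
How binding patterns like `pull_request.*` match these keys follows AMQP's topic rules: the key is split on `.`, `*` matches exactly one word, and `#` matches zero or more words. RabbitMQ evaluates this server-side; the small matcher below is purely illustrative and not part of tickborg:

```rust
// Simplified AMQP topic matching: '.' separates words, '*' matches one
// word, '#' matches zero or more words.
fn topic_matches(pattern: &str, key: &str) -> bool {
    fn go(pat: &[&str], key: &[&str]) -> bool {
        match (pat.first(), key.first()) {
            (None, None) => true,
            (Some(&"#"), _) => {
                // '#' matches zero words, or swallows one word and retries.
                go(&pat[1..], key) || (!key.is_empty() && go(pat, &key[1..]))
            }
            (Some(&"*"), Some(_)) => go(&pat[1..], &key[1..]),
            (Some(&p), Some(&k)) => p == k && go(&pat[1..], &key[1..]),
            _ => false,
        }
    }
    let pat: Vec<&str> = pattern.split('.').collect();
    let key: Vec<&str> = key.split('.').collect();
    go(&pat, &key)
}

fn main() {
    // A repository name contains no '.', so it is a single topic word
    // and '*' matches it. "some-org/some-repo" is a placeholder.
    assert!(topic_matches("pull_request.*", "pull_request.some-org/some-repo"));
    assert!(topic_matches("logs.*", "logs.42"));
    assert!(!topic_matches("pull_request.*", "issue_comment.created"));
    assert!(topic_matches("#", "anything.at.all"));
}
```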
+
+---
+
+## Message Persistence
+
+All published messages use `delivery_mode = 2` (persistent), which means
+messages survive RabbitMQ restarts:
+
+```rust
+BasicProperties::default()
+ .with_delivery_mode(2) // persistent
+```
+
+---
+
+## Prefetch / QoS
+
+Worker binaries configure `basic_qos` (prefetch count) to control how many
+messages are delivered to a consumer before it must acknowledge them:
+
+```rust
+let mut chan = WorkerChannel {
+ channel,
+ prefetch_count: 1, // Process one job at a time
+};
+```
+
+Setting `prefetch_count = 1` ensures fair dispatching across multiple worker
+instances and prevents a single slow worker from hoarding messages.
+
+---
+
+## Error Recovery
+
+### Message Processing Failures
+
+| Scenario | Action | Effect |
+|----------|--------|--------|
+| Decode error | `Ack` | Message discarded |
+| Processing error (retryable) | `NackRequeue` | Message requeued |
+| Processing error (permanent) | `NackDump` | Message discarded (dead-lettered if a DLX is configured) |
+| Processing success | `Ack` | Message removed |
+| Worker publish | `Publish` | New message to exchange |
+
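The same policy can be written as a pure decision function, which is how a worker typically chooses its return value. This is a hypothetical sketch: `Outcome` is not a tickborg type, and `worker::Action::Publish` is omitted because it carries a payload:

```rust
// Hypothetical sketch of the retry-policy table; Outcome is invented
// for illustration and Publish (which carries a payload) is left out.
#[derive(Debug, PartialEq)]
enum Action {
    Ack,
    NackRequeue,
    NackDump,
}

enum Outcome {
    DecodeError,
    RetryableError,
    PermanentError,
    Success,
}

fn action_for(outcome: &Outcome) -> Action {
    match outcome {
        // Undecodable messages are acked so they don't requeue forever.
        Outcome::DecodeError => Action::Ack,
        Outcome::RetryableError => Action::NackRequeue,
        Outcome::PermanentError => Action::NackDump,
        Outcome::Success => Action::Ack,
    }
}

fn main() {
    assert_eq!(action_for(&Outcome::DecodeError), Action::Ack);
    assert_eq!(action_for(&Outcome::RetryableError), Action::NackRequeue);
    assert_eq!(action_for(&Outcome::PermanentError), Action::NackDump);
}
```
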
+### Connection Recovery
+
+`lapin` surfaces a dropped TCP connection as errors on in-flight
+operations; it does not transparently reconnect. Tickborg binaries
+therefore treat connection loss as fatal: the consume loop returns, the
+process exits, and its process supervisor (systemd) restarts it, at
+which point the binary reconnects and re-declares its topology.
+
+---
+
+## Usage Example: Declaring a Full Stack
+
+A typical binary does:
+
+```rust
+#[tokio::main]
+async fn main() -> Result<(), Box<dyn std::error::Error>> {
+ tickborg::setup_log();
+ let cfg = tickborg::config::load();
+
+ // 1. Connect to RabbitMQ
+ let conn = easylapin::from_config(&cfg.rabbitmq).await?;
+ let mut chan = conn.create_channel().await?;
+
+ // 2. Declare exchange
+ chan.declare_exchange(ExchangeConfig {
+ exchange_name: "github-events".to_owned(),
+ exchange_type: ExchangeType::Topic,
+ durable: true,
+ ..Default::default()
+ }).await?;
+
+ // 3. Declare queue
+ chan.declare_queue(QueueConfig {
+ queue_name: "mass-rebuild-check-inputs".to_owned(),
+ durable: true,
+ ..Default::default()
+ }).await?;
+
+ // 4. Bind queue to exchange
+ chan.bind_queue(BindQueueConfig {
+ queue_name: "mass-rebuild-check-inputs".to_owned(),
+ exchange_name: "github-events".to_owned(),
+ routing_key: Some("pull_request.*".to_owned()),
+ ..Default::default()
+ }).await?;
+
+ // 5. Start consume loop
+ let worker = EvaluationFilterWorker::new(cfg.acl());
+ chan.consume(worker, ConsumeConfig {
+ queue: "mass-rebuild-check-inputs".to_owned(),
+ consumer_tag: format!("evaluation-filter-{}", cfg.identity),
+ ..Default::default()
+ }).await?;
+
+ Ok(())
+}
+```
diff --git a/docs/handbook/ofborg/architecture.md b/docs/handbook/ofborg/architecture.md
new file mode 100644
index 0000000000..69b02cc4db
--- /dev/null
+++ b/docs/handbook/ofborg/architecture.md
@@ -0,0 +1,814 @@
+# Tickborg — Architecture
+
+## Workspace Structure
+
+The tickborg codebase is organized as a Cargo workspace with two member crates:
+
+```toml
+# ofborg/Cargo.toml
+[workspace]
+members = [
+ "tickborg",
+ "tickborg-simple-build"
+]
+resolver = "2"
+
+[profile.release]
+debug = true
+```
+
+The `debug = true` in the release profile ensures that production binaries
+include debug symbols, making crash backtraces and profiling useful without
+sacrificing optimization.
+
+---
+
+## Crate: `tickborg`
+
+This is the main crate. It compiles into a library (`lib.rs`) and **11 binary
+targets** under `src/bin/`.
+
+### Library Root (`src/lib.rs`)
+
+```rust
+#![recursion_limit = "512"]
+#![allow(clippy::redundant_closure)]
+
+pub mod acl;
+pub mod asynccmd;
+pub mod buildtool;
+pub mod checkout;
+pub mod clone;
+pub mod commentparser;
+pub mod commitstatus;
+pub mod config;
+pub mod easyamqp;
+pub mod easylapin;
+pub mod evalchecker;
+pub mod files;
+pub mod ghevent;
+pub mod locks;
+pub mod message;
+pub mod notifyworker;
+pub mod stats;
+pub mod systems;
+pub mod tagger;
+pub mod tasks;
+pub mod test_scratch;
+pub mod worker;
+pub mod writetoline;
+```
+
+Additionally, a `tickborg` sub-module re-exports everything for convenient
+access:
+
+```rust
+pub mod tickborg {
+ pub use crate::acl;
+ pub use crate::asynccmd;
+ pub use crate::buildtool;
+ pub use crate::checkout;
+ pub use crate::clone;
+ pub use crate::commentparser;
+ // ... all modules re-exported ...
+
+ pub const VERSION: &str = env!("CARGO_PKG_VERSION");
+
+ pub fn partition_result<A, B>(results: Vec<Result<A, B>>) -> (Vec<A>, Vec<B>) {
+ let mut ok = Vec::new();
+ let mut err = Vec::new();
+ for result in results.into_iter() {
+ match result {
+ Ok(x) => ok.push(x),
+ Err(x) => err.push(x),
+ }
+ }
+ (ok, err)
+ }
+}
+```
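
`partition_result` is a convenience for splitting a batch of fallible operations into successes and failures so they can be handled as two groups. A self-contained usage sketch (the function body is repeated from `lib.rs` above so it compiles standalone):

```rust
// Repeated from the lib.rs excerpt above so this sketch is standalone.
pub fn partition_result<A, B>(results: Vec<Result<A, B>>) -> (Vec<A>, Vec<B>) {
    let mut ok = Vec::new();
    let mut err = Vec::new();
    for result in results.into_iter() {
        match result {
            Ok(x) => ok.push(x),
            Err(x) => err.push(x),
        }
    }
    (ok, err)
}

fn main() {
    // Parse a batch of strings; "two" fails while the rest succeed.
    let parsed: Vec<Result<i32, _>> = vec!["1", "two", "3"]
        .into_iter()
        .map(|s| s.parse::<i32>())
        .collect();

    let (ok, err) = partition_result(parsed);
    assert_eq!(ok, vec![1, 3]);
    assert_eq!(err.len(), 1);
}
```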
+
+### Logging Initialization
+
+```rust
+pub fn setup_log() {
+ let filter_layer = EnvFilter::try_from_default_env()
+ .or_else(|_| EnvFilter::try_new("info"))
+ .unwrap();
+
+ let log_json = env::var("RUST_LOG_JSON").is_ok_and(|s| s == "1");
+
+ if log_json {
+ let fmt_layer = tracing_subscriber::fmt::layer().json();
+ tracing_subscriber::registry()
+ .with(filter_layer)
+ .with(fmt_layer)
+ .init();
+ } else {
+ let fmt_layer = tracing_subscriber::fmt::layer();
+ tracing_subscriber::registry()
+ .with(filter_layer)
+ .with(fmt_layer)
+ .init();
+ }
+}
+```
+
+Every binary calls `tickborg::setup_log()` as its first action. The environment
+variable `RUST_LOG` controls the filter level. Setting `RUST_LOG_JSON=1`
+switches to JSON-structured output for log aggregation in production.
+
+---
+
+## Module Hierarchy
+
+### Core Worker Pattern
+
+```
+worker.rs
+├── SimpleWorker trait
+├── Action enum (Ack, NackRequeue, NackDump, Publish)
+├── QueueMsg struct
+└── publish_serde_action() helper
+
+notifyworker.rs
+├── SimpleNotifyWorker trait
+├── NotificationReceiver trait
+└── DummyNotificationReceiver (for testing)
+```
+
+### AMQP Layer
+
+```
+easyamqp.rs
+├── ConsumeConfig struct
+├── BindQueueConfig struct
+├── ExchangeConfig struct
+├── QueueConfig struct
+├── ExchangeType enum (Topic, Headers, Fanout, Direct, Custom)
+├── ChannelExt trait
+└── ConsumerExt trait
+
+easylapin.rs
+├── from_config() → Connection
+├── impl ChannelExt for Channel
+├── impl ConsumerExt for Channel
+├── WorkerChannel (with prefetch=1)
+├── NotifyChannel (with prefetch=1, for SimpleNotifyWorker)
+├── ChannelNotificationReceiver
+└── action_deliver() (Ack/Nack/Publish dispatch)
+```
+
+### Configuration
+
+```
+config.rs
+├── Config (top-level)
+├── GithubWebhookConfig
+├── LogApiConfig
+├── EvaluationFilter
+├── GithubCommentFilter
+├── GithubCommentPoster
+├── MassRebuilder
+├── Builder
+├── PushFilter
+├── LogMessageCollector
+├── Stats
+├── RabbitMqConfig
+├── BuildConfig
+├── GithubAppConfig
+├── RunnerConfig
+├── CheckoutConfig
+├── GithubAppVendingMachine
+└── load() → Config
+```
+
+### Message Types
+
+```
+message/
+├── mod.rs (re-exports)
+├── common.rs
+│ ├── Repo
+│ ├── Pr
+│ └── PushTrigger
+├── buildjob.rs
+│ ├── BuildJob
+│ ├── QueuedBuildJobs
+│ └── Actions
+├── buildresult.rs
+│ ├── BuildStatus enum
+│ ├── BuildResult enum (V1, Legacy)
+│ ├── LegacyBuildResult
+│ └── V1Tag
+├── buildlogmsg.rs
+│ ├── BuildLogMsg
+│ └── BuildLogStart
+└── evaluationjob.rs
+ ├── EvaluationJob
+ └── Actions
+```
+
+### GitHub Event Types
+
+```
+ghevent/
+├── mod.rs (re-exports)
+├── common.rs
+│ ├── Comment
+│ ├── User
+│ ├── Repository
+│ ├── Issue
+│ └── GenericWebhook
+├── issuecomment.rs
+│ ├── IssueComment
+│ └── IssueCommentAction enum
+├── pullrequestevent.rs
+│ ├── PullRequestEvent
+│ ├── PullRequest
+│ ├── PullRequestRef
+│ ├── PullRequestState enum
+│ ├── PullRequestAction enum
+│ ├── PullRequestChanges
+│ └── BaseChange, ChangeWas
+└── pushevent.rs
+ ├── PushEvent
+ ├── Pusher
+ └── HeadCommit
+```
+
+### Task Implementations
+
+```
+tasks/
+├── mod.rs
+├── build.rs
+│ ├── BuildWorker (SimpleNotifyWorker)
+│ └── JobActions (log streaming helper)
+├── eval/
+│ ├── mod.rs
+│ │ ├── EvaluationStrategy trait
+│ │ ├── EvaluationComplete
+│ │ └── Error enum
+│ └── monorepo.rs
+│ ├── MonorepoStrategy
+│ ├── label_from_title()
+│ └── parse_commit_scopes()
+├── evaluate.rs
+│ ├── EvaluationWorker (SimpleWorker)
+│ ├── OneEval (per-job evaluation context)
+│ └── update_labels()
+├── evaluationfilter.rs
+│ └── EvaluationFilterWorker (SimpleWorker)
+├── githubcommentfilter.rs
+│ └── GitHubCommentWorker (SimpleWorker)
+├── githubcommentposter.rs
+│ ├── GitHubCommentPoster (SimpleWorker)
+│ ├── PostableEvent enum
+│ ├── job_to_check()
+│ └── result_to_check()
+├── log_message_collector.rs
+│ ├── LogMessageCollector (SimpleWorker)
+│ ├── LogFrom
+│ └── LogMessage
+├── pushfilter.rs
+│ └── PushFilterWorker (SimpleWorker)
+└── statscollector.rs
+ └── StatCollectorWorker (SimpleWorker)
+```
+
+### Utility Modules
+
+```
+acl.rs — Access control (repos, trusted users, arch mapping)
+asynccmd.rs — Async subprocess execution with streaming output
+buildtool.rs — Build system detection and execution
+checkout.rs — Git checkout caching (CachedCloner, CachedProject)
+clone.rs — Git clone trait (GitClonable, file locking)
+commentparser.rs — @tickbot command parser (nom combinators)
+commitstatus.rs — GitHub commit status abstraction
+evalchecker.rs — Generic command execution checker
+files.rs — File utility functions
+locks.rs — File-based locking (fs2)
+stats.rs — Metrics events and RabbitMQ publisher
+systems.rs — Platform/architecture enum
+tagger.rs — PR label generation from changed files
+writetoline.rs — Random-access line writer for log files
+```
+
+---
+
+## Binary Targets
+
+### `github-webhook-receiver`
+
+**File:** `src/bin/github-webhook-receiver.rs`
+
+- Starts an HTTP server using `hyper 1.0`.
+- Validates `X-Hub-Signature-256` using HMAC-SHA256.
+- Reads the `X-Github-Event` header to determine the event type.
+- Parses the body as `GenericWebhook` to extract the repository name.
+- Publishes to the `github-events` topic exchange with routing key
+ `{event_type}.{owner}/{repo}`.
+- Declares queues: `build-inputs`, `github-events-unknown`,
+ `mass-rebuild-check-inputs`, `push-build-inputs`.
+
+### `evaluation-filter`
+
+**File:** `src/bin/evaluation-filter.rs`
+
+- Consumes from `mass-rebuild-check-inputs`.
+- Deserializes `PullRequestEvent`.
+- Checks if the repo is eligible via ACL.
+- Filters by action (Opened, Synchronize, Reopened, Edited with base change).
+- Produces `EvaluationJob` to `mass-rebuild-check-jobs`.
+
+### `github-comment-filter`
+
+**File:** `src/bin/github-comment-filter.rs`
+
+- Consumes from `build-inputs`.
+- Deserializes `IssueComment`.
+- Parses the comment body for `@tickbot` commands.
+- Looks up the PR via GitHub API to get the head SHA.
+- Produces `BuildJob` messages to architecture-specific queues.
+- Also produces `QueuedBuildJobs` to `build-results` for the comment poster.
+
+### `github-comment-poster`
+
+**File:** `src/bin/github-comment-poster.rs`
+
+- Consumes from `build-results`.
+- Accepts both `QueuedBuildJobs` (build queued) and `BuildResult` (build
+ finished).
+- Creates GitHub Check Runs via the Checks API.
+- Maps `BuildStatus` to `Conclusion` (Success, Failure, Skipped, Neutral).
+
+### `mass-rebuilder`
+
+**File:** `src/bin/mass-rebuilder.rs`
+
+- Consumes from `mass-rebuild-check-jobs`.
+- Uses `EvaluationWorker` with `MonorepoStrategy`.
+- Clones the repository, checks out the PR, detects changed files.
+- Uses build system detection to discover affected projects.
+- Creates `BuildJob` messages for each affected project/architecture.
+- Updates GitHub commit statuses throughout the process.
+
+### `builder`
+
+**File:** `src/bin/builder.rs`
+
+- Consumes from `build-inputs-{system}` (e.g., `build-inputs-x86_64-linux`).
+- Creates one channel per configured system.
+- Uses `BuildWorker` (a `SimpleNotifyWorker`) to execute builds.
+- Streams build log lines to the `logs` exchange in real-time.
+- Publishes `BuildResult` to `build-results` when done.
+
+### `push-filter`
+
+**File:** `src/bin/push-filter.rs`
+
+- Consumes from `push-build-inputs`.
+- Deserializes `PushEvent`.
+- Skips tag pushes, branch deletions, and zero-SHA events.
+- Detects changed projects from the push event's commit info.
+- Falls back to `default_attrs` when no projects are detected.
+- Creates `BuildJob::new_push()` and schedules on primary platforms.
+
+### `log-message-collector`
+
+**File:** `src/bin/log-message-collector.rs`
+
+- Consumes from `logs` (ephemeral queue bound to the `logs` exchange).
+- Writes build log lines to `{logs_path}/{routing_key}/{attempt_id}`.
+- Uses `LineWriter` for random-access line writing.
+- Also writes `.metadata.json` and `.result.json` files.
+
+### `logapi`
+
+**File:** `src/bin/logapi.rs`
+
+- HTTP server that serves build log metadata.
+- Endpoint: `GET /logs/{routing_key}`.
+- Returns JSON with attempt IDs, metadata, results, and log URLs.
+- Path traversal prevention via `canonicalize()` and `validate_path_segment()`.
+
+### `stats`
+
+**File:** `src/bin/stats.rs`
+
+- Consumes from `stats-events` (bound to the `stats` fanout exchange).
+- Collects `EventMessage` payloads.
+- Exposes Prometheus-compatible metrics on `0.0.0.0:9898`.
+- Runs an HTTP server in a separate thread.
+
+### `build-faker`
+
+**File:** `src/bin/build-faker.rs`
+
+- Development tool that publishes fake `BuildJob` messages.
+- Useful for testing the builder without a real GitHub webhook.
+
+---
+
+## The Worker Pattern in Detail
+
+### `SimpleWorker`
+
+```rust
+pub trait SimpleWorker: Send {
+ type J: Send;
+
+ fn consumer(&mut self, job: &Self::J) -> impl Future<Output = Actions>;
+
+ fn msg_to_job(
+ &mut self,
+ method: &str,
+ headers: &Option<String>,
+ body: &[u8],
+ ) -> impl Future<Output = Result<Self::J, String>>;
+}
+```
+
+Workers that implement `SimpleWorker` receive a message, process it, and return
+a `Vec<Action>`. The actions are applied in order:
+
+```rust
+pub enum Action {
+ Ack, // Acknowledge message (remove from queue)
+ NackRequeue, // Negative ack, requeue (retry later)
+ NackDump, // Negative ack, discard
+ Publish(Arc<QueueMsg>), // Publish a new message
+}
+```
+
+The `ConsumerExt` implementation on `Channel` drives the loop:
+
+```rust
+impl<'a, W: SimpleWorker + 'a> ConsumerExt<'a, W> for Channel {
+ async fn consume(self, mut worker: W, config: ConsumeConfig)
+ -> Result<Self::Handle, Self::Error>
+ {
+ let mut consumer = self.basic_consume(/* ... */).await?;
+ Ok(Box::pin(async move {
+ while let Some(Ok(deliver)) = consumer.next().await {
+ let job = worker.msg_to_job(/* ... */).await.expect("...");
+ for action in worker.consumer(&job).await {
+ action_deliver(&self, &deliver, action).await.expect("...");
+ }
+ }
+ }))
+ }
+}
+```
+
+### `SimpleNotifyWorker`
+
+```rust
+#[async_trait]
+pub trait SimpleNotifyWorker {
+ type J;
+
+ async fn consumer(
+ &self,
+ job: Self::J,
+ notifier: Arc<dyn NotificationReceiver + Send + Sync>,
+ );
+
+ fn msg_to_job(
+ &self,
+ routing_key: &str,
+ content_type: &Option<String>,
+ body: &[u8],
+ ) -> Result<Self::J, String>;
+}
+```
+
+The key difference: instead of returning `Actions`, the worker receives a
+`NotificationReceiver` that it can `tell()` at any point during processing.
+This enables streaming log lines back to RabbitMQ while a build is still
+running.
+
+```rust
+#[async_trait]
+pub trait NotificationReceiver {
+ async fn tell(&self, action: Action);
+}
+```
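
For tests, a receiver that merely records what it was told is enough, which is the role of the `DummyNotificationReceiver` mentioned in the module overview. Below is a simplified synchronous sketch of the same idea; `RecordingReceiver` is illustrative, and `async`/`Arc` are dropped for brevity:

```rust
// Synchronous, illustrative sketch of a test-double receiver; the real
// trait is async and the real test type is DummyNotificationReceiver.
#[derive(Debug, Clone, PartialEq)]
enum Action {
    Ack,
    NackRequeue,
    NackDump,
}

trait NotificationReceiver {
    fn tell(&mut self, action: Action);
}

#[derive(Default)]
struct RecordingReceiver {
    actions: Vec<Action>,
}

impl NotificationReceiver for RecordingReceiver {
    fn tell(&mut self, action: Action) {
        // Record instead of touching AMQP, so tests can assert on it.
        self.actions.push(action);
    }
}

fn main() {
    let mut rx = RecordingReceiver::default();
    rx.tell(Action::NackRequeue);
    rx.tell(Action::Ack);
    assert_eq!(rx.actions, vec![Action::NackRequeue, Action::Ack]);
}
```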
+
+The `ChannelNotificationReceiver` bridges this to a real AMQP channel:
+
+```rust
+pub struct ChannelNotificationReceiver {
+ channel: lapin::Channel,
+ deliver: Delivery,
+}
+
+#[async_trait]
+impl NotificationReceiver for ChannelNotificationReceiver {
+ async fn tell(&self, action: Action) {
+ action_deliver(&self.channel, &self.deliver, action)
+ .await
+ .expect("action deliver failure");
+ }
+}
+```
+
+### Channel Variants
+
+| Wrapper | Prefetch | Use Case |
+|---------|----------|----------|
+| `Channel` (raw) | None (unlimited) | Single-instance services, or consumers that tolerate unbounded prefetch |
+| `WorkerChannel(Channel)` | 1 | Multi-instance workers (fair dispatch) |
+| `NotifyChannel(Channel)` | 1 | Long-running workers with streaming notifications |
+
+---
+
+## Message Flow Through the System
+
+### PR Opened/Synchronized
+
+```
+GitHub ──POST──► webhook-receiver
+ │
+ ▼ publish to github-events
+ │ routing_key: pull_request.{owner}/{repo}
+ │
+ ┌──────────┴──────────┐
+ ▼ ▼
+ evaluation-filter (other consumers)
+ │
+ ▼ publish to mass-rebuild-check-jobs
+ │
+ mass-rebuilder
+ │
+ ├─► clone repo
+ ├─► checkout PR branch
+ ├─► detect changed files
+ ├─► map to projects
+ ├─► create BuildJob per project/arch
+ │
+ ├─► publish BuildJob to build-inputs-{system}
+ ├─► publish QueuedBuildJobs to build-results
+ └─► update commit status
+ │
+ ┌──────────┴──────────┐
+ ▼ ▼
+ builder comment-poster
+ │ │
+ ├─► clone & merge ├─► create check run (Queued)
+ ├─► build project │
+ ├─► stream logs ──► │
+ │ logs exchange │
+ │ │ │
+ │ log-collector │
+ │ │
+ ├─► publish result │
+ │ to build-results │
+ │ │ │
+ │ └──────────►├─► create check run (Completed)
+ └─► Ack └─► Ack
+```
+
+### Comment Command (`@tickbot build meshmc`)
+
+```
+GitHub ──POST──► webhook-receiver
+ │
+ ▼ publish to github-events
+ │ routing_key: issue_comment.{owner}/{repo}
+ │
+ comment-filter
+ │
+ ├─► parse @tickbot commands
+ ├─► lookup PR via GitHub API
+ ├─► determine build architectures from ACL
+ │
+ ├─► publish BuildJob to build-inputs-{system}
+ ├─► publish QueuedBuildJobs to build-results
+ └─► Ack
+```
+
+### Push to Branch
+
+```
+GitHub ──POST──► webhook-receiver
+ │
+ ▼ publish to github-events
+ │ routing_key: push.{owner}/{repo}
+ │
+ push-filter
+ │
+ ├─► check if branch push (not tag/delete)
+ ├─► detect changed projects from commit info
+ ├─► fallback to default_attrs if needed
+ │
+ ├─► create BuildJob::new_push()
+ ├─► publish to build-inputs-{system} (primary)
+ ├─► publish QueuedBuildJobs to build-results
+ └─► Ack
+```
+
+---
+
+## Concurrency Model
+
+Tickborg uses **Tokio** as its async runtime with multi-threaded scheduling:
+
+```rust
+#[tokio::main]
+async fn main() -> Result<(), Box<dyn Error>> {
+ // ...
+}
+```
+
+Within the builder, multiple systems can be served simultaneously:
+
+```rust
+// builder.rs — main()
+let mut handles: Vec<Pin<Box<dyn Future<Output = ()> + Send>>> = Vec::new();
+for system in &cfg.build.system {
+ handles.push(self::create_handle(&conn, &cfg, system.to_string()).await?);
+}
+future::join_all(handles).await;
+```
+
+Each handle is a `Pin<Box<dyn Future>>` that runs a consumer loop for one
+architecture. The `basic_qos(1)` prefetch setting ensures that each builder
+instance only works on one job at a time from each queue, preventing resource
+starvation.
+
+Build subprocesses themselves are spawned via `std::process::Command` and
+monitored through the `AsyncCmd` abstraction which uses OS threads for I/O
+multiplexing:
+
+```rust
+pub struct AsyncCmd {
+ command: Command,
+}
+
+pub struct SpawnedAsyncCmd {
+ waiter: JoinHandle<Option<Result<ExitStatus, io::Error>>>,
+ rx: Receiver<String>,
+}
+```
+
+---
+
+## Git Operations
+
+### CachedCloner
+
+```rust
+pub struct CachedCloner {
+ root: PathBuf,
+}
+
+impl CachedCloner {
+ pub fn project(&self, name: &str, clone_url: String) -> CachedProject;
+}
+```
+
+The cached cloner maintains a local mirror of repositories under:
+```
+{root}/repo/{md5(name)}/clone — bare clone (shared by all checkouts)
+{root}/repo/{md5(name)}/{category}/ — working checkouts
+```
+
+### CachedProjectCo (Checkout)
+
+```rust
+pub struct CachedProjectCo {
+ root: PathBuf,
+ id: String,
+ clone_url: String,
+ local_reference: PathBuf,
+}
+
+impl CachedProjectCo {
+ pub fn checkout_origin_ref(&self, git_ref: &OsStr) -> Result<String, Error>;
+ pub fn checkout_ref(&self, git_ref: &OsStr) -> Result<String, Error>;
+ pub fn fetch_pr(&self, pr_id: u64) -> Result<(), Error>;
+ pub fn commit_exists(&self, commit: &OsStr) -> bool;
+ pub fn merge_commit(&self, commit: &OsStr) -> Result<(), Error>;
+ pub fn commit_messages_from_head(&self, commit: &str) -> Result<Vec<String>, Error>;
+ pub fn files_changed_from_head(&self, commit: &str) -> Result<Vec<String>, Error>;
+}
+```
+
+All git operations use file-based locking via `fs2::FileExt::lock_exclusive()`
+to prevent concurrent access to the same checkout directory.
+
+---
+
+## File Locking
+
+Two locking mechanisms exist:
+
+### `clone.rs` — Git-level locks
+
+```rust
+pub trait GitClonable {
+ fn lock_path(&self) -> PathBuf;
+ fn lock(&self) -> Result<Lock, Error>;
+ fn clone_repo(&self) -> Result<(), Error>;
+ fn fetch_repo(&self) -> Result<(), Error>;
+}
+```
+
+### `locks.rs` — Generic file locks
+
+```rust
+pub trait Lockable {
+ fn lock_path(&self) -> PathBuf;
+ fn lock(&self) -> Result<Lock, Error>;
+}
+
+pub struct Lock {
+ lock: Option<fs::File>,
+}
+
+impl Lock {
+ pub fn unlock(&mut self) { self.lock = None }
+}
+```
+
+Both use `fs2`'s `lock_exclusive()` which maps to `flock(2)` on Unix.
+
+---
+
+## Error Handling Strategy
+
+### CommitStatusError
+
+```rust
+pub enum CommitStatusError {
+ ExpiredCreds(hubcaps::Error),
+ MissingSha(hubcaps::Error),
+ Error(hubcaps::Error),
+ InternalError(String),
+}
+```
+
+This is used to determine retry behavior:
+- `ExpiredCreds` → `NackRequeue` (retry after token refresh)
+- `MissingSha` → `Ack` (commit was force-pushed away, skip)
+- `InternalError` → `Ack` + label `tickborg-internal-error`
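+
+The mapping above can be sketched as a single match. This is an illustration,
+not the actual tickborg implementation: the enums are simplified to the
+variants and actions described here, and `AckWithLabel` is a hypothetical
+stand-in for "Ack plus apply a label":
+
+```rust
+#[derive(Debug, PartialEq)]
+enum StatusError { ExpiredCreds, MissingSha, InternalError }
+
+#[derive(Debug, PartialEq)]
+enum Action { Ack, NackRequeue, AckWithLabel(&'static str) }
+
+fn action_for(err: &StatusError) -> Action {
+    match err {
+        // A token refresh will let a retry succeed, so requeue.
+        StatusError::ExpiredCreds => Action::NackRequeue,
+        // The commit was force-pushed away; nothing left to report.
+        StatusError::MissingSha => Action::Ack,
+        // Don't retry; surface the failure via a label instead.
+        StatusError::InternalError => Action::AckWithLabel("tickborg-internal-error"),
+    }
+}
+
+fn main() {
+    assert_eq!(action_for(&StatusError::ExpiredCreds), Action::NackRequeue);
+    assert_eq!(action_for(&StatusError::MissingSha), Action::Ack);
+}
+```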
+
+### EvalWorkerError
+
+```rust
+enum EvalWorkerError {
+ EvalError(eval::Error),
+ CommitStatusWrite(CommitStatusError),
+}
+```
+
+### eval::Error
+
+```rust
+pub enum Error {
+ CommitStatusWrite(CommitStatusError),
+ Fail(String),
+}
+```
+
+---
+
+## Testing Strategy
+
+- Unit tests are embedded in modules using `#[cfg(test)]`.
+- Test fixtures (JSON event payloads) are stored in `test-srcs/events/`.
+- Tests use `include_str!()` to load test data at compile time.
+- The `DummyNotificationReceiver` captures actions for assertion:
+
+```rust
+#[derive(Default)]
+pub struct DummyNotificationReceiver {
+ pub actions: parking_lot::Mutex<Vec<Action>>,
+}
+```
+
+Example test from `evaluationfilter.rs`:
+
+```rust
+#[tokio::test]
+async fn changed_base() {
+ let data = include_str!("../../test-srcs/events/pr-changed-base.json");
+ let job: PullRequestEvent = serde_json::from_str(data).expect("...");
+
+ let mut worker = EvaluationFilterWorker::new(
+ acl::Acl::new(vec!["project-tick/Project-Tick".to_owned()], Some(vec![]))
+ );
+
+ assert_eq!(worker.consumer(&job).await, vec![
+ worker::publish_serde_action(
+ None,
+ Some("mass-rebuild-check-jobs".to_owned()),
+ &evaluationjob::EvaluationJob { /* ... */ }
+ ),
+ worker::Action::Ack,
+ ]);
+}
+```
diff --git a/docs/handbook/ofborg/build-executor.md b/docs/handbook/ofborg/build-executor.md
new file mode 100644
index 0000000000..8b0cbcdac8
--- /dev/null
+++ b/docs/handbook/ofborg/build-executor.md
@@ -0,0 +1,657 @@
+# Tickborg — Build Executor
+
+## Overview
+
+The **build executor** is the component responsible for actually running builds
+of sub-projects in the Project Tick monorepo. Unlike the original ofborg which
+used `nix-build` exclusively, tickborg's build executor supports multiple build
+systems: CMake, Meson, Autotools, Cargo, Gradle, Make, and custom commands.
+
+The build executor is invoked by the **builder** binary
+(`tickborg/src/bin/builder.rs`) which consumes `BuildJob` messages from
+architecture-specific queues.
+
+---
+
+## Key Source Files
+
+| File | Purpose |
+|------|---------|
+| `tickborg/src/buildtool.rs` | Build system abstraction, `BuildExecutor`, `ProjectBuildConfig` |
+| `tickborg/src/tasks/build.rs` | `BuildWorker`, `JobActions` — the task implementation |
+| `tickborg/src/bin/builder.rs` | Binary entry point |
+| `tickborg/src/asynccmd.rs` | Async subprocess execution |
+
+---
+
+## Build System Abstraction
+
+### `BuildSystem` Enum
+
+```rust
+// tickborg/src/buildtool.rs
+#[derive(Clone, Debug, PartialEq, Eq, Serialize, Deserialize)]
+pub enum BuildSystem {
+ CMake,
+ Meson,
+ Autotools,
+ Cargo,
+ Gradle,
+ Make,
+ Custom { command: String },
+}
+```
+
+Each variant corresponds to a well-known build system with a standard
+invocation pattern.
+
+### `ProjectBuildConfig`
+
+```rust
+#[derive(Clone, Debug, Serialize, Deserialize)]
+pub struct ProjectBuildConfig {
+ pub name: String,
+ pub path: String,
+ pub build_system: BuildSystem,
+ pub build_timeout_seconds: u16,
+ pub configure_args: Vec<String>,
+ pub build_args: Vec<String>,
+ pub test_command: Option<Vec<String>>,
+}
+```
+
+Each sub-project in the monorepo has a `ProjectBuildConfig` that specifies:
+- **name**: Human-readable project name (e.g., `"meshmc"`, `"mnv"`)
+- **path**: Relative path within the repository
+- **build_system**: Which build system to use
+- **build_timeout_seconds**: Maximum time allowed for the build
+- **configure_args**: Arguments passed to the configure step
+- **build_args**: Arguments passed to the build step
+- **test_command**: Custom test command (overrides the default for the build system)
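+
+Because the struct derives `Serialize`/`Deserialize`, a config entry
+round-trips as JSON. A hypothetical example for a CMake project (every field
+value below is illustrative, not taken from the real project list):
+
+```json
+{
+  "name": "meshmc",
+  "path": "meshmc",
+  "build_system": "CMake",
+  "build_timeout_seconds": 1800,
+  "configure_args": ["-DCMAKE_BUILD_TYPE=Release"],
+  "build_args": ["--parallel"],
+  "test_command": null
+}
+```
+
+With serde's default externally tagged enum representation, unit variants
+like `BuildSystem::CMake` serialize as the plain string `"CMake"`, while
+`Custom { command }` would serialize as a tagged object.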
+
+### `BuildExecutor`
+
+```rust
+#[derive(Clone, Debug)]
+pub struct BuildExecutor {
+ pub build_timeout: u16,
+}
+
+impl BuildExecutor {
+ pub fn new(build_timeout: u16) -> Self {
+ Self { build_timeout }
+ }
+}
+```
+
+The `BuildExecutor` is created from the configuration with a minimum timeout
+of 300 seconds:
+
+```rust
+// config.rs
+impl Config {
+ pub fn build_executor(&self) -> BuildExecutor {
+ if self.build.build_timeout_seconds < 300 {
+ error!(?self.build.build_timeout_seconds,
+ "Please set build_timeout_seconds to at least 300");
+ panic!();
+ }
+ BuildExecutor::new(self.build.build_timeout_seconds)
+ }
+}
+```
+
+---
+
+## Build Commands Per System
+
+### CMake
+
+```rust
+fn build_command(&self, project_dir: &Path, config: &ProjectBuildConfig) -> Command {
+ let build_dir = project_dir.join("build");
+ let mut cmd = Command::new("cmake");
+ cmd.arg("--build").arg(&build_dir);
+ cmd.args(["--config", "Release"]);
+ for arg in &config.build_args { cmd.arg(arg); }
+ cmd.current_dir(project_dir);
+ cmd
+}
+```
+
+Test command (default):
+```rust
+let mut cmd = Command::new("ctest");
+cmd.arg("--test-dir").arg("build");
+cmd.args(["--output-on-failure"]);
+```
+
+### Meson
+
+```rust
+let mut cmd = Command::new("meson");
+cmd.arg("compile");
+cmd.args(["-C", "build"]);
+```
+
+Test:
+```rust
+let mut cmd = Command::new("meson");
+cmd.arg("test").args(["-C", "build"]);
+```
+
+### Autotools / Make
+
+```rust
+let mut cmd = Command::new("make");
+cmd.args(["-j", &num_cpus().to_string()]);
+```
+
+Test:
+```rust
+let mut cmd = Command::new("make");
+cmd.arg("check");
+```
+
+### Cargo
+
+```rust
+let mut cmd = Command::new("cargo");
+cmd.arg("build").arg("--release");
+```
+
+Test:
+```rust
+let mut cmd = Command::new("cargo");
+cmd.arg("test");
+```
+
+### Gradle
+
+```rust
+let gradlew = project_dir.join("gradlew");
+let prog = if gradlew.exists() {
+ gradlew.to_string_lossy().to_string()
+} else {
+ "gradle".to_string()
+};
+let mut cmd = Command::new(prog);
+cmd.arg("build");
+```
+
+Gradle prefers the wrapper (`gradlew`) if present.
+
+### Custom
+
+```rust
+let mut cmd = Command::new("sh");
+cmd.args(["-c", command]);
+```
+
+---
+
+## Build Execution Methods
+
+### Synchronous Build
+
+```rust
+impl BuildExecutor {
+ pub fn build_project(
+ &self, project_root: &Path, config: &ProjectBuildConfig,
+ ) -> Result<fs::File, fs::File> {
+ let project_dir = project_root.join(&config.path);
+ let cmd = self.build_command(&project_dir, config);
+ self.run(cmd, true)
+ }
+}
+```
+
+Returns `Ok(File)` with stdout/stderr on success, `Err(File)` on failure.
+The `File` contains the captured output.
+
+### Asynchronous Build
+
+```rust
+impl BuildExecutor {
+ pub fn build_project_async(
+ &self, project_root: &Path, config: &ProjectBuildConfig,
+ ) -> SpawnedAsyncCmd {
+ let project_dir = project_root.join(&config.path);
+ let cmd = self.build_command(&project_dir, config);
+ AsyncCmd::new(cmd).spawn()
+ }
+}
+```
+
+Returns a `SpawnedAsyncCmd` that allows streaming output line-by-line.
+
+### Test Execution
+
+```rust
+impl BuildExecutor {
+ pub fn test_project(
+ &self, project_root: &Path, config: &ProjectBuildConfig,
+ ) -> Result<fs::File, fs::File> {
+ let project_dir = project_root.join(&config.path);
+ let cmd = self.test_command(&project_dir, config);
+ self.run(cmd, true)
+ }
+}
+```
+
+If `config.test_command` is set, it is used directly. Otherwise, the default
+test command for the build system is used.
+
+---
+
+## Async Command Execution (`asynccmd.rs`)
+
+The `AsyncCmd` abstraction wraps `std::process::Command` to provide:
+- Non-blocking output streaming via channels
+- Separate stderr/stdout capture
+- Exit status monitoring
+
+```rust
+pub struct AsyncCmd {
+ command: Command,
+}
+
+pub struct SpawnedAsyncCmd {
+ waiter: JoinHandle<Option<Result<ExitStatus, io::Error>>>,
+ rx: Receiver<String>,
+}
+```
+
+### Spawning
+
+```rust
+impl AsyncCmd {
+ pub fn new(cmd: Command) -> AsyncCmd {
+ AsyncCmd { command: cmd }
+ }
+
+ pub fn spawn(mut self) -> SpawnedAsyncCmd {
+ let mut child = self.command
+ .stdin(Stdio::null())
+ .stderr(Stdio::piped())
+ .stdout(Stdio::piped())
+ .spawn()
+ .unwrap();
+
+ // Sets up channels and monitoring threads
+ // ...
+ }
+}
+```
+
+The spawn implementation:
+1. Creates a `sync_channel` for output lines (buffer size: 30).
+2. Spawns a reader thread for stdout.
+3. Spawns a reader thread for stderr.
+4. Spawns a waiter thread that joins the stdout and stderr reader threads and then waits on the child process.
+5. Returns a `SpawnedAsyncCmd` whose `rx` receiver yields lines as they arrive.
+
+```rust
+fn reader_tx<R: 'static + Read + Send>(
+ read: R, tx: SyncSender<String>,
+) -> thread::JoinHandle<()> {
+ let read = BufReader::new(read);
+ thread::spawn(move || {
+ for line in read.lines() {
+ let to_send = match line {
+ Ok(line) => line,
+ Err(e) => {
+ error!("Error reading data in reader_tx: {:?}", e);
+ "Non-UTF8 data omitted from the log.".to_owned()
+ }
+ };
+ if let Err(e) = tx.send(to_send) {
+ error!("Failed to send log line: {:?}", e);
+ }
+ }
+ })
+}
+```
+
+The channel buffer size is intentionally small (30) to apply backpressure:
+
+```rust
+const OUT_CHANNEL_BUFFER_SIZE: usize = 30;
+```
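+
+With `std::sync::mpsc::sync_channel`, a sender blocks once the buffer is
+full, so a slow consumer throttles the reader threads instead of letting log
+lines pile up in memory. A toy illustration of that behavior (buffer shrunk
+to 2 for brevity; not tickborg code):
+
+```rust
+use std::sync::mpsc::sync_channel;
+use std::thread;
+
+fn main() {
+    let (tx, rx) = sync_channel::<String>(2);
+    let producer = thread::spawn(move || {
+        for i in 0..5 {
+            // Blocks whenever 2 lines are already buffered.
+            tx.send(format!("line {i}")).unwrap();
+        }
+        // Dropping `tx` here closes the channel, ending `rx.iter()`.
+    });
+    let lines: Vec<String> = rx.iter().collect();
+    producer.join().unwrap();
+    assert_eq!(lines.len(), 5);
+}
+```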
+
+---
+
+## The Builder Binary
+
+### Entry Point
+
+```rust
+// src/bin/builder.rs
+#[tokio::main]
+async fn main() -> Result<(), Box<dyn Error>> {
+ tickborg::setup_log();
+
+ let arg = env::args().nth(1).unwrap_or_else(|| panic!("usage: ..."));
+ let cfg = config::load(arg.as_ref());
+
+    let conn = easylapin::from_config(&cfg.rabbitmq).await?;
+ let mut handles: Vec<Pin<Box<dyn Future<Output = ()> + Send>>> = Vec::new();
+
+ for system in &cfg.build.system {
+ handles.push(create_handle(&conn, &cfg, system.to_string()).await?);
+ }
+
+    future::join_all(handles).await;
+    Ok(())
+}
+```
+
+The builder creates one consumer handle per configured system. This allows a
+single builder process to serve multiple architectures (e.g., `x86_64-linux`
+and `aarch64-linux`).
+
+### Channel Setup
+
+```rust
+async fn create_handle(
+ conn: &lapin::Connection, cfg: &config::Config, system: String,
+) -> Result<Pin<Box<dyn Future<Output = ()> + Send>>, Box<dyn Error>> {
+ let mut chan = conn.create_channel().await?;
+ let cloner = checkout::cached_cloner(Path::new(&cfg.checkout.root));
+ let build_executor = cfg.build_executor();
+
+ // Declare build-jobs exchange (Fanout)
+ chan.declare_exchange(/* build-jobs, Fanout */);
+
+ // Declare and bind the system-specific queue
+ let queue_name = format!("build-inputs-{system}");
+ chan.declare_queue(/* queue_name, durable */);
+ chan.bind_queue(/* queue_name ← build-jobs */);
+
+ // Start consuming
+ let handle = easylapin::NotifyChannel(chan).consume(
+ tasks::build::BuildWorker::new(
+ cloner, build_executor, system, cfg.runner.identity.clone()
+ ),
+ easyamqp::ConsumeConfig {
+ queue: queue_name,
+ consumer_tag: format!("{}-builder", cfg.whoami()),
+ no_local: false, no_ack: false, no_wait: false, exclusive: false,
+ },
+ ).await?;
+
+ Ok(handle)
+}
+```
+
+### Development Mode (`build_all_jobs`)
+
+When `runner.build_all_jobs` is set to `true`, the builder creates an
+exclusive, auto-delete queue instead of the named durable one:
+
+```rust
+if cfg.runner.build_all_jobs != Some(true) {
+ // Normal: named durable queue
+ let queue_name = format!("build-inputs-{system}");
+ chan.declare_queue(QueueConfig { durable: true, exclusive: false, ... });
+} else {
+ // Dev mode: ephemeral queue (receives ALL jobs)
+ warn!("Building all jobs, please don't use this unless ...");
+ chan.declare_queue(QueueConfig { durable: false, exclusive: true, auto_delete: true, ... });
+}
+```
+
+---
+
+## The `BuildWorker`
+
+```rust
+// tasks/build.rs
+pub struct BuildWorker {
+ cloner: checkout::CachedCloner,
+ build_executor: buildtool::BuildExecutor,
+ system: String,
+ identity: String,
+}
+
+impl BuildWorker {
+ pub fn new(
+ cloner: checkout::CachedCloner,
+ build_executor: buildtool::BuildExecutor,
+ system: String,
+ identity: String,
+ ) -> BuildWorker { ... }
+}
+```
+
+The `BuildWorker` implements `SimpleNotifyWorker`, meaning it receives a
+`NotificationReceiver` that allows it to stream log lines back during
+processing.
+
+---
+
+## `JobActions` — The Streaming Helper
+
+`JobActions` wraps the build job context and provides methods for logging and
+reporting:
+
+```rust
+pub struct JobActions {
+ system: String,
+ identity: String,
+ receiver: Arc<dyn NotificationReceiver + Send + Sync>,
+ job: buildjob::BuildJob,
+ line_counter: AtomicU64,
+ snippet_log: parking_lot::RwLock<VecDeque<String>>,
+ attempt_id: String,
+ log_exchange: Option<String>,
+ log_routing_key: Option<String>,
+ result_exchange: Option<String>,
+ result_routing_key: Option<String>,
+}
+```
+
+### Attempt ID
+
+Each build execution gets a unique UUID v4 `attempt_id`:
+
+```rust
+attempt_id: Uuid::new_v4().to_string(),
+```
+
+### Snippet Log
+
+The last 10 lines of output are kept in a ring buffer for inclusion in the
+build result:
+
+```rust
+snippet_log: parking_lot::RwLock::new(VecDeque::with_capacity(10)),
+```
+
+### Log Streaming
+
+```rust
+impl JobActions {
+ pub async fn log_line(&self, line: String) {
+ self.line_counter.fetch_add(1, Ordering::SeqCst);
+
+ // Update snippet ring buffer
+ {
+ let mut snippet_log = self.snippet_log.write();
+ if snippet_log.len() >= 10 {
+ snippet_log.pop_front();
+ }
+ snippet_log.push_back(line.clone());
+ }
+
+ let msg = buildlogmsg::BuildLogMsg {
+ identity: self.identity.clone(),
+ system: self.system.clone(),
+ attempt_id: self.attempt_id.clone(),
+ line_number: self.line_counter.load(Ordering::SeqCst),
+ output: line,
+ };
+
+ self.tell(worker::publish_serde_action(
+ self.log_exchange.clone(),
+ self.log_routing_key.clone(),
+ &msg,
+ )).await;
+ }
+}
+```
+
+Each log line is published as a `BuildLogMsg` to the `logs` exchange in
+real-time. The `line_counter` uses `AtomicU64` for thread-safe incrementing.
+
+### Build Start Notification
+
+```rust
+pub async fn log_started(&self, can_build: Vec<String>, cannot_build: Vec<String>) {
+ let msg = buildlogmsg::BuildLogStart {
+ identity: self.identity.clone(),
+ system: self.system.clone(),
+ attempt_id: self.attempt_id.clone(),
+ attempted_attrs: Some(can_build),
+ skipped_attrs: Some(cannot_build),
+ };
+ self.tell(worker::publish_serde_action(
+ self.log_exchange.clone(), self.log_routing_key.clone(), &msg,
+ )).await;
+}
+```
+
+### Build Result Reporting
+
+```rust
+pub async fn merge_failed(&self) {
+ let msg = BuildResult::V1 {
+ tag: V1Tag::V1,
+ repo: self.job.repo.clone(),
+ pr: self.job.pr.clone(),
+ system: self.system.clone(),
+ output: vec![String::from("Merge failed")],
+ attempt_id: self.attempt_id.clone(),
+ request_id: self.job.request_id.clone(),
+ attempted_attrs: None,
+ skipped_attrs: None,
+ status: BuildStatus::Failure,
+ push: self.job.push.clone(),
+ };
+
+ self.tell(worker::publish_serde_action(
+ self.result_exchange.clone(),
+ self.result_routing_key.clone(),
+ &msg,
+ )).await;
+ self.tell(worker::Action::Ack).await;
+}
+```
+
+### Other Status Methods
+
+```rust
+impl JobActions {
+ pub async fn pr_head_missing(&self) { self.tell(Action::Ack).await; }
+ pub async fn commit_missing(&self) { self.tell(Action::Ack).await; }
+ pub async fn nothing_to_do(&self) { self.tell(Action::Ack).await; }
+ pub async fn merge_failed(&self) { /* publish Failure + Ack */ }
+ pub async fn log_started(&self, ...) { /* publish BuildLogStart */ }
+    pub async fn log_line(&self, line: String) { /* publish BuildLogMsg */ }
+ pub async fn log_instantiation_errors(&self, ...) { /* log each error */ }
+ pub fn log_snippet(&self) -> Vec<String> { /* return last 10 lines */ }
+}
+```
+
+---
+
+## Build Flow
+
+1. **Receive** `BuildJob` from queue
+2. **Clone** repository (using `CachedCloner`)
+3. **Checkout** target branch
+4. **Fetch** PR (if PR-triggered)
+5. **Merge** PR into target branch
+6. **Determine** which attrs can build on this system
+7. **Log start** (`BuildLogStart` message)
+8. **For each attr**:
+ a. Execute build command
+ b. Stream output lines (`BuildLogMsg` messages)
+ c. Check exit status
+9. **Publish result** (`BuildResult` with `BuildStatus`)
+10. **Ack** the original message
+
+---
+
+## Project Detection
+
+The `detect_changed_projects` function in `buildtool.rs` maps changed files
+to project names:
+
+```rust
+pub fn detect_changed_projects(changed_files: &[String]) -> Vec<String>;
+```
+
+It examines the first path component of each changed file and matches it
+against known project directories in the monorepo.
+
+The `find_project` function looks up a project by name:
+
+```rust
+pub fn find_project(name: &str) -> Option<ProjectBuildConfig>;
+```
+
+---
+
+## Build Timeout
+
+The build timeout is enforced at the configuration level:
+
+```rust
+pub struct BuildConfig {
+ pub system: Vec<String>,
+ pub build_timeout_seconds: u16,
+ pub extra_env: Option<HashMap<String, String>>,
+}
+```
+
+The minimum is 300 seconds (5 minutes). This is validated at startup:
+
+```rust
+if self.build.build_timeout_seconds < 300 {
+ error!("Please set build_timeout_seconds to at least 300");
+ panic!();
+}
+```
+
+When a build times out, the result status is set to `BuildStatus::TimedOut`.
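+
+How the final status might be derived from the exit outcome and elapsed time
+(a simplification; the `Success`/`Failure`/`TimedOut` variant names come from
+`BuildStatus` as used elsewhere in this handbook):
+
+```rust
+#[derive(Debug, PartialEq)]
+enum BuildStatus { Success, Failure, TimedOut }
+
+// Illustrative: pick a status once the build subprocess has finished
+// (or been killed after exceeding the configured timeout).
+fn final_status(exit_ok: bool, elapsed_secs: u64, timeout_secs: u16) -> BuildStatus {
+    if elapsed_secs >= u64::from(timeout_secs) {
+        BuildStatus::TimedOut
+    } else if exit_ok {
+        BuildStatus::Success
+    } else {
+        BuildStatus::Failure
+    }
+}
+
+fn main() {
+    assert_eq!(final_status(false, 301, 300), BuildStatus::TimedOut);
+    assert_eq!(final_status(true, 10, 300), BuildStatus::Success);
+    assert_eq!(final_status(false, 10, 300), BuildStatus::Failure);
+}
+```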
+
+---
+
+## NixOS Service Configuration
+
+The builder has special systemd resource limits:
+
+```nix
+# service.nix
+"tickborg-builder" = mkTickborgService "Builder" {
+ binary = "builder";
+ serviceConfig = {
+ MemoryMax = "8G";
+ CPUQuota = "400%";
+ };
+};
+```
+
+The `CPUQuota = "400%"` allows the builder to use up to 4 CPU cores.
+
+The service PATH includes build tools:
+
+```nix
+path = with pkgs; [
+ git bash cmake gnumake gcc pkg-config
+ meson ninja
+ autoconf automake libtool
+ jdk17
+ rustc cargo
+];
+```
diff --git a/docs/handbook/ofborg/building.md b/docs/handbook/ofborg/building.md
new file mode 100644
index 0000000000..622be96356
--- /dev/null
+++ b/docs/handbook/ofborg/building.md
@@ -0,0 +1,530 @@
+# Tickborg — Building
+
+## Prerequisites
+
+| Prerequisite | Minimum Version | Notes |
+|-------------|-----------------|-------|
+| Rust | 1.85+ (edition 2024) | `rustup default stable` |
+| Cargo | Latest stable | Comes with Rust |
+| Git | 2.x | For repository cloning |
+| pkg-config | Any | Native dependency resolution |
+| CMake | 3.x | If building CMake-based sub-projects |
+| OpenSSL / rustls | — | TLS for AMQP + GitHub API |
+
+---
+
+## Quick Build
+
+```bash
+cd ofborg
+cargo build --workspace
+```
+
+This compiles both workspace members:
+- `tickborg` (main crate — library + 11 binaries)
+- `tickborg-simple-build` (simplified build tool)
+
+### Release Build
+
+```bash
+cargo build --workspace --release
+```
+
+The release profile includes debug symbols (`debug = true` in workspace
+`Cargo.toml`) so that backtraces are readable in production.
+
+### Build Individual Binaries
+
+```bash
+# Build only the webhook receiver
+cargo build -p tickborg --bin github-webhook-receiver
+
+# Build only the builder
+cargo build -p tickborg --bin builder
+
+# Build only the mass rebuilder
+cargo build -p tickborg --bin mass-rebuilder
+```
+
+### List All Binary Targets
+
+```bash
+cargo build -p tickborg --bins 2>&1 | head -20
+# Or:
+ls tickborg/src/bin/
+```
+
+Available binaries:
+
+| Binary | Source File |
+|--------|-----------|
+| `build-faker` | `src/bin/build-faker.rs` |
+| `builder` | `src/bin/builder.rs` |
+| `evaluation-filter` | `src/bin/evaluation-filter.rs` |
+| `github-comment-filter` | `src/bin/github-comment-filter.rs` |
+| `github-comment-poster` | `src/bin/github-comment-poster.rs` |
+| `github-webhook-receiver` | `src/bin/github-webhook-receiver.rs` |
+| `log-message-collector` | `src/bin/log-message-collector.rs` |
+| `logapi` | `src/bin/logapi.rs` |
+| `mass-rebuilder` | `src/bin/mass-rebuilder.rs` |
+| `push-filter` | `src/bin/push-filter.rs` |
+| `stats` | `src/bin/stats.rs` |
+
+---
+
+## Cargo.toml — Dependencies Deep Dive
+
+### `tickborg/Cargo.toml`
+
+```toml
+[package]
+name = "tickborg"
+version = "0.1.0"
+authors = ["Project Tick Contributors"]
+build = "build.rs"
+edition = "2024"
+description = "Distributed CI bot for Project Tick monorepo"
+license = "MIT"
+```
+
+### Core Dependencies
+
+#### Async Runtime & Networking
+
+```toml
+tokio = { version = "1", features = ["rt-multi-thread", "net", "macros", "sync"] }
+tokio-stream = "0.1"
+futures = "0.3.31"
+futures-util = "0.3.31"
+async-trait = "0.1.89"
+```
+
+- **tokio**: The async runtime. `rt-multi-thread` enables the work-stealing
+ scheduler. `net` provides TCP listeners. `macros` enables `#[tokio::main]`.
+ `sync` provides `RwLock`, `Mutex`, etc.
+- **tokio-stream**: `StreamExt` for consuming lapin message streams.
+- **futures / futures-util**: `join_all`, `TryFutureExt`, and stream utilities.
+- **async-trait**: Enables `async fn` in trait definitions (used by
+ `SimpleNotifyWorker` and `NotificationReceiver`).
+
+#### AMQP Client
+
+```toml
+lapin = "4.3.0"
+```
+
+- **lapin**: Pure-Rust AMQP 0-9-1 client. Provides `Connection`, `Channel`,
+ `Consumer`, publish/consume/ack/nack operations. Built on tokio.
+
+#### HTTP Server
+
+```toml
+hyper = { version = "1.0", features = ["full", "server", "http1"] }
+hyper-util = { version = "0.1", features = ["server", "tokio", "http1"] }
+http = "1"
+http-body-util = "0.1"
+```
+
+- **hyper**: The webhook receiver and logapi/stats HTTP servers use hyper 1.0
+ directly (no framework). `http1` feature is sufficient — no HTTP/2 needed.
+- **hyper-util**: `TokioIo` adapter and server utilities.
+- **http**: Standard HTTP types (`StatusCode`, `Method`, `Request`, `Response`).
+- **http-body-util**: `Full<Bytes>` response body, `BodyExt` for collecting
+ incoming bodies.
+
+#### GitHub API
+
+```toml
+hubcaps = { git = "https://github.com/ofborg/hubcaps.git", rev = "0d7466e..." }
+```
+
+- **hubcaps**: GitHub REST API client. The custom fork adds
+ `Conclusion::Skipped` for check runs. Provides:
+ - `Github` client
+ - `Credentials` (Client OAuth, JWT, InstallationToken)
+ - `JWTCredentials`, `InstallationTokenGenerator`
+ - Repository, Pull Request, Issue, Statuses, Check Runs APIs
+
+#### Serialization
+
+```toml
+serde = { version = "1.0.217", features = ["derive"] }
+serde_json = "1.0.135"
+```
+
+All message types, configuration, and GitHub event payloads use serde for
+JSON serialization/deserialization.
+
+#### Cryptography
+
+```toml
+hmac = "0.13.0"
+sha2 = "0.11.0"
+hex = "0.4.3"
+md5 = "0.8.0"
+```
+
+- **hmac + sha2**: HMAC-SHA256 for GitHub webhook signature verification.
+- **hex**: Hex encoding/decoding for signature comparison.
+- **md5**: Hashing repository names for cache directory names (not security-critical).
+
+#### TLS
+
+```toml
+rustls-pki-types = "1.14"
+```
+
+- Reading PEM-encoded private keys for GitHub App JWT authentication.
+
+#### Parsing
+
+```toml
+nom = "8"
+regex = "1.11.1"
+brace-expand = "0.1.0"
+```
+
+- **nom**: Parser combinator library for the `@tickbot` comment command parser.
+- **regex**: Pattern matching for PR title label extraction and commit scope
+ parsing.
+- **brace-expand**: Shell-style brace expansion (e.g., `{meshmc,mnv}`).
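+
+A single-level expansion of the kind the comment parser relies on can be
+sketched in a few lines (this is an illustration, not the `brace-expand`
+crate's implementation, and it only handles one `{a,b}` group):
+
+```rust
+// Expand one "{a,b}" group: "build {meshmc,mnv}" -> two strings.
+fn expand_once(input: &str) -> Vec<String> {
+    let (Some(open), Some(close)) = (input.find('{'), input.find('}')) else {
+        return vec![input.to_string()];
+    };
+    if close < open {
+        return vec![input.to_string()];
+    }
+    let (head, rest) = input.split_at(open);
+    let body = &rest[1..close - open];          // between the braces
+    let tail = &rest[close - open + 1..];       // after the closing brace
+    body.split(',')
+        .map(|alt| format!("{head}{alt}{tail}"))
+        .collect()
+}
+
+fn main() {
+    assert_eq!(expand_once("build {meshmc,mnv}"), vec!["build meshmc", "build mnv"]);
+    assert_eq!(expand_once("build all"), vec!["build all"]);
+}
+```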
+
+#### Logging
+
+```toml
+tracing = "0.1.41"
+tracing-subscriber = { version = "0.3.19", features = ["json", "env-filter"] }
+```
+
+- **tracing**: Structured logging with spans and events.
+- **tracing-subscriber**: `EnvFilter` for `RUST_LOG`-based filtering, JSON
+ formatter for production logging.
+
+#### Concurrency
+
+```toml
+parking_lot = "0.12.4"
+fs2 = "0.4.3"
+```
+
+- **parking_lot**: Fast `Mutex` and `RwLock` (used for the snippet log in
+ `BuildWorker` and the `DummyNotificationReceiver` in tests).
+- **fs2**: File-based exclusive locking (`flock`) for git operations.
+
+#### Utilities
+
+```toml
+chrono = { version = "0.4.38", default-features = false, features = ["clock", "std"] }
+either = "1.13.0"
+lru-cache = "0.1.2"
+mime = "0.3"
+tempfile = "3.15.0"
+uuid = { version = "1.12", features = ["v4"] }
+```
+
+- **chrono**: Timestamps for check run `started_at` / `completed_at`.
+- **lru-cache**: LRU eviction for open log file handles in the log collector.
+- **tempfile**: Temporary files for build output capture.
+- **uuid**: v4 UUIDs for `attempt_id` and `request_id`.
+
+---
+
+## Build Script (`build.rs`)
+
+The crate has a build script at `tickborg/build.rs` that generates event
+definitions at compile time:
+
+```rust
+// tickborg/src/stats.rs
+include!(concat!(env!("OUT_DIR"), "/events.rs"));
+```
+
+The build script generates an `events.rs` file in `OUT_DIR` containing the
+`Event` enum and related metric functions used by the stats system.
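+
+The general shape of such a build script can be sketched as follows. The
+event names and the generator function are hypothetical; the real generator
+and its event set live in `tickborg/build.rs`:
+
+```rust
+use std::{env, fs, path::Path};
+
+// Generate source for a simple Event enum (event names hypothetical).
+fn generate(events: &[&str]) -> String {
+    let mut src = String::from("pub enum Event {\n");
+    for e in events {
+        src.push_str("    ");
+        src.push_str(e);
+        src.push_str(",\n");
+    }
+    src.push_str("}\n");
+    src
+}
+
+fn main() {
+    let src = generate(&["JobReceived", "JobDecodeSuccess"]);
+    // Under cargo, OUT_DIR is always set; the guard lets this sketch
+    // also run standalone.
+    if let Ok(out_dir) = env::var("OUT_DIR") {
+        fs::write(Path::new(&out_dir).join("events.rs"), src).unwrap();
+    }
+}
+```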
+
+---
+
+## Running Tests
+
+```bash
+# Run all tests
+cargo test --workspace
+
+# Run tests for tickborg only
+cargo test -p tickborg
+
+# Run a specific test
+cargo test -p tickborg -- evaluationfilter::tests::changed_base
+
+# Run tests with output
+cargo test -p tickborg -- --nocapture
+
+# Run tests with logging
+RUST_LOG=tickborg=debug cargo test -p tickborg -- --nocapture
+```
+
+### Test Data
+
+Test fixtures are located in:
+
+```
+tickborg/test-srcs/events/ — GitHub webhook JSON payloads
+tickborg/test-scratch/ — Scratch test data
+tickborg/test-nix/ — Legacy Nix test data
+```
+
+Tests load fixtures at compile time:
+
+```rust
+let data = include_str!("../../test-srcs/events/pr-changed-base.json");
+let job: PullRequestEvent = serde_json::from_str(data).expect("...");
+```
+
+---
+
+## Linting
+
+```bash
+# Check formatting
+cargo fmt --check
+
+# Run clippy
+cargo clippy --workspace
+
+# Both (as defined in the dev shell)
+cargo fmt && cargo clippy
+```
+
+The dev shell sets `RUSTFLAGS = "-D warnings"` so that all warnings are treated
+as errors in CI.
+
+Known clippy allowances in the codebase:
+
+```rust
+#![allow(clippy::redundant_closure)] // lib.rs — readability preference
+#[allow(clippy::cognitive_complexity)] // githubcommentfilter — complex match
+#[allow(clippy::too_many_arguments)] // OneEval::new
+#[allow(clippy::upper_case_acronyms)] // Subset::Project
+#[allow(clippy::vec_init_then_push)] // githubcommentposter — readability
+```
+
+---
+
+## Nix-Based Build
+
+### Dev Shell
+
+```bash
+nix develop ./ofborg
+```
+
+This provides:
+
+```nix
+nativeBuildInputs = with pkgs; [
+ bash rustc cargo clippy rustfmt pkg-config git cmake
+];
+
+RUSTFLAGS = "-D warnings";
+RUST_BACKTRACE = "1";
+RUST_LOG = "tickborg=debug";
+```
+
+The dev shell also defines a `checkPhase` function:
+
+```bash
+checkPhase() (
+ cd ofborg
+ set -x
+ cargo fmt
+ git diff --exit-code
+ cargo clippy
+ cargo build && cargo test
+)
+```
+
+### Nix Package
+
+```bash
+nix build ./ofborg#tickborg
+```
+
+The flake defines a `rustPlatform.buildRustPackage` derivation:
+
+```nix
+pkg = pkgs.rustPlatform.buildRustPackage {
+ name = "tickborg";
+ src = pkgs.nix-gitignore.gitignoreSource [ ] ./.;
+ nativeBuildInputs = with pkgs; [ pkg-config pkgs.rustPackages.clippy ];
+ preBuild = ''cargo clippy'';
+ doCheck = false;
+ cargoLock = {
+ lockFile = ./Cargo.lock;
+ outputHashes = {
+ "hubcaps-0.6.2" = "sha256-Vl4wQIKQVRxkpQxL8fL9rndAN3TKLV4OjgnZOpT6HRo=";
+ };
+ };
+};
+```
+
+The `outputHashes` entry pins the git-sourced `hubcaps` dependency for
+reproducible builds.
+
+---
+
+## Docker Build
+
+```bash
+cd ofborg
+docker build -t tickborg .
+```
+
+The `Dockerfile` performs a multi-stage build:
+
+1. **Builder stage**: Compiles all binaries in release mode.
+2. **Runtime stage**: Copies only the compiled binaries and necessary runtime
+ dependencies.
+
+For the full stack:
+
+```bash
+docker compose build
+docker compose up -d
+```
+
+See [deployment.md](deployment.md) for production Docker usage.
+
+---
+
+## Dependency Management
+
+### Updating Dependencies
+
+```bash
+cargo update # Update all deps within semver ranges
+cargo update -p lapin # Update a specific dependency
+```
+
+### The Lockfile
+
+`Cargo.lock` is checked into version control because tickborg produces binaries.
+This ensures reproducible builds across all environments.
+
+### Git Dependencies
+
+```toml
+hubcaps = { git = "https://github.com/ofborg/hubcaps.git", rev = "0d7466e..." }
+```
+
+This is pinned to a specific commit for stability. When the upstream fork is
+updated, change the `rev` and update the Nix `outputHashes` accordingly.
+
+### Patching Dependencies
+
+The workspace `Cargo.toml` has commented-out patch sections:
+
+```toml
+[patch.crates-io]
+#hubcaps = { path = "../hubcaps" }
+#amq-proto = { path = "rust-amq-proto" }
+```
+
+Uncomment these to develop against local checkouts of forked dependencies.
+
+---
+
+## Build Output
+
+After `cargo build --release`, binaries are located at:
+
+```
+ofborg/target/release/build-faker
+ofborg/target/release/builder
+ofborg/target/release/evaluation-filter
+ofborg/target/release/github-comment-filter
+ofborg/target/release/github-comment-poster
+ofborg/target/release/github-webhook-receiver
+ofborg/target/release/log-message-collector
+ofborg/target/release/logapi
+ofborg/target/release/mass-rebuilder
+ofborg/target/release/push-filter
+ofborg/target/release/stats
+```
+
+Each binary is self-contained and reads its configuration from the JSON file
+named by the `CONFIG_PATH` environment variable (defaulting to `config.json`
+in the working directory):
+
+```bash
+CONFIG_PATH=/etc/tickborg/config.json ./target/release/builder
+```
+
+---
+
+## Cross-Compilation
+
+The flake supports building on:
+
+```nix
+supportedSystems = [
+ "aarch64-darwin"
+ "x86_64-darwin"
+ "x86_64-linux"
+ "aarch64-linux"
+];
+```
+
+On macOS, additional build inputs are needed:
+
+```nix
+buildInputs = with pkgs; lib.optionals stdenv.isDarwin [
+ darwin.Security
+ libiconv
+];
+```
+
+---
+
+## Incremental Compilation Tips
+
+1. **Use `cargo check` for fast feedback**: Skips codegen, only type-checks.
+2. **Set `CARGO_INCREMENTAL=1`**: Enabled by default in debug builds.
+3. **Use `sccache`**: `RUSTC_WRAPPER=sccache cargo build` for cached
+ compilation across clean builds.
+4. **Link with `mold`**: On Linux, add to `.cargo/config.toml`:
+ ```toml
+ [target.x86_64-unknown-linux-gnu]
+ linker = "clang"
+ rustflags = ["-C", "link-arg=-fuse-ld=mold"]
+ ```
+
+---
+
+## Troubleshooting
+
+### `error[E0554]: #![feature] may not be used on the stable release channel`
+
+Your Rust toolchain is too old: Edition 2024 requires Rust 1.85 or later. Run:
+```bash
+rustup update stable
+```
+
+### `hubcaps` build failure
+
+The git dependency needs network access on first build. Ensure the rev is
+reachable:
+```bash
+git ls-remote https://github.com/ofborg/hubcaps.git 0d7466e
+```
+
+### Linking errors on macOS
+
+Ensure Xcode Command Line Tools are installed:
+```bash
+xcode-select --install
+```
+
+### `lapin` connection failures at runtime
+
+This is a runtime issue, not a build issue. Ensure RabbitMQ is running and
+the config file points to the correct host. See
+[configuration.md](configuration.md).
diff --git a/docs/handbook/ofborg/code-style.md b/docs/handbook/ofborg/code-style.md
new file mode 100644
index 0000000000..25f0d228d3
--- /dev/null
+++ b/docs/handbook/ofborg/code-style.md
@@ -0,0 +1,332 @@
+# Tickborg — Code Style & Conventions
+
+## Rust Edition and Toolchain
+
+- **Edition**: 2024
+- **Resolver**: Cargo workspace resolver v2
+- **MSRV**: Not pinned — follows latest stable
+
+---
+
+## Module Organization
+
+### Top-Level Layout
+
+```
+tickborg/src/
+├── lib.rs # Public API, module declarations, setup_log()
+├── config.rs # Configuration loading and types
+├── worker.rs # SimpleWorker trait, Action enum
+├── notifyworker.rs # SimpleNotifyWorker trait
+├── easyamqp.rs # AMQP abstraction types
+├── easylapin.rs # lapin-based AMQP implementations
+├── acl.rs # Access control
+├── systems.rs # Platform/architecture definitions
+├── commentparser.rs # @tickbot command parser (nom)
+├── checkout.rs # Git clone/checkout/merge
+├── buildtool.rs # Build system detection
+├── commitstatus.rs # GitHub commit status wrapper
+├── tagger.rs # PR label generation
+├── clone.rs # Low-level git operations
+├── locks.rs # File-based locking
+├── asynccmd.rs # Async subprocess execution
+├── evalchecker.rs # Generic command runner
+├── stats.rs # Metrics collection trait
+├── writetoline.rs # Line-targeted file writing
+├── bin/ # Binary entry points (11 files)
+├── tasks/ # Worker implementations
+├── message/ # AMQP message types
+├── ghevent/ # GitHub webhook event types
+└── eval/ # Evaluation strategies
+```
+
+### Convention: One Trait Per File
+
+Worker-related traits each get their own file:
+- `worker.rs` → `SimpleWorker`
+- `notifyworker.rs` → `SimpleNotifyWorker`
+
+### Convention: `mod.rs` in Sub-Modules
+
+Sub-directories use `mod.rs` for re-exports:
+
+```rust
+// message/mod.rs
+pub mod buildjob;
+pub mod buildresult;
+pub mod evaluationjob;
+pub mod buildlogmsg;
+pub mod common;
+```
+
+---
+
+## Naming Conventions
+
+### Types
+
+| Pattern | Example |
+|---------|---------|
+| Worker structs | `BuildWorker`, `EvaluationFilterWorker` |
+| Config structs | `RabbitMqConfig`, `BuilderConfig` |
+| Message structs | `BuildJob`, `BuildResult`, `EvaluationJob` |
+| Event structs | `PullRequestEvent`, `IssueComment`, `PushEvent` |
+| Enums | `BuildStatus`, `ExchangeType`, `System` |
+
+### Functions
+
+| Pattern | Example |
+|---------|---------|
+| Constructors | `new()`, `from_config()` |
+| Predicates | `is_tag()`, `is_delete()`, `is_zero_sha()` |
+| Accessors | `branch()`, `name()` |
+| Actions | `set_with_description()`, `analyze_changes()` |
+
+### Constants
+
+```rust
+pub const VERSION: &str = env!("CARGO_PKG_VERSION");
+```
+
+Constants use `SCREAMING_SNAKE_CASE`.
+
+---
+
+## Async Patterns
+
+### `async fn` in Traits
+
+Tickborg relies on stable Rust's native support for `async fn` in traits
+(stabilized in Rust 1.75); trait definitions spell these out as `impl Future`
+return types so a `Send` bound can be attached:
+
+```rust
+pub trait SimpleWorker: Send {
+ type J: Send;
+
+ fn msg_to_job(/* ... */) -> impl Future<Output = Result<Self::J, String>> + Send;
+ fn consumer(&mut self, job: &Self::J) -> impl Future<Output = Actions> + Send;
+}
+```
+
+### Tokio Runtime
+
+All binaries use the multi-threaded Tokio runtime:
+
+```rust
+#[tokio::main]
+async fn main() {
+ // ...
+}
+```
+
+### `RwLock` for Shared State
+
+The `GithubAppVendingMachine` is wrapped in `tokio::sync::RwLock` to allow
+concurrent read access to cached tokens:
+
+```rust
+pub struct EvaluationWorker<E> {
+ github_vend: tokio::sync::RwLock<GithubAppVendingMachine>,
+ // ...
+}
+```
+
+---
+
+## Error Handling
+
+### Pattern: Enum-Based Errors
+
+```rust
+#[derive(Debug)]
+pub enum CommitStatusError {
+ ExpiredCreds(String),
+ MissingSha(String),
+ InternalError(String),
+ Error(String),
+}
+```
+
+### Pattern: String Errors for Worker Actions
+
+Worker methods return `Result<_, String>` for simplicity — the error message
+is logged and the job is acked or nacked.
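+
+One hedged sketch of that contract (hypothetical job type — the real
+`msg_to_job` implementations deserialize AMQP payloads):
+
+```rust
+// Sketch: decode a raw message body, mapping any failure to a String error
+// that the consumer loop can log before acking or nacking the delivery.
+fn msg_to_job(body: &[u8]) -> Result<String, String> {
+    std::str::from_utf8(body)
+        .map(str::to_owned)
+        .map_err(|e| format!("failed to decode message: {e}"))
+}
+```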
+
+### Pattern: `unwrap_or_else` with `panic!` for Config
+
+```rust
+let config_str = std::fs::read_to_string(&path)
+ .unwrap_or_else(|e| panic!("Failed to read: {e}"));
+```
+
+Configuration errors are unrecoverable — panic is appropriate at startup.
+
+---
+
+## Serialization
+
+### Serde Conventions
+
+```rust
+// snake_case field renaming
+#[derive(Deserialize, Debug)]
+#[serde(rename_all = "snake_case")]
+pub enum PullRequestAction {
+ Opened,
+ Closed,
+ Synchronize,
+ // ...
+}
+
+// Optional fields
+#[derive(Deserialize, Debug)]
+pub struct Config {
+ pub builder: Option<BuilderConfig>,
+ // ...
+}
+
+// Default values via a named helper function
+#[derive(Deserialize, Debug)]
+pub struct QueueConfig {
+    #[serde(default = "default_true")]
+    pub durable: bool,
+}
+
+fn default_true() -> bool {
+    true
+}
+```
+
+### JSON Message Format
+
+All AMQP messages are `serde_json::to_vec()`:
+
+```rust
+pub fn publish_serde_action<T: Serialize>(
+ exchange: Option<String>,
+ routing_key: Option<String>,
+ msg: &T,
+) -> Action {
+ Action::Publish(QueueMsg {
+ exchange,
+ routing_key,
+ content: serde_json::to_vec(msg).unwrap(),
+ })
+}
+```
+
+---
+
+## Testing Patterns
+
+### Unit Tests in Module Files
+
+```rust
+#[cfg(test)]
+mod tests {
+ use super::*;
+
+ #[test]
+ fn test_parse_build_command() {
+ let result = parse("@tickbot build meshmc");
+ assert_eq!(result, vec![Instruction::Build(
+ vec!["meshmc".to_owned()],
+ Subset::Project,
+ )]);
+ }
+}
+```
+
+### The `build-faker` Binary
+
+`bin/build-faker.rs` is a test utility that simulates a builder without
+actually running builds. It is useful for testing the AMQP pipeline
+end-to-end.
+
+---
+
+## Logging
+
+### `tracing` Macros
+
+```rust
+use tracing::{info, warn, error, debug, trace};
+
+info!("Starting webhook receiver on port {}", port);
+warn!("Token expired, refreshing");
+error!("Failed to decode message: {}", err);
+debug!(routing_key = %key, "Received message");
+```
+
+### Structured Fields
+
+```rust
+tracing::info!(
+ pr = %job.pr.number,
+ repo = %job.repo.full_name,
+ project = %project_name,
+ "Starting build"
+);
+```
+
+---
+
+## Git Operations
+
+### `CachedCloner` Pattern
+
+All git operations go through the `CachedCloner` → `CachedProject` →
+`CachedProjectCo` chain:
+
+```rust
+let cloner = CachedCloner::new(checkout_root, 3); // 3 concurrent clones max
+let project = cloner.project("owner/repo", clone_url);
+let co = project.clone_for("purpose".into(), identity.into())?;
+co.fetch_pr(42)?;
+co.merge_commit(OsStr::new("pr"))?;
+```
+
+### File Locking
+
+```rust
+// locks.rs
+pub struct LockFile {
+ path: PathBuf,
+ file: Option<File>,
+}
+
+impl LockFile {
+ pub fn lock(path: &Path) -> Result<Self, io::Error>;
+}
+
+impl Drop for LockFile {
+ fn drop(&mut self) {
+ // Release lock automatically
+ }
+}
+```
+
+---
+
+## Clippy and Formatting
+
+```bash
+# Format
+cargo fmt --all
+
+# Lint
+cargo clippy --all-targets --all-features -- -D warnings
+```
+
+The CI pipeline enforces both. The workspace `Cargo.toml` does not set custom
+clippy lints — the defaults plus `-D warnings` are used.
+
+---
+
+## Dependencies Policy
+
+- **Minimal external crates** — only well-maintained crates with clear purpose.
+- **Pinned git dependencies** — the `hubcaps` fork is pinned to a specific rev.
+- **Feature-gated Tokio** — only `rt-multi-thread`, `net`, `macros`, `sync`.
+- **No `unwrap()` in library code** — except config loading at startup.
+- **Release profile**: `debug = true` is set to include debug symbols in
+ release builds for better crash diagnostics.
diff --git a/docs/handbook/ofborg/configuration.md b/docs/handbook/ofborg/configuration.md
new file mode 100644
index 0000000000..143ac75f8e
--- /dev/null
+++ b/docs/handbook/ofborg/configuration.md
@@ -0,0 +1,472 @@
+# Tickborg — Configuration Reference
+
+## Overview
+
+Tickborg is configured via a single JSON file, typically located at
+`config.json` or specified via the `CONFIG_PATH` environment variable.
+The file maps to the top-level `Config` struct in `tickborg/src/config.rs`.
+
+---
+
+## Loading Configuration
+
+```rust
+// config.rs
+pub fn load() -> Config {
+ let config_path = env::var("CONFIG_PATH")
+ .unwrap_or_else(|_| "config.json".to_owned());
+
+ let config_str = std::fs::read_to_string(&config_path)
+ .unwrap_or_else(|e| panic!("Failed to read config file {config_path}: {e}"));
+
+ serde_json::from_str(&config_str)
+ .unwrap_or_else(|e| panic!("Failed to parse config file {config_path}: {e}"))
+}
+```
+
+---
+
+## Top-Level `Config`
+
+```rust
+#[derive(Deserialize, Debug)]
+pub struct Config {
+ pub identity: String,
+ pub rabbitmq: RabbitMqConfig,
+ pub github_app: Option<GithubAppConfig>,
+
+ // Per-service configs — only the relevant one needs to be present
+ pub github_webhook: Option<GithubWebhookConfig>,
+ pub log_api: Option<LogApiConfig>,
+ pub evaluation_filter: Option<EvaluationFilterConfig>,
+ pub mass_rebuilder: Option<MassRebuilderConfig>,
+ pub builder: Option<BuilderConfig>,
+ pub github_comment_filter: Option<GithubCommentFilterConfig>,
+ pub github_comment_poster: Option<GithubCommentPosterConfig>,
+ pub log_message_collector: Option<LogMessageCollectorConfig>,
+ pub push_filter: Option<PushFilterConfig>,
+ pub stats: Option<StatsConfig>,
+}
+```
+
+### `identity`
+
+A unique string identifying this instance. Used as:
+- AMQP consumer tags (`evaluation-filter-{identity}`)
+- Exclusive queue names (`build-inputs-{identity}`)
+- GitHub Check Run external ID
+
+```json
+{
+ "identity": "prod-worker-01"
+}
+```
+
+---
+
+## `RabbitMqConfig`
+
+```rust
+#[derive(Deserialize, Debug)]
+pub struct RabbitMqConfig {
+ pub ssl: bool,
+ pub host: String,
+ pub vhost: Option<String>,
+ pub username: String,
+ pub password_file: PathBuf,
+}
+```
+
+| Field | Type | Required | Description |
+|-------|------|----------|-------------|
+| `ssl` | bool | yes | Use `amqps://` instead of `amqp://` |
+| `host` | string | yes | RabbitMQ hostname (may include port) |
+| `vhost` | string | no | Virtual host (default: `/`) |
+| `username` | string | yes | AMQP username |
+| `password_file` | path | yes | File containing the password (not the password itself) |
+
+```json
+{
+ "rabbitmq": {
+ "ssl": true,
+ "host": "rabbitmq.example.com",
+ "vhost": "tickborg",
+ "username": "tickborg",
+ "password_file": "/run/secrets/rabbitmq-password"
+ }
+}
+```
+
+> **Security**: The password is read from a file rather than stored directly
+> in the config, allowing secure credential injection via systemd credentials,
+> Docker secrets, or similar mechanisms.
+
+---
+
+## `GithubAppConfig`
+
+```rust
+#[derive(Deserialize, Debug)]
+pub struct GithubAppConfig {
+ pub app_id: u64,
+ pub private_key_file: PathBuf,
+ pub owner: String,
+ pub repo: String,
+ pub installation_id: Option<u64>,
+}
+```
+
+| Field | Type | Required | Description |
+|-------|------|----------|-------------|
+| `app_id` | u64 | yes | GitHub App ID |
+| `private_key_file` | path | yes | PEM-encoded RSA private key |
+| `owner` | string | yes | Repository owner |
+| `repo` | string | yes | Repository name |
+| `installation_id` | u64 | no | Installation ID (auto-detected if omitted) |
+
+```json
+{
+ "github_app": {
+ "app_id": 12345,
+ "private_key_file": "/run/secrets/github-app-key.pem",
+ "owner": "project-tick",
+ "repo": "Project-Tick",
+ "installation_id": 67890
+ }
+}
+```
+
+---
+
+## Service-Specific Configs
+
+### `GithubWebhookConfig`
+
+```rust
+#[derive(Deserialize, Debug)]
+pub struct GithubWebhookConfig {
+ pub bind_address: Option<String>,
+ pub port: u16,
+ pub webhook_secret: String,
+}
+```
+
+```json
+{
+ "github_webhook": {
+ "bind_address": "0.0.0.0",
+ "port": 8080,
+ "webhook_secret": "your-webhook-secret-here"
+ }
+}
+```
+
+### `LogApiConfig`
+
+```rust
+#[derive(Deserialize, Debug)]
+pub struct LogApiConfig {
+ pub bind_address: Option<String>,
+ pub port: u16,
+ pub log_storage_path: PathBuf,
+}
+```
+
+```json
+{
+ "log_api": {
+ "port": 8081,
+ "log_storage_path": "/var/log/tickborg/builds"
+ }
+}
+```
+
+### `EvaluationFilterConfig`
+
+```rust
+#[derive(Deserialize, Debug)]
+pub struct EvaluationFilterConfig {
+ pub repos: Vec<String>,
+}
+```
+
+```json
+{
+ "evaluation_filter": {
+ "repos": [
+ "project-tick/Project-Tick"
+ ]
+ }
+}
+```
+
+### `MassRebuilderConfig`
+
+```rust
+#[derive(Deserialize, Debug)]
+pub struct MassRebuilderConfig {
+ pub checkout: CheckoutConfig,
+}
+```
+
+```json
+{
+ "mass_rebuilder": {
+ "checkout": {
+ "root": "/var/cache/tickborg/checkout"
+ }
+ }
+}
+```
+
+### `BuilderConfig` / `RunnerConfig`
+
+```rust
+#[derive(Deserialize, Debug)]
+pub struct BuilderConfig {
+ pub runner: RunnerConfig,
+ pub checkout: CheckoutConfig,
+ pub build: BuildConfig,
+}
+
+#[derive(Deserialize, Debug)]
+pub struct RunnerConfig {
+ pub identity: Option<String>,
+ pub architectures: Vec<String>,
+}
+
+#[derive(Deserialize, Debug)]
+pub struct BuildConfig {
+ pub timeout_seconds: u64,
+ pub log_tail_lines: usize,
+}
+
+#[derive(Deserialize, Debug)]
+pub struct CheckoutConfig {
+ pub root: PathBuf,
+}
+```
+
+```json
+{
+ "builder": {
+ "runner": {
+ "identity": "builder-x86_64-linux",
+ "architectures": ["x86_64-linux"]
+ },
+ "checkout": {
+ "root": "/var/cache/tickborg/checkout"
+ },
+ "build": {
+ "timeout_seconds": 3600,
+ "log_tail_lines": 100
+ }
+ }
+}
+```
+
+### `GithubCommentFilterConfig`
+
+```rust
+#[derive(Deserialize, Debug)]
+pub struct GithubCommentFilterConfig {
+ pub repos: Vec<String>,
+ pub trusted_users: Option<Vec<String>>,
+}
+```
+
+```json
+{
+ "github_comment_filter": {
+ "repos": ["project-tick/Project-Tick"],
+ "trusted_users": ["maintainer1", "maintainer2"]
+ }
+}
+```
+
+### `LogMessageCollectorConfig`
+
+```rust
+#[derive(Deserialize, Debug)]
+pub struct LogMessageCollectorConfig {
+ pub log_storage_path: PathBuf,
+}
+```
+
+```json
+{
+ "log_message_collector": {
+ "log_storage_path": "/var/log/tickborg/builds"
+ }
+}
+```
+
+### `StatsConfig`
+
+```rust
+#[derive(Deserialize, Debug)]
+pub struct StatsConfig {
+ pub bind_address: Option<String>,
+ pub port: u16,
+}
+```
+
+```json
+{
+ "stats": {
+ "port": 9090
+ }
+}
+```
+
+---
+
+## Complete Example
+
+Based on `example.config.json`:
+
+```json
+{
+ "identity": "prod-01",
+ "rabbitmq": {
+ "ssl": false,
+ "host": "localhost",
+ "vhost": "tickborg",
+ "username": "tickborg",
+ "password_file": "/run/secrets/rabbitmq-password"
+ },
+ "github_app": {
+ "app_id": 12345,
+ "private_key_file": "/run/secrets/github-app-key.pem",
+ "owner": "project-tick",
+ "repo": "Project-Tick"
+ },
+ "github_webhook": {
+ "port": 8080,
+ "webhook_secret": "change-me"
+ },
+ "evaluation_filter": {
+ "repos": ["project-tick/Project-Tick"]
+ },
+ "mass_rebuilder": {
+ "checkout": {
+ "root": "/var/cache/tickborg/checkout"
+ }
+ },
+ "builder": {
+ "runner": {
+ "architectures": ["x86_64-linux"]
+ },
+ "checkout": {
+ "root": "/var/cache/tickborg/checkout"
+ },
+ "build": {
+ "timeout_seconds": 3600,
+ "log_tail_lines": 100
+ }
+ },
+ "github_comment_filter": {
+ "repos": ["project-tick/Project-Tick"]
+ },
+ "log_message_collector": {
+ "log_storage_path": "/var/log/tickborg/builds"
+ },
+ "log_api": {
+ "port": 8081,
+ "log_storage_path": "/var/log/tickborg/builds"
+ },
+ "stats": {
+ "port": 9090
+ }
+}
+```
+
+---
+
+## Environment Variables
+
+| Variable | Default | Description |
+|----------|---------|-------------|
+| `CONFIG_PATH` | `config.json` | Path to the JSON config file |
+| `RUST_LOG` | `info` | `tracing` filter directive |
+| `RUST_LOG_JSON` | (unset) | Set (to any value) to enable structured JSON log output |
+
+### `RUST_LOG` Examples
+
+```bash
+# Default — info for everything
+RUST_LOG=info
+
+# Debug for tickborg, info for everything else
+RUST_LOG=info,tickborg=debug
+
+# Trace AMQP operations
+RUST_LOG=info,tickborg=debug,lapin=trace
+
+# Only errors
+RUST_LOG=error
+```
+
+### Logging Initialization
+
+```rust
+// lib.rs
+pub fn setup_log() {
+ let json = std::env::var("RUST_LOG_JSON").is_ok();
+
+ let subscriber = tracing_subscriber::fmt()
+ .with_env_filter(EnvFilter::from_default_env());
+
+ if json {
+ subscriber.json().init();
+ } else {
+ subscriber.init();
+ }
+}
+```
+
+---
+
+## ACL Configuration
+
+The ACL (Access Control List) is derived from the configuration and controls:
+
+- **Repository eligibility** — Which repos tickborg responds to
+- **Architecture access** — Which platforms a user can build on
+- **Unrestricted builds** — Whether a user can bypass project restrictions
+
+```rust
+// acl.rs
+pub struct Acl {
+ repos: Vec<String>,
+ trusted_users: Vec<String>,
+}
+
+impl Acl {
+ pub fn is_repo_eligible(&self, repo: &str) -> bool;
+ pub fn build_job_architectures_for_user_repo(
+ &self, user: &str, repo: &str
+ ) -> Vec<System>;
+ pub fn can_build_unrestricted(&self, user: &str, repo: &str) -> bool;
+}
+```
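+
+A plausible implementation of the repository check, under the assumption that
+`repos` holds `owner/name` strings as in the config examples (the real
+`acl.rs` may differ):
+
+```rust
+pub struct Acl {
+    repos: Vec<String>,
+}
+
+impl Acl {
+    pub fn new(repos: Vec<String>) -> Self {
+        Self { repos }
+    }
+
+    /// GitHub repository names are case-insensitive, so compare accordingly.
+    pub fn is_repo_eligible(&self, repo: &str) -> bool {
+        self.repos.iter().any(|r| r.eq_ignore_ascii_case(repo))
+    }
+}
+```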
+
+---
+
+## Secrets Management
+
+Files containing secrets should be readable only by the tickborg service user:
+
+```bash
+# RabbitMQ password
+echo -n "secret-password" > /run/secrets/rabbitmq-password
+chmod 600 /run/secrets/rabbitmq-password
+
+# GitHub App private key
+cp github-app.pem /run/secrets/github-app-key.pem
+chmod 600 /run/secrets/github-app-key.pem
+```
+
+With NixOS and systemd `DynamicUser`, secrets can be placed in
+`/run/credentials/tickborg-*` using systemd's `LoadCredential` or
+`SetCredential` directives.
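+
+When loading a secret such as `password_file`, it is defensive to strip any
+trailing newline an editor may have added — a sketch under that assumption
+(the real loader may read the bytes verbatim):
+
+```rust
+use std::path::Path;
+
+/// Read a secret from disk, trimming trailing whitespace/newlines.
+fn read_secret(path: &Path) -> std::io::Result<String> {
+    Ok(std::fs::read_to_string(path)?.trim_end().to_owned())
+}
+```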
diff --git a/docs/handbook/ofborg/contributing.md b/docs/handbook/ofborg/contributing.md
new file mode 100644
index 0000000000..17d41ace76
--- /dev/null
+++ b/docs/handbook/ofborg/contributing.md
@@ -0,0 +1,326 @@
+# Tickborg — Contributing Guide
+
+## Getting Started
+
+### Prerequisites
+
+- **Rust** (latest stable) — via `rustup` or Nix
+- **RabbitMQ** — local instance for integration testing
+- **Git** — recent version with submodule support
+- **Nix** (optional) — provides a reproducible dev environment
+
+### Quick Setup with Nix
+
+```bash
+# Enter the development shell
+nix develop
+
+# This provides: cargo, rustc, clippy, rustfmt, pkg-config, openssl
+```
+
+### Manual Setup
+
+```bash
+# Install Rust
+curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh
+
+# Install system dependencies (Debian/Ubuntu)
+sudo apt install pkg-config libssl-dev
+
+# Install system dependencies (Fedora)
+sudo dnf install pkg-config openssl-devel
+
+# Install RabbitMQ (for integration testing)
+sudo apt install rabbitmq-server
+sudo systemctl start rabbitmq-server
+```
+
+---
+
+## Building
+
+```bash
+# Debug build (fast compilation)
+cargo build
+
+# Release build (optimized, includes debug symbols)
+cargo build --release
+
+# Build a specific binary
+cargo build --bin github-webhook-receiver
+```
+
+All 11 binaries are built from the `tickborg` crate. The workspace also
+includes `tickborg-simple-build` as a secondary crate.
+
+---
+
+## Running Tests
+
+```bash
+# Run all tests
+cargo test
+
+# Run tests for a specific module
+cargo test --lib commentparser
+
+# Run tests with output
+cargo test -- --nocapture
+
+# Run a specific test
+cargo test test_parse_build_command
+```
+
+---
+
+## Code Quality
+
+### Formatting
+
+```bash
+# Check formatting
+cargo fmt --all -- --check
+
+# Apply formatting
+cargo fmt --all
+```
+
+### Linting
+
+```bash
+# Run clippy with warnings as errors
+cargo clippy --all-targets --all-features -- -D warnings
+```
+
+Both checks run in CI. PRs with formatting or clippy violations will fail.
+
+---
+
+## Project Structure
+
+See [architecture.md](architecture.md) for the full module hierarchy.
+
+Key directories:
+
+| Directory | What goes here |
+|-----------|---------------|
+| `tickborg/src/bin/` | Binary entry points — one file per service |
+| `tickborg/src/tasks/` | Worker implementations |
+| `tickborg/src/message/` | AMQP message type definitions |
+| `tickborg/src/ghevent/` | GitHub webhook event types |
+| `tickborg/src/eval/` | Evaluation strategies |
+| `docs/handbook/ofborg/` | This documentation |
+
+---
+
+## Making Changes
+
+### Adding a New Worker
+
+1. Create the task implementation in `tickborg/src/tasks/`:
+
+```rust
+// tasks/myworker.rs
+pub struct MyWorker { /* ... */ }
+
+impl worker::SimpleWorker for MyWorker {
+ type J = MyMessageType;
+
+ async fn consumer(&mut self, job: &Self::J) -> worker::Actions {
+ // Process the job
+ vec![worker::Action::Ack]
+ }
+}
+```
+
+2. Create the binary entry point in `tickborg/src/bin/`:
+
+```rust
+// bin/my-worker.rs
+#[tokio::main]
+async fn main() {
+ tickborg::setup_log();
+ let cfg = tickborg::config::load();
+ // Connect to AMQP, declare queues, start consumer
+}
+```
+
+3. Add the binary to `tickborg/Cargo.toml`:
+
+```toml
+[[bin]]
+name = "my-worker"
+path = "src/bin/my-worker.rs"
+```
+
+4. Add any necessary config fields to `Config` in `config.rs`.
+
+5. Add the service to `service.nix` and `docker-compose.yml`.
+
+### Adding a New Message Type
+
+1. Create the message type in `tickborg/src/message/`:
+
+```rust
+// message/mymessage.rs
+#[derive(Serialize, Deserialize, Debug)]
+pub struct MyMessage {
+ pub field: String,
+}
+```
+
+2. Add the module to `message/mod.rs`:
+
+```rust
+pub mod mymessage;
+```
+
+### Adding a New GitHub Event Type
+
+1. Create the event type in `tickborg/src/ghevent/`:
+
+```rust
+// ghevent/myevent.rs
+#[derive(Deserialize, Debug)]
+pub struct MyEvent {
+ pub action: String,
+ pub repository: Repository,
+}
+```
+
+2. Add the module to `ghevent/mod.rs`.
+
+3. Add routing in the webhook receiver's `route_event` function.
+
+---
+
+## Testing Locally
+
+### With `build-faker`
+
+The `build-faker` binary simulates a builder without running actual builds:
+
+```bash
+# Terminal 1: Start RabbitMQ
+sudo systemctl start rabbitmq-server
+
+# Terminal 2: Start the webhook receiver
+CONFIG_PATH=example.config.json cargo run --bin github-webhook-receiver
+
+# Terminal 3: Start the build faker
+CONFIG_PATH=example.config.json cargo run --bin build-faker
+```
+
+### Sending Test Webhooks
+
+```bash
+# Compute HMAC signature
+BODY='{"action":"opened","pull_request":{...}}'
+SIG=$(echo -n "$BODY" | openssl dgst -sha256 -hmac "your-webhook-secret" | awk '{print $2}')
+
+# Send webhook
+curl -X POST http://localhost:8080/github-webhook \
+ -H "Content-Type: application/json" \
+ -H "X-GitHub-Event: pull_request" \
+ -H "X-Hub-Signature-256: sha256=$SIG" \
+ -d "$BODY"
+```
+
+---
+
+## Commit Messages
+
+Follow **Conventional Commits** format:
+
+```
+<type>(<scope>): <description>
+
+[optional body]
+
+[optional footer(s)]
+```
+
+### Types
+
+| Type | When to use |
+|------|-------------|
+| `feat` | New feature |
+| `fix` | Bug fix |
+| `docs` | Documentation changes |
+| `refactor` | Code change that neither fixes a bug nor adds a feature |
+| `test` | Adding or correcting tests |
+| `chore` | Maintenance tasks |
+| `ci` | CI/CD changes |
+
+### Scopes
+
+Use the sub-project or module name:
+
+```
+feat(meshmc): add block renderer
+fix(builder): handle timeout correctly
+docs(ofborg): add deployment guide
+ci(github): update workflow matrix
+```
+
+The evaluation system uses commit scopes to detect changed projects — see
+[evaluation-system.md](evaluation-system.md).
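+
+Extracting the scope from a conventional-commit subject line is simple string
+slicing — a minimal sketch, not the actual evaluation code:
+
+```rust
+/// Return the scope of a `type(scope): description` subject, if present.
+fn commit_scope(subject: &str) -> Option<&str> {
+    let open = subject.find('(')?;
+    let close = subject.find(')')?;
+    // Require the "):" delimiter so stray parentheses don't match.
+    if close > open && subject[close..].starts_with("):") {
+        Some(&subject[open + 1..close])
+    } else {
+        None
+    }
+}
+```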
+
+---
+
+## Pull Request Workflow
+
+1. **Fork & branch** — Create a feature branch from `main`.
+2. **Develop** — Make changes, run tests locally.
+3. **Push** — Push to your fork.
+4. **Open PR** — Target the `main` branch.
+5. **CI** — Tickborg automatically evaluates the PR:
+ - Detects changed projects
+ - Adds `project: <name>` labels
+ - Schedules builds on eligible platforms
+6. **Review** — Maintainers review the code and build results.
+7. **Merge** — Squash-merge into `main`.
+
+### Bot Commands
+
+Maintainers can use `@tickbot` commands on PRs:
+
+```
+@tickbot build meshmc Build meshmc on all platforms
+@tickbot build meshmc neozip Build multiple projects
+@tickbot test mnv Run tests for mnv
+@tickbot eval Re-run evaluation
+```
+
+---
+
+## Documentation
+
+Documentation lives in `docs/handbook/ofborg/`. When making changes to
+tickborg:
+
+- Update relevant docs if the change affects architecture or configuration.
+- Reference real struct names, function signatures, and module paths.
+- Include code snippets from the actual source.
+
+---
+
+## Release Process
+
+Releases are built via the Nix flake:
+
+```bash
+nix build .#tickborg
+```
+
+The output includes all 11 binaries in a single package. Deploy by updating
+the NixOS module's `package` option or rebuilding the Docker image.
+
+---
+
+## Getting Help
+
+- Read the [overview](overview.md) for a high-level understanding.
+- Check [architecture](architecture.md) for the module structure.
+- See [data-flow](data-flow.md) for end-to-end message tracing.
+- Review [configuration](configuration.md) for config file reference.
diff --git a/docs/handbook/ofborg/data-flow.md b/docs/handbook/ofborg/data-flow.md
new file mode 100644
index 0000000000..528974d0ce
--- /dev/null
+++ b/docs/handbook/ofborg/data-flow.md
@@ -0,0 +1,346 @@
+# Tickborg — Data Flow
+
+## Overview
+
+This document traces the complete path of messages through the tickborg system
+for the three primary event types: **pull request**, **comment command**, and
+**push event**.
+
+---
+
+## Pull Request Flow
+
+A PR opened against the monorepo triggers evaluation and automatic builds.
+
+### Step-by-Step
+
+```
+GitHub Webhook Receiver RabbitMQ
+─────── ───────────────── ────────
+POST /github-webhook ───► HMAC verify ──────────► github-events exchange
+ X-Hub-Signature-256 route by event type routing_key: pull_request.opened
+ X-GitHub-Event: pull_request
+```
+
+```
+RabbitMQ Evaluation Filter RabbitMQ
+──────── ───────────────── ────────
+mass-rebuild-check-inputs PR filter logic ───────► mass-rebuild-check-jobs
+ ◄── github-events - Repo eligible? (direct queue publish)
+ pull_request.* - Action interesting?
+ - PR open?
+```
+
+```
+RabbitMQ Mass Rebuilder RabbitMQ / GitHub
+──────── ────────────── ─────────────────
+mass-rebuild-check-jobs EvaluationWorker - Commit status: pending
+ OneEval: - Clone + merge PR
+ 1. Check PR state - Detect changed projects
+ 2. Clone repo - Generate labels
+ 3. Fetch PR - Commit status: success
+ 4. Merge - Publish BuildJob(s)
+ 5. Detect changes ──► build-jobs exchange (fanout)
+ 6. Run eval checks
+ 7. Tag PR labels ──► GitHub API: add labels
+```
+
+```
+RabbitMQ Builder RabbitMQ / GitHub
+──────── ─────── ─────────────────
+build-inputs-{id} BuildWorker - Check Run: in_progress
+ ◄── build-jobs 1. Clone repo - Publish log lines ──► logs exchange
+ 2. Checkout PR - Check Run: completed
+ 3. Detect build system - Publish BuildResult ──► build-results
+ 4. Build
+ 5. Test (if requested)
+```
+
+```
+RabbitMQ Comment Poster GitHub
+──────── ────────────── ──────
+build-results Format result ───────► PR comment with build summary
+ ◄── build-results as markdown
+```
+
+```
+RabbitMQ Log Collector Disk
+──────── ───────────── ────
+build-logs LogMessageCollector ────► /var/log/tickborg/builds/{id}.log
+ ◄── logs exchange
+ logs.*
+```
+
+### Sequence Diagram
+
+```
+GitHub ──► Webhook Receiver ──► [github-events]
+ │
+ pull_request.*
+ ▼
+ Evaluation Filter
+ │
+ ▼
+ [mass-rebuild-check-jobs]
+ │
+ ▼
+ Mass Rebuilder ──► GitHub (status + labels)
+ │
+ BuildJob × N
+ ▼
+ [build-jobs]
+ │
+ ▼
+ Builder ──► GitHub (check run)
+ / \
+ [logs] [build-results]
+ │ │
+ ▼ ▼
+ Log Collector Comment Poster ──► GitHub (PR comment)
+```
+
+---
+
+## Comment Command Flow
+
+A user posts `@tickbot build meshmc` on a PR.
+
+### Step-by-Step
+
+```
+GitHub Webhook Receiver RabbitMQ
+─────── ───────────────── ────────
+POST /github-webhook ───► HMAC verify ──────────► github-events exchange
+ X-GitHub-Event: route: issue_comment routing_key: issue_comment.created
+ issue_comment
+```
+
+```
+RabbitMQ Comment Filter RabbitMQ
+──────── ────────────── ────────
+comment-jobs GitHubCommentWorker build-jobs exchange
+ ◄── github-events 1. Ignore !Created
+ issue_comment.* 2. Parse @tickbot
+ 3. Extract instruction
+ 4. ACL check
+ 5. Produce BuildJob(s) ──► build-jobs (fanout)
+```
+
+The rest of the flow (builder → log collector → comment poster) is identical
+to the PR flow.
+
+### Comment Parser Detail
+
+```
+Input: "@tickbot build meshmc neozip"
+
+commentparser::parse()
+ ┌──────────────────────────────────────────┐
+ │ nom parser pipeline: │
+ │ 1. tag("@tickbot") │
+ │ 2. space1 │
+ │ 3. alt((tag("build"), tag("test"), │
+ │ tag("eval"))) │
+ │ 4. space1 │
+ │ 5. separated_list1(space1, alphanumeric1) │
+ └──────────────────────────────────────────┘
+
+Output: [Instruction::Build(["meshmc", "neozip"], Subset::Project)]
+```
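+
+The same input/output behaviour can be sketched without `nom`, using plain
+whitespace splitting (illustrative only — the real `commentparser` uses the
+nom pipeline shown above):
+
+```rust
+#[derive(Debug, PartialEq)]
+enum Subset {
+    Project,
+}
+
+#[derive(Debug, PartialEq)]
+enum Instruction {
+    Build(Vec<String>, Subset),
+}
+
+fn parse(comment: &str) -> Vec<Instruction> {
+    let mut words = comment.split_whitespace();
+    // Every command starts with the bot mention.
+    if words.next() != Some("@tickbot") {
+        return vec![];
+    }
+    match words.next() {
+        Some("build") => {
+            let projects: Vec<String> = words.map(str::to_owned).collect();
+            if projects.is_empty() {
+                vec![]
+            } else {
+                vec![Instruction::Build(projects, Subset::Project)]
+            }
+        }
+        _ => vec![], // "test"/"eval" omitted from this sketch
+    }
+}
+```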
+
+### Message Expansion
+
+A single comment can generate multiple AMQP messages:
+
+```
+@tickbot build meshmc
+ │
+ ▼
+ACL: user allowed on [x86_64-linux, aarch64-linux, x86_64-darwin]
+ │
+ ▼
+3 BuildJob messages:
+ ├── BuildJob { project: "meshmc", system: "x86_64-linux", ... }
+ ├── BuildJob { project: "meshmc", system: "aarch64-linux", ... }
+ └── BuildJob { project: "meshmc", system: "x86_64-darwin", ... }
+```
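+
+That fan-out is a straightforward map over the permitted architectures — a
+hedged sketch (field names follow the diagram above, not necessarily the real
+`BuildJob` struct):
+
+```rust
+#[derive(Debug, PartialEq)]
+struct BuildJob {
+    project: String,
+    system: String,
+}
+
+/// Expand one instruction into one BuildJob per permitted architecture.
+fn expand_build_jobs(project: &str, systems: &[&str]) -> Vec<BuildJob> {
+    systems
+        .iter()
+        .map(|system| BuildJob {
+            project: project.to_owned(),
+            system: (*system).to_owned(),
+        })
+        .collect()
+}
+```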
+
+---
+
+## Push Event Flow
+
+A push to a tracked branch (e.g., `main`).
+
+### Step-by-Step
+
+```
+GitHub Webhook Receiver RabbitMQ
+─────── ───────────────── ────────
+POST /github-webhook ───► HMAC verify ──────────► github-events exchange
+ X-GitHub-Event: push route: push routing_key: push.push
+```
+
+```
+RabbitMQ Push Filter RabbitMQ / External
+──────── ─────────── ─────────────────
+push-jobs PushFilterWorker
+ ◄── github-events 1. Skip tags
+ push.* 2. Skip deletes
+ 3. Skip zero-SHA
+ 4. Check branch name
+ 5. Trigger rebuild ──► (future: deployment hooks)
+```
+
+### Push Event Guards
+
+```rust
+impl worker::SimpleWorker for PushFilterWorker {
+    type J = ghevent::PushEvent;
+
+    async fn consumer(&mut self, job: &Self::J) -> worker::Actions {
+        // Skip tags
+        if job.is_tag() {
+            return vec![worker::Action::Ack];
+        }
+
+        // Skip branch deletions
+        if job.is_delete() {
+            return vec![worker::Action::Ack];
+        }
+
+        // Skip zero-SHA (orphan push)
+        if job.is_zero_sha() {
+            return vec![worker::Action::Ack];
+        }
+
+        // Only process pushes to the main branch
+        if job.branch() != Some("main") {
+            return vec![worker::Action::Ack];
+        }
+
+        // Process the push event, then acknowledge it.
+        vec![worker::Action::Ack]
+    }
+}
+```
+
+---
+
+## Statistics Flow
+
+All services emit `EventMessage` events to the stats exchange.
+
+```
+Any Service
+ │
+ ├── worker::Action::Publish ──► [stats] exchange (fanout)
+ │ │
+ │ ▼
+ │ stats-events queue
+ │ │
+ │ ▼
+ │ StatCollectorWorker
+ │ │
+ └── Metrics: ▼
+ - JobReceived MetricCollector
+ - JobDecodeSuccess │
+ - JobDecodeFailure ▼
+ - BuildStarted HTTP endpoint (:9090)
+ - BuildCompleted /metrics
+ - EvalStarted
+ - EvalCompleted
+```
+
+### `SysEvents` Trait
+
+```rust
+// stats.rs
+pub trait SysEvents: Send {
+ fn notify(&mut self, event: Event)
+ -> impl Future<Output = ()>;
+}
+```
+
+Every worker is generic over `E: SysEvents`, allowing stats collection
+to be plugged in or replaced with a no-op.
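
The swap can be sketched in a few lines. The `Event` variants and sink types below are illustrative stand-ins, not the real `stats.rs` definitions:

```rust
use std::future::Future;

// Illustrative stand-in for the real Event enum in stats.rs.
#[derive(Debug, PartialEq)]
enum Event {
    JobReceived,
    JobDecodeSuccess,
}

trait SysEvents: Send {
    fn notify(&mut self, event: Event) -> impl Future<Output = ()>;
}

// No-op sink: satisfies the trait without touching AMQP.
struct NullEvents;

impl SysEvents for NullEvents {
    fn notify(&mut self, _event: Event) -> impl Future<Output = ()> {
        std::future::ready(())
    }
}

// Recording sink: captures events synchronously, handy in unit tests.
struct VecEvents(Vec<Event>);

impl SysEvents for VecEvents {
    fn notify(&mut self, event: Event) -> impl Future<Output = ()> {
        self.0.push(event);
        std::future::ready(())
    }
}
```

Because workers only see `E: SysEvents`, either sink can be injected at construction time with no other code changes.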
+
+---
+
+## Log Collection Flow
+
+Build logs are streamed in real time via the `logs` exchange.

+
+```
+Builder (BuildWorker)
+ │
+ │ During build execution, for each output line:
+ │
+ ├── BuildLogStart { /* ... */ } ──► [logs] routing_key: logs.{attempt_id}
+ ├── BuildLogMsg { line: "..." } ──► [logs] routing_key: logs.{attempt_id}
+ ├── BuildLogMsg { line: "..." } ──► [logs] routing_key: logs.{attempt_id}
+ └── BuildLogMsg { line: "..." } ──► [logs] routing_key: logs.{attempt_id}
+```
+
+```
+RabbitMQ Log Collector Disk
+──────── ───────────── ────
+build-logs LogMessageCollector
+ ◄── logs matches by attempt_id
+ logs.* writes to file:
+ {log_storage_path}/{attempt_id}.log
+```
+
+### `LogFrom` Enum
+
+```rust
+pub enum LogFrom {
+ Worker(BuildLogMsg),
+ Start(BuildLogStart),
+}
+```
+
+The collector distinguishes between log start (creates the file with metadata
+header) and log lines (appends to the file).
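
That dispatch can be sketched as follows; the field names here are assumptions for illustration, not the exact message schema:

```rust
// Hypothetical field layout; the real structs live in the message module.
struct BuildLogStart {
    attempt_id: String,
    identity: String,
}

struct BuildLogMsg {
    output: String,
}

enum LogFrom {
    Start(BuildLogStart),
    Worker(BuildLogMsg),
}

// Start events become a metadata header; Worker events become appended lines.
fn render(event: &LogFrom) -> String {
    match event {
        LogFrom::Start(s) => {
            format!("# attempt {} (builder: {})\n", s.attempt_id, s.identity)
        }
        LogFrom::Worker(m) => format!("{}\n", m.output),
    }
}
```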
+
+---
+
+## Message Format Summary
+
+All messages are JSON-serialized via `serde_json`. Key message types and their
+flows:
+
+| Message Type | Producer | Consumer | Exchange |
+|-------------|----------|----------|----------|
+| `PullRequestEvent` | Webhook Receiver | Evaluation Filter | `github-events` |
+| `IssueComment` | Webhook Receiver | Comment Filter | `github-events` |
+| `PushEvent` | Webhook Receiver | Push Filter | `github-events` |
+| `EvaluationJob` | Eval Filter / Comment Filter | Mass Rebuilder | _(direct queue)_ |
+| `BuildJob` | Mass Rebuilder / Comment Filter | Builder | `build-jobs` |
+| `BuildResult` | Builder | Comment Poster, Stats | `build-results` |
+| `BuildLogMsg` | Builder | Log Collector | `logs` |
+| `EventMessage` | Any service | Stats Collector | `stats` |
+
+---
+
+## Failure Modes and Recovery
+
+### Transient Failures
+
+| Failure | Recovery Mechanism |
+|---------|-------------------|
+| GitHub API 401 (expired token) | `NackRequeue` → retry after token refresh |
+| GitHub API 5xx | `NackRequeue` → retry |
+| RabbitMQ connection lost | `lapin` reconnect / systemd restart |
+| Build timeout | `BuildStatus::TimedOut` → report to GitHub |
+
+### Permanent Failures
+
+| Failure | Handling |
+|---------|----------|
+| Invalid message JSON | `Ack` (discard) + log error |
+| PR force-pushed (SHA gone) | `Ack` (skip) — `MissingSha` |
+| GitHub API 4xx (not 401/422) | `Ack` + add `tickborg-internal-error` label |
+| Merge conflict | Report failure status to GitHub, `Ack` |
+
+### Dead Letter Behavior
+
+Messages `NackDump`'d (rejected without requeue) are discarded unless a
+dead-letter exchange is configured in RabbitMQ. This is used for permanently
+invalid messages that should not be retried.
diff --git a/docs/handbook/ofborg/deployment.md b/docs/handbook/ofborg/deployment.md
new file mode 100644
index 0000000000..4a9497b0c3
--- /dev/null
+++ b/docs/handbook/ofborg/deployment.md
@@ -0,0 +1,413 @@
+# Tickborg — Deployment
+
+## Overview
+
+Tickborg can be deployed via **NixOS modules**, **Docker Compose**, or manual
+systemd units. The preferred method is the NixOS module defined in
+`service.nix`, which orchestrates all eight binaries as individual systemd
+services.
+
+---
+
+## Key Files
+
+| File | Purpose |
+|------|---------|
+| `service.nix` | NixOS module — systemd services |
+| `docker-compose.yml` | Full-stack Docker Compose |
+| `flake.nix` | Nix flake — package + dev shell |
+| `example.config.json` | Reference configuration file |
+
+---
+
+## NixOS Deployment
+
+### Module Structure (`service.nix`)
+
+```nix
+{ config, pkgs, lib, ... }:
+let
+ cfg = config.services.tickborg;
+ tickborg = cfg.package;
+in
+{
+ options.services.tickborg = {
+ enable = lib.mkEnableOption "Enable tickborg CI services";
+
+ package = lib.mkOption {
+ type = lib.types.package;
+ description = "The tickborg package to use";
+ };
+
+ configFile = lib.mkOption {
+ type = lib.types.path;
+ description = "Path to the tickborg config.json";
+ };
+
+ logConfig = lib.mkOption {
+ type = lib.types.str;
+ default = "info";
+ description = "RUST_LOG filter string";
+ };
+
+ services = {
+ github-webhook-receiver = lib.mkEnableOption "webhook receiver";
+ evaluation-filter = lib.mkEnableOption "evaluation filter";
+ mass-rebuilder = lib.mkEnableOption "mass rebuilder (evaluation)";
+ builder = lib.mkEnableOption "build executor";
+ github-comment-filter = lib.mkEnableOption "comment filter";
+ github-comment-poster = lib.mkEnableOption "comment poster";
+ log-message-collector = lib.mkEnableOption "log collector";
+ stats = lib.mkEnableOption "stats collector";
+ };
+ };
+}
+```
+
+### Per-Service Configuration
+
+Each service is toggled independently. A common template generates systemd
+units:
+
+```nix
+commonServiceConfig = binary: {
+ description = "tickborg ${binary}";
+ wantedBy = [ "multi-user.target" ];
+ after = [ "network-online.target" "rabbitmq.service" ];
+ wants = [ "network-online.target" ];
+
+ environment = {
+ RUST_LOG = cfg.logConfig;
+ RUST_LOG_JSON = "1";
+ CONFIG_PATH = toString cfg.configFile;
+ };
+
+ serviceConfig = {
+ ExecStart = "${tickborg}/bin/${binary}";
+ Restart = "always";
+ RestartSec = "10s";
+ DynamicUser = true;
+
+ # Hardening
+ NoNewPrivileges = true;
+ ProtectSystem = "strict";
+ ProtectHome = true;
+ PrivateTmp = true;
+ PrivateDevices = true;
+ ProtectKernelTunables = true;
+ ProtectKernelModules = true;
+ ProtectKernelLogs = true;
+ ProtectControlGroups = true;
+ RestrictNamespaces = true;
+ LockPersonality = true;
+ MemoryDenyWriteExecute = true;
+ RestrictRealtime = true;
+ SystemCallFilter = [ "@system-service" "~@mount" ];
+ };
+};
+```
+
+### Applying the Module
+
+```nix
+# In your NixOS configuration.nix or flake:
+{
+ imports = [ ./service.nix ];
+
+ services.tickborg = {
+ enable = true;
+ package = tickborg-pkg;
+ configFile = /etc/tickborg/config.json;
+ logConfig = "info,tickborg=debug";
+
+ services = {
+ github-webhook-receiver = true;
+ evaluation-filter = true;
+ mass-rebuilder = true;
+ builder = true;
+ github-comment-filter = true;
+ github-comment-poster = true;
+ log-message-collector = true;
+ stats = true;
+ };
+ };
+}
+```
+
+### Service Management
+
+```bash
+# View all tickborg services
+systemctl list-units 'tickborg-*'
+
+# Restart a single service
+systemctl restart tickborg-builder
+
+# View logs
+journalctl -u tickborg-builder -f
+
+# Structured JSON logs (when RUST_LOG_JSON=1)
+journalctl -u tickborg-builder -o cat | jq .
+```
+
+---
+
+## Docker Compose Deployment
+
+### `docker-compose.yml`
+
+```yaml
+services:
+ rabbitmq:
+ image: rabbitmq:3-management
+ ports:
+ - "5672:5672"
+ - "15672:15672"
+ environment:
+ RABBITMQ_DEFAULT_USER: tickborg
+ RABBITMQ_DEFAULT_PASS: tickborg
+ volumes:
+ - rabbitmq-data:/var/lib/rabbitmq
+
+ webhook-receiver:
+ build: .
+ command: github-webhook-receiver
+ ports:
+ - "8080:8080"
+ environment:
+ CONFIG_PATH: /config/config.json
+ RUST_LOG: info
+ volumes:
+ - ./config:/config:ro
+ depends_on:
+ - rabbitmq
+
+ evaluation-filter:
+ build: .
+ command: evaluation-filter
+ environment:
+ CONFIG_PATH: /config/config.json
+ RUST_LOG: info
+ volumes:
+ - ./config:/config:ro
+ depends_on:
+ - rabbitmq
+
+ mass-rebuilder:
+ build: .
+ command: mass-rebuilder
+ environment:
+ CONFIG_PATH: /config/config.json
+ RUST_LOG: info
+ volumes:
+ - ./config:/config:ro
+ - checkout-cache:/var/cache/tickborg
+ depends_on:
+ - rabbitmq
+
+ builder:
+ build: .
+ command: builder
+ environment:
+ CONFIG_PATH: /config/config.json
+ RUST_LOG: info
+ volumes:
+ - ./config:/config:ro
+ - checkout-cache:/var/cache/tickborg
+ depends_on:
+ - rabbitmq
+
+ comment-filter:
+ build: .
+ command: github-comment-filter
+ environment:
+ CONFIG_PATH: /config/config.json
+ RUST_LOG: info
+ volumes:
+ - ./config:/config:ro
+ depends_on:
+ - rabbitmq
+
+ comment-poster:
+ build: .
+ command: github-comment-poster
+ environment:
+ CONFIG_PATH: /config/config.json
+ RUST_LOG: info
+ volumes:
+ - ./config:/config:ro
+ depends_on:
+ - rabbitmq
+
+ log-collector:
+ build: .
+ command: log-message-collector
+ environment:
+ CONFIG_PATH: /config/config.json
+ RUST_LOG: info
+ volumes:
+ - ./config:/config:ro
+ - log-data:/var/log/tickborg
+ depends_on:
+ - rabbitmq
+
+ stats:
+ build: .
+ command: stats
+ ports:
+ - "9090:9090"
+ environment:
+ CONFIG_PATH: /config/config.json
+ RUST_LOG: info
+ volumes:
+ - ./config:/config:ro
+ depends_on:
+ - rabbitmq
+
+volumes:
+ rabbitmq-data:
+ checkout-cache:
+ log-data:
+```
+
+### Running
+
+```bash
+# Start all services
+docker compose up -d
+
+# View webhook receiver logs
+docker compose logs -f webhook-receiver
+
+# Scale builders
+docker compose up -d --scale builder=3
+
+# Stop everything
+docker compose down
+```
+
+---
+
+## Nix Flake
+
+### `flake.nix` Outputs
+
+```nix
+{
+  outputs = { self, nixpkgs, ... }:
+    let
+      pkgs = nixpkgs.legacyPackages.x86_64-linux;
+    in
+    {
+      packages.x86_64-linux.default = /* tickborg cargo build */ ;
+      packages.x86_64-linux.tickborg = self.packages.x86_64-linux.default;
+
+      devShells.x86_64-linux.default = pkgs.mkShell {
+        nativeBuildInputs = with pkgs; [
+          cargo
+          rustc
+          clippy
+          rustfmt
+          pkg-config
+          openssl
+        ];
+        RUST_SRC_PATH = "${pkgs.rust.packages.stable.rustPlatform.rustLibSrc}";
+      };
+
+      nixosModules.default = import ./service.nix;
+    };
+}
+
+### Building with Nix
+
+```bash
+# Build the package
+nix build
+
+# Enter dev shell
+nix develop
+
+# Run directly
+nix run .#tickborg -- github-webhook-receiver
+```
+
+---
+
+## Environment Variables
+
+| Variable | Default | Description |
+|----------|---------|-------------|
+| `CONFIG_PATH` | `./config.json` | Path to configuration file |
+| `RUST_LOG` | `info` | tracing filter directive |
+| `RUST_LOG_JSON` | (unset) | Set to `1` for JSON-formatted logs |
+
+---
+
+## Reverse Proxy
+
+The webhook receiver requires an HTTPS endpoint exposed to GitHub. Typical
+setup with nginx:
+
+```nginx
+server {
+ listen 443 ssl;
+ server_name ci.example.com;
+
+ ssl_certificate /etc/letsencrypt/live/ci.example.com/fullchain.pem;
+ ssl_certificate_key /etc/letsencrypt/live/ci.example.com/privkey.pem;
+
+ location /github-webhook {
+ proxy_pass http://127.0.0.1:8080;
+ proxy_set_header Host $host;
+ proxy_set_header X-Real-IP $remote_addr;
+ proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
+ proxy_set_header X-Forwarded-Proto $scheme;
+
+ # GitHub sends large payloads
+ client_max_body_size 25m;
+ }
+
+ location /logs/ {
+ proxy_pass http://127.0.0.1:8081/;
+ }
+}
+```
+
+---
+
+## RabbitMQ Setup
+
+### Required Configuration
+
+```bash
+# Create vhost
+rabbitmqctl add_vhost tickborg
+
+# Create user
+rabbitmqctl add_user tickborg <password>
+
+# Grant permissions
+rabbitmqctl set_permissions -p tickborg tickborg ".*" ".*" ".*"
+```
+
+### Management UI
+
+Available at `http://localhost:15672` when using Docker Compose. Useful for
+monitoring queue depths and consumer counts.
+
+---
+
+## Health Checks
+
+Monitor these indicators:
+
+| Check | Healthy | Problem |
+|-------|---------|---------|
+| Queue depth `mass-rebuild-check-inputs` | < 50 | Evaluation filter slow/down |
+| Queue depth `build-inputs-*` | < 20 | Builder slow/down |
+| Consumer count per queue | ≥ 1 | No consumers (service down) |
+| `stats` HTTP endpoint | 200 OK | Stats collector down |
+| Webhook receiver `/health` | 200 OK | Webhook receiver down |
+
+### Automatic Restarts
+
+Services configured with `Restart = "always"` are restarted automatically on
+crash. The 10-second `RestartSec` prevents tight restart loops on persistent
+failures.
diff --git a/docs/handbook/ofborg/evaluation-system.md b/docs/handbook/ofborg/evaluation-system.md
new file mode 100644
index 0000000000..73d6898c30
--- /dev/null
+++ b/docs/handbook/ofborg/evaluation-system.md
@@ -0,0 +1,602 @@
+# Tickborg — Evaluation System
+
+## Overview
+
+The evaluation system determines **which sub-projects changed** in a pull
+request and schedules builds accordingly. It replaces the original ofborg's
+Nix expression evaluation with a monorepo-aware strategy that inspects changed
+files, commit messages, and PR metadata.
+
+---
+
+## Key Source Files
+
+| File | Purpose |
+|------|---------|
+| `tickborg/src/tasks/evaluate.rs` | `EvaluationWorker`, `OneEval` — orchestrates eval |
+| `tickborg/src/tasks/eval/mod.rs` | `EvaluationStrategy` trait, `EvaluationComplete` |
+| `tickborg/src/tasks/eval/monorepo.rs` | `MonorepoStrategy` — Project Tick specific |
+| `tickborg/src/tasks/evaluationfilter.rs` | `EvaluationFilterWorker` — PR event gating |
+| `tickborg/src/bin/evaluation-filter.rs` | Evaluation filter binary |
+| `tickborg/src/bin/mass-rebuilder.rs` | Mass rebuilder binary (runs evaluations) |
+| `tickborg/src/tagger.rs` | `ProjectTagger` — PR label generation |
+| `tickborg/src/evalchecker.rs` | `EvalChecker` — generic command runner |
+| `tickborg/src/buildtool.rs` | `detect_changed_projects()`, `find_project()` |
+
+---
+
+## Stage 1: Evaluation Filter
+
+The evaluation filter is the gateway that decides whether a PR event warrants
+full evaluation.
+
+### `EvaluationFilterWorker`
+
+```rust
+// tasks/evaluationfilter.rs
+pub struct EvaluationFilterWorker {
+ acl: acl::Acl,
+}
+
+impl worker::SimpleWorker for EvaluationFilterWorker {
+ type J = ghevent::PullRequestEvent;
+
+ async fn consumer(&mut self, job: &ghevent::PullRequestEvent) -> worker::Actions {
+ // Check 1: Is the repo eligible?
+ if !self.acl.is_repo_eligible(&job.repository.full_name) {
+ return vec![worker::Action::Ack];
+ }
+
+ // Check 2: Is the PR open?
+ if job.pull_request.state != ghevent::PullRequestState::Open {
+ return vec![worker::Action::Ack];
+ }
+
+ // Check 3: Is the action interesting?
+ let interesting = match job.action {
+ PullRequestAction::Opened => true,
+ PullRequestAction::Synchronize => true,
+ PullRequestAction::Reopened => true,
+ PullRequestAction::Edited => {
+ if let Some(ref changes) = job.changes {
+ changes.base.is_some() // base branch changed
+ } else {
+ false
+ }
+ }
+ _ => false,
+ };
+
+ if !interesting {
+ return vec![worker::Action::Ack];
+ }
+
+ // Produce an EvaluationJob
+ let msg = evaluationjob::EvaluationJob {
+ repo: Repo { /* ... */ },
+ pr: Pr { /* ... */ },
+ };
+
+ vec![
+ worker::publish_serde_action(
+ None, Some("mass-rebuild-check-jobs".to_owned()), &msg
+ ),
+ worker::Action::Ack,
+ ]
+ }
+}
+```
+
+### Filtering Rules
+
+| PR Action | Result |
+|-----------|--------|
+| `Opened` | Evaluate |
+| `Synchronize` (new commits pushed) | Evaluate |
+| `Reopened` | Evaluate |
+| `Edited` with base branch change | Evaluate |
+| `Edited` without base change | Skip |
+| `Closed` | Skip |
+| Any unknown action | Skip |
+
+### AMQP Flow
+
+```
+mass-rebuild-check-inputs (queue)
+ ← github-events (exchange), routing: pull_request.*
+ → EvaluationFilterWorker
+ → mass-rebuild-check-jobs (queue, direct publish)
+```
+
+---
+
+## Stage 2: The Evaluation Worker
+
+### `EvaluationWorker`
+
+```rust
+// tasks/evaluate.rs
+pub struct EvaluationWorker<E> {
+ cloner: checkout::CachedCloner,
+ github_vend: tokio::sync::RwLock<GithubAppVendingMachine>,
+ acl: Acl,
+ identity: String,
+ events: E,
+}
+```
+
+The `EvaluationWorker` implements `SimpleWorker` and orchestrates the full
+evaluation pipeline.
+
+### Message Decoding
+
+```rust
+impl<E: stats::SysEvents + 'static> worker::SimpleWorker for EvaluationWorker<E> {
+ type J = evaluationjob::EvaluationJob;
+
+ async fn msg_to_job(&mut self, _: &str, _: &Option<String>, body: &[u8])
+ -> Result<Self::J, String>
+ {
+ self.events.notify(Event::JobReceived).await;
+ match evaluationjob::from(body) {
+ Ok(job) => {
+ self.events.notify(Event::JobDecodeSuccess).await;
+ Ok(job)
+ }
+            Err(err) => {
+                self.events.notify(Event::JobDecodeFailure).await;
+                Err(format!("Failed to decode message: {err}"))
+            }
+ }
+ }
+}
+```
+
+### Per-Job Evaluation (`OneEval`)
+
+```rust
+struct OneEval<'a, E> {
+ client_app: &'a hubcaps::Github,
+ repo: hubcaps::repositories::Repository,
+ acl: &'a Acl,
+ events: &'a mut E,
+ identity: &'a str,
+ cloner: &'a checkout::CachedCloner,
+ job: &'a evaluationjob::EvaluationJob,
+}
+```
+
+### Evaluation Pipeline
+
+The `evaluate_job` method executes these steps:
+
+#### 1. Check if PR is closed
+
+```rust
+match issue_ref.get().await {
+ Ok(iss) => {
+ if iss.state == "closed" {
+ self.events.notify(Event::IssueAlreadyClosed).await;
+ return Ok(self.actions().skip(job));
+ }
+ // ...
+ }
+}
+```
+
+#### 2. Determine auto-schedule architectures
+
+```rust
+if issue_is_wip(&iss) {
+ auto_schedule_build_archs = vec![];
+} else {
+ auto_schedule_build_archs = self.acl.build_job_architectures_for_user_repo(
+ &iss.user.login, &job.repo.full_name,
+ );
+}
+```
+
+WIP PRs get no automatic builds. The architecture list depends on whether the
+user is trusted (7 platforms) or not (3 primary platforms).
+
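The WIP predicate itself is not shown above; a plausible sketch (the title prefixes and label name are assumptions, not the real `issue_is_wip()` from `tasks/evaluate.rs`):

```rust
// Hypothetical WIP detector: checks common title markers and a "wip" label.
fn issue_is_wip(title: &str, labels: &[&str]) -> bool {
    let t = title.to_lowercase();
    t.starts_with("wip")
        || t.contains("[wip]")
        || t.starts_with("draft:")
        || labels.contains(&"wip")
}
```
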
+#### 3. Create the evaluation strategy
+
+```rust
+let mut evaluation_strategy = eval::MonorepoStrategy::new(job, &issue_ref);
+```
+
+#### 4. Set commit status
+
+```rust
+let mut overall_status = CommitStatus::new(
+ repo.statuses(),
+ job.pr.head_sha.clone(),
+ format!("{prefix}-eval"),
+ "Starting".to_owned(),
+ None,
+);
+overall_status.set_with_description(
+ "Starting", hubcaps::statuses::State::Pending
+).await?;
+```
+
+#### 5. Pre-clone actions
+
+```rust
+evaluation_strategy.pre_clone().await?;
+```
+
+#### 6. Clone and checkout
+
+```rust
+let project = self.cloner.project(&job.repo.full_name, job.repo.clone_url.clone());
+let co = project.clone_for("mr-est".to_string(), self.identity.to_string())?;
+```
+
+#### 7. Checkout target branch, fetch PR, merge
+
+```rust
+evaluation_strategy.on_target_branch(&co_path, &mut overall_status).await?;
+co.fetch_pr(job.pr.number)?;
+evaluation_strategy.after_fetch(&co)?;
+co.merge_commit(OsStr::new("pr"))?;
+evaluation_strategy.after_merge(&mut overall_status).await?;
+```
+
+#### 8. Run evaluation checks
+
+```rust
+let checks = evaluation_strategy.evaluation_checks();
+// Execute each check and update commit status
+```
+
+#### 9. Complete evaluation
+
+```rust
+let eval_complete = evaluation_strategy.all_evaluations_passed(
+ &mut overall_status
+).await?;
+```
+
+### Error Handling
+
+```rust
+async fn worker_actions(&mut self) -> worker::Actions {
+ let eval_result = match self.evaluate_job().await {
+ Ok(v) => Ok(v),
+ Err(eval_error) => match eval_error {
+ EvalWorkerError::EvalError(eval::Error::Fail(msg)) =>
+ Err(self.update_status(msg, None, State::Failure).await),
+ EvalWorkerError::EvalError(eval::Error::CommitStatusWrite(e)) =>
+ Err(Err(e)),
+ EvalWorkerError::CommitStatusWrite(e) =>
+ Err(Err(e)),
+ },
+ };
+
+ match eval_result {
+ Ok(eval_actions) => {
+ // Remove tickborg-internal-error label
+ update_labels(&issue_ref, &[], &["tickborg-internal-error".into()]).await;
+ eval_actions
+ }
+ Err(Ok(())) => {
+ // Error, but PR updated successfully
+ self.actions().skip(self.job)
+ }
+ Err(Err(CommitStatusError::ExpiredCreds(_))) => {
+ self.actions().retry_later(self.job) // NackRequeue
+ }
+ Err(Err(CommitStatusError::MissingSha(_))) => {
+ self.actions().skip(self.job) // Ack (force pushed)
+ }
+ Err(Err(CommitStatusError::InternalError(_))) => {
+ self.actions().retry_later(self.job) // NackRequeue
+ }
+ Err(Err(CommitStatusError::Error(_))) => {
+ // Add tickborg-internal-error label
+ update_labels(&issue_ref, &["tickborg-internal-error".into()], &[]).await;
+ self.actions().skip(self.job)
+ }
+ }
+}
+```
+
+---
+
+## The `EvaluationStrategy` Trait
+
+```rust
+// tasks/eval/mod.rs
+pub trait EvaluationStrategy {
+ fn pre_clone(&mut self)
+ -> impl Future<Output = StepResult<()>>;
+
+ fn on_target_branch(&mut self, co: &Path, status: &mut CommitStatus)
+ -> impl Future<Output = StepResult<()>>;
+
+ fn after_fetch(&mut self, co: &CachedProjectCo)
+ -> StepResult<()>;
+
+ fn after_merge(&mut self, status: &mut CommitStatus)
+ -> impl Future<Output = StepResult<()>>;
+
+ fn evaluation_checks(&self) -> Vec<EvalChecker>;
+
+ fn all_evaluations_passed(&mut self, status: &mut CommitStatus)
+ -> impl Future<Output = StepResult<EvaluationComplete>>;
+}
+
+pub type StepResult<T> = Result<T, Error>;
+
+#[derive(Default)]
+pub struct EvaluationComplete {
+ pub builds: Vec<BuildJob>,
+}
+
+#[derive(Debug)]
+pub enum Error {
+ CommitStatusWrite(CommitStatusError),
+ Fail(String),
+}
+```
+
+---
+
+## The `MonorepoStrategy`
+
+### Title-Based Label Detection
+
+```rust
+// tasks/eval/monorepo.rs
+const TITLE_LABELS: [(&str, &str); 12] = [
+ ("meshmc", "project: meshmc"),
+ ("mnv", "project: mnv"),
+ ("neozip", "project: neozip"),
+ ("cmark", "project: cmark"),
+ ("cgit", "project: cgit"),
+ ("json4cpp", "project: json4cpp"),
+ ("tomlplusplus", "project: tomlplusplus"),
+ ("corebinutils", "project: corebinutils"),
+ ("forgewrapper", "project: forgewrapper"),
+ ("genqrcode", "project: genqrcode"),
+ ("darwin", "platform: macos"),
+ ("windows", "platform: windows"),
+];
+
+fn label_from_title(title: &str) -> Vec<String> {
+ let title_lower = title.to_lowercase();
+ TITLE_LABELS.iter()
+ .filter(|(word, _)| {
+ let re = Regex::new(&format!("\\b{word}\\b")).unwrap();
+ re.is_match(&title_lower)
+ })
+ .map(|(_, label)| (*label).into())
+ .collect()
+}
+```
+
+This uses a word-boundary regex (`\b`) so keywords only match as whole words
+(e.g., "cmark" does not match inside "libcmark").
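
The same whole-word rule can be approximated with the standard library alone (the production code uses the `regex` crate; unlike `\b`, this simplification also treats `_` as a separator):

```rust
// Stdlib-only approximation of \bword\b matching against a lowercased title.
fn contains_word(title: &str, word: &str) -> bool {
    title
        .to_lowercase()
        .split(|c: char| !c.is_ascii_alphanumeric())
        .any(|token| token == word)
}
```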
+
+### Commit Scope Parsing
+
+```rust
+fn parse_commit_scopes(messages: &[String]) -> Vec<String> {
+ let scope_re = Regex::new(r"^[a-z]+\(([^)]+)\)").unwrap();
+ let colon_re = Regex::new(r"^([a-z0-9_-]+):").unwrap();
+
+ let mut projects: Vec<String> = messages.iter()
+ .filter_map(|line| {
+ let trimmed = line.trim();
+ // Conventional Commits: "feat(meshmc): add block renderer"
+ if let Some(caps) = scope_re.captures(trimmed) {
+ Some(caps[1].to_string())
+ }
+ // Simple: "meshmc: fix crash"
+ else if let Some(caps) = colon_re.captures(trimmed) {
+ let candidate = caps[1].to_string();
+ if crate::buildtool::find_project(&candidate).is_some() {
+ Some(candidate)
+ } else {
+ None
+ }
+ } else {
+ None
+ }
+ })
+ .collect();
+
+ projects.sort();
+ projects.dedup();
+ projects
+}
+```
+
+This recognizes both Conventional Commits (`feat(meshmc): ...`) and simple
+scope prefixes (`meshmc: ...`).
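
A stdlib-only sketch of extracting one scope from one message line (the real code uses anchored regexes and `buildtool::find_project()`; the project list here is a stand-in):

```rust
// Extract a project scope from a commit message line, if present.
fn scope_of(line: &str) -> Option<String> {
    let known = ["meshmc", "mnv", "neozip"]; // stand-in for find_project()
    let trimmed = line.trim();

    // Conventional Commits: "feat(meshmc): add block renderer" -> "meshmc"
    if let Some(head) = trimmed.split(':').next() {
        if let (Some(open), Some(close)) = (head.find('('), head.find(')')) {
            if open < close {
                return Some(head[open + 1..close].to_string());
            }
        }
    }

    // Simple prefix: "meshmc: fix crash" -> "meshmc", but only if known
    let (head, _) = trimmed.split_once(':')?;
    known.contains(&head).then(|| head.to_string())
}
```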
+
+### File Change Detection
+
+The strategy uses `CachedProjectCo::files_changed_from_head()` to get the
+list of changed files, then passes them through
+`buildtool::detect_changed_projects()` which maps each file to its top-level
+directory and matches against known projects.
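
That mapping can be sketched as follows, assuming a fixed project list (the real list comes from `buildtool`):

```rust
use std::collections::BTreeSet;

// Map changed file paths to the set of known top-level projects they touch.
fn detect_changed_projects(files: &[&str]) -> Vec<String> {
    let known = ["meshmc", "mnv", "neozip"]; // stand-in for the real list
    let mut hits = BTreeSet::new();
    for file in files {
        if let Some((top, _)) = file.split_once('/') {
            if known.contains(&top) {
                hits.insert(top.to_string());
            }
        }
    }
    hits.into_iter().collect() // sorted, deduplicated
}
```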
+
+---
+
+## The `EvalChecker`
+
+```rust
+// evalchecker.rs
+pub struct EvalChecker {
+ name: String,
+ command: String,
+ args: Vec<String>,
+}
+
+impl EvalChecker {
+ pub fn new(name: &str, command: &str, args: Vec<String>) -> EvalChecker;
+ pub fn name(&self) -> &str;
+ pub fn execute(&self, path: &Path) -> Result<File, File>;
+ pub fn cli_cmd(&self) -> String;
+}
+```
+
+`EvalChecker` is a generic command execution wrapper. It runs a command in the
+checkout directory and returns `Ok(File)` on success, `Err(File)` on failure.
+The `File` contains captured stdout + stderr.
+
+```rust
+pub fn execute(&self, path: &Path) -> Result<File, File> {
+ let output = Command::new(&self.command)
+ .args(&self.args)
+ .current_dir(path)
+ .output();
+
+ match output {
+ Ok(result) => {
+ // Write stdout + stderr to temp file
+ if result.status.success() {
+ Ok(file)
+ } else {
+ Err(file)
+ }
+ }
+ Err(e) => {
+ // Write error message to temp file
+ Err(file)
+ }
+ }
+}
+```
+
+---
+
+## The `ProjectTagger`
+
+```rust
+// tagger.rs
+pub struct ProjectTagger {
+ selected: Vec<String>,
+}
+
+impl ProjectTagger {
+ pub fn new() -> Self;
+
+ pub fn analyze_changes(&mut self, changed_files: &[String]) {
+ let projects = detect_changed_projects(changed_files);
+ for project in projects {
+ self.selected.push(format!("project: {project}"));
+ }
+
+ // Cross-cutting labels
+ let has_ci = changed_files.iter().any(|f|
+ f.starts_with(".github/") || f.starts_with("ci/")
+ );
+ let has_docs = changed_files.iter().any(|f|
+ f.starts_with("docs/") || f.ends_with(".md")
+ );
+ let has_root = changed_files.iter().any(|f|
+ !f.contains('/') && !f.ends_with(".md")
+ );
+
+ if has_ci { self.selected.push("scope: ci".into()); }
+ if has_docs { self.selected.push("scope: docs".into()); }
+ if has_root { self.selected.push("scope: root".into()); }
+ }
+
+ pub fn tags_to_add(&self) -> Vec<String>;
+ pub fn tags_to_remove(&self) -> Vec<String>;
+}
+```
+
+### Label Examples
+
+| Changed Files | Generated Labels |
+|--------------|------------------|
+| `meshmc/CMakeLists.txt` | `project: meshmc` |
+| `mnv/src/main.c` | `project: mnv` |
+| `.github/workflows/ci.yml` | `scope: ci` |
+| `README.md` | `scope: docs` |
+| `flake.nix` | `scope: root` |
+
+---
+
+## Commit Status Updates
+
+Throughout evaluation, the commit status is updated to reflect progress:
+
+```
+Starting → Cloning project → Checking out target → Fetching PR →
+Merging → Running checks → Evaluation complete
+```
+
+Or on failure:
+
+```
+Starting → ... → Merge failed (Failure)
+Starting → ... → Check 'xyz' failed (Failure)
+```
+
+The commit status context includes a prefix determined dynamically:
+
+```rust
+let prefix = get_prefix(repo.statuses(), &job.pr.head_sha).await?;
+let context = format!("{prefix}-eval");
+```
+
+---
+
+## Auto-Scheduled vs. Manual Builds
+
+### Auto-Scheduled (from PR evaluation)
+
+When a PR is evaluated, builds are automatically scheduled for the detected
+changed projects. The set of architectures depends on the ACL:
+
+- **Trusted users**: All 7 platforms
+- **Untrusted users**: 3 primary platforms (x86_64 Linux/macOS/Windows)
+- **WIP PRs**: No automatic builds
+
+### Manual (from `@tickbot` commands)
+
+Users can manually trigger builds or re-evaluations:
+
+```
+@tickbot build meshmc → Build meshmc on all eligible platforms
+@tickbot eval → Re-run evaluation
+@tickbot test mnv → Run tests for mnv
+@tickbot build meshmc neozip → Build multiple projects
+```
+
+These are handled by the `github-comment-filter`, not the evaluation system.
+
+---
+
+## Label Management
+
+The evaluation system manages PR labels via the GitHub API:
+
+```rust
+async fn update_labels(
+ issue_ref: &IssueRef,
+ add: &[String],
+ remove: &[String],
+) {
+    // Add labels (results are discarded; labeling is best-effort)
+    for label in add {
+        let _ = issue_ref.labels().add(vec![label.clone()]).await;
+    }
+    // Remove labels one at a time
+    for label in remove {
+        let _ = issue_ref.labels().remove(label).await;
+    }
+}
+```
+
+Labels managed:
+- `project: <name>` — Which sub-projects are affected
+- `scope: ci` / `scope: docs` / `scope: root` — Cross-cutting changes
+- `platform: macos` / `platform: windows` — Platform-specific changes
+- `tickborg-internal-error` — Added when tickborg encounters an internal error
diff --git a/docs/handbook/ofborg/github-integration.md b/docs/handbook/ofborg/github-integration.md
new file mode 100644
index 0000000000..4f33f77466
--- /dev/null
+++ b/docs/handbook/ofborg/github-integration.md
@@ -0,0 +1,603 @@
+# Tickborg — GitHub Integration
+
+## Overview
+
+Tickborg communicates with GitHub through the **GitHub App** model. A custom
+fork of the `hubcaps` crate provides the Rust API client. Integration covers
+webhook reception, commit statuses, check runs, issue/PR manipulation, and
+comment posting.
+
+---
+
+## GitHub App Authentication
+
+### `GithubAppVendingMachine`
+
+```rust
+// config.rs
+pub struct GithubAppVendingMachine {
+ conf: GithubAppConfig,
+ current_token: Option<String>,
+ token_expiry: Option<Instant>,
+}
+```
+
+Handles two-stage GitHub App auth:
+
+1. **JWT**: Signed with the App's private RSA key, valid for up to 10 minutes.
+2. **Installation token**: Obtained with the JWT, valid for ~1 hour.
+
+### Token Lifecycle
+
+```rust
+impl GithubAppVendingMachine {
+ pub fn new(conf: GithubAppConfig) -> Self {
+ GithubAppVendingMachine {
+ conf,
+ current_token: None,
+ token_expiry: None,
+ }
+ }
+
+ fn is_token_fresh(&self) -> bool {
+ match self.token_expiry {
+ Some(exp) => Instant::now() < exp,
+ None => false,
+ }
+ }
+
+ pub async fn get_token(&mut self) -> Result<String, String> {
+ if self.is_token_fresh() {
+ return Ok(self.current_token.clone().unwrap());
+ }
+ // Generate a fresh JWT
+ let jwt = self.make_jwt()?;
+ // Exchange JWT for installation token
+ let client = hubcaps::Github::new(
+ "tickborg".to_owned(),
+ hubcaps::Credentials::Jwt(hubcaps::JwtToken::new(jwt)),
+ )?;
+ let installation = client.app()
+ .find_repo_installation(&self.conf.owner, &self.conf.repo)
+ .await?;
+ let token_result = client.app()
+ .create_installation_token(installation.id)
+ .await?;
+
+ self.current_token = Some(token_result.token.clone());
+        // Installation tokens last ~1 hour; expire ours 5 minutes
+        // early to avoid using a token at the edge of its lifetime
+        self.token_expiry = Some(
+            Instant::now() + Duration::from_secs(60 * 60) - Duration::from_secs(5 * 60)
+        );
+
+ Ok(token_result.token)
+ }
+
+ pub async fn github(&mut self) -> Result<hubcaps::Github, String> {
+ let token = self.get_token().await?;
+ Ok(hubcaps::Github::new(
+ "tickborg".to_owned(),
+ hubcaps::Credentials::Token(token),
+ )?)
+ }
+}
+```
+
+### JWT Generation
+
+```rust
+fn make_jwt(&self) -> Result<String, String> {
+ let now = SystemTime::now()
+ .duration_since(UNIX_EPOCH).unwrap()
+ .as_secs() as i64;
+
+ let payload = json!({
+ "iat": now - 60, // 1 minute in the past (clock skew)
+ "exp": now + (10 * 60), // 10 minutes from now
+ "iss": self.conf.app_id,
+ });
+
+ let key = EncodingKey::from_rsa_pem(
+ &std::fs::read(&self.conf.private_key_file)?
+ )?;
+
+ encode(&Header::new(Algorithm::RS256), &payload, &key)
+ .map_err(|e| format!("JWT encoding error: {}", e))
+}
+```
+
+### `GithubAppConfig`
+
+```rust
+#[derive(Deserialize, Debug)]
+pub struct GithubAppConfig {
+ pub app_id: u64,
+ pub private_key_file: PathBuf,
+ pub owner: String,
+ pub repo: String,
+ pub installation_id: Option<u64>,
+}
+```
+
+---
+
+## GitHub App Configuration
+
+The `GithubAppConfig` is nested under the top-level `Config`:
+
+```json
+{
+ "github_app": {
+ "app_id": 12345,
+ "private_key_file": "/etc/tickborg/private-key.pem",
+ "owner": "project-tick",
+ "repo": "Project-Tick",
+ "installation_id": 67890
+ }
+}
+```
+
+---
+
+## Commit Statuses
+
+### `CommitStatus`
+
+```rust
+// commitstatus.rs
+pub struct CommitStatus {
+ api: hubcaps::statuses::Statuses,
+ sha: String,
+ context: String,
+ description: String,
+ url: Option<String>,
+}
+```
+
+### State Machine
+
+```rust
+impl CommitStatus {
+ pub fn new(
+ statuses: hubcaps::statuses::Statuses,
+ sha: String,
+ context: String,
+ description: String,
+ url: Option<String>,
+ ) -> Self;
+
+ pub async fn set_url(&mut self, url: Option<String>);
+
+ pub async fn set_with_description(
+ &mut self,
+ description: &str,
+ state: hubcaps::statuses::State,
+ ) -> Result<(), CommitStatusError> {
+ self.description = description.to_owned();
+ self.send_status(state).await
+ }
+
+ pub async fn set(
+ &mut self,
+ state: hubcaps::statuses::State,
+ ) -> Result<(), CommitStatusError>;
+
+ async fn send_status(
+ &self,
+ state: hubcaps::statuses::State,
+ ) -> Result<(), CommitStatusError> {
+ let options = hubcaps::statuses::StatusOptions::builder(state)
+ .description(&self.description)
+ .context(&self.context);
+
+ let options = match &self.url {
+ Some(u) => options.target_url(u).build(),
+ None => options.build(),
+ };
+
+ self.api.create(&self.sha, &options)
+ .await
+ .map_err(|e| CommitStatusError::from(e))?;
+
+ Ok(())
+ }
+}
+```
+
+### Error Classification
+
+```rust
+#[derive(Debug)]
+pub enum CommitStatusError {
+ ExpiredCreds(String), // GitHub App token expired
+ MissingSha(String), // Commit was force-pushed away
+ InternalError(String), // 5xx from GitHub API
+ Error(String), // Other errors
+}
+```
+
+Error mapping from HTTP response:
+
+| HTTP Status | CommitStatusError Variant | Worker Action |
+|------------|--------------------------|---------------|
+| 401 | `ExpiredCreds` | `NackRequeue` (retry) |
+| 422 ("No commit found") | `MissingSha` | `Ack` (skip) |
+| 500-599 | `InternalError` | `NackRequeue` (retry) |
+| Other | `Error` | `Ack` + add error label |
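+
+The table above can be sketched as a small mapping function. This is an
+illustrative, self-contained sketch (the `Action` enum here is a stand-in for
+`worker::Action`), not the actual implementation:
+
+```rust
+#[derive(Debug)]
+pub enum CommitStatusError {
+    ExpiredCreds(String),
+    MissingSha(String),
+    InternalError(String),
+    Error(String),
+}
+
+#[derive(Debug, PartialEq)]
+pub enum Action {
+    Ack,
+    NackRequeue,
+}
+
+/// Map a status-posting error to the worker's queue action.
+pub fn error_to_action(err: &CommitStatusError) -> Action {
+    match err {
+        // Transient: retry once fresh credentials / a healthy API return.
+        CommitStatusError::ExpiredCreds(_)
+        | CommitStatusError::InternalError(_) => Action::NackRequeue,
+        // Permanent: the commit is gone, or the error is not retryable.
+        CommitStatusError::MissingSha(_)
+        | CommitStatusError::Error(_) => Action::Ack,
+    }
+}
+```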
+
+---
+
+## Check Runs
+
+### `job_to_check()`
+
+Creates a Check Run when a build job is started:
+
+```rust
+pub async fn job_to_check(
+ github: &hubcaps::Github,
+ repo_full_name: &str,
+ job: &BuildJob,
+ runner_identity: &str,
+) -> Result<(), String> {
+ let (owner, repo) = parse_repo_name(repo_full_name);
+ let checks = github.repo(owner, repo).check_runs();
+
+ checks.create(&hubcaps::checks::CheckRunOptions {
+ name: format!("build-{}-{}", job.project, job.system),
+ head_sha: job.pr.head_sha.clone(),
+ status: Some(hubcaps::checks::CheckRunStatus::InProgress),
+ external_id: Some(format!("{runner_identity}")),
+ started_at: Some(Utc::now()),
+ output: Some(hubcaps::checks::Output {
+ title: format!("Building {} on {}", job.project, job.system),
+ summary: format!("Runner: {runner_identity}"),
+ text: None,
+ annotations: vec![],
+ }),
+ ..Default::default()
+ }).await.map_err(|e| format!("Failed to create check run: {e}"))?;
+
+ Ok(())
+}
+```
+
+### `result_to_check()`
+
+Updates a Check Run when a build completes:
+
+```rust
+pub async fn result_to_check(
+ github: &hubcaps::Github,
+ repo_full_name: &str,
+ result: &BuildResult,
+) -> Result<(), String> {
+ let (owner, repo) = parse_repo_name(repo_full_name);
+ let checks = github.repo(owner, repo).check_runs();
+
+    let conclusion = match &result.status {
+        BuildStatus::Success => hubcaps::checks::Conclusion::Success,
+        BuildStatus::Failure => hubcaps::checks::Conclusion::Failure,
+        BuildStatus::TimedOut => hubcaps::checks::Conclusion::TimedOut,
+        BuildStatus::Skipped => hubcaps::checks::Conclusion::Skipped,
+        BuildStatus::HashMismatch => hubcaps::checks::Conclusion::Failure,
+        BuildStatus::UnexpectedError { .. } => hubcaps::checks::Conclusion::Failure,
+    };
+
+ // Find and update the existing check run
+ // ...
+}
+```
+
+---
+
+## GitHub Event Types (ghevent)
+
+### Common Types
+
+```rust
+// ghevent/common.rs
+#[derive(Deserialize, Debug)]
+pub struct GenericWebhook {
+ pub repository: Repository,
+}
+
+#[derive(Deserialize, Debug)]
+pub struct Repository {
+ pub owner: User,
+ pub name: String,
+ pub full_name: String,
+ pub clone_url: String,
+}
+
+#[derive(Deserialize, Debug)]
+pub struct User {
+ pub login: String,
+ pub id: u64,
+}
+
+#[derive(Deserialize, Debug)]
+pub struct Comment {
+ pub id: u64,
+ pub body: String,
+ pub user: User,
+}
+
+#[derive(Deserialize, Debug)]
+pub struct Issue {
+ pub number: u64,
+ pub title: String,
+ pub state: String,
+ pub user: User,
+ pub labels: Vec<Label>,
+}
+```
+
+### Pull Request Events
+
+```rust
+// ghevent/pullrequestevent.rs
+#[derive(Deserialize, Debug)]
+pub struct PullRequestEvent {
+ pub action: PullRequestAction,
+ pub number: u64,
+ pub pull_request: PullRequest,
+ pub repository: Repository,
+ pub changes: Option<PullRequestChanges>,
+}
+
+#[derive(Deserialize, Debug)]
+#[serde(rename_all = "snake_case")]
+pub enum PullRequestAction {
+ Opened,
+ Closed,
+ Synchronize,
+ Reopened,
+ Edited,
+ Labeled,
+ Unlabeled,
+ ReviewRequested,
+ Assigned,
+ Unassigned,
+ ReadyForReview,
+}
+
+#[derive(Deserialize, Debug)]
+pub enum PullRequestState {
+ #[serde(rename = "open")]
+ Open,
+ #[serde(rename = "closed")]
+ Closed,
+}
+
+#[derive(Deserialize, Debug)]
+pub struct PullRequest {
+ pub id: u64,
+ pub number: u64,
+ pub state: PullRequestState,
+ pub title: String,
+ pub head: PullRequestRef,
+ pub base: PullRequestRef,
+ pub user: User,
+ pub merged: Option<bool>,
+ pub mergeable: Option<bool>,
+}
+
+#[derive(Deserialize, Debug)]
+pub struct PullRequestRef {
+ pub sha: String,
+ #[serde(rename = "ref")]
+ pub git_ref: String,
+ pub repo: Repository,
+}
+```
+
+### Issue Comment Events
+
+```rust
+// ghevent/issuecomment.rs
+#[derive(Deserialize, Debug)]
+pub struct IssueComment {
+ pub action: IssueCommentAction,
+ pub comment: Comment,
+ pub issue: Issue,
+ pub repository: Repository,
+}
+
+#[derive(Deserialize, Debug)]
+#[serde(rename_all = "snake_case")]
+pub enum IssueCommentAction {
+ Created,
+ Edited,
+ Deleted,
+}
+```
+
+### Push Events
+
+```rust
+// ghevent/pushevent.rs
+#[derive(Deserialize, Debug)]
+pub struct PushEvent {
+ #[serde(rename = "ref")]
+ pub git_ref: String,
+ pub after: String,
+ pub before: String,
+ pub deleted: bool,
+ pub forced: bool,
+ pub created: bool,
+ pub pusher: Pusher,
+ pub head_commit: Option<HeadCommit>,
+ pub repository: Repository,
+ pub commits: Vec<HeadCommit>,
+}
+
+impl PushEvent {
+ pub fn branch(&self) -> Option<&str>;
+ pub fn is_tag(&self) -> bool;
+ pub fn is_delete(&self) -> bool;
+ pub fn is_zero_sha(&self) -> bool;
+}
+```
+
+---
+
+## Comment Posting
+
+### `GitHubCommentPoster`
+
+```rust
+// tasks/githubcommentposter.rs
+pub struct GitHubCommentPoster {
+ github_vend: tokio::sync::RwLock<GithubAppVendingMachine>,
+}
+
+pub trait PostableEvent: Send {
+ fn owner(&self) -> &str;
+ fn repo(&self) -> &str;
+ fn number(&self) -> u64;
+}
+```
+
+### Posting a Result
+
+```rust
+impl worker::SimpleWorker for GitHubCommentPoster {
+ type J = buildresult::BuildResult;
+
+ async fn consumer(&mut self, job: &buildresult::BuildResult) -> worker::Actions {
+ let github = self.github_vend.write().await.github().await;
+ let repo = github.repo(&job.repo.owner, &job.repo.name);
+ let issue = repo.issue(job.pr.number);
+
+        // Build a markdown summary
+        let comment_body = format_build_result(job);
+
+        issue.comments().create(&hubcaps::comments::CommentOptions {
+            body: comment_body,
+        }).await.ok(); // best-effort: a failed comment post is not retried
+
+ vec![worker::Action::Ack]
+ }
+}
+```
+
+---
+
+## Comment Filtering
+
+### `GitHubCommentWorker`
+
+```rust
+// tasks/githubcommentfilter.rs
+pub struct GitHubCommentWorker {
+ acl: Acl,
+ github_vend: tokio::sync::RwLock<GithubAppVendingMachine>,
+}
+```
+
+The comment filter processes incoming `IssueComment` events:
+
+1. **Ignore non-creation actions** — Only `Created` matters.
+2. **Parse command** — `commentparser::parse()` extracts `@tickbot` instructions.
+3. **ACL check** — Verifies the commenter is allowed to issue the command.
+4. **Generate build/eval jobs** — Creates `BuildJob` or `EvaluationJob` messages.
+5. **Publish to AMQP** — Routes to the appropriate exchange.
+
+```rust
+async fn consumer(&mut self, job: &ghevent::IssueComment) -> worker::Actions {
+ if job.action != IssueCommentAction::Created {
+ return vec![worker::Action::Ack];
+ }
+
+ let instructions = commentparser::parse(&job.comment.body);
+ if instructions.is_empty() {
+ return vec![worker::Action::Ack];
+ }
+
+ let mut actions = Vec::new();
+
+ for instruction in instructions {
+ match instruction {
+ Instruction::Build(projects, subset) => {
+ // Verify ACL
+ let architectures = self.acl.build_job_architectures_for_user_repo(
+ &job.comment.user.login,
+ &job.repository.full_name,
+ );
+ // Create BuildJob per project × architecture
+ for project in projects {
+ for arch in &architectures {
+ let build_job = BuildJob { /* ... */ };
+ actions.push(worker::publish_serde_action(
+ Some("build-jobs".to_owned()),
+ None,
+ &build_job,
+ ));
+ }
+ }
+ }
+ Instruction::Eval => {
+ let eval_job = EvaluationJob { /* ... */ };
+ actions.push(worker::publish_serde_action(
+ None,
+ Some("mass-rebuild-check-jobs".to_owned()),
+ &eval_job,
+ ));
+ }
+ Instruction::Test(projects) => { /* ... */ }
+ }
+ }
+
+ actions.push(worker::Action::Ack);
+ actions
+}
+```
+
+---
+
+## The `hubcaps` Fork
+
+Tickborg uses a forked version of `hubcaps` from:
+
+```toml
+[dependencies]
+hubcaps = { git = "https://github.com/ofborg/hubcaps.git", rev = "0d7466e..." }
+```
+
+Key differences from upstream:
+- **Check Runs API support** — Full CRUD for GitHub Checks API
+- **GitHub App authentication** — JWT + installation token flow
+- **Async/await** — Full Tokio-based async API
+- **App API** — `find_repo_installation()`, `create_installation_token()`
+
+---
+
+## Webhook Signature Verification
+
+See [webhook-receiver.md](webhook-receiver.md) for the full HMAC-SHA256
+verification flow.
+
+```rust
+fn verify_signature(secret: &[u8], signature: &str, body: &[u8]) -> bool {
+ let sig_bytes = match hex::decode(signature.trim_start_matches("sha256=")) {
+ Ok(b) => b,
+ Err(_) => return false,
+ };
+
+ let mut mac = Hmac::<Sha256>::new_from_slice(secret).unwrap();
+ mac.update(body);
+ mac.verify_slice(&sig_bytes).is_ok()
+}
+```
+
+---
+
+## Rate Limiting
+
+The GitHub API has rate limits (5000 requests/hour for GitHub App installations).
+Tickborg mitigates this by:
+
+1. **Caching installation tokens** — Reused until 5 minutes before expiry.
+2. **Minimal API calls** — Only essential status updates and label operations.
+3. **Batching** — Label additions batched into single API calls where possible.
+4. **Backoff on 403** — When rate-limited, jobs are `NackRequeue`'d for retry.
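+
+The expiry check in point 1 can be sketched as a pure function. The names here
+(`CachedToken`, `is_usable`) are illustrative, not the real API:
+
+```rust
+use std::time::{Duration, SystemTime};
+
+pub struct CachedToken {
+    pub token: String,
+    pub expires_at: SystemTime,
+}
+
+impl CachedToken {
+    /// A cached installation token is reused only while more than five
+    /// minutes remain before its expiry.
+    pub fn is_usable(&self, now: SystemTime) -> bool {
+        match self.expires_at.duration_since(now) {
+            Ok(remaining) => remaining > Duration::from_secs(5 * 60),
+            Err(_) => false, // already expired
+        }
+    }
+}
+```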
diff --git a/docs/handbook/ofborg/message-system.md b/docs/handbook/ofborg/message-system.md
new file mode 100644
index 0000000000..197152737d
--- /dev/null
+++ b/docs/handbook/ofborg/message-system.md
@@ -0,0 +1,731 @@
+# Tickborg — Message System
+
+## Overview
+
+Tickborg's entire architecture is built on **AMQP 0-9-1** messaging via
+**RabbitMQ**. Every component is a standalone binary that communicates
+exclusively through message queues. There is no shared database, no direct
+RPC between services, and no in-memory coupling.
+
+This document covers:
+- The AMQP topology (exchanges, queues, bindings)
+- Message types and their serialization
+- Publishing and consuming patterns
+- The worker abstraction layer
+
+---
+
+## Exchanges
+
+Tickborg uses five RabbitMQ exchanges:
+
+### `github-events` (Topic Exchange)
+
+**Declared by:** `github-webhook-receiver`
+
+The primary ingestion exchange. All GitHub webhook payloads are published here
+with routing keys of the form `{event_type}.{owner}/{repo}`.
+
+```rust
+chan.declare_exchange(easyamqp::ExchangeConfig {
+ exchange: "github-events".to_owned(),
+ exchange_type: easyamqp::ExchangeType::Topic,
+ passive: false,
+ durable: true,
+ auto_delete: false,
+ no_wait: false,
+ internal: false,
+}).await?;
+```
+
+**Routing key patterns:**
+
+| Pattern | Example | Consumer |
+|---------|---------|----------|
+| `pull_request.*` | `pull_request.project-tick/Project-Tick` | evaluation-filter |
+| `issue_comment.*` | `issue_comment.project-tick/Project-Tick` | github-comment-filter |
+| `push.*` | `push.project-tick/Project-Tick` | push-filter |
+| `unknown.*` | `unknown.project-tick/Project-Tick` | (monitoring) |
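+
+The routing-key construction itself is a one-liner; this sketch (function name
+illustrative) shows the `{event_type}.{owner}/{repo}` scheme described above:
+
+```rust
+/// Build a github-events routing key from an event type and an
+/// "owner/repo" full name.
+pub fn routing_key(event_type: &str, full_name: &str) -> String {
+    format!("{event_type}.{full_name}")
+}
+```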
+
+### `build-jobs` (Fanout Exchange)
+
+**Declared by:** `github-comment-filter`, `builder`, `push-filter`
+
+Distributes build jobs to all connected builder instances. As a **fanout**
+exchange, every bound queue receives a copy of every message.
+
+```rust
+chan.declare_exchange(easyamqp::ExchangeConfig {
+ exchange: "build-jobs".to_owned(),
+ exchange_type: easyamqp::ExchangeType::Fanout,
+ passive: false,
+ durable: true,
+ auto_delete: false,
+ no_wait: false,
+ internal: false,
+}).await?;
+```
+
+### `build-results` (Fanout Exchange)
+
+**Declared by:** `github-comment-filter`, `github-comment-poster`, `push-filter`
+
+Collects build results (both "queued" notifications and "completed" results).
+The `github-comment-poster` consumes from this to create GitHub Check Runs.
+
+```rust
+chan.declare_exchange(easyamqp::ExchangeConfig {
+ exchange: "build-results".to_owned(),
+ exchange_type: easyamqp::ExchangeType::Fanout,
+ passive: false,
+ durable: true,
+ auto_delete: false,
+ no_wait: false,
+ internal: false,
+}).await?;
+```
+
+### `logs` (Topic Exchange)
+
+**Declared by:** `log-message-collector`
+
+Receives streaming build log messages from builders. The routing key encodes
+the repository and PR/push identifier.
+
+```rust
+chan.declare_exchange(easyamqp::ExchangeConfig {
+ exchange: "logs".to_owned(),
+ exchange_type: easyamqp::ExchangeType::Topic,
+ passive: false,
+ durable: true,
+ auto_delete: false,
+ no_wait: false,
+ internal: false,
+}).await?;
+```
+
+### `stats` (Fanout Exchange)
+
+**Declared by:** `stats`
+
+Receives operational metric events from all workers. The stats collector
+aggregates these into Prometheus-compatible metrics.
+
+```rust
+chan.declare_exchange(easyamqp::ExchangeConfig {
+ exchange: "stats".to_owned(),
+ exchange_type: easyamqp::ExchangeType::Fanout,
+ passive: false,
+ durable: true,
+ auto_delete: false,
+ no_wait: false,
+ internal: false,
+}).await?;
+```
+
+---
+
+## Queues
+
+### Durable Queues
+
+| Queue Name | Exchange | Routing Key | Consumer |
+|------------|----------|-------------|----------|
+| `build-inputs` | `github-events` | `issue_comment.*` | github-comment-filter |
+| `github-events-unknown` | `github-events` | `unknown.*` | (monitoring) |
+| `mass-rebuild-check-inputs` | `github-events` | `pull_request.*` | evaluation-filter |
+| `push-build-inputs` | `github-events` | `push.*` | push-filter |
+| `mass-rebuild-check-jobs` | (direct publish) | — | mass-rebuilder |
+| `build-inputs-x86_64-linux` | `build-jobs` | — | builder (x86_64-linux) |
+| `build-inputs-aarch64-linux` | `build-jobs` | — | builder (aarch64-linux) |
+| `build-inputs-x86_64-darwin` | `build-jobs` | — | builder (x86_64-darwin) |
+| `build-inputs-aarch64-darwin` | `build-jobs` | — | builder (aarch64-darwin) |
+| `build-inputs-x86_64-windows` | `build-jobs` | — | builder (x86_64-windows) |
+| `build-inputs-aarch64-windows` | `build-jobs` | — | builder (aarch64-windows) |
+| `build-inputs-x86_64-freebsd` | `build-jobs` | — | builder (x86_64-freebsd) |
+| `build-results` | `build-results` | — | github-comment-poster |
+| `stats-events` | `stats` | — | stats |
+
+### Ephemeral Queues
+
+| Queue Name | Exchange | Routing Key | Consumer |
+|------------|----------|-------------|----------|
+| `logs` | `logs` | `*.*` | log-message-collector |
+
+The `logs` queue is declared `durable: false, exclusive: true, auto_delete:
+true`. This means:
+- It only exists while the log collector is connected.
+- If the log collector disconnects, the queue is deleted.
+- Log messages published while no collector is connected are lost.
+- This is intentional: logs are not critical path data and the exchange itself
+ is durable.
+
+---
+
+## Message Types
+
+All messages are serialized as JSON using `serde_json`.
+
+### `EvaluationJob`
+
+**Published by:** evaluation-filter, github-comment-filter
+**Consumed by:** mass-rebuilder
+**Queue:** `mass-rebuild-check-jobs`
+
+```rust
+// message/evaluationjob.rs
+#[derive(Serialize, Deserialize, Debug)]
+pub struct EvaluationJob {
+ pub repo: Repo,
+ pub pr: Pr,
+}
+```
+
+Example JSON:
+
+```json
+{
+ "repo": {
+ "owner": "project-tick",
+ "name": "Project-Tick",
+ "full_name": "project-tick/Project-Tick",
+ "clone_url": "https://github.com/project-tick/Project-Tick.git"
+ },
+ "pr": {
+ "number": 42,
+ "head_sha": "abc123def456...",
+ "target_branch": "main"
+ }
+}
+```
+
+### `BuildJob`
+
+**Published by:** github-comment-filter, mass-rebuilder, push-filter
+**Consumed by:** builder
+**Queue:** `build-inputs-{system}`
+
+```rust
+// message/buildjob.rs
+#[derive(Serialize, Deserialize, Debug)]
+pub struct BuildJob {
+ pub repo: Repo,
+ pub pr: Pr,
+ pub subset: Option<Subset>,
+ pub attrs: Vec<String>,
+ pub request_id: String,
+ pub logs: Option<ExchangeQueue>,
+ pub statusreport: Option<ExchangeQueue>,
+ pub push: Option<PushTrigger>,
+}
+```
+
+The `logs` and `statusreport` fields are `Option<ExchangeQueue>` values, where
+`ExchangeQueue` is a tuple of `(Option<Exchange>, Option<RoutingKey>)` telling
+the builder where to send log messages and build results.
+
+Two constructors exist:
+
+```rust
+impl BuildJob {
+    // For PR-triggered builds
+    pub fn new(
+ repo: Repo, pr: Pr, subset: Subset, attrs: Vec<String>,
+ logs: Option<ExchangeQueue>, statusreport: Option<ExchangeQueue>,
+ request_id: String,
+ ) -> BuildJob;
+
+ // For push-triggered builds
+ pub fn new_push(
+ repo: Repo, push: PushTrigger, attrs: Vec<String>,
+ request_id: String,
+ ) -> BuildJob;
+
+ pub fn is_push(&self) -> bool;
+}
+```
+
+### `QueuedBuildJobs`
+
+**Published by:** github-comment-filter, push-filter
+**Consumed by:** github-comment-poster
+**Exchange/Queue:** `build-results`
+
+```rust
+#[derive(Serialize, Deserialize, Debug)]
+pub struct QueuedBuildJobs {
+ pub job: BuildJob,
+ pub architectures: Vec<String>,
+}
+```
+
+This message tells the comment poster that builds have been queued so it can
+create "Queued" check runs on GitHub.
+
+### `BuildResult`
+
+**Published by:** builder
+**Consumed by:** github-comment-poster, log-message-collector
+**Exchange/Queue:** `build-results`, `logs`
+
+```rust
+// message/buildresult.rs
+#[derive(Serialize, Deserialize, Debug)]
+pub enum BuildResult {
+ V1 {
+ tag: V1Tag,
+ repo: Repo,
+ pr: Pr,
+ system: String,
+ output: Vec<String>,
+ attempt_id: String,
+ request_id: String,
+ status: BuildStatus,
+ skipped_attrs: Option<Vec<String>>,
+ attempted_attrs: Option<Vec<String>>,
+ push: Option<PushTrigger>,
+ },
+ Legacy { /* backward compat */ },
+}
+```
+
+```rust
+#[derive(Serialize, Deserialize, Clone, Debug, PartialEq, Eq)]
+pub enum BuildStatus {
+ Skipped,
+ Success,
+ Failure,
+ TimedOut,
+ HashMismatch,
+ UnexpectedError { err: String },
+}
+```
+
+### `BuildLogMsg`
+
+**Published by:** builder
+**Consumed by:** log-message-collector
+**Exchange:** `logs`
+
+```rust
+// message/buildlogmsg.rs
+#[derive(Serialize, Deserialize, Debug, Clone)]
+pub struct BuildLogMsg {
+ pub system: String,
+ pub identity: String,
+ pub attempt_id: String,
+ pub line_number: u64,
+ pub output: String,
+}
+```
+
+### `BuildLogStart`
+
+**Published by:** builder
+**Consumed by:** log-message-collector
+**Exchange:** `logs`
+
+```rust
+#[derive(Serialize, Deserialize, Debug, Clone)]
+pub struct BuildLogStart {
+ pub system: String,
+ pub identity: String,
+ pub attempt_id: String,
+ pub attempted_attrs: Option<Vec<String>>,
+ pub skipped_attrs: Option<Vec<String>>,
+}
+```
+
+### `EventMessage`
+
+**Published by:** all workers (via `stats::RabbitMq`)
+**Consumed by:** stats
+**Exchange:** `stats`
+
+```rust
+// stats.rs
+#[derive(Serialize, Deserialize, Debug)]
+pub struct EventMessage {
+ pub sender: String,
+ pub events: Vec<Event>,
+}
+```
+
+---
+
+## Common Message Structures
+
+### `Repo`
+
+```rust
+// message/common.rs
+#[derive(Serialize, Deserialize, Debug, Clone)]
+pub struct Repo {
+ pub owner: String,
+ pub name: String,
+ pub full_name: String,
+ pub clone_url: String,
+}
+```
+
+### `Pr`
+
+```rust
+#[derive(Serialize, Deserialize, Debug, Clone, PartialEq, Eq)]
+pub struct Pr {
+ pub target_branch: Option<String>,
+ pub number: u64,
+ pub head_sha: String,
+}
+```
+
+For push-triggered builds, `pr.number` is set to `0` and `pr.head_sha`
+contains the push commit SHA.
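+
+Illustrated as a sketch (the constructor name `pr_for_push` is hypothetical;
+the field convention matches the description above):
+
+```rust
+pub struct Pr {
+    pub target_branch: Option<String>,
+    pub number: u64,
+    pub head_sha: String,
+}
+
+/// Synthesize a `Pr` for a push-triggered build: there is no real PR,
+/// so `number` is the sentinel value 0.
+pub fn pr_for_push(branch: &str, head_sha: &str) -> Pr {
+    Pr {
+        target_branch: Some(branch.to_owned()),
+        number: 0,
+        head_sha: head_sha.to_owned(),
+    }
+}
+```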
+
+### `PushTrigger`
+
+```rust
+#[derive(Serialize, Deserialize, Debug, Clone, PartialEq, Eq)]
+pub struct PushTrigger {
+ pub head_sha: String,
+ pub branch: String,
+ pub before_sha: Option<String>,
+}
+```
+
+---
+
+## Publishing Messages
+
+### The `publish_serde_action` Helper
+
+```rust
+// worker.rs
+pub fn publish_serde_action<T: Serialize + ?Sized>(
+ exchange: Option<String>,
+ routing_key: Option<String>,
+ msg: &T,
+) -> Action {
+ Action::Publish(Arc::new(QueueMsg {
+ exchange,
+ routing_key,
+ mandatory: false,
+ immediate: false,
+ content_type: Some("application/json".to_owned()),
+ content: serde_json::to_string(&msg).unwrap().into_bytes(),
+ }))
+}
+```
+
+This is the primary way workers produce outgoing messages. The message is
+serialized to JSON and wrapped in a `QueueMsg` which is then wrapped in an
+`Action::Publish`.
+
+### Message Delivery
+
+The `action_deliver` function in `easylapin.rs` handles all action types:
+
+```rust
+async fn action_deliver(
+ chan: &Channel, deliver: &Delivery, action: Action,
+) -> Result<(), lapin::Error> {
+ match action {
+ Action::Ack => {
+ chan.basic_ack(deliver.delivery_tag, BasicAckOptions::default()).await
+ }
+ Action::NackRequeue => {
+ chan.basic_nack(deliver.delivery_tag,
+ BasicNackOptions { requeue: true, ..Default::default() }).await
+ }
+ Action::NackDump => {
+ chan.basic_nack(deliver.delivery_tag,
+ BasicNackOptions::default()).await
+ }
+ Action::Publish(msg) => {
+ let exch = msg.exchange.as_deref().unwrap_or("");
+ let key = msg.routing_key.as_deref().unwrap_or("");
+
+ let mut props = BasicProperties::default()
+ .with_delivery_mode(2); // persistent
+
+ if let Some(s) = msg.content_type.as_deref() {
+ props = props.with_content_type(s.into());
+ }
+
+ chan.basic_publish(
+ exch.into(), key.into(),
+ BasicPublishOptions::default(),
+ &msg.content, props,
+ ).await?.await?;
+ Ok(())
+ }
+ }
+}
+```
+
+Key details:
+- **delivery_mode = 2**: All published messages are persistent.
+- The double `.await` on `basic_publish`: the first await sends the message,
+ the second awaits the publisher confirm from the broker.
+- When `exchange` is `None`, an empty string is used (the default exchange).
+- When `routing_key` is `None`, an empty string is used.
+
+---
+
+## Consuming Messages
+
+### Consumer Loop (SimpleWorker)
+
+```rust
+// easylapin.rs
+impl<'a, W: SimpleWorker + 'a> ConsumerExt<'a, W> for Channel {
+ async fn consume(self, mut worker: W, config: ConsumeConfig)
+ -> Result<Self::Handle, Self::Error>
+ {
+ let mut consumer = self.basic_consume(
+ config.queue.into(),
+ config.consumer_tag.into(),
+ BasicConsumeOptions::default(),
+ FieldTable::default(),
+ ).await?;
+
+ Ok(Box::pin(async move {
+            while let Some(Ok(deliver)) = consumer.next().await {
+                let content_type = deliver.properties.content_type()
+                    .as_ref().map(|ct| ct.as_str().to_owned());
+
+                let job = worker.msg_to_job(
+ deliver.routing_key.as_str(),
+ &content_type,
+ &deliver.data,
+ ).await.expect("worker unexpected message consumed");
+
+ for action in worker.consumer(&job).await {
+ action_deliver(&self, &deliver, action)
+ .await.expect("action deliver failure");
+ }
+ }
+ }))
+ }
+}
+```
+
+### Consumer Loop (SimpleNotifyWorker)
+
+```rust
+impl<'a, W: SimpleNotifyWorker + 'a + Send> ConsumerExt<'a, W> for NotifyChannel {
+ async fn consume(self, worker: W, config: ConsumeConfig)
+ -> Result<Self::Handle, Self::Error>
+ {
+ self.0.basic_qos(1, BasicQosOptions::default()).await?;
+
+ let mut consumer = self.0.basic_consume(/* ... */).await?;
+
+ Ok(Box::pin(async move {
+            while let Some(Ok(deliver)) = consumer.next().await {
+                let content_type = deliver.properties.content_type()
+                    .as_ref().map(|ct| ct.as_str().to_owned());
+
+                let receiver = ChannelNotificationReceiver {
+                    channel: self.0.clone(),
+                    deliver,
+                };
+
+ let job = worker.msg_to_job(
+ receiver.deliver.routing_key.as_str(),
+ &content_type,
+ &receiver.deliver.data,
+ ).expect("worker unexpected message consumed");
+
+ worker.consumer(job, Arc::new(receiver)).await;
+ }
+ }))
+ }
+}
+```
+
+### Prefetch (QoS)
+
+- **`WorkerChannel`** and **`NotifyChannel`** both set `basic_qos(1)`.
+ This means the broker will only deliver one unacknowledged message at a time
+ to each consumer. This provides fair dispatch when multiple instances consume
+ from the same queue.
+- **Raw `Channel`** has no prefetch limit set. This is used by the log
+ collector which benefits from prefetching many small messages.
+
+---
+
+## Message Routing Diagram
+
+```
+ github-events (Topic)
+ ┌───────────────────────────────────────────┐
+ │ │
+ │ issue_comment.* ──► build-inputs │
+ │ pull_request.* ──► mass-rebuild-check- │
+ │ inputs │
+ │ push.* ──► push-build-inputs │
+ │ unknown.* ──► github-events- │
+ │ unknown │
+ └───────────────────────────────────────────┘
+
+ build-jobs (Fanout)
+ ┌───────────────────────────────────────────┐
+ │ │
+ │ ──► build-inputs-x86_64-linux │
+ │ ──► build-inputs-aarch64-linux │
+ │ ──► build-inputs-x86_64-darwin │
+ │ ──► build-inputs-aarch64-darwin │
+ │ ──► build-inputs-x86_64-windows │
+ │ ──► build-inputs-aarch64-windows │
+ │ ──► build-inputs-x86_64-freebsd │
+ └───────────────────────────────────────────┘
+
+ build-results (Fanout)
+ ┌───────────────────────────────────────────┐
+ │ ──► build-results │
+ └───────────────────────────────────────────┘
+
+ logs (Topic)
+ ┌───────────────────────────────────────────┐
+ │ *.* ──► logs (ephemeral) │
+ └───────────────────────────────────────────┘
+
+ stats (Fanout)
+ ┌───────────────────────────────────────────┐
+ │ ──► stats-events │
+ └───────────────────────────────────────────┘
+```
+
+---
+
+## Direct Queue Publishing
+
+Some messages bypass exchanges and are published directly to queues:
+
+| Source | Target Queue | Method |
+|--------|-------------|--------|
+| evaluation-filter | `mass-rebuild-check-jobs` | `publish_serde_action(None, Some("mass-rebuild-check-jobs"))` |
+| github-comment-filter | `build-inputs-{system}` | `publish_serde_action(None, Some("build-inputs-x86_64-linux"))` |
+| push-filter | `build-inputs-{system}` | `publish_serde_action(None, Some("build-inputs-x86_64-linux"))` |
+
+When the exchange is `None` (empty string `""`), AMQP uses the **default
+exchange**, which routes messages directly to the queue named by the routing key.
+
+---
+
+## Message Acknowledgment Patterns
+
+### Typical Worker Flow
+
+```
+1. Receive message from queue
+2. Deserialize (msg_to_job)
+3. Process (consumer)
+4. Return [Action::Publish(...), Action::Publish(...), Action::Ack]
+5. All Publish actions are executed
+6. Final Ack removes the message from the queue
+```
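+
+Steps 4–6 can be sketched with a toy `Action` enum (a simplification of
+`worker::Action`; real publishes carry a full `QueueMsg`):
+
+```rust
+#[derive(Debug, PartialEq)]
+pub enum Action {
+    Publish(String), // simplified: payload only
+    Ack,
+}
+
+/// Turn a batch of outgoing payloads into the action list a worker
+/// returns: all publishes first, the final Ack last.
+pub fn actions_for(payloads: &[&str]) -> Vec<Action> {
+    let mut actions: Vec<Action> = payloads
+        .iter()
+        .map(|p| Action::Publish((*p).to_owned()))
+        .collect();
+    actions.push(Action::Ack); // only acked after all publishes are queued
+    actions
+}
+```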
+
+### Error Handling
+
+| Situation | Action | Effect |
+|-----------|--------|--------|
+| Job decoded, processed successfully | `Ack` | Message removed from queue |
+| Temporary error (e.g., expired creds) | `NackRequeue` | Message returned to queue for retry |
+| Permanent error (e.g., force-pushed) | `Ack` | Message discarded (no point retrying) |
+| Decode failure | `panic!` or `Err` | Consumer thread crashes (message stays in queue) |
+
+### Builder Flow (Notify Worker)
+
+```
+1. Receive message
+2. Deserialize (msg_to_job)
+3. Begin build
+4. notifier.tell(Publish(BuildLogStart)) → logs exchange
+5. For each line of build output:
+ notifier.tell(Publish(BuildLogMsg)) → logs exchange
+6. notifier.tell(Publish(BuildResult)) → build-results exchange
+7. notifier.tell(Ack) → acknowledge original message
+```
+
+---
+
+## Connection Management
+
+### Creating a Connection
+
+```rust
+// easylapin.rs
+pub async fn from_config(cfg: &RabbitMqConfig) -> Result<Connection, lapin::Error> {
+ let opts = ConnectionProperties::default()
+ .with_client_property("tickborg_version".into(), tickborg::VERSION.into());
+ Connection::connect(&cfg.as_uri()?, opts).await
+}
+```
+
+The connection URI is constructed from the config:
+
+```rust
+impl RabbitMqConfig {
+ pub fn as_uri(&self) -> Result<String, std::io::Error> {
+ let password = std::fs::read_to_string(&self.password_file)?;
+ Ok(format!(
+ "{}://{}:{}@{}/{}",
+ if self.ssl { "amqps" } else { "amqp" },
+ self.username, password, self.host,
+ self.virtualhost.clone().unwrap_or_else(|| "/".to_owned()),
+ ))
+ }
+}
+```
+
+### Channel Creation
+
+Each binary creates one or more channels from its connection:
+
+```rust
+let conn = easylapin::from_config(&cfg.rabbitmq).await?;
+let mut chan = conn.create_channel().await?;
+```
+
+The builder creates one channel per system architecture:
+
+```rust
+for system in &cfg.build.system {
+ handles.push(create_handle(&conn, &cfg, system.to_string()).await?);
+}
+// Each create_handle call does: conn.create_channel().await?
+```
+
+### Connection Lifecycle
+
+Connections are held for the lifetime of the process. When the main consumer
+future completes (all messages consumed or an error), the connection is dropped:
+
+```rust
+handle.await;
+drop(conn); // Close connection.
+info!("Closed the session... EOF");
+```
+
+---
+
+## Consumer Tags
+
+Each consumer is identified by a unique tag derived from the runner identity:
+
+```rust
+easyamqp::ConsumeConfig {
+ queue: queue_name.clone(),
+ consumer_tag: format!("{}-builder", cfg.whoami()),
+ // ...
+}
+```
+
+Where `whoami()` returns `"{identity}-{systems}"`, with multiple systems
+joined by commas:
+
+```rust
+impl Config {
+ pub fn whoami(&self) -> String {
+ format!("{}-{}", self.runner.identity, self.build.system.join(","))
+ }
+}
+```
+
+This ensures that consumer tags are unique across multiple instances and
+architectures.
diff --git a/docs/handbook/ofborg/overview.md b/docs/handbook/ofborg/overview.md
new file mode 100644
index 0000000000..51cc18cb83
--- /dev/null
+++ b/docs/handbook/ofborg/overview.md
@@ -0,0 +1,571 @@
+# Tickborg (ofborg) — Overview
+
+## What is Tickborg?
+
+Tickborg is the distributed Continuous Integration (CI) bot purpose-built for the
+**Project Tick monorepo**. It is a Rust-based system derived from the original
+[ofborg](https://github.com/NixOS/ofborg) — a CI system created for the NixOS
+project — and adapted for the multi-project, multi-language, multi-platform
+reality of Project Tick.
+
+Where the original ofborg was tightly coupled to Nix package evaluation, tickborg
+has been generalised to handle arbitrary build systems (CMake, Meson, Autotools,
+Cargo, Gradle, Make, and custom commands) while retaining the proven AMQP-based
+distributed worker architecture that made ofborg reliable at scale.
+
+The crate name remains **`tickborg`** in code, the workspace lives under
+`ofborg/` in the Project Tick tree, and the bot responds to the handle
+**`@tickbot`** in GitHub comments.
+
+---
+
+## High-Level Goals
+
+| Goal | How Tickborg achieves it |
+|------|--------------------------|
+| **Automated PR evaluation** | Every opened / synchronised PR is evaluated for which sub-projects changed and builds are scheduled automatically. |
+| **On-demand builds** | Maintainers comment `@tickbot build <attr>` or `@tickbot eval` on a PR to trigger builds or re-evaluations. |
+| **Push-triggered CI** | Direct pushes to protected branches (`main`, `staging`, etc.) are detected and build jobs are dispatched. |
+| **Multi-platform builds** | Builds can be fanned out to `x86_64-linux`, `aarch64-linux`, `x86_64-darwin`, `aarch64-darwin`, `x86_64-windows`, `aarch64-windows`, and `x86_64-freebsd`. |
+| **GitHub Check Runs** | Build results are reported back via the GitHub Checks API, giving inline status on every PR. |
+| **Build log collection** | Build output is streamed over AMQP to a central log collector and served via a log viewer web UI. |
+| **Prometheus metrics** | Operational statistics are published to RabbitMQ and exposed on a `/metrics`-compatible HTTP endpoint. |
+
+---
+
+## Design Principles
+
+### 1. Message-Oriented Architecture
+
+Every component communicates exclusively through **RabbitMQ (AMQP 0-9-1)**
+messages. There is no shared database, no direct RPC between services, and no
+in-memory coupling between workers. This means:
+
+- Each worker binary can be deployed, scaled, and restarted independently.
+- Work is durable — RabbitMQ queues are declared `durable: true` and messages
+ are published with `delivery_mode: 2` (persistent).
+- Load balancing is implicit: multiple builder instances consuming from the same
+ queue will each receive a fair share of jobs via `basic_qos(1)`.
+
+### 2. Worker Trait Abstraction
+
+All business logic is expressed through two traits:
+
+```rust
+// tickborg/src/worker.rs
+pub trait SimpleWorker: Send {
+ type J: Send;
+ fn consumer(&mut self, job: &Self::J) -> impl Future<Output = Actions>;
+ fn msg_to_job(
+ &mut self, method: &str, headers: &Option<String>, body: &[u8],
+ ) -> impl Future<Output = Result<Self::J, String>>;
+}
+```
+
+```rust
+// tickborg/src/notifyworker.rs
+#[async_trait]
+pub trait SimpleNotifyWorker {
+ type J;
+ async fn consumer(
+ &self, job: Self::J,
+ notifier: Arc<dyn NotificationReceiver + Send + Sync>,
+ );
+ fn msg_to_job(
+ &self, routing_key: &str, content_type: &Option<String>, body: &[u8],
+ ) -> Result<Self::J, String>;
+}
+```
+
+`SimpleWorker` is for purely functional message processors: receive a message,
+return a list of `Action`s. `SimpleNotifyWorker` is for long-running tasks (like
+builds) that need to stream intermediate results back during processing.
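+
+To make the split concrete, here is a minimal, synchronous sketch of the
+`SimpleWorker` shape. This is illustrative only: the real trait is async and
+its jobs are serde-decoded message types; `EchoWorker` and the trimmed
+`SimpleWorkerSketch` trait below are hypothetical.
+
+```rust
+// Simplified, synchronous sketch of the SimpleWorker pattern.
+#[derive(Debug, PartialEq)]
+enum Action {
+    Ack,
+    NackRequeue,
+}
+
+trait SimpleWorkerSketch {
+    type J;
+    // Decode a raw AMQP body into a typed job, or reject it.
+    fn msg_to_job(&mut self, body: &[u8]) -> Result<Self::J, String>;
+    // Process the job and return the actions to apply to the message.
+    fn consumer(&mut self, job: &Self::J) -> Vec<Action>;
+}
+
+struct EchoWorker;
+
+impl SimpleWorkerSketch for EchoWorker {
+    type J = String;
+
+    fn msg_to_job(&mut self, body: &[u8]) -> Result<String, String> {
+        String::from_utf8(body.to_vec()).map_err(|e| e.to_string())
+    }
+
+    fn consumer(&mut self, job: &String) -> Vec<Action> {
+        println!("processing {job}");
+        vec![Action::Ack]
+    }
+}
+```
+
+The real `SimpleNotifyWorker` differs mainly in that it receives a
+`NotificationReceiver` handle instead of returning actions, so it can emit
+results while the job is still running.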
+
+### 3. One Binary per Concern
+
+Each responsibility is compiled into its own binary target under
+`tickborg/src/bin/`:
+
+| Binary | Role |
+|--------|------|
+| `github-webhook-receiver` | HTTP server that validates GitHub webhook payloads, verifies HMAC-SHA256 signatures, and publishes them to the `github-events` exchange. |
+| `evaluation-filter` | Consumes `pull_request.*` events and decides whether a PR warrants evaluation. Publishes `EvaluationJob` to `mass-rebuild-check-jobs`. |
+| `github-comment-filter` | Consumes `issue_comment.*` events, parses `@tickbot` commands, and publishes `BuildJob` messages. |
+| `github-comment-poster` | Consumes `build-results` and creates GitHub Check Runs. |
+| `mass-rebuilder` | Performs full monorepo evaluation on a PR checkout: detects changed projects, schedules builds. |
+| `builder` | Executes actual builds using the configured build system (CMake, Cargo, etc.) and reports results. |
+| `push-filter` | Consumes `push.*` events and creates build jobs for pushes to tracked branches. |
+| `log-message-collector` | Collects streaming build log messages and writes them to disk. |
+| `logapi` | HTTP server that serves collected build logs via a REST API. |
+| `stats` | Collects stat events from RabbitMQ and exposes Prometheus metrics on port 9898. |
+| `build-faker` | Development/testing tool that publishes fake build jobs. |
+
+---
+
+## Key Data Structures
+
+### Repo
+
+```rust
+// tickborg/src/message/common.rs
+#[derive(Serialize, Deserialize, Debug, Clone)]
+pub struct Repo {
+ pub owner: String,
+ pub name: String,
+ pub full_name: String,
+ pub clone_url: String,
+}
+```
+
+### Pr
+
+```rust
+#[derive(Serialize, Deserialize, Debug, Clone, PartialEq, Eq)]
+pub struct Pr {
+ pub target_branch: Option<String>,
+ pub number: u64,
+ pub head_sha: String,
+}
+```
+
+### PushTrigger
+
+```rust
+#[derive(Serialize, Deserialize, Debug, Clone, PartialEq, Eq)]
+pub struct PushTrigger {
+ pub head_sha: String,
+ pub branch: String,
+ pub before_sha: Option<String>,
+}
+```
+
+### BuildJob
+
+```rust
+// tickborg/src/message/buildjob.rs
+#[derive(Serialize, Deserialize, Debug)]
+pub struct BuildJob {
+ pub repo: Repo,
+ pub pr: Pr,
+ pub subset: Option<Subset>,
+ pub attrs: Vec<String>,
+ pub request_id: String,
+ pub logs: Option<ExchangeQueue>,
+ pub statusreport: Option<ExchangeQueue>,
+ pub push: Option<PushTrigger>,
+}
+```
+
+### BuildResult
+
+```rust
+// tickborg/src/message/buildresult.rs
+#[derive(Serialize, Deserialize, Debug)]
+pub enum BuildResult {
+ V1 {
+ tag: V1Tag,
+ repo: Repo,
+ pr: Pr,
+ system: String,
+ output: Vec<String>,
+ attempt_id: String,
+ request_id: String,
+ status: BuildStatus,
+ skipped_attrs: Option<Vec<String>>,
+ attempted_attrs: Option<Vec<String>>,
+ push: Option<PushTrigger>,
+ },
+ Legacy { /* ... backward compat ... */ },
+}
+```
+
+### BuildStatus
+
+```rust
+#[derive(Serialize, Deserialize, Clone, Debug, PartialEq, Eq)]
+pub enum BuildStatus {
+ Skipped,
+ Success,
+ Failure,
+ TimedOut,
+ HashMismatch,
+ UnexpectedError { err: String },
+}
+```
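+
+Downstream, the comment poster turns each status into a GitHub check-run
+conclusion. The mapping below is a hypothetical sketch (the authoritative
+table lives in the comment-poster task), using conclusion strings defined by
+the GitHub Checks API:
+
+```rust
+// Hypothetical BuildStatus → check-run conclusion mapping; illustrative only.
+enum BuildStatus {
+    Skipped,
+    Success,
+    Failure,
+    TimedOut,
+    HashMismatch,
+    UnexpectedError { err: String },
+}
+
+fn conclusion(status: &BuildStatus) -> &'static str {
+    match status {
+        BuildStatus::Success => "success",
+        BuildStatus::Skipped => "skipped",
+        BuildStatus::TimedOut => "timed_out",
+        // Anything else surfaces as a failed check.
+        BuildStatus::Failure
+        | BuildStatus::HashMismatch
+        | BuildStatus::UnexpectedError { .. } => "failure",
+    }
+}
+```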
+
+---
+
+## Supported Build Systems
+
+The `BuildExecutor` struct in `tickborg/src/buildtool.rs` supports:
+
+```rust
+pub enum BuildSystem {
+ CMake,
+ Meson,
+ Autotools,
+ Cargo,
+ Gradle,
+ Make,
+ Custom { command: String },
+}
+```
+
+For each build system, tickborg knows how to invoke the configure, build, and
+test phases. A `ProjectBuildConfig` ties a sub-project to its build system:
+
+```rust
+pub struct ProjectBuildConfig {
+ pub name: String,
+ pub path: String,
+ pub build_system: BuildSystem,
+ pub build_timeout_seconds: u16,
+ pub configure_args: Vec<String>,
+ pub build_args: Vec<String>,
+ pub test_command: Option<Vec<String>>,
+}
+```
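+
+As an illustration of the per-system phase commands, here is a minimal sketch.
+The actual invocation logic lives in `buildtool.rs`; the argument lists below
+are plausible defaults, not the shipped ones:
+
+```rust
+// Illustrative configure-phase commands per build system; not the real table.
+enum BuildSystem {
+    CMake,
+    Meson,
+    Cargo,
+}
+
+fn configure_command(bs: &BuildSystem) -> Option<Vec<&'static str>> {
+    match bs {
+        BuildSystem::CMake => Some(vec!["cmake", "-S", ".", "-B", "build"]),
+        BuildSystem::Meson => Some(vec!["meson", "setup", "build"]),
+        // Cargo projects have no separate configure phase.
+        BuildSystem::Cargo => None,
+    }
+}
+```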
+
+---
+
+## Supported Platforms (Systems)
+
+```rust
+// tickborg/src/systems.rs
+pub enum System {
+ X8664Linux,
+ Aarch64Linux,
+ X8664Darwin,
+ Aarch64Darwin,
+ X8664Windows,
+ Aarch64Windows,
+ X8664FreeBSD,
+}
+```
+
+Primary CI platforms (used for untrusted users):
+
+- `x86_64-linux`
+- `x86_64-darwin`
+- `x86_64-windows`
+
+Trusted users get access to all seven platforms, including ARM and FreeBSD.
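+
+The hyphenated platform strings presumably correspond to the `System` variants
+as in the sketch below (assumed mapping; the real conversion lives in
+`systems.rs`):
+
+```rust
+// Assumed System → platform-string mapping; see systems.rs for the real one.
+enum System {
+    X8664Linux,
+    Aarch64Linux,
+    X8664Darwin,
+    Aarch64Darwin,
+    X8664Windows,
+    Aarch64Windows,
+    X8664FreeBSD,
+}
+
+fn as_str(s: &System) -> &'static str {
+    match s {
+        System::X8664Linux => "x86_64-linux",
+        System::Aarch64Linux => "aarch64-linux",
+        System::X8664Darwin => "x86_64-darwin",
+        System::Aarch64Darwin => "aarch64-darwin",
+        System::X8664Windows => "x86_64-windows",
+        System::Aarch64Windows => "aarch64-windows",
+        System::X8664FreeBSD => "x86_64-freebsd",
+    }
+}
+```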
+
+---
+
+## Comment Parser
+
+Users interact with tickborg by posting comments on GitHub PRs/issues:
+
+```
+@tickbot build meshmc
+@tickbot eval
+@tickbot test mnv
+@tickbot build meshmc json4cpp neozip
+```
+
+The parser is implemented in `tickborg/src/commentparser.rs` using the `nom`
+parser combinator library. It produces:
+
+```rust
+pub enum Instruction {
+ Build(Subset, Vec<String>),
+ Test(Vec<String>),
+ Eval,
+}
+
+pub enum Subset {
+ Project,
+}
+```
+
+Multiple commands can appear in a single comment, even interspersed with prose:
+
+```markdown
+I noticed the target was broken — let's re-eval:
+@tickbot eval
+
+Also, try building meshmc:
+@tickbot build meshmc
+```
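+
+A greatly simplified, nom-free sketch of the per-line command extraction (the
+real parser in `commentparser.rs` handles more forms, including the `Subset`
+argument and commands embedded mid-line):
+
+```rust
+// Simplified @tickbot command extraction; illustrative only.
+#[derive(Debug, PartialEq)]
+enum Instruction {
+    Build(Vec<String>),
+    Test(Vec<String>),
+    Eval,
+}
+
+fn parse_line(line: &str) -> Option<Instruction> {
+    // Only lines that start with the bot handle are considered.
+    let rest = line.trim().strip_prefix("@tickbot")?.trim();
+    let mut words = rest.split_whitespace();
+    match words.next()? {
+        "build" => Some(Instruction::Build(words.map(String::from).collect())),
+        "test" => Some(Instruction::Test(words.map(String::from).collect())),
+        "eval" => Some(Instruction::Eval),
+        _ => None,
+    }
+}
+```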
+
+---
+
+## Access Control (ACL)
+
+```rust
+// tickborg/src/acl.rs
+pub struct Acl {
+ trusted_users: Option<Vec<String>>,
+ repos: Vec<String>,
+}
+```
+
+- `repos` — list of GitHub repositories tickborg is responsible for.
+- `trusted_users` — users who can build on *all* architectures (including ARM,
+ FreeBSD). When `None` (disabled), everyone gets unrestricted access.
+- Non-trusted users only build on primary platforms.
+
+```rust
+impl Acl {
+ pub fn is_repo_eligible(&self, name: &str) -> bool;
+ pub fn build_job_architectures_for_user_repo(
+ &self, user: &str, repo: &str
+ ) -> Vec<System>;
+ pub fn can_build_unrestricted(&self, user: &str, repo: &str) -> bool;
+}
+```
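+
+The platform-gating behaviour can be sketched as follows (assumed logic; the
+authoritative version is `build_job_architectures_for_user_repo` in `acl.rs`):
+
+```rust
+// Sketch of trusted-vs-untrusted platform selection (assumed logic).
+fn platforms_for(trusted: bool) -> Vec<&'static str> {
+    if trusted {
+        // Trusted users: all seven platforms.
+        vec![
+            "x86_64-linux", "aarch64-linux",
+            "x86_64-darwin", "aarch64-darwin",
+            "x86_64-windows", "aarch64-windows",
+            "x86_64-freebsd",
+        ]
+    } else {
+        // Untrusted users: primary CI platforms only.
+        vec!["x86_64-linux", "x86_64-darwin", "x86_64-windows"]
+    }
+}
+```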
+
+---
+
+## Project Tagger
+
+The `ProjectTagger` in `tickborg/src/tagger.rs` analyses changed files in a PR
+and generates labels:
+
+```rust
+pub struct ProjectTagger {
+ selected: Vec<String>,
+}
+
+impl ProjectTagger {
+ pub fn analyze_changes(&mut self, changed_files: &[String]);
+ pub fn tags_to_add(&self) -> Vec<String>;
+}
+```
+
+It produces labels like:
+- `project: meshmc`
+- `project: mnv`
+- `scope: ci`
+- `scope: docs`
+- `scope: root`
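+
+A hypothetical sketch of how changed paths might map onto these labels (the
+real rules live in `tagger.rs`; the path conventions below are assumptions):
+
+```rust
+// Hypothetical changed-path → label mapping; illustrative only.
+fn label_for(path: &str) -> String {
+    if path.starts_with(".github/") {
+        return "scope: ci".to_string();
+    }
+    if path.starts_with("docs/") {
+        return "scope: docs".to_string();
+    }
+    match path.split_once('/') {
+        // A file under a top-level project directory.
+        Some((project, _)) => format!("project: {project}"),
+        // A file at the repository root.
+        None => "scope: root".to_string(),
+    }
+}
+```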
+
+---
+
+## The Monorepo Evaluation Strategy
+
+When a PR is evaluated, the `MonorepoStrategy` in
+`tickborg/src/tasks/eval/monorepo.rs` implements the `EvaluationStrategy` trait:
+
+```rust
+pub trait EvaluationStrategy {
+ fn pre_clone(&mut self) -> impl Future<Output = StepResult<()>>;
+ fn on_target_branch(&mut self, co: &Path, status: &mut CommitStatus)
+ -> impl Future<Output = StepResult<()>>;
+ fn after_fetch(&mut self, co: &CachedProjectCo) -> StepResult<()>;
+ fn after_merge(&mut self, status: &mut CommitStatus)
+ -> impl Future<Output = StepResult<()>>;
+ fn evaluation_checks(&self) -> Vec<EvalChecker>;
+ fn all_evaluations_passed(&mut self, status: &mut CommitStatus)
+ -> impl Future<Output = StepResult<EvaluationComplete>>;
+}
+```
+
+The strategy:
+
+1. Labels the PR from its title (extracting project names like `meshmc`,
+ `mnv`, etc. using regex word boundaries).
+2. Parses Conventional Commit messages to find affected scopes.
+3. Uses file-change detection to identify which sub-projects changed.
+4. Returns an `EvaluationComplete` containing `BuildJob`s to be dispatched.
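+
+Step 1's word-boundary matching can be sketched without regex (assumed
+semantics; the real implementation uses regex word boundaries):
+
+```rust
+// Word-boundary title matching sketch: "meshmc" should match in
+// "fix: meshmc crash" but not inside "meshmcx".
+fn title_mentions(title: &str, project: &str) -> bool {
+    title
+        .split(|c: char| !(c.is_alphanumeric() || c == '_'))
+        .any(|word| word == project)
+}
+```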
+
+---
+
+## How It All Fits Together
+
+```
+GitHub Webhook
+ │
+ ▼
+┌──────────────────┐
+│ Webhook Receiver │──► github-events (Topic Exchange)
+└──────────────────┘ │
+ ┌─────────────────┼──────────────────┐
+ ▼ ▼ ▼
+ ┌─────────────┐ ┌───────────────┐ ┌──────────────┐
+ │ Eval Filter │ │ Comment Filter│ │ Push Filter │
+ └──────┬──────┘ └──────┬────────┘ └──────┬───────┘
+ │ │ │
+ ▼ ▼ ▼
+ mass-rebuild- build-jobs build-inputs-*
+ check-jobs (Fanout) queues
+ │ │ │
+ ▼ ▼ ▼
+ ┌──────────────┐ ┌──────────────┐ ┌──────────────┐
+ │Mass Rebuilder │ │ Builder │ │ Builder │
+ └──────┬───────┘ └──────┬───────┘ └──────┬───────┘
+ │ │ │
+ └────────┬────────┘ │
+ ▼ ▼
+ build-results build-results
+ (Fanout Exchange) (Fanout Exchange)
+ │
+ ▼
+ ┌────────────────┐ ┌──────────────────┐
+ │ Comment Poster │ │ Log Collector │
+ └────────────────┘ └──────────────────┘
+ │ │
+ ▼ ▼
+ GitHub Checks API /var/log/tickborg/
+```
+
+---
+
+## Repository Layout
+
+```
+ofborg/
+├── Cargo.toml # Workspace root
+├── Cargo.lock # Pinned dependency versions
+├── docker-compose.yml # Full stack for local dev / production
+├── Dockerfile # Multi-stage build for all binaries
+├── service.nix # NixOS module for systemd services
+├── flake.nix # Nix flake for dev shell & building
+├── example.config.json # Example configuration file
+├── config.production.json # Production config template
+├── config.public.json # Public (non-secret) config
+├── deploy/ # Deployment scripts
+├── doc/ # Legacy upstream docs
+├── ofborg/ # Original ofborg crate (deprecated)
+├── ofborg-simple-build/ # Original simple build (deprecated)
+├── ofborg-viewer/ # Log viewer web UI (JavaScript)
+├── tickborg/ # Main crate
+│ ├── Cargo.toml # Crate manifest with all dependencies
+│ ├── build.rs # Build script (generates events.rs)
+│ ├── src/
+│ │ ├── lib.rs # Library root — module declarations
+│ │ ├── bin/ # Binary entry points (11 binaries)
+│ │ ├── acl.rs # Access control lists
+│ │ ├── asynccmd.rs # Async command execution
+│ │ ├── buildtool.rs # Build system abstraction
+│ │ ├── checkout.rs # Git checkout / caching
+│ │ ├── clone.rs # Git clone trait
+│ │ ├── commentparser.rs # @tickbot command parser (nom)
+│ │ ├── commitstatus.rs # GitHub commit status wrapper
+│ │ ├── config.rs # Configuration types & loading
+│ │ ├── easyamqp.rs # AMQP config types & traits
+│ │ ├── easylapin.rs # lapin (AMQP) integration layer
+│ │ ├── evalchecker.rs # Generic command checker
+│ │ ├── files.rs # File utilities
+│ │ ├── ghevent/ # GitHub event type definitions
+│ │ ├── locks.rs # File-based locking
+│ │ ├── message/ # Message types (jobs, results, logs)
+│ │ ├── notifyworker.rs # Streaming notification worker trait
+│ │ ├── stats.rs # Metrics / event system
+│ │ ├── systems.rs # Platform / architecture enum
+│ │ ├── tagger.rs # PR label tagger
+│ │ ├── tasks/ # Task implementations
+│ │ ├── worker.rs # Core worker trait
+│ │ └── writetoline.rs # Line-based file writer
+│ ├── test-nix/ # Test fixtures (Nix-era, kept)
+│ ├── test-scratch/ # Scratch test data
+│ └── test-srcs/ # Test source data (JSON events)
+└── tickborg-simple-build/ # Simplified build tool crate
+ ├── Cargo.toml
+ └── src/
+```
+
+---
+
+## Technology Stack
+
+| Component | Technology |
+|-----------|-----------|
+| Language | Rust (Edition 2024) |
+| Async runtime | Tokio (multi-thread) |
+| AMQP client | lapin 4.3 |
+| HTTP server | hyper 1.0 + hyper-util |
+| JSON | serde + serde_json |
+| GitHub API | hubcaps (custom fork) |
+| Logging | tracing + tracing-subscriber |
+| Parser | nom 8 |
+| Cryptography | hmac + sha2 (webhook verification) |
+| Concurrency | parking_lot, tokio::sync |
+| UUID | uuid v4 |
+| Caching | lru-cache |
+| File locking | fs2 |
+| Date/time | chrono |
+
+---
+
+## Versioning
+
+The crate version is declared in `tickborg/Cargo.toml`:
+
+```toml
+[package]
+name = "tickborg"
+version = "0.1.0"
+```
+
+The version is accessible at runtime via:
+
+```rust
+pub const VERSION: &str = env!("CARGO_PKG_VERSION");
+```
+
+It is also embedded in the RabbitMQ connection properties:
+
+```rust
+let opts = ConnectionProperties::default()
+ .with_client_property("tickborg_version".into(), tickborg::VERSION.into());
+```
+
+---
+
+## Relation to the Original ofborg
+
+Tickborg was forked from ofborg (NixOS/ofborg) and adapted:
+
+| Aspect | ofborg | tickborg |
+|--------|--------|----------|
+| Purpose | Nix package evaluation for nixpkgs | Monorepo CI for Project Tick |
+| Build system | `nix-build` only | CMake, Meson, Cargo, Gradle, Make, Custom |
+| Bot handle | `@ofborg` | `@tickbot` |
+| Platforms | Linux, macOS | Linux, macOS, Windows, FreeBSD |
+| Evaluation | Nix expression evaluation | File-change detection + project mapping |
+| Package crate | `ofborg` | `tickborg` |
+
+The `ofborg/` and `ofborg-simple-build/` directories are kept for reference but
+are no longer compiled as part of the workspace.
+
+---
+
+## Quick Start (for developers)
+
+```bash
+# Enter the dev shell (requires Nix)
+nix develop ./ofborg
+
+# Or without Nix, ensure a Rust toolchain supporting the 2024 edition is installed
+cd ofborg
+cargo build --workspace
+
+# Run tests
+cargo test --workspace
+
+# Start local infra
+docker compose up -d rabbitmq
+```
+
+See [building.md](building.md) for comprehensive build instructions and
+[configuration.md](configuration.md) for setting up a config file.
+
+---
+
+## Further Reading
+
+- [architecture.md](architecture.md) — Crate structure, module hierarchy, worker pattern
+- [building.md](building.md) — Cargo build, dependencies, features, build targets
+- [webhook-receiver.md](webhook-receiver.md) — GitHub webhook handling
+- [message-system.md](message-system.md) — AMQP/RabbitMQ messaging
+- [build-executor.md](build-executor.md) — Build execution, build system abstraction
+- [evaluation-system.md](evaluation-system.md) — Monorepo evaluation, project detection
+- [github-integration.md](github-integration.md) — GitHub API interaction
+- [amqp-infrastructure.md](amqp-infrastructure.md) — RabbitMQ connection management
+- [deployment.md](deployment.md) — NixOS module, Docker Compose
+- [configuration.md](configuration.md) — Config file format, environment variables
+- [data-flow.md](data-flow.md) — End-to-end data flow
+- [code-style.md](code-style.md) — Rust coding conventions
+- [contributing.md](contributing.md) — Contribution guide
diff --git a/docs/handbook/ofborg/webhook-receiver.md b/docs/handbook/ofborg/webhook-receiver.md
new file mode 100644
index 0000000000..7eddf7173b
--- /dev/null
+++ b/docs/handbook/ofborg/webhook-receiver.md
@@ -0,0 +1,470 @@
+# Tickborg — Webhook Receiver
+
+## Overview
+
+The **GitHub Webhook Receiver** (`github-webhook-receiver`) is the entry point
+for all GitHub events into the tickborg system. It is an HTTP server that:
+
+1. Listens for incoming POST requests from GitHub's webhook delivery system.
+2. Validates the HMAC-SHA256 signature of every payload.
+3. Extracts the event type from the `X-Github-Event` header.
+4. Parses the payload to determine the target repository.
+5. Publishes the raw payload to the `github-events` RabbitMQ topic exchange.
+6. Declares and binds the downstream queues that other workers consume from.
+
+**Source file:** `tickborg/src/bin/github-webhook-receiver.rs`
+
+---
+
+## HTTP Server
+
+The webhook receiver uses **hyper 1.0** directly — no web framework is
+involved. The server is configured to listen on the address specified in the
+configuration file:
+
+```rust
+let addr: SocketAddr = listen.parse().expect("Invalid listen address");
+let listener = TcpListener::bind(addr).await?;
+```
+
+The main accept loop:
+
+```rust
+loop {
+ let (stream, _) = listener.accept().await?;
+ let io = TokioIo::new(stream);
+
+ let secret = webhook_secret.clone();
+ let chan = chan.clone();
+
+ tokio::task::spawn(async move {
+ let service = service_fn(move |req| {
+ handle_request(req, secret.clone(), chan.clone())
+ });
+ http1::Builder::new().serve_connection(io, service).await
+ });
+}
+```
+
+Each accepted connection is handled in its own tokio task. Within a
+connection, the service function (`handle_request`) processes one request at a
+time.
+
+---
+
+## Request Handling
+
+### HTTP Method Validation
+
+```rust
+if req.method() != Method::POST {
+ return Ok(empty_response(StatusCode::METHOD_NOT_ALLOWED));
+}
+```
+
+Only `POST` requests are accepted. Any other method receives a `405 Method Not
+Allowed`.
+
+### Header Extraction
+
+Three headers are extracted before consuming the request body:
+
+```rust
+let sig_header = req.headers().get("X-Hub-Signature-256")
+ .and_then(|v| v.to_str().ok())
+ .map(|s| s.to_string());
+
+let event_type = req.headers().get("X-Github-Event")
+ .and_then(|v| v.to_str().ok())
+ .map(|s| s.to_string());
+
+let content_type = req.headers().get("Content-Type")
+ .and_then(|v| v.to_str().ok())
+ .map(|s| s.to_string());
+```
+
+### Body Collection
+
+```rust
+let raw = match req.collect().await {
+ Ok(collected) => collected.to_bytes(),
+ Err(e) => {
+ warn!("Failed to read body from client: {e}");
+ return Ok(response(StatusCode::INTERNAL_SERVER_ERROR, "Failed to read body"));
+ }
+};
+```
+
+The full body is collected into a `Bytes` buffer using `http-body-util`'s
+`BodyExt::collect()`.
+
+---
+
+## HMAC-SHA256 Signature Verification
+
+GitHub sends a `X-Hub-Signature-256` header with the format:
+
+```
+sha256=<hex-encoded HMAC-SHA256>
+```
+
+The webhook receiver verifies this signature against the configured webhook
+secret:
+
+### Step 1: Parse the signature header
+
+```rust
+let Some(sig) = sig_header else {
+ return Ok(response(StatusCode::BAD_REQUEST, "Missing signature header"));
+};
+
+let mut components = sig.splitn(2, '=');
+let Some(algo) = components.next() else {
+ return Ok(response(StatusCode::BAD_REQUEST, "Signature hash method missing"));
+};
+let Some(hash) = components.next() else {
+ return Ok(response(StatusCode::BAD_REQUEST, "Signature hash missing"));
+};
+let Ok(hash) = hex::decode(hash) else {
+ return Ok(response(StatusCode::BAD_REQUEST, "Invalid signature hash hex"));
+};
+```
+
+### Step 2: Validate the algorithm
+
+```rust
+if algo != "sha256" {
+ return Ok(response(StatusCode::BAD_REQUEST, "Invalid signature hash method"));
+}
+```
+
+Only SHA-256 is accepted. GitHub also supports SHA-1 (`X-Hub-Signature`) but
+tickborg does not accept it.
+
+### Step 3: Compute and compare
+
+```rust
+let Ok(mut mac) = Hmac::<Sha256>::new_from_slice(webhook_secret.as_bytes()) else {
+ error!("Unable to create HMAC from secret");
+ return Ok(response(StatusCode::INTERNAL_SERVER_ERROR, "Internal error"));
+};
+
+mac.update(&raw);
+
+if mac.verify_slice(&hash).is_err() {
+ return Ok(response(StatusCode::FORBIDDEN, "Signature verification failed"));
+}
+```
+
+The HMAC is computed using `hmac::Hmac<sha2::Sha256>` from the `hmac` and `sha2`
+crates. `verify_slice` performs a constant-time comparison to prevent timing
+attacks.
+
+---
+
+## Event Type Routing
+
+After signature verification, the event type and repository are determined:
+
+```rust
+let event_type = event_type.unwrap_or_else(|| "unknown".to_owned());
+
+let body_json: GenericWebhook = match serde_json::from_slice(&raw) {
+ Ok(webhook) => webhook,
+ Err(_) => {
+ // If we can't parse the body, route to the unknown queue
+ // ...
+ }
+};
+
+let routing_key = format!("{}.{}", event_type, body_json.repository.full_name);
+```
+
+The `GenericWebhook` struct is minimal — it only extracts the `repository`
+field:
+
+```rust
+// ghevent/common.rs
+#[derive(Serialize, Deserialize, Debug)]
+pub struct GenericWebhook {
+ pub repository: Repository,
+}
+
+#[derive(Serialize, Deserialize, Debug)]
+pub struct Repository {
+ pub owner: User,
+ pub name: String,
+ pub full_name: String,
+ pub clone_url: String,
+}
+```
+
+### Routing Key Format
+
+```
+{event_type}.{owner}/{repo}
+```
+
+Examples:
+- `pull_request.project-tick/Project-Tick`
+- `issue_comment.project-tick/Project-Tick`
+- `push.project-tick/Project-Tick`
+- `unknown.project-tick/Project-Tick`
+
+---
+
+## AMQP Setup
+
+The `setup_amqp` function declares the exchange and all downstream queues:
+
+### Exchange Declaration
+
+```rust
+chan.declare_exchange(easyamqp::ExchangeConfig {
+ exchange: "github-events".to_owned(),
+ exchange_type: easyamqp::ExchangeType::Topic,
+ passive: false,
+ durable: true,
+ auto_delete: false,
+ no_wait: false,
+ internal: false,
+}).await?;
+```
+
+The `github-events` exchange is a **topic** exchange. This means routing keys
+are matched against binding patterns using `.`-separated segments and `*`/`#`
+wildcards.
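+
+For intuition, topic matching works roughly like the sketch below. RabbitMQ
+implements this broker-side; the function is only a model of the `*`/`#`
+semantics (`*` matches exactly one dot-separated word, `#` matches zero or
+more):
+
+```rust
+// Model of AMQP topic-exchange pattern matching; illustrative only.
+fn topic_matches(pattern: &str, key: &str) -> bool {
+    fn go(p: &[&str], k: &[&str]) -> bool {
+        match (p.first(), k.first()) {
+            (None, None) => true,
+            // "#" matches zero or more words.
+            (Some(&"#"), _) => go(&p[1..], k) || (!k.is_empty() && go(p, &k[1..])),
+            // "*" matches exactly one word.
+            (Some(&"*"), Some(_)) => go(&p[1..], &k[1..]),
+            (Some(&w), Some(&x)) if w == x => go(&p[1..], &k[1..]),
+            _ => false,
+        }
+    }
+    let p: Vec<&str> = pattern.split('.').collect();
+    let k: Vec<&str> = key.split('.').collect();
+    go(&p, &k)
+}
+```
+
+Note that a repository `full_name` contains a `/`, not a `.`, so a routing key
+like `issue_comment.project-tick/Project-Tick` is two words and is matched by
+the binding `issue_comment.*`.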
+
+### Queue Declarations and Bindings
+
+| Queue | Binding Pattern | Consumer |
+|-------|----------------|----------|
+| `build-inputs` | `issue_comment.*` | github-comment-filter |
+| `github-events-unknown` | `unknown.*` | (monitoring/debugging) |
+| `mass-rebuild-check-inputs` | `pull_request.*` | evaluation-filter |
+| `push-build-inputs` | `push.*` | push-filter |
+
+Each queue is declared with:
+
+```rust
+chan.declare_queue(easyamqp::QueueConfig {
+ queue: queue_name.clone(),
+ passive: false,
+ durable: true, // survive broker restart
+ exclusive: false, // accessible by other connections
+ auto_delete: false, // don't delete when last consumer disconnects
+ no_wait: false,
+}).await?;
+```
+
+And bound to the exchange:
+
+```rust
+chan.bind_queue(easyamqp::BindQueueConfig {
+ queue: queue_name.clone(),
+ exchange: "github-events".to_owned(),
+ routing_key: Some(String::from("issue_comment.*")),
+ no_wait: false,
+}).await?;
+```
+
+---
+
+## Message Publishing
+
+After validation and routing key construction, the raw GitHub payload is
+published:
+
+```rust
+let props = BasicProperties::default()
+ .with_content_type("application/json".into())
+ .with_delivery_mode(2); // persistent
+
+chan.lock().await.basic_publish(
+ "github-events".into(),
+ routing_key.into(),
+ BasicPublishOptions::default(),
+ &raw,
+ props,
+).await?;
+```
+
+Key properties:
+- **delivery_mode = 2**: Message is persisted to disk by RabbitMQ.
+- **content_type**: `application/json` — the raw GitHub payload.
+- The **entire raw body** is published, not a parsed/re-serialized version.
+ This preserves all fields that downstream consumers might need, even if the
+ webhook receiver itself doesn't parse them.
+
+---
+
+## Configuration
+
+The webhook receiver reads from the `github_webhook_receiver` section of the
+config:
+
+```rust
+#[derive(Serialize, Deserialize, Debug)]
+pub struct GithubWebhookConfig {
+ pub listen: String,
+ pub webhook_secret_file: String,
+ pub rabbitmq: RabbitMqConfig,
+}
+```
+
+Example configuration:
+
+```json
+{
+ "github_webhook_receiver": {
+ "listen": "0.0.0.0:9899",
+ "webhook_secret_file": "/run/secrets/tickborg/webhook-secret",
+ "rabbitmq": {
+ "ssl": false,
+ "host": "rabbitmq:5672",
+ "virtualhost": "tickborg",
+ "username": "tickborg",
+ "password_file": "/run/secrets/tickborg/rabbitmq-password"
+ }
+ }
+}
+```
+
+The webhook secret is read from a file (not inline in the config) to prevent
+accidental exposure in version control.
+
+---
+
+## Response Codes
+
+| Code | Meaning |
+|------|---------|
+| `200 OK` | Webhook received and published successfully |
+| `400 Bad Request` | Missing or malformed signature header |
+| `403 Forbidden` | Signature verification failed |
+| `405 Method Not Allowed` | Non-POST request |
+| `500 Internal Server Error` | Body read failure or HMAC creation failure |
+
+---
+
+## GitHub Webhook Configuration
+
+### Required Events
+
+The GitHub App or webhook should be configured to send:
+
+| Event | Used By |
+|-------|---------|
+| `pull_request` | evaluation-filter (auto-eval on PR open/sync) |
+| `issue_comment` | github-comment-filter (@tickbot commands) |
+| `push` | push-filter (branch push CI) |
+| `check_run` | (optional, for re-run triggers) |
+
+### Required Permissions (GitHub App)
+
+| Permission | Level | Purpose |
+|------------|-------|---------|
+| Pull requests | Read & Write | Read PR details, post comments |
+| Commit statuses | Read & Write | Set commit status checks |
+| Issues | Read & Write | Read comments, manage labels |
+| Contents | Read | Clone repository, read files |
+| Checks | Read & Write | Create/update check runs |
+
+### Webhook URL
+
+```
+https://<your-domain>:9899/github-webhooks
+```
+
+The receiver accepts POSTs on any path — the path segment is not validated.
+By convention, `/github-webhooks` is used.
+
+---
+
+## Security Considerations
+
+### Signature Verification
+
+**Every** request must have a valid `X-Hub-Signature-256` header. Requests
+without this header, or with an invalid signature, are rejected before any
+processing occurs. The HMAC comparison uses `verify_slice` which is
+constant-time.
+
+### Secret File
+
+The webhook secret is read from a file rather than an environment variable or
+inline config value. This:
+- Prevents accidental exposure in process listings (`/proc/*/environ`)
+- Allows secrets management via Docker secrets, Kubernetes secrets, or
+ NixOS `sops-nix`
+
+### No Path Traversal
+
+The webhook receiver does not serve files or interact with the filesystem beyond
+reading the config and secret files. There is no path traversal risk.
+
+### Rate Limiting
+
+The webhook receiver does **not** implement application-level rate limiting.
+This should be handled by:
+- An upstream reverse proxy (nginx, Caddy)
+- GitHub's own delivery rate limiting
+- RabbitMQ's flow control mechanisms
+
+---
+
+## Deployment
+
+### Docker Compose
+
+```yaml
+webhook-receiver:
+ build:
+ context: .
+ dockerfile: Dockerfile
+ command: ["github-webhook-receiver", "/etc/tickborg/config.json"]
+ ports:
+ - "9899:9899"
+ volumes:
+ - ./config.json:/etc/tickborg/config.json:ro
+ - ./secrets:/run/secrets/tickborg:ro
+ depends_on:
+ rabbitmq:
+ condition: service_healthy
+ restart: unless-stopped
+```
+
+### NixOS (`service.nix`)
+
+```nix
+systemd.services."tickborg-webhook-receiver" = mkTickborgService "Webhook Receiver" {
+ binary = "github_webhook_receiver";
+};
+```
+
+Note: The NixOS service invokes the binary as `github_webhook_receiver`
+(underscores), while the Cargo target is named `github-webhook-receiver`
+(hyphens); the Nix packaging is responsible for mapping between the two forms.
+
+---
+
+## Monitoring
+
+The webhook receiver logs:
+- Every accepted webhook (event type, routing key)
+- Signature verification failures (at `warn` level)
+- AMQP publish errors (at `error` level)
+- Body read failures (at `warn` level)
+
+Check the `github-events-unknown` queue for events that couldn't be routed to
+a handler — these indicate new event types that may need new consumers.
+
+---
+
+## Event Type Reference
+
+| GitHub Event | Routing Key Pattern | Queue | Handler |
+|-------------|--------------------|---------|---------|
+| `pull_request` | `pull_request.{owner}/{repo}` | `mass-rebuild-check-inputs` | evaluation-filter |
+| `issue_comment` | `issue_comment.{owner}/{repo}` | `build-inputs` | github-comment-filter |
+| `push` | `push.{owner}/{repo}` | `push-build-inputs` | push-filter |
+| (any other) | `unknown.{owner}/{repo}` | `github-events-unknown` | none (monitoring) |