## Running Tests

Run the full test suite:

```sh
cargo test
```

Run tests with output visible (useful for debugging):

```sh
cargo test -- --nocapture
```

Run a specific test:

```sh
cargo test test_hilbert_index
```
## Test Organization

Tests are organized in two locations.

### Inline Tests

Each module contains unit tests in a `#[cfg(test)]` block:
```rust
// src/geometry/projection.rs
#[cfg(test)]
mod tests {
    use super::*;

    #[test]
    fn test_wgs84_to_mercator() {
        let (x, y) = wgs84_to_mercator(0.0, 0.0);
        assert!((x - 0.0).abs() < 1e-6);
        assert!((y - 0.0).abs() < 1e-6);
    }
}
```
Inline tests verify internal logic like coordinate projection, clipping, simplification, and tile ID computation.
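As an illustration of the kind of logic those inline tests cover, here is a minimal sketch of a spherical-Mercator projection with the checks an inline test would make. The `wgs84_to_mercator` implementation and the `EARTH_RADIUS_M` constant are assumptions for this sketch, not the crate's actual code.

```rust
// Hypothetical sketch of a WGS84 -> Web Mercator projection.
const EARTH_RADIUS_M: f64 = 6_378_137.0;

fn wgs84_to_mercator(lon_deg: f64, lat_deg: f64) -> (f64, f64) {
    // Standard spherical-Mercator formulas:
    //   x = R * lon (radians), y = R * ln(tan(pi/4 + lat/2))
    let x = EARTH_RADIUS_M * lon_deg.to_radians();
    let y = EARTH_RADIUS_M
        * (std::f64::consts::FRAC_PI_4 + lat_deg.to_radians() / 2.0)
            .tan()
            .ln();
    (x, y)
}

fn main() {
    // The origin maps to (0, 0)...
    let (x, y) = wgs84_to_mercator(0.0, 0.0);
    assert!(x.abs() < 1e-6 && y.abs() < 1e-6);
    // ...and the projection is antisymmetric in longitude.
    let (x_e, _) = wgs84_to_mercator(90.0, 0.0);
    let (x_w, _) = wgs84_to_mercator(-90.0, 0.0);
    assert!((x_e + x_w).abs() < 1e-6);
    println!("ok");
}
```

Tests like these pin down exact numeric behavior at well-known points, which is what makes refactoring projection code safe.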
### Integration Tests

The `tests/` directory contains end-to-end tests:

```
tests/
├── pipeline_test.rs       — Full pipeline with fixture data
├── parquet_reader_test.rs — GeoParquet reading and column projection
├── mvt_encoding_test.rs   — MVT protobuf output validation
└── fixtures/
    └── sample.parquet     — Small test dataset
```
Integration tests verify that the full pipeline produces valid PMTiles output from sample input.
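One cheap validity check an integration test can make on the output is the archive's magic bytes: the PMTiles v3 spec has archives begin with the seven ASCII bytes `PMTiles` followed by the version byte `3`. The helper below is a hypothetical sketch of that check; the function name is illustrative, not part of the crate.

```rust
// Hypothetical helper: does this byte buffer start like a PMTiles v3 archive?
fn looks_like_pmtiles_v3(bytes: &[u8]) -> bool {
    bytes.len() >= 8 && &bytes[..7] == b"PMTiles" && bytes[7] == 3
}

fn main() {
    // In a real integration test the bytes would come from the pipeline's
    // output file, e.g. std::fs::read("out.pmtiles").
    assert!(looks_like_pmtiles_v3(b"PMTiles\x03rest-of-header"));
    assert!(!looks_like_pmtiles_v3(b"PAR1")); // a Parquet file, not PMTiles
    println!("ok");
}
```

A magic-byte check is not a substitute for decoding the archive, but it fails fast and gives a clear error when the pipeline writes the wrong format entirely.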
## Running Benchmarks

Benchmarks use Criterion.rs for statistically rigorous measurement:

```sh
# Run all benchmarks
cargo bench

# Run a specific benchmark group
cargo bench --bench pipeline

# Emit bencher-compatible plain-text output instead of the default report
cargo bench -- --output-format bencher
```
Benchmark results are saved to `target/criterion/` with HTML reports showing performance trends over time.
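For reference, a Criterion benchmark file follows a standard shape. The sketch below assumes a `benches/pipeline.rs` bench target and an illustrative `wgs84_to_mercator` function; it requires the `criterion` dev-dependency and a `[[bench]] name = "pipeline"` entry with `harness = false` in `Cargo.toml`, so it is not standalone-runnable here.

```rust
// benches/pipeline.rs — hypothetical sketch; names are illustrative.
use criterion::{black_box, criterion_group, criterion_main, Criterion};

fn bench_projection(c: &mut Criterion) {
    // black_box prevents the compiler from constant-folding the inputs away.
    c.bench_function("wgs84_to_mercator", |b| {
        b.iter(|| wgs84_to_mercator(black_box(13.4), black_box(52.5)))
    });
}

criterion_group!(benches, bench_projection);
criterion_main!(benches);
```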
## Test Coverage

Generate a coverage report using `cargo-llvm-cov`:

```sh
# Install the tool
cargo install cargo-llvm-cov

# Generate an HTML coverage report
cargo llvm-cov --html

# Open the report
open target/llvm-cov/html/index.html
```
## Continuous Integration

Tests run automatically on every pull request. The CI pipeline:

- Runs `cargo fmt --check` — formatting verification
- Runs `cargo clippy -- -D warnings` — lint checks
- Runs `cargo test` — full test suite
- Runs `cargo build --release` — release build verification
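The source does not name a CI provider, but as an illustration, those four steps map onto a GitHub Actions workflow roughly like this (the file path, workflow name, and action version are all assumptions):

```yaml
# .github/workflows/ci.yml — hypothetical sketch; adapt to the project's actual CI.
name: CI
on: [pull_request]

jobs:
  checks:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: cargo fmt --check
      - run: cargo clippy -- -D warnings
      - run: cargo test
      - run: cargo build --release
```

Running the fast checks (formatting, lints) before the slower test and release builds gives contributors quicker feedback on trivial failures.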