Understanding performance characteristics is crucial when developing a game like Mirador. The benchmarking library provides a simple way to measure performance of different code sections and track frame rates. This isn't a sophisticated profiling system - it's just a collection of utilities to help identify slow parts of the code.
The benchmarking library consists of a few basic components: manual and scoped timers, a frame rate counter, a central `BenchmarkData` store, and a `BenchmarkConfig` that controls behavior.
The system is designed to be lightweight and to have minimal impact on performance when not actively measuring.
There are two main timer types for measuring code execution:
Manual Timer: You start it explicitly and stop it when done:

```rust
let timer = Timer::new("my_operation", config);
// ... do some work ...
let duration = timer.stop();
```
Scoped Timer: Automatically measures the time between when it's created and when it goes out of scope:

```rust
{
    let _timer = ScopedTimer::new("my_operation", config);
    // ... do some work ...
} // Timer automatically stops and records timing here
```
The scoped timer is convenient because you don't have to remember to stop it - it automatically records the timing when the variable goes out of scope.
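Under the hood, this pattern relies on Rust's `Drop` trait. Here is a minimal sketch of the mechanism, assuming a plain name/start-time pair; the real `ScopedTimer` also takes a config and records into the shared benchmark data rather than printing:

```rust
use std::time::Instant;

/// Sketch of a scope-based timer: measures elapsed time when dropped.
pub struct ScopedTimer {
    name: String,
    start: Instant,
}

impl ScopedTimer {
    pub fn new(name: &str) -> Self {
        Self {
            name: name.to_string(),
            start: Instant::now(),
        }
    }
}

impl Drop for ScopedTimer {
    fn drop(&mut self) {
        let elapsed = self.start.elapsed();
        // The real library would record this into BenchmarkData;
        // printing here keeps the sketch self-contained.
        println!("{} took {:?}", self.name, elapsed);
    }
}
```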
The `FrameRateCounter` tracks how long each frame takes to render:
```rust
pub struct FrameRateCounter {
    pub frame_times: Vec<Duration>,
    max_samples: usize,
    last_frame_time: Option<Instant>,
}
```
It keeps a rolling window of frame times and calculates the current FPS. This helps identify if the game is running smoothly or if there are performance issues.
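One way a rolling-window FPS calculation like this can work (a sketch of the idea, not necessarily the exact implementation) is to record the gap between consecutive frames, cap the number of retained samples, and invert the average frame time:

```rust
use std::time::{Duration, Instant};

// Record the time since the previous frame, keeping at most `max_samples` entries.
fn record_frame(frame_times: &mut Vec<Duration>, last: &mut Option<Instant>, max_samples: usize) {
    let now = Instant::now();
    if let Some(prev) = *last {
        frame_times.push(now - prev);
        if frame_times.len() > max_samples {
            frame_times.remove(0); // drop the oldest sample
        }
    }
    *last = Some(now);
}

// Current FPS is the reciprocal of the average frame time over the window.
fn current_fps(frame_times: &[Duration]) -> f64 {
    if frame_times.is_empty() {
        return 0.0;
    }
    let total: Duration = frame_times.iter().sum();
    let avg = total.as_secs_f64() / frame_times.len() as f64;
    if avg > 0.0 { 1.0 / avg } else { 0.0 }
}
```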
All measurements are stored in a central `BenchmarkData` structure:
```rust
pub struct BenchmarkData {
    measurements: HashMap<String, PerformanceMetrics>,
    config: BenchmarkConfig,
    fps_counter: FrameRateCounter,
}
```
Each measurement includes basic statistics like count, total time, minimum/maximum times, and average duration.
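The fields of `PerformanceMetrics` aren't shown here, but a struct holding those statistics might look roughly like the following; the field names and the derived-average approach are illustrative, not the library's exact definition:

```rust
use std::time::Duration;

/// Illustrative shape of per-operation statistics.
pub struct PerformanceMetrics {
    pub count: usize,
    pub total: Duration,
    pub min: Duration,
    pub max: Duration,
}

impl PerformanceMetrics {
    /// Average duration across all recorded samples.
    pub fn average(&self) -> Duration {
        if self.count == 0 {
            Duration::ZERO
        } else {
            self.total / self.count as u32
        }
    }
}
```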
The `BenchmarkConfig` controls how the benchmarking system behaves:
```rust
pub struct BenchmarkConfig {
    pub enabled: bool,
    pub print_results: bool,
    pub write_to_file: bool,
    pub min_duration_threshold: Duration,
    pub max_samples: usize,
}
```
By default, benchmarking is only enabled in debug builds. This prevents the overhead from affecting release performance. You can also set minimum duration thresholds to filter out very fast operations that aren't worth measuring.
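For illustration, those defaults could be expressed with a `Default` implementation gated on `cfg!(debug_assertions)`; the threshold and sample count shown here are placeholder values, not necessarily what Mirador uses:

```rust
use std::time::Duration;

impl Default for BenchmarkConfig {
    fn default() -> Self {
        Self {
            // Only enabled in debug builds; release builds skip measurement.
            enabled: cfg!(debug_assertions),
            print_results: true,
            write_to_file: false,
            // Ignore operations faster than this threshold (placeholder value).
            min_duration_threshold: Duration::from_micros(10),
            max_samples: 1000,
        }
    }
}
```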
Timing a closure and getting its return value back:

```rust
use crate::benchmarks::utils::time;

let result = time("expensive_calculation", || {
    // ... do expensive work ...
    some_value
});
```
Creating a scoped timer with the `scoped_timer` helper:

```rust
use crate::benchmarks::utils::scoped_timer;

{
    let _timer = scoped_timer("maze_generation");
    generate_maze();
} // Timing automatically recorded
```
Printing a summary of everything measured so far:

```rust
use crate::benchmarks::utils::print_summary;

// After running some benchmarks
print_summary();
```
This prints a formatted table showing all measurements, separated into initialization benchmarks and update benchmarks based on naming patterns.
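The split is based purely on measurement names. A sketch of the idea, assuming a hypothetical `init_` prefix convention (the actual naming pattern the library matches on may differ):

```rust
use std::collections::HashMap;
use std::time::Duration;

// Sketch: split measurement names into initialization vs. update groups by prefix.
fn split_by_phase(measurements: &HashMap<String, Duration>) -> (Vec<&String>, Vec<&String>) {
    let mut init = Vec::new();
    let mut update = Vec::new();
    for name in measurements.keys() {
        if name.starts_with("init_") {
            init.push(name);
        } else {
            update.push(name);
        }
    }
    (init, update)
}
```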
The library provides a couple of ways to view results: printing a summary table to the console and, optionally, writing results to a file (controlled by the `print_results` and `write_to_file` config flags).
The summary output separates measurements into categories and shows statistics like total time, average time, and execution count for each measured operation.
This is a simple benchmarking system with obvious limitations: it is designed for development-time performance analysis, not production monitoring.
The benchmarking library is used throughout Mirador's codebase to measure key operations such as maze generation, initialization work, and per-frame updates.
It's particularly useful for identifying which parts of the code are taking the most time during development, helping focus optimization efforts on the right areas. The library is intentionally simple - it's just a collection of timing utilities that make it easy to measure performance without adding much complexity to the codebase.