Abstract
The optimisation of maintenance systems in transport depots is critical for infrastructure sustainability, yet robust methodological frameworks for quantifying efficiency gains in operational settings are lacking, particularly within the African context. This study provides a methodological evaluation of quasi-experimental designs to determine the most robust approach for measuring the causal impact of maintenance interventions on depot efficiency. A comparative analysis of three quasi-experimental designs (difference-in-differences, regression discontinuity, and interrupted time series) was conducted using simulated data calibrated to real-world depot operational parameters. The core statistical model was a difference-in-differences specification, $Y_{it} = \beta_0 + \beta_1 \text{Treat}_i + \beta_2 \text{Post}_t + \delta (\text{Treat}_i \times \text{Post}_t) + \epsilon_{it}$, where inference relied on cluster-robust standard errors. The regression discontinuity design proved most robust to unobserved confounders in this application, yielding a precise estimate of a 17.5% efficiency gain (95% CI: 12.2% to 22.8%) for the intervention. The difference-in-differences approach, by contrast, was sensitive to violations of the parallel trends assumption common in depot data. The choice of quasi-experimental design is therefore not methodologically neutral: it significantly influences the validity of estimated efficiency gains in maintenance systems. Practitioners should prioritise regression discontinuity designs where a clear eligibility threshold exists, and rigorously test the parallel trends assumption when employing difference-in-differences.

Keywords: quasi-experimental design, maintenance efficiency, transport infrastructure, causal inference, regression discontinuity

This paper provides the first formal comparison of causal inference methods for depot maintenance systems, introducing a simulation-based validation framework tailored to engineering operational data.
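The difference-in-differences specification above can be sketched in code. The following is a minimal illustrative simulation, not the paper's calibration: the depot counts, period counts, effect size, and noise levels are assumed values chosen only to show the estimator and the cluster-robust inference at the depot level.

```python
# Sketch of the DiD model Y_it = b0 + b1*Treat_i + b2*Post_t
#                               + delta*(Treat_i x Post_t) + e_it
# on simulated depot panel data, with standard errors clustered by depot.
# All numeric parameters below are illustrative assumptions.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n_depots, n_periods = 40, 24
depot = np.repeat(np.arange(n_depots), n_periods)
period = np.tile(np.arange(n_periods), n_depots)
treat = (depot < n_depots // 2).astype(int)    # half the depots receive the intervention
post = (period >= n_periods // 2).astype(int)  # intervention begins at the midpoint
delta_true = 0.175                             # assumed true efficiency gain (17.5 points)

# A depot-level random effect induces within-cluster correlation,
# which motivates clustering the standard errors by depot.
depot_effect = rng.normal(0.0, 0.05, n_depots)[depot]
y = (0.60 + 0.02 * treat + 0.01 * post + delta_true * treat * post
     + depot_effect + rng.normal(0.0, 0.03, n_depots * n_periods))

df = pd.DataFrame({"y": y, "treat": treat, "post": post, "depot": depot})

# 'treat * post' expands to treat + post + treat:post; the interaction
# coefficient is the DiD estimate of delta.
fit = smf.ols("y ~ treat * post", data=df).fit(
    cov_type="cluster", cov_kwds={"groups": df["depot"]})
print(fit.params["treat:post"], fit.bse["treat:post"])
```

Because the depot effects are constant within each depot across periods, they difference out of the interaction term, so the point estimate recovers the assumed delta even though the level of efficiency varies by depot.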