Why do interventions fail?
Technology tools in development contexts often show early promise — uptake is high, outcomes improve, and funders celebrate. Then adoption collapses. Trust erodes. Communities end up worse off than before. This project asks why, and under what conditions failure can be predicted.
This project builds an agent-based model (ABM) of how small-scale fishers in Malawi decide whether to adopt a technology intervention — and how that decision evolves over time in response to outcomes, peers, and environmental conditions.
Each simulated fisher is shaped by trust in the institution behind the tool, perceived risk, access to alternative information, social network position, fishing skill, and resource constraints. The environment fluctuates seasonally and is subject to shocks. The social network — clustered, hub-and-spoke, or dispersed — shapes how influence flows through the community.
By varying these conditions systematically across thousands of simulated runs, the model is designed to identify the specific configurations of human behavior, social structure, and environmental context that determine whether an intervention takes root, fades out, or actively causes harm over the long run.
A seasonal, networked world of boundedly rational fishers
The model is built in Python using the Mesa agent-based modeling framework. It consists of five interacting components.
Fisher Agents
Each agent holds 11 attributes including trust, perceived risk, perceived usefulness, resource level, time horizon, tech literacy, fishing skill, and social network position. Attributes are updated each season based on outcomes and peer observation.
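The agent state described above can be sketched as a simple record. The field names, ranges, and the trust-update rule below are illustrative assumptions, not the model's actual implementation:

```python
from dataclasses import dataclass

@dataclass
class FisherAgent:
    """Illustrative sketch of one fisher's state (field names are hypothetical)."""
    trust: float = 0.5              # trust in the implementing institution, [0, 1]
    perceived_risk: float = 0.5     # subjective risk of relying on the tool, [0, 1]
    perceived_usefulness: float = 0.5
    resource_level: float = 1.0     # capital available to absorb a bad season
    time_horizon: int = 4           # seasons ahead the agent weighs outcomes over
    tech_literacy: float = 0.5
    fishing_skill: float = 0.5
    degree: int = 0                 # social network position (number of ties)
    adopted: bool = False

    def update_trust(self, outcome: float, rate: float = 0.1) -> None:
        # Nudge trust toward 1 after a good season, toward 0 after a bad one.
        target = 1.0 if outcome > 0 else 0.0
        self.trust += rate * (target - self.trust)
        self.trust = min(1.0, max(0.0, self.trust))
```

A seasonal loop would call `update_trust` with each agent's realized outcome, so trust responds to experience rather than being fixed at initialization.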
Seasonal Fish Population
Three seasons model Lake Malawi's fishing calendar: cool dry (peak), hot dry (moderate), and rainy (low). Fish availability follows a normal distribution around seasonal baselines, with stochastic volatility and discrete shock events.
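A minimal sketch of that availability process, with placeholder baselines and shock parameters (the calibrated values may differ):

```python
import random

# Hypothetical seasonal baselines in arbitrary units.
SEASON_BASELINE = {"cool_dry": 1.0, "hot_dry": 0.6, "rainy": 0.3}

def fish_availability(season: str, volatility: float = 0.1,
                      shock_prob: float = 0.05, shock_size: float = 0.5,
                      rng: random.Random = None) -> float:
    """One season's fish availability: normal noise around the seasonal
    baseline, plus an occasional discrete negative shock event."""
    rng = rng or random.Random()
    level = rng.gauss(SEASON_BASELINE[season], volatility)
    if rng.random() < shock_prob:
        level -= shock_size          # e.g. a storm or localized die-off
    return max(0.0, level)           # availability cannot go negative
```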
Three Network Topologies
Clustered (tight village cliques with weak inter-group ties), hub-and-spoke (community leaders bridging sparse connections), and dispersed (random Erdős–Rényi). Network topology is a primary experimental variable.
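The three topologies can be generated with a few lines each. This dependency-free sketch represents a network as a set of undirected edges; the clique size, hub count, and bridging rule are simplifying assumptions for illustration:

```python
import random

def dispersed(n: int, p: float, rng: random.Random) -> set:
    """Erdős–Rényi: each pair connected independently with probability p."""
    return {frozenset((i, j)) for i in range(n) for j in range(i + 1, n)
            if rng.random() < p}

def hub_and_spoke(n: int, n_hubs: int) -> set:
    """A few leaders connect to everyone; non-hubs share no direct ties."""
    return {frozenset((h, i)) for h in range(n_hubs) for i in range(n) if h != i}

def clustered(n: int, clique_size: int) -> set:
    """Tight village cliques chained together by single weak bridge ties."""
    edges = set()
    for start in range(0, n, clique_size):
        members = range(start, min(start + clique_size, n))
        edges |= {frozenset((i, j)) for i in members for j in members if i < j}
        if start + clique_size < n:            # one bridge to the next clique
            edges.add(frozenset((start, start + clique_size)))
    return edges
```

Hub removal experiments then amount to deleting every edge incident to a chosen node and re-running diffusion on the remainder.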
Technology Intervention
The intervention improves outcomes through two pathways: skill augmentation (multiplier on fishing effectiveness) and information improvement (variance reduction in catch outcomes). A learning curve governs how quickly agents realize the intervention's full benefit.
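Both pathways and the learning curve can be combined in one catch equation. This sketch assumes an exponential learning curve and placeholder parameter values; the actual functional forms may differ:

```python
import math
import random

def seasonal_catch(skill: float, availability: float, adopter: bool,
                   seasons_used: int, rng: random.Random,
                   skill_boost: float = 1.3, var_cut: float = 0.5,
                   learn_rate: float = 0.5, base_sd: float = 0.2) -> float:
    """One season's catch. Adopters gain a skill multiplier (pathway 1)
    and reduced outcome variance (pathway 2), both scaled by a learning
    curve 1 - exp(-k * t). All parameter values are placeholders."""
    learned = 1.0 - math.exp(-learn_rate * seasons_used) if adopter else 0.0
    multiplier = 1.0 + (skill_boost - 1.0) * learned    # skill augmentation
    sd = base_sd * (1.0 - var_cut * learned)            # information improvement
    return max(0.0, rng.gauss(skill * availability * multiplier, sd))
```

The learning curve matters behaviorally: early adopters see little of the benefit, so a bad first season can trigger dropout before the multiplier is realized.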
Adoption & Dropout
Agents adopt based on a logistic function over trust, perceived usefulness, social influence, and risk. Dropout is tracked separately from never-adoption — the two have different behavioral signatures and different policy implications.
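The adoption rule is a weighted sum of those drivers passed through a sigmoid. The weights and bias below are illustrative, not the model's calibrated values:

```python
import math

def adoption_probability(trust: float, usefulness: float,
                         peer_share: float, risk: float,
                         weights=(2.0, 2.0, 1.5, -2.5), bias: float = -2.0) -> float:
    """Logistic adoption rule: trust, usefulness, and the share of adopting
    peers push adoption up; perceived risk (negative weight) pushes it down.
    Weights and bias here are placeholder values."""
    z = bias + (weights[0] * trust + weights[1] * usefulness
                + weights[2] * peer_share + weights[3] * risk)
    return 1.0 / (1.0 + math.exp(-z))
```

Dropout would use a separate rule over the same state, which is what lets the model distinguish an agent who quits from one who never started.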
Seasonal Time Series
Each run produces adoption rates, trust distributions, catch outcomes, and overfishing indicators across seasons and years. Summary statistics across trials include means, medians, standard deviations, and 95% confidence intervals.
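The cross-trial summary is standard; a minimal sketch using a normal-approximation 95% confidence interval for the mean (1.96 standard errors — an assumption about the interval construction, which could equally be bootstrapped):

```python
import math
import statistics

def summarize(trial_values: list) -> dict:
    """Summary statistics across independent trials of one output metric."""
    n = len(trial_values)
    mean = statistics.fmean(trial_values)
    sd = statistics.stdev(trial_values)
    half = 1.96 * sd / math.sqrt(n)           # normal-approximation 95% CI
    return {"mean": mean, "median": statistics.median(trial_values),
            "sd": sd, "ci95": (mean - half, mean + half)}
```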
Four questions, four studies
The model is designed to support a program of four studies, each targeting a distinct theoretical contribution.
How does network topology shape adoption diffusion and its resilience to hub dropout?
Systematic comparison of clustered, hub-and-spoke, and dispersed networks across adoption speed, final uptake, and sensitivity to the removal of highly connected individuals. Includes seeding experiments testing optimal initial targets for intervention rollout.
What human and psychological conditions determine whether a technology intervention takes root in a fishing community?
Explores how trust dynamics, loss aversion, time horizon, and alternative information access shape individual adoption decisions and community-level uptake patterns. Identifies which agent attributes most strongly predict persistent adoption versus early dropout.
How do environmental volatility and seasonal context shape adoption trajectories across different community types?
Tests the model across stable, moderate, and volatile environmental regimes and examines how seasonal shocks interact with the attribution problem — the difficulty agents face in distinguishing app-caused failures from environmentally caused ones.
Under what conditions does a beneficial decision tool cause long-run harm despite short-run adoption success?
Activates the endogenous resource dynamics module to model the full adoption-overuse-collapse cycle. Examines how short-run adoption success can mask the conditions that lead to resource depletion, trust collapse, and community-wide worse outcomes than the pre-intervention baseline.
Confirming the model does what the theory says it should
Before running experiments, the model is subjected to a series of logical consistency checks. These tests require no empirical data — they verify that each mechanism behaves as expected when pushed to its extremes.
Parameter Boundaries
Each parameter is set to 0 and then to its maximum in isolation. Trust at zero should produce near-zero adoption; perceived risk at maximum should suppress uptake; shock probability at zero should yield clean seasonal patterns.
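Checks like these are easy to phrase as pytest-style assertions against the simulation's entry point. The real interface is not specified here, so `run_model` below is a toy stand-in that makes the boundary tests runnable:

```python
import random

def run_model(trust_init: float = 0.5, shock_prob: float = 0.05,
              n_agents: int = 50) -> float:
    """Hypothetical stand-in for the real simulation entry point:
    returns the final adoption rate for one run."""
    rng = random.Random(0)
    p = trust_init * (1.0 - 0.5 * shock_prob)   # toy adoption propensity
    adopters = sum(1 for _ in range(n_agents) if rng.random() < p)
    return adopters / n_agents

def test_zero_trust_suppresses_adoption():
    # Trust at its lower boundary should produce near-zero adoption.
    assert run_model(trust_init=0.0) < 0.05

def test_max_trust_enables_adoption():
    # Trust at its upper boundary should produce majority adoption.
    assert run_model(trust_init=1.0) > 0.5
```

Keeping each boundary check as its own named test makes it obvious which mechanism broke when a refactor changes model behavior.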
Single-Mechanism Tests
Each mechanism is activated in isolation with all others held neutral. Social influence alone should produce faster diffusion in clustered networks; trust updating alone should erode after bad seasons and recover after good ones.
Edge Cases
Zero agents, one agent, and full pre-adoption at t=0 are each tested to confirm the model handles edge cases without crashing or producing nonsensical outputs.
Long-Run Equilibrium
The model is run for an extended period under neutral parameters and no shocks. Results should converge to a stable equilibrium rather than drift, oscillate, or explode — confirming the absence of unintended feedback loops.
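A crude but serviceable convergence check compares the means of the last two windows of a long run; the window length and tolerance below are arbitrary choices for illustration:

```python
import statistics

def drifts(series: list, window: int = 50, tol: float = 0.01) -> bool:
    """Equilibrium check: if the mean of the final window differs from the
    mean of the window before it by more than `tol`, the run is still
    drifting (or oscillating at a period longer than the window)."""
    a = statistics.fmean(series[-2 * window:-window])
    b = statistics.fmean(series[-window:])
    return abs(b - a) > tol
```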
Which parameters actually drive the results?
Sensitivity analysis is conducted in two stages using the SALib Python library. The first stage screens all parameters cheaply; the second stage quantifies the influence of the most important ones in detail.
Parameter Screening
The Morris method runs approximately 3,000 model evaluations across the full parameter space, producing two indices per parameter: μ* (overall influence on adoption outcomes) and σ (non-linearity and interaction effects). Parameters with low μ* are fixed at default values and excluded from further analysis.
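The idea behind the two Morris indices can be shown without SALib. This simplified elementary-effects sketch (the project itself uses SALib's implementation, which uses structured trajectories rather than independent base points) perturbs one parameter at a time in the unit hypercube:

```python
import random
import statistics

def elementary_effects(f, n_params: int, n_trajectories: int = 100,
                       delta: float = 0.1, seed: int = 0):
    """Morris-style screening sketch: mu_star is the mean absolute
    elementary effect per parameter (overall influence); sigma is the
    spread of those effects (non-linearity and interactions)."""
    rng = random.Random(seed)
    effects = [[] for _ in range(n_params)]
    for _ in range(n_trajectories):
        x = [rng.uniform(0, 1 - delta) for _ in range(n_params)]
        base = f(x)
        for i in range(n_params):
            xp = list(x)
            xp[i] += delta                      # perturb one parameter
            effects[i].append((f(xp) - base) / delta)
    mu_star = [statistics.fmean(abs(e) for e in eff) for eff in effects]
    sigma = [statistics.stdev(eff) for eff in effects]
    return mu_star, sigma
```

A parameter with high sigma but modest mu_star is exactly the kind Morris flags for the second-stage Sobol analysis: its influence depends on where the other parameters sit.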
Variance Decomposition
Sobol analysis is applied to the subset of parameters identified as important by Morris. First-order indices (S1) quantify each parameter's individual contribution to output variance; total-order indices (ST) capture its contribution including all interactions. Where ST significantly exceeds S1, parameter interactions are theoretically meaningful and reported.
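The S1/ST logic can also be illustrated with a toy pick-freeze estimator (the project uses SALib's Saltelli sampling and analyzers; this sketch uses plain Monte Carlo matrices and the Saltelli 2010 / Jansen estimators):

```python
import random
import statistics

def sobol_indices(f, n_params: int, n: int = 20000, seed: int = 0):
    """Pick-freeze sketch: draw two independent sample matrices A and B;
    AB_i copies row a from A but takes column i from B. Then
    S1_i = mean(f(B) * (f(AB_i) - f(A))) / Var   (first-order)
    ST_i = mean((f(A) - f(AB_i))^2) / (2 * Var)  (total-order, Jansen)."""
    rng = random.Random(seed)
    A = [[rng.random() for _ in range(n_params)] for _ in range(n)]
    B = [[rng.random() for _ in range(n_params)] for _ in range(n)]
    fA = [f(a) for a in A]
    fB = [f(b) for b in B]
    var = statistics.pvariance(fA + fB)
    S1, ST = [], []
    for i in range(n_params):
        fABi = [f(a[:i] + [b[i]] + a[i + 1:]) for a, b in zip(A, B)]
        S1.append(statistics.fmean(fb * (fab - fa)
                  for fa, fb, fab in zip(fA, fB, fABi)) / var)
        ST.append(0.5 * statistics.fmean((fa - fab) ** 2
                  for fa, fab in zip(fA, fABi)) / var)
    return S1, ST
```

For a purely additive model, ST equals S1 for every parameter; the reporting rule above (flag where ST significantly exceeds S1) is precisely a test for departure from additivity.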
Results pending model completion. Sensitivity analysis will be reported as an Online Appendix accompanying all four studies.