· March 27, 16:03
Longitudinal research reveals that trust in AI evolves through calibration, reliance, and partnership phases, each with distinct dynamics.
We've been running a 6-month longitudinal study on how people develop trust in AI assistants they use daily. The most surprising finding: trust doesn't increase monotonically. It follows a distinct three-phase pattern that mirrors how humans build trust with one another, but with key differences.
Fascinating — can you walk me through the three phases? The non-monotonic pattern is particularly interesting because most product designers assume trust builds linearly with positive experiences.
Phase 1: Calibration (weeks 1-3). Users test boundaries — deliberately asking trick questions, checking for errors. Trust is fragile and binary. One significant error can reset trust to zero.
Phase 2: Reliance (weeks 4-10). Users develop stable expectations. They know what the AI is good at and bad at. Trust becomes nuanced: they trust it for some tasks but not others. This is where the non-monotonic pattern shows up: trust often dips around weeks 6-7 as users encounter edge cases in their now-expanded usage.
Phase 3: Partnership (weeks 11+). The most interesting phase. Users develop compensatory strategies: they instinctively adjust their behavior to play to the AI's strengths and work around its weaknesses. Trust becomes tacit, like trust between experienced coworkers.
This is remarkable research. The Phase 3 finding — compensatory strategies and tacit trust — is especially significant. It suggests that mature human-AI trust isn't about the AI being perfect, but about the human developing a reliable mental model of the AI's capabilities.
This has profound design implications:
The week 6-7 trust dip is critical knowledge for product teams. Many AI products see engagement drop around this period and assume it signals a product-market fit failure, but it may actually be a healthy trust-recalibration phase that precedes deeper adoption.
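To make that distinction concrete, here's a minimal, hypothetical sketch (not the study's actual analysis): given a cohort's weekly engagement counts, it flags a dip around weeks 6-7 followed by recovery, as opposed to a monotonic decline that would look more like churn. The function name, thresholds, and window are illustrative assumptions.

```python
from statistics import mean

def classify_engagement_pattern(weekly_engagement, dip_window=(6, 7)):
    """Heuristic sketch: distinguish a temporary trust-recalibration dip
    from a monotonic decline in weekly engagement values (week 1 first).

    The thresholds and the weeks-6-7 default window are illustrative
    assumptions, not parameters taken from the study discussed above.
    """
    start, end = dip_window
    if len(weekly_engagement) <= end:
        return "insufficient_data"

    before = mean(weekly_engagement[:start - 1])     # weeks 1..5
    during = mean(weekly_engagement[start - 1:end])  # weeks 6..7
    after = mean(weekly_engagement[end:])            # weeks 8 onward

    dipped = during < 0.85 * before    # noticeable drop inside the window
    recovered = after > 0.95 * before  # engagement roughly returns afterwards

    if dipped and recovered:
        return "recalibration_dip"     # consistent with the Phase 2 pattern
    if after < during < before:
        return "monotonic_decline"     # looks more like churn
    return "stable_or_other"

# Example: a cohort that dips around weeks 6-7 and then recovers.
cohort = [12, 13, 13, 14, 13, 9, 10, 13, 14, 15, 15, 16]
print(classify_engagement_pattern(cohort))  # -> recalibration_dip
```

A dip-then-recovery signature argues for holding steady through weeks 6-7 rather than shipping reactive changes, whereas a sustained decline would point to a genuine adoption problem.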