Platform comparison
EchoDepth vs iMotions — which emotional AI research platform is right for you?
Both platforms are grounded in FACS facial coding science. The difference is how they are deployed, what infrastructure they require, and what research contexts they are designed for.
By Jonathan Prescott, Cavefish Ltd · Published April 2026
| Dimension | EchoDepth Insight | iMotions |
|---|---|---|
| Science foundation | FACS 44 Action Units + VAD scoring | FACS + multi-modal biometrics (EEG, GSR, eye tracking, cardiac) |
| Deployment | Browser-based, fully remote | Lab-based, physical facility required |
| Hardware required | None — participant's own device camera | Specialist sensors (eye tracker, GSR, EEG headset, biometric vest) |
| Operator requirement | No specialist operator needed | Trained operator required per session |
| Geographic reach | Global — any country, simultaneously | Participants must attend physical facility |
| Study setup time | 48 hours from agreement to session | Weeks for facility booking, hardware calibration, operator scheduling |
| Primary use cases | Commercial research: pharma, FMCG, culture, concept testing, advertising | Academic research, premium brand UX labs, multi-modal clinical studies |
| Output | Structured insight report + VAD timeline + actionable recommendations | Raw multi-modal data for researcher analysis |
| Entry investment | From £3,500 (proof of concept) | Significant hardware + licensing investment |
| GDPR approach | No video retained, AU scores only | Session-dependent — facility controls data protocols |
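The GDPR row above can be made concrete with a sketch of what an AU-scores-only export might look like. The field names and structure here are hypothetical, for illustration only, and are not EchoDepth's actual schema:

```python
import json

# Hypothetical export record: per-frame Action Unit intensities and a
# timestamp, with no video, audio, or raw imagery retained -- only
# derived numeric scores survive the session.
record = {
    "participant_id": "p-017",    # pseudonymous identifier (illustrative)
    "timestamp_ms": 15_200,       # offset into the session
    "action_units": {"AU04": 0.12, "AU12": 0.81},  # intensities, 0-1
}

# Because the stored payload contains only derived scores, there is no
# biometric imagery to retain, secure, or delete after analysis.
payload = json.dumps(record)
```

The design point is that deletion and retention obligations attach to the raw video, which under this approach is never stored in the first place.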
The core difference: remote vs laboratory
iMotions is the standard-bearer for laboratory-based multi-modal biometric research. It integrates eye tracking, galvanic skin response, EEG, facial coding and cardiac monitoring into a unified data stream — producing the most comprehensive physiological picture available from a single research session. For academic researchers, premium UX labs and clinical settings where multi-modal capture is essential, iMotions has no direct equivalent.
EchoDepth Insight is built for a different context: commercial research teams and agencies who need emotional response data from geographically distributed participants, without laboratory infrastructure. Every participant joins via a browser link on their own device. There is no hardware to ship, calibrate or return. There is no facility to book. A study spanning London, Manchester, Berlin and New York can recruit simultaneously and run in parallel.
The practical consequence: an iMotions study with 30 participants across three countries requires three separate facility visits, three operator schedules, three data collection sessions, and three rounds of data export and cleaning before analysis begins. An equivalent EchoDepth study can begin within 48 hours of agreement and run all 30 participants in a single coordinated session.
Same science, different delivery
Both platforms apply FACS-grounded facial Action Unit analysis. Ekman and Friesen's 1978 taxonomy is the foundation of both approaches: the 44 Action Units that describe the component movements of human facial expression are common ground. Both also report results in VAD (Valence, Arousal, Dominance) terms, a dimensional framework descended from Mehrabian and Russell's PAD model (1974) and closely related to Russell's Circumplex Model of Affect (1980).
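To make the relationship between the two frameworks concrete, here is a toy mapping from AU intensities to a VAD point. The AU-to-dimension weights are invented for illustration; real systems, including both platforms discussed here, fit these mappings against labelled data rather than hand-picking them:

```python
from dataclasses import dataclass

# Hypothetical per-frame record: Action Unit intensities (0.0-1.0)
# keyed by AU number from Ekman & Friesen's FACS taxonomy.
AUFrame = dict[int, float]

@dataclass
class VADPoint:
    valence: float    # negative (-1) to positive (+1) affect
    arousal: float    # calm (0) to highly activated (1)
    dominance: float  # submissive (0) to in-control (1)

def score_frame(frame: AUFrame) -> VADPoint:
    """Toy mapping from AU intensities to a VAD point (weights illustrative)."""
    # AU12 (lip corner puller, smiling) raises valence;
    # AU4 (brow lowerer, frowning) lowers it.
    valence = frame.get(12, 0.0) - frame.get(4, 0.0)
    # AU5 (upper lid raiser) and AU26 (jaw drop) suggest activation.
    arousal = 0.5 * (frame.get(5, 0.0) + frame.get(26, 0.0))
    # AU4 can also accompany assertive displays in some codings.
    dominance = 0.5 * frame.get(4, 0.0)
    return VADPoint(valence, arousal, dominance)

# A broad smile with slightly raised upper lids:
point = score_frame({12: 0.9, 5: 0.3})
```

Scoring each frame this way is what produces a VAD timeline: a per-frame sequence of points that can be plotted against the stimulus the participant was watching.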
The difference is capture context. iMotions uses dedicated research-grade cameras in a controlled environment with consistent lighting — conditions that maximise data quality per frame. EchoDepth uses the participant's own device camera in their natural environment — which introduces more variability per frame but enables sample sizes and geographic reach that facility-based research cannot match. For most commercial research questions, the directional insight produced from 30 remote participants outweighs the marginal data quality advantage of 10 facility participants.
When iMotions is the right choice
iMotions is the appropriate platform when your research question requires biometric modalities that cannot be captured remotely: EEG (neural activity), research-grade eye tracking with pupillometry and fixation heatmaps, galvanic skin response, cardiac monitoring, or synchronised multi-sensor fusion. These signals require physical sensors in contact with or precisely calibrated to the participant, plus controlled environmental conditions.
Academic research, clinical studies, and premium brand experience labs that have existing facility infrastructure and operator capability are natural iMotions contexts. If your institution already has a biometric lab and you need the full multi-modal signal, EchoDepth is not a replacement.
When EchoDepth is the right choice
EchoDepth is the right choice when your primary research question can be answered from facial coding and VAD data — and when your participants cannot or should not attend a physical facility. This covers the majority of commercial research contexts: pharma patient and HCP interviews, FMCG concept testing, advertising pre-testing, culture survey emotional analysis, and people analytics.
EchoDepth also differs from iMotions in what it produces at the end. iMotions delivers raw multi-modal data for researcher analysis — the interpretation is the researcher's responsibility. EchoDepth delivers a structured insight report: the emotional data translated into specific, actionable recommendations the research team or board can act on. The output is knowledge, not data.
Client perspective
"Using EchoDepth from Cavefish allows us to validate ideas quickly, minimising the risk of launching a product or idea."
Gethin Thomas, CEO, Iterate
EchoDepth Insight
See what EchoDepth surfaces on your research challenge.
Book a 30-minute discovery call. We will show you exactly what emotional AI would add to your current research methodology — and whether EchoDepth or iMotions is the right fit for your specific needs.