| Dimension | EchoDepth Insight | iMotions |
|---|---|---|
| Science foundation | FACS 44 Action Units + VAD scoring | FACS + multi-modal biometrics (EEG, GSR, eye tracking, cardiac) |
| Deployment | Browser-based, fully remote | Lab-based, physical facility required |
| Hardware required | None — participant's own device camera | Specialist sensors (eye tracker, GSR, EEG headset, biometric vest) |
| Operator requirement | No specialist operator needed | Trained operator required per session |
| Geographic reach | Global — any country, simultaneously | Participants must attend physical facility |
| Study setup time | 48 hours from agreement to session | Weeks for facility booking, hardware calibration, operator scheduling |
| Primary use cases | Commercial research: pharma, FMCG, culture, concept testing, advertising | Academic research, premium brand UX labs, multi-modal clinical studies |
| Output | Structured insight report + VAD timeline + actionable recommendations | Raw multi-modal data for researcher analysis |
| Entry investment | From £3,500 (proof of concept) | Significant hardware + licensing investment |
| GDPR approach | No video retained, AU scores only | Session-dependent — facility controls data protocols |

The core difference: remote vs laboratory

iMotions is the standard-bearer for laboratory-based multi-modal biometric research. It integrates eye tracking, galvanic skin response, EEG, facial coding and cardiac monitoring into a unified data stream — producing the most comprehensive physiological picture available from a single research session. For academic researchers, premium UX labs and clinical settings where multi-modal capture is essential, iMotions has no direct equivalent.

EchoDepth Insight is built for a different context: commercial research teams and agencies who need emotional response data from geographically distributed participants, without laboratory infrastructure. Every participant joins via a browser link on their own device. There is no hardware to ship, calibrate or return. There is no facility to book. A study spanning London, Manchester, Berlin and New York can recruit simultaneously and run in parallel.

The practical consequence: an iMotions study with 30 participants across three countries requires three separate facility visits, three operator schedules, three data collection sessions, and three rounds of data export and cleaning before analysis begins. An equivalent EchoDepth study can begin within 48 hours of agreement and run all 30 participants in a single coordinated session.

Same science, different delivery

Both platforms apply FACS-grounded facial Action Unit analysis. Ekman and Friesen's 1978 taxonomy is the foundation of both approaches — the 44 Action Units that encode human facial expression are common ground. VAD (Valence, Arousal, Dominance) scoring is the output framework for both, building on the dimensional models of Mehrabian and Russell (1974) and Russell's circumplex model of affect (1980).
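To make the AU-to-VAD pipeline concrete, here is an illustrative sketch of how Action Unit intensities might be combined into a VAD estimate. The weights and the linear model are placeholders invented for illustration — neither platform publishes its actual scoring model — but the FACS AU numbering (AU4 brow lowerer, AU5 upper lid raiser, AU12 lip corner puller) is standard.

```python
# Hypothetical AU-to-VAD mapping. Weights are illustrative placeholders,
# not EchoDepth's or iMotions' actual model.

# AU intensities (0-1 scale) detected for one video frame.
frame_aus = {"AU04": 0.7, "AU05": 0.4, "AU12": 0.1}

# Illustrative linear weights: each AU's contribution to each VAD dimension.
weights = {
    "AU04": {"valence": -0.6, "arousal": 0.3, "dominance": 0.2},  # brow lowerer
    "AU05": {"valence": 0.0, "arousal": 0.5, "dominance": 0.1},   # upper lid raiser
    "AU12": {"valence": 0.8, "arousal": 0.2, "dominance": 0.1},   # lip corner puller
}

def vad_score(aus):
    """Weighted sum of AU intensities per VAD dimension."""
    score = {"valence": 0.0, "arousal": 0.0, "dominance": 0.0}
    for au, intensity in aus.items():
        for dim, w in weights.get(au, {}).items():
            score[dim] += intensity * w
    return score

print(vad_score(frame_aus))
```

Repeating this per frame is what produces the VAD timeline both platforms output: a frame-by-frame emotional trajectory rather than a single summary number.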

The difference is capture context. iMotions uses dedicated research-grade cameras in a controlled environment with consistent lighting — conditions that maximise data quality per frame. EchoDepth uses the participant's own device camera in their natural environment — which introduces more variability per frame but enables sample sizes and geographic reach that facility-based research cannot match. For most commercial research questions, the directional insight produced from 30 remote participants outweighs the marginal data quality advantage of 10 facility participants.
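The sample-size-versus-noise trade-off above can be sketched numerically. The noise figures below are assumptions chosen for illustration, not measured values from either platform; the point is only that the standard error of a group-level estimate shrinks with the square root of the sample size, so a larger, noisier remote sample can still yield a tighter estimate than a smaller lab sample.

```python
import math

# Assumed per-participant noise (standard deviation of a valence estimate).
lab_sd, lab_n = 0.10, 10        # assumption: cleaner lab signal, small sample
remote_sd, remote_n = 0.15, 30  # assumption: noisier webcam signal, larger sample

# Standard error of the mean: sd / sqrt(n).
lab_se = lab_sd / math.sqrt(lab_n)
remote_se = remote_sd / math.sqrt(remote_n)

print(f"lab SE: {lab_se:.4f}, remote SE: {remote_se:.4f}")
```

Under these assumed numbers the 30-participant remote study produces the smaller standard error, despite each individual measurement being noisier.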

When iMotions is the right choice

iMotions is the appropriate platform when your research question requires biometric data modalities that cannot be captured remotely: EEG (neural activity), full pupillometry with fixation heatmaps, galvanic skin response, cardiac monitoring, or synchronised multi-sensor fusion. These signals require physical sensors in contact with the participant and controlled environmental conditions.

Academic research, clinical studies, and premium brand experience labs that have existing facility infrastructure and operator capability are natural iMotions contexts. If your institution already has a biometric lab and you need the full multi-modal signal, EchoDepth is not a replacement.

When EchoDepth is the right choice

EchoDepth is the right choice when your primary research question can be answered from facial coding and VAD data — and when your participants cannot or should not attend a physical facility. This covers the majority of commercial research contexts: pharma patient and HCP interviews, FMCG concept testing, advertising pre-testing, culture survey emotional analysis, and people analytics.

EchoDepth also differs from iMotions in what it produces at the end. iMotions delivers raw multi-modal data for researcher analysis — the interpretation is the researcher's responsibility. EchoDepth delivers a structured insight report: the emotional data translated into specific, actionable recommendations the research team or board can act on. The output is knowledge, not data.
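The "knowledge, not data" distinction can be illustrated with a hypothetical report shape. The field names and values below are invented for illustration — they are not EchoDepth's actual export schema — but they show the difference between raw per-frame data and a segment-level summary paired with recommendations.

```python
import json

# Hypothetical structured insight report (illustrative schema only).
report = {
    "study": "Concept test: version A",
    "vad_timeline": [  # per-segment summaries rather than raw frame data
        {"segment": "opening claim", "valence": 0.42, "arousal": 0.31},
        {"segment": "price reveal", "valence": -0.18, "arousal": 0.55},
    ],
    "recommendations": [
        "Lead with the opening claim; it drives the strongest positive response.",
        "Reframe the price reveal, which triggers a negative valence dip.",
    ],
}

print(json.dumps(report, indent=2))
```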

Client perspective

"Using EchoDepth from Cavefish allows us to validate ideas quickly, minimising the risk of launching a product or idea."

Gethin Thomas

CEO, Iterate

Common questions about EchoDepth vs iMotions

What is the main difference between EchoDepth and iMotions?
The fundamental difference is deployment. iMotions is a laboratory-based platform requiring physical hardware — eye trackers, GSR sensors, EEG headsets — and a trained operator. EchoDepth runs entirely in-browser on any standard camera-equipped device. No hardware, no facility, no specialist operator. A researcher using EchoDepth can run participants across multiple countries simultaneously; iMotions requires each participant to attend a physical facility.
Do both platforms use the same facial coding science?
Yes. Both are grounded in FACS (Facial Action Coding System, Ekman & Friesen 1978) and produce Action Unit activations. Both can output VAD (Valence, Arousal, Dominance) scores, building on the dimensional models of Mehrabian and Russell (1974) and Russell's circumplex of affect (1980). The scientific foundation is the same. The difference is capture environment — controlled laboratory vs participant's own device in their natural location.
When should I use iMotions instead of EchoDepth?
Use iMotions when your research requires multi-modal biometric data that cannot be captured remotely: EEG, galvanic skin response, cardiac monitoring, or full fixation-point eye tracking. These signals require physical sensors and a controlled environment. If your research question can be answered from facial coding and VAD data alone, and participants are geographically distributed, EchoDepth is the more practical and cost-effective choice.
Is EchoDepth cheaper than iMotions?
EchoDepth engagements start from £3,500 for a scoped proof-of-concept, with annual subscriptions from £12,000/year and consultancy from £1,800/day. iMotions requires significant hardware investment, licensing and facility costs. For research questions that do not require multi-sensor biometrics, EchoDepth is substantially lower in total cost of study — and faster to field.
Can EchoDepth replace iMotions for pharma research?
For pharma patient interviews, HCP engagement research, drug messaging evaluation and marketing material testing — which are primarily facial coding and emotional response applications — yes, EchoDepth is a direct alternative to iMotions. For pharma clinical studies requiring simultaneous EEG and cardiac monitoring, iMotions covers modalities that EchoDepth does not. Most pharma market research falls in the former category.

EchoDepth Insight

See what EchoDepth surfaces on your research challenge.

Book a 30-minute discovery call. We will show you exactly what emotional AI would add to your current research methodology — and whether EchoDepth or iMotions is the right fit for your specific needs.