As posted on LBB https://lbbonline.com/news/Attention-Metrics-within-Media-Mix-Modelling
In Canada, “attention” is beginning to shift from a media-planning curiosity to a practical component in how agencies evaluate paid media quality. Industry groups have been pushing standardisation, with IAB Canada highlighting the IAB/MRC work on attention measurement guidelines, a signal that attention is being treated as an emerging currency rather than a one-off vendor KPI.
Canadian advertisers face the same realities as our global friends — fragmented audiences, cross-platform delivery, and declining signal clarity — while also contending with bilingual creative requirements and uneven scale across provinces. Attention-based metrics offer a way to compare the “quality of exposure” across placements that otherwise appear identical in a reach-and-frequency report. The IAB’s attention framework underscores that attention is not an outcome by itself, but an input that can help explain why some impressions work harder than others.
Most attention products translate a mix of signals—such as viewable time, on-screen share, audibility (for video), and other engagement proxies—into a standardised score. Examples include DoubleVerify’s “DV Authentic Attention” and Adelaide’s AU metric, both designed for near-real-time optimisation and cross-placement comparison.
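To make the idea of a composite score concrete, the snippet below is a minimal sketch of how several engagement proxies might be collapsed into one normalised index. The actual formulas behind products such as DV Authentic Attention and Adelaide’s AU metric are proprietary, so the signal names, weights, and five-second baseline here are assumptions chosen purely for illustration.

```python
# Illustrative only: real attention products calibrate their scores against
# eye-tracking or outcome panels; the weights and baseline below are assumptions.
from dataclasses import dataclass


@dataclass
class ExposureSignals:
    viewable_time_s: float   # seconds the ad was viewable on screen
    on_screen_share: float   # 0-1 share of the ad's pixels in view
    audible_share: float     # 0-1 share of duration with audio on (video only)


def attention_index(sig: ExposureSignals, baseline_viewable_time_s: float = 5.0) -> float:
    """Collapse engagement proxies into a single 0-100 style score (hypothetical weights)."""
    time_component = min(sig.viewable_time_s / baseline_viewable_time_s, 1.0)
    raw = 0.5 * time_component + 0.3 * sig.on_screen_share + 0.2 * sig.audible_share
    return round(100 * raw, 1)


print(attention_index(ExposureSignals(viewable_time_s=3.0,
                                      on_screen_share=0.9,
                                      audible_share=0.4)))  # -> 65.0
```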
Traditional Media Mix Modelling (MMM) often treats each channel as a single variable (e.g., “Programmatic Display Spend” or “Online Video Impressions”). That can mask huge variance in placement quality within the same channel. Attention metrics are being adopted to make MMM more sensitive to this variance by converting media inputs into *attention-weighted* inputs, essentially upgrading the independent variables the model learns from. Adelaide explicitly frames attention as a means to modernise MMM by accounting for differences in placement quality that legacy models struggle to capture.
1. Attention-weighted impressions (or GRPs)
Agencies transform impression-based inputs into “attentive impressions” by multiplying delivered impressions by an attention index (often normalised against a baseline). The model then estimates the sales/brand response per attentive impression rather than per raw impression; a minimal sketch of this transformation appears after this list.
2. Quality multipliers by sub-channel
Instead of one coefficient for “online video,” agencies split inventory into tiers (e.g., high/medium/low attention) using vendor scoring, then feed each tier as a separate variable. This can reveal that a smaller volume of high-attention inventory outperforms a larger volume of low-attention inventory at the same spend (see the second sketch after this list).
3. Interaction terms with reach/frequency
Some MMM setups treat attention as a moderator: the effectiveness of frequency, for example, depends on whether exposures were actually watched, not merely served. This is particularly useful for CTV, where “ad served” and “ad watched” can diverge in living-room environments; the third sketch after this list illustrates this as an interaction term.
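To show what approach 1 looks like in practice, here is a minimal sketch assuming a weekly MMM table with raw online-video impressions and a vendor attention index normalised to 100. The column names, synthetic data, and plain least-squares fit are illustrative assumptions, not a description of any specific vendor’s or agency’s model.

```python
# Approach 1: attention-weighted ("attentive") impressions as an MMM input.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(7)
weeks = 104
df = pd.DataFrame({
    "olv_impressions": rng.uniform(1e6, 5e6, weeks),      # raw delivered impressions
    "olv_attention_index": rng.uniform(60, 140, weeks),   # vendor score, 100 = baseline
})

# Attentive impressions: raw impressions scaled by attention relative to baseline.
df["olv_attentive_impressions"] = df["olv_impressions"] * df["olv_attention_index"] / 100.0

# Synthetic response series so the example runs end to end.
df["sales"] = 50_000 + 0.004 * df["olv_attentive_impressions"] + rng.normal(0, 2_000, weeks)

# The model now estimates response per *attentive* impression, not per raw impression.
fit = sm.OLS(df["sales"], sm.add_constant(df[["olv_attentive_impressions"]])).fit()
print(fit.params)
```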
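Approach 2 can be sketched the same way: the channel is split into attention tiers and each tier gets its own coefficient. The tier names, thresholds, and synthetic response below are assumptions used only to show how this structure surfaces differences in placement quality.

```python
# Approach 2: one variable (and one coefficient) per attention tier.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(11)
weeks = 104
tiers = pd.DataFrame({
    "olv_high_attn_impr": rng.uniform(2e5, 8e5, weeks),   # smaller, high-attention inventory
    "olv_mid_attn_impr": rng.uniform(5e5, 1.5e6, weeks),
    "olv_low_attn_impr": rng.uniform(1e6, 3e6, weeks),    # larger, low-attention inventory
})

# Synthetic response in which high-attention inventory works hardest per impression.
sales = (40_000
         + 0.010 * tiers["olv_high_attn_impr"]
         + 0.004 * tiers["olv_mid_attn_impr"]
         + 0.001 * tiers["olv_low_attn_impr"]
         + rng.normal(0, 2_000, weeks))

# Separate coefficients make the quality difference visible to the planner.
fit = sm.OLS(sales, sm.add_constant(tiers)).fit()
print(fit.params)  # expect high > mid > low response per impression
```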
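And approach 3 as an interaction term, where frequency only pays off to the extent that exposures were actually watched. The “watched share” variable stands in for whatever attended-exposure signal a measurement partner supplies; the column names and synthetic data are assumptions.

```python
# Approach 3: attention as a moderator of CTV frequency via an interaction term.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(23)
weeks = 104
df = pd.DataFrame({
    "ctv_avg_frequency": rng.uniform(1.0, 6.0, weeks),    # ads served per household
    "ctv_watched_share": rng.uniform(0.3, 0.95, weeks),   # share of served ads actually watched
})
# Interaction term: attended frequency, i.e. frequency scaled by watched share.
df["freq_x_watched"] = df["ctv_avg_frequency"] * df["ctv_watched_share"]

# Synthetic response: most of the effect flows through attended frequency.
sales = (30_000
         + 1_500 * df["freq_x_watched"]
         + 200 * df["ctv_avg_frequency"]
         + rng.normal(0, 1_000, weeks))

X = sm.add_constant(df[["ctv_avg_frequency", "ctv_watched_share", "freq_x_watched"]])
fit = sm.OLS(sales, X).fit()
print(fit.params)  # the interaction coefficient captures the moderation
```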
The practical benefit is not just prettier dashboards; it’s a more actionable MMM that can recommend which placements to buy within a specific channel—not merely how much budget to assign to the channel. The risk is also clear: attention metrics vary by provider and methodology, which is why agencies are watching industry guidelines closely and treating attention as a model input to be validated, not a universal truth.
At Involved Media Canada we always want to bring our clients the latest, most advanced thinking, while also ensuring the methodology and practices behind it are tested and validated. Being on the bleeding edge can create a competitive advantage, but only when the underlying tools and processes are solid, which is why validating them remains our highest priority.