Adoption and Engagement for CSMs: In-Product Metrics, EBR Cadence, Expansion Identification in 2026
In short
Adoption and engagement is the steady-state phase of the CSM motion: months four through twelve of the first year, when onboarding is done and the renewal isn't yet in view. The mid-level CSM motion is running the standard cadence (monthly check-ins, quarterly EBRs, ad-hoc training) and reporting engagement metrics. The senior craft is reading engagement data for two distinct signals at the same time: churn risk (engagement softening below the predicted curve) and expansion opportunity (engagement showing patterns that indicate readiness for additional seats, modules, or use cases). Adoption-and-engagement work that doesn't surface both signals is missing half the senior bar.
Key takeaways
- The single most useful adoption metric is depth-of-use per active user, not total user count. A customer with 200 light users adopting two features each is more at risk than a customer with 50 deep users adopting eight features each, even though the headline adoption number is higher. The Gainsight product-adoption guide documents the pattern; the senior craft is building depth-weighted adoption metrics, not just user counts.
- EBR cadence stays quarterly through the steady-state phase for strategic accounts, but the agenda evolves. EBRs in months four through twelve focus on outcome trajectory and expansion identification, not initial setup or training. The executive-business-reviews deep-skill covers EBR craft in depth; the adoption-phase specific note is that the EBR is the formal venue where expansion gets teed up, not where it gets pitched.
- Expansion identification is a two-signal problem. The first signal is product-side: usage patterns that indicate the customer is ready (high depth-of-use, additional teams asking for access, hitting tier limits). The second signal is relationship-side: champion advocating for expansion internally, exec sponsor referencing the product in board narrative, customer asking unprompted about additional capabilities. Both signals together are the green light; one signal alone usually means the customer isn't ready and the CSM shouldn't push.
- The adoption-curve gap between mid-level and senior CSMs shows up in the mid-quarter intervention. A mid-level CSM looks at adoption monthly and reacts at the EBR. A senior CSM looks at adoption weekly and intervenes the moment a leading indicator drops, before the lagging metric crosses a threshold. The compensation data on the levels.fyi Customer Success track (median total compensation around $132,000 as of May 2026, with senior bands meaningfully higher) reflects this discipline.
- Health-score updates during the steady-state phase are weekly, not quarterly. The adoption-phase health score updates with every data refresh and the CSM scans it before every customer interaction. The CS platform and operations deep-skill covers the platform side; the adoption-phase operational note is that the CSM checks the score before sending any message to the champion.
- Tech-touch adoption motion is its own craft. Long-tail accounts (typically the bulk of the book by account count) get automated adoption nudges (in-product messages, email sequences, pre-recorded EBRs) rather than a dedicated CSM. Senior CSMs at tech-touch-heavy companies (Snowflake, HubSpot; see hub spokes) are evaluated as much on automation design as on individual relationships, because the automation is what scales the motion across thousands of accounts.
- Engagement metrics that the customer cares about and engagement metrics that the vendor cares about are usually different. The vendor cares about feature adoption, login frequency, and ticket counts; the customer cares about whether the business outcome they bought the product for is materializing. The senior craft is reporting in the customer's vocabulary at every interaction. Customers who get vendor-vocabulary reports treat them as overhead; customers who get outcome-vocabulary reports treat them as evidence the relationship is working.
What 'adoption' actually means in the steady state
Adoption is one of the most overloaded terms in the CSM vocabulary, and the overload is a problem. Different stakeholders use the word to mean three different things, and a CSM who isn't tracking which version of adoption is being discussed in a given conversation ends up giving the wrong answer.
Definition 1: Provisioning adoption. The number of users who have been set up with access. This is the easiest to measure and the least useful as a leading indicator. A customer can provision 100 users and have only 20 of them log in; the headline adoption number looks fine while the underlying engagement is weak.
Definition 2: Active adoption. The number of users actively using the product (defined by the company's MAU or WAU metric). More useful than provisioning adoption, but still a count metric: it doesn't tell you whether those active users are getting value or just logging in out of habit.
Definition 3: Depth-of-use adoption. The breadth and depth of features each active user is engaging with, weighted by the features that drive the customer's business outcome. This is the definition that correlates with renewal outcomes. The Gainsight product-adoption guide calls this 'value-weighted engagement'; the working-CSM version is a depth-weighted score that maps each feature to its contribution to the named business outcome from the success plan.
The senior CSM tracks all three definitions and reports the right one in the right context. Depth-of-use is the one that matters at the EBR; active adoption is the one that matters in the weekly CSM scrum; provisioning adoption is the one that matters when the customer's IT team is auditing licenses. Confusing the three is the most common junior failure mode in adoption reporting.
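As an illustration of definition 3, here is a minimal sketch of a depth-weighted adoption score in Python. The feature names, weights, and event format are hypothetical; in a real implementation the weights come from mapping each feature to the customer's named business outcome in the success plan, and the events come from the company's product-analytics store.

```python
# Minimal sketch of a depth-weighted (value-weighted) adoption score.
# Feature names, weights, and the event format are hypothetical illustrations.

from collections import defaultdict

# Hypothetical weight per feature: its contribution to the customer's named outcome.
FEATURE_WEIGHTS = {
    "dashboards": 0.10,
    "alerting": 0.25,
    "api_integration": 0.30,
    "workflow_automation": 0.35,
}

def depth_weighted_score(events, active_users):
    """Average outcome-weighted feature coverage per active user (0..1).

    events: iterable of (user_id, feature) tuples for the period.
    active_users: set of user_ids counted as active (e.g., WAU).
    """
    features_by_user = defaultdict(set)
    for user_id, feature in events:
        if user_id in active_users and feature in FEATURE_WEIGHTS:
            features_by_user[user_id].add(feature)

    if not active_users:
        return 0.0

    total = sum(
        sum(FEATURE_WEIGHTS[f] for f in features_by_user[u])
        for u in active_users
    )
    return total / len(active_users)

# Example: 200 light users on two features vs. 50 deep users on every weighted feature.
light_users = {f"l{i}" for i in range(200)}
light_events = [(u, f) for u in light_users for f in ("dashboards", "alerting")]

deep_users = {f"d{i}" for i in range(50)}
deep_events = [(u, f) for u in deep_users for f in FEATURE_WEIGHTS]

print(round(depth_weighted_score(light_events, light_users), 2))  # lower depth (~0.35)
print(round(depth_weighted_score(deep_events, deep_users), 2))    # higher depth (~1.0)
```

The larger account wins on the headline user count but loses on depth, which is the pattern the takeaway above describes.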
Reading the engagement data for two signals
The senior craft of adoption-and-engagement is reading engagement data for two signals at the same time: churn risk and expansion opportunity. Most CSMs are good at one and weak at the other, and the weakness shows up in the renewal numbers.
The churn-risk signal is engagement softening below the predicted curve. If a customer is on track to hit their depth-of-use target by month nine and the curve flattens at month six, that's a leading indicator. The save play (covered in the churn-prevention deep-skill) starts then, not at the next scheduled EBR.
The expansion-opportunity signal is engagement showing patterns that indicate readiness for more. Three common patterns: (1) the customer is hitting tier limits (seats, queries, storage) and experiencing them as friction; (2) additional teams are asking for access through the champion (the champion is fielding internal requests); (3) usage patterns indicate the customer is solving problems that adjacent modules they haven't bought yet would address more directly. Each of these is a tee-up for an expansion conversation at the next EBR; the renewal-and-expansion deep-skill covers the commercial side.
Mid-level CSMs see one signal at a time. They notice churn risk because their leadership measures GRR, or they notice expansion because their AE asks. Senior CSMs see both signals simultaneously because they review the engagement data with both questions in mind: is this customer at risk, and is this customer ready for more? The discipline shows up in the compensation curve: the senior bands of the levels.fyi Customer Success track are populated by CSMs who can deliver both retention and expansion within the same book.
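A minimal sketch of the two-signal read follows, under hypothetical thresholds, curve shape, and field names. The point is that one weekly scan asks both questions (is this account at risk, and is it ready for more?) against the same engagement data; the relationship-side expansion signal stays a human judgment, so the sketch covers only the product-side half.

```python
# Minimal sketch: scan one account's engagement data for both signals at once.
# Thresholds, the predicted-curve shape, and field names are hypothetical.

from dataclasses import dataclass

@dataclass
class AccountSnapshot:
    month: int                     # months since contract start
    depth_score: float             # depth-weighted adoption score, 0..1
    target_at_month_9: float       # depth target committed in the success plan
    tier_limit_utilization: float  # fraction of purchased seats/queries in use
    pending_access_requests: int   # teams asking the champion for access

def predicted_depth(month: int, target: float, target_month: int = 9) -> float:
    """Hypothetical linear ramp from 0 at month 0 to the target at month 9."""
    return min(1.0, target * month / target_month)

def scan(account: AccountSnapshot) -> dict:
    expected = predicted_depth(account.month, account.target_at_month_9)
    churn_risk = account.depth_score < 0.8 * expected   # softening below the curve
    expansion_ready = (
        account.tier_limit_utilization >= 0.9            # bumping tier limits
        or account.pending_access_requests >= 2          # other teams asking in
    )
    return {"churn_risk": churn_risk, "expansion_ready": expansion_ready}

# A flat depth curve at month six flags risk even if headline usage looks fine.
print(scan(AccountSnapshot(month=6, depth_score=0.30, target_at_month_9=0.70,
                           tier_limit_utilization=0.55, pending_access_requests=0)))
```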
EBR cadence in the steady-state phase
The EBR (covered in detail in the executive-business-reviews deep-skill) stays quarterly through the steady-state phase for strategic accounts, but the agenda evolves quarter over quarter. The EBR in month three looks different from the EBR in month nine; running the same agenda for both is a mistake.
Month 3 EBR (post-onboarding): outcome-validation EBR. The agenda focuses on the success plan from onboarding (covered in the onboarding deep-skill), the milestones hit and missed, and a recommitment to the year-one outcome trajectory. This EBR is where the success plan becomes a live document; if the outcome definition needs adjustment based on what onboarding revealed, this is the EBR where the adjustment gets ratified.
Month 6 EBR: trajectory-and-expansion EBR. The agenda focuses on outcome trajectory at midyear and expansion identification. By month six the customer should be hitting the value-weighted engagement targets and the CSM should be reading the engagement data for expansion signals. This EBR is typically the first one where expansion gets teed up if the signals are present.
Month 9 EBR: pre-renewal EBR. The agenda focuses on the upcoming renewal conversation: forecast, expansion-or-flat-renewal decision, multi-year discussion if applicable. The renewal-and-expansion deep-skill covers the renewal cadence in detail; the month-9 EBR is the venue where the exec sponsor gets formally aligned on the renewal forecast before the AE engages procurement.
Month 12 EBR: renewal celebration + year-2 success plan. If the renewal lands, the month-12 EBR celebrates the win and authors the year-2 success plan. If the renewal didn't land, the month-12 conversation is harder and the save plays from churn-prevention are running in parallel.
Tech-touch adoption (and why it's its own craft)
The high-touch adoption motion described above scales for strategic accounts and most mid-market accounts. It does not scale for the long tail of smaller accounts, which typically make up the bulk of a CSM book by account count even when they're a minority of the ARR. Tech-touch adoption is the scaled motion: automated nudges, in-product messages, email sequences, pre-recorded EBRs, self-serve resources.
Designing a strong tech-touch motion is a real craft and most first-pass tech-touch programs are weak. Three structural decisions matter:
One: tier the motion by signal, not by ARR. Tech-touch shouldn't mean 'no human contact'; it should mean 'no human contact unless a specific signal triggers it.' When the depth-of-use score for a tech-touch account drops below a threshold, a human CSM gets paged for a one-time intervention (a minimal sketch of this trigger pattern follows the list). Companies that treat tech-touch as a hard no-contact rule end up losing the long tail to silent churn.
Two: design the automation around the success plan, not around the product. Tech-touch programs that send feature-of-the-week emails to every account in the long tail are noise; tech-touch programs that send outcome-tied messages (based on the customer's named business outcome from the abbreviated success plan) get engagement. Building this requires the CS platform infrastructure covered in the cs-platform-and-operations deep-skill.
Three: measure tech-touch programs by retention, not engagement. The easy mistake is measuring tech-touch by email open rates, in-product message clicks, or video-completion rates. The real measure is whether the tech-touch cohort retains at comparable rates to the high-touch cohort, controlling for ARR and product complexity. Companies that measure engagement instead of retention build elaborate tech-touch programs that everyone ignores at renewal.
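As referenced in the first decision above, here is a minimal sketch of signal-triggered escalation; the threshold, field names, and paging hook are hypothetical.

```python
# Minimal sketch of signal-triggered escalation for tech-touch accounts.
# The threshold, field names, and notification hook are hypothetical; the point
# is that low ARR changes the default motion, not the rule that a dropping
# depth-of-use score earns a one-time human intervention.

DEPTH_DROP_THRESHOLD = 0.15   # week-over-week drop that warrants a human touch

def needs_human_intervention(prev_depth: float, curr_depth: float) -> bool:
    return (prev_depth - curr_depth) >= DEPTH_DROP_THRESHOLD

def route_account(account_id: str, prev_depth: float, curr_depth: float) -> str:
    if needs_human_intervention(prev_depth, curr_depth):
        # In a real CS platform this would open a CTA or page the owning CSM.
        return f"page_csm:{account_id}"
    # Otherwise the account stays in the automated nudge sequence.
    return f"automated_nudges:{account_id}"

print(route_account("acct-104", prev_depth=0.62, curr_depth=0.41))  # page_csm:acct-104
print(route_account("acct-221", prev_depth=0.30, curr_depth=0.28))  # automated_nudges:acct-221
```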
Senior CSMs at tech-touch-heavy companies (Snowflake, HubSpot, Datadog; see hub spokes) are evaluated as much on automation design as on individual relationships. RepVue publishes broad professional-development and culture ratings across sales and CS orgs; the observation about automation-roadmap maturity that follows is author analysis applied to those signals, not a category RepVue explicitly publishes. Companies that treat tech-touch as a real craft (with dedicated CS Ops partners and explicit automation roadmaps) tend to score higher on the broader professional-development and tenure indicators than companies that treat it as an afterthought.
Frequently asked questions
- What's the difference between adoption and engagement?
- Adoption is whether users have started using the product; engagement is the depth and frequency of that use over time. Adoption is a one-time threshold (you adopt the product when you start using it); engagement is an ongoing measure (engagement can rise and fall after adoption is complete). Most CSM dashboards conflate the two; the senior craft is reporting them separately because they tell different stories at different points in the lifecycle.
- How often should a CSM review adoption metrics?
- Weekly for strategic accounts, monthly for mid-market accounts, and on-trigger for tech-touch accounts. Weekly review is the senior bar because leading indicators move faster than the monthly EBR cycle and waiting for monthly review means missing the intervention window. Most junior CSMs review monthly because that's the cadence the platform defaults to; senior CSMs override the default and review weekly even when no alarm has fired.
- What does an adoption red flag look like?
- Three red-flag patterns. (1) Depth-of-use drop: the customer was on a steady upward depth curve and the curve flattens or drops within a single month. (2) Active-user drop: the WAU or MAU count drops materially without a known cause (layoff, reorg, seasonal). (3) Champion silence: the champion stops responding within their normal cadence even when headline metrics look fine. Each red flag triggers a save play from the churn-prevention deep-skill rather than a wait-and-see response; a minimal sketch of the three checks follows the FAQ list.
- How do you identify expansion opportunities from engagement data?
- Three patterns are the most reliable. (1) The customer is hitting tier limits (seats, queries, storage, API throughput) and experiencing them as friction. (2) Additional teams or business units are asking the champion for access; the champion is fielding internal requests. (3) The customer is using the product to solve problems that an adjacent module addresses more directly. Each pattern is a tee-up for an expansion conversation at the next EBR, not a hard pitch in a regular check-in. The renewal-and-expansion deep-skill covers the commercial conversation craft.
- Should expansion conversations happen mid-quarter or at the EBR?
- The EBR is the formal venue where expansion gets teed up; the mid-quarter is where the tee-up gets prepared. The structural mistake is hard-pitching expansion in a regular CSM check-in; champions read it as a sales pitch interrupting a relationship conversation, and the relationship takes the damage. The right pattern: spot the signal mid-quarter, do the discovery work mid-quarter (what additional outcome would the expansion address, who internally is asking, what's the budget context), and present at the EBR with the AE in the room for the commercial portion.
- How do you handle a customer with declining adoption?
- Five-step playbook. (1) Diagnose: is the decline due to a product issue, a customer-side issue (layoff, reorg, priority shift), or a champion-side issue (champion leaving, champion overloaded)? (2) Engage the champion directly with a structured conversation naming the decline. (3) Loop in the exec sponsor if the cause is structural. (4) If the cause is product-related, escalate internally to the product team with the specific friction documented. (5) If the cause is unfixable in the current quarter, scope down expectations rather than pretending the trajectory is intact. The churn-prevention deep-skill covers the save plays in depth.
- Is adoption-and-engagement different at PLG companies?
- Yes, structurally. At product-led-growth companies (typically modern SaaS with freemium or self-serve tiers), adoption is largely product-driven: the product is responsible for getting the user to value, and the CSM engages later in the lifecycle when the customer becomes enterprise-tier. The CSM motion at PLG companies skews toward expansion (PLG customers self-onboard but need help scaling) and strategic relationship (PLG customers often need exec-sponsor engineering that the self-serve flow doesn't provide). Sales-led companies have the opposite shape: the CSM owns adoption from day one. Both motions exist; the senior CSM is fluent in the shape of the company they're at.
- What metrics should be in a steady-state EBR deck?
- Four families, in order. (1) Outcome trajectory: the single-slide visual showing progress against the named business outcome from the success plan. (2) Depth-of-use engagement: how the customer's value-weighted engagement is tracking quarter over quarter. (3) Expansion signals: any tier-limit hits, additional-team interest, or adjacent-use-case patterns. (4) Forward-quarter ask: the two decisions the CSM is asking the exec to make. Provisioning adoption, MAU counts, ticket counts, and feature-by-feature breakdowns go in the appendix; the EBR-craft point is that leading with vendor-side metrics signals the wrong conversation.
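For the red-flag question above, here is a minimal sketch of the three checks as code; the thresholds, field names, and the handling of a 'known cause' are hypothetical illustrations, not fixed rules.

```python
# Minimal sketch of the three adoption red-flag checks from the FAQ above.
# Thresholds, field names, and "known cause" handling are hypothetical.

from datetime import date, timedelta

def depth_curve_flattened(depth_by_month: list[float]) -> bool:
    """Red flag 1: a previously rising depth curve flattens or drops."""
    if len(depth_by_month) < 3:
        return False
    rising_before = depth_by_month[-2] > depth_by_month[-3]
    stalled_now = depth_by_month[-1] <= depth_by_month[-2]
    return rising_before and stalled_now

def active_user_drop(prev_wau: int, curr_wau: int, known_cause: bool) -> bool:
    """Red flag 2: WAU drops materially with no known cause (layoff, reorg, seasonal)."""
    if prev_wau == 0 or known_cause:
        return False
    return (prev_wau - curr_wau) / prev_wau > 0.20   # hypothetical materiality bar

def champion_silent(last_reply: date, normal_cadence_days: int, today: date) -> bool:
    """Red flag 3: the champion has gone quiet relative to their own normal cadence."""
    return today - last_reply > timedelta(days=2 * normal_cadence_days)

# Any True answer routes the account to a save play, not to wait-and-see.
print(depth_curve_flattened([0.30, 0.42, 0.41]))                        # True
print(active_user_drop(prev_wau=180, curr_wau=130, known_cause=False))  # True
print(champion_silent(date(2026, 4, 1), 7, date(2026, 4, 20)))          # True
```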
Sources
- Gainsight: Product Adoption Guide (depth-of-use, value-weighted engagement)
- levels.fyi Customer Success track ($132,000 median total compensation, last updated May 2026)
- RepVue: per-company sales/CS-org ratings (broad professional-development and culture signals; tech-touch and automation framing in this article is author analysis)
- BLS Customer Service Representatives (closest BLS proxy for CSM track)
- Bravado War Room: SaaS-sales practitioner community; the page references this as the venue where expansion and AE-coordination conversations surface, not as proof of any specific pattern
- ChartHop: org-data, headcount-planning, and HRIS-adjacent workforce-data resources (additional-team identification framing in this article is author analysis applied to ChartHop's org-chart data)
About the author. Blake Crosley founded ResumeGeni and writes about customer success, hiring technology, and ATS optimization. More writing at blakecrosley.com.