The Three Moments When Secondary Research Runs Out in Strategy Consulting
Every strategy engagement begins with desk research, and desk research is genuinely useful right up until it is not. The point where it runs out is not random, and it is not a function of how much time was spent on it. It is structural, predictable, and it arrives at three specific moments in the consulting workflow.
Knowing exactly where those moments are is what separates a recommendation that holds up under client scrutiny from one that quietly assumes the published record is a complete account of how a market actually works.
| Moment | What Triggers It | What Secondary Research Cannot Do | Evidence of the Gap | Resolution Mechanism | Who Provides the Fix |
| --- | --- | --- | --- | --- | --- |
| Moment 1: Hypothesis needs field validation | Hypothesis tree reaches branches the published record has not answered | Explain why customers switch, reveal real channel economics, clarify how regulations are applied in practice | 80% of professionals made decisions on poor information; 44% did so repeatedly | Structured expert interviews with practitioners who have managed the situation firsthand | Former function heads, operators inside the relevant market |
| Moment 2: Market data conflicts | Two or more credible analyst sources produce irreconcilable estimates | Resolve divergence caused by different methodologies and market definitions | GenAI: $22B to $1.3T; Quantum computing: $4.24B vs $20.2B (4.8x gap); EV charging: $82B to $548B (6x gap) | 10–15 practitioner conversations that anchor estimates against real budget lines and transaction volumes | Category heads, channel partners, procurement leads |
| Moment 3: Client asks an unanswered question | Novel situation with no published precedent, or question raised in the final readout | Answer questions about real adoption speed, true total cost of ownership, or integration failure modes | 88% of transformations miss original ambitions (Bain, 400+ executives); 57.2% of M&A acquirers destroy value (KPMG, 3,000+ deals) | Targeted expert calls commissioned at the hypothesis-design stage, focused on the load-bearing assumptions | Practitioners who have been inside the situation the client is about to enter |
Moment 1: When Hypotheses Need Field Validation
Secondary research builds the hypothesis well, but it cannot stress-test it against operational reality. That gap has a way of showing up at the worst possible time.
Limits of Desk Research
The McKinsey problem-solving model, and the equivalent frameworks at BCG and Bain, structure a strategy engagement around hypothesis formation followed by evidence gathering. Secondary research serves the formation phase well. Analyst reports, earnings transcripts, published case studies, and industry databases give the team a starting framework for what the market looks like and where the client’s hypothesis might be right or wrong.
The trouble starts when the hypothesis tree reaches branches that depend on questions nobody has already answered and published, such as why customers in a specific segment switch providers, what the real economics of a distribution channel look like at scale, or how a regulatory interpretation is being applied in practice rather than in the letter of the guidance. AMPLYFI research found that 80 percent of business professionals have knowingly made a decision based on poor information, and 44 percent have done so more than once. Secondary research is rarely the cause of that problem, but stopping there after the hypothesis has outrun the published record is. This is one of the clearest limitations of desk research in a live consulting engagement.
Validating Assumptions With Expert Interviews
The McKinsey Way makes this point directly: the day-to-day operational reality of an organisation or a market can only be known by asking the people operating inside it. When a hypothesis makes a claim about customer churn causation, channel partner dynamics, procurement decision criteria, or competitor capability that the published record does not contain, the evidence that resolves it is a conversation with a practitioner who has managed that situation firsthand. Expert interviews in strategy consulting serve exactly this function.
A structured expert interview with someone who has run the relevant function inside the relevant market type does not replace the analytical framework. It provides the field-level test of whether the framework is pointing at something real or at a plausible-sounding model of reality that would not survive a challenge from the client’s own operating team.
Moment 2: When Market Data Conflicts
Conflicting analyst estimates are not a sign that better secondary research is needed. They are a sign that the secondary record has run out of road. This is one of the more concrete limitations of secondary research in strategy consulting, because the gap cannot be closed by pulling more reports.
Diverging Analyst Estimates
The generative AI market is the most visible current example of a structural problem that runs across virtually every fast-moving sector a strategy consultant is likely to work in. Published estimates for the same market, measured over the same time horizon, currently range from about $22 billion at the conservative end to about $1.3 trillion at the expansive end, depending on whether the analyst is counting vendor revenue, transaction volume, or broader market activity. McKinsey, by contrast, estimates that generative AI could create up to $4.4 trillion annually in economic impact, which is a different metric from market size entirely.
Quantum computing market size estimates for 2030 diverge by a factor of about 4.8 across cited sources, with published figures of $4.24 billion and $20.2 billion. EV charging infrastructure projections show the same pattern in a more extreme form: published estimates for roughly the same market and time horizon range from about $82 billion to over $548 billion by the early 2030s, a factor of more than six between the lowest and highest figures from credible research firms. More secondary research does not fix this.
PitchBook, which aggregates market estimates from multiple research firms across thousands of markets, notes that each source uses its own methodology and definition of the market space, which means divergence at this scale reflects genuinely different analytical choices rather than measurement error. That is a candid acknowledgment from within the consulting market research methods ecosystem itself.
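The divergence factors quoted above are simple ratios between the lowest and highest published figures. A quick sanity check, using only the estimates cited in this section:

```python
# Divergence between published market-size estimates cited above.
# Values are (low, high) in USD billions.
estimates = {
    "generative_ai": (22, 1300),       # $22B to $1.3T
    "quantum_computing": (4.24, 20.2), # 2030 projections
    "ev_charging": (82, 548),          # early-2030s projections
}

for market, (low, high) in estimates.items():
    factor = high / low
    print(f"{market}: {factor:.1f}x gap between lowest and highest estimate")
```

Run against these figures, the quantum computing gap comes out at roughly 4.8x and EV charging at more than 6x, matching the factors quoted above; generative AI diverges by well over an order of magnitude.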
Using Expert Triangulation to Resolve Gaps
When two credible analyst sources place the same market at a factor of three or four apart, the number that goes into the engagement deck cannot come from averaging them. The resolution is a set of conversations with practitioners who have managed actual budget lines, procurement volumes, or sales pipelines inside the market being sized, because their operational experience provides the ground-truth anchor that reconciles conflicting top-down methodologies. This is where validating consulting research through expert calls becomes a practical necessity rather than a preference.
A former category head who has managed the relevant spend at scale, a channel partner who processes transactions in the market, or a procurement lead who has evaluated the competitive set all carry the kind of unit-level economic reality that analyst reports model from the outside. Ten to fifteen structured conversations with practitioners of this kind typically produce more resolution on a contested market size than any additional secondary synthesis would.
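The triangulation logic can be sketched as a bottom-up cross-check: average the unit-level spend reported across practitioner calls, scale it by the number of buying organisations, and test whether the result falls inside the contested analyst range. Every number below is a hypothetical placeholder for illustration, not data from any engagement described here:

```python
# Hypothetical bottom-up triangulation of a contested market size.
# All figures below are illustrative placeholders.

# Annual category spend per buyer ($M), as reported in expert calls.
interview_budgets_musd = [3.2, 4.1, 2.7, 5.0, 3.6]

# Assumed count of addressable buying organisations.
estimated_buyers = 8000

avg_spend_musd = sum(interview_budgets_musd) / len(interview_budgets_musd)
bottom_up_size_busd = avg_spend_musd * estimated_buyers / 1000  # $M -> $B

# Conflicting top-down analyst estimates for the same market ($B).
top_down_low, top_down_high = 12.0, 55.0

print(f"Bottom-up anchor: ${bottom_up_size_busd:.1f}B")
print(f"Inside analyst range? {top_down_low <= bottom_up_size_busd <= top_down_high}")
```

The point of the exercise is not the single bottom-up number but the direction it pulls: an anchor near one end of the analyst range tells the team which methodology's market definition to trust.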
Moment 3: When Clients Ask Unanswered Questions
There is always a question in the final readout that nobody anticipated, and the worst version of it is the one where the honest answer is that the published record does not address it.
Novel Strategic Challenges
Bain’s 2024 transformation research, drawing on more than 400 executives, found that 88 percent of business transformations fail to achieve their original ambitions. KPMG’s analysis of over 3,000 public M&A deals concluded that 57.2 percent of acquirers destroyed shareholder value, with overestimated synergies and under-operationalised integration as the two leading causes.
Both findings share a common root: the strategic assumptions behind the recommendation were not tested against the operational reality of the people who would have to execute them. Novel strategic situations are precisely where the questions that matter most have not yet been answered anywhere: market entries into geographies without a precedent in the client's own history, technology investments in categories where the Gartner Hype Cycle has already placed expectations at their peak, and competitive responses to disruption patterns that are not yet in the published literature. Secondary research in strategy consulting reaches its hard limit here.
The Limits of Published Research
The Gartner Hype Cycle is the most institutionalised acknowledgment in the analyst world that published projections systematically overshoot operational reality. The entire structure of the framework, from the Peak of Inflated Expectations through the Trough of Disillusionment to the Plateau of Productivity, documents analyst forecasts diverging from practitioner experience before eventually converging with it.
When a client asks how quickly a technology will actually be adopted inside their competitive set, what the real total cost of ownership looks like beyond the vendor’s stated figures, or what the integration failure modes are that do not appear in the official case studies, the published record provides the question more reliably than it provides the answer. Those are the moments where the engagement needs someone who has been there.
What Comes Next: Primary Intelligence from Experts
Knowing when secondary research fails is only useful if the team has a sourcing mechanism ready before the deadline arrives.
Expert Calls and Industry Interviews
The expert network industry now represents roughly $2.5 to $3 billion in annual spend globally, growing at around 12 percent per year, with consulting and market research accounting for the largest share of that spend. Every hypothesis-driven engagement eventually reaches the three breakpoints described above, and the teams running those engagements need a way to get from a testable hypothesis to a field-tested one within the window the engagement allows.
A set of structured expert calls for strategy projects, built around the specific questions the issue tree cannot resolve from secondary sources, moves the recommendation from defensible-on-paper to defensible-under-questioning. Bain's documented approach to commercial due diligence puts it plainly: forecasts supplied by management and secondary sources should be viewed with circumspection, and no single element of business diligence is more important than primary intelligence from practitioners with direct operating experience in the relevant market.
Integrating Expert Insight into Consulting Frameworks
Expert intelligence does not replace the analytical framework. It stress-tests the framework’s most load-bearing assumptions before they reach the client. The most efficient integration happens at the hypothesis-design stage, when the consulting team identifies which branches of the issue tree depend on claims the secondary record cannot validate, then commissions a targeted set of industry expert interviews structured to resolve exactly those claims.
At Nexus Expert Research, the methodology is built around exactly this principle: when calls are internally moderated, the team prepares a focused hypothesis and question set with the client so that each conversation targets the assumptions that matter most to the thesis. The expert conversation that challenges the consensus view in the secondary record is not a complication. It is often exactly where an engagement gets ahead of the market.
| Stage | Secondary Research Role | Primary Intelligence Role |
| --- | --- | --- |
| Hypothesis formation | Strong, provides the framework | Not yet needed |
| Hypothesis stress-testing | Weak, published record has gaps | Essential, field-level test of the framework |
| Market sizing | Produces estimates, not resolution | Grounds estimates in real transaction data |
| Client Q&A | Cannot answer novel questions | Provides answers that are not in the literature |
| Due diligence | Provides management forecasts | Needed to scrutinise those forecasts |
The Map Has an Edge
Secondary research has a boundary, and it falls at a predictable location on the issue tree in every serious strategy engagement. The three moments described here are not edge cases: hypothesis branches that require field validation, analyst estimates that diverge beyond any usable range, and client questions that the published record simply has not addressed yet.
They are the standard operating conditions of strategy consulting in sectors moving faster than the research that describes them. Knowing where the map ends is a methodological skill, and the consultants who have it are the ones who budget for primary research before the deadline forces the issue rather than after.