The second-opinion engagement: when a fresh set of eyes saves the program
Most programs that fail were architecturally identifiable as failures at the start
When a program goes sideways at month seven, the post-mortem usually surfaces decisions made at month one that nobody questioned. The vendor's reference architecture had a gap. The data pipeline assumption was wrong. The build-vs-buy analysis used numbers that did not survive a second look. The signs were there; nobody had the time or the independence to read them.
A second-opinion engagement creates that time and that independence. Two weeks of senior engineers reading the architecture document, the contract, the vendor responses, and the implementation plan, then writing back with what they found. The cost is small and the decisions it changes are not.
A useful AI architecture review covers six things in two weeks
- Model and capability fit — does the chosen architecture actually solve the stated problem, or is there a capability mismatch hidden in confident slides?
- Data plane — is the data ready, accessible, and governed in a way that supports the deployment? This is the most common silent failure.
- Cost model — what does steady-state cost actually look like, including inference, retrieval, evaluation, and the ops team to run it?
- Failure modes — what happens when the model is wrong, the API is down, the data is stale, the user is adversarial?
- Integration surface — what does this system actually depend on in your environment, and is that dependency realistic?
- Build-vs-buy — does this scope justify a custom build, or is there a vendor capability that solves 80% at 20% of the cost?
Pattern one: the capability the vendor is not actually delivering
The most common pattern we surface in second-opinion engagements is a gap between what the vendor's slide deck claims and what the proposed architecture will actually deliver. The deck says 'autonomous reasoning agent.' The architecture is an intent classifier with five branches and an LLM-generated response template. Both are valid technologies; one is not the other, and the price difference matters.
We have seen this in a quarter of the engagements we have run. The fix is rarely 'pick a different vendor' — it is usually 'pay for what they are actually building, not what the deck implied,' which can move a contract value by 40–60% before signature.
Pattern two: the hidden cost the budget did not account for
AI deployments have steady-state costs that the initial budget regularly underestimates: ongoing eval set maintenance, observability infrastructure, the ops team that runs it, the human review queue when the model is uncertain, the cost of retraining or refreshing models as data drifts. We have seen budgets that captured the model API cost and forgot every other line.
A useful second opinion produces a 24-month total cost of ownership model with all of these line items, sourced from comparable deployments. The number it returns is usually 1.6x to 2.4x the original budget. Customers who learn this in week two of a review make different decisions than customers who learn it in month nine of a deployment.
- Engagement length: 2 weeks, fixed fee
- TCO drift surfaced: 1.6–2.4x average vs. original budget
- Build-vs-buy flips: ~30% of engagements
- Vendor incentives: $0; no kickbacks, ever
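The arithmetic behind that 1.6x to 2.4x drift is simple once the forgotten line items are written down. The sketch below is illustrative only: every figure is a hypothetical placeholder, not a benchmark from any engagement, and the line items mirror the steady-state costs named above.

```python
# Illustrative 24-month total-cost-of-ownership sketch for an AI deployment.
# All dollar figures are hypothetical placeholders; substitute your own estimates.

MONTHS = 24

# The line item the original budget usually captures:
model_api_per_month = 18_000  # inference + retrieval API spend

# The line items the original budget usually forgets (monthly):
forgotten = {
    "eval set maintenance":      3_000,
    "observability infra":       2_500,
    "ops team (loaded cost)":    8_000,
    "human review queue":        4_000,
    "retraining / data refresh": 2_500,
}

original_budget = model_api_per_month * MONTHS
steady_state = (model_api_per_month + sum(forgotten.values())) * MONTHS

print(f"Original budget (API only): ${original_budget:,}")
print(f"Steady-state 24-month TCO:  ${steady_state:,}")
print(f"Drift multiple:             {steady_state / original_budget:.1f}x")
```

With these placeholder numbers the drift multiple lands at roughly 2.1x, inside the 1.6–2.4x range the reviews typically surface; the point of the model is not the specific figures but that every line item appears before signature rather than at month nine.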
Pattern three: the build-vs-buy call made on the wrong inputs
Build-vs-buy is rarely a binary. The right question is usually 'which 70% should we buy and which 30% must we build,' and the answer depends on capability fit, integration depth, data sovereignty, vendor risk, and where your team's leverage lives. About 30% of the second-opinion engagements we run change the original build-vs-buy decision — sometimes from build to buy, sometimes the reverse, sometimes from one vendor to another.
The math behind these flips is not exotic. It is the cost model from pattern two, applied honestly, with the integration burden and ongoing operating cost included. The reason the original decision was wrong is usually that the cost model was not honest, not that the analysis was incompetent.
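An honest version of that math can be sketched in a few lines. The figures below are assumptions invented for illustration, not numbers from any engagement; the structure is what matters: both paths get the same 24-month horizon, and the buy path carries the integration burden the deck under-scoped.

```python
# Hypothetical build-vs-buy comparison over a 24-month horizon.
# Every dollar figure here is an illustrative assumption, not a benchmark.

MONTHS = 24

def total_cost(upfront: int, monthly_run: int, integration: int) -> int:
    """Honest 24-month cost: one-time costs plus steady-state run cost."""
    return upfront + integration + monthly_run * MONTHS

# "Buy": vendor license plus the integration work the deck under-scoped.
buy = total_cost(upfront=150_000, monthly_run=22_000, integration=200_000)

# "Build": engineering cost to first ship plus the ops burden of owning it.
build = total_cost(upfront=600_000, monthly_run=35_000, integration=0)

print(f"Buy   (24-month): ${buy:,}")
print(f"Build (24-month): ${build:,}")
print("Cheaper path:", "buy" if buy < build else "build")
```

The flips described above happen when one of these inputs changes under scrutiny: an integration estimate that quadruples, or an operating cost the build case never carried.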
Independence is the only thing that makes the second opinion useful
We do not take referral fees from any vendor we evaluate. We do not carry a cloud preference into the recommendation. We do not get paid more for recommending a custom build over a vendor or vice versa. Independence is what separates an actual second opinion from a sales pitch with extra steps, and it is the only reason a procurement leader should give a second opinion's recommendations weight.
What the deliverable looks like
Two artifacts: a written brief — typically 25–40 pages — with executive summary, technical findings, cost model, risk register, and prioritized recommendations; and a working session with the executive sponsor and technical leadership to walk through the findings and the decisions they imply. The brief is dense; the working session is what makes it actionable.
The brief lives under your NDA. The recommendations are yours to act on, share with vendors, or ignore. We are not staffed to argue the recommendations after delivery; we are staffed to produce the best independent read we can in two weeks.
"The brief said the proposed architecture would not hit the latency target, that the integration would take 4x what the vendor scoped, and that the use case did not actually need the model the vendor was pushing. We changed the architecture, renegotiated the contract, and the deployment shipped six months ahead of where it was tracking. The two weeks paid for itself eighteen times."
— VP IT, large insurance carrier
When a second opinion is overkill
Not every AI engagement needs a second opinion. Small, low-risk pilots from established vendors with clean reference architectures are usually fine. The threshold where independent review earns its keep: contracts above $500K, multi-year commitments, regulated environments, or anything where a course-correction at month seven would be expensive. If any of those apply, two weeks of independent review costs a fraction of one wrong decision.
Frequently asked
What is a second-opinion engagement for AI architecture?
A second-opinion engagement is a two-week independent architecture review of an existing or proposed AI system. Senior engineers read the architecture, contract, vendor responses, and implementation plan, then deliver a written brief with executive summary, technical findings, cost model, risk register, and prioritized recommendations. It is what a procurement or technology leader commissions before signing or scaling a major commitment.
How much does a two-week architecture review cost?
Engagements are fixed-fee, scoped at signing, and typically land in the $40K–$120K range depending on the complexity of the system being reviewed and the depth of the cost model required. The threshold where it earns its keep is contracts above $500K, multi-year commitments, regulated environments, or anywhere a course-correction at month seven would be materially expensive.
How do you stay independent from the vendors you evaluate?
We do not take referral fees, kickbacks, or any form of compensation from vendors we evaluate. We do not have preferred-vendor relationships that influence recommendations. We are explicit about this in the engagement letter. The independence is what makes the recommendations useful to the procurement leader; without it, a second opinion is a sales pitch with extra steps.
What patterns do second-opinion engagements typically surface?
Three patterns recur. First, a capability gap between what the vendor's slide deck claims and what the proposed architecture will actually deliver. Second, hidden steady-state costs the original budget did not capture, typically pushing total cost of ownership 1.6x to 2.4x higher. Third, a build-versus-buy decision that was made on incomplete or optimistic numbers. About 30% of engagements change the original build-vs-buy call.
What does the deliverable from a second-opinion engagement look like?
A written brief, typically 25–40 pages, with executive summary, technical findings, cost model, risk register, and prioritized recommendations, plus a working session with the executive sponsor and technical leadership to walk through findings and implications. The brief lives under your NDA. Recommendations are yours to act on, share with vendors, or set aside.
Should I commission a second opinion if I already trust my vendor?
Trust and verification are not the same thing. A second opinion validates the architecture choices, surfaces costs the vendor may not have emphasized, and stress-tests assumptions independently. If the review confirms the path, you have validation. If it does not, you saved a multiple of the engagement fee. Either outcome is a good outcome; both are difficult to produce without independent review.
How long does it take to schedule a second-opinion engagement?
We typically schedule within two to three weeks of an initial scoping call, depending on the depth of the engagement and the complexity of the materials. The two weeks of active review run as one continuous block, with the customer team available for working sessions on the documents under review. Total elapsed time from first call to delivered brief is usually four to five weeks.