A practitioner-led, five-pillar assessment for mid-market CEOs. Two to three weeks of elapsed time. A scored Pillar Profile, a ranked Use Case Portfolio, and a 30/60/90 Day Roadmap you can act on the day you get them.
Most AI advisory work is sold by people who have read about AI. The Anchor AI Bearing Assessment is delivered by someone who builds AI systems every week — and who spent 35 years in the same leadership trenches you're in now.
Not a webinar. Not a certification program. Not a sales pitch disguised as education. A scoped, time-bounded engagement that produces three durable artifacts: a scored read of your posture across five pillars, a ranked list of candidate AI initiatives surfaced from your own data, and a 30/60/90 Day Roadmap that names what to do, in what order, and what to leave alone.
Built for the CEO, President, Division GM, or SVP at a $20M to $1B company who wants a senior peer read, not a Big Four engagement. Industry-agnostic by design.
Kickoff, discovery interviews, synthesis, readout, and a 30-day follow-up. Ten to fifteen business days from start to final PDF.
Pillar scores tied to interview transcripts and published research. Use cases ranked against your data and tooling. Recommendations cite their sources.
Anchor delivers the read and the roadmap. Implementation, vendor negotiation, and software development stay with you or your chosen partners.
Each pillar is scored 1 to 5 against anchored evidence. Depth scales with tier: Single-Executive reflects the CEO's perspective; Organizational and above score across all interviewees, with cross-function comparisons and disagreement signals called out.
Where your data lives, how clean it is, and whether it's reachable by the systems you'd want AI to plug into.
Whether AI is tied to a real business outcome or sitting on a wish list because the board asked about it.
The platforms, tooling, and security posture that decide what's actually deployable in the next 90 days.
Who in the building can build with AI, who can govern it, and where the critical capability gaps sit.
How the organization absorbs new ways of working — and where the quiet resistance lives that will determine whether anything sticks.
Every engagement, from Single-Executive to Enterprise, produces all three. These are the commitments — locked at v1.3 of the Engagement Spec, and the same whether you spend $6,000 or $20,000.
A scored read of your company's posture across the five pillars. Each pillar carries a 1 to 5 score with anchored evidence and a narrative on what's working, what's not, and what the score means in operating terms.
A ranked list of candidate AI initiatives surfaced from your interviews, scored for fit and sequenced by dependency. Each use case carries a brief, the function that owns it, the prerequisites, and an expected timeline.
What to do, in what order, and what to leave alone. The 30-day window names quick wins. The 60-day window names the foundation moves. The 90-day window names the visible deployments that compound on the foundation.
Five tiers. Same three core deliverables at every level. Depth scales with the number of discovery interviews and the bolt-on artifacts bundled in. Pricing is published, not quoted on request.
The $20M to $1B revenue band is the sweet spot, not a hard boundary. A $10M company with a serious AI question and a CEO who can authorize the fee is a fit. A $2B division with a buying GM is a fit. The boundary case gets judged at the intro meeting.
Former Head of the AWS Corporate Data Warehouse team, where he ran systems processing trillions of rows daily. Author of the original PR-FAQ for AWS Lake Formation.
Clark spent three decades leading technology organizations — building teams, shipping products, and navigating every major platform shift from client-server to cloud to AI. He builds multi-agent AI systems, voice interfaces, and automated workflows as part of his daily practice. The methodology, the research library, and the tooling that produce the Anchor AI Bearing Assessment all come from that working bench.
That dual perspective — deep technical practitioner and seasoned executive — is the point. Clark understands the constraints because he's lived them. He understands the tooling because he's building with it this week.
Tell Clark where you are with AI and where you want to be. If the assessment is the right next step, you'll know by the end of the call. If it isn't, you'll know that too. No pitch, no pressure.