Portfolio · Edition 04 · 2026
Product Manager · 8 Years · Insurance · Fintech · Energy

Okunade Anifalaje.

Building software that makes regulated industries move at the speed of consumer products.

Saint Louis, MO  ·  Available for senior product roles
00

About

Eight years of product management across cryptocurrency, upstream oil & gas, and commercial property insurance. I take complex, regulated systems and rebuild them as products people want to use.

At FM Global, I am modernizing the AIM Customer Portal for FM Approvals and shaping how AI is woven into Loss Prevention and underwriting workflows. At Apache Corporation, I led an internal platform that unified field operations data for production engineers across the Permian Basin. At Binance International, I owned the mobile trading experience for our priority international markets and shipped the company’s IT Security Suite.

I trained as a lawyer first — LL.B and LL.M in Energy Law from the University of Ibadan — before moving into product. The two crafts overlap more than they look: read the regulation, read the system, find the gap, ship the workaround. Regulated industries are where speed and stakes meet, and the product job is to honour both at once.

What I optimize for, in order:

  1. Customer-defensible decisions
  2. Measurable business value
  3. Engineering velocity that compounds
  4. Stakeholder trust at the executive table
01

Experience

A consistent throughline: ownership of complex platform decisions in regulated environments where stakes are high and trust is earned slowly.

Mar 2023 — Present · Johnston, RI
FM Global · Commercial Property Insurance

Product Manager

Leading the AIM Customer Portal modernization for FM Approvals and shaping how AI is integrated into Loss Prevention and underwriting workflows. Cross-functional ownership across IT, certification engineering, legal, and information security.

  • AIM Portal Rebuild
  • AI / Risk Insights
  • Compliance & Audit
Aug 2022 — Feb 2023 · Houston, TX
Apache Corporation · Upstream Oil & Gas

Product Manager

Owned the product roadmap for an internal operational visibility platform serving production engineers and field supervisors. Shipped a unified daily reporting and exception-alerting system across the basin operations org.

  • Internal Platform
  • Operational Data
  • Field Workflows
Feb 2017 — Aug 2022 · San Francisco, CA
Binance International · Cryptocurrency Exchange

Product Manager

Led the international mobile trading experience and the launch of the Binance IT Security Suite. Drove agile transformation that cut time-to-market by 40%, and partnered with Insider Risk Management on a platform-wide risk posture program.

  • Mobile Trading
  • IT Security Suite
  • International Growth
02

Selected Work

Four case studies. Each is written as a self-contained product narrative — problem, decisions, trade-offs, outcomes. Where company specifics are confidential, examples are clearly labelled representative or illustrative.

  1. Re-platforming the Binance Mobile App (Binance · Mobile · International)
  2. Unifying Field Operations Data, Permian Basin (Apache · Internal Platform · Data)
  3. AIM Customer Portal Rebuild (FM Global · Customer Portal · Migration)
  4. AI-Assisted Risk Insights (FM Global · AI · Loss Prevention)
01 · Binance International · Mobile Trading · International Expansion · Representative

Re-platforming the Binance mobile app for international growth.

A multi-quarter program to defend Binance’s position in twelve priority international markets — cutting onboarding friction, shipping a true power-trader experience, and rebuilding the underlying platform so feature velocity could compound.

Role
End-to-end product owner
Scope
Mobile app · 12 international markets
Team
3 engineering pods · cross-functional
Headline
−40% time-to-first-trade · 3× release cadence
Context

Binance International needed to defend its position in fast-growing non-US markets where mobile-first traders were churning to local exchanges with native experiences. Latency, fragmented onboarding, and inconsistent localization were eroding both new acquisition and active-trader retention. Leadership had a clear strategic mandate: be the default mobile exchange in our top twelve international markets.

Problem

The legacy mobile app was a single-codebase port that prioritized parity with web over mobile-native ergonomics. Time-to-first-trade for new users in Tier-2 markets averaged roughly seven days. Power-trader features were buried, and high-frequency users — who drove disproportionate volume — were defecting to lighter-weight competitors. Technical debt slowed feature shipping to a six-week release cycle.

Users & Stakeholders
  • Retail traders across 12 priority international markets
  • Power traders — top decile by 30-day volume
  • Compliance and Insider Risk Management teams
  • Engineering platform team — custodial wallets, matching engine bridge
  • Marketing and growth pods in regional hubs
Goals
  • Reduce time-to-first-trade for newly onboarded traders
  • Lift 30-day retention among power traders
  • Move release cadence from 6-week to 2-week cycles
  • Maintain regulatory and security posture across all jurisdictions
My Role

End-to-end product owner for the international mobile experience. Defined product strategy with the GM for International, owned the backlog, partnered daily with engineering managers across three pods, and was accountable to the Product VP for shipping and outcomes.

Process — Discovery to Delivery
  1. Discovery

    Eight weeks of user interviews with 40+ traders across five markets, quantitative funnel analysis on the existing app, and a competitor study covering three regional exchanges. Output: an opportunity map and segment hypotheses.

    8 wks
  2. Definition

    PRDs for three workstreams — onboarding, pro mode, discovery surface — reviewed with engineering, design, compliance, and risk. Output: signed-off PRDs and a dependency map.

    4 wks
  3. Build & Instrument

    Two-week sprints across three pods. A weekly trade-off review with engineering leads handled scope drift transparently rather than letting it accumulate in silence.

    cont.
  4. Beta & Rollout

    Phased rollout to two markets first, instrumented heavily, then waterfall expansion to the remaining ten as confidence accumulated. Each market rollout was paired with a regional comms plan.

    2 qtr
  5. Post-launch

    Monthly KPI review with the GM and a quarterly business review with the executive team. Standing “what we got wrong” agenda item to keep learning visible.

    ongoing
User Story · Onboarding · MS-1147 · Q2 priority · illustrative
AS A first-time retail trader in Jakarta,
I WANT my onboarding to validate my KYC documents in under 90 seconds end-to-end,
SO THAT I can complete my first trade in the same session I downloaded the app.

ACCEPTANCE CRITERIA
01 Document upload completes on connections ≥ 1 Mbps
02 KYC partner response within 60s for tier-1 documents (p90)
03 Localized error messages for the top-5 failure modes per market
04 Resume-from-failure within 24h without document re-upload
05 Telemetry: end-to-end time captured per attempt
06 Compliance sign-off: regulator-specific consent screens per jurisdiction

# Definition of done includes a compliance review per market
# and a green pre-flight from the Insider Risk team.
Reviewed: PM · Compliance · Insider Risk · Mobile Eng. Lead
How I Prioritized

A RICE-weighted backlog scored on (a) market-segment LTV impact, (b) regulatory exposure, and (c) engineering cost. Anything below the cut-line moved to a “next horizon” document. I maintained a visible kill list — features explicitly chosen not to build — to keep stakeholders calibrated and to convert “no” from a refusal into a shared decision.
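
The scoring above can be sketched as a small model. Everything here is illustrative rather than the actual backlog schema: the field names, example items, weights, and cut-line are hypothetical, and the regulatory-risk discount is one plausible way to fold regulatory exposure into a classic RICE score.

```python
# Hypothetical sketch of a RICE-weighted backlog with a cut-line and kill list.
# Names, numbers, and the regulatory discount are illustrative.
from dataclasses import dataclass


@dataclass
class Candidate:
    name: str
    reach: float            # users touched per quarter (illustrative unit)
    impact: float           # 0.25 (minimal) .. 3.0 (massive)
    confidence: float       # 0..1
    effort: float           # person-weeks
    regulatory_risk: float  # 0..1 discount for regulatory exposure

    def rice(self) -> float:
        # Classic RICE, discounted by regulatory exposure.
        base = (self.reach * self.impact * self.confidence) / self.effort
        return base * (1 - self.regulatory_risk)


backlog = [
    Candidate("localized onboarding", 50_000, 2.0, 0.8, 6, 0.1),
    Candidate("ML personalization",   30_000, 1.5, 0.4, 12, 0.5),
]

CUT_LINE = 1_000.0  # hypothetical threshold
ranked = sorted(backlog, key=Candidate.rice, reverse=True)
build = [c.name for c in ranked if c.rice() >= CUT_LINE]
kill_list = [c.name for c in ranked if c.rice() < CUT_LINE]  # the visible kill list
```

The point of the kill list in this sketch is the same as in the text: items below the cut-line are recorded, not silently dropped.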

Key Decisions & Trade-offs
  • Modularize first, ship features second.

    Engineering pushed back — we’d lose three months of feature velocity to platform work. I held the line because compounding velocity afterward was the point. Backed up the call with an explicit ROI memo to the VP.

  • Localized signal beats personalized recommendations.

    A heavier ML approach was tabled. The data wasn’t yet defensible and the regulatory cost in some jurisdictions was high. We shipped a simpler localized-signal engine and committed to revisit on a calendar.

  • No pro-mode for retail.

    Compliance and product agreed: surface complexity is itself a risk signal. Pro mode was opt-in only and gated behind a behavioral threshold.

Outcomes
−40%
Time-to-first-trade in priority markets
3×
Release cadence — 6 weeks to 2 weeks
+0.7
App store rating across priority markets, 4 quarters
Double-digit
Power-trader 30-day retention lift — representative
Tools
  • Jira
  • Confluence
  • Aha!
  • Figma
  • Amplitude
  • Mixpanel
  • Looker
  • SQL
  • Slack
Lessons
  • Platform investment is a leadership conversation, not an engineering one. Frame it as a velocity-and-cost decision in the language of the business sponsor.
  • Localization is a product, not a translation pass. Each market needed its own discovery loop.
  • Visible kill lists are an underused stakeholder tool. They convert “no” from refusal into shared decision.

Velocity is a strategic asset, but only if leadership owns the trade against capability for it. The platform investment landed because the trade was made visible to the business in the business’s own language.

02 · Apache Corporation · Internal Platform · Operational Data · Representative

Unifying field operations data for faster decisions in the Permian Basin.

An internal product that replaced the manual morning-report ritual with a single trusted source of truth, exception-based alerting, and the data foundation for downstream forecasting and AI-assisted optimization.

Role
Charter-to-MVP product lead
Scope
Field operations · Permian Basin
Team
VP Ops · Production Eng · Data Eng
Headline
Daily report compile time · hours → minutes
Context

Upstream field operations ran across a patchwork of SCADA telemetry, third-party well-monitoring vendors, drilling reports, and spreadsheets. Engineers and field supervisors were spending hours each morning stitching together a daily picture of basin performance that should have taken minutes. Leadership wanted a single internal product for operational visibility and a more consistent way to push exception-based attention to the right person.

Problem

The morning report production engineers compiled was a manual, error-prone process with no audit trail and no ability to drill into root cause. Decisions about well intervention and chemical treatment were being made on stale or inconsistent data. Annualized, the cost of operational latency and missed interventions across a small subset of wells was meaningful enough to justify internal platform investment.

Before · manual ritual
  • Three browser tabs and a maintained spreadsheet
  • Manual reconciliation when feeds drifted
  • Hours per engineer, every morning
  • No audit trail; decisions on stale data
After · unified product
  • One product surface, one schema
  • Exception alerts surface what matters
  • Minutes per engineer; freed for engineering
  • Audit trail and root-cause drill-down
Users & Stakeholders
  • Production engineers — primary users
  • Field supervisors and operations managers
  • Reservoir engineering — secondary, analytical use cases
  • Drilling & completions — read-only on production performance
  • IT platform team and data engineering
  • Executive sponsor: VP Operations
Goals
  • Reduce time spent compiling daily operational reports
  • Improve mean time to detect well-level production exceptions
  • Build a foundation for forecasting and AI-assisted optimization
  • Establish a single trusted source of truth for production data
My Role

Owned the product from charter through MVP launch. Defined the discovery scope with the VP Operations, ran requirements and design sprints with field engineers, partnered with the data engineering lead on architecture trade-offs, and managed the launch and adoption plan.

Process — Field-led discovery
PRD · Out-of-scope list · OPS-VIZ-V1 · Signed at kickoff · illustrative
EXPLICITLY OUT OF SCOPE FOR V1
01 Mobile application · defer to v1.5; validate desktop daily-use first
02 Forecasting & ML modules · defer to v2; data quality bar not yet met
03 Drilling & Completions write access · read-only at launch
04 Per-user custom dashboards · templated dashboards only at v1
05 Cross-basin rollups · Permian-only at launch
06 Predictive maintenance integration · out of program

REVIEWED & SIGNED: VP Operations · IT Platform Lead · Data Eng. Lead

# This list is the contract. Additions require executive sign-off.
# Out-of-scope items become candidates at the next quarterly planning gate.
Filed in Confluence · reviewed quarterly · tied to OKR-OPS-Q3
How I Prioritized

A value-vs-effort matrix scored against the VP’s three stated outcomes. The daily report builder went first because it was the highest-frequency, highest-pain user task. Alerting and analytics were held back so we could earn real adoption before adding features. Forecasting was sequenced last — a deliberate choice against the more glamorous capability.

Key Decisions & Trade-offs
  • Build on existing warehouse, defer re-platform.

    Faster MVP, but coupled us to the warehouse team’s roadmap. Mitigated by negotiating a shared SLA and a quarterly review.

  • Exception-based alerting before forecasting.

    Forecasting was the more glamorous capability; exception alerting was the higher-utility baseline. Held the line on sequencing.

  • No mobile MVP.

    Field teams asked for it. Deferred to a post-MVP phase to avoid splitting engineering effort and to validate desktop adoption first. Documented with an explicit commit to revisit.

Outcomes
hrs → min
Daily report compile time per engineer
significant
Mean time to detect well-level exceptions — representative
85%
Daily-active production engineers within 1 quarter of pilot
1
Trusted source of truth established for production data
Tools
  • Jira
  • Confluence
  • Aha!
  • Power BI
  • Tableau
  • SQL
  • MS Teams
Lessons
  • Field discovery is non-negotiable. The morning-report process was nothing like what the slide decks described.
  • Build the boring thing first. Exception alerting is worth more than forecasting if your daily process is already broken.
  • A short, signed list of out-of-scope items is the cheapest scope-protection mechanism in product management.

Internal platforms rarely fail on technology. They fail on adoption — which is downstream of whether the executive sponsor describes the problem in the same words the user does.

03 · FM Global · FM Approvals · Customer Portal · Migration · Real engagement, summarized

Re-architecting the AIM Customer Portal for FM Approvals.

A multi-phase migration of the customer-facing surface, identity layer, and certification workflow APIs — delivered with no certification-window disruption on a cohort-based cutover plan.

Role
Program product manager
Scope
Customer portal · all FM Approvals customers
Team
FM Approvals · IAM · Account Mgmt · Support
Headline
0 certification-window disruptions across migration
Context

The Account Information Management (AIM) Customer Portal was the primary digital touchpoint for FM Approvals’ certification customers — manufacturers and product owners who relied on the portal to manage approval applications, listings, and account information. The legacy system had accumulated more than a decade of patches: expensive to maintain, awkward to use, and a bottleneck for new product capability.

Problem

The legacy portal couldn’t support modern compliance workflows, single sign-on at scale, or the data integrations FM Approvals needed to streamline certification turnaround. Customer support volume was meaningful and rising. Internally, every new approval product required disproportionate engineering effort to plumb through the legacy stack.

Users & Stakeholders
  • External: certification customers — manufacturers, product owners, OEMs
  • Internal: FM Approvals certification engineers, account managers, support
  • Compliance and information security
  • Platform engineering and IAM team
  • Executive sponsor: divisional leadership at FM Approvals
Goals
  • Migrate AIM end-to-end with no disruption to active certifications
  • Improve customer self-service and reduce support volume per active customer
  • Establish a foundation for future approval-product launches without re-platforming cost
  • Tighten audit, identity, and compliance posture
My Role

Led the program end-to-end as the product manager. Owned the migration plan, requirements catalog, cutover strategy, and customer communications plan. Reported into FM Approvals product leadership and was accountable to the divisional executive sponsor.

How I Prioritized — The Sunset / Rebuild Decision

Inventoried every legacy feature, scored each on usage frequency × business criticality × migration complexity, then produced an explicit rebuild / rebuild later / sunset decision for each. Sunset decisions were socialized with account management before any customer was migrated.
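
A minimal sketch of that triage, under stated assumptions: the thresholds, scales, and the exact way the three factors combine are hypothetical, not the actual catalog's scoring.

```python
# Hypothetical rebuild / rebuild-later / sunset triage for a legacy feature,
# combining usage frequency, business criticality, and migration complexity.
# All thresholds and scales below are illustrative.
def triage(usage: float, criticality: float, complexity: float) -> str:
    """usage and criticality in 0..1; complexity in 1 (trivial) .. 5 (hard)."""
    value = usage * criticality  # how much the feature earns its keep
    if value < 0.05:
        return "sunset"          # barely used, barely critical: remove with sign-off
    # High-value features are rebuilt now; valuable-but-costly ones wait.
    return "rebuild" if value / complexity >= 0.1 else "rebuild later"


# Example triage across three hypothetical legacy features:
decisions = {
    "listing search":    triage(0.9, 0.8, 2),  # high value, cheap to rebuild
    "bulk PDF export":   triage(0.6, 0.6, 5),  # valuable but expensive
    "legacy fax intake": triage(0.1, 0.2, 3),  # long-tail candidate for sunset
}
```

The useful property of an explicit function like this is the one the text names: every sunset decision is a recorded output that can be socialized before migration, not an implicit omission.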

Feature catalog, signed by FM Approvals leadership before customer migration. Counts representative.
Cutover plan · Cohort C3 · Tier-1 EMEA · 6-wave rollout · illustrative
CUTOVER COMMS & OPS — COHORT C3 · TIER-1 EMEA MANUFACTURERS

T-30 days · Email + account-manager sync. Cutover window confirmed in customer contract.
T-14 days · Webinar: new portal walkthrough. Recording distributed in 4 languages.
T-7 days · Personalized email with login link, training videos, support hotline.
T-0 · Cutover window: Saturday 02:00 – 08:00 UTC. Status page live.
T+1 day · Account-manager check-in call — every Tier-1 customer.
T+7 days · Survey: cutover experience (NPS-style, 4 questions).
T+30 days · Sunset window for legacy read-only access closes.

OWNERS: PM (lead) · Account Mgmt (delivery) · Support (white-glove) · Comms (broadcast)

PAUSE CRITERIA
· Any Tier-1 customer reports a stalled certification within 24h of cutover
· Support volume > 2.5× baseline for the cohort, sustained > 4 hours
· IAM error rate > 0.5% of authentication attempts
Reviewed: PM · Account Mgmt Director · Customer Support Lead · FM Approvals Compliance
Process
  1. Discovery & inventory

    Six-week feature audit with usage telemetry and account-manager interviews.

  2. Stakeholder alignment

    Working sessions across FM Approvals leadership, certification engineering, and IT to socialize the rebuild/sunset list and resolve disputes.

  3. PRD & architecture

    Master PRD covering customer-facing surface, identity, and workflow APIs.

  4. Phased build

    Cut into shippable increments mapped to certification customer journeys.

  5. Cohort cutover

    Customer cohorts sequenced by certification activity. Low-activity first; high-volume manufacturers last.

  6. Sunset

    Decommissioned the legacy portal after a verified parallel-run window.

Key Decisions & Trade-offs
  • Cohort-based cutover, not big-bang.

    Slower overall, but eliminated the risk of disrupting active certifications during peak windows.

  • Identity migration before workflow rebuild.

    Took longer up front but meant the new workflow surface launched against a clean identity model.

  • Sunset 14 legacy features.

    Each mapped to either a redesigned equivalent or a documented removal with stakeholder sign-off. Prevented dragging legacy debt into the new system.

  • Communications-first rollout.

    Dedicated workstream for customer-facing communications and training. Reduced the support spike during cutover.

Outcomes
0
Certification-window disruptions across the migration
+
Customer self-service rates lifted on key journeys
−
Support tickets per active customer after stabilization
14
Legacy features explicitly sunset, with sign-off
Tools
  • Jira
  • Confluence
  • Aha!
  • Figma
  • Power BI
  • SQL
  • MS Teams
Lessons
  • A migration is a product. Treat the cutover plan with the same rigor as a feature launch.
  • Sunset decisions are the highest-leverage decisions in any rebuild. Make them explicit and defended.
  • The communications plan is the rollout plan. If customers don’t know what’s changing, the platform doesn’t matter.

Migrations demand all the craft of a new product launch with one extra constraint: the product is already in customers’ hands and they have built workflows around its quirks. The platform was the visible product; the negotiation was the actual work.

04 · FM Global · AI / Risk Insights · Loss Prevention · Representative

AI-assisted risk insights for property loss prevention.

A draft-and-review tool that helps Loss Prevention engineers turn site visit data into prioritized recommendations and underwriting-ready summaries — designed to be additive to engineering judgment, auditable, and governable from day one.

Role
AI workstream product manager
Scope
Loss Prevention engineer drafting tool
Team
Data Science · Risk · Legal · InfoSec
Headline
Productized draft-and-review pattern · 0 audit incidents
Context

Loss Prevention engineers conduct site visits to identify and quantify property risks at insured locations. Their reports flow into underwriting, account management, and customers themselves. Synthesis was largely manual: engineers spent significant time turning observations into prioritized recommendations and into language tailored to different downstream readers.

The strategic question: could a careful, AI-assisted product reduce the synthesis burden, surface cross-account patterns engineers might miss, and produce more consistent outputs — without compromising the engineering judgment that is the company’s competitive moat?

Problem
  1. Engineers spent disproportionate time on report synthesis versus on-site analysis.
  2. Underwriters and account managers received outputs of varying structure and clarity, slowing downstream decisions.
  3. Cross-account pattern recognition was bottlenecked by time, not by data — a real opportunity to surface emerging risks earlier.

The product had to be additive to engineering judgment, not a replacement; it had to be auditable; and it had to satisfy a high bar for explainability and data governance.

Users & Stakeholders
  • Loss Prevention engineers — primary user
  • Underwriters and account managers — downstream consumers
  • Risk research / data science team
  • Information security, legal, compliance
  • Executive sponsor: insurance product leadership
Goals
  • Reduce synthesis time on standardized loss prevention reports
  • Improve cross-account risk pattern detection
  • Maintain or improve report quality, judged by underwriting and customers
  • Operate within a defensible governance and explainability framework
My Role

Product manager for the AI-assisted insights workstream. Owned the product brief, partnered with the data science team on model selection and evaluation, drove governance and explainability requirements with legal and infosec, and managed the rollout to the engineering field organization.

What I Built — The draft-and-review pattern

The core module drafts report sections and prioritized recommendations from site-visit data, with every draft reviewed and approved by the Loss Prevention engineer before it leaves the tool. A second module surfaces potential cross-account risk patterns for human review — engineer-in-loop, never auto-routed to underwriting.

Evaluation rubric · v1.2 · Pilot cohort · Engineer-judged · illustrative
PER SECTION DRAFT — engineer-judged before any output leaves
☐ Factual correctness — any error: reject
☐ Recommendation prioritization matches engineer’s — top-3 must align
☐ Tone matched to consumer — underwriter / customer / regulator
☐ No hallucinated entities — sites / products / accounts
☐ Citations present for every quantitative claim — observation-traceable

CAPTURED FOR ONGOING MODEL EVALUATION
· Edit distance per section — leading indicator (auto)
· Accept / edit / reject per recommendation — engineer-captured
· Underwriter satisfaction — lagging, 30-day survey
· Cross-account flag → confirm rate — precision proxy

# Every model output is a draft. Engineer judgment governs.
# Any sustained drop in any captured metric triggers a model review gate.
Reviewed: PM · Data Science Lead · Model Risk · Legal · Information Security
How I Prioritized — Value-vs-Risk

A value-vs-risk matrix instead of value-vs-effort. Synthesis assistance scored high on value, moderate on risk — manageable with a draft-and-review pattern. Autonomous report generation scored high on both value and risk and was explicitly out of scope for v1. Cross-account pattern surfacing scored high on value and moderate on risk — included with human-in-loop confirmation.

Key Decisions & Trade-offs
  • Draft-and-review, not autonomous.

    A choice for trust and adoption over apparent efficiency. Defended this in front of executives who initially wanted the more aggressive option.

  • Engineer-in-loop for cross-account surfacing.

    No flag goes to underwriting without an engineer’s confirmation. Reduces false-positive risk; slows surfacing slightly.

  • Edit-distance as a leading indicator.

    A controversial choice — edit distance can be noisy. We used it as a leading indicator of model quality; downstream underwriter satisfaction was the lagging indicator.

  • Explicit out-of-scope list.

    Autonomous generation, risk-rating outputs, and pricing recommendations were all explicitly out of scope. Reviewed and signed by legal and the executive sponsor.
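
The edit-distance signal can be sketched with Python's standard difflib. This is a rough proxy under stated assumptions, not the actual instrumentation: similarity ratio stands in for true edit distance, and the gate value and section names are hypothetical.

```python
# Sketch: per-section draft-vs-final similarity as a leading quality indicator.
# A sustained drop below the gate would trigger a model review.
import difflib


def edit_similarity(draft: str, final: str) -> float:
    """1.0 means the engineer shipped the draft untouched; lower means heavy rewriting."""
    return difflib.SequenceMatcher(None, draft, final).ratio()


REVIEW_GATE = 0.6  # hypothetical threshold for flagging a section

# Hypothetical (draft, engineer-final) pairs for two report sections:
sections = {
    "sprinkler_coverage": ("Draft text.", "Draft text."),                       # accepted as-is
    "roof_condition":     ("Model draft.", "Entirely rewritten by engineer."),  # heavy edits
}
needs_review = {
    name: edit_similarity(draft, final) < REVIEW_GATE
    for name, (draft, final) in sections.items()
}
```

Pairing this automatic leading signal with the lagging underwriter-satisfaction survey gives both halves of the evaluation loop the text describes.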

Value vs Risk — v1 scoping
Scoping matrix used at executive review to defend v1 inclusions and the explicit out-of-scope list. Illustrative.
Outcomes
meaningful
Synthesis time reduction on standardized sections in pilot — representative
+
Underwriter satisfaction with report consistency
0
Governance or audit incidents through pilot
1
Productized pattern other AI workstreams now design against
Tools
  • Jira
  • Confluence
  • Aha!
  • Figma
  • SQL
  • Python (eval notebooks)
  • Internal model eval tooling
  • MS Teams
Lessons
  • Trust is the constraint, not capability. The product was deliberately less ambitious than the underlying model could support.
  • Governance is a product feature, not a check at the end. Designing the audit and explainability surface up front saved months of retrofit.
  • Edit-distance is a leading indicator of model quality; user satisfaction is the lagging one. You need both.
  • The engineer-in-loop pattern is portable. The same shape works for many AI-assisted internal tools and reduces the political cost of adoption.

The discipline in AI-assisted internal products is to ship less capability, more carefully, with the judgment of the people who own the outcome inside the loop. The pattern is portable; the trust is what has to be earned.

03

Product Thinking

Six pillars I bring to every team I work with. The patterns are durable across insurance, fintech, and energy — only the surface changes.

01

Prioritization

A scored backlog — RICE or value-vs-effort, sized to the situation — feeds a visible roadmap with explicit kill-list items. Prioritization is a stakeholder ritual, not a private calculation. I make trade-offs visible so the business owns them with me.

From the field At Apache, the explicit out-of-scope list signed at kickoff prevented a six-month forecasting workstream from being smuggled back into v1 by month three.
02

Stakeholder Management

Three discipline points. (a) A single source of truth for status and decisions. (b) Pre-reads before every executive review — never surprises. (c) An explicit forum for disagreement before commitments are made; disagreement after commitment is more expensive than disagreement before.

From the field At FM Global, weekly pre-reads to the divisional executive sponsor turned the cohort cutover from a high-stakes approval into a series of small, predictable confirmations.
03

Ambiguity

Time-boxed discovery, written hypotheses, evidence-driven rejection or commitment. I would rather kill a question than carry it. Where evidence is genuinely unavailable, I make the assumption explicit and revisit on a calendar.

From the field At Binance, an early hypothesis that ML personalization would lift retention was explicitly tabled when the data and the regulatory cost both failed our evidence bar; the simpler localized-signal engine shipped instead.
04

Trade-offs

Frame every trade-off as a comparison, not a choice. The question is never “should we ship X” — it is “should we ship X instead of Y.” Naming the alternative forces real prioritization and surfaces the second-order cost.

From the field The Loss Prevention AI tool was scoped against a more aggressive autonomous-generation alternative. Naming the alternative made the draft-and-review choice defensible at the executive table and recoverable on the engineering floor.
05

Discovery & Delivery

Discovery and delivery run in parallel, not in sequence. Discovery is continuous, fed by a backlog of opportunity hypotheses. Delivery operates on a tighter cadence with clear acceptance criteria and instrumentation built into the spec, not bolted on.

From the field On the Apache platform, two weeks of field-shadowing in West Texas surfaced friction points that no slide deck described — and made the v1 scope obvious in a way no internal interview could have.
06

Metrics & Success

Every initiative has a leading indicator instrumented into the build and a lagging business-value indicator reviewed quarterly. Vanity metrics are explicitly excluded. Quarterly reviews include a standing “what we got wrong” agenda item.

From the field For the AI-assisted Loss Prevention tool, edit-distance was the leading indicator of model quality; underwriter satisfaction was the lagging indicator. Both were instrumented from day one of pilot.

“The hardest product decisions are not what to build. They are what to stop building, and how to make that decision visible enough that the business owns it with you.”

— OA, on running discovery in regulated environments
04

Tools & Methods

Roadmapping & PM
  • Aha!
  • Jira
  • Confluence
  • Notion
Design & Prototyping
  • Figma
  • Wireframes
  • Mockups
  • User flows
Analytics & Data
  • Power BI
  • Tableau
  • SQL
  • Looker
  • Amplitude
  • Mixpanel
Methodology
  • Agile / Scrum
  • CSPO certified
  • CSM certified
  • Continuous discovery
  • OKR planning
Collaboration
  • Slack
  • Microsoft Teams
  • Loom
  • Miro
Domain
  • Insurance & FM Approvals
  • Cryptocurrency & KYC
  • Upstream Oil & Gas
  • Energy Law (LL.M)
05

Contact

Let’s build something durable.

I’m open to senior product roles where the problems are messy, the customers are real, and the bar for craft is high. Insurance, fintech, energy, AI-assisted internal tools — or any environment where regulation and product velocity have to be reconciled.