We published our content methodology because readers deserve to know how the information they're reading was produced. M.A.R.C. — Methodology for Augmented Research Content — is a set of seven documented rules that govern every article on this site: mandatory sourcing, counter-arguments, incentive disclosure, and human review before publication.
This article explains what M.A.R.C. contains, why we built it, and what its limitations are.
Why does content need a published methodology?
AI tools have made content production cheaper and faster than at any point in history. According to an ongoing study by Originality.ai, an AI content detection service, AI-generated content in Google's top search results more than doubled between December 2023 and November 2024 — from 8.5% to over 18%.
That volume creates a trust problem. When content is cheap to produce, the format of an article tells you nothing about its quality. A carefully researched piece and a thirty-second generation look identical on the page. Both have confident headings. Both cite statistics. Both sound authoritative.
Google has responded to this directly. Their March 2024 core update targeted low-quality and unoriginal content. Combined with previous efforts, this resulted in 45% less low-quality content appearing in search results, according to Google. But algorithmic filtering is reactive. A published methodology is proactive — it commits to a standard before the content is produced, not after.
For AEO professionals specifically, this matters because AI search engines are increasingly selective about which sources they cite. Research published at ACM SIGKDD 2024 found that content with proper citations and quotations can boost visibility in AI-generated responses by up to 40% — with smaller sites seeing improvements of up to 115% when using citation strategies (Aggarwal et al., 2024). The structural and sourcing elements that M.A.R.C. requires aren't just editorial choices. They're signals that AI systems use when deciding what to cite.
What does M.A.R.C. actually require?
M.A.R.C. stands for Methodology for Augmented Research Content. "Augmented" because AI assists in the research and drafting process. "Methodology" because the process is documented and repeatable, not ad hoc.
Every article must satisfy seven principles:
- Answer the question first. The direct answer appears within the first 50 words. No filler preambles.
- Source everything. Every factual claim links to a verifiable source. Statistics include who measured them, when, and how.
- Counter our own argument. Every article that makes a case includes the strongest available counter-argument — the version an intelligent sceptic would actually make.
- Disclose our incentives. Where the content topic relates to services we sell, we say so explicitly.
- Calibrate confidence to evidence. Strong evidence gets confident language. Mixed or early-stage evidence gets appropriately hedged language.
- Link to independent sources. Further reading includes resources we don't control — so you can verify and go deeper without staying on our site.
- Human review is non-negotiable. AI assists in production. A human reviews every article against a structured checklist before publication.
The full methodology is documented on our methodology page.
How does this compare to standard content production?
Most content operations — whether AI-assisted or not — don't publish their editorial standards. Some have internal guidelines, but the reader has no way to verify what process produced what they're reading.
M.A.R.C. differs from typical content workflows in three specific ways:
- Counter-arguments are mandatory, not optional. Most content marketing is designed to support a position. M.A.R.C. requires presenting the strongest objection to that position and engaging with it substantively — minimum 100 words, evaluated for steelman quality during review.
- Incentive disclosure is structural, not discretionary. It's built into the template as a required section, not left to the writer's judgment about whether it's "relevant" to a particular piece.
- The review is documented per-article. Each piece goes through a 14-point check: 10 structural items (answer position, source count, FAQ presence, section length) and 4 ethical tests (peer test, asymmetry test, fact test, consilience test). This produces metadata that tracks quality over time — it's not just a checkbox.
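As a rough illustration of what per-article review metadata could look like, here is a minimal sketch in Python. The field names and check labels are hypothetical (only the checks named above are listed; the real M.A.R.C. checklist has 14 items and is not published as code) — the point is that each article carries a record of which checks passed, not just a yes/no flag.

```python
# Hypothetical sketch of per-article review metadata.
# Check names below are illustrative, taken from the checks named in the article;
# the full M.A.R.C. checklist has 10 structural and 4 ethical items.
from dataclasses import dataclass, field

STRUCTURAL_CHECKS = [
    "answer_within_first_50_words",
    "source_count_met",
    "faq_present",
    "section_length_ok",
]
ETHICAL_CHECKS = ["peer_test", "asymmetry_test", "fact_test", "consilience_test"]

@dataclass
class ArticleReview:
    slug: str
    results: dict[str, bool] = field(default_factory=dict)

    def record(self, check: str, passed: bool) -> None:
        # Store the outcome of one named check for this article.
        self.results[check] = passed

    def ready_to_publish(self) -> bool:
        # Publishable only if every check was run and passed.
        return all(
            self.results.get(check, False)
            for check in STRUCTURAL_CHECKS + ETHICAL_CHECKS
        )
```

Because the results are stored per check rather than as a single flag, the same records can later be aggregated to track quality over time.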
Does publishing a methodology actually build trust?
There is evidence that it does. A series of field studies by Harvard Business School researchers (Buell, Porter & Norton, 2021) found that operational transparency — showing people the work being done behind a service — measurably increased both trust and engagement. In one study, users who received evidence of their service requests being addressed submitted 60% more follow-up requests. In another, residents exposed to visualisations of government work became 14% more trusting. A separate experimental economics study (Borzino et al., 2023) found that transparency generated a 27% increase in trust compared to reputation-only conditions.
The mechanism is straightforward: when you can inspect the process, you don't have to trust the claim. Published methodology converts "trust us" into "check our work."
What's the case against published methodology?
The strongest argument against a published content methodology is that it can become performative — a trust signal that substitutes for actual quality rather than guaranteeing it. A company could document an impressive-sounding process and then not follow it. The document itself doesn't prevent that.
This is a legitimate concern. Certification schemes in other industries (organic food labelling, ISO standards) have documented cases where the certificate exists but compliance is inconsistent. A methodology is only as good as the review process that enforces it, and at small scale, that depends entirely on the people running it.
There's also an efficiency argument: the overhead of mandatory counter-arguments, structured review, and per-article metadata tracking is significant. A team without this overhead can publish faster, test more topics, and iterate more quickly. In a market where speed matters, methodology can be a competitive disadvantage.
M.A.R.C. accepts both of these limitations. The methodology is testable against its own standards — if our articles don't hold up to the checks we claim to run, the methodology page makes that verifiable. And the speed trade-off is deliberate: we publish less, but we think the quality difference is worth the cost. Whether the market agrees is an empirical question we don't yet have enough data to answer.
Incentive disclosure: Findcraft is an AEO consultancy. We sell AI visibility services. A published methodology supports our positioning as a quality-focused consultancy — we have a commercial interest in presenting this approach favourably. The claims about M.A.R.C.'s principles are verifiable against the methodology page. The claims about trust and AI citation research are sourced below.
Frequently asked questions
What is M.A.R.C.?
M.A.R.C. stands for Methodology for Augmented Research Content. It's a documented content governance framework requiring answer-first structure, mandatory sourcing, counter-arguments, incentive disclosure, calibrated confidence language, independent further reading, and human review for every article.
Does M.A.R.C. use AI in content production?
Yes. AI assists with research, drafting, and structural checks. The methodology governs how AI is used — requiring human review, source verification, and editorial judgment before publication.
Can other businesses use M.A.R.C.?
The principles — source everything, counter your own argument, disclose incentives — are not proprietary. Any content team can adopt them. Findcraft's implementation includes specific structural templates, metadata tracking, and a review protocol designed for AEO content.
Why publish the methodology rather than just producing good content?
Because "trust us, it's good" is not verifiable. Publishing the methodology lets readers inspect the process, not just the output. As discussed above, field research on operational transparency shows measurable gains in both trust and engagement when people can see how work is done.
Further reading
- GEO: Generative Engine Optimization — Aggarwal et al. (2024). The academic research on how content structure affects AI citation, published at ACM SIGKDD 2024.
- Surfacing the Submerged State: Operational Transparency Increases Trust — Buell, Porter & Norton (2021). Field studies showing transparency increases trust and engagement.
- New ways we're tackling spammy, low-quality content on Search — Google's official announcement of the March 2024 core update.
Content produced through the M.A.R.C. methodology — Methodology for Augmented Research Content. Human-reviewed before publication.
Related reading
- How Does AI Decide Which Businesses to Recommend? AI recommends businesses it can verify and trust. Here's how it decides and what signals matter.
- What Is AEO? A Plain English Guide for Small Businesses. Answer Engine Optimisation explained in plain English. What it is, why it matters, and what you can do.