Getting Started

Everything you need to begin.

Marcus is an AI prompt library built for senior executives. Here’s what it is, how it works, and how to get the most out of it.


AI is only as good as the instructions you give it. Marcus exists to make those instructions excellent — so senior leaders can get board-ready output without becoming prompt engineering specialists.

The library contains carefully crafted prompts organised around the real decisions executives face: strategy reviews, board communications, people challenges, financial analysis, stakeholder management, risk governance, and more. Each prompt is written to work immediately — and to be adapted easily to your specific situation.

Marcus is not an AI tool itself. It’s a library of prompts you use with the AI tools you already have — ChatGPT, Claude, Gemini, Copilot, or any other assistant. Think of it as the playbook; you choose the player.

In this guide
  1. Find the right prompt
  2. Read the prompt before you copy it
  3. Customise before you run
  4. Use AI tools you already have
  5. Iterate — don’t stop at the first response
  6. Save what works

01

Find the right prompt

The Marcus library is organised by role and function — not by AI jargon. Whether you're a CEO preparing for a board meeting or a CHRO navigating a restructure, every prompt is built around the decisions and situations you actually face.

Use the filters in the library to narrow by your role (CEO, CFO, CHRO, CTO, Board Member), by function (Strategy, People, Finance, Communications, Risk), or by the type of output you need — a draft, an analysis, a framework, a decision.


02

Read the prompt before you copy it

Every prompt in Marcus is designed to be effective out of the box — but the best results come from understanding what it's doing before you use it. Read through the full prompt. Notice where it asks you to provide context, insert your own figures, or adapt the framing to your situation.

The prompts are not magic. They are well-structured instructions. The more relevant context you add — your organisation, your people, the stakes — the more precisely the AI can respond.


03

Customise before you run

Marcus prompts use placeholders like [company name], [paste data here], or [your target timeline] to mark where you should add your own information. Replace every placeholder before submitting. A prompt left half-finished produces a response that is half as useful.

If the default framing doesn't fit — adjust it. The prompts are starting points, not fixed scripts. Change the role, the tone, the output format, or the constraints to match your exact situation.


04

Use AI tools you already have

Marcus prompts are model-agnostic. They work with ChatGPT, Claude, Gemini, Copilot, or any other AI assistant your organisation uses. Copy the prompt from Marcus and paste it directly into your tool of choice.

If you're unsure which AI tool to use, Claude and ChatGPT are both well-suited to the kinds of executive-level tasks Marcus covers — long-form analysis, structured drafts, strategic frameworks, and complex synthesis.


05

Iterate — don't stop at the first response

The first response is rarely the final one. Treat it as a starting draft and push further. Ask the model to sharpen the argument, cut the length, add a dissenting view, or reframe for a different audience.

Some of the most useful follow-up instructions are simple: "Make this more direct." "Remove the caveats." "Give me three alternatives." "What have I missed?" The real leverage in AI comes from the conversation, not the first output.


06

Save what works

When you find a prompt that consistently delivers, bookmark it. Use the heart icon on any prompt card in the library to save it to your Favourites — available any time from your navigation bar.

Over time you'll build a personal collection of the prompts that work for your specific context, team, and style. That collection becomes a repeatable system — not a one-off experiment.


A few principles worth knowing

AI models respond to specificity. Vague instructions produce vague output. Telling the model your role, the audience, the constraints, and the format you want will consistently produce better results than a short, open-ended request.

Context is your most powerful lever. The model doesn’t know your organisation, your people, or the stakes. When you provide that background — even briefly — the output shifts from generic to genuinely useful. A sentence or two of context can transform the quality of a response.

Always review the output critically. AI can be fluent and wrong. Use the responses as a starting point — check facts, figures, and recommendations against your own knowledge before acting on them or sharing them.

For a deeper guide to the specific prompting techniques Marcus uses — zero-shot, chain-of-thought, role-based, and more — read the Prompt Guide.


Ready to put it into practice?

Browse the full library and filter by your role, function, or the type of output you need. Every prompt is ready to use today.

Explore the Library →
Read the Prompt Guide