Eleven thirty in the morning. The Atlas project kickoff just ended. Real decisions were made in the room. Real commitments were spoken out loud. And now you are sitting at your desk with three pages of notes that look like a transcript of a meandering conversation.
Somewhere in there are risks that need a register. Action items that need owners and dates. And a responsibility matrix the team expects by tomorrow.
That is three separate documents, three different formats, and if you are being honest, about two hours of work. Most PMs open a blank spreadsheet and start copying and pasting. By the time they finish the risk register, the action items are already stale.
Today we are going to do all three in ten minutes. One paste. Three documents. One conversation.
Three documents every project needs after a kickoff
| Document | What it is | Why it matters |
|---|---|---|
| Risk Register | Structured table — likelihood, impact, owners, mitigation | Surfaces what could derail the project, with someone owning each risk |
| Action Log | Every commitment with a name, date, deliverable, dependency | Turns the room's promises into trackable work |
| RACI Matrix | Who is Responsible, Accountable, Consulted, Informed for each deliverable | Removes the "I thought you had it" failure mode |
The input for all three is the same meeting notes. We are not going to write three separate prompts from scratch. We are going to have one conversation with the AI. Each document builds on the last.
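If you ever script this workflow against a chat API instead of pasting into a chat window, "one conversation" just means one growing message list. Here is a minimal sketch of that idea; `ask` and `fake_model` are my illustrative names, not any SDK's, so swap in your tool's real API call.

```python
# Sketch: one conversation, three documents. The growing history list
# is what lets each later prompt see the notes and the earlier outputs.
# fake_model is a stand-in; replace it with your AI tool's real API call.

def fake_model(history: list[dict]) -> str:
    """Placeholder for a real chat-completion call."""
    return f"[document generated from {len(history)} prior messages]"

def ask(history: list[dict], prompt: str) -> str:
    """Append a prompt, get a reply, and keep both in the conversation."""
    history.append({"role": "user", "content": prompt})
    reply = fake_model(history)
    history.append({"role": "assistant", "content": reply})
    return reply

history: list[dict] = []
notes = "<<paste raw kickoff notes here>>"

risk_register = ask(history, "Produce a risk register from these notes:\n" + notes)
action_log = ask(history, "Now, from the same notes, produce an action log.")
raci_matrix = ask(history, "Now produce a RACI matrix for each deliverable.")

print(len(history))  # 6: three prompts, three replies, all in one thread
```

The point of the sketch is the shape, not the stand-in: each later document is generated with the notes and every earlier document still in context, which is exactly what staying in one chat thread gives you for free.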
Prompt one — the risk register
I am using Claude. This works the same way in ChatGPT or Copilot. The tool does not matter. The structure does.
You are a senior project manager processing notes from a project kickoff meeting.
Review the attached meeting notes and produce a risk register table with these columns:
- Risk ID (R-001, R-002, etc.)
- Risk Description (one clear sentence)
- Likelihood (High / Medium / Low)
- Impact (High / Medium / Low)
- Risk Owner (name from the meeting)
- Mitigation Action (one specific action)
- Status (Open)
- Source (where in the meeting this was raised)
Include risks that were explicitly discussed AND risks that are implied but not stated. Flag implied risks clearly.
Notice the format. I am telling it exactly what columns I want. The last line is where the value sits. I am asking it to find risks that were discussed and risks that are implied but not stated.
A PM reading those notes would catch the obvious risks. The data mapping issue. The legacy APIs. But what about the QA environment not being ready until sprint three? That is a schedule risk hiding in a side comment.
I paste the meeting notes below the prompt. No cleanup. No formatting. Raw notes, exactly as they were written.
What comes back
Seven risks in about fifteen seconds. Five explicit. Two implied.
Risks one and two are the obvious ones — legacy APIs undocumented for two years and customer data in three address formats. Both came directly from the meeting. Owners are correct. Mitigation actions are specific.
Risk six is more interesting. Analytics team understaffed, schema sign-off at risk. That came from the side conversation after the meeting, where Priya privately said the schema sign-off date was "optimistic at best." The AI caught it because the full notes were in the input. Most PMs would have left that out of the register.
Risk three has no owner. Change management for two hundred users. The AI flagged it as unassigned. That is your first follow-up action.
What needs your judgment: the likelihood ratings. The AI defaults to Medium when uncertain. You were in the room. You saw Priya's face when the team talked about the address formats. That is a High, not a Medium. Adjust based on what you observed.
Ninety seconds of generation. Five minutes of review and adjustment. That is your risk register.
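If you want the register in a spreadsheet or tracker rather than a chat window, the markdown table the model returns is easy to machine-read. A quick sketch (the sample rows below are illustrative, not real project data):

```python
# Sketch: parse the pipe-delimited markdown table the model returns
# into a list of row dicts, ready for CSV export or a tracker import.

def parse_markdown_table(text: str) -> list[dict]:
    """Parse a markdown table into one dict per data row."""
    lines = [l.strip() for l in text.strip().splitlines() if l.strip().startswith("|")]
    rows = [[cell.strip() for cell in line.strip("|").split("|")] for line in lines]
    header, body = rows[0], rows[2:]  # rows[1] is the |---|---| divider
    return [dict(zip(header, row)) for row in body]

sample = """
| Risk ID | Risk Description | Likelihood | Impact |
|---|---|---|---|
| R-001 | Legacy APIs are undocumented | High | High |
| R-002 | Customer data uses three address formats | Medium | High |
"""

register = parse_markdown_table(sample)
print(register[0]["Risk ID"])  # R-001
```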
Prompt two — the action log
Same conversation. I do not start over. The AI already has the meeting notes and the risk register in context.
Now, from the same meeting notes, produce an action log table with these columns:
- Action ID (A-001, A-002, etc.)
- Action Item (specific deliverable, not vague)
- Owner (name)
- Due Date (specific date or sprint reference)
- Priority (High / Medium / Low)
- Dependencies (what must happen first)
- Status (Not Started)
Include actions from explicit commitments AND actions implied by risks or gaps identified above.
Notice I am asking for dependencies, not just tasks and dates. That column is what separates a useful action log from a to-do list. I am also asking it to include actions implied by the risks we just identified. Risk three had no change management owner. That should become an action item: assign a change management lead.
What comes back
Eight action items. Four came from explicit commitments. Three from the risks above. One from the side conversation.
Action four — assign a change management lead — was not a commitment from the meeting. It was generated because risk three had no owner. The AI saw the gap and created the follow-up.
Action seven — confirm the schema sign-off date — came from the side conversation. The AI remembered it and turned it into a tracked action with specific timing.
Look at the dependencies column. Action three, the API contracts, depends on action one, the cloud security review. These are not isolated tasks on a list. They are linked.
One thing to check: the due dates. The AI pulled dates from the meeting notes where they were mentioned. Where they were not, it used sprint references — Sprint 1 check-in, Week 1 check-in. You may need to convert those to specific dates based on your calendar.
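Converting those sprint references is mechanical once you fix a sprint calendar. A sketch, assuming two-week sprints and a hypothetical start date; both values are assumptions, so set them from your actual plan:

```python
# Sketch: turn "Sprint N" references into calendar dates.
# SPRINT_LENGTH_DAYS and PROJECT_START are assumptions for illustration.
from datetime import date, timedelta

SPRINT_LENGTH_DAYS = 14
PROJECT_START = date(2025, 3, 3)  # hypothetical first day of Sprint 1

def sprint_end_date(sprint_number: int) -> date:
    """Return the last day of the given sprint."""
    return PROJECT_START + timedelta(days=sprint_number * SPRINT_LENGTH_DAYS - 1)

print(sprint_end_date(1))  # 2025-03-16
print(sprint_end_date(3))  # 2025-04-13
```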
Two documents done. Same conversation. Same notes. Let us get the third.

Prompt three — the RACI matrix
Same conversation again. Ask for a RACI matrix covering every deliverable, with columns for Responsible, Accountable, Consulted, and Informed. The RACI is where most PMs lose the most time, because you are guessing at who said what. The AI has the transcript. It maps the assignments from the actual conversation.
Eight deliverables come back: cloud infrastructure, API migration, frontend redesign, data migration, security review, QA strategy, change management, SSO integration.
Look at the change management row. Maya — the PM — is listed as Responsible because no one else was assigned. The AI did not invent an owner. It put the PM as a placeholder and flagged it.
David is Accountable for change management because he is the sponsor and it affects end users. That is a reasonable inference from his opening statement about customer impact.
What to verify: the Consulted and Informed columns. The AI makes reasonable assumptions, but you know the working relationships better. Maybe Sam should be consulted on data migration because the frontend displays that data. That is a judgment call.
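One check worth automating before you publish the matrix: the standard RACI rule that every deliverable has exactly one Accountable. A sketch with an illustrative matrix; the assignments below are examples, not the real Atlas rows:

```python
# Sketch: validate a generated RACI matrix against the standard rule
# that each deliverable has exactly one Accountable ("A").
# The assignments below are illustrative examples only.

raci = {
    "Cloud infrastructure": {"Jordan": "R", "David": "A", "Maya": "I"},
    "Change management": {"Maya": "R", "David": "A", "Priya": "C"},
}

def missing_single_accountable(matrix: dict) -> list[str]:
    """Return deliverables that do not have exactly one 'A'."""
    return [
        deliverable
        for deliverable, roles in matrix.items()
        if list(roles.values()).count("A") != 1
    ]

print(missing_single_accountable(raci))  # an empty list means every row passes
```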
Three documents. One conversation. The meeting notes went in once. The risk register, action log, and RACI came out connected to each other. Risks generated actions. Commitments generated RACI assignments.
Total time: about ten minutes of AI generation and review. That same work, done manually, is two hours on a good day.
What AI cannot do
Here is what the AI did not do, and cannot do.
It did not read the room. When Priya said the schema sign-off date was optimistic at best, she said it privately, after the meeting. The AI processed her words. It did not see her face during the meeting when the date was agreed. You did.
It did not weigh political context. The security review is due March fourteenth. If Jordan is late, does Maya escalate to David immediately, or give it two more days? That depends on relationships the AI knows nothing about.
It did not decide what to escalate. The QA environment is not ready until sprint three. Taylor raised it as a risk. But is it worth flagging to the sponsor now, or do you absorb it and revisit at the sprint one review?
Those are judgment calls. They are the reason the PM role exists. The AI built the scaffolding. You do the thinking. Ten minutes of generation and review instead of two hours.
One paste. Three documents. One conversation.
The meeting notes did not change. They are still messy, unstructured, and full of side conversations. What changed is what you do with them next.
One paste into any AI tool. One conversation that builds three connected documents. Each one informed by the last. AI handles the formatting, the structure, and the cross-referencing. But it cannot have the sponsor conversation. It cannot read body language. And it cannot decide which risks to raise and which to absorb.
Take the notes from your next kickoff. Paste them into Claude, ChatGPT, or Copilot. Ask for a risk register first. Then an action log. Then a RACI. See what it catches that you would have missed.
Practical AI. One workflow at a time.
Watch the full walkthrough on YouTube — same framework, real example, ten minutes. Subscribe for one practical PM workflow every week.
