ferkakta.dev

I answered 114 AWS Well-Architected Review questions from my terminal

I was fourteen questions into the AWS Well-Architected Review when my wrists told me to stop. Each question is a page: read the description, check the boxes, type notes into a 2084-character text field, click Next. The Container Build Lens alone has 28 questions. I had two more lenses queued — the main Well-Architected Framework (57 questions) and the Generative AI Lens (29). That’s 114 questions total, and the console wants me to click through every one.

I have carpal tunnel. Clickops is not an option for 114 questions.

Everything behind that form is just JSON

The Well-Architected Tool has a full API. aws wellarchitected list-answers returns every question for a lens, including the available choices, their IDs, and any answers you’ve already submitted. aws wellarchitected update-answer pushes answers back — selected choices and notes, per question.

The entire review is a data structure. The console is just a form over it.

I pulled the Container Build Lens into a YAML file:

aws wellarchitected list-answers \
  --workload-id a1b2c3d4e5f6a1b2c3d4e5f6a1b2c3d4 \
  --lens-alias "arn:aws:wellarchitected::aws:lens/containerbuild" \
  --region us-east-1 --max-results 50

Then, for each question, a get-answer call retrieves the full choice list with descriptions. A script assembled the YAML: one entry per question with id, title, pillar, available_choices, selected_choices, and notes.
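A sketch of that per-question step, assuming jq is available. The output field names (id, title, pillar, and so on) are my own YAML layout, not anything AWS defines; the input fields follow the get-answer response shape:

```shell
# answer_to_yaml — read one `aws wellarchitected get-answer` JSON response
# on stdin and emit one YAML entry. Output field names are my own layout;
# input fields (.Answer.QuestionId, .Choices, ...) follow the API response.
answer_to_yaml() {
  jq -r '.Answer |
    "- id: \(.QuestionId)",
    "  title: \(.QuestionTitle)",
    "  pillar: \(.PillarId)",
    "  available_choices:",
    (.Choices[] | "    - \(.ChoiceId): \(.Title)"),
    "  selected_choices: [\(.SelectedChoices // [] | join(", "))]",
    "  notes: \"\(.Notes // "")\""'
}
```

Loop it over every QuestionId that list-answers returns, append to review.yaml, and the whole lens is one file.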

My copilot checked my homework

With the questions in YAML, I answered them conversationally. My AI copilot asked each question, I answered, and the notes went straight to my clipboard so I could paste them into the YAML or directly into the console for the first few. After I realized the clipboard-to-console round trip was still clickops, we cut out the middle step entirely: the copilot updated the YAML and pushed via the API.

The interview style turned out to be better than the console in ways I didn’t expect. The copilot could check my claims in real time. “Do you pin Dockerfile base images by digest?” — instead of guessing, it grepped my Dockerfiles and told me which ones did and which ones didn’t. “Is GuardDuty enabled?” — it ran aws guardduty list-detectors and reported back. The answers were grounded in what actually exists, not what I thought existed.
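The checks themselves are one-liners. A sketch of the Dockerfile one (the image@sha256: digest form is the standard OCI convention; the function name is mine), with the GuardDuty call from the text shown for reference:

```shell
# unpinned_base_images DIR — print FROM lines in Dockerfiles under DIR that
# lack a @sha256: digest. Empty output means every base image is pinned.
unpinned_base_images() {
  grep -rn --include='Dockerfile*' '^FROM ' "$1" | grep -v '@sha256:' || true
}

# GuardDuty: a non-empty DetectorIds list means it is enabled in the region.
# (Live-account call, shown for reference, not executed here.)
#   aws guardduty list-detectors --region us-east-1 --query DetectorIds
```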

The notes became a personal architecture journal. Every gap got documented honestly — not as a finding to remediate, but as a statement of where we are and what we haven’t built yet. “No formal incident response plan, no on-call rotation, no break-glass IAM roles. Incident response is ad-hoc — team monitors and responds as issues surface.” That’s more useful than a green checkbox. The improvement plan is the deliverable, not the score.

114 answers, one loop

Once the YAML was complete, pushing was mechanical:

aws wellarchitected update-answer \
  --workload-id $WORKLOAD_ID \
  --lens-alias $LENS \
  --question-id $QUESTION_ID \
  --selected-choices $CHOICE_1 $CHOICE_2 \
  --notes "$NOTES" \
  --region us-east-1

28 questions for Container Build. 57 for Well-Architected Framework. 29 for GenAI. All pushed programmatically, zero failures. The console showed the results immediately — risk ratings, improvement items, the whole dashboard populated from answers I never clicked.
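The loop driving those calls is a dozen lines. A sketch, assuming the YAML has been converted to JSON first (e.g. yq -o=json review.yaml > review.json) so jq can iterate it; the field names are my layout, and push_one wraps the update-answer call above:

```shell
# push_answers FILE CMD — invoke CMD once per question in FILE (the review
# as JSON) with three args: question id, space-joined choice IDs, notes.
# Note: entries with an empty selected_choices list need extra care, since
# tab-IFS collapses empty fields on read.
push_answers() {
  local file=$1 cmd=$2
  jq -r '.[] | [.id, (.selected_choices | join(" ")), .notes] | @tsv' "$file" |
  while IFS=$'\t' read -r qid choices notes; do
    "$cmd" "$qid" "$choices" "$notes"
  done
}

# The real pusher; assumes WORKLOAD_ID and LENS are exported.
push_one() {
  aws wellarchitected update-answer \
    --workload-id "$WORKLOAD_ID" --lens-alias "$LENS" \
    --question-id "$1" \
    --selected-choices $2 \
    --notes "$3" --region us-east-1   # $2 unquoted on purpose: one arg per choice ID
}

# Dry run:  push_answers review.json echo
# Real run: push_answers review.json push_one
```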

The review lives in git now

The console is one question at a time, forward and back. The YAML is the entire review in one file. I can grep it for gaps (rg "no formal|no automated|not enabled"), diff it between reviews, version it in git, and hand it to a colleague who needs to understand our security posture without clicking through 114 pages.

The notes survive outside AWS. If we spin up a new workload, the YAML from the first review is a starting template — most of the infrastructure answers carry over, and the gaps are already documented. The console treats each workload review as an island. The YAML makes it a portfolio.
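Seeding the next workload's review is a copy plus a grep. A sketch (filenames and the gap-phrase pattern are mine, matching how my notes are worded):

```shell
# clone_review OLD NEW — start NEW's review file from OLD, then print the
# documented gaps so they are the first things to revisit.
clone_review() {
  cp "$1" "$2"
  grep -Ein "no formal|no automated|not enabled" "$2" || true
}

# e.g. clone_review reviews/workload-a.yaml reviews/workload-b.yaml
```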

What fell out

114 questions answered in one session. Three lenses. Every answer has a note documenting what we do and what we don’t. The review generated an improvement backlog of 22 gaps organized by category — pre-launch blockers, security hardening, operational maturity, CI/CD — with AWS documentation links for each.

The Well-Architected Review is a prefab diary with writing prompts — and doing it interview-style with an AI copilot turns it from a chore into a pleasure. The copilot checks your claims against live infrastructure, you think out loud, and the notes become an honest architecture journal. The improvement plan is the deliverable, not the score. The review is your journal, not their trophy. The console makes it feel like homework. The terminal makes it feel like a conversation.

#aws #platformengineering #wellarchitected #cli #devops