<?xml version="1.0" encoding="utf-8" standalone="yes"?><rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>CI/CD on ferkakta.dev</title><link>https://ferkakta.dev/tags/ci-cd/</link><description>Recent content in CI/CD on ferkakta.dev</description><generator>Hugo</generator><language>en-US</language><copyright>Copyright fizz.</copyright><lastBuildDate>Mon, 02 Mar 2026 09:00:00 -0600</lastBuildDate><atom:link href="https://ferkakta.dev/tags/ci-cd/index.xml" rel="self" type="application/rss+xml"/><item><title>Zero-touch multi-tenant deploys: removing myself from the critical path</title><link>https://ferkakta.dev/zero-touch-multi-tenant-deploys-eks-terraform/</link><pubDate>Mon, 02 Mar 2026 09:00:00 -0600</pubDate><guid>https://ferkakta.dev/zero-touch-multi-tenant-deploys-eks-terraform/</guid><description>&lt;p&gt;I had provisioned two tenants when I realized the deploy process didn&amp;rsquo;t scale to three. Each tenant on &lt;a href="https://ramparts.dev"&gt;ramparts&lt;/a&gt; runs three services &amp;ndash; &lt;code&gt;api-server&lt;/code&gt;, &lt;code&gt;web-client&lt;/code&gt; (the React frontend), &lt;code&gt;tenant-auth&lt;/code&gt; &amp;ndash; each with its own Docker image in ECR. Deploying a release meant running &lt;code&gt;gh workflow run deploy-tenant.yml -f tenant_name=acme -f action=apply -f update_images=true&lt;/code&gt;, then doing it again for the next tenant. With three services resolving per run and N tenants, I was the bottleneck. Not Terraform, not GitHub Actions, not ECR.
Me, remembering which tenants existed and typing their names correctly.&lt;/p&gt;</description></item><item><title>Expression injection in GitHub Actions repository_dispatch — and the one-line fix</title><link>https://ferkakta.dev/github-actions-expression-injection-repository-dispatch/</link><pubDate>Fri, 20 Feb 2026 10:00:00 -0600</pubDate><guid>https://ferkakta.dev/github-actions-expression-injection-repository-dispatch/</guid><description>&lt;p&gt;I was hardening a cross-repo deploy pipeline built on &lt;code&gt;repository_dispatch&lt;/code&gt; when I found a textbook expression injection sitting in plain sight. The trigger workflow accepted a &lt;code&gt;client_payload&lt;/code&gt; JSON object from the caller and dropped its fields straight into a &lt;code&gt;run:&lt;/code&gt; block.&lt;/p&gt;
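&lt;p&gt;The shape of the bug, sketched with a hypothetical payload field: the expression is expanded into the script text before the shell ever runs, so whoever controls the payload controls the script. The fix is to route the value through &lt;code&gt;env&lt;/code&gt; so it reaches the shell as data, not source.&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;# Vulnerable: payload text is spliced directly into the script
- run: echo "Deploying ${{ github.event.client_payload.ref }}"

# Fixed: the payload arrives as an environment variable instead
- run: echo "Deploying $DEPLOY_REF"
  env:
    DEPLOY_REF: ${{ github.event.client_payload.ref }}
&lt;/code&gt;&lt;/pre&gt;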
&lt;h2 id="how-repository_dispatch-works"&gt;How repository_dispatch works&lt;/h2&gt;
&lt;p&gt;When you call the &lt;code&gt;repository_dispatch&lt;/code&gt; API, you send a JSON body with an &lt;code&gt;event_type&lt;/code&gt; and an optional &lt;code&gt;client_payload&lt;/code&gt; — an arbitrary JSON object the caller defines. Your workflow reads it via &lt;code&gt;github.event.client_payload.*&lt;/code&gt;.&lt;/p&gt;</description></item><item><title>Self-healing race conditions: when your CI/CD fails on purpose</title><link>https://ferkakta.dev/self-healing-race-conditions-github-actions-concurrency/</link><pubDate>Fri, 20 Feb 2026 11:00:00 -0500</pubDate><guid>https://ferkakta.dev/self-healing-race-conditions-github-actions-concurrency/</guid><description>&lt;p&gt;Three app repos build Docker images and push them to ECR. On merge, each fires a &lt;code&gt;repository_dispatch&lt;/code&gt; to an infra repo&amp;rsquo;s orchestrator workflow. The orchestrator resolves ALL service images — not just the one that triggered it — and deploys every tenant via Terraform.&lt;/p&gt;
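&lt;p&gt;The serialization this design leans on is a concurrency group on the orchestrator workflow &amp;ndash; a sketch, with a hypothetical group name:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;# Orchestrator workflow: one deploy at a time; later dispatches queue
concurrency:
  group: tenant-deploys
  cancel-in-progress: false
&lt;/code&gt;&lt;/pre&gt;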
&lt;p&gt;What happens when two repos merge at the same time?&lt;/p&gt;
&lt;h2 id="the-sequence"&gt;The sequence&lt;/h2&gt;
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;T=0:&lt;/strong&gt; &lt;code&gt;web-client&lt;/code&gt; and &lt;code&gt;tenant-auth&lt;/code&gt; both merge to &lt;code&gt;releases/0.0.2&lt;/code&gt;.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;T=2m:&lt;/strong&gt; &lt;code&gt;tenant-auth&lt;/code&gt; build finishes first, fires dispatch.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;T=2.5m:&lt;/strong&gt; Orchestrator Run A starts. Tries to resolve all three service images.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;T=2.5m:&lt;/strong&gt; &lt;code&gt;web-client&lt;/code&gt; image doesn&amp;rsquo;t exist yet — still building. Run A fails at image resolution.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;T=4m:&lt;/strong&gt; &lt;code&gt;web-client&lt;/code&gt; build finishes, fires its own dispatch.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;T=4m:&lt;/strong&gt; Orchestrator Run B starts. Run A already finished (failed), so the concurrency group is free.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;T=4m:&lt;/strong&gt; Run B resolves all 3 images. Both new ones exist now. Deploy succeeds.&lt;/li&gt;
&lt;/ul&gt;
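&lt;p&gt;The fail-fast behavior at T=2.5m comes from the resolution step refusing to proceed when any expected image is absent. A sketch of that check (the &lt;code&gt;RELEASE_TAG&lt;/code&gt; scheme here is hypothetical; a missing tag makes &lt;code&gt;describe-images&lt;/code&gt; exit nonzero and fail the run):&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;- name: Resolve service images
  run: |
    # Fail the run if any service image is missing from ECR
    for repo in api-server web-client tenant-auth; do
      aws ecr describe-images \
        --repository-name "$repo" \
        --image-ids imageTag="$RELEASE_TAG"
    done
&lt;/code&gt;&lt;/pre&gt;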
&lt;p&gt;The end state is correct. Both changes deployed. One workflow run failed. Nobody had to do anything.&lt;/p&gt;</description></item><item><title>Cross-repo auto-deploy with GitHub Actions: the orchestrator pattern</title><link>https://ferkakta.dev/cross-repo-auto-deploy-orchestration-github-actions/</link><pubDate>Fri, 20 Feb 2026 10:00:00 -0500</pubDate><guid>https://ferkakta.dev/cross-repo-auto-deploy-orchestration-github-actions/</guid><description>&lt;p&gt;Two repos merged within seconds of each other. The first orchestrator run failed — &lt;code&gt;web-client&lt;/code&gt;&amp;rsquo;s ECR image didn&amp;rsquo;t exist yet because the build was still running. The GitHub Actions log showed a red X, an error annotation, and a Slack notification I didn&amp;rsquo;t need to read.&lt;/p&gt;
&lt;p&gt;Four minutes later, the second run deployed both changes. No retry logic. No manual intervention. Nobody touched anything.&lt;/p&gt;
&lt;p&gt;I&amp;rsquo;d spent my day building a cross-repo deploy pipeline for a multi-tenant platform — three app repos pushing Docker images to ECR, one infra repo deploying the new tenant service images to EKS. The race condition was the first real test. It failed exactly the way I wanted it to.&lt;/p&gt;</description></item><item><title>Your CI/CD dispatch token can rewrite your infrastructure code</title><link>https://ferkakta.dev/github-actions-repository-dispatch-contents-write-permission/</link><pubDate>Fri, 20 Feb 2026 09:00:00 -0600</pubDate><guid>https://ferkakta.dev/github-actions-repository-dispatch-contents-write-permission/</guid><description>&lt;p&gt;I built a cross-repo auto-deploy pipeline this week. Three app repos push Docker images to ECR, then dispatch a deploy event to the infra repo&amp;rsquo;s orchestrator workflow via &lt;code&gt;repository_dispatch&lt;/code&gt;. Standard pattern.&lt;/p&gt;
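&lt;p&gt;The dispatch itself is a single API call from the app repo&amp;rsquo;s workflow (org, repo, and payload fields below are hypothetical):&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;- name: Trigger infra deploy
  run: |
    curl -sS -X POST \
      -H "Authorization: Bearer $DISPATCH_TOKEN" \
      -H "Accept: application/vnd.github+json" \
      https://api.github.com/repos/my-org/infra/dispatches \
      -d '{"event_type":"deploy","client_payload":{"service":"web-client"}}'
  env:
    DISPATCH_TOKEN: ${{ secrets.INFRA_DISPATCH_PAT }}
&lt;/code&gt;&lt;/pre&gt;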
&lt;p&gt;The gotcha: fine-grained PATs need &lt;code&gt;contents:write&lt;/code&gt; to call the &lt;code&gt;repository_dispatch&lt;/code&gt; API. Not &lt;code&gt;actions:write&lt;/code&gt; — &lt;code&gt;contents:write&lt;/code&gt;. The permission that also lets you push code, create branches, and delete files.&lt;/p&gt;
&lt;p&gt;My service token that should only be able to say &amp;ldquo;hey, deploy this&amp;rdquo; can also rewrite the deployment workflow it&amp;rsquo;s triggering. That&amp;rsquo;s not least privilege. That&amp;rsquo;s a door that&amp;rsquo;s three sizes too wide.&lt;/p&gt;</description></item><item><title>Your terraform apply is silently rolling back your container images</title><link>https://ferkakta.dev/state-aware-ecr-image-resolution-github-actions/</link><pubDate>Tue, 17 Feb 2026 09:00:00 -0600</pubDate><guid>https://ferkakta.dev/state-aware-ecr-image-resolution-github-actions/</guid><description>&lt;p&gt;Every &amp;ldquo;deploy to EKS with GitHub Actions&amp;rdquo; tutorial solves the same problem: build an image, push to ECR, deploy it. The tutorial ends at &amp;ldquo;your pod is running.&amp;rdquo; Nobody talks about day two.&lt;/p&gt;
&lt;h2 id="the-silent-rollback"&gt;The silent rollback&lt;/h2&gt;
&lt;p&gt;Day two: you have a running EKS cluster with three services per tenant. You need to change an IAM policy. You open a PR, touch one line of Terraform, run &lt;code&gt;terraform apply&lt;/code&gt;.&lt;/p&gt;
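&lt;p&gt;Somewhere in &lt;code&gt;variables.tf&lt;/code&gt; sits a default like this (variable name, account, region, and tag all hypothetical):&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;# variables.tf: a default frozen at whatever was current when it was written
variable "api_server_image" {
  type    = string
  default = "123456789012.dkr.ecr.us-east-1.amazonaws.com/api-server:0.0.1"
}
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Unless every apply overrides it with a value resolved from ECR or prior state, that default wins.&lt;/p&gt;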
&lt;p&gt;Your IAM policy updates. Your container images also update — to whatever was hardcoded in &lt;code&gt;variables.tf&lt;/code&gt; as the default. That default was correct three months ago. Your services just rolled back to a three-month-old image and nobody noticed because the deployment succeeded.&lt;/p&gt;</description></item></channel></rss>