<?xml version="1.0" encoding="utf-8" standalone="yes"?><rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Llm on ferkakta.dev</title><link>https://ferkakta.dev/tags/llm/</link><description>Recent content in Llm on ferkakta.dev</description><generator>Hugo</generator><language>en-US</language><lastBuildDate>Thu, 19 Mar 2026 12:00:00 -0500</lastBuildDate><atom:link href="https://ferkakta.dev/tags/llm/index.xml" rel="self" type="application/rss+xml"/><item><title>The missing layer in compliance RAG: why your search results need a judge</title><link>https://ferkakta.dev/rag-judging-layer/</link><pubDate>Thu, 19 Mar 2026 12:00:00 -0500</pubDate><guid>https://ferkakta.dev/rag-judging-layer/</guid><description>&lt;p&gt;If you&amp;rsquo;re building search over a knowledge base with an LLM — the pattern everyone calls RAG — you&amp;rsquo;ve seen the standard pipeline: embed the user&amp;rsquo;s question, find the closest chunks in a vector store, hand them to the LLM, get an answer. For documentation search or internal wikis, this works. The LLM is good at ignoring irrelevant context when the relevant material is also in the window.&lt;/p&gt;
&lt;p&gt;I&amp;rsquo;m building a CMMC compliance platform, and I wanted a way to dogfood our own product against our own development process. Every commit we make to the platform touches some aspect of NIST 800-171 — access control, audit logging, encryption, configuration management. I wanted our pull requests to show which compliance controls each change addresses. Not as a compliance artifact (though it could become one), but as a consciousness-raising tool: every engineer on the team sees the compliance implications of their own code, and every reviewer sees which controls are being strengthened. It&amp;rsquo;s ambient education that turns into culture.&lt;/p&gt;</description></item></channel></rss>