
    Are Custom-Programmed Solutions Feasible in Pharma Again?

    AI compresses both the code and the documentation burden around it. The build-vs-buy equation in pharma IT is shifting for the first time in 20 years.

    I shipped a fully documented custom enterprise system for procurement workflows in a pharma environment in under 12 months. It was non-GxP; even so, the usual vendor selection and customisation path would have taken 2–3 years.

    That raises a question: are custom-built solutions becoming feasible in pharma again?

    For the last 20+ years, pharma IT has defaulted to buy. Not because packaged software was cheap, but because custom systems were too slow to deliver, too validation-heavy, and too hard to maintain. Much of the cost sat in producing and maintaining user requirements specifications (URS), functional design specifications (FDS), traceability matrices, risk assessments, and the rest of the documentation set.

    What changed is not only that AI now helps to write code faster. It also compresses the manual documentation burden around the code, with humans still reviewing and signing where required.
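To make that compression concrete, here is a minimal sketch of one of the documents named above, a traceability matrix, generated mechanically from structured requirements and tests. The IDs, class names, and structure are my own illustration, not any GAMP template; the point is that coverage gaps get flagged for a human reviewer rather than papered over:

```python
from dataclasses import dataclass, field

@dataclass
class Requirement:
    req_id: str  # e.g. "URS-001"
    text: str

@dataclass
class TestCase:
    test_id: str
    covers: list = field(default_factory=list)  # requirement IDs this test verifies

def traceability_matrix(requirements, tests):
    """Map each requirement to the tests that cover it; uncovered
    requirements are returned as gaps for human review."""
    matrix = {r.req_id: [] for r in requirements}
    for t in tests:
        for rid in t.covers:
            if rid in matrix:
                matrix[rid].append(t.test_id)
    gaps = [rid for rid, tids in matrix.items() if not tids]
    return matrix, gaps

reqs = [Requirement("URS-001", "System shall log every approval."),
        Requirement("URS-002", "System shall retain records for 10 years.")]
tests = [TestCase("TC-01", covers=["URS-001"])]

matrix, gaps = traceability_matrix(reqs, tests)
# matrix: {"URS-001": ["TC-01"], "URS-002": []}; gaps: ["URS-002"]
```

The generation is mechanical; the judgment call, deciding whether "URS-002" is genuinely uncovered or mis-tagged, stays with a person who signs.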

    Institutions are reacting. GAMP guidance on AI, the draft EU GMP Annex 22, the FDA's Computer Software Assurance (CSA) approach, and the FDA/EMA principles are starting to show where AI fits in regulated environments, and where it does not.

    Karpathy's recent LLM Wiki pattern points to a practical model: raw sources and code become a schema-governed, LLM-maintained knowledge layer. In a regulated setting, that layer can stay live internally while controlled, signed outputs are generated where formal approval is required.

    The jump to GxP looks more like added compliance scaffolding than a different architecture.

    That changes the build-vs-buy equation. Custom software is becoming easier and quicker to build and to maintain, and therefore safer and cheaper to own.

    For now, humans are still in the loop. As models improve, more of the documentation and more of the governance will be automated. Liability will not. The hard question is no longer whether the work can be done; it is who owns the risk, and who answers when models err.
