Seal is the structural recipe spine for biologics operations — bench through commercial. PD, GMP, MSAT, and CMC work on the same versioned graph. We integrate with control systems, historians, instruments, and ERP through modern APIs; we do not replace them.
The process that drifted from itself
A monoclonal antibody program. Process v3.2 was transferred to a CMO in March 2024 for commercial PPQ. The transfer package was thorough — master batch record, raw material specs, equipment matrix, validation plan. Forty-three documents. Eight Zoom workshops. Person-in-plant for the engineering runs. The CMO ran v3.2 on the engineering batch. The results were comparable to the sponsor's GMP material. Everyone signed off. Tech transfer complete.
Six months later, PPQ Lot 3 failed — protein A capture yield 18% below the expected range. The CMO's investigation pointed at column performance. The sponsor's MSAT team was confused; their internal process had been performing well. They opened the comparison.
The CMO was running v3.2. The sponsor's MSAT team had iterated to v3.5 to debottleneck the same protein A step. v3.3 added a dynamic load criterion. v3.4 changed the wash buffer molarity. v3.5 modified the elution gradient. None of these changes were "major." Each was approved internally as process-improvement work. None had been pushed to the CMO because the CMO was running clinical product, not commercial. Until they were.
Then it got worse. The CMC submission filed with FDA two months earlier described the v3.5 process, because the regulatory team had pulled their narrative from MSAT's "current process" documents. The submission described one process. The CMO ran another. The clinical material referenced in the comparability section came from yet a third process (v3.0, the GMP-pilot version).
Everyone had been working from the "current process." There were three current processes. The investigation took 14 months. The PPQ slipped a year. The 483 finding cited "inadequate procedures to ensure that the manufacturing process described in the application is the process used to manufacture clinical and commercial materials."
Three teams. Three sources of truth. All correct in their local frame. All incompatible the moment they met. Whether you read this story as the sponsor or as the CDMO, the failure mode is the same — and the fix is the same.
The many-recipes problem
Walk into any biologics company and ask a simple question: what's the current recipe for Product X? You'll get different answers depending on who you ask.
Process development shows you the bench recipe. Lab notebooks, scale-down model parameters, design-space data, the latest CPP ranges. The version they're optimizing now.
GMP manufacturing shows you the master batch record. The recipe approved when clinical campaign N was authorized. Frozen at campaign launch, with approved deviations stitched on top.
MSAT shows you the "current commercial recipe." The internal model that incorporates every learning since the campaign started — minor parameter shifts, raw material substitutions, troubleshooting changes that became permanent.
The CMO shows you the recipe they were transferred. Whatever version was current at handoff, with their own approved deviations layered on.
Regulatory shows you the process described in the IND, the BLA, or the most recent variation. A snapshot from when the submission went out.
Each lives in its own system — bench LIMS, MES master batch record, MSAT change-control workspace, CMO's own QMS, eCTD publishing tool. Each has the same name. Each says different things. The drift is invisible until something breaks.
The recipe as one graph
Seal collapses these into one structure. The recipe is a graph: unit operations linked in sequence, each unit operation linked to its CPPs and acceptance ranges, each CPP linked to the CQAs it controls, each unit operation linked to the raw materials, equipment, and consumables it consumes.
The bench recipe, the master batch record, the MSAT model, the CMO recipe, the CMC narrative — all views of that same graph. Process development sees the design space and the parameter ranges. Manufacturing sees the executable batch record with limits and acceptance criteria. MSAT sees the change history and the trend overlay. Regulatory sees the narrative formatted for Module 3.2.S.2.2 or 3.2.P.3.3.
When the bench iterates v3.3, the graph updates and every view updates. When MSAT pushes a tightening of a CPP based on commercial data, the graph updates and every view updates. When the CMO is on a specific version, the version is explicit and the diff against the sponsor's current version is computable in seconds.
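One way to picture this is as a tiny versioned graph with a computable diff. The sketch below is illustrative only — the class names, fields, and the v3.2/v3.5 parameter values are assumptions, not Seal's actual schema:

```python
from dataclasses import dataclass, field

@dataclass(frozen=True)
class CPP:
    name: str
    low: float
    high: float
    cqas: tuple = ()          # names of the CQAs this parameter controls

@dataclass
class UnitOp:
    name: str
    cpps: dict = field(default_factory=dict)      # CPP name -> CPP

@dataclass
class Recipe:
    version: str
    unit_ops: dict = field(default_factory=dict)  # unit-op name -> UnitOp

def diff(a: Recipe, b: Recipe) -> list:
    """Parameter-level deltas between two recipe versions."""
    deltas = []
    for op_name, op_a in a.unit_ops.items():
        op_b = b.unit_ops.get(op_name)
        if op_b is None:
            deltas.append((op_name, f"unit op absent in {b.version}"))
            continue
        for p_name, p_a in op_a.cpps.items():
            p_b = op_b.cpps.get(p_name)
            if p_b is None or (p_a.low, p_a.high) != (p_b.low, p_b.high):
                deltas.append((f"{op_name}.{p_name}",
                               f"({p_a.low}, {p_a.high}) -> "
                               f"{'removed' if p_b is None else (p_b.low, p_b.high)}"))
    return deltas

v32 = Recipe("v3.2", {"protein_a_capture": UnitOp("protein_a_capture", {
    "load_density": CPP("load_density", 20.0, 40.0, ("purity",)),
    "wash_molarity": CPP("wash_molarity", 0.10, 0.15, ("hcp_residual",))})})
v35 = Recipe("v3.5", {"protein_a_capture": UnitOp("protein_a_capture", {
    "load_density": CPP("load_density", 20.0, 35.0, ("purity",)),
    "wash_molarity": CPP("wash_molarity", 0.12, 0.18, ("hcp_residual",))})})

for path, change in diff(v32, v35):
    print(path, change)
```

Every view — batch record, CMC narrative, MSAT overlay — is a rendering of objects like these, so the sponsor-vs-CMO comparison reduces to a structural diff rather than a document reconciliation.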
There is no master document to forget to send. There is no shared-drive folder to find. The recipe is the structure. Documents are renderings.
Recipe-driven development
Most biologics programs treat process development as a document factory. Every campaign generates a study report that someone hopes to harvest later. CPPs are written into a Word document at the end of the program. Design space is captured in a series of DOE summaries that may or may not be linkable to the parameter ranges they justified. The master batch record is built by translating the development reports into procedure language, with someone hoping nothing was lost in translation.
Recipe-driven development inverts this. The recipe graph is the working artifact from the first PD experiment. When development runs an experiment, it runs against a draft recipe — a structured object with explicit unit operations, parameter ranges (wide initially), candidate CPPs, and intended CQAs. Each experiment links to the recipe version it tested and records which parameters it varied. The DOE result attaches to the parameter range it informed. The CPP justification is the chain of experiments that established it.
By the time the program enters GMP clinical, the recipe has accumulated its own rationale. The transition to GMP is a lock event — freeze the recipe at its current state, sign the master batch record. There is no "translation" step because the recipe and the master batch record are the same object viewed differently. The development reports are no longer the load-bearing artifact; the linked evidence on each recipe element is.
This compounds across programs. Platform recipes become reusable starting points. New programs branch from a platform recipe and customize the elements that need molecule-specific work. The institutional knowledge that took years to build for the first commercial product becomes structural in the system, available to the next program from day one.
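The load-bearing shift — evidence attached to recipe elements rather than buried in reports — can be sketched minimally. Element paths and study IDs here are hypothetical:

```python
# Each evidence record links a study to the recipe version and element it informed.
evidence = []   # list of (recipe_version, element_path, study_id)

def attach(version: str, element: str, study: str) -> None:
    evidence.append((version, element, study))

def justification(element: str) -> list:
    """A CPP's justification is the chain of studies that established it."""
    return [(v, s) for v, e, s in evidence if e == element]

attach("v0.3", "capture.load_density", "DOE-104")
attach("v0.7", "capture.load_density", "DOE-121")
print(justification("capture.load_density"))
```

Because the link is recorded at experiment time, the "why is this range what it is" question never requires harvesting old reports.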
One-click tech transfer
A traditional tech transfer is a packaging exercise. Assemble forty-three documents. Email them to the receiver. Hold workshops to translate. Run engineering batches. Compare results. Sign the transfer report. Then watch the recipe drift between the two sites for the rest of the product lifecycle.
One-click tech transfer collapses the packaging step. Click the recipe version. Select the receiving site. The system creates the receiver's recipe instance as a structured inheritance — every unit operation, every parameter, every CPP-to-CQA link, every raw material specification, every consumable transfers as one object. The receiver opens it, reviews each element, and declares site-specific deltas where the receiving site differs.
Every delta is logged with reason: "different chromatography column geometry — bed height adjusted from 25 cm to 22 cm to maintain residence time," "site-specific GMP grade media qualification — Lot N from approved Source B," "fill volume 100 mL replaces 50 mL per commercial pack size." The transfer comparability plan generates from the delta list. The engineering batch protocol generates from the inherited recipe. The transfer report renders from the executed comparability evidence.
What used to take six months of document assembly now takes weeks of meaningful technical work — because the meaningful technical work was never about the document assembly. The transfer is not "complete" the day engineering batches release. The link between sender and receiver recipe versions is preserved indefinitely. When the sender iterates, the system shows the receiver what's drifted. When the receiver runs into a deviation, it links back to the inherited unit operation and the sender's MSAT can see it.
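Inheritance plus declared deltas is the whole mechanism. A minimal sketch, with plain dictionaries standing in for the recipe objects and all names and values illustrative:

```python
import copy

def transfer(master: dict, receiving_site: str, deltas: list) -> dict:
    """Create the receiver's recipe instance and apply declared site deltas."""
    instance = copy.deepcopy(master)          # the master recipe is untouched
    instance.update(site=receiving_site, inherits=master["version"], deltas=[])
    for path, new_value, reason in deltas:
        op, param = path.split(".")
        old_value = instance["unit_ops"][op][param]
        instance["unit_ops"][op][param] = new_value
        instance["deltas"].append(
            {"path": path, "from": old_value, "to": new_value, "reason": reason})
    return instance

master = {"version": "v4.1",
          "unit_ops": {"capture": {"bed_height_cm": 25, "residence_time_min": 4}}}

cmo = transfer(master, "CMO-Site-B", [
    ("capture.bed_height_cm", 22,
     "different column geometry; adjusted to maintain residence time"),
])

# The comparability plan is a rendering of the declared delta list.
for d in cmo["deltas"]:
    print(f'{d["path"]}: {d["from"]} -> {d["to"]} ({d["reason"]})')
```

Undeclared drift then becomes detectable by construction: anything that differs between the two instances but is absent from the delta list is a finding.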
This works whether the receiver is internal (PD → GMP pilot → commercial site) or external (sponsor → CDMO). For CDMOs running multiple sponsor programs in parallel, every program gets its own recipe spine with per-client isolation. The CDMO's MSAT works on N recipes for N sponsors without crossing wires, and each sponsor's live link sees only their program. Same structure. Same delta tracking. Same comparability evidence stitched into the recipe graph.
What Seal does not replace
A recipe spine is only useful if it composes cleanly with the systems that already run your operation. Seal is deliberately scoped:
- Control systems and DCS run the loop. Seal reads from them and writes recipe target ranges into them through OPC UA or vendor APIs. We do not run the loop and we are never on the critical control path.
- Historians and time-series infrastructure keep the high-frequency PV data. Seal pulls aggregates, excursion events, and CPV-relevant slices into the recipe context. The historian stays the historian.
- Instruments and analyzers keep their own SDMS, drivers, and connectors. Seal's role is to attach the result to the recipe element — the unit operation, the CPP, the CQA — not to be the instrument data lake.
- ERP owns the financials, the procurement, the inventory ledger. Seal carries the AVL relationship on the item and the recipe-side material requirements; ERP carries the receivables and the capacity plan.
- eCTD publishing tools handle the regulatory packaging. Seal is the source of truth for the CMC content; your publisher handles assembly and transmission to the agency.
Modern APIs — REST, JSON, webhooks, OPC UA where applicable — make this composability real instead of aspirational. Bus architectures, unified namespaces (UNS), and in-flight contextualization layers compose cleanly above the recipe spine. Whatever your data-standards strategy looks like, the recipe element is the canonical anchor that every other system attaches to.
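As one concrete shape this composability could take, here is a hypothetical webhook event for an approved recipe version and a minimal consumer. The event name and field names are assumptions for illustration, not a documented Seal payload:

```python
import json

event = {
    "event": "recipe.version.approved",
    "recipe_id": "mab-01",
    "version": "v4.2",
    "changed_elements": ["capture.wash_molarity"],
    "sites_on_prior_version": ["CMO-Site-B"],
}

def handle(raw: str) -> str:
    """Route a recipe event to downstream work (MES resync, site notification)."""
    evt = json.loads(raw)
    if evt["event"] == "recipe.version.approved" and evt["sites_on_prior_version"]:
        sites = ", ".join(evt["sites_on_prior_version"])
        return f"notify {sites}: drift vs {evt['version']}"
    return "no action"

print(handle(json.dumps(event)))
```

The point is the anchor: the payload names recipe elements, so every subscriber can resolve the change against its own copy of the graph.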
MSAT as the steward of the live process
Manufacturing Science and Technology owns the process after launch. Every campaign generates data. Every deviation reveals something. Every raw material lot has a slightly different attribute profile. The process learns. The question is whether the learning lives anywhere structural.
In most companies, MSAT is a heroic activity that keeps a parallel knowledge base in spreadsheets, dashboards, and individual heads. The MSAT lead knows the process is "really" running with a tightened lower bound on Step 7 protein A load, but the master batch record still shows the wider range because tightening it would trigger a change control. The MSAT lead knows that Resin Lot 8472 had unusual fines content and the team adjusted the wash flow rate to compensate, but that's an institutional memory item.
In Seal, MSAT works on the live recipe graph. Every change MSAT proposes — whether it's the sponsor's MSAT or the CDMO's MSAT working on a transferred program — attaches to the unit operation, the CPP, the raw material, or the equipment it concerns. Every change runs through proper change control with cascade analysis: which CMC sections does this affect, which validated processes does this touch, which transfers need updating, which sponsor needs notification. Approved changes update the live graph. Rejected changes are archived with reason. The MSAT knowledge stops being a private dashboard and becomes part of the structural recipe record.
When MSAT receives a complaint trend, an OOT result, or a stability deviation, the investigation traces back through the same graph. The link between the field signal and the process state is structural. The CAPA that follows generates a process change with cascade impact already computed.
Continuous Process Verification (CPV) under the ICH Q8/Q10 lifecycle model stops being a manual quarterly exercise. It runs against the live graph, with control limits attached to each CPP, and surfaces signals as they emerge.
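A minimal version of that per-CPP signal check, using simple 3-sigma control limits derived from prior campaigns. The numbers are made up for illustration:

```python
from statistics import mean, stdev

def cpv_signal(history: list, limit_sigma: float = 3.0) -> dict:
    """Flag the latest CPP value against limits from all prior campaign values."""
    baseline, latest = history[:-1], history[-1]
    center, spread = mean(baseline), stdev(baseline)
    lcl, ucl = center - limit_sigma * spread, center + limit_sigma * spread
    return {"latest": latest, "lcl": round(lcl, 2), "ucl": round(ucl, 2),
            "excursion": not (lcl <= latest <= ucl)}

# Day-7 glucose (g/L) across campaigns; the newest value drifts high.
print(cpv_signal([2.1, 2.0, 2.2, 2.1, 1.9, 2.0, 2.2, 3.4]))
```

Real CPV programs layer run rules and trending on top of this; the structural point is that the limits live on the CPP object, so the check fires whenever a campaign lands.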
Established conditions · Q12 lifecycle management
ICH Q12 introduced the discipline most biologics companies still fake: separating the recipe elements you've committed to the agency (Established Conditions) from the elements you control internally (supportive information). When an EC changes, you owe a submission — a Prior Approval Supplement, a CBE-30, or an Annual Reportable depending on category. When supportive information changes, internal change control is enough.
Most companies maintain ECs as a list in a spreadsheet, decoupled from the master batch record and the change-control system. Six months after filing, the spreadsheet is stale. Twelve months later, it's fiction. The first time anyone discovers a change should have been filed is when an inspector asks why it wasn't.
Seal tags ECs on the recipe element itself. Each unit operation, CPP range, raw material specification, or analytical method carries its EC classification — Prior Approval, CBE-30, Annual Reportable, or non-EC. When a change is proposed, the cascade computes the regulatory category automatically: this change touches three ECs at the Prior Approval level and two at CBE-30, so the variation submission is required and the timing follows. A non-EC change runs through internal change control without a filing.
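The category computation is "strictest tag wins" over the touched elements. A sketch with hypothetical element paths:

```python
# Ordering encodes regulatory strictness, least to most.
STRICTNESS = ["non-EC", "Annual Reportable", "CBE-30", "Prior Approval"]

ec_tags = {
    "capture.load_density": "Prior Approval",
    "capture.wash_molarity": "CBE-30",
    "polish.flow_rate": "non-EC",
}

def filing_category(touched: list) -> str:
    """Strictest EC classification across every element a change touches."""
    return max((ec_tags.get(e, "non-EC") for e in touched),
               key=STRICTNESS.index)

print(filing_category(["capture.wash_molarity", "polish.flow_rate"]))
print(filing_category(["polish.flow_rate"]))
```

A "non-EC" result routes to internal change control; anything above it routes to the corresponding submission path.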
Post-Approval Change Management Protocols (PACMPs) attach to the EC they manage. When you've pre-agreed a protocol with the agency for a future change, the system routes the change through the PACMP path automatically — collecting the predefined evidence, generating the reduced-reporting documentation, executing under the pre-agreed conditions. The Q12 framework stops being a regulatory ambition and becomes how change actually flows.
The Product Lifecycle Management (PLCM) document Q12 expects — describing how you classify ECs, how you manage change, how PACMPs operate — generates from the live tagging on the recipe graph. The strategy and the operation cannot disagree because they're the same object viewed differently.
Raw material AVL where it actually matters
For biologics, the AVL isn't a procurement convenience. It's a regulatory commitment. Animal-origin component status, GMP grade qualification, single-source risk, change-notification agreements — all of these are filed with the agency.
Seal puts the AVL on the raw material item, with full qualification context. Master cell bank reagents qualified for cGMP biologics use. Cell-culture media with confirmed animal-origin-free status. Chromatography resins with extractables/leachables data. Single-use bags with film qualification. Each source carries its qualification status, audit history, and change-notification subscription.
When a supplier issues a change notification — a resin manufacturer modifies their ligand, a media manufacturer changes their soy peptone source, a single-use bag film vendor switches their plasticizer — the notification routes to the right items in the system. Where-used answers immediately: which processes consume this material, which active campaigns are running, which CMC sections describe it, which comparability evidence is on file. The change-control response is initiated with the impact already computed.
Single-source critical materials surface continuously. The system flags items where there is no qualified backup, weighted by criticality and current campaign demand. Sourcing risk becomes a queryable property of the process, not a tribal worry.
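Both queries are one-liners once the links are structural. A sketch with illustrative item and process names:

```python
# Item -> qualified sources, and process step -> consumed items.
sources = {"resin-protA": ["Vendor-A"], "media-CD": ["Vendor-B", "Vendor-C"]}
consumes = {"mab-01/capture": ["resin-protA"], "mab-01/culture": ["media-CD"]}

def where_used(item: str) -> list:
    """Every process step consuming the item: the blast radius of a change notice."""
    return [step for step, items in consumes.items() if item in items]

def single_source_risks() -> list:
    """Items with exactly one qualified source that are consumed somewhere."""
    return [item for item, srcs in sources.items()
            if len(srcs) == 1 and where_used(item)]

print(where_used("resin-protA"))
print(single_source_risks())
```

Criticality and campaign-demand weighting would sit on top of the same two links.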
Cell line and bank lineage as a graph
The biological starting material has its own lifecycle. Research cell line, master cell bank, working cell bank, end-of-production cell line characterization. Every program has its lineage. Every lineage has provenance, characterization, and stability data.
In Seal, the cell line and bank lineage are first-class objects. The MCB record links to its source research cell line, its characterization studies, its stability program, and the process that uses it. Every WCB derives from a specific MCB with a recorded passage history. Every campaign batch records the WCB vial used, the passage age at inoculation, and the harvest performance.
When a regulatory question lands — "show us the WCB-to-MCB-to-RCB lineage for the lot referenced in your BLA Module 3.2.S.2.3" — the answer is one query. The lineage isn't a paragraph someone wrote in a submission. It's the live graph the process actually used.
For programs with multiple variants — research strain, GMP-banked strain, commercial-banked strain — the lineage tracks the variant relationships explicitly. The CMC submission describes the actual lineage in production, not the lineage someone remembered when they wrote the section.
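The "one query" is a walk over recorded parent links. A sketch with hypothetical bank and lot identifiers:

```python
# Each bank records the bank it was derived from; None marks the research origin.
parent = {"WCB-03": "MCB-01", "MCB-01": "RCB-07", "RCB-07": None}
batch_bank = {"LOT-2024-118": "WCB-03"}   # batch -> WCB vial used at inoculation

def lineage(batch: str) -> list:
    """Walk batch -> WCB -> MCB -> RCB through the recorded links."""
    chain, node = [], batch_bank[batch]
    while node is not None:
        chain.append(node)
        node = parent[node]
    return chain

print(" -> ".join(lineage("LOT-2024-118")))
```

Passage history, characterization studies, and stability data attach to the same nodes, so the regulatory answer and the operational record are one structure.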
CPP / CQA traceability that holds
Every CPP exists because it controls a CQA. Every CQA exists because it links to a clinical risk. The QbD chain — process parameter → quality attribute → product safety/efficacy — is the regulator's framework. It is also the framework most companies maintain in disconnected Excel spreadsheets.
Seal makes the chain structural. Click any CQA — purity, potency, aggregate level, host cell protein, residual DNA, glycan profile. See every CPP linked to it across every unit operation. See the design-space evidence that established the link. See the validation evidence that proved it. See the post-launch CPV data that monitors it.
When a process change is proposed, the cascade computes which CQAs are potentially affected, which validation evidence may need refresh, which comparability studies are required. When a CQA result trends, the system surfaces the CPPs that historically drove similar shifts. When the CMC submission needs to justify a specification, the rationale renders from the linked design space and validation evidence — not assembled from documents.
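The CQA-side lookup is the inverse of the CPP-side links. A sketch, with the parameter-to-attribute links as illustrative data:

```python
# CPP -> linked CQAs, as established by design-space studies.
cpp_to_cqa = {
    "culture.day7_glucose": ["g0f_glycan_ratio"],
    "capture.wash_molarity": ["hcp_residual"],
    "capture.load_density": ["hcp_residual", "aggregate_level"],
}

def cpps_for(cqa: str) -> list:
    """Invert the link: every CPP that controls a given quality attribute."""
    return sorted(p for p, cqas in cpp_to_cqa.items() if cqa in cqas)

print(cpps_for("hcp_residual"))
```

Because the same links drive change cascades and trend triage, the query runs in both directions: from a CQA to its CPPs, or from a proposed CPP change to the CQAs it could move.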
Multi-site comparability without multi-effort
A successful biologic ends up running in multiple places. Sponsor's clinical site for early material, commercial site for launch, second site for capacity, CMO for surge. Every additional site multiplies the comparability burden. Or it would, if comparability were a document.
In Seal, comparability is structural. Each site runs the recipe as a versioned instance with explicit deltas from the master recipe graph. Comparability evidence — release testing, characterization, stability — links to the recipe versions it bridges. When you add a site, you inherit the master recipe, declare your site-specific deltas, and run comparability batches. The comparability dossier renders from the linked evidence. When the master recipe iterates, every site sees the change request and resolves whether to inherit or branch.
When inspectors compare what's in the BLA to what's running on the floor at any site, the answer matches by construction.
CMC submission from the live process
The Module 3 sections describing your manufacturing process — 3.2.S.2.2 (description of manufacturing process and process controls), 3.2.S.2.4 (controls of critical steps and intermediates), 3.2.S.2.5 (process validation and/or evaluation), 3.2.P.3.3 (description of manufacturing process and process controls for drug product) — are the most labor-intensive narrative sections of the dossier. They are also the sections most likely to drift from reality between filings.
In Seal, these sections render from the live recipe graph. Unit operations become process descriptions. CPP ranges become control parameter tables. Raw material specifications become control of materials sections. Validation evidence becomes the process validation summary. The narrative is generated from the structure, not assembled from archives.
Variations and supplements generate the same way, with the change relative to the prior submission already computed. The annual product review compiles itself from the campaign data and the process change history. The reviewer questions that follow filings get answered from queryable data, not from week-long document hunts.
Neil computes the cascade before you approve
Tell Neil what changed at the bench: "Move from Filter A to Filter B at the depth filtration step." Neil walks the recipe graph and prints the cascade before you submit the change request:
- 7 unit operations downstream that interact with the filter output
- 4 CPPs whose ranges were established under Filter A's flux profile
- 2 CQAs (HCP residual, host cell DNA) with linked validation evidence that may need refresh
- 3 sites running the recipe, including 1 CMO under live transfer link
- 6 CMC sections that describe the depth filter — 2 currently classified as Established Conditions, requiring a CBE-30
- 1 PACMP on file that covers depth filter changes within a defined design space — change qualifies, reduced reporting available
- 5 in-flight campaign batches that will need disposition impact review
You see the regulatory work and the operational work in front of you, before approval. Then Neil drafts the change request, the comparability plan, the CBE-30 narrative, and the impacted-batch disposition memo. You review and decide.
Between changes, Neil watches the spine. Recipe versions diverging across sites without an open transfer flag. CPP control limits that have drifted since last validation. Raw materials with high consumption and a single qualified source. CQA trends crossing statistical thresholds. The signals come to MSAT, instead of MSAT going looking.
The next inspection
Same molecule. Same site. Two years after the v3.2/v3.5 incident. An FDA pre-approval inspection arrives for the variation. The lead investigator opens with the standard question.
"Show me the manufacturing process described in the application."
Module 3.2.S.2.2 renders. Generated from recipe v4.1, the version locked at the time of submission. Every unit operation, every CPP, every raw material specification on screen.
"Show me the same process as currently run on the floor."
The CMO's batch record for the lot in process. Recipe v4.1 plus three site deltas, each with rationale and comparability evidence. The investigator compares to the application. Match.
"What changes have you made since filing?"
Six. Each opens to its cascade analysis with EC classification visible. Two went through CBE-30 — filing dates, FDA acknowledgment letters, comparability data attached. One ran under PACMP-08 — pre-agreed protocol, evidence package, reduced reporting filed. Three were non-EC — internal change control, no submission required, with the rationale linked to the EC tagging.
"How do you know the CMO is running the same recipe?"
The live link. Pull up the diff between sponsor v4.1 and the CMO's effective recipe. Three site deltas, all declared. No undeclared drift. The diff is computed from the structure, not reconstructed from documents.
"Why is the day-7 glucose range what it is?"
Click the CPP. Design space evidence attached, study DOE-217. Linked CQA: G0F glycan ratio. Linked clinical risk: ADCC potency. Validation data, three PV lots, all within range. CPV trend chart for the last 18 commercial campaigns. The justification is the chain.
The investigator moves on. No 483.
Getting started
The fix is not a forklift migration. It is establishing the recipe spine for one program — usually the next program entering tech transfer or PPQ.
Pull the current master batch record into the structured graph. Link CPPs and CQAs as objects, not table rows. Tag Established Conditions on the elements that carry regulatory commitments. Capture the raw material AVL with qualification context. Lift the cell line and bank lineage. Attach validation and comparability evidence as linked records, not folder contents.
The first program takes weeks. The next tech transfer runs against the spine instead of a document package. Programs already in commercial benefit from MSAT moving onto the spine without disturbing the GMP master batch record until the next change cycle. Pre-IND programs benefit from starting structured.
The companies that made this transition stopped having the recurring conversation about which version is current.
