From structured data to eCTD modules. Dossier generation, submission tracking, health authority correspondence—all from one source of truth.

Three weeks before the NDA target date. Regulatory affairs discovers that the stability tables in Module 3.2.P.8 don't match the raw data in the stability database. Someone copied numbers incorrectly six months ago. Now every table needs to be regenerated, cross-checked, and re-QC'd.
The team works nights and weekends. They make the filing date—barely. But nobody asks the harder question: why were regulatory writers manually typing stability data into Word documents in the first place?
This is the state of regulatory submissions at most organizations. The data exists in validated systems—batch records, stability databases, analytical methods, process validation reports. The format is defined by ICH. And yet, regulatory affairs spends weeks manually assembling documents, reformatting tables, writing narratives that describe data they're copying from one system into another.
The submission process isn't complex because regulations are complex. It's complex because the work is manual. And manual work creates errors that manual checking must catch—or doesn't.
An eCTD submission is a structured package. Module 3.2.S covers drug substance—manufacturing process, controls, characterization, stability. Module 3.2.P covers drug product—formulation, manufacturing, specifications, stability. Each section has defined content requirements.
Look at what populates those sections: batch data from manufacturing records, analytical results from LIMS, stability data from stability studies, process parameters from validation protocols. This isn't new information created for the submission. It's information that already exists in your operational systems, reformatted for regulatory presentation.
The question isn't whether the data exists. It's why humans are manually copying it.
Seal connects your operational data to regulatory output. A submission template defines the structure: which eCTD sections to generate, what data sources feed each section, what format each table should take.
Data queries pull from structured records. "Batch data for Product X" returns manufacturing records with yields, process parameters, in-process controls. "Stability results through 24 months" returns time-point data with statistical analysis. "Analytical methods for assay and impurities" returns method summaries with validation status. The data is already validated—it came from your operational systems.
Code builds the document structure. Tables populate with batch data—no manual transcription. Figures generate from stability trends—no copying into Excel. Cross-references link to supporting documents—no hunting for appendix numbers. Section 3.2.P.3.4 gets the batch analysis because the template maps it there, not because someone remembered to put it there.
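The template-to-section mapping described above can be sketched in a few lines. This is a minimal illustration, not Seal's actual implementation: the `SectionSpec` structure, the query names, and the data-source stand-in are all hypothetical.

```python
from dataclasses import dataclass
from typing import Callable

# Hypothetical sketch: a template maps each eCTD section to a named data
# query and a renderer, so content lands in the right section by design,
# not because someone remembered to put it there.

@dataclass
class SectionSpec:
    ectd_section: str                    # e.g. "3.2.P.3.4"
    query: str                           # named query against source systems
    render: Callable[[list[dict]], str]  # turns records into a table

def batch_analysis_table(records: list[dict]) -> str:
    """Render batch records as a simple text table (illustrative only)."""
    header = "Batch | Yield (%) | Assay (%)"
    rows = [f"{r['batch']} | {r['yield']} | {r['assay']}" for r in records]
    return "\n".join([header, *rows])

template = [
    SectionSpec("3.2.P.3.4", "batch_data:ProductX", batch_analysis_table),
]

# Stand-in for validated source systems (batch records, LIMS).
SOURCES = {
    "batch_data:ProductX": [
        {"batch": "PX-001", "yield": 98.2, "assay": 99.1},
        {"batch": "PX-002", "yield": 97.8, "assay": 99.4},
    ],
}

def generate(template: list[SectionSpec]) -> dict[str, str]:
    """Build each section's content from its mapped data source."""
    return {s.ectd_section: s.render(SOURCES[s.query]) for s in template}

doc = generate(template)
print(doc["3.2.P.3.4"])
```

Because the table is a function of the source records, regenerating the document after a data correction is a re-run, not a re-typing exercise.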
AI drafts narrative sections. Given manufacturing data and process descriptions, AI writes the connecting prose: "The manufacturing process consists of seven unit operations beginning with compounding and ending with packaging. Process validation demonstrated consistent performance across three consecutive batches with yields of 98.2%, 97.8%, and 98.5%. All critical quality attributes met acceptance criteria."
You review, edit for accuracy, approve. The draft becomes your submission text. The data was never manually copied—it flowed from source systems. The narrative wasn't written from scratch—it was drafted from data and refined by experts.
Module 3.2.S (Drug Substance): Manufacturing process description, controls, characterization, stability. Data flows from batch records, analytical methods, and stability protocols. AI drafts process narratives explaining what each step accomplishes and how controls ensure consistency. The synthesis route, the critical process parameters, the in-process controls—all pulled from development and manufacturing records.
Module 3.2.P (Drug Product): Formulation, manufacturing, specifications, stability. Same pattern—structured data plus AI narrative. The dissolution data, the content uniformity results, the accelerated stability—all pulled from LIMS and formatted to eCTD requirements. Container closure compatibility links to stability and extractables studies.
Module 2.3 (Quality Overall Summary): AI synthesizes data from Modules 3.2.S and 3.2.P into an executive summary. Key quality attributes, control strategy rationale, stability conclusions. The QOS that used to take a week of writing takes a day of review—because the AI has access to all the underlying data, not just what someone remembered to include.

The stability tables that almost caused a filing delay? They generate directly from the stability database. If the data changes, the tables update. There's no manual copy to get out of sync.
The same product needs approval in multiple markets. FDA wants an NDA. EMA wants a MAA. Health Canada, PMDA, TGA: each has its own requirements, formats, and timelines.
The underlying data is identical. What differs is presentation: US regional requirements versus EU Module 1, different administrative forms, varying levels of detail in certain sections.
Seal maintains one source of truth—your product data—and generates market-specific outputs. The stability data is the same; the table format adapts to each agency's expectations. The manufacturing description is the same; the regional variations address market-specific requirements. When you update stability data, every market's submission reflects the change.
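The "one source of truth, many presentations" idea can be sketched as a single dataset rendered through per-market formatting rules. The market names and format details here are illustrative assumptions, not actual agency requirements.

```python
# Hypothetical sketch: one stability dataset, per-market presentation
# rules. Updating the dataset updates every market's output.

stability_data = [
    {"timepoint_months": 0, "assay_pct": 100.1},
    {"timepoint_months": 12, "assay_pct": 99.6},
    {"timepoint_months": 24, "assay_pct": 99.2},
]

# Presentation rules only -- the data itself never forks per market.
MARKET_FORMATS = {
    "FDA": {"time_label": "Time (months)", "decimals": 1},
    "EMA": {"time_label": "Time point (months)", "decimals": 2},
}

def render_stability_table(data: list[dict], market: str) -> str:
    fmt = MARKET_FORMATS[market]
    lines = [f"{fmt['time_label']} | Assay (%)"]
    for row in data:
        lines.append(
            f"{row['timepoint_months']} | "
            f"{row['assay_pct']:.{fmt['decimals']}f}"
        )
    return "\n".join(lines)

for market in MARKET_FORMATS:
    print(f"--- {market} ---")
    print(render_stability_table(stability_data, market))
```

The design choice is that markets differ only in the rendering layer; any correction to `stability_data` propagates to every market's table on the next generation.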
This isn't just efficiency. It's consistency. When FDA and EMA review the same product, they should see the same data presented in their expected format—not different interpretations created by different writers working from different source documents.
The submission is filed. FDA has questions. "Provide additional data supporting the proposed shelf life." "Clarify the rationale for the specification limit on Impurity A." "Submit updated stability data for the commercial process."
Each question requires finding what was submitted, gathering current data, and preparing a response. In manual systems, this means hunting through filing archives, re-querying databases, and hoping you find everything relevant.
Seal links questions to submissions. When FDA asks about shelf life, you see what stability data was in the original submission, what data has accumulated since, and what the current trend analysis shows. AI drafts a response incorporating current data: "Since the original submission, 12-month stability data has been collected on three additional batches. All results remain within specification with no significant trends observed. The attached stability update demonstrates continued support for the proposed 24-month shelf life."
You verify the data references, adjust the language, submit. The response is accurate because it pulls from the same data sources as the original submission—not from someone's recollection of what was filed.
A process change triggers regulatory assessment. What filings are required? In which markets? With what classification?
Seal evaluates the change against market-specific requirements. This change requires a Type II variation in the EU, a PAS in the US, notification only in Canada. Different formats, different timelines, same underlying change.
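At its core, this assessment is a lookup from (change type, market) to filing category. A minimal sketch, with an illustrative classification table that is not regulatory advice:

```python
# Hypothetical sketch: classify one change per market. The change type
# and the classification entries are illustrative assumptions.

CLASSIFICATION = {
    ("manufacturing_site_change", "EU"): "Type II variation",
    ("manufacturing_site_change", "US"): "Prior Approval Supplement (PAS)",
    ("manufacturing_site_change", "CA"): "Notification",
}

def assess(change_type: str, markets: list[str]) -> dict[str, str]:
    """Return the required filing category for each market."""
    return {m: CLASSIFICATION[(change_type, m)] for m in markets}

plan = assess("manufacturing_site_change", ["EU", "US", "CA"])
for market, filing in plan.items():
    print(f"{market}: {filing}")
```

In practice the rules are richer than a static table, but the shape is the same: one change in, one filing plan per market out.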
For each market, the system generates appropriate documentation. The manufacturing change description is the same data—the presentation adapts. Impact assessments reference the relevant sections of each market's approved dossier. AI drafts market-specific cover letters explaining the change in terms each agency expects.
When the change is approved, each market's dossier updates. The current approved state reflects the variation. Next submission builds from current state, not from trying to remember what's been approved where.
Every approval creates commitments. Submit annual stability. Complete post-approval studies by a committed date. Update labeling to reflect a new indication. Provide batch data from the first three commercial lots.
Miss a commitment and regulators notice. Tracking commitments across products, markets, and years is error-prone when it lives in spreadsheets.
Seal tracks commitments with owners, deadlines, and links to source approvals. A dashboard shows what's due—this quarter, this year, overdue. When fulfillment is due, the system doesn't just remind you. It pulls current data and drafts the update. Annual stability report? Generated from current stability data. Commercial batch data? Extracted from manufacturing records.
You review and submit—the commitment closes with evidence linked to the original requirement. Auditors ask about commitments? Show them the list, the fulfillment evidence, the submission confirmation.
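The tracking model above is essentially a structured list of commitments filtered into "due" and "overdue" views. A minimal sketch, where the field names, owners, and approval identifiers are hypothetical:

```python
from dataclasses import dataclass
from datetime import date

# Hypothetical sketch: commitments with owners, deadlines, and a link
# back to the approval that created them.

@dataclass
class Commitment:
    description: str
    owner: str
    due: date
    source_approval: str   # approval that created the obligation
    fulfilled: bool = False

commitments = [
    Commitment("Annual stability report", "RA-Stability",
               date(2025, 3, 31), "NDA-123"),
    Commitment("First three commercial lot data", "RA-CMC",
               date(2024, 12, 1), "NDA-123", fulfilled=True),
]

def due_items(items: list[Commitment], as_of: date) -> list[Commitment]:
    """Open commitments with deadlines on or after the given date."""
    return [c for c in items if not c.fulfilled and c.due >= as_of]

def overdue(items: list[Commitment], as_of: date) -> list[Commitment]:
    """Open commitments whose deadline has already passed."""
    return [c for c in items if not c.fulfilled and c.due < as_of]

today = date(2025, 1, 15)
print("Due:", [c.description for c in due_items(commitments, today)])
print("Overdue:", [c.description for c in overdue(commitments, today)])
```

Because each commitment carries its `source_approval`, the fulfillment evidence shown to auditors traces directly back to the requirement that created it.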
The stability tables match the database because they're generated from the database. The batch data matches the records because it's extracted from the records. The process narrative matches the validation reports because AI drafts from the validation data.
When submission teams spend their time reviewing and refining instead of copying and formatting, submissions get better. Fewer errors because less manual transcription. Faster turnaround because less assembly time. Better narratives because experts focus on interpretation, not data gathering.
The NDA that was a three-week emergency becomes a three-day review. Not because the work was rushed—because the manual work was eliminated.
