---
language:
- en
license: cc-by-4.0
size_categories:
- 1K<n<10K
task_categories:
- text-classification
- feature-extraction
tags:
- patent
- prior-art
- freedom-to-operate
- fto
- office-action
- uspto
- epo
- multi-jurisdiction
- intellectual-property
- legal-nlp
pretty_name: "Layer A — Office Action Triples for FTO Eval"
configs:
- config_name: default
  data_files:
  - split: train
    path: "**/jurisdiction=*/filing_year=*/*.parquet"
- config_name: us
  data_files:
  - split: train
    path: "US/jurisdiction=US/filing_year=*/*.parquet"
- config_name: ep
  data_files:
  - split: train
    path: "EP/jurisdiction=EP/filing_year=*/*.parquet"
- config_name: prior_art_index
  data_files:
  - split: train
    path: "prior_art_index/*/index.parquet"
---

# Layer A — Office Action Triples for FTO Evaluation

A public dataset of `(invention → cited prior art → outcome)` triples
extracted from the **USPTO Office Action Research Dataset (OARD)**
plus the **USPTO Open Data Portal (ODP)** API (US slice), and from
the **EPO Open Patent Services (OPS)** Register service (EP slice).
Built as the agent-evaluation substrate for
[Parallax](https://parallax.3mergen.com), an AI-native
Freedom-to-Operate (FTO) and defensive-publication platform for
individual inventors and small teams.

> **Curated by [Vox](https://vox.delivery)** (org: `v13s`).
> Parallax is a Vox product; the curation layer (annotations,
> severity tagging, schema, manifest) is © Vox 2026 under
> CC-BY-4.0. The underlying USPTO patent data is public domain.

## TL;DR

- **5 011 rows** (v1.1.20260513): 5 000 US Office Actions (filing
  years 2011–2017) + 11 EP search reports (filing years 2014–2018)
- **14 Parquet shards**, partitioned by `jurisdiction × filing_year`
- Schema: `(case_id, invention, examination, prior_art[], outcome,
  provenance)` — see [Schema](#schema) below
- Two jurisdictions live (US, EP); JP gated on INPIT bulk
  credential issuance
- License: **CC-BY-4.0** on the curation; underlying patent
  documents remain in the public domain
- SHA-256 manifest at `MANIFEST.json` for byte-level
  reproducibility

## Quick start

```python
from datasets import load_dataset

# Default config returns both jurisdictions (5 011 rows).
ds = load_dataset("v13s/golden-fto-layer-a", split="train")
print(len(ds))  # 5011

# Per-jurisdiction configs are also available.
ds_us = load_dataset("v13s/golden-fto-layer-a", "us", split="train")
ds_ep = load_dataset("v13s/golden-fto-layer-a", "ep", split="train")
print(len(ds_us), len(ds_ep))  # 5000, 11

row = ds[0]
print(row["case_id"])  # e.g. "US-13004847-0"
print(row["invention"]["title"])  # "SYSTEM AND METHOD FOR ..."
print(row["examination"]["oa_type"])  # "rejection" | "search_report"
print(row["examination"]["rejection_reasons"])  # ["obviousness_103"]
for ref in row["prior_art"]:
    print(ref["ref_id"], ref["severity"])
    # US: "US9123456B2", "obviousness"
    # EP: "US6825941", "novelty_destroying"
```

## Schema

Each row is a single Office Action event linked to its prior-art
citations. The full schema lives at
[`data-pipeline/src/layer_a/schema.py`](https://github.com/masterleopold/parallax/blob/main/data-pipeline/src/layer_a/schema.py)
in the source repo.

| Field | Type | Description |
|---|---|---|
| `case_id` | string | Stable id: `<jurisdiction>-<application_number>-<oa_seq>` |
| `schema_version` | string | Per-row schema version (`1.0` legacy / `1.2` post-2026-05-07) |
| `jurisdiction` | string | `US` or `EP`; `JP` in future versions |
| `source_dataset` | string | `uspto_oard` (US rows) or `epo_ops` (EP rows) |
| `extracted_at` | timestamp[s, UTC] | When this row was emitted |
| `invention` | struct | Application metadata — title, abstract, IPC/CPC codes, claims, applicant |
| `examination` | struct | OA event — `oa_date`, `oa_type` ∈ {rejection, allowance, search_report}, `rejection_reasons[]`, `examiner_id` |
| `prior_art` | list<struct> | Cited references — `ref_id`, `ref_type`, `source` (examiner/applicant), `rejection_basis`, `claims_blocked[]`, `severity`, `categories[]` (v1.2+), `metadata` |
| `outcome` | struct | Final disposition — `final_disposition`, `disposition_date`, `granted_claims[]`, `amendments_made`, `decision_source` (v1.1+) |
| `provenance` | struct | Audit trail — `parser_version`, `source_file`, `manifest_sha256`, `validation_status`, `validation_notes[]` |
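
Because `invention`, `examination`, and `prior_art` are nested structs,
many analyses start by flattening to one record per citation. A minimal
sketch of that walk; the sample row below is hand-made, not real
dataset content:

```python
# Flatten nested Layer A rows into one record per cited reference.
# Field access follows the schema table above.

def flatten_citations(rows):
    """Yield one flat dict per entry in each row's prior_art list."""
    for row in rows:
        for ref in row["prior_art"]:
            yield {
                "case_id": row["case_id"],
                "jurisdiction": row["jurisdiction"],
                "oa_type": row["examination"]["oa_type"],
                "ref_id": ref["ref_id"],
                "severity": ref["severity"],
            }

# Hand-made stand-in for a dataset row, for illustration only.
sample = {
    "case_id": "US-13004847-0",
    "jurisdiction": "US",
    "examination": {"oa_type": "rejection"},
    "prior_art": [
        {"ref_id": "US9123456B2", "severity": "obviousness"},
        {"ref_id": "US6825941", "severity": "novelty_destroying"},
    ],
}
flat = list(flatten_citations([sample]))
print(flat[0]["ref_id"], flat[0]["severity"])  # US9123456B2 obviousness
```

The same generator works unchanged on rows loaded via
`load_dataset(...)`, since HF Datasets yields plain dicts.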

### Severity enum (`prior_art[].severity`)

A 3-value severity enum that downstream consumers can join across
jurisdictions. Each jurisdiction has its own source signal:

| Severity | US (OARD signal) | EP (WIPO ST.14 search-report category) |
|---|---|---|
| `novelty_destroying` | `rejection_102=1` AND `citation_in_oa=1` | `X` or `E` (incl. multi-char `XY`, `XYI`) |
| `obviousness` | `rejection_103=1` AND `citation_in_oa=1` | `Y` |
| `background` | otherwise (PTO-892, PTO-1449 IDS) | `A`, `P`, `D`, `T`, `L`, `O`, `I` |

The EP search-report category sometimes concatenates multiple
codes (e.g. `"XY"` means the citation is both novelty-relevant and
obviousness-relevant). The extractor preserves the raw category
string and maps its most severe component to `severity`.
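
The most-severe-component rule can be sketched as follows; this
mirrors the mapping table above but is an illustration, not the
pipeline's actual extractor code:

```python
# Map a raw ST.14 category string (possibly multi-character, e.g.
# "XY") to the 3-value severity enum, keeping the most severe band.
SEVERITY_BY_CODE = {
    "X": "novelty_destroying",
    "E": "novelty_destroying",
    "Y": "obviousness",
    # A, P, D, T, L, O, I and anything unknown fall through to background.
}
RANK = {"background": 0, "obviousness": 1, "novelty_destroying": 2}

def st14_to_severity(category: str) -> str:
    bands = [SEVERITY_BY_CODE.get(code, "background") for code in category]
    return max(bands, key=RANK.__getitem__, default="background")

print(st14_to_severity("XY"))  # novelty_destroying
print(st14_to_severity("Y"))   # obviousness
print(st14_to_severity("A"))   # background
```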

### `prior_art[].categories` (v1.2+)

The single-string `severity` collapses multi-character ST.14 codes
to one band (e.g. `XY` → `novelty_destroying`, dropping the
inventive-step signal). To preserve the full set, v1.2 adds a
`categories: list<string>` field with each code as its own
alphabetically-sorted entry:

| Source category | `severity` | `categories` |
|---|---|---|
| `X` | `novelty_destroying` | `["X"]` |
| `Y` | `obviousness` | `["Y"]` |
| `XY` | `novelty_destroying` | `["X", "Y"]` |
| `XYI` | `novelty_destroying` | `["I", "X", "Y"]` |
| `A` | `background` | `["A"]` |

Legacy v1.0 / v1.1 rows have `categories = []` (empty).
Jurisdictions whose source data doesn't expose ST.14 codes
(US OARD uses 35 USC § sections, not ST.14) also leave the field
empty. Filter for `len(categories) > 0` to query only
ST.14-exposed rows.
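
The `categories` derivation and the suggested filter can be sketched
as below; an illustrative sketch, not the extractor itself:

```python
# Expand a raw ST.14 string into the sorted v1.2 categories list,
# and filter rows down to those with at least one ST.14-exposed
# citation.

def st14_to_categories(category: str) -> list[str]:
    """Expand "XYI" -> ["I", "X", "Y"]: one entry per code, sorted."""
    return sorted(set(category))

def has_st14(row) -> bool:
    """True if any citation carries a non-empty categories list."""
    return any(len(ref.get("categories", [])) > 0
               for ref in row["prior_art"])

print(st14_to_categories("XYI"))  # ['I', 'X', 'Y']
```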

### Cross-citation index (v1.2+)

A sibling partition `prior_art_index/<jurisdiction>/index.parquet`
aggregates the cases partition by `(ref_id, citing_jurisdiction)`
so consumers can ask "how often has document X been cited" without
walking the cases data row-by-row.

```python
from datasets import load_dataset

idx = load_dataset(
    "v13s/golden-fto-layer-a", "prior_art_index", split="train",
)
# Top-cited refs in EP search reports
top = sorted(
    [r for r in idx if r["citing_jurisdiction"] == "EP"],
    key=lambda r: r["citation_count"], reverse=True,
)[:10]
for r in top:
    print(r["ref_id"], r["citation_count"], r["citing_case_ids"])
```

Index schema:

| Field | Type | Description |
|---|---|---|
| `ref_id` | string | Cited document id (e.g. `US10721059`) |
| `citing_jurisdiction` | string | Where the citing examiner sits (`EP`, `US`) |
| `citation_count` | int32 | Total times this ref appears in `prior_art[]` across cases |
| `citing_case_ids` | list<string> | Sorted set of `case_id` values that cite this ref |
| `severity_distribution` | struct | Count by severity band (`novelty_destroying`, `obviousness`, `background`) |
| `first_cited_date` | date | Earliest `examination.oa_date` across citing cases |
| `last_cited_date` | date | Latest `examination.oa_date` across citing cases |

Per-jurisdiction subdirs (`prior_art_index/EP/`, `prior_art_index/US/`)
keep the index sharded by which extractor produced it. For a
cross-jurisdiction view, union the partitions yourself or load the
`prior_art_index` config above, whose glob includes both.
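
Unioning the partitions can be as simple as summing `citation_count`
per `ref_id`. A sketch over plain dicts shaped like the index schema;
the two sample rows are illustrative, not real index content:

```python
from collections import Counter

# Collapse per-jurisdiction index rows into one cross-jurisdiction
# citation count per ref_id.

def cross_jurisdiction_counts(index_rows):
    totals = Counter()
    for r in index_rows:
        totals[r["ref_id"]] += r["citation_count"]
    return totals

rows = [
    {"ref_id": "US10721059", "citing_jurisdiction": "EP", "citation_count": 3},
    {"ref_id": "US10721059", "citing_jurisdiction": "US", "citation_count": 2},
]
print(cross_jurisdiction_counts(rows)["US10721059"])  # 5
```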

### Rejection reason codes

Canonical snake_case codes, consistent across jurisdictions:

| Code | USC § | Description |
|---|---|---|
| `anticipation_102` | 35 USC §102 | Lack of novelty (single-reference) |
| `obviousness_103` | 35 USC §103 | Obviousness (multi-reference combination) |
| `subject_matter_101` | 35 USC §101 | Patent-eligible subject matter (Alice/Mayo/Bilski) |
| `indefiniteness_112` | 35 USC §112 | Written description / definiteness |
| `double_patenting` | non-statutory | Same invention claimed twice |

Future EP/JP releases add their statute-equivalent codes
(`novelty_epc_54`, `inventive_step_epc_56`, `novelty_jp_29_1`,
etc.) without breaking the schema.
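
Because the codes are plain strings in
`examination.rejection_reasons[]`, cross-jurisdiction tallies need no
special handling. A sketch with made-up rows:

```python
from collections import Counter

# Tally rejection-reason codes across rows; real rows carry the same
# list-of-codes shape as these made-up samples.

def reason_counts(rows):
    counts = Counter()
    for row in rows:
        counts.update(row["examination"]["rejection_reasons"])
    return counts

rows = [
    {"examination": {"rejection_reasons": ["obviousness_103"]}},
    {"examination": {"rejection_reasons": ["anticipation_102",
                                           "obviousness_103"]}},
]
print(reason_counts(rows)["obviousness_103"])  # 2
```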

## How was this built?

### US slice (5 000 rows)

1. **OARD bulk download** (the 4 M-row USPTO Office Action
   Research Dataset, frozen at the 2017 release): manually
   browser-downloaded from
   [research.uspto.gov](https://www.uspto.gov/ip-policy/economic-research/research-datasets/office-action-research-dataset-patents),
   mirrored to [v13s/oard-2017-mirror](https://huggingface.co/datasets/v13s/oard-2017-mirror)
   for repeatable fetches
2. **office_actions.csv scan** for the first 5 000 unique
   application IDs in chronological order
3. **citations.csv filter pass** to keep only those 5 000 apps'
   citation rows (~50 MB filtered from a 4 M-row, 5 GB unfiltered
   source)
4. **USPTO ODP API** enrichment per app (60 RPM rate limit; ~85
   minutes wall-clock for the full pass)
5. **Triple construction** — the OARD's pre-classified
   `rejection_*` boolean columns + the citation rows + the ODP
   metadata combine into a `LayerATriple` per OA event
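
Steps 2–3 amount to a streaming scan and filter over the two CSVs. A
stdlib sketch; the `app_id` column name is an assumption here, so
check the OARD codebook for the real headers:

```python
import csv

def first_n_app_ids(oa_csv_path, n=5000):
    """First n unique app ids, in file order (assumed chronological)."""
    seen, order = set(), []
    with open(oa_csv_path, newline="") as f:
        for row in csv.DictReader(f):
            app = row["app_id"]  # assumed column name
            if app not in seen:
                seen.add(app)
                order.append(app)
                if len(order) == n:
                    break
    return order

def filter_citations(citations_csv_path, keep_ids, out_path):
    """Stream citations.csv, keeping only rows for the kept app ids."""
    keep = set(keep_ids)
    with open(citations_csv_path, newline="") as src, \
         open(out_path, "w", newline="") as dst:
        reader = csv.DictReader(src)
        writer = csv.DictWriter(dst, fieldnames=reader.fieldnames)
        writer.writeheader()
        for row in reader:
            if row["app_id"] in keep:
                writer.writerow(row)
```

Streaming both passes keeps memory flat even against the 5 GB
unfiltered citations file.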

### EP slice (11 rows, new in v1.0.2)

1. **EP publications list** curated from Espacenet IPC searches
   (`G06F`, `H04L`, `A61K`), filing years 2014–2018
2. **OPS published-data full-cycle** for biblio + claims
   (epodoc/docdb format, kind-suffix fallback for older
   publications)
3. **OPS Register service** (`/rest-services/register/publication/epodoc/{pub}/biblio`)
   for search-report citations — these carry the WIPO ST.14
   category codes, mapped to `severity` via the table above
4. **Two-endpoint merge per publication**: full-cycle gives the
   bibliographic context; the Register service gives the
   `prior_art[]` list. Filtered to `@cited-phase == "search"` to
   keep the high-signal X/Y/A subset
5. **Triple construction** — same `LayerATriple` shape as the US
   slice; `oa_type = "search_report"`, `outcome.final_disposition
   = "pending"` (EP grants land in a separate legal-status
   endpoint, planned for v1.1)
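
The step-4 merge can be sketched as below. The flat dict shapes stand
in for the OPS XML payloads and are not the service's actual response
format:

```python
# Merge one publication's full-cycle biblio with its Register
# citation list, keeping only search-phase citations (the X/Y/A
# subset described above).

def build_ep_triple(biblio, citations):
    prior_art = [
        {"ref_id": c["ref_id"], "category": c["category"]}
        for c in citations
        if c.get("cited_phase") == "search"
    ]
    return {
        "invention": {"title": biblio["title"]},
        "examination": {"oa_type": "search_report"},
        "prior_art": prior_art,
        "outcome": {"final_disposition": "pending"},
    }

triple = build_ep_triple(
    {"title": "Data processing system"},
    [{"ref_id": "US6825941", "category": "X", "cited_phase": "search"},
     {"ref_id": "EP1234567", "category": "A", "cited_phase": "examination"}],
)
print(len(triple["prior_art"]))  # 1
```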

### Common steps (both slices)

6. **Validation**: every row passes a linking validator that
   checks temporal sanity (cited prior art filed before the
   invention), severity coherence (novelty-destroying citations
   on a granted+unamended application would be an inconsistency),
   and schema round-trip
7. **Parquet emit** partitioned by `jurisdiction × filing_year`,
   with a SHA-256 manifest for byte-level reproducibility
8. **HuggingFace push** under [v13s/golden-fto-layer-a](https://huggingface.co/datasets/v13s/golden-fto-layer-a)
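
The step-6 checks can be sketched as follows. The `filing_date`
fields used here are simplifying assumptions (the schema keeps
reference metadata in `prior_art[].metadata`); the real validator
lives in the pipeline repo:

```python
from datetime import date

# Temporal sanity + severity coherence, per step 6 above.

def validation_notes(row):
    notes = []
    filed = row["invention"]["filing_date"]
    for ref in row["prior_art"]:
        if ref["filing_date"] >= filed:  # cited art must predate the invention
            notes.append(f"temporal: {ref['ref_id']} not filed before invention")
    out = row["outcome"]
    if (out["final_disposition"] == "granted"
            and not out["amendments_made"]
            and any(r["severity"] == "novelty_destroying"
                    for r in row["prior_art"])):
        notes.append("coherence: novelty-destroying art on an unamended grant")
    return notes

# Illustrative row with one temporal violation.
row = {
    "invention": {"filing_date": date(2013, 1, 4)},
    "prior_art": [{"ref_id": "US9123456B2", "severity": "obviousness",
                   "filing_date": date(2015, 6, 1)}],
    "outcome": {"final_disposition": "pending", "amendments_made": False},
}
print(validation_notes(row))
```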

The full pipeline source lives in the public repo at
[parallax/data-pipeline](https://github.com/masterleopold/parallax/tree/main/data-pipeline).
The release runner is
[`bin/local-extract-v1.sh`](https://github.com/masterleopold/parallax/blob/main/bin/local-extract-v1.sh).
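
To check downloaded shards against the SHA-256 manifest, something
like the following works. The manifest is assumed here to be a flat
`{relative_path: sha256_hex}` map, so adjust to `MANIFEST.json`'s
actual layout:

```python
import hashlib
import json
from pathlib import Path

# Recompute each shard's SHA-256 and compare against the manifest;
# returns the list of paths that fail verification.

def verify_manifest(root):
    root = Path(root)
    manifest = json.loads((root / "MANIFEST.json").read_text())
    bad = []
    for rel_path, expected in manifest.items():
        digest = hashlib.sha256((root / rel_path).read_bytes()).hexdigest()
        if digest != expected:
            bad.append(rel_path)
    return bad
```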

## Known limitations

- **Sample size**: 5 011 rows are a first cut. The full OARD has
  4 M+ Office Actions; ramp-up to 50 K+ US rows in v1.1+ is
  planned. The EP slice is intentionally small (11 publications)
  to validate the OPS Register integration end-to-end before
  scaling.
- **Sparse claim text**: The ODP search endpoint returns
  bibliographic metadata (title, applicant, IPC) but not full
  claim text. Some rows have `invention.claims = []` or
  placeholder markers; full claim extraction needs a separate
  ODP call (planned for v1.1).
- **JP not yet shipped**: JP slice gated on INPIT bulk
  credential approval; see
  [docs/07-partnerships/inpit-bulk-data-application.md](https://github.com/masterleopold/parallax/blob/main/docs/07-partnerships/inpit-bulk-data-application.md).
- **EP claim ranges**: The Register service embeds claim ranges
  in the citation's bibliographic text annotation (`[Y] 5,12`).
  v1.0.3+ extracts these into `prior_art[].claims_blocked`; legacy
  v1.0.2 rows leave the list empty.
- **Mixed schema_version partition**: rows from v1.0 / v1.1 cron
  cycles carry `schema_version="1.0"` and an empty `categories[]`,
  while v1.2+ rows carry `schema_version="1.2"` and populated
  `categories[]` (when the source supports ST.14). Filter on
  `schema_version` if you need a single-version partition.
- **EP outcome field is conservative**: Without joining the OPS
  legal-status endpoint, `outcome.final_disposition` defaults to
  `pending` for EP rows. v1.1 will resolve to
  `granted` / `rejected` / `withdrawn`.
- **US outcome field is conservative**: Without joining USPTO
  PEDS (Patent Examination Data System), the same defaulting
  applies on the US slice.
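
For the EP claim-range caveat above, the extraction v1.0.3+ performs
can be approximated as below. The annotation grammar is inferred from
the single `[Y] 5,12` example, so treat this as a guess; range
notation (`1-3`) is an additional assumption:

```python
import re

# Pull claim numbers out of a Register citation annotation such as
# "[Y] 5,12", expanding assumed dash ranges like "1-3".

def parse_claims_blocked(annotation: str) -> list[int]:
    claims = []
    for part in re.findall(r"\d+(?:-\d+)?", annotation.split("]")[-1]):
        if "-" in part:
            lo, hi = map(int, part.split("-"))
            claims.extend(range(lo, hi + 1))
        else:
            claims.append(int(part))
    return claims

print(parse_claims_blocked("[Y] 5,12"))   # [5, 12]
print(parse_claims_blocked("[X] 1-3,7"))  # [1, 2, 3, 7]
```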

## Versioning

Semantic versioning per
[golden-dataset-plan.md](https://github.com/masterleopold/parallax/blob/main/docs/06-evaluation/golden-dataset-plan.md):

- **MAJOR** — schema-incompatible (field removed, type changed)
- **MINOR** — new fields, new jurisdictions, ≥10 % data growth
- **PATCH** — parser bugfix, individual case re-validation

The HuggingFace dataset repo's git history is the canonical
release ledger. To pin a specific version in your code:

```python
from datasets import load_dataset

ds = load_dataset("v13s/golden-fto-layer-a", revision="v1.1.20260513")
```

## Citation

If you use this dataset in academic work, please cite:

```bibtex
@dataset{vox_layer_a_2026,
  author    = {Hara, Yoichiro and {Vox}},
  title     = {Layer A — Office Action Triples for
               Freedom-to-Operate Evaluation},
  year      = {2026},
  publisher = {Hugging Face},
  version   = {1.1.20260513},
  url       = {https://huggingface.co/datasets/v13s/golden-fto-layer-a},
  note      = {Curated under CC-BY-4.0; underlying patent
               data in the public domain}
}
```

## License

- **Curation layer (this dataset)**: [CC-BY-4.0](https://creativecommons.org/licenses/by/4.0/)
  — the schema, severity tagging, and triple construction are
  © Vox 2026 and may be used / redistributed with attribution.
- **Underlying patent documents**: public domain (USPTO).
- **OARD source data**: public domain (USPTO Office of the Chief
  Economist).

## Contact

- Curator: **Yoichiro Hara** (`taisei@vox.delivery`)
- Org: [Vox](https://huggingface.co/v13s) (HF: `v13s`)
- Source repo: <https://github.com/masterleopold/parallax>
- Issues: <https://github.com/masterleopold/parallax/issues>
- Product surface: <https://parallax.3mergen.com>

For takedown requests on specific patent applications, file an
issue or email the curator. Public-domain patent data is
included in good faith; the curation layer can be redacted on
request.