A collection of Terms of Service or Privacy Policy datasets
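For quick reference, the subset (config) names used throughout this card, all loaded from the single Hub dataset `chenghao/tos_pp_dataset`. The list below is compiled from the sections that follow, not queried from the Hub:

```python
# Subset (config) names documented in this card.
SUBSETS = [
    "cuad",
    "100_tos",
    "multilingual_unfair_clause",
    "memnet_tos",
    "142_tos",
    "10_tos",
    "privacy_glue/policy_qa",
    "privacy_glue/policy_ie",
    "privacy_glue/policy_detection",
    "privacy_glue/polisis",
    "privacy_glue/privacy_qa",
    "privacy_glue/piextract",
]

# Each subset is loaded the same way, e.g.:
#   datasets.load_dataset("chenghao/tos_pp_dataset", "cuad")
print(len(SUBSETS))  # 12
```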
Annotated datasets
CUAD
Specifically, the 28 service agreements from CUAD, which are licensed under CC BY 4.0 (subset: cuad).
Code
```python
import datasets
from tos_datasets.proto import DocumentQA

ds = datasets.load_dataset("chenghao/tos_pp_dataset", "cuad")
print(DocumentQA.model_validate_json(ds["document"][0]))
```
100 ToS
From Annotated 100 ToS, CC BY 4.0 (subset: 100_tos).
Code
```python
import datasets
from tos_datasets.proto import DocumentEUConsumerLawAnnotation

ds = datasets.load_dataset("chenghao/tos_pp_dataset", "100_tos")
print(DocumentEUConsumerLawAnnotation.model_validate_json(ds["document"][0]))
```
Multilingual Unfair Clause
From CLAUDETTE/Multilingual Unfair Clause, CC BY 4.0 (subset: multilingual_unfair_clause).
It was built from CLAUDETTE's 25 Terms of Service, each in English, Italian, German, and Polish (100 documents in total), from A Corpus for Multilingual Analysis of Online Terms of Service.
Code
```python
import datasets
from tos_datasets.proto import DocumentClassification

ds = datasets.load_dataset("chenghao/tos_pp_dataset", "multilingual_unfair_clause")
print(DocumentClassification.model_validate_json(ds["document"][0]))
```
Memnet ToS
From 100 Terms of Service in English from Detecting and explaining unfairness in consumer contracts through memory networks, MIT (subset: memnet_tos).
Code
```python
import datasets
from tos_datasets.proto import DocumentClassification

ds = datasets.load_dataset("chenghao/tos_pp_dataset", "memnet_tos")
print(DocumentClassification.model_validate_json(ds["document"][0]))
```
142 ToS
From 142 Terms of Service in English, divided by market sector, from Assessing the Cross-Market Generalization Capability of the CLAUDETTE System, Unknown (subset: 142_tos). This should also include the 50 Terms of Service in English from "CLAUDETTE: an Automated Detector of Potentially Unfair Clauses in Online Terms of Service".
Code
```python
import datasets
from tos_datasets.proto import DocumentClassification

ds = datasets.load_dataset("chenghao/tos_pp_dataset", "142_tos")
print(DocumentClassification.model_validate_json(ds["document"][0]))
```
10 ToS/PP
From 5 Terms of Service and 5 Privacy Policies in English and German (10 documents in total) from Cross-lingual Annotation Projection in Legal Texts, GNU GPL 3.0 (subset: 10_tos).
Code
```python
import datasets
from tos_datasets.proto import DocumentClassification

ds = datasets.load_dataset("chenghao/tos_pp_dataset", "10_tos")
print(DocumentClassification.model_validate_json(ds["document"][0]))
```
PolicyQA
From PolicyQA, MIT (subset: privacy_glue/policy_qa).
Note: this dataset appears to have annotation issues: unanswerable questions are still given answer spans in SQuAD-v1 style instead of being marked unanswerable as in the SQuAD-v2 format.
Code
```python
import datasets
from tos_datasets.proto import DocumentQA

ds = datasets.load_dataset("chenghao/tos_pp_dataset", "privacy_glue/policy_qa")
print(DocumentQA.model_validate_json(ds["train"]["document"][0]))
```
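The v1/v2 difference noted above can be checked mechanically: SQuAD-v2 marks an unanswerable question with an empty `answers` list plus an `is_impossible` flag, while v1-style records always carry at least one answer span. A minimal sketch, assuming plain SQuAD-style dicts (the field names `question`, `answers`, `is_impossible` are the standard SQuAD ones, not the `DocumentQA` schema):

```python
def looks_like_v1_only(qa: dict) -> bool:
    """True if a QA record cannot represent 'unanswerable':
    it has answer spans but no is_impossible flag (SQuAD-v1 shape)."""
    return "is_impossible" not in qa and len(qa.get("answers", [])) > 0

# Hypothetical example records in the two shapes.
v1_record = {"question": "What data is shared?",
             "answers": [{"text": "usage data", "answer_start": 10}]}
v2_record = {"question": "Is data sold?", "answers": [], "is_impossible": True}

print(looks_like_v1_only(v1_record))  # True
print(looks_like_v1_only(v2_record))  # False
```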
PolicyIE
From PolicyIE, MIT (subset: privacy_glue/policy_ie).
Code
```python
import datasets
from tos_datasets.proto import DocumentSequenceClassification, DocumentEvent

ds = datasets.load_dataset("chenghao/tos_pp_dataset", "privacy_glue/policy_ie")
print(DocumentSequenceClassification.model_validate_json(ds["train"]["type_i"][0]))
print(DocumentEvent.model_validate_json(ds["train"]["type_ii"][0]))
```
Policy Detection
From [policy-detection-data](https://github.com/infsys-lab/policy-detection-data), GPL 3.0 (subset: privacy_glue/policy_detection).
Code
```python
import datasets
from tos_datasets.proto import DocumentClassification

ds = datasets.load_dataset("chenghao/tos_pp_dataset", "privacy_glue/policy_detection")
print(DocumentClassification.model_validate_json(ds["train"]["document"][0]))
```
Polisis
From Polisis, Unknown (subset: privacy_glue/polisis).
Code
```python
import datasets
from tos_datasets.proto import DocumentClassification

ds = datasets.load_dataset("chenghao/tos_pp_dataset", "privacy_glue/polisis")
print(DocumentClassification.model_validate_json(ds["test"]["document"][0]))
```
PrivacyQA
From PrivacyQA, MIT (subset: privacy_glue/privacy_qa).
Code
```python
import datasets
from tos_datasets.proto import DocumentClassification

ds = datasets.load_dataset("chenghao/tos_pp_dataset", "privacy_glue/privacy_qa")
print(DocumentClassification.model_validate_json(ds["test"]["document"][0]))
```
Piextract
From Piextract, Unknown (subset: privacy_glue/piextract).
Code
```python
import datasets
from tos_datasets.proto import DocumentSequenceClassification

ds = datasets.load_dataset("chenghao/tos_pp_dataset", "privacy_glue/piextract")
print(DocumentSequenceClassification.model_validate_json(ds["train"]["document"][0]))
```
WIP
- Annotated Italian TOS sentences, Apache 2.0. Only sentence-level annotations; missing original full text.
- Huggingface, MIT. Only sentence-level annotations; missing original full text.
- ToSDR API, Unknown.