Datasets: Add hvadvilduhelst dataset

Contribution checklist:
- I have run the test suite using `make test` and all tests pass
- I have added/changed a dataset:
  - I have updated descriptive statistics using `make update-descriptive-statistics`
  - I have bumped the version using `make bump-version`
  - If I have added a `create.py` script I have added the script dependencies required to run that script
- I have updated the `CHANGELOG.md` where appropriate
Notes:
- Added the new `hvadvilduhelst` dataset subset.
- `make test` passed locally with 399 passed, 1 skipped.
- The skipped test is the repository's existing cross-dataset duplicate check, which is explicitly marked with `@pytest.mark.skip("This tests takes too long to run")`.
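For context, a long-running check like the one described above is typically excluded at collection time with a skip marker. A minimal sketch (the test name and body here are illustrative, not the repository's actual code; only the reason string is quoted from the note above):

```python
import pytest

# Illustrative only: a slow cross-dataset check skipped at collection
# time. The reason string mirrors the marker quoted above.
@pytest.mark.skip(reason="This tests takes too long to run")
def test_cross_dataset_duplicates():
    # An expensive pairwise duplicate comparison would go here.
    ...
```

Pytest reports such a test as `skipped` rather than failing, which is why a `1 skipped` result is expected and harmless.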
Skipped tests are quite normal; feel free to simply ignore those.
I would specify the dynaword version used to ensure it is reproducible, e.g. see this create.py.
The short description seems to be too long (it will ruin the formatting of the main table in the readme).
"Upstream" is a very technical word. I would rephrase the readme to be more friendly to users with little data-processing experience. Also, legally, the license that others give is not relevant (I could easily publish Harry Potter under an open license; that doesn't mean it is open). I would probably focus more on the fact that the dataset was released by Kasper, who stated in the acknowledgment that the data was provided by hyg.dk.
In general, the create.py is the technical documentation. We use the readme.md for non-technical documentation.
What do you mean by "contains one exact duplicate prompt row"? What is a prompt row?
Don't add a citation section if the original readme doesn't have one.
Sadly, reviews don't allow for suggested edits, but this is a good first round
- I would change:
The source release by Kasper Junge on Hugging Face is marked as CC BY 4.0.
to
This subset contains Danish "Hvad vil du helst?" questions released by Kasper Junge. The dataset was provided by hyg.dk and is released under a CC BY 4.0 license.
and also change the top description to:
This subset contains Danish "Hvad vil du helst?" (would you rather) questions derived from hyg.dk. The questions cover a wide range of playful and social topics, including food, sport, school, dating, geography, and everyday preferences. Each entry in Dynaword contains one question with two possible answers. It contains no correct solutions.
The dataset was published by Kasper Junge on Hugging Face and GitHub.
The idea is to focus on what the data is first.
Delete "Processing Notes"; it states nothing that is not already in the tests (you are basically just stating that the quality standards are met).
In the shorthand summary, add a link to hyg.dk.
You could potentially add a limitations section stating that the samples are rather short, which might make them hard to utilize.
You state "Run from the repository root:", but it should not be needed to do so.
I don't believe the `uv run` command will run; I think you need Git LFS from the linked create.py, with the settings that you currently have at the top.
The created date should be a plausible range; a single date is probably incorrect. I would rather use 2015-01-01 to 2025-01-01.
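As a hedged sketch of the suggestion, the created field could be rendered as an ISO date range rather than a single date. The field format and helper name below are assumptions for illustration, not the dataset card's actual schema:

```python
from datetime import date

# Assumed "start, end" ISO format, per the review suggestion above.
CREATED_START = date(2015, 1, 1)
CREATED_END = date(2025, 1, 1)

def created_field() -> str:
    """Render the plausible created-date range for the dataset card."""
    return f"{CREATED_START.isoformat()}, {CREATED_END.isoformat()}"

print(created_field())  # 2015-01-01, 2025-01-01
```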
Sorry, I am working on the two codebases at the same time.
OK, I think I have overworked the skills.md; I will make the changes manually.