Check if YouTube Commons contains Danish data

#58
by KennethEnevoldsen - opened
Danish Foundation Models org

YouTube Commons can be found here

You might be interested in this dataset: https://huggingface.co/datasets/Rijgersberg/YouTube-Commons-descriptions

It contains the video titles and descriptions, as well as language detection performed on them.
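As a sketch, filtering that dataset for Danish rows with the `datasets` library might look like the following. The `"train"` split name and the `"language"` column with an ISO code like `"da"` are assumptions; check the dataset card for the actual schema.

```python
def is_danish(row: dict) -> bool:
    # Keep rows whose detected language is Danish.
    # The "language" column name and "da" code are assumptions about the schema.
    return row.get("language") == "da"

def load_danish_descriptions():
    # Lazy import so the predicate above stays usable without the dependency.
    from datasets import load_dataset

    # Downloads the dataset from the Hugging Face Hub; the split name is assumed.
    ds = load_dataset("Rijgersberg/YouTube-Commons-descriptions", split="train")
    return ds.filter(is_danish)
```

Usage would then be `danish = load_danish_descriptions()` followed by inspecting the titles and descriptions that remain.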

Danish Foundation Models org

Ah thanks @Rijgersberg, this is great! I see that you work a lot on Dutch. I have considered expanding this project (dynaword) to a larger set of European languages once we have settled on the final structure for this one. If this is something you would be interested in, do let me know :)

I think a single collection of open datasets would be valuable for any language!

The GPT-NL project will also make some newly collected open datasets public, but I don't know in what format or in what venue.

Danish Foundation Models org

Hi @Rijgersberg, I never responded here. We could definitely do something like a Dutch gigaword. I started doing Swedish and Norwegian too, planning to go for a collection of Scandinavian languages, but with both German Commons and Common Pile out, I think one could reasonably construct dynawords for the Germanic languages.

I'm looking into extracting the Danish transcripts/metadata currently.

I did a search for Danish transcriptions with GlotLID, which returned only around 180 results. Unfortunately, a large portion of these seem to be false positives (a lot of German), and of those that are correctly labeled, the quality seems quite poor (many appear to be machine translations). It looks like there might just not be that much quality Danish data, unless GlotLID is horribly wrong on this.
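For reference, a search like this can be sketched with GlotLID's fastText checkpoint. This is a minimal example, assuming the `cis-lmu/glotlid` repository on the Hugging Face Hub with a `model.bin` file; it is not the exact pipeline used above.

```python
def strip_label(raw: str) -> str:
    # fastText returns labels like "__label__dan_Latn"; strip the prefix.
    prefix = "__label__"
    return raw[len(prefix):] if raw.startswith(prefix) else raw

def detect_language(model, text: str):
    # fastText's predict() rejects newlines, so flatten the text first.
    labels, scores = model.predict(text.replace("\n", " "))
    return strip_label(labels[0]), float(scores[0])

def load_glotlid():
    # Lazy imports: only needed when actually downloading and loading the model.
    import fasttext
    from huggingface_hub import hf_hub_download

    # Assumed repo/filename for the GlotLID checkpoint; check the model card.
    path = hf_hub_download(repo_id="cis-lmu/glotlid", filename="model.bin")
    return fasttext.load_model(path)
```

With `model = load_glotlid()`, a call like `detect_language(model, "Dette er en dansk sætning.")` should return a label such as `dan_Latn` together with a confidence score; transcripts scoring as Danish would still need the kind of manual quality check described above.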

Danish Foundation Models org

Thanks! I will close this as resolved then. Thanks for looking into it!

KennethEnevoldsen changed discussion status to closed
