| user (string) | created_at (timestamp[us]) | body (string) | issue_number (int64) | __index_level_0__ (int64) |
|---|---|---|---|---|
YooSungHyun | 2025-01-15T04:54:55 | https://github.com/huggingface/trl/releases/tag/v0.7.5 | 2,569 | 0 |
HuggingFaceDocBuilderDev | 2025-01-15T11:15:17 | The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/trl/pr_2568). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update. | 2,568 | 1 |
mnoukhov | 2025-01-13T20:46:28 | Currently, changing RLOO/PPO to use `BaseOnlineTrainer` would remove the functionality of
- generating K minibatches from your model and then doing `K` updates on the generated data
- doing multiple updates on the same completions (i.e. `ppo_epochs`)
This is not present in the OnlineDPO code but it is a standard ... | 2,567 | 2 |
qgallouedec | 2025-01-14T09:16:19 | Hi @mnoukhov,
### Base Online Trainer
I understand the motivation behind this PR. However, from a philosophical standpoint, I believe that, except in rare cases, duplicated code should be preferred over inheritance in this context.
We've attempted to implement a base trainer several times in the past—whether... | 2,567 | 3 |
HuggingFaceDocBuilderDev | 2025-01-13T14:55:53 | The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/trl/pr_2566). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update. | 2,566 | 4 |
HuggingFaceDocBuilderDev | 2025-01-13T13:04:08 | The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/trl/pr_2565). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update. | 2,565 | 5 |
qgallouedec | 2025-01-15T22:16:23 | From the paper:
```math
\mathcal{J}_{\text{GRPO}}(\theta) =\frac{1}{G} \sum_{i=1}^G \frac{1}{|o_i|}\sum_{t=1}^{|o_i|}\left[\min \left(\frac{\pi_\theta(o_{i,t} | q, o_{i,< t})}{\pi_{\theta_{\text{old}}}(o_{i,t} | q, o_{i,< t})} \hat{A}_{i,t}, \text{clip}\left(\frac{\pi_\theta(o_{i,t} | q, o_{i,< t})}{\pi_{\theta_{\t... | 2,565 | 6 |
liuchaohu | 2025-01-13T09:50:13 | I know the reason why `pixel_values` disappears.
We should run the code the param "`--remove_unused_columns false`", otherwise `pixel_values` will be eliminated. | 2,563 | 7 |
HuggingFaceDocBuilderDev | 2025-01-11T16:44:58 | The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/trl/pr_2561). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update. | 2,561 | 8 |
qgallouedec | 2025-01-11T22:10:32 | CI fails, I think we can ignore, see https://github.com/huggingface/trl/pull/2558#issuecomment-2585461179 | 2,561 | 9 |
qgallouedec | 2025-01-12T11:30:23 | Good point, done in 43089fa | 2,561 | 10 |
oliveiraeliel | 2025-01-11T15:17:23 | Please, can someone give me some feedback? It is my first PR to `trl` | 2,560 | 11 |
qgallouedec | 2025-01-11T22:13:35 | Nice! just make sure to run `make precommit` to apply the right style | 2,560 | 12 |
oliveiraeliel | 2025-01-12T01:44:17 | > Nice! just make sure to run `make precommit` to apply the right style
I ran the `make precommit` and `pytest test/test_ppo_trainer.py`, everything looks ok. | 2,560 | 13 |
HuggingFaceDocBuilderDev | 2025-01-10T18:35:01 | The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/trl/pr_2558). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update. | 2,558 | 14 |
qgallouedec | 2025-01-11T00:11:01 | we get different results with vllm. probably linked to sampling param. investigating | 2,558 | 15 |
qgallouedec | 2025-01-11T22:08:57 | CI fails because in the latest transformers version release yesterday, transformers uses a python 3.10+ syntax (`timeout: float | None = None`). I'm not sure why it fails only for the cli test, but I think we can safely ignore it. | 2,558 | 16 |
qgallouedec | 2025-01-12T14:34:47 | Surprisingly, the precision of the generator model seems to have a pretty high impact on the results:
<img width="1272" alt="Screenshot 2025-01-12 at 15 31 31" src="https://github.com/user-attachments/assets/21a784e6-d1fb-48c1-9f1a-780e8863e0c7" />
When you keep the default precision (bfloat16), the results seem ... | 2,558 | 17 |
konrad-gerlach | 2025-01-10T16:02:23 | I would be very grateful for a review by:
@lvwerra
@vwxyzjn
@younesbelkada
@qgallouedec
or any others, that feel up to the task. | 2,556 | 18 |
konrad-gerlach | 2025-01-10T21:56:55 | I was unable to execute the pre-commit hook, so I manually ran the linter. | 2,556 | 19 |
qgallouedec | 2025-01-12T15:48:37 | Thanks for the PR!
Let's see what's the CI outputs. | 2,556 | 20 |
HuggingFaceDocBuilderDev | 2025-01-12T15:52:11 | The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/trl/pr_2556). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update. | 2,556 | 21 |
konrad-gerlach | 2025-01-12T18:46:43 | Just to be sure, as I'm unfamiliar with their implementation: The trl Trainers like PPO should not try to back propagate through the generated tokens, right? | 2,556 | 22 |
konrad-gerlach | 2025-01-12T19:59:57 | The CI failing for Python 3.9 seems unrelated to this PR. | 2,556 | 23 |
qgallouedec | 2025-01-12T20:49:07 | > The trl Trainers like PPO should not try to back propagate through the generated tokens, right?
Yes that's correct. The backprop is done on the output of a forward pass | 2,556 | 24 |
konrad-gerlach | 2025-01-12T21:21:55 | @qgallouedec Could you run the precommit to fix the linting issues? I haven't gotten it to work. | 2,556 | 25 |
konrad-gerlach | 2025-01-15T22:59:24 | I'm still working on adding some more tests and cleaning up the code a bit. | 2,556 | 26 |
Yukino256 | 2025-01-14T08:42:14 | same issue, and i tried the accelerate==0.34.2, ppo runs well. | 2,555 | 27 |
HuggingFaceDocBuilderDev | 2025-01-09T08:45:46 | The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/trl/pr_2552). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update. | 2,552 | 28 |
shirinyamani | 2025-01-09T22:23:27 | Hi, I have read the issue thread here and this PR, I agree that we can use `truncation_mode` in the `tokenize_row` function and I reviewed your addition. I wanted to also share my thoughts on it. so here the addition is if after truncating the prompt its still too long we can further truncate the response, and what if ... | 2,551 | 29 |
qgallouedec | 2025-01-10T17:00:02 | Where does this version `tokenize_row` comes from @shirinyamani? It seems quite different from its current version in main.
> if after truncating the prompt its still too long
This is anyway handled here:
https://github.com/huggingface/trl/blob/edabe0a2d8fdd790319ce8862bb8e17336b85df1/trl/trainer/dpo_trainer.p... | 2,551 | 30 |
shirinyamani | 2025-01-10T17:14:54 | This version is what I came up with based on my research. And yes, it's getting handled where you mentioned but they are in two different functions; `concatenated_forward` and `tokenize_row`. I wanted to have all the relevant stuff to truncation/prompt/response all in one function which would be `tokenize_row` for simp... | 2,551 | 31 |
anakin87 | 2025-01-10T17:22:48 | @qgallouedec feel free to review the proposed fix | 2,551 | 32 |
qgallouedec | 2025-01-10T17:29:23 | > all in one function which would be `tokenize_row` for simplicity and clarity purposes
that makes sense. Can you open another pull request for this? Wait for this one to be merged though | 2,551 | 33 |
qgallouedec | 2025-01-10T17:34:49 | sorry @anakin87 I forgot to press the submit review button a couple of days ago.
Also, @shirinyamani came with an idea that could make more sense: truncate the [prompt+completion] (either left or right) instead of just the prompt. Something like
```python
# Truncate
if self.args.max_length is not None:
if... | 2,551 | 34 |
HuggingFaceDocBuilderDev | 2025-01-08T18:24:42 | The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/trl/pr_2550). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update. | 2,550 | 35 |
qgallouedec | 2025-01-07T20:24:14 | Thanks! | 2,549 | 36 |
HuggingFaceDocBuilderDev | 2025-01-07T17:10:49 | The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/trl/pr_2548). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update. | 2,548 | 37 |
qgallouedec | 2025-01-07T17:34:53 | From their demo code, this is what I get as input for the model:
```
<|start_header_id|>user<|end_header_id|>
[CONTEXT]
<turn> user
Ellipsis
<turn> assistant
Ellipsis
<turn> user
Ellipsis
[RESPONSE A] BBBB [RESPONSE B] CCCC<|eot_id|>
```
doesn't make much sense to me:
- numerous unnecessary ... | 2,548 | 38 |
kashif | 2025-01-07T17:41:15 | you are using the instructions from here: https://huggingface.co/RLHFlow/pair-preference-model-LLaMA3-8B right? | 2,548 | 39 |
qgallouedec | 2025-01-07T17:42:24 |
> you are using the instructions from here: https://huggingface.co/RLHFlow/pair-preference-model-LLaMA3-8B right?
precisely
| 2,548 | 40 |
HuggingFaceDocBuilderDev | 2025-01-07T13:56:00 | The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/trl/pr_2547). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update. | 2,547 | 41 |
kashif | 2025-01-09T13:19:24 | thanks @okhat i can have a look and see how to fix it... just debugging currently | 2,545 | 42 |
okhat | 2025-01-11T22:10:41 | Awesome — thanks @kashif ! Looking forward to your findings! | 2,545 | 43 |
HuggingFaceDocBuilderDev | 2025-01-06T15:18:43 | The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/trl/pr_2544). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update. | 2,544 | 44 |
qgallouedec | 2025-01-06T14:17:24 | Both points sounds valid to me. For 1. I'd go for a warning in the doc (not in the function). Would you like to open a PR?
| 2,543 | 45 |
HuggingFaceDocBuilderDev | 2025-01-04T16:42:49 | The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/trl/pr_2542). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update. | 2,542 | 46 |
qgallouedec | 2025-01-04T18:42:36 | Fun feature! Do you have a demo repo? | 2,542 | 47 |
qgallouedec | 2025-01-04T18:44:19 | Have you tried with the HF api? It could be a free alternative | 2,542 | 48 |
August-murr | 2025-01-04T19:22:33 | > Fun feature! Do you have a demo repo?
Just pushed it to my [own fork](https://github.com/August-murr/trl/issues) | 2,542 | 49 |
qgallouedec | 2025-01-06T14:19:45 | I'll open a batch of issues to test it | 2,542 | 50 |
August-murr | 2025-01-06T17:59:38 | > Have you tried with the HF api? It could be a free alternative
Honestly, this was really effortless since I simply forked a mostly functional actions extension. Modifying it to work with the HF API will require much more effort. also it uses GPT-4o, there aren't many open-source models that are this accurate.
I... | 2,542 | 51 |
qgallouedec | 2025-01-06T19:05:17 | It doesn't seem like a big deal to me. Probably something like this could work
```python
from huggingface_hub import InferenceClient
client = InferenceClient(model="meta-llama/Llama-3.2-1B-Instruct", token="your_token")
content = "Find the label among these: question, issue."
completion = client.chat_completio... | 2,542 | 52 |
August-murr | 2025-01-06T19:37:15 | > It doesn't seem like a big deal to me. Probably something like this could work
>
> ```python
> from huggingface_hub import InferenceClient
>
> client = InferenceClient(model="meta-llama/Llama-3.2-1B-Instruct", token="your_token")
> content = "Find the label among these: question, issue."
> completion = clien... | 2,542 | 53 |
qgallouedec | 2025-01-06T20:24:34 | Do you know if you can access the tag description? It could help the model in its prediction | 2,542 | 54 |
August-murr | 2025-01-07T05:13:37 | > Do you know if you can access the tag description? It could help the model in its prediction
tag description as in the label description?
like:
`🚀 deepspeed` --> `Related to deepspeed`
If so, yes, it is part of the prompt. | 2,542 | 55 |
August-murr | 2025-01-07T07:14:14 | I tried using the Llama 1B model, and it "functioned," but for the TRL, I switched to the 70B model. However, I couldn't test it with the 70B because it requires a subscription.
Don't forget to add the `HF_API_KEY` to the secrets.
I got a context length error (limit of 4096 tokens) when using the Llama 1B model... | 2,542 | 56 |
August-murr | 2025-01-12T20:23:08 | > I got a context length error (limit of 4096 tokens) when using the Llama 1B model, which was weird since it supports up to 128k tokens. Since I can't use the 70B model, I'm unsure if it's a problem or not.
This can be problematic when dealing with issues that require a long context. The exact error message receive... | 2,542 | 57 |
qgallouedec | 2025-01-12T20:47:28 | A bit hacky but you can take the 15000 first strings. It should be enough for most issues:
```python
content = content[:15000]
``` | 2,542 | 58 |
August-murr | 2025-01-12T21:02:50 | > A bit hacky but you can take the 15000 first strings. It should be enough for most issues:
>
> ```python
> content = content[:15000]
> ```
more like 4000
But it works well. | 2,542 | 59 |
August-murr | 2025-01-04T06:17:04 | here's how to fix it:
`train_dataset = load_dataset('json', data_files=dataset_file_path, split="train") `
I suggest you get quick fixes for simpler issues simply by using ChatGPT or Copilot first as they can save you a lot of time!
| 2,541 | 60 |
degen2 | 2025-01-04T16:41:07 | I already tried that and still get the same KeyError. Even when loading a dataset from the hub. I also tried adding a ‚text‘ key field to the data. | 2,541 | 61 |
qgallouedec | 2025-01-04T19:09:33 | `split="train"` is the solution. If you still encounter the error please provide a MRE | 2,541 | 62 |
HuggingFaceDocBuilderDev | 2025-01-03T09:44:45 | The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/trl/pr_2540). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update. | 2,540 | 63 |
August-murr | 2025-01-09T13:10:24 | ### Question:
Does this theoretically work? I'm asking because I haven't read the PPO papers. When the PPO trainer is training, it outputs: `query`, `model_response`, and `score`, with the score being the tensor logits from the reward model. I have tested this branch and the changes, and it looks normal and function... | 2,540 | 64 |
qgallouedec | 2025-01-09T13:28:07 | Can you add a test as well? | 2,540 | 65 |
August-murr | 2025-01-09T13:40:25 | > Can you add a test as well?
I'll take that as a yes.
Yes I will add the test and the docs later, maybe a blogpost or something to show how it works if I don't run out of resources. | 2,540 | 66 |
gp1702 | 2025-01-07T21:20:17 | I tried running the demo command without qlora, and got the following error:
``
Traceback (most recent call last):
File "/home/gandharvp_google_com/dpo/example.py", line 159, in <module>
main(script_args, training_args, model_args)
File "/home/gandharvp_google_com//dpo/example.py", line 134, in main
t... | 2,539 | 67 |
faaany | 2025-01-08T09:06:36 | > Thanks a lot for the fix @faaany - overall it looks great!
>
> Would you mind confirming that the following demo command works with your PR (once activation checkpointing is removed):
>
> ```shell
> accelerate launch --config_file=examples/accelerate_configs/fsdp_qlora.yaml --num_processes=NUM_GPUS trl/scripts... | 2,539 | 68 |
faaany | 2025-01-08T09:12:16 | > I tried running the demo command without qlora, and got the following error: ` Traceback (most recent call last): File "/home/gandharvp_google_com/dpo/example.py", line 159, in <module> main(script_args, training_args, model_args) File "/home/gandharvp_google_com//dpo/example.py", line 134, in main trainer.train() Fi... | 2,539 | 69 |
qgallouedec | 2025-01-08T13:29:50 | > should this helper function live in a utils module somewhere so we don't have to copy it around to all other trainers?
I think it would make sense to have it in `trainer/utils.py` yes. | 2,539 | 70 |
gp1702 | 2025-01-08T15:08:08 | > > I tried running the demo command without qlora, and got the following error: ` Traceback (most recent call last): File "/home/gandharvp_google_com/dpo/example.py", line 159, in <module> main(script_args, training_args, model_args) File "/home/gandharvp_google_com//dpo/example.py", line 134, in main trainer.train() ... | 2,539 | 71 |
faaany | 2025-01-09T04:57:46 | > > > I tried running the demo command without qlora, and got the following error: ` Traceback (most recent call last): File "/home/gandharvp_google_com/dpo/example.py", line 159, in <module> main(script_args, training_args, model_args) File "/home/gandharvp_google_com//dpo/example.py", line 134, in main trainer.train(... | 2,539 | 72 |
faaany | 2025-01-09T05:05:58 | @qgallouedec @lewtun how about deepspeed? should we use the `prepare_deepspeed` function from `trainer/utils.py` in `dpo_trainer.py` as well?
| 2,539 | 73 |
gp1702 | 2025-01-10T06:03:21 | > > > > I tried running the demo command without qlora, and got the following error: ` Traceback (most recent call last): File "/home/gandharvp_google_com/dpo/example.py", line 159, in <module> main(script_args, training_args, model_args) File "/home/gandharvp_google_com//dpo/example.py", line 134, in main trainer.trai... | 2,539 | 74 |
faaany | 2025-01-10T09:02:39 | > > > > > I tried running the demo command without qlora, and got the following error: ` Traceback (most recent call last): File "/home/gandharvp_google_com/dpo/example.py", line 159, in <module> main(script_args, training_args, model_args) File "/home/gandharvp_google_com//dpo/example.py", line 134, in main trainer.tr... | 2,539 | 75 |
qgallouedec | 2025-01-08T09:42:39 | That's a good point!
In the past, truncation mode was only used for the prompt, and it seems that completion was only truncated for the encoder-decoder. This has been corrected with #2209. In any case, this is a good opportunity to bring this issue up again.
- Should `truncation_mode` apply to prompt truncation?
-... | 2,538 | 76 |
anakin87 | 2025-01-08T10:07:32 | Ah, I'm not an expert, unfortunately.
However, I took a cursory look, and it seems that `truncation_mode` is applied to the prompt in the following trainers: BCO, CPO, KTO, and ORPO.
In the Iterative SFT Trainer, it is implemented somewhat differently.
For consistency, it might make sense to align DPO with the o... | 2,538 | 77 |
qgallouedec | 2025-01-08T10:21:45 | Ok so we are aligned on this.
It would probably only require the following line change:
```python
if max_prompt_length is not None:
if truncation_mode == "keep_end":
prompt_input_ids = prompt_input_ids[:max_prompt_length]
elif truncation_mode == "keep_start":
prompt_input_ids = promp... | 2,538 | 78 |
anakin87 | 2025-01-08T10:23:51 | I would be happy to open a PR in the next few days! | 2,538 | 79 |
qgallouedec | 2025-01-08T09:47:31 | It's not very clear what code you're using. Because you seem to be using a command (`swift rlhf`) that I'm not familiar with and code that you provide doesn't take any arguments.
Plus, the system info that you provide aren't enough (I don't see the trl version among other). Can you copy-paste the output of `trl env`?
... | 2,536 | 80 |
maoulee | 2025-01-08T10:34:17 | > It's not very clear what code you're using. Because you seem to be using a command (`swift rlhf`) that I'm not familiar with and code that you provide doesn't take any arguments. Plus, the system info that you provide aren't enough (I don't see the trl version among other). Can you copy-paste the output of `trl env`?... | 2,536 | 81 |
qgallouedec | 2025-01-08T13:24:24 | I was able to reproduce the speed. I don't know how swift is different form trl (it's built upon trl as far as I understand). You should probably ask swift community here | 2,536 | 82 |
maoulee | 2025-01-08T14:12:57 |
> I was able to reproduce the speed. I don't know how swift is different form trl (it's built upon trl as far as I understand). You should probably ask swift community here
Thank you for your response. I have identified the key issue:
When I load the model and pass the peft_config directly into DPOTrainer, the ... | 2,536 | 83 |
qgallouedec | 2025-01-08T14:16:30 | It's probably because when you pass a peft model, it gets merged and unload (`merge_and_unload`). Those two settings should be equivalent though. It's probably an issue with the `DPOTrainer`. If you manage to fix it, feel free to open a PR | 2,536 | 84 |
qgallouedec | 2025-01-08T13:46:15 | In general we are open to any contribution yes. The easiest way is to open an issue per proposal to keep the discussion sperate and clear. But I'll answer everything here.
> Use SGLang to do rollout (generation) phase instead of vanilla HuggingFace model / DeepSpeed model. This seems to speed up generation a lot.
... | 2,535 | 85 |
fzyzcjy | 2025-01-09T00:14:22 | Hi thanks for the reply!
> The easiest way is to open an issue per proposal to keep the discussion sperate and clear.
Ok I will try to do that in the future issues.
> How much is a lot?
https://github.com/huggingface/trl/pull/1628 says "... preliminary testing shows it's ~8x faster" for vllm. I personally f... | 2,535 | 86 |
faaany | 2025-01-03T02:37:30 | @qgallouedec @lewtun @yao-matrix | 2,533 | 87 |
qgallouedec | 2025-01-07T20:28:08 | Can you confirm that these changes are enough for XPU backends? I'm not able to test it? | 2,533 | 88 |
HuggingFaceDocBuilderDev | 2025-01-07T20:32:06 | The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/trl/pr_2533). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update. | 2,533 | 89 |
faaany | 2025-01-08T02:17:49 | Thanks for the suggestions! Code Updated. @qgallouedec | 2,533 | 90 |
yiyepiaoling0715 | 2024-12-30T08:03:56 | 
| 2,532 | 91 |
qgallouedec | 2024-12-30T14:49:29 | > * [x] Any code provided is minimal, complete, and reproducible ([more on MREs](https://docs.github.com/en/get-started/writing-on-github/working-with-advanced-formatting/creating-and-highlighting-code-blocks))
can you please minimise your code? It seems like the error occurs at generation; what the input of the mo... | 2,532 | 92 |
HuggingFaceDocBuilderDev | 2025-01-08T14:05:59 | The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/trl/pr_2531). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update. | 2,531 | 93 |
dawidm | 2025-01-08T19:53:54 | Update: I've incorrectly stated that "step, which is now equivalent of episodes". Actually, steps are equivalent to iterations of the main training loop. But the fix is still valid. | 2,531 | 94 |
yiyepiaoling0715 | 2024-12-30T04:55:00 | same question,how to resolve thie? | 2,529 | 95 |
qgallouedec | 2025-01-08T14:22:46 | The solution that you're suggesting sounds good to me, feel free to open a PR | 2,529 | 96 |
HuggingFaceDocBuilderDev | 2024-12-28T13:27:00 | The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/trl/pr_2527). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update. | 2,527 | 97 |
qgallouedec | 2025-01-08T14:33:24 | Can you tell me again why it's needed? | 2,527 | 98 |
kashif | 2025-01-08T14:34:41 | i was mistaken... in orpo the nll loss is over the prompt + completion | 2,527 | 99 |
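The rows above come from the `issue_comments` config, whose columns are `user`, `created_at`, `body`, and `issue_number`. As a minimal sketch of working with records in that shape, here is a pure-Python filter over a few hypothetical in-memory rows (the values below are illustrative, not taken from the dataset):

```python
from datetime import datetime

# Hypothetical records mirroring the issue_comments schema shown in the
# preview above (user, created_at, body, issue_number).
comments = [
    {"user": "qgallouedec", "created_at": datetime(2025, 1, 4, 18, 42, 36), "body": "Fun feature!", "issue_number": 2542},
    {"user": "August-murr", "created_at": datetime(2025, 1, 4, 19, 22, 33), "body": "Just pushed it", "issue_number": 2542},
    {"user": "kashif", "created_at": datetime(2025, 1, 8, 14, 34, 41), "body": "i was mistaken...", "issue_number": 2527},
]

def comments_for_issue(records, issue_number):
    """Return all comments on a given issue, oldest first."""
    matching = [r for r in records if r["issue_number"] == issue_number]
    return sorted(matching, key=lambda r: r["created_at"])

print([r["user"] for r in comments_for_issue(comments, 2542)])
# ['qgallouedec', 'August-murr']
```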
## Stars

```python
import os
from datetime import datetime

import pyarrow as pa
import requests
from datasets import Dataset


def get_stargazers(owner, repo, token):
    # Initialize the count and the page number
    page = 1
    stargazers = []
    while True:
        # Construct the URL for the stargazers with pagination
        stargazers_url = f"https://api.github.com/repos/{owner}/{repo}/stargazers?page={page}&per_page=100"

        # Send the request to GitHub API with appropriate headers
        headers = {"Accept": "application/vnd.github.v3.star+json", "Authorization": "token " + token}
        response = requests.get(stargazers_url, headers=headers)

        if response.status_code != 200:
            raise Exception(f"Failed to fetch stargazers with status code {response.status_code}: {response.text}")

        stargazers_page = response.json()

        if not stargazers_page:  # Exit the loop if there are no more stargazers to process
            break

        stargazers.extend(stargazers_page)
        page += 1  # Move to the next page

    return stargazers


token = os.environ.get("GITHUB_PAT")
stargazers = get_stargazers("huggingface", "trl", token)
stargazers = {key: [stargazer[key] for stargazer in stargazers] for key in stargazers[0].keys()}
dataset = Dataset.from_dict(stargazers)


def clean(example):
    starred_at = datetime.strptime(example["starred_at"], "%Y-%m-%dT%H:%M:%SZ")
    starred_at = pa.scalar(starred_at, type=pa.timestamp("s", tz="UTC"))
    return {"starred_at": starred_at, "user": example["user"]["login"]}


dataset = dataset.map(clean, remove_columns=dataset.column_names)
dataset.push_to_hub("qgallouedec/trl-metrics", config_name="stargazers")
```
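The `clean` step relies on GitHub's ISO-8601 timestamp format (e.g. `2020-04-01T09:48:12Z`), which the scripts in this card all parse with the same `strptime` format string. A minimal standalone check of that parsing:

```python
from datetime import datetime

# Format string used throughout this card for GitHub API timestamps.
GITHUB_TS = "%Y-%m-%dT%H:%M:%SZ"

def parse_github_timestamp(value: str) -> datetime:
    """Parse a GitHub API timestamp string into a naive UTC datetime."""
    return datetime.strptime(value, GITHUB_TS)

print(parse_github_timestamp("2020-04-01T09:48:12Z"))  # 2020-04-01 09:48:12
```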
## PyPI downloads

```python
import os

from datasets import Dataset
from google.cloud import bigquery

os.environ["GOOGLE_APPLICATION_CREDENTIALS"] = "propane-tree-432413-4c3e2b5e6b3c.json"

# Initialize a BigQuery client
client = bigquery.Client()

# Define your query
query = """
#standardSQL
WITH daily_downloads AS (
  SELECT
    DATE(timestamp) AS day,
    COUNT(*) AS num_downloads
  FROM
    `bigquery-public-data.pypi.file_downloads`
  WHERE
    file.project = 'trl'
    -- Filter for the last 54 months
    AND DATE(timestamp) BETWEEN DATE_SUB(CURRENT_DATE(), INTERVAL 54 MONTH) AND CURRENT_DATE()
  GROUP BY
    day
)
SELECT
  day,
  num_downloads
FROM
  daily_downloads
ORDER BY
  day DESC
"""

# Execute the query
query_job = client.query(query)

# Fetch the results
results = query_job.result()

# Convert the results to a pandas DataFrame and then to a Dataset
df = results.to_dataframe()
dataset = Dataset.from_pandas(df)
dataset.push_to_hub("qgallouedec/trl-metrics", config_name="pypi_downloads")
```
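The query returns one `(day, num_downloads)` row per day. If you want coarser granularity without a second BigQuery round trip, the daily rows can be rolled up locally; a small sketch over hypothetical sample rows (the numbers below are made up):

```python
from collections import defaultdict
from datetime import date

# Hypothetical sample of the (day, num_downloads) rows the query returns.
rows = [
    (date(2025, 1, 1), 120),
    (date(2025, 1, 2), 80),
    (date(2024, 12, 31), 50),
]

def monthly_totals(rows):
    """Aggregate daily download counts into (year, month) totals."""
    totals = defaultdict(int)
    for day, n in rows:
        totals[(day.year, day.month)] += n
    return dict(totals)

print(monthly_totals(rows))  # {(2025, 1): 200, (2024, 12): 50}
```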
## Models tagged

```python
from datasets import Dataset
from huggingface_hub import HfApi

api = HfApi()
models = api.list_models(tags="trl")
dataset_list = [
    {"id": model.id, "created_at": model.created_at, "likes": model.likes, "downloads": model.downloads, "tags": model.tags}
    for model in models
]
dataset_dict = {key: [d[key] for d in dataset_list] for key in dataset_list[0].keys()}
dataset = Dataset.from_dict(dataset_dict)
dataset.push_to_hub("qgallouedec/trl-metrics", config_name="models")
```
## Issues and comments

```python
import os
from datetime import datetime

import requests
from datasets import Dataset
from tqdm import tqdm

token = os.environ.get("GITHUB_PAT")


def get_full_response(url, headers, params=None):
    page = 1
    output = []
    params = params or {}
    while True:
        params = {**params, "page": page, "per_page": 100}
        response = requests.get(url, headers=headers, params=params)
        if response.status_code != 200:
            raise Exception(f"Failed to fetch issues: {response.text}")
        batch = response.json()
        if len(batch) == 0:
            break
        output.extend(batch)
        page += 1
    return output


# GitHub API URL for issues (closed and open)
issues_url = "https://api.github.com/repos/huggingface/trl/issues"

# Set up headers for authentication
headers = {"Authorization": f"token {token}", "Accept": "application/vnd.github.v3+json"}

# Make the request
issues = get_full_response(issues_url, headers, params={"state": "all"})

issues_dataset_dict = {
    "number": [],
    "title": [],
    "user": [],
    "state": [],
    "created_at": [],
    "closed_at": [],
    "comments_count": [],
}
comments_dataset_dict = {
    "user": [],
    "created_at": [],
    "body": [],
    "issue_number": [],
}

for issue in tqdm(issues):
    # Extract relevant information
    issue_number = issue["number"]
    title = issue["title"]
    created_at = datetime.strptime(issue["created_at"], "%Y-%m-%dT%H:%M:%SZ")
    comments_count = issue["comments"]

    comments_url = issue["comments_url"]
    comments = get_full_response(comments_url, headers=headers)
    for comment in comments:
        comments_dataset_dict["user"].append(comment["user"]["login"])
        comments_dataset_dict["created_at"].append(datetime.strptime(comment["created_at"], "%Y-%m-%dT%H:%M:%SZ"))
        comments_dataset_dict["body"].append(comment["body"])
        comments_dataset_dict["issue_number"].append(issue_number)

    issues_dataset_dict["number"].append(issue_number)
    issues_dataset_dict["title"].append(title)
    issues_dataset_dict["user"].append(issue["user"]["login"])
    issues_dataset_dict["state"].append(issue["state"])
    issues_dataset_dict["created_at"].append(created_at)
    issues_dataset_dict["closed_at"].append(datetime.strptime(issue["closed_at"], "%Y-%m-%dT%H:%M:%SZ") if issue["closed_at"] else None)
    issues_dataset_dict["comments_count"].append(comments_count)

issues_dataset = Dataset.from_dict(issues_dataset_dict)
comments_dataset = Dataset.from_dict(comments_dataset_dict)

issues_dataset.push_to_hub("qgallouedec/trl-metrics", config_name="issues")
comments_dataset.push_to_hub("qgallouedec/trl-metrics", config_name="issue_comments")
```
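The scripts in this card repeatedly turn a list of per-record dicts into the column-oriented dict that `Dataset.from_dict` expects (the `{key: [d[key] for d in ...]}` pattern above). A minimal standalone version of that rows-to-columns transposition, shown on hypothetical records:

```python
def records_to_columns(records):
    """Transpose a list of homogeneous dicts into a dict of column lists,
    the shape expected by datasets.Dataset.from_dict."""
    if not records:
        return {}
    return {key: [record[key] for record in records] for key in records[0]}

# Hypothetical records with the same keys in every row.
records = [
    {"user": "qgallouedec", "issue_number": 2542},
    {"user": "August-murr", "issue_number": 2542},
]
print(records_to_columns(records))
# {'user': ['qgallouedec', 'August-murr'], 'issue_number': [2542, 2542]}
```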