qgallouedec
2024-12-13T16:51:33
That's a good point! Feel free to open a PR to fix this. I don't think adding a unit test for this is relevant. If possible, add plots (e.g., with wandb) before/after to ensure that we aren't introducing a regression.
2,468
200
zhc7
2024-12-13T17:17:59
Of course! ![image](https://github.com/user-attachments/assets/2da93fdf-a29d-41a1-974a-2b640e3a6ee6) Here's a graph for the same training with and without the modification. You can see the pink line is a lot smoother, especially the accuracy graph. My `per_device_batch_size` is 2 so the accuracy per device can on...
2,468
201
qgallouedec
2024-12-13T17:34:35
Perfect!
2,468
202
HuggingFaceDocBuilderDev
2024-12-12T14:04:44
The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/trl/pr_2467). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.
2,467
203
qgallouedec
2024-12-13T17:52:47
That's very interesting! It would be a nice improvement. If you want to tackle this problem, you should be aware that packing will be implemented differently (in a simpler way) in the near future, see #2405. You should branch from there.
2,466
204
qgallouedec
2024-12-12T10:44:50
First, SAC is designed for continuous action spaces, whereas NLP tasks involve discrete token outputs. (A discrete variant of SAC exists though.) Second, SAC lacks a mechanism to constrain the policy from deviating too far from the initial model. PPO, on the other hand, explicitly limits policy updates, which is cru...
2,465
205
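The clipping mechanism described above can be sketched in a few lines. This is a toy, scalar-valued illustration of PPO's clipped surrogate objective, not TRL's actual implementation:

```python
import math

def ppo_clip_loss(logp_new, logp_old, advantage, clip_range=0.2):
    """Toy scalar version of PPO's clipped surrogate objective.

    The probability ratio between the new and old policy is clipped to
    [1 - clip_range, 1 + clip_range], so a single update cannot move the
    policy too far from the one that collected the data.
    """
    ratio = math.exp(logp_new - logp_old)
    unclipped = ratio * advantage
    clipped = max(min(ratio, 1.0 + clip_range), 1.0 - clip_range) * advantage
    # PPO maximizes min(unclipped, clipped); negate to express it as a loss.
    return -min(unclipped, clipped)

# With a positive advantage, a large policy jump is capped at 1 + clip_range:
print(ppo_clip_loss(1.0, 0.0, 1.0))  # -1.2 (ratio e^1 ≈ 2.72 is clipped to 1.2)
```

With `logp_new == logp_old` the ratio is 1 and the loss is just the negated advantage; only when the ratio leaves the clip band does the cap bind.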
AMindToThink
2024-12-13T21:11:39
Thank you for the response. The [RLOO trainer](https://arxiv.org/pdf/2402.14740) also lacks PPO's clipping mechanism that constrains the policy from deviating too far from the previous policy. [It turns out](https://huggingface.co/blog/putting_rl_back_in_rlhf_with_rloo) that for RLHF on pretrained language models, t...
2,465
206
qgallouedec
2024-12-13T22:13:21
> lacks PPO's clipping mechanism that constrains the policy from deviating too far from the previous policy

There is a KL term though.

> I don't see why a KL divergence term with a reference policy cannot be included into the SAC loss function.

I guess you can, it's just that in its classic formulation, the SAC...
2,465
207
haimianxing
2025-01-15T17:13:58
I also encountered this error. Traceback (most recent call last): ...
2,464
208
HuggingFaceDocBuilderDev
2024-12-11T20:08:30
The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/trl/pr_2463). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.
2,463
209
qgallouedec
2024-12-12T11:25:31
Thanks @kashif!
2,463
210
qgallouedec
2024-12-11T15:50:18
Looks good, feel free to mark it ready for review when it's ready :)
2,462
211
yaricom
2024-12-12T17:24:11
@qgallouedec Hi, Quentin! I can see that there are some trainer implementations that log tabular data as `wandb.Table` using the `Trainer.log()` method rather than the corresponding method of the WandB API. For example:

```Python
class DPOTrainer(Trainer):
    ......
    def evaluation_loop(...)
        ...
```
2,462
212
qgallouedec
2024-12-12T20:50:16
Hey, thanks for working on this. Actually, we need to remove all of this logging code in favor of [LogCompletionsCallback](https://huggingface.co/docs/trl/callbacks#trl.LogCompletionsCallback). The best way is probably to make this callback compatible with Comet.
2,462
213
yaricom
2024-12-13T14:37:42
@qgallouedec Thank you for quick response. I noticed that `LogCompletionsCallback` is a subclass of `WandbCallback`, which requires the `wandb` module to be present; otherwise, an exception is raised. It seems a bit out of place to leave this inheritance unchanged and simply add Comet integration to this callback. T...
2,462
214
qgallouedec
2024-12-13T17:38:35
> It is possible to change `LogCompletionsCallback` inheritance to use `TrainerCallback` as superclass and then implement both integrations: wandb and Comet.
>
> What do you think?

Yes, I think your suggestion makes sense. Would you like to make it part of this PR?
2,462
215
yaricom
2024-12-13T17:46:47
I think it would be better to have another PR for `LogCompletionsCallback` changes to keep things more granular.
2,462
216
qgallouedec
2024-12-13T18:10:36
LGTM, waiting https://github.com/huggingface/trl/pull/2462#discussion_r1884306215 to be addressed then I'll approve & merge. Thanks!
2,462
217
HuggingFaceDocBuilderDev
2024-12-13T18:14:15
The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/trl/pr_2462). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.
2,462
218
qgallouedec
2024-12-11T15:04:05
☄️
2,461
219
qgallouedec
2024-12-11T14:22:57
The dataset is not loaded in RAM (only the current batch). The training should be rather light in terms of RAM, as the weights and gradients are on the GPU. You'll still need enough RAM to load the model, though. When I run the experiment, I see that it requires less than 2GB of RAM.
2,460
220
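A figure like "less than 2GB of RAM" can be checked for your own run with a stdlib-only helper (Linux `ru_maxrss` semantics assumed; call it at the end of the training script):

```python
import resource

def peak_rss_gib():
    """Peak resident set size of the current process, in GiB.

    On Linux, ru_maxrss is reported in KiB (note: on macOS it is in bytes,
    so the conversion below would need adjusting there).
    """
    kib = resource.getrusage(resource.RUSAGE_SELF).ru_maxrss
    return kib / (1024 ** 2)

print(f"peak RSS: {peak_rss_gib():.2f} GiB")
```

This measures only the Python process's resident memory, not GPU memory, which is what the RAM discussion in this thread is about.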
Kallinteris-Andreas
2024-12-11T14:30:09
What CPU do you have (exact model), and does DRAM usage explode when you run `CUDA_VISIBLE_DEVICES="" python test.py`? My current best guess is that it is a BF16-related issue (as my r5 4600h does not natively support it), though that's probably off.
2,460
221
qgallouedec
2024-12-11T14:45:57
```
processor       : 0
vendor_id       : GenuineIntel
cpu family      : 6
model           : 79
model name      : Intel(R) Xeon(R) CPU @ 2.20GHz
stepping        : 0
microcode       : 0xffffffff
cpu MHz         : 2199.998
cache size      : 56320 KB
physical id     : 0
siblings        : 2
core id         : 0
cpu cores       : 1
apicid          : 0
initial apicid  : 0
fpu             : yes
fpu_exception...
```
2,460
222
Kallinteris-Andreas
2024-12-11T14:47:30
GPU does not work for me (works for my other RL projects)

```sh
$ PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True py test.py
[2024-12-11 16:34:26,123] [INFO] [real_accelerator.py:219:get_accelerator] Setting ds_accelerator to cuda (auto detect)
No ROCm runtime is found, using ROCM_HOME='/opt/rocm'
  0%| ...
```
2,460
223
qgallouedec
2024-12-11T14:59:11
WDYM it doesn't work? It seems to work from the traceback. I can see that your device only has 3.63 GiB, which is not enough to run the example. With `max_seq_length=128` you'll need around 12GB. ![W B Chart 11_12_2024, 15_58_14](https://github.com/user-attachments/assets/63349049-b7f3-4c09-9229-b1c3f1914c90)
2,460
224
Kallinteris-Andreas
2024-12-11T15:27:51
Here is the DRAM usage per value of `max_seq_length`:

```
max_seq_length -> max RAM usage observed (rounded up)
4              -> 10GB DRAM
32             -> 9GB DRAM
128            -> 11GB DRAM
512            -> 18GB DRAM
1024 (default) -> 32GB+ DRAM
```

Using `max_seq_length=128` seems to require 28 hours on my CPU, which is an improvement from not running at al...
2,460
225
qgallouedec
2024-12-11T15:37:44
> I am assuming it limits the context length used during fine-tuning

Yes, that's what it does.

> mentions something about ConstantLengthDataset but I have not found what it is.

This is a special dataset setting where all data have the same length. Not relevant for this issue though.
2,460
226
Kallinteris-Andreas
2024-12-12T01:29:36
How much time does it take to run this simple example on your hardware?

```py
from trl import SFTConfig, SFTTrainer
from datasets import load_dataset

dataset = load_dataset("trl-lib/Capybara", split="train")
training_args = SFTConfig(output_dir="Qwen/Qwen2.5-0.5B-SFT", max_seq_length=128)
trainer = SFTTrainer...
```
2,460
227
Kallinteris-Andreas
2024-12-16T10:23:40
Closing, as it appears to be the natural requirement of SFT
2,460
228
qgallouedec
2024-12-13T22:16:30
Thanks for this suggestion. Can you quantify the speedup? Any idea how to properly set the gradient checkpointing configurations? Can we reproduce the speedup with a very simple code example?
2,459
229
qgallouedec
2024-12-11T17:10:04
Hey, thanks for contributing! Is it really a loss type? It seems to me that it can be combined with any loss type, no? What about having a new arg in `DPOConfig`, maybe `length_normalize`? Also, I'd add a test for this.
2,458
230
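A sketch of what length normalization means in this context (the function names here are mine, not a proposed API): divide the summed token log-probs by the sequence length so longer completions aren't penalized just for having more tokens.

```python
def summed_logp(token_logps):
    # Standard DPO scores a completion by the *sum* of per-token log-probs.
    return sum(token_logps)

def length_normalized_logp(token_logps):
    # Length-normalized variant: the *mean* per-token log-prob instead.
    return sum(token_logps) / len(token_logps)

short = [-0.5, -0.5]   # 2 tokens
long = [-0.5] * 10     # 10 tokens, same per-token quality

print(summed_logp(short), summed_logp(long))                        # -1.0 -5.0
print(length_normalized_logp(short), length_normalized_logp(long))  # -0.5 -0.5
```

Under the summed score the longer completion looks five times worse despite identical per-token quality; normalization removes that length bias, which is the point of the proposed option.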
HuggingFaceDocBuilderDev
2024-12-10T16:14:50
The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/trl/pr_2457). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.
2,457
231
qgallouedec
2024-12-10T17:54:12
This is an interesting finding! ~I suspect it's related to https://github.com/huggingface/trl/issues/2175~. I'm investigating.
2,456
232
qgallouedec
2024-12-10T18:54:04
The issue arises from how the accelerator is configured in [`create_accelerator_and_postprocess`](https://github.com/huggingface/transformers/blob/91b8ab18b778ae9e2f8191866e018cd1dc7097be/src/transformers/trainer.py#L4990). To set the number of gradient accumulation steps, users can either:

1. Specify `num_steps` i...
2,456
233
muellerzr
2024-12-11T02:52:48
Correct, that's not what we want to do because with the fix to how we calculate the number of items in the batch, the losses will not align and things will be off, so we *don't* divide the loss by accumulation steps if we know that value. I'd need to play with this a bit as I'm not 100% sure if we can just modify the g...
2,456
234
AIR-hl
2024-12-11T03:10:34
> The issue arises from how the accelerator is configured in [`create_accelerator_and_postprocess`](https://github.com/huggingface/transformers/blob/91b8ab18b778ae9e2f8191866e018cd1dc7097be/src/transformers/trainer.py#L4990).

@qgallouedec I have a new question: if the problem arises from [create_accelerator_and_...
2,456
235
qgallouedec
2024-12-11T10:21:00
> @qgallouedec I have a new question that if the problem arises from [create_accelerator_and_postprocess](https://github.com/huggingface/transformers/blob/91b8ab18b778ae9e2f8191866e018cd1dc7097be/src/transformers/trainer.py#L4990) in `transformers.Trainer`, why `trl.SFTTrainer`'s behavior is normal, but `trl.DPOTrainer...
2,456
236
qgallouedec
2024-12-11T10:41:00
I may have found the solution: https://github.com/huggingface/transformers/pull/35207 Running some experiments...
2,456
237
qgallouedec
2024-12-11T11:12:53
## Does it solve the issue?

### Before the fix

Same effective batch size (32):

- grad accumulation = 32 / batch_size = 1
- grad accumulation = 8 / batch_size = 4

![Screenshot 2024-12-11 at 12 04 50](https://github.com/user-attachments/assets/d4b7513b-23c3-427a-aed7-72614bf337d0)

We can see here that the gra...
2,456
238
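A toy numeric illustration (not the transformers fix itself) of why a logged loss can depend on how gradient accumulation averages micro-batches:

```python
def full_batch_mean(losses):
    """Mean loss over the whole effective batch."""
    return sum(losses) / len(losses)

def accumulated_mean(losses, accum_steps):
    """Average of per-micro-batch mean losses.

    This equals the full-batch mean only when every micro-batch contributes
    the same number of items; with variable-length batches or token-level
    weighting the two quantities diverge, which is the kind of mismatch
    discussed in this thread.
    """
    size = len(losses) // accum_steps
    micro_means = [
        full_batch_mean(losses[i * size:(i + 1) * size])
        for i in range(accum_steps)
    ]
    return sum(micro_means) / accum_steps

losses = [1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0, 8.0]
# Same effective batch size, equal-sized micro-batches: identical means.
print(full_batch_mean(losses), accumulated_mean(losses, 4))  # 4.5 4.5
```

When micro-batches are uniform the two averages agree, matching the conclusion above that the bug affected reported logs rather than the training results.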
AIR-hl
2024-12-11T13:01:32
@qgallouedec Thanks for your work! So this bug actually only affects the reported logs and not the training results, right? :)
2,456
239
qgallouedec
2024-12-11T13:03:18
That's what the results suggest yes
2,456
240
qgallouedec
2024-12-11T13:25:54
Leaving the issue open until https://github.com/huggingface/transformers/pull/35207 is properly merged
2,456
241
August-murr
2024-12-10T09:29:57
@qgallouedec how's everything so far? Is there anything you'd like me to change?
2,455
242
qgallouedec
2024-12-10T09:38:57
Thanks @August-murr for this PR! As mentioned in this [comment](https://github.com/huggingface/trl/issues/2429#issuecomment-2515244907), I think it would be better to start by only adding this feature to the functions of `trl/data_utils.py` and check that everything works as expected, without adding it to any traine...
2,455
243
qgallouedec
2024-12-10T11:34:47
Looks good! We just need to update the docstrings of the functions and add some unittests
2,455
244
August-murr
2024-12-11T11:58:34
I'm assuming we should also integrate the functions from `data_utils.py` into all the trainers, correct?
2,455
245
qgallouedec
2024-12-11T12:12:44
> I'm assuming we should also integrate the functions from `data_utils.py` into all the trainers, correct?

Indeed, but we'll do that in a follow-up PR. I think it's the best way to go.
2,455
246
HuggingFaceDocBuilderDev
2024-12-11T12:21:37
The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/trl/pr_2455). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.
2,455
247
August-murr
2024-12-11T19:09:33
@qgallouedec let me know if there is anything else I need to do.
2,455
248
qgallouedec
2024-12-11T21:44:24
Looks good to me! Just one minor comment.
2,455
249
qgallouedec
2024-12-12T15:46:38
```python
from transformers import AutoProcessor
from trl import apply_chat_template

tokenizer = AutoProcessor.from_pretrained("trl-internal-testing/tiny-LlamaForCausalLM-3.2")

# Define dummy test tools
def get_current_temperature(location: str):
    """
    Gets the temperature at a given location.

    A...
```
2,455
250
HuggingFaceDocBuilderDev
2024-12-09T17:55:56
The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/trl/pr_2454). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.
2,454
251
kashif
2024-12-09T13:30:15
@NINGBENZHE do you know where in the code the issue is occurring?
2,453
252
NINGBENZHE
2024-12-09T14:09:34
> @NINGBENZHE do you know where in the code the issue is occurring?

I haven't found the issue yet, but after making the modifications, the critic's loss is functioning normally, and the optimizer's functionality has been restored.
2,453
253
kashif
2024-12-09T14:10:58
ok let me try to pin-point the issue... and perhaps try to add a failing test?
2,453
254
NINGBENZHE
2024-12-09T14:23:17
> ok let me try to pin-point the issue... and perhaps try to add a failing test?

You can repeat the same data and observe the critic's loss; it remains unchanged.
2,453
255
NINGBENZHE
2024-12-09T14:24:38
I found that the issue might have been introduced by this PR. https://github.com/huggingface/trl/commit/16fa13ce728e537a91742571b0c4824fb3a98a30
2,453
256
HuggingFaceDocBuilderDev
2024-12-09T14:44:23
The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/trl/pr_2453). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.
2,453
257
kashif
2024-12-09T14:47:50
@NINGBENZHE can you kindly run `make precommit` in the root dir to fix the formatting?
2,453
258
NINGBENZHE
2024-12-10T01:10:44
> @NINGBENZHE can you kindly run `make precommit` in the root dir to fix the formatting?

I made the submission directly on the web, without using local Git, and only modified the parameter names, so it should not have introduced any new formatting issues. Can you force the merge?
2,453
259
kashif
2024-12-12T10:45:09
closing as these changes have been merged into the PR #2463
2,453
260
asparius
2024-12-10T14:08:59
You have two GPUs, but you only use 1 in your accelerate config. You could also use DeepSpeed to further decrease the memory footprint. Lastly, keep `per_device_train_batch_size` as low as possible, and instead increase `gradient_accumulation_steps`.
2,452
261
gp-1108
2024-12-12T18:20:59
Hi @asparius, thank you for the suggestions. As I am running this code on a computing cluster I am having some problems with [deepspeed](https://github.com/microsoft/DeepSpeed/issues/2772#issuecomment-2151669077). I would like to keep this issue open and get back once I have solved those issues
2,452
262
qgallouedec
2024-12-13T22:33:34
It might come from your data. Do you have long sequences in your dataset? It's highly recommended to set these arguments in the `DPOConfig`: `max_length`, `max_prompt_length`, `max_completion_length`. E.g.

```python
DPOConfig(
    ...,
    max_prompt_length=128,
    max_completion_length=512,
)
```
2,452
263
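Roughly what those limits do, as a sketch: cap the token counts of each part so one very long sample cannot blow up memory. The helper name and the keep-end/keep-start choices below are assumptions for illustration; check the library docs for the exact truncation behavior.

```python
def truncate_pair(prompt_ids, completion_ids,
                  max_prompt_length=128, max_completion_length=512):
    """Cap token counts per sample (sketch, not TRL's implementation).

    Assumption: keep the *end* of the prompt (the part closest to the
    completion) and the *start* of the completion.
    """
    return prompt_ids[-max_prompt_length:], completion_ids[:max_completion_length]

p, c = truncate_pair(list(range(1000)), list(range(1000)))
print(len(p), len(c))  # 128 512
```

Since activation memory grows with sequence length, bounding both parts bounds the worst-case batch, which is why setting these arguments often resolves OOM errors like the one in this issue.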
asparius
2024-12-14T00:46:01
@gp-1108 I faced similar issues. I would recommend checking the available modules in your cluster with a command like `module avail` and loading a CUDA installation with `module load`; of course, this is assuming you are in a Slurm env. If you don't have CUDA in the available modules, perhaps you could ask the cluster admins to download it. ...
2,452
264
gp-1108
2024-12-16T00:52:13
Hi all, I have finally fixed all of the CUDA issues with the computing cluster 😮‍💨. However, I did not fix the original issue. I am still running OOM even after using two full A40s. I have tweaked both the script and the accelerate config, so I will leave them below (I hope everything is set up as it should be). ...
2,452
265
gp-1108
2024-12-20T12:02:25
Hi, I have finally solved the issue and I am going to leave it here for posterity. The issue lay mainly in two things:

1. **Some samples were too long**
2. **The PEFT configuration was not working**

**MANAGING SAMPLE LENGTH**: I plotted the lengths across a couple of metrics:

![image](https://github.com/u...
2,452
266
Kallinteris-Andreas
2024-12-08T23:00:17
What is the reason that your model is not a `torch.nn.Module`? My first reaction would be that you are doing something wrong, unless you provide a detailed explanation as to why. Can you convert your model to a `torch.nn.Module`? But if you have a good reason for a custom reward model class, you would have to modi...
2,451
267
hwhyyds
2024-12-09T06:30:26
In my code, I have trained a reward model with three outputs whose format is similar to '{"type1": 1, "type2": -1, "type3": 0}', which is different from the traditional output.
2,451
268
asparius
2024-12-10T14:02:16
> In my code, I have trained a reward model with three outputs was in a format similar to '{"type1": 1, "type2": -1, "type3": 0}', which is different from the traditional output

I believe you are doing some sort of classification, so you could still have an nn-based module for the classification part and then map its resu...
2,451
269
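The mapping suggested above could look something like this sketch (the helper name and the weight values are made up for illustration, not a recommendation):

```python
def map_to_scalar_reward(type_scores, weights=None):
    """Collapse a typed score dict like {'type1': 1, 'type2': -1, 'type3': 0}
    into a single scalar reward via a weighted sum.

    The default weights below are arbitrary placeholders; in practice they
    would encode how much each category should contribute to the reward.
    """
    if weights is None:
        weights = {"type1": 1.0, "type2": -0.5, "type3": 0.25}
    return sum(weights[k] * v for k, v in type_scores.items())

print(map_to_scalar_reward({"type1": 1, "type2": -1, "type3": 0}))  # 1.5
```

The classification part can stay an `nn.Module` (or any scorer); only this final scalar is what an RL trainer expecting a single reward would consume.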
hwhyyds
2024-12-16T10:58:18
Scores such as mine from GPT-4o can't be produced by an nn-based module.
2,451
270
kashif
2024-12-11T12:31:48
thanks @NIL-zhuang great catch!
2,450
271
HuggingFaceDocBuilderDev
2024-12-11T12:37:37
The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/trl/pr_2450). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.
2,450
272
qgallouedec
2024-12-11T12:45:14
Do we test this collator somewhere?
2,450
273
kashif
2024-12-11T12:50:49
We do test it... but more for the labels than for the content of the ids. Let me see if I can add a failing test.
2,450
274
kashif
2024-12-11T13:18:21
@qgallouedec added failing tests
2,450
275
asparius
2024-12-10T14:23:57
It is the entropy of the generated sequence by the policy given the prompt. Do you intend to measure another thing?
2,448
276
hubstrauss
2024-12-10T15:36:05
Oh I see, my bad - as the tokens were sampled from the model, you can get a sample-based estimation of entropy. Thanks! But then, why is the default value of INVALID_LOGPROB set to 1? When computing `-logprobs`, these masked tokens contribute -1 each to the sum?
2,448
277
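The sample-based entropy estimate being discussed is just the negative mean of the sampled tokens' log-probs; a minimal sketch:

```python
import math

def sample_entropy_estimate(sampled_token_logps):
    """Monte-Carlo estimate of H(p) = -E_p[log p(x)].

    Uses only the log-probs of tokens actually sampled from the policy, so
    no access to the full distribution is needed. Masked positions must be
    excluded first: a sentinel value such as an INVALID_LOGPROB constant
    would otherwise bias the mean, which is the concern raised above.
    """
    return -sum(sampled_token_logps) / len(sampled_token_logps)

# Uniform distribution over 4 tokens: every sampled log-prob is log(1/4),
# so the estimate equals log(4), the true entropy.
print(sample_entropy_estimate([math.log(0.25)] * 6))
```

For non-uniform distributions the estimate is unbiased but noisy, improving as more sampled tokens are averaged.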
asparius
2024-12-10T18:54:38
This was already mentioned in #2281.
2,448
278
HuggingFaceDocBuilderDev
2024-12-06T12:33:48
The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/trl/pr_2447). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.
2,447
279
anhuong
2024-12-06T17:30:29
Does this also need to be updated in the [requirements.txt](https://github.com/huggingface/trl/blob/main/requirements.txt#L4)? as it still shows `transformers>=4.46.0`
2,447
280
kashif
2024-12-06T18:09:57
@anhuong I believe the `requirements.txt` is used by the CI and the issue is fixed in the main branch...
2,447
281
qgallouedec
2024-12-06T12:24:20
Thanks for reporting. A patch release is coming ASAP. In the meantime, downgrading transformers to 4.46 should work:

```
pip install transformers==4.46
```

Related to https://github.com/huggingface/trl/pull/2381

Keeping the issue open until the release.
2,445
282
gp-1108
2024-12-06T14:59:42
Thanks @qgallouedec, downgrading to the specified version worked!
2,445
283
qgallouedec
2024-12-13T22:36:16
Solved with https://github.com/huggingface/trl/releases/tag/v0.12.2

```
pip install --upgrade trl
```
2,445
284
pspdada
2024-12-20T04:02:00
> Solved with https://github.com/huggingface/trl/releases/tag/v0.12.2
>
> ```
> pip install --upgrade trl
> ```

Hello, I've noticed that the issue has resurfaced with the latest version, trl==0.13.0. Since in this version the requirement has reverted to `transformers>=4.46.0`, this problem has reappeared. Co...
2,445
285
qgallouedec
2024-12-20T10:29:41
This issue should be fixed in 0.13. Can you share your system info? (`trl env`)
2,445
286
pspdada
2024-12-20T11:56:57
> This issue should be fixed in 0.13. Can you share your system info? (`trl env`)

I understand what happened with the changes in this part; it was due to an error in my implementation. I apologize for the disturbance.
2,445
287
melissamao
2024-12-29T13:22:04
Same questions.
2,444
288
HuggingFaceDocBuilderDev
2024-12-05T19:15:12
The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/trl/pr_2443). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.
2,443
289
HuggingFaceDocBuilderDev
2024-12-05T18:54:01
The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/trl/pr_2442). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.
2,442
290
HuggingFaceDocBuilderDev
2024-12-05T15:00:27
The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/trl/pr_2441). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.
2,441
291
qgallouedec
2024-12-06T09:08:45
Thanks! can you approve?
2,441
292
HuggingFaceDocBuilderDev
2024-12-05T14:24:42
The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/trl/pr_2440). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.
2,440
293
qgallouedec
2024-12-04T18:15:42
LGTM, thanks :)
2,439
294
dame-cell
2024-12-10T15:19:45
Not really done yet, but for now everything seems to be working: if `padding_free` is set to `True`, the trainer will not pad, and `attention_mask` will not be used. For now, here are some tasks to be done:

- [x] Ensure when `padding_free=True` the trainer will not pad
- [x] Ensure that...
2,437
295
dame-cell
2024-12-11T13:02:27
most of the stuff is done just some small stuff left like dealing with list and converting to tensor
2,437
296
dame-cell
2024-12-11T14:43:58
Hey @osanseviero, the main idea for using `padding_free` is mostly in place now, but there are still a few things that need to be done. It would be awesome if you could take a look at the code and let me know if there's anything else I should address or add. I've made it so the user can directly do this:

```pyt...
2,437
297
dame-cell
2024-12-13T12:21:52
@osanseviero I think this fixes it. Sorry for the small mistakes I have been making, and thanks for your patience.
2,437
298
dame-cell
2024-12-13T14:44:55
still more work to be done not really ready yet
2,437
299