Could you please release the fine-tuning code?
Hi, could you please add the fine-tuning code? Thanks.
To be compatible with various downstream task heads, my Chinese modernBERT removes the Masked Language Model (MLM) head. When fine-tuning for new tasks, first freeze the model backbone and train for a few steps using mean pooling and an MLP, then unfreeze the backbone parameters.
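The two-phase recipe above (train a mean-pooling + MLP head on a frozen backbone, then unfreeze) can be sketched as follows. This is a minimal illustration, not the author's released code: `DummyEncoder` is a hypothetical stand-in for the ModernBERT backbone (with a `transformers` checkpoint you would use `model(...).last_hidden_state` instead), and the head sizes are arbitrary.

```python
import torch
import torch.nn as nn

class DummyEncoder(nn.Module):
    """Hypothetical stand-in for the headless backbone.

    Returns (batch, seq, hidden) states; a real transformers model
    would return these via model(...).last_hidden_state.
    """
    def __init__(self, vocab_size=100, hidden_size=32):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, hidden_size)

    def forward(self, input_ids, attention_mask):
        return self.embed(input_ids)

class PooledClassifier(nn.Module):
    """Mean pooling over non-padding tokens, followed by an MLP head."""
    def __init__(self, backbone, hidden_size, num_labels):
        super().__init__()
        self.backbone = backbone
        self.head = nn.Sequential(
            nn.Linear(hidden_size, hidden_size),
            nn.GELU(),
            nn.Linear(hidden_size, num_labels),
        )

    def forward(self, input_ids, attention_mask):
        hidden = self.backbone(input_ids, attention_mask)  # (B, T, H)
        # Mean-pool only over real tokens, masking out padding.
        mask = attention_mask.unsqueeze(-1).float()
        pooled = (hidden * mask).sum(1) / mask.sum(1).clamp(min=1e-9)
        return self.head(pooled)

def set_backbone_trainable(model, trainable):
    """Phase 1: trainable=False (head-only warm-up for a few steps);
    Phase 2: trainable=True (full fine-tuning)."""
    for p in model.backbone.parameters():
        p.requires_grad = trainable
```

Usage would be: build the model, call `set_backbone_trainable(model, False)`, train the head for a few hundred steps, then call `set_backbone_trainable(model, True)` and continue with a lower learning rate.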
Could you please release the fine-tuning scripts? I'm running into some problems I can't diagnose, and my NER fine-tuning results were poor.
Sorry, I now realize that the model performs poorly on token-level tasks.
Do you have any idea to improve the token-level tasks? Thank you.
I used the script below to train the NER model:
https://github.com/huggingface/transformers/blob/main/examples/pytorch/token-classification/run_ner.py
Is something wrong with it, or should I revise the script?