Conversation

@desireevl desireevl commented Aug 9, 2019

Fix for #14. I believe this updates the things that changed in the migration from pytorch-pretrained-bert to pytorch-transformers. The update runs fine for me on the toxic-comment Kaggle dataset; however, there may be other things in trainer.py that I have missed.

Also, in bert_fine.py I have commented out `output_all_encoded_layers` in the forward function, as I'm not sure how to include it under the new API.
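For reference, pytorch-transformers dropped the per-call `output_all_encoded_layers` argument and moved the equivalent switch into the model config. A minimal sketch of that config change (assumption: the repo loads `bert-base-uncased`; substitute whatever checkpoint trainer.py actually uses):

```python
from pytorch_transformers import BertConfig, BertModel

# The old pytorch-pretrained-bert call was roughly:
#   encoded_layers, pooled = model(ids, output_all_encoded_layers=True)
# In pytorch-transformers, request all hidden states via the config instead:
config = BertConfig.from_pretrained("bert-base-uncased", output_hidden_states=True)
model = BertModel.from_pretrained("bert-base-uncased", config=config)

# forward now returns (sequence_output, pooled_output, hidden_states),
# where hidden_states is a tuple of per-layer tensors.
```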

@maxware0715

Thank you!
By the way, focal loss tends to work better than BCE loss here, especially with hundreds of labels.
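A minimal sketch of binary focal loss (Lin et al., 2017) for one label, to illustrate why it helps with many sparse labels: easy, confidently-correct predictions are down-weighted by the `(1 - pt) ** gamma` factor, so the loss is dominated by the hard examples. The scalar formulation and the `gamma`/`alpha` defaults below are illustrative, not taken from this repo:

```python
import math

def binary_focal_loss(p, y, gamma=2.0, alpha=0.25):
    """Focal loss for a single binary label.

    p: predicted probability of the positive class (0 < p < 1)
    y: ground-truth label, 0 or 1
    """
    pt = p if y == 1 else 1.0 - p          # probability of the true class
    w = alpha if y == 1 else 1.0 - alpha   # class-balancing weight
    # BCE is -log(pt); focal loss scales it by (1 - pt) ** gamma,
    # which shrinks the contribution of well-classified examples.
    return -w * (1.0 - pt) ** gamma * math.log(pt)
```

In the multi-label setting this would be averaged over all labels per example, replacing the per-label BCE terms.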
