

Abd Al-Rahman Odeh

Fine-tuning BERT for abstractive text summarization

I am using BERT (AraBERT, to be specific) for Arabic abstractive text summarization, but I don't want to train all of the parameters from scratch. What I am looking for is a way to freeze the pretrained layers and then add a few new layers on top (LSTM or Transformer layers) that I train on my own data. How can I accomplish this? Training the whole model from scratch takes a lot of resources and time. Thanks in advance for your answers.

I tried the Hugging Face approach for fine-tuning on the summarization task, but it trains all of the parameters.
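
For context, here is a minimal sketch of the kind of setup I am after, assuming PyTorch and the aubmindlab/bert-base-arabertv2 checkpoint (the LSTM head and the layer sizes are only illustrative, not something I have working):

```python
import torch
import torch.nn as nn
from transformers import AutoModel

# Pretrained AraBERT encoder (checkpoint name is an assumption on my part).
MODEL_NAME = "aubmindlab/bert-base-arabertv2"
encoder = AutoModel.from_pretrained(MODEL_NAME)

# Freeze every pretrained parameter so only the newly added layers get updated.
for param in encoder.parameters():
    param.requires_grad = False

class FrozenBertSummarizer(nn.Module):
    """Frozen AraBERT encoder followed by a small trainable LSTM head."""

    def __init__(self, encoder, hidden_size=768, vocab_size=None):
        super().__init__()
        self.encoder = encoder
        vocab_size = vocab_size or encoder.config.vocab_size
        # Trainable layers stacked on top of the frozen encoder.
        self.lstm = nn.LSTM(hidden_size, hidden_size, batch_first=True)
        self.lm_head = nn.Linear(hidden_size, vocab_size)

    def forward(self, input_ids, attention_mask):
        # The encoder is frozen, so no gradients are needed for it.
        with torch.no_grad():
            hidden = self.encoder(
                input_ids=input_ids, attention_mask=attention_mask
            ).last_hidden_state
        out, _ = self.lstm(hidden)
        return self.lm_head(out)  # token logits over the vocabulary

model = FrozenBertSummarizer(encoder)
# Only the LSTM and lm_head parameters are handed to the optimizer.
optimizer = torch.optim.AdamW(
    (p for p in model.parameters() if p.requires_grad), lr=1e-4
)
```

Is something along these lines the right way to go, or is there a more standard way to freeze AraBERT and train only the added layers for summarization?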

deep-learning

nlp

bert-language-model

summarization

fine-tune

0 Answers
