LMVE at SemEval-2020 Task 4: Commonsense Validation and Explanation using Pretraining Language Model
Abstract
This paper describes our submission to subtasks A and B of SemEval-2020 Task 4. For subtask A, we use an ALBERT-based model with an improved input form to pick out the commonsense statement from two statement candidates. For subtask B, we use a multiple-choice model enhanced by a hint sentence mechanism to select, from the given options, the reason why a statement is against common sense. Besides, we propose a novel transfer learning strategy between the subtasks which helps improve the performance. The accuracy scores of our system are 95.6 / 94.9 on the official test set, ranking 7$^{th}$ / 2$^{nd}$ on the Post-Evaluation leaderboard.
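For context, the sketch below shows one way a pretrained ALBERT model can be used to score two statement candidates, as in subtask A. It is a minimal illustration assuming the HuggingFace transformers library; the checkpoint name, the single-logit scoring scheme, and the example sentences are illustrative assumptions, not the authors' actual system or input form.

```python
# Minimal sketch (not the authors' code): scoring two candidate statements
# with a pretrained ALBERT model for commonsense validation (subtask A).
# Checkpoint, scoring head, and examples are assumptions for illustration.
import torch
from transformers import AlbertTokenizer, AlbertForSequenceClassification

tokenizer = AlbertTokenizer.from_pretrained("albert-base-v2")
model = AlbertForSequenceClassification.from_pretrained("albert-base-v2", num_labels=1)
model.eval()

statements = [
    "He put an elephant into the fridge.",  # against common sense
    "He put a turkey into the fridge.",     # makes sense
]

# Encode each candidate and let the model assign a plausibility score;
# the statement with the higher score is predicted to be the sensible one.
inputs = tokenizer(statements, padding=True, return_tensors="pt")
with torch.no_grad():
    scores = model(**inputs).logits.squeeze(-1)

predicted = int(torch.argmax(scores))
print(f"Statement {predicted} is predicted to make sense.")
```

In practice such a head would be fine-tuned on the task's training pairs before the scores are meaningful; the snippet only illustrates the inference-time comparison between the two candidates.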
- Publication: arXiv e-prints
- Pub Date: July 2020
- DOI: 10.48550/arXiv.2007.02540
- arXiv: arXiv:2007.02540
- Bibcode: 2020arXiv200702540L
- Keywords: Computer Science - Computation and Language; Computer Science - Artificial Intelligence; I.2.7
- E-Print: Accepted in SemEval-2020. 7 pages, 4 figures