Swiss Legal Benchmark for Classification and Generation
Description
We introduce four new datasets within the Swiss jurisdiction, covering two classification and two generative tasks, and report significant insights for legal NLP. In the classification tasks, comprehensive pre-training proved more important than model size. In the generation tasks, larger models generally outperformed smaller ones, though the generated text often lacked logical coherence. These findings expose the limitations of current models and underline the need for improvement on these tasks. This research sets the stage for future investigations, emphasizing the potential of comprehensive pre-training and the improvement of logical coherence in legal language models.
Files
BachelorThesis_Vishvaksenan_Rasiah.pdf
(1.4 MB)
md5:5a85ab1117fe0cffa104ac2394a8b21b