Published March 30, 2024 | License: CC-BY-NC-ND 4.0
Journal article | Open Access

A Case Study on the Diminishing Popularity of Encoder-Only Architectures in Machine Learning Models

  • 1. Department of Data Science, Northeastern University, San Jose, United States.
  • 2. Department of Computer Science, University of Massachusetts Amherst, Sunnyvale, United States.
  • 3. Department of Information Security, Carnegie Mellon University, San Jose, United States.
  • 4. Department of Computer Science, Carnegie Mellon University, San Jose, United States.

Description

Abstract: This paper examines the shift from encoder-only to decoder-only and encoder-decoder models in machine learning, highlighting the declining popularity of encoder-only architectures. It explores the reasons behind this trend, such as advances in decoder models that offer superior generative capabilities, flexibility across a wide range of domains, and improvements in unsupervised learning techniques. The study also discusses the role of prompting techniques in simplifying model architectures and enhancing model versatility. By analyzing the evolution, applications, and shifting preferences within the research community and industry, this paper aims to provide insight into the changing landscape of machine learning model architectures.
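
To make the architectural contrast concrete, the following is a minimal sketch (not taken from the paper) of how the two families are typically used. It assumes the Hugging Face transformers library and the publicly available bert-base-uncased and gpt2 checkpoints as illustrative stand-ins for encoder-only and decoder-only models: the encoder-only model fills in masked tokens using bidirectional context, while the decoder-only model generates text autoregressively from a prompt, which is what allows a single prompted model to be steered toward many tasks.

    # Illustrative sketch only; the library and model names are assumptions, not artifacts of the paper.
    from transformers import pipeline

    # Encoder-only (BERT-style): bidirectional masked-token prediction, well suited to
    # classification and span tasks, but not free-form text generation.
    fill_mask = pipeline("fill-mask", model="bert-base-uncased")
    print(fill_mask("Encoder-only models are typically used for [MASK] tasks.")[0]["token_str"])

    # Decoder-only (GPT-style): autoregressive generation, so one prompted model can be
    # applied to many tasks without task-specific heads or separate fine-tuned variants.
    generator = pipeline("text-generation", model="gpt2")
    print(generator("Q: Why did decoder-only models become popular?\nA:", max_new_tokens=30)[0]["generated_text"])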

Files

D982713040324.pdf (310.4 kB)
md5:6d5c9e9d2879183955497313d4c170b8

Additional details

Dates

Received: 28 February 2024
Revised: 08 March 2024
Accepted: 15 March 2024
Published: 30 March 2024

References

  • Vaswani, A., et al. (2017). "Attention Is All You Need." Advances in Neural Information Processing Systems.
  • Radford, A., et al. (2018). "Improving Language Understanding by Generative Pre-Training." OpenAI Blog.
  • Devlin, J., et al. (2018). "BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding." arXiv preprint arXiv:1810.04805.
  • Brown, T., et al. (2020). "Language Models are Few-Shot Learners." arXiv preprint arXiv:2005.14165.
  • Raffel, C., et al. (2019). "Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer." Journal of Machine Learning Research.
  • Hochreiter, S., & Schmidhuber, J. (1997). "Long Short-Term Memory." Neural Computation. https://doi.org/10.1162/neco.1997.9.8.1735
  • Sutskever, I., Vinyals, O., & Le, Q. V. (2014). Sequence to Sequence Learning with Neural Networks. arXiv:1409.3215
  • Kitaev, N., Kaiser, Ł., & Levskaya, A. (2020). Reformer: The Efficient Transformer. arXiv:2001.04451
  • Touvron, H., et al. (2023). "LLaMA: Open and Efficient Foundation Language Models." arXiv:2302.13971
  • Radford, A., et al. (2019). "Language Models are Unsupervised Multitask Learners."
  • Liu, P., et al. (2021). "Pre-train, Prompt, and Predict: A Systematic Survey of Prompting Methods in Natural Language Processing." arXiv:2107.13586
  • Lewis, P., et al. (2020). "Retrieval-Augmented Generation for Knowledge-Intensive NLP Tasks."
  • Hu, E. J., et al. (2021). "LoRA: Low-Rank Adaptation of Large Language Models."
  • Srikanth, P., Ushitaasree, & Anand, Dr. G. P. (2019). Conversational Chatbot with Attention Model. In International Journal of Innovative Technology and Exploring Engineering (Vol. 9, Issue 2, pp. 3537– 3540). https://doi.org/10.35940/ijitee.b6316.129219
  • Balaji, S., Gopannagari, M., Sharma, S., & Rajgopal, P. (2021). Developing a Machine Learning Algorithm to Assess Attention Levels in ADHD Students in a Virtual Learning Setting using Audio and Video Processing. In International Journal of Recent Technology and Engineering (IJRTE) (Vol. 10, Issue 1, pp. 285–295). https://doi.org/10.35940/ijrte.a5965.0510121
  • Nayak, R., Kannantha, B. S. U., S, K., & Gururaj, C. (2022). Multimodal Offensive Meme Classification using Transformers and BiLSTM. In International Journal of Engineering and Advanced Technology (Vol. 11, Issue 3, pp. 96–102). https://doi.org/10.35940/ijeat.c3392.0211322
  • Singh, S., Ghatnekar, V., & Katti, S. (2024). Long Horizon Episodic Decision Making for Cognitively Inspired Robots. In Indian Journal of Artificial Intelligence and Neural Networking (Vol. 4, Issue 2, pp. 1–7). https://doi.org/10.54105/ijainn.b1082.04020224
  • Sharma, T., & Sharma, R. (2024). Smart Grid Monitoring: Enhancing Reliability and Efficiency in Energy Distribution. In Indian Journal of Data Communication and Networking (Vol. 4, Issue 2, pp. 1–4). https://doi.org/10.54105/ijdcn.d7954.04020224