Applied and Computational Engineering
- The Open Access Proceedings Series for Conferences
Series Vol. 57, 30 April 2024
In recent years, large language models (LLMs) have transformed natural language processing (NLP) through novel architectures and sophisticated training techniques. This paper provides a comprehensive overview of LLMs, focusing on their architecture, training methodologies, and diverse applications. We examine the transformer architecture, attention mechanisms, and parameter tuning strategies that underpin LLMs' capabilities. We then explore training techniques such as self-supervised learning, transfer learning, and curriculum learning, highlighting their roles in endowing LLMs with linguistic proficiency. Finally, we discuss the wide-ranging applications of LLMs, including text generation, sentiment analysis, and question answering, showcasing their versatility and impact across domains. Through this examination, we aim to elucidate the advancements and potential of LLMs in shaping the future of natural language understanding and generation.
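As a minimal, self-contained sketch of the scaled dot-product attention underlying the transformer architecture discussed above, the snippet below computes softmax(QK^T / sqrt(d_k))V for a single attention head. The function name, tensor shapes, and random inputs are illustrative assumptions for exposition, not drawn from any particular model covered in this paper.

```python
# Illustrative sketch of single-head scaled dot-product attention;
# shapes and inputs are assumptions chosen for the example.
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Return softmax(Q K^T / sqrt(d_k)) V for 2-D query/key/value matrices."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                      # query-key similarities
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)       # row-wise softmax
    return weights @ V                                   # weighted sum of values

# Example usage: 4 tokens with 8-dimensional embeddings.
rng = np.random.default_rng(0)
Q = rng.standard_normal((4, 8))
K = rng.standard_normal((4, 8))
V = rng.standard_normal((4, 8))
out = scaled_dot_product_attention(Q, K, V)              # shape (4, 8)
```

The 1/sqrt(d_k) scaling keeps the dot products from growing with the embedding dimension, which would otherwise push the softmax into regions with vanishing gradients; production models additionally use multiple heads and learned projections of Q, K, and V.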
Large Language Models, Natural Language Processing, Transformer Architecture, Training Techniques, Self-Supervised Learning
The datasets used and/or analyzed during the current study will be available from the authors upon reasonable request.