
Unlocking the potential: A comprehensive exploration of large language models in natural language processing

Qing Xue ¹*
¹ Xi’an Zhiwen Intelligent Technology Co., Ltd

* Author to whom correspondence should be addressed.

Applied and Computational Engineering, Vol. 57, 247-252
Published 30 April 2024. © 2024 The Author(s). Published by EWA Publishing.
This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
Citation: Qing Xue. Unlocking the potential: A comprehensive exploration of large language models in natural language processing. ACE (2024) Vol. 57: 247-252. DOI: 10.54254/2755-2721/57/20241341.

Abstract

In recent years, large language models (LLMs) have revolutionized natural language processing (NLP) with their transformative architectures and sophisticated training techniques. This paper provides a comprehensive overview of LLMs, focusing on their architecture, training methodologies, and diverse applications. We delve into the transformer architecture, attention mechanisms, and parameter tuning strategies that underpin LLMs' capabilities. Furthermore, we explore training techniques such as self-supervised learning, transfer learning, and curriculum learning, highlighting their roles in endowing LLMs with linguistic proficiency. Additionally, we discuss the wide-ranging applications of LLMs, including text generation, sentiment analysis, and question answering, showcasing their versatility and impact across various domains. Through this examination, we aim to elucidate the advances and potential of LLMs in shaping the future of natural language understanding and generation.
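
To make the attention mechanism named above concrete, the following is a minimal sketch of scaled dot-product attention, Attention(Q, K, V) = softmax(QK^T / sqrt(d_k)) V, the core operation of the transformer architecture the paper surveys. This NumPy sketch is illustrative only and is not code from the paper; the function name and toy shapes are our own assumptions, and real transformer layers add learned projections, multiple attention heads, and masking.

    import numpy as np

    def scaled_dot_product_attention(Q, K, V):
        # Attention(Q, K, V) = softmax(Q K^T / sqrt(d_k)) V
        d_k = Q.shape[-1]
        scores = Q @ K.T / np.sqrt(d_k)                # (n, n) token-pair similarities
        scores -= scores.max(axis=-1, keepdims=True)   # stabilize the softmax numerically
        weights = np.exp(scores)
        weights /= weights.sum(axis=-1, keepdims=True) # each row sums to 1
        return weights @ V                             # weighted sum of value vectors

    # Toy usage (hypothetical shapes): 4 tokens, 8-dimensional representations.
    rng = np.random.default_rng(0)
    Q, K, V = (rng.normal(size=(4, 8)) for _ in range(3))
    out = scaled_dot_product_attention(Q, K, V)
    print(out.shape)  # (4, 8): one contextualized vector per token

Each output row is a mixture of the value vectors, weighted by how strongly that token attends to every other token; this is what lets transformers model long-range dependencies in a single layer.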

Keywords

Large Language Models, Natural Language Processing, Transformer Architecture, Training Techniques, Self-Supervised Learning


Data Availability

The datasets used and/or analyzed during the current study are available from the authors upon reasonable request.

This work is licensed under a Creative Commons Attribution-ShareAlike 4.0 International License. Authors who publish in this series agree to the following terms:

1. Authors retain copyright and grant the series the right of first publication, with the work simultaneously licensed under a Creative Commons Attribution License that allows others to share the work with an acknowledgment of the work's authorship and initial publication in this series.

2. Authors are able to enter into separate, additional contractual arrangements for the non-exclusive distribution of the series' published version of the work (e.g., posting it to an institutional repository or publishing it in a book), with an acknowledgment of its initial publication in this series.

3. Authors are permitted and encouraged to post their work online (e.g., in institutional repositories or on their websites) prior to and during the submission process, as this can lead to productive exchanges, as well as earlier and greater citation of the published work (see the Open Access instructions).

Volume Title: Proceedings of the 6th International Conference on Computing and Data Science
ISBN (Print): 978-1-83558-393-7
ISBN (Online): 978-1-83558-394-4
Published Date: 30 April 2024
Series: Applied and Computational Engineering
ISSN (Print): 2755-2721
ISSN (Online): 2755-273X
DOI: 10.54254/2755-2721/57/20241341
Copyright: © 2024 The Author(s)
Open Access: This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
