
Large Language Models: The Transformative Force Shaping the 21st Century

By Rosie Dutt

It is no secret that as a species we are consistently looking for ways to simplify day-to-day tasks and save time. We saw this when humans embraced calculators to accelerate simple mathematical computation, self-checkout machines to expedite the purchase of goods, and robot vacuums to autonomously clean our homes. More recently, large language models (LLMs) such as ChatGPT and Bard have taken center stage in our lives, utilized daily for a whole host of tasks, given that they can generate human-like text by understanding complex language patterns. But how do these LLMs work?

To understand LLMs, we first need to understand how artificial intelligence (AI) algorithms work. An algorithm can be thought of as a set of instructions used to complete a task or solve a problem. An AI algorithm does the same, but through computers that perform tasks which would normally require human intelligence. AI algorithms can draw on different techniques, such as machine learning and deep learning. Machine learning involves training an algorithm to learn the statistical relationships, patterns, and structures in data, whilst deep learning (a subset of machine learning) uses multiple layers of association to learn and extract patterns from data. A common form of deep learning is the neural network, which has several such layers that process and transform input data to make predictions or recognize patterns.
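To make this concrete, below is a minimal sketch (in Python, using made-up random weights rather than anything learned from data) of how an input flows through the layers of a tiny neural network. The layer sizes and names are purely illustrative.

```python
import numpy as np

# A tiny two-layer neural network: each layer multiplies its input by a
# weight matrix and applies a nonlinearity, transforming the data step by step.
rng = np.random.default_rng(0)

def relu(x):
    # A common nonlinearity: negative values become zero.
    return np.maximum(0, x)

# In a real network these weights are learned during training; here they
# are random placeholders, just to show the flow of information.
W1 = rng.normal(size=(4, 8))   # layer 1: 4 input features -> 8 hidden units
W2 = rng.normal(size=(8, 2))   # layer 2: 8 hidden units -> 2 output scores

x = rng.normal(size=(1, 4))    # one input example with 4 features
hidden = relu(x @ W1)          # first layer of association
output = hidden @ W2           # second layer produces prediction scores
print(output)
```

Each additional layer lets the network combine simpler patterns into more abstract ones, which is what the "deep" in deep learning refers to.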

LLMs are advanced AI models that leverage machine learning techniques, including neural networks, to comprehend and subsequently generate human-like text. These models process data to identify patterns, relationships, and insights that can be used for decision-making or for the prediction and classification of other data. LLMs are trained on specific datasets, and they improve their performance over time through exposure to, and training on, different types of data. An LLM that everyone has become familiar with is ChatGPT. It is trained on text that is freely accessible on the internet, including websites, books, and articles, with the aim that ChatGPT responds to user requests and comments by generating coherent, context-specific replies.
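Under the hood, models like ChatGPT generate text one token (roughly, one word piece) at a time, repeatedly predicting the most likely continuation of everything written so far. The toy sketch below mimics that loop with an invented lookup table standing in for the trained network; a real LLM instead scores its entire vocabulary using billions of learned parameters.

```python
# A toy stand-in for a trained language model: given the last two words,
# it "predicts" the next one. This table is invented purely for illustration.
toy_model = {
    ("how", "do"): "these",
    ("do", "these"): "models",
    ("these", "models"): "work",
}

def predict_next(context):
    # A real LLM uses a neural network to assign a probability to every
    # token in its vocabulary; here we just look up the last two words.
    return toy_model.get(tuple(context[-2:]), "<end>")

# Generate text by repeatedly appending the predicted next token.
tokens = ["how", "do"]
while (nxt := predict_next(tokens)) != "<end>":
    tokens.append(nxt)

print(" ".join(tokens))  # prints: how do these models work
```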

There are several reasons why LLMs are being adopted by people and organizations. The first is their versatility: because these models are trained on a diverse set of data, they can respond to an assortment of topics and perform a range of tasks. This, coupled with their comprehensibility and near-instant response times, has led to widespread adoption across industries. Additionally, LLMs can process multiple interactions in parallel. This scalability, together with their capacity to automate routine tasks, is prompting discussion, and subsequent action, on the use of AI for numerous tasks, diverting and minimizing the need for human labor.

Furthermore, AI chatbots are on the rise in various industries, contributing to general customer support, appointment scheduling, lead generation, and content delivery, to name a few. AI chatbots can use LLMs to generate their responses by leveraging the models' language capabilities and comprehension of context. Outside of the workplace, social media platforms such as Snapchat have integrated ChatGPT to provide a chatbot for their users (i.e., My AI). People of all ages have reported feeling less lonely when using chatbots, given that they are available 24/7 to provide feedback and make them feel heard. This is an example of humans using AI for personal ends, seeking companionship via applications built on LLMs.

Despite their versatility, scalability, rapid response times, and capacity for emotional companionship, there are several limitations to note when using an LLM. The first depends on when the LLM was trained. For example, ChatGPT's training data extended only through September 2021, so any developments since then may not be encapsulated in the model's responses, which can lead to outdated replies. Moreover, if the dataset the LLM was trained on over-represents certain types of information, the responses can be skewed and biased, which in turn undermines the reliability of, and confidence in, the widespread adoption of these models. Conversely, training an LLM on generalized data can limit its application in domain-specific contexts. LLMs can also generate false, harmful, and inaccurate information, which raises ethical concerns. Finally, some users may be unaware that they are interacting with an LLM at all, and this lack of transparency raises concerns about data privacy and user consent, particularly if the user shares personal or sensitive information.

Additionally, LLMs such as ChatGPT generate content based on existing, open-access material. Accordingly, they may inadvertently generate content without consent from the original creators, which some refer to as plagiarism and others as a lack of proper authorship attribution. These concerns contributed to the strikes held by actors and writers in Hollywood earlier this year: actors cited concerns about their likenesses being used by AI technology, whilst writers highlighted issues surrounding credit and prestige. Because of the speed at which developments are occurring, existing intellectual property law does not fully address the nuances that arise with AI-generated content, such as guidelines and regulations on creative control, credit attribution, and intellectual property protection. The complex interplay between the use of LLMs, ethical concerns, and workers' rights will continue to be a topic of conversation as AI develops and threatens the livelihoods of many.

Just as the development and expansion of the internet played a significant role in shaping the late 20th century, advances in AI technologies, particularly LLMs, are becoming integrated into all aspects of society as we progress through the 21st. Where the internet connected us with each other, LLMs take that notion a step further and connect us with AI. This begs the question: what will shape the 22nd century, and what will we be connecting with then? Given these developments, it is important that everyone stays abreast of how LLMs work and how they are evolving, as they can and do shape how we work and interact daily. New frontiers bring new challenges as well as opportunities, and all of this requires continuous dialogue, openness to change, and a willingness to adapt.


Rosie Dutt

Rosie Dutt, Ph.D. is an incoming Assistant Instructional Professor at the University of Chicago. Her teaching focuses on topics at the intersection of computer science, engineering, neuroscience, data science, and psychology. Additionally, Rosie is passionate about sharing science with the masses, which has led her to assist numerous individuals in starting businesses, and she is regularly invited to discuss how to commercialize science through entrepreneurship. Rosie also has a background in journalism and editing, having written for media outlets in the UK and served as Editor-in-Chief of the Journal of Science Policy and Governance.
