A large language model (LLM) is a type of generative artificial intelligence (GAI) algorithm that uses deep learning techniques and massive data sets to summarize, predict, and generate content. GAI does not create new ideas, 'think,' or understand, but it can be useful for locating or summarizing existing information.
AI is a good mimic and detector of patterns, so it can do a great job finishing sentences like "The first president to have their State of the Union speech televised was...". It doesn't "think." So, although GAI can sound confident while it's predicting an answer, it may provide wrong answers. If it trained on a website claiming that Thomas Jefferson gave a televised speech in 1801, the LLM might offer that as the answer, even though television did not exist in 1801. An LLM may also 'invent' articles and attribute them to real authors and actual journals. It can be dangerous to trust its answers unless you already have some knowledge of the topic.
GAI training data is selected by humans and can reflect human biases and gaps in knowledge. This is one reason LLMs can reinforce stereotypes. For example, if all the training data depicted doctors as White men and nurses as Black women, the LLM might answer queries as if all doctors and nurses fit those patterns, regardless of the facts, experience, or logic that would be obvious to a human.
Generative AI is not intelligent. It doesn't 'understand' your question or its answer. It was created to provide persuasive guesses, not facts. However, if you provide background for your AI tool, it will be better able to guess what is useful to you.
Here are some tips for educators.
Questions to consider before using generative AI.
One way to write a prompt is to provide the LLM with this information: