Artificial intelligence: The capacity of machines to perceive, synthesize, and infer information, as opposed to the intelligence displayed by humans and other animals. Example tasks include speech recognition, computer vision, translation between natural languages, and other mappings of inputs to outputs.
Artificial general intelligence: The ability of an intelligent agent to understand or learn any intellectual task that human beings or other animals can.
Fragility: The lack of robustness in natural language processing models, which can be highly sensitive to small changes in the semantic or syntactic layout of the input prompt.
Generative artificial intelligence: Artificial intelligence that can produce novel data, such as text, audio, and images.
Hallucination: A confident response by an AI that does not seem to be justified by its training data.
Intelligent agent: Anything that perceives its environment, takes actions autonomously in order to achieve goals, and may improve its performance by learning or acquiring knowledge.
Large language model: A computer program for natural language processing that uses deep learning and neural networks trained on large amounts of text.
Parameter: A variable in an AI system whose value is adjusted during training to establish how input data gets transformed into the desired output; for example, the connection weights in an artificial neural network. (ourworldindata.org)
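A minimal sketch of what "adjusted during training" means, using a toy one-weight linear model rather than a real neural network; the data, learning rate, and iteration count are made-up illustrative values:

```python
# The parameters here are the weight w and bias b of a linear model
# y = w * x + b. Training adjusts their values so that inputs are
# transformed into the desired outputs (the data below follows y = 2x).
xs = [1.0, 2.0, 3.0, 4.0]
ys = [2.0, 4.0, 6.0, 8.0]

w, b = 0.0, 0.0          # parameters, initialized arbitrarily
lr = 0.02                # learning rate (an assumed training setting)
for _ in range(2000):    # the training loop adjusts the parameter values
    for x, y in zip(xs, ys):
        err = (w * x + b) - y
        w -= lr * err * x   # nudge each parameter to reduce the error
        b -= lr * err

print(w, b)  # the parameters converge near w = 2, b = 0
```

A real language model works the same way in principle, but with billions of such parameters updated by backpropagation.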
Tokenization: The process of breaking down text into smaller pieces called tokens. (Towards Data Science)
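A minimal sketch of tokenization; real LLM tokenizers use learned subword schemes such as byte-pair encoding, so the simple word/punctuation split below only illustrates the idea:

```python
import re

def tokenize(text):
    # Split the text into word tokens and punctuation tokens;
    # lowercasing is an assumed normalization choice.
    return re.findall(r"\w+|[^\w\s]", text.lower())

print(tokenize("Tokenization breaks text into pieces!"))
# ['tokenization', 'breaks', 'text', 'into', 'pieces', '!']
```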
Transformer: A deep learning model that adopts the mechanism of self-attention, differentially weighting the significance of each part of the input data. (Wikipedia)
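A minimal sketch of that self-attention weighting: each position scores every position via dot products, turns the scores into weights with a softmax, and takes a weighted average. The 2-dimensional token vectors are toy values; real transformers add learned query/key/value projections, multiple heads, and far larger dimensions:

```python
import math

def softmax(scores):
    exps = [math.exp(s) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def self_attention(xs):
    d = len(xs[0])
    out = []
    for q in xs:  # each position attends to every position, including itself
        scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d) for k in xs]
        weights = softmax(scores)  # how significant each part of the input is
        out.append([sum(w * v[i] for w, v in zip(weights, xs)) for i in range(d)])
    return out

tokens = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]  # made-up toy embeddings
print(self_attention(tokens))
```

Each output vector is a convex combination of the input vectors, with the mixing weights computed from the input itself.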
Word vectorization: A natural language processing method that maps words or phrases from a vocabulary to corresponding vectors of real numbers, which are used for prediction, measuring word similarity, and capturing semantics. (Towards Data Science)
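A minimal sketch of word similarity from vectors: geometric closeness (cosine similarity) stands in for semantic relatedness. The 3-dimensional vectors below are made-up toy values, not trained embeddings:

```python
import math

# Hypothetical word vectors for illustration only.
vectors = {
    "king":  [0.9, 0.8, 0.1],
    "queen": [0.9, 0.7, 0.2],
    "apple": [0.1, 0.2, 0.9],
}

def cosine(a, b):
    # Cosine similarity: dot product divided by the vector magnitudes.
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

# Related words end up with a higher similarity than unrelated ones.
print(cosine(vectors["king"], vectors["queen"]) >
      cosine(vectors["king"], vectors["apple"]))  # True
```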