🔤 AI Glossary
AI Agent:
In a broad definition, AI agents can be described as systems with reasoning capabilities, memory, and the necessary tools to execute tasks. For more, see AI agents and agentic workflows.
Artificial Intelligence (AI):
Artificial intelligence, or AI, is a broad term that refers to any type of computer software that engages in human-like activities. The primary objective of AI is to develop self-reliant machines that can think and act like humans. For more, see AI and GenAI.
Chatbot:
A chatbot is a software application designed to conduct text or voice conversations.
Data:
Data is the fundamental bedrock of AI, both for training foundation models themselves and for fine-tuning those models to perform specific tasks.
Deep Learning:
A subfield of machine learning that utilizes artificial neural networks with multiple layers to learn complex patterns from large amounts of data.
Embeddings:
An embedding is a vector representation of a piece of data that is meant to preserve aspects of its content and/or its meaning. Chunks of data that are similar will tend to have embeddings that are closer together than unrelated data, which makes them useful for search.
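The "closer together" idea above can be made concrete with cosine similarity, the standard way to compare embeddings in search. The three-dimensional vectors below are toy values invented for illustration; real models produce embeddings with hundreds or thousands of dimensions.

```python
import math

def cosine_similarity(a, b):
    # Cosine similarity: near 1.0 means similar direction (related meaning),
    # near 0.0 means unrelated.
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Toy embeddings: "cat" and "kitten" point in a similar direction, "car" does not.
cat = [0.9, 0.1, 0.0]
kitten = [0.85, 0.15, 0.05]
car = [0.0, 0.2, 0.95]

print(cosine_similarity(cat, kitten))  # high: related meanings
print(cosine_similarity(cat, car))     # low: unrelated
```

A search system embeds the query the same way and returns the stored chunks whose embeddings score highest against it.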
Fine-tuning:
Fine-tuning is the process by which foundation models are adapted to specific downstream tasks, or enriched with specific knowledge, by further training on a particular dataset.
Foundation models:
Foundation models are complex machine learning systems trained on vast quantities of data (text, images, audio, or a mix of data types) on a massive scale.
Generative Artificial Intelligence (GenAI):
Generative AI refers to a subset of artificial intelligence that focuses on creating new and original content, such as images, music, or text, using algorithms and machine learning models. It enables machines to generate creative outputs that mimic human-like patterns and styles.
Hallucination:
Hallucination occurs when a large language model generates output that is not grounded in its input or training data, presenting fabricated information as fact. It often stems from overfitting, bias, or the model's lack of context awareness, and can be reduced, though not eliminated, through training.
Large Language Models (LLMs):
An LLM is a type of AI model that uses deep learning techniques and vast amounts of data to perform a variety of Natural Language Processing (NLP) tasks, such as generating and classifying text, answering questions conversationally, summarizing content, or predicting new content.
Natural Language Generation (NLG):
The process of generating human-like language or text using AI algorithms, allowing machines to produce coherent and meaningful written content.
Natural Language Processing (NLP):
Natural language processing in AI is the ability of a computer system to understand, interpret, and generate human language, enabling interactions between humans and machines through speech or text.
Natural Language Understanding (NLU):
The AI's capability to comprehend and interpret human language in a meaningful way, allowing machines to extract intent, context, and relevant information from textual or spoken inputs.
Machine learning:
A subfield of AI where machines can learn from data and perform certain tasks and functions without being explicitly programmed to do so.
Reinforcement learning:
A type of machine learning in which the algorithm learns by interacting with an environment and working toward a goal. During training, each action is evaluated based on the reward it contributes toward the goal.
Retrieval Augmented Generation (RAG):
RAG is a method that enhances the capabilities of LLMs by integrating them with a dynamic retrieval system. This system feeds the LLM updated, relevant information from external data sources at query time. For more, see Retrieval Augmented Generation (RAG).
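The retrieve-then-generate flow can be sketched in a few lines. This toy version scores documents by word overlap and stops at building the grounded prompt; a real system would use embedding similarity for retrieval and then pass the prompt to an LLM, both of which are omitted here.

```python
# External data source: documents the base LLM was never trained on.
documents = [
    "The refund policy allows returns within 30 days of purchase.",
    "Support is available by email from 9am to 5pm on weekdays.",
]

def retrieve(query, docs):
    # Score each document by how many words it shares with the query
    # (a stand-in for embedding-based similarity search).
    words = set(query.lower().split())
    return max(docs, key=lambda d: len(words & set(d.lower().split())))

def build_prompt(query, docs):
    context = retrieve(query, docs)
    # The retrieved context grounds the model's answer in external data,
    # which is what reduces hallucination and keeps answers current.
    return f"Answer using this context:\n{context}\n\nQuestion: {query}"

prompt = build_prompt("What is the refund policy?", documents)
```

Because retrieval happens at query time, updating the document store updates the system's knowledge without retraining or fine-tuning the model.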
Structured data:
Data that has a predefined data model or is organized in a predefined way.
Unstructured data:
Data that does not have a predefined data model or is not organized in a predefined way.
Workflow automation:
Workflow automation is the process of using software or tools to automate manual tasks or workflows in a business environment. When combined with AI, such automation can improve over time, reducing human intervention, minimizing errors, and enhancing efficiency and productivity.