🧠 AI agents and agentic workflows
Last updated
While there isn’t a widely accepted definition, there have been attempts to describe the framework of AI agents. Broadly, an AI agent can be described as a system with reasoning capabilities, memory, and the tools needed to execute tasks.
The fundamentals of AI agents are not entirely new: they build on the concept of structured prompt chaining, which traditionally relies on handcrafted rules. Modern AI agents innovate by interacting with the user through a system-defined persona, using an LLM to reason about a specific query, and creating a dynamic plan that they execute with the help of different tools or applications.
Given a user request, the AI agent devises a plan by answering questions such as "Which tools should be used?" and "In what order should they be used?". It determines when (or whether) to conduct research with the selected tools, formulates one or more search queries, reviews the results or seeks clarification, and decides when it has enough information to provide a relevant answer.
The main idea of AI agents is to use an LLM to choose a sequence of actions that answers a specific query.
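This action-selection loop can be sketched as follows. This is a minimal, illustrative sketch: `fake_llm` stands in for a real model call, and `search_web` is a hypothetical tool, not a real API.

```python
# Minimal sketch of an agent loop: the LLM repeatedly chooses the next
# action (a tool call or a final answer) until the query is resolved.

def search_web(query: str) -> str:
    """Hypothetical tool: pretend to search the web."""
    return f"results for '{query}'"

TOOLS = {"search_web": search_web}

def fake_llm(query: str, history: list[str]) -> dict:
    """Stub that mimics an LLM's action choice; a real agent would
    prompt the model and parse its structured output."""
    if not history:
        return {"action": "search_web", "input": query}
    return {"action": "final_answer", "input": f"Answer based on {history[-1]}"}

def run_agent(query: str, max_steps: int = 5) -> str:
    history: list[str] = []
    for _ in range(max_steps):
        decision = fake_llm(query, history)
        if decision["action"] == "final_answer":
            return decision["input"]
        tool = TOOLS[decision["action"]]       # look up the chosen tool
        observation = tool(decision["input"])  # execute it
        history.append(observation)            # feed the result back to the LLM

    return "Stopped after reaching max_steps"

print(run_agent("What are AI agents?"))
```

The `max_steps` cap is a common safeguard: since the LLM decides each next action dynamically, a bound prevents the loop from running indefinitely.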
AI agents streamline the traditional implementation of RAG systems, where each query is handled separately.
The reasoning skills of AI agents come from different components:
Core agent: the control component that defines the core logic and behavioral characteristics of an AI agent. It is generally profiled, or assigned a persona, through various methods.
Planning: Complex problems often require a nuanced approach. This complexity can be managed through task and question decomposition, where the agent breaks a complex task down into smaller, manageable subtasks as part of a multi-step plan to achieve a goal.
Tools (or Actions): workflows that agents can use to execute tasks. For instance, an agent can use a RAG pipeline to generate context-aware answers, an API to search for information on the internet, a code interpreter to solve tasks programmatically, and so on.
Reflection: The agent performs what is generally called "self-criticism" or "self-reflection" on its previous actions, learns from its mistakes, and refines its approach where necessary to improve the quality of the output.
Memory: The memory module emulates human memory processes and enables agents to make more consistent, reasonable, and effective decisions. There are different types of memory modules, notably short-term and long-term memory: short-term memory lets the agent remember details from previous steps, maintaining coherence and context across outputs, while long-term memory lets it retain and recall information over extended periods.
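The two memory types above can be sketched with a small class. The class name, sizes, and retrieval logic here are assumptions for illustration; production agents typically back long-term memory with a vector database and embedding-based retrieval rather than keyword matching.

```python
from collections import deque

class AgentMemory:
    """Illustrative sketch of short-term vs. long-term agent memory."""

    def __init__(self, short_term_size: int = 3):
        # Short-term memory: a bounded buffer of the most recent steps,
        # used to keep outputs coherent within the current task.
        self.short_term = deque(maxlen=short_term_size)
        # Long-term memory: an unbounded store for extended retention.
        self.long_term: list[str] = []

    def remember(self, fact: str) -> None:
        self.short_term.append(fact)  # oldest entry is evicted when full
        self.long_term.append(fact)

    def recall(self, keyword: str) -> list[str]:
        # Naive keyword lookup standing in for semantic search.
        return [f for f in self.long_term if keyword.lower() in f.lower()]

memory = AgentMemory(short_term_size=2)
for step in ["User asked about pricing", "Searched pricing page", "Found plan costs"]:
    memory.remember(step)

print(list(memory.short_term))   # only the 2 most recent steps remain
print(memory.recall("pricing"))  # long-term recall by keyword
```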
These are the typical components of a hypothetical AI agent, but there are other important considerations and many active developments, notably multi-agent collaboration: several AI agents work together under different roles (product manager, designer, customer service, etc.), splitting up tasks and debating to reach better solutions than a single agent would, even if the outcome can be less predictable in such a setup.
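A multi-agent hand-off can be sketched as role-playing agents exchanging messages in turns. The class and function names are hypothetical, and `respond` is a stub for a real LLM call with the role as a persona.

```python
class RoleAgent:
    """One collaborator in a multi-agent workflow, identified by its role."""

    def __init__(self, role: str):
        self.role = role

    def respond(self, message: str) -> str:
        # A real implementation would prompt an LLM with this role as a
        # persona; here we just tag the message to show the hand-off.
        return f"[{self.role}] feedback on: {message}"

def collaborate(task: str, agents: list[RoleAgent], rounds: int = 1) -> list[str]:
    """Each agent responds to the latest message, round-robin style."""
    transcript = [task]
    for _ in range(rounds):
        for agent in agents:
            transcript.append(agent.respond(transcript[-1]))
    return transcript

team = [RoleAgent("product manager"), RoleAgent("designer")]
for line in collaborate("Design a signup flow", team):
    print(line)
```

Each agent only sees the previous message here; richer designs share the full transcript or let agents vote on a final answer, which is where both the quality gains and the unpredictability come from.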
While AI agents have been recognized as intelligent entities capable of accomplishing specific tasks, the field is still at an early stage, and several significant challenges need to be addressed in their development, some of which are shared with "traditional" AI applications, while others are more specific to agents.
Among these challenges are prompt robustness and reliability, since AI agents rely on an entire prompt framework that can fail in unpredictable ways, and hallucination, since agents interact with external components that can introduce conflicting information. Others relate to efficiency: the number of requests an agent makes affects both latency and cost (calling for cost-prediction), and restricted context windows can limit mechanisms like self-reflection. Long-term planning and task decomposition are also difficult; agents can struggle to define and adjust plans, especially compared to humans. Finally, it is important to remember that AI agents are only as good as their access to the necessary tools and applications.
In real-world scenarios, tasks and workflows are generally complex and variable, so addressing them with a one-step planning process is often not enough.
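The difference between one-step and iterative planning can be sketched as a loop that revises the remaining plan after observing each result. All function names and the failure condition here are invented for illustration; a real agent would ask an LLM to decompose the task and to replan.

```python
def make_plan(task: str) -> list[str]:
    # Stub planner; a real agent would prompt an LLM to decompose the task.
    return ["gather requirements", "draft solution", "review"]

def execute(step: str) -> str:
    # Pretend execution; "draft solution" is made to fail to trigger replanning.
    return "failed" if step == "draft solution" else "ok"

def replan(remaining: list[str], failed_step: str) -> list[str]:
    # On failure, split the failed step into smaller sub-steps and retry.
    return [f"{failed_step}: outline", f"{failed_step}: fill in"] + remaining

def run(task: str) -> list[tuple[str, str]]:
    plan = make_plan(task)
    log = []
    while plan:
        step = plan.pop(0)
        result = execute(step)
        if result == "failed":
            plan = replan(plan, step)  # revise the plan instead of giving up
            continue
        log.append((step, result))
    return log

for step, result in run("build a feature"):
    print(step, result)
```

A one-step planner would have stopped at the failed step; the iterative loop absorbs the failure by decomposing further, which is closer to how real-world workflows unfold.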
Agent designs are becoming increasingly refined and production-ready, as both proprietary and open-source LLMs are now reaching a performance level that makes them suitable for powering agentic workflows capable of making decisions in complex, real-world scenarios.
The introduction of agentic workflows in enterprises sets a new standard for performance, accuracy, and productivity.