AI is evolving beyond static automation and reactive models. The next wave – agentic AI – introduces autonomous systems capable of making decisions and adapting to dynamic environments without human intervention.
Agentic AI can execute complex, multi-step workflows, but its use poses serious risks to security and data privacy. How can organizations safely address these challenges while still extracting the full value of agentic AI?
In this comprehensive resource, we’ll cover agentic AI and its use cases across different industries like healthcare and finance. We’ll also look at the risks of building and deploying AI agents and ways to address them.
Agentic AI refers to AI-driven autonomous systems that execute complex, multi-step processes with minimal human intervention. Unlike traditional AI models that require explicit prompts, agentic AI operates continuously, adapting to real-time data, making independent decisions, and optimizing workflows based on predefined objectives.
Three core features distinguish agentic AI from traditional AI systems:
Traditional AI models like ChatGPT rely on users prompting large language models (LLMs) to generate novel outputs, but such models are rapidly evolving. Gartner predicts that one-third of interactions with gen AI will use autonomous agents for task completion by 2028. Understanding how agentic AI works is key for organizations to leverage its capabilities effectively.
Agentic AI combines advanced technologies and decision-making algorithms with goal-oriented behavior to operate without human intervention.
These systems follow a four-step process to solve multi-step problems:
Let’s look at how agentic AI differs from other AI technologies.
AI isn’t new. The underlying technologies behind AI—machine learning and data science—have been around for decades. An early example is the nearest neighbor algorithm in 1967, which used pattern recognition to improve route optimization. Agentic AI builds on these early foundational technologies.
Agentic AI is transforming the financial sector. Advanced agents can anaAgentic AI is transforming the financial sector. Advanced agents can analyze market trends and identify potential investment opportunities. They can even create personalized investment plans for individual clients based on their financial goals. AI agents are also being used to combat financial crime, including fraud detection and money laundering.
As agentic AI gets increasingly evaluated and developed inside banking and other financial institutions, difficult questions around data access must be addressed. Will agents have access to customer PII or payments data? How will data be protected and governed for use in training LLMs and agents?
Agentic AI systems can process vast datasets, such as patient histories, clinical notes, and diagnostic imaging. They can help doctors analyze medical records, automate data entry, and streamline decision-making.
Autonomous agents are being used to speak with patients and provide support through empathetic responses. For example, a radiotherapy patient can receive AI-generated messages explaining the treatment and reviewing appointment details. By handling patient-facing tasks, these agents can help reduce administrative burdens.
AI has the potential to have a powerful impact in healthcare, but it also raises privacy concerns around access to and potential leaks of sensitive data.
Watch Now: How GoodRx protects and governs PII for millions of healthcare customers
Companies have relied on basic chatbots to handle customer interactions for years. But these chatbots are often limited in their capabilities. They’re pre-programmed to respond to specific questions or requests (e.g., “What are your business hours?”)
Agentic AI enables more robust customer service capabilities beyond answering basic questions. It can understand customer intents, resolve complex issues, and anticipate customer needs.
For example, if a customer asks, “What’s my order status?” a traditional chatbot may simply provide a tracking number for them to check. But an agentic AI system can go further by automatically escalating the ticket to the logistics team and routing it through a faster shipping partner.
Though agentic AI systems can streamline customer service departments, organizations need to establish escalation paths for customers to speak with a human. A cautionary example is Air Canada’s chatbot, which gave a customer inaccurate information about a bereavement fare, misleading him into buying a full-price ticket. Air Canada was ordered to pay the difference between what the customer paid and the discounted bereavement fare. Incidents like these highlight the importance of maintaining human oversight.
With its ability to handle multi-step workflows, agentic AI can help retailers optimize their operations and deliver better customer experiences through automated inventory management. The capabilities of agentic AI in retail become more evident when looking at real-world cases.
For instance, some major clothing brands are using agentic AI to enhance the customer shopping experience.
Others are implementing agentic AI systems that monitor inventory levels in real time and automatically place orders based on predefined thresholds. AI agents can work with other agents—known as multi-agent systems—to coordinate deliveries and avoid supply chain bottlenecks.
Much like in finance and healthcare, it’s critical to consider the types of data agentic AI may need or get access to in retail use cases. Will the agent require access to payments data or customer PII? How will that access be strictly governed, and how will the customer data be protected and secured?
Read More: How a global eCommerce company protects customer PII
Agentic AI offers powerful capabilities to help organizations improve efficiency. However, their autonomous nature seriously affects how enterprises implement AI and oversight mechanisms. For example, if your organization uses agentic AI to monitor company accounts, how do you ensure it handles sensitive data responsibly?
As businesses grapple with these challenges, the adoption of agentic AI is rapidly accelerating. Gartner predicts that 33% of enterprise applications will use agentic AI by 2028, up from less than 1% in 2024.
As enterprises increasingly deploy agentic AI systems, governance frameworks must evolve to address security, compliance, and operational risks. Plus, with low-code tools like Microsoft’s Copilot Studio for building intelligent agents, it won’t be long before more companies implement agentic AI in their processes—without the necessary expertise to make them secure. Understanding the risks of agentic AI is key to developing safe AI strategies.
Agentic AI systems continuously ingest and process data, often across multiple integrated platforms through APIs. However, this interconnectedness increases the risk of data breaches, unintentional data leaks, unauthorized access, and regulatory non-compliance.
APIs enable agentic AI systems to connect to different systems and perform complex, multi-step workflows. These connections are vulnerable to a range of attacks. PandaBuy, an international e-commerce platform, experienced a significant breach that exposed the data of over 1.3 million customers due to vulnerabilities in its API.
Security researchers demonstrated that Copilot is susceptible to prompt injections—a method of feeding AI systems malicious prompts. Attackers can use this attack method to steal sensitive data and manipulate it to make potentially harmful decisions. In one case, researchers manipulated a Copilot to direct users to a phishing site to steal their credentials.
Mitigation strategies:
Read more: Privacy in the Age of Generative AI
Agentic AI operates with autonomy, which means that, over time, models may develop behaviors that deviate from their intended objectives, especially in environments with dynamic inputs.
For example, a logistics company uses agentic AI to optimize delivery routes. Over time, the system may prioritize cost efficiency over customer service, delaying shipments without factoring in customer preferences.
Without explainable AI (XAI) mechanisms—insights that allow humans to understand an AI system’s outputs—tracing the cause of these decisions becomes difficult.
Mitigation strategies:
With agentic AI making autonomous decisions in regulated industries like finance, healthcare, and insurance, compliance violations and ethical concerns become critical risks. Among these concerns is bias in decision-making.
For example, if an AI agent is trained on data where insurance claims are disproportionately denied, it could perpetuate that bias in future decisions. In fact, insurance providers UnitedHealthcare, Humana, and Cigna faced lawsuits after their AI-driven claims processing systems were found to disproportionately deny claims.
The lack of oversight mechanisms led to systemic discrimination, exposing the providers to legal liability. Two families of deceased beneficiaries of UnitedHealthcare have filed a lawsuit against the company, claiming its algorithm denied medically-necessary care to elderly patients. Another case was brought by a student who claimed the company denied coverage for drugs that were deemed “not medically necessary” despite doctors recommending continued treatment.
Mitigation strategies:
Agentic AI can prove game-changing for organizations that successfully adopt it. However, training these systems on existing data sets raises privacy and compliance risks.
Agentic AI systems rely on LLMs to perform a wide range of natural language processing (NLP) tasks. However, LLMs can’t be easily governed, something Samsung discovered when its employees inadvertently leaked sensitive data while using ChatGPT.
Organizations often need to train models with sensitive data for workflows like analytics. Organizations must correctly establish data governance policies – rules that specify who is allowed to access and use certain types of data. Proper access controls ensure only authorized individuals can access sensitive data (and even then those permissions must be carefully managed).
Data privacy vaults like Skyflow’s use an architectural structure that isolates and secures sensitive data. Our Detect Invoker Role simplifies data governance by allowing Vault Administrators to set and enforce access control levels for sensitive data.
The graphic below shows an example of an LLM Privacy Vault:
With proper data governance, organizations can prevent unauthorized access to sensitive data and mitigate privacy risks of LLM-based AI agents by adhering to the principle of least privilege.
Read more: Generative AI Data Privacy with Skyflow LLM Privacy Vault
Almost two-thirds of respondents in a survey indicated that HITL is critical for responsible AI use. For agentic AI, “human-in-the-loop” (HITL) is essential when AI actions impact compliance, security, or high-value transactions.
Organizations must determine where to place human “checkpoints” to balance efficiency with risk mitigation. Examples include:
For example, an AI agent reviewing loan applications in financial services may approve low-risk applications autonomously but escalate high-risk cases for manual review.
Given the autonomous nature of agentic systems, outputs can stray from desired results. Guardrails are frameworks that ensure agents operate within set boundaries and execute routine tasks correctly. For example, a guardrail could allow agents to process refunds up to a certain amount but require human approval above that amount.
Agentic AI systems can make questionable decisions and produce unexpected outcomes despite their advanced capabilities. Organizations can counter these risks by maintaining audit trails that log all system actions and data used to make decisions. Human reviewers can review these logs and reverse behaviors if necessary.
The rise of agentic AI marks a significant leap forward in artificial intelligence that offers more advanced decision-making capabilities.
However, as companies implement agentic AI into more of their business processes, they need to address risks like data privacy head-on. Without the proper guardrails, organizations risk their AI systems exposing sensitive data
AI-ready data vaults ensure agentic AI systems operate within a zero-trust framework, securing PII while enabling real-time decision-making without exposing raw data.
Watch this video explaining how to build privacy-preserving AI agents and how to architect AI systems that prioritize user privacy.