AI Security Glossary
Clear definitions of AI and machine learning security terms — from prompt injection and shadow AI to deepfakes, RAG, and AI governance.
Adversarial Machine Learning
The study and practice of attacks against machine learning systems — including techniques to fool, manipulate, or extract information from AI models.
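A common adversarial ML technique is the evasion attack, where small input perturbations flip a model's decision. Below is a minimal, hypothetical sketch in pure Python against a toy linear classifier (the weights, features, and epsilon are illustrative, not from any real system); the perturbation direction follows the gradient sign, the core idea behind attacks such as FGSM.

```python
def score(weights, bias, x):
    """Linear model: positive score means class 'malicious'."""
    return sum(w * xi for w, xi in zip(weights, x)) + bias

def adversarial_perturb(weights, x, eps):
    """Shift each feature by eps against the sign of its weight
    (for a linear model, the weight is the gradient of the score)."""
    return [xi - eps * (1 if w > 0 else -1) for w, xi in zip(weights, x)]

# Illustrative toy model and input (hypothetical values)
weights, bias = [2.0, -1.0, 0.5], -0.5
x = [1.0, 0.2, 0.4]

x_adv = adversarial_perturb(weights, x, eps=0.6)

print(score(weights, bias, x) > 0)       # original input is flagged
print(score(weights, bias, x_adv) > 0)   # perturbed input evades the model
```

The same gradient-sign idea scales to deep networks, where the gradient is computed by backpropagation rather than read directly from the weights.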
AI Copilot Security
Security considerations for AI-powered copilot tools — coding assistants, productivity AI, and enterprise AI assistants — that are integrated into sensitive business workflows.
AI Governance
The policies, processes, and accountability structures an organisation uses to ensure AI is developed and used responsibly, safely, and in compliance with applicable laws.
AI Hallucination
When an AI model generates confident, plausible-sounding output that is factually incorrect, fabricated, or not grounded in reality.
AI Red Teaming
The practice of systematically testing AI systems by attempting to make them behave harmfully, unsafely, or in ways that circumvent their intended guidelines.
AI Risk Management
The process of identifying, assessing, and mitigating risks associated with developing or deploying AI systems — from technical failures to legal exposure and ethical harms.
Data Poisoning
An attack that corrupts an AI model's training data to manipulate its behaviour — causing misclassifications, backdoors, or degraded performance.
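One simple form of data poisoning is label flipping. The sketch below is a hypothetical illustration in pure Python (the data points and classifier are invented for the example): flipping the labels of a few training samples shifts a class centroid enough that a nearest-centroid classifier misclassifies a chosen target point.

```python
def centroid(points):
    """Mean of a list of equal-length vectors."""
    n = len(points)
    return [sum(p[i] for p in points) / n for i in range(len(points[0]))]

def classify(data, x):
    """data: {label: [points]}; return the label of the nearest centroid."""
    def dist2(a, b):
        return sum((ai - bi) ** 2 for ai, bi in zip(a, b))
    return min(data, key=lambda lbl: dist2(centroid(data[lbl]), x))

# Illustrative clean training set (hypothetical values)
clean = {
    "benign":    [[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]],
    "malicious": [[4.0, 4.0], [5.0, 4.0], [4.0, 5.0]],
}
target = [3.0, 3.0]
print(classify(clean, target))     # correctly classified as "malicious"

# Poisoning: the attacker relabels two malicious samples as "benign",
# dragging the benign centroid toward the target point.
poisoned = {
    "benign":    clean["benign"] + [[4.0, 4.0], [5.0, 4.0]],
    "malicious": [[4.0, 5.0]],
}
print(classify(poisoned, target))  # now misclassified as "benign"
```

Real poisoning attacks work the same way at scale: corrupt a small fraction of training data so the learned decision boundary moves in the attacker's favour, or embed a trigger pattern that acts as a backdoor.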
Deepfakes
AI-generated synthetic media — video, audio, or images — that realistically portray a person doing or saying something they never did, and are increasingly used in fraud and social engineering.