Course Review - Certified AI Security Expert

Posted on Nov 29, 2025

We’re officially in the era of AI, no matter what field you work in. I’ve been trying to catch up with everything happening in this space, and as a security engineer, I want to learn how to use AI—and more importantly, how to secure it. But one thing kept holding me back: my lack of fundamental knowledge.

I have always believed that before we can secure anything, we must first understand how it works. That mindset has guided my learning journey from day one, helping me appreciate the design and architecture behind systems and, as a developer security advocate, enabling me to communicate risks more clearly and effectively to engineering teams.

I’ve read articles, watched countless videos, and even taken a few free courses, but I still struggled to truly understand core concepts like LLMs, vector databases, RAG, and more.

Most of the resources I found either pointed me to long, dense research papers or dove deep into heavy math. And I’ll be honest—I’m not great at math.

More recently, I started exploring MCP and experimenting with a security tool’s MCP server. While I managed to get it working and could see the results, I still didn’t fully understand how it worked or the technology behind it. I went through tons of content trying to make sense of MCP, but most of it didn’t click—until last month, when I came across the Certified AI Security Expert course by Modern Security.

This is my detailed review of the course for anyone who may be interested in taking it. I’ve shared my experience, key takeaways, and insights to help you decide whether this course aligns with your learning goals.

First Impressions

I went through the course website and browsed the chapters it covers, but I still found myself hesitating. I wasn’t sure if it was the right fit or just another course on the growing list of AI security content. But after watching the course overview video and a few preview videos, and spending more time with the curriculum, something shifted. I could see the structure, the intent, and the depth behind it—and that’s when I decided to enroll.

What really resonated with me was the learning path it follows, one that has always worked for me in security: Fundamentals → Building → Attacking → Defending

Prerequisites

The course materials are built around Python 3, so having some familiarity with Python will definitely be beneficial for anyone interested in taking it.

The course mentions that no prior AI or LLM experience is required—and that’s true. As I mentioned earlier, I didn’t have a strong foundational understanding of AI or LLMs before enrolling.

Course format

One thing I really liked is that it’s completely self-paced. You can move through the modules and labs whenever it fits your schedule, without feeling rushed.

The course is organized into four overarching modules, delivered through 33 lessons accompanied by hands-on labs and projects. Each lesson includes video lectures and code examples that you can follow along with.

The course doesn’t include an exam. Instead, you earn the Certificate of Completion by finishing the video lessons and following along with the code assignments.

Review

The course covers four broad modules:

  • Fundamentals of AI
  • Building security tools using AI
  • Attacking AI technologies
  • Defending AI technologies

Fundamentals of AI

As part of the fundamentals section, the course begins with an introduction to AI and LLMs, followed by how to bring internal data into LLM workflows using embeddings—including creating custom embeddings.

It then delves into the fundamentals of Retrieval-Augmented Generation (RAG), and explores how to store and use embeddings through vector databases.
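For anyone who, like me, struggled to picture what "embeddings plus a vector database" actually buys you, the retrieval step of RAG can be sketched in a few lines. This is my own toy illustration, not the course's code: the `embed` function is a stand-in bag-of-words counter where a real system would call an embedding model, and the plain Python list stands in for a vector database.

```python
import math
import re

def embed(text):
    """Toy embedding: a sparse bag-of-words vector. A real system would
    call an embedding model here instead of counting words."""
    vec = {}
    for token in re.findall(r"[a-z0-9.]+", text.lower()):
        vec[token] = vec.get(token, 0) + 1
    return vec

def cosine(a, b):
    """Cosine similarity between two sparse vectors."""
    dot = sum(v * b.get(t, 0) for t, v in a.items())
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

# Stand-in for a vector database: documents stored with their embeddings.
docs = [
    "Rotate API keys every 90 days",
    "All S3 buckets must block public access",
    "Use TLS 1.2 or higher for all external traffic",
]
index = [(d, embed(d)) for d in docs]

def retrieve(query, k=1):
    """The retrieval step of RAG: rank stored documents by similarity."""
    q = embed(query)
    ranked = sorted(index, key=lambda pair: cosine(q, pair[1]), reverse=True)
    return [d for d, _ in ranked[:k]]

# The retrieved text would then be placed into the LLM prompt as context,
# so the model answers from your internal data rather than from memory.
context = retrieve("how often should we rotate api keys?")[0]
print(context)
```

Once that picture clicked for me, vector databases stopped being a buzzword: they're just this lookup, done efficiently at scale.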

From there, the course introduces the LangChain ecosystem and demonstrates how tools like LangSmith can be used for monitoring and logging LLM workflows.

Building security tools using AI

In the building modules, the course starts by introducing AI agents and the different tools and functions that go into creating them. This part really helped me connect the dots. It then dives deeper into how these pieces actually work, using LangSmith to trace, debug, and understand an agent’s behavior in a way that finally made things click for me.
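The "aha" moment for me was realizing that an agent is, at its core, a loop: the model picks a tool, the runtime executes it, and the result goes back to the model until it can answer. Here is a minimal sketch of that loop in plain Python (my own illustration, not the course's code): `stub_model` is a hard-coded stand-in for an LLM, and `lookup_cve` is a hypothetical tool with toy data.

```python
def lookup_cve(cve_id: str) -> str:
    """A tool the agent can call (toy data, for illustration only)."""
    known = {"CVE-2021-44228": "Log4Shell: RCE in Apache Log4j 2"}
    return known.get(cve_id, "unknown CVE")

TOOLS = {"lookup_cve": lookup_cve}

def stub_model(messages):
    """Stand-in for an LLM: decides whether to call a tool or answer.
    A real agent would send the messages to a model API instead."""
    last = messages[-1]
    if last["role"] == "user":
        return {"tool": "lookup_cve", "args": {"cve_id": "CVE-2021-44228"}}
    return {"answer": f"Summary: {last['content']}"}

def run_agent(question: str) -> str:
    """The agent loop: model decides, runtime executes, repeat."""
    messages = [{"role": "user", "content": question}]
    while True:
        decision = stub_model(messages)
        if "answer" in decision:           # model is done
            return decision["answer"]
        tool = TOOLS[decision["tool"]]     # model asked for a tool
        result = tool(**decision["args"])
        messages.append({"role": "tool", "content": result})

print(run_agent("What is CVE-2021-44228?"))
```

Tracing exactly this kind of loop is what LangSmith made visible for me: each tool decision and result shows up as a step you can inspect.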

The course also dives into MCP servers—how they work, why they matter, and how they fit into the bigger picture—and then walks you through building your own MCP server and actually putting it into action. That hands-on experience was especially valuable for me.
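What finally demystified MCP for me is that, under the hood, it is JSON-RPC 2.0 messages: a client asks a server what tools it has (`tools/list`) and then invokes one (`tools/call`). The toy dispatcher below is my own simplified sketch of that request/response shape, not a real MCP server—real servers use the official SDK and a transport like stdio, and the response fields here are deliberately pared down.

```python
import json

# Hypothetical tool registry for a toy "security scanner" server.
TOOLS = {
    "port_scan": {
        "description": "Scan a host for open ports (stubbed).",
        "handler": lambda args: f"open ports on {args['host']}: 22, 443",
    }
}

def handle(request_json: str) -> str:
    """Dispatch a JSON-RPC request the way an MCP server conceptually does."""
    req = json.loads(request_json)
    if req["method"] == "tools/list":
        result = {"tools": [{"name": n, "description": t["description"]}
                            for n, t in TOOLS.items()]}
    elif req["method"] == "tools/call":
        tool = TOOLS[req["params"]["name"]]
        result = {"content": tool["handler"](req["params"]["arguments"])}
    else:
        result = {"error": "unknown method"}
    return json.dumps({"jsonrpc": "2.0", "id": req["id"], "result": result})

# A client first discovers the tools, then calls one by name.
print(handle('{"jsonrpc": "2.0", "id": 1, "method": "tools/list"}'))
print(handle(json.dumps({
    "jsonrpc": "2.0", "id": 2, "method": "tools/call",
    "params": {"name": "port_scan", "arguments": {"host": "example.com"}},
})))
```

Seeing the protocol reduced to "list tools, call tool" is what made the MCP server I had been experimenting with stop feeling like magic.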

The course then moves into real, security-focused projects like building a web security scanner agent and an AI-powered threat modeling tool. Seeing everything come together in practical use cases made the learning feel directly relevant to my day-to-day work.

Attacking AI technologies

As part of the attacking modules, the course explores different attack methodologies across a variety of AI tech stacks, including LLMs, model architectures, MCP servers, and other components commonly used in modern AI systems. It doesn’t just list the attacks—it walks through how they actually work, why they’re effective, and what weaknesses they exploit.

Each attack is broken down clearly, along with the risks it introduces and the potential impact on applications that rely on these AI components. This practical approach made it much easier for me to understand not just what the risks are, but why they exist.

Defending AI technologies

Finally, the course wraps up with defense strategies for securing AI technologies. It covers best practices, mitigation techniques, and how to put the right security controls in place to protect AI systems from real-world threats. What I really appreciated was the focus on secure coding and designing with security in mind from the start. Instead of just showing how to fix issues after the fact, it helped me understand how to build AI applications in a way that avoids those risks in the first place.

The course also introduces several open-source tools and frameworks for managing AI security, with practical demos that make it easy to see how everything ties together.

Closing Note

Overall, this course turned out to be exactly what I needed at this stage of my learning journey. I came in with gaps in my understanding of AI fundamentals and often felt overwhelmed by overly complex resources online. This course helped simplify those concepts without oversimplifying the technology behind them. The hands-on labs, real security use cases, and clear explanations finally connected the dots for me—both as a security engineer and as someone trying to understand how AI systems really work.

If you’re looking to build a solid foundation in AI, learn how to break it, and understand how to defend it, I genuinely think this course is worth exploring. It has helped me approach AI security with more confidence, curiosity, and clarity, and I’m excited to continue applying what I’ve learned in my day-to-day.

Feel free to reach out to me on LinkedIn if you have any questions or want to discuss anything about the course.