The Testing Academy

In-depth videos on software testing and test automation. All the hurdles and lessons learned are beautifully documented as blog posts and video tutorials: API testing with Postman, Selenium, performance testing, and manual and automation testing.


The Testing Academy

A developer pushes code at 2pm. AI wrote the code. AI wrote the tests. CI pipeline runs. All tests pass.

15 minutes later, the error rate spikes 5x.

"But all the tests passed."

Yes. And that is exactly the problem.

AI-generated tests pass because they test what the code does. Not what the code was supposed to do. The AI that wrote the code and the AI that wrote the tests share the same blind spots.

If the code misunderstands the business logic, the test will also misunderstand the business logic. Both will agree with each other. Both will be wrong. And the dashboard will be green.

I watched a team last quarter ship a discount stacking bug to production. AI wrote the checkout code. AI wrote the tests. Every test passed. But nobody told the AI that promotional codes cannot be combined with employee discounts. The test did not check for it because the code did not handle it.

All tests passing does not mean all scenarios were tested. It means the AI tested the scenarios it could imagine. Production tested the rest.

This is the one thing AI cannot do for you — know what your business actually needs. The edge case that comes from 3 years of domain knowledge. The rule that exists in a Slack thread from 2023 and nowhere else. The scenario your biggest customer hits every Friday at 5pm.

That is why QA is not dead. That is why ICSR starts with human-written Instructions and Context before AI touches anything.

AI can execute tests at machine speed. Only a human can decide what is worth testing.

PS: What is the most expensive "all tests passed" bug your team has shipped?

#QA #AI #TestAutomation #SDET #SoftwareTesting

16 hours ago | [YT] | 18

The Testing Academy

I found two flags in Playwright MCP that most QA engineers do not know about.
--caps=vision
--caps=devtools


Vision gives your AI agent screenshot-based page understanding. It can see canvas charts, custom-drawn UI, visual layouts — things the accessibility tree completely misses.
DevTools gives your AI agent the full Chrome debugging toolkit. Console errors. Network requests. Performance metrics. Security headers. SSL certificates.
Combine them: --caps=vision,devtools
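
For reference, here is where those flags typically live — a sketch of an MCP client config assuming the standard npx launch of @playwright/mcp (the exact config file and location depend on your MCP client):

```json
{
  "mcpServers": {
    "playwright": {
      "command": "npx",
      "args": ["@playwright/mcp@latest", "--caps=vision,devtools"]
    }
  }
}
```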


Now your AI agent does something no human can do at this speed — sees the visual page AND reads network traffic AND checks console errors AND audits security headers. All in one automated pass.
I have been using snapshot mode (the default) for months. It handles 95% of testing. But the 5% it misses is exactly where production bugs hide. The silent console error on every page load. The API call returning 500 that nobody sees. The chart rendering incorrectly on canvas.
Vision + DevTools catches that 5%.


I am making a full tutorial — setting up each mode, five real QA use cases (silent error detection, slow API detection, visual regression on canvas, security header audit, full pre-release deep dive), and how this connects to the ICSR framework.
What is the hardest bug you have caught that a normal test suite would never find? Drop it below.

1 day ago | [YT] | 21

The Testing Academy

Make sure you know what to use when...

1 week ago | [YT] | 34

The Testing Academy

I spent a year writing CLAUDE.md rules for AI-generated tests.

"Use role-based locators." "Never hardcode credentials." "Run tests before stopping."
Claude followed them. Most of the time.



Then I set up Claude Code hooks — shell commands that fire automatically before every file write, after every edit, and before every task completion.
Now "most of the time" became "every time."



One JSON file. Four hooks. Exit code 2.
That last part is important. Exit code 1 = warning (the action still happens). Exit code 2 = blocked. One developer spent a full week debugging because they used exit 1 instead of exit 2.
The difference between a rule and a gate is one digit.
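
As a sketch, here is what such a gate can look like — the anti-pattern rules and field names are invented for illustration; Claude Code pipes the tool call to the hook as JSON on stdin, and the exit code decides what happens next:

```python
# Hypothetical PreToolUse hook for Claude Code: block QA anti-patterns
# before a test file is written. Exit 2 blocks the action and feeds stderr
# back to the model; exit 1 only warns and lets the write happen anyway.
import json
import re
import sys

ANTI_PATTERNS = {
    r"waitForTimeout\(": "hardcoded wait -- use a web-first assertion",
    r"page\.locator\(['\"]//": "raw XPath -- prefer role-based locators",
    r"password\s*[:=]\s*['\"]\w": "hardcoded credential",
}

def check(content: str) -> list[str]:
    """Return the reasons this content should be blocked (empty = clean)."""
    return [why for pattern, why in ANTI_PATTERNS.items() if re.search(pattern, content)]

def hook_exit_code(raw_payload: str) -> int:
    """Map a tool-call payload to the hook's exit code: 0 = allow, 2 = block."""
    payload = json.loads(raw_payload)
    content = payload.get("tool_input", {}).get("content", "")
    problems = check(content)
    if problems:
        print("; ".join(problems), file=sys.stderr)
        return 2  # the gate: returning 1 here would let the write happen anyway
    return 0

if __name__ == "__main__":
    raw = sys.stdin.read()
    if raw.strip():
        sys.exit(hook_exit_code(raw))
```

Wire it in via the hooks section of settings.json (a PreToolUse entry with a Write|Edit matcher pointing at this script). The payload field names above follow Claude Code's Write tool, but treat them as an assumption to verify against the hooks docs.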



Full tutorial dropping soon on the channel

I will walk through the complete settings.json, the bash scripts for QA anti-pattern detection, and show you how to connect hooks to the 3-file architecture (instructions.yml + rules.yml + testdata.yml) from the 500-test case study.



Drop a comment if you want this as a live coding session or a structured tutorial.

1 week ago | [YT] | 47

The Testing Academy

Something is happening in QA right now that nobody's talking about:

My team used to push 2-3 automated test cases a day.
Last week, one engineer committed 11 in a single day.
We reviewed every one. Approved them all.

Then our automation suite started failing in the nightly run.

When I looked closer, I found three things:

The volume had outpaced our review process. Tests that looked clean enough got approved because nobody had time to dig deeper (even the AI code review approved them, for lack of context).

A chunk of those tests were redundant, covering flows already covered elsewhere. The engineer who committed them didn't even look closely at what the AI generated: he accepted the output and moved on.

And the part that actually stung — the critical flows we needed covered were never covered. He hadn't told the AI what mattered.

So it wrote tests for what was easy to automate. And the rest of the QA team reviewed it and agreed, because in isolation, every test looked reasonable.

Three problems.
All from code we reviewed.
All from code we approved.

Your test code review process was built for human-speed output.
AI doesn't write at human speed anymore.
If you don't give the AI context, it's garbage in, garbage out.

#QA #TestAutomation #AI #SoftwareTesting #SDET

2 weeks ago (edited) | [YT] | 32

The Testing Academy

🚀 200,000 Subscribers Achieved! Wow 😄

When I started The Testing Academy, I never imagined that one day we would become a community of 200,000 testers learning and growing together.

This milestone is not just about numbers.
It represents thousands of learners improving their skills, getting jobs, switching careers, and becoming better engineers.

Every view, every comment, and every message from you has motivated me to keep creating content and helping this community grow.

Thank you for trusting the journey.

And remember — this is just the beginning.

🚀 Next goal: 500K testers community!

📚 Your Mentor, Promode Dutta

3 weeks ago | [YT] | 71

The Testing Academy

The manual vs automation divide in QA is dying.
And most people haven't noticed yet.

Three things I'm seeing that nobody is talking about:

Manual testers are writing code now. Not struggling with it. Not "learning to code" for 6 months. They're pairing with AI tools and shipping full test suites. I've watched 8+ manual testers go from zero automation experience to writing production Playwright tests. No bootcamp. No transition plan. Just AI-assisted learning and determination.

Playwright is the reason. It dropped the barrier low enough that domain knowledge matters more than syntax knowledge. The testers who understand the product are now the ones writing the best tests — because they know what to test. The tooling just handles the how.

But here's the part nobody wants to hear.

The biggest gap left between a manual tester and an automation engineer is debugging. When the AI-generated test breaks, that's when you see who understands the code and who was just generating it.
Everything else has compressed.
If you're a manual tester feeling left behind — you're not.

Your domain knowledge plus AI is a dangerous combination. The barrier dropped. Walk through the door.
If you're an automation engineer feeling comfortable — reality check. Your advantage is shrinking fast. Double down on debugging, architecture, and problem-solving. That's the moat now. Not syntax.
The testing world is being reshuffled.
The question isn't whether you'll adapt.
It's how fast.
#QA #SoftwareTesting #Playwright #AI #TestAutomation

3 weeks ago | [YT] | 23

The Testing Academy

Most QA teams test the API. The UI. The database.
Nobody tests what the AI actually says.
I watched a team ship a RAG chatbot with 94% code coverage.
Three days later it hallucinated a refund policy that didn't exist.
The fix? 20 minutes.
The damage? Weeks.
There's an open-source framework called DeepEval that closes this gap.
It works like Pytest but for LLM outputs.
You define metrics. Set thresholds. Run in CI/CD.
It has metrics for hallucination, RAG faithfulness, agent tool correctness, task completion — even traces your agent's entire execution to score whether it actually did the job.
50+ research-backed metrics. Apache 2.0 license. Free.
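
The pattern DeepEval formalizes is easy to picture. A rough, hand-rolled illustration of it — define a metric, set a threshold, fail the build when output drifts from the retrieval context; nothing here is DeepEval's actual API, and a real framework would use an LLM judge instead of this toy substring check:

```python
# Toy faithfulness metric: fraction of answer sentences that literally
# appear in the retrieval context. Illustrative only -- a real framework
# scores grounding with an LLM judge or NLI model.

def faithfulness_score(answer: str, source_docs: list[str]) -> float:
    sentences = [s.strip() for s in answer.split(".") if s.strip()]
    if not sentences:
        return 1.0
    context = " ".join(source_docs).lower()
    grounded = sum(1 for s in sentences if s.lower() in context)
    return grounded / len(sentences)

def test_refund_answer_is_grounded():
    # The threshold gate that runs in CI/CD, pytest-style.
    context = ["Refunds are available within 30 days of purchase."]
    answer = "Refunds are available within 30 days of purchase."
    assert faithfulness_score(answer, context) >= 0.9
```

A hallucinated lifetime-refund policy scores 0.0 against that context and fails the build, instead of surfacing three days later in production.
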
If your team ships AI features, someone needs to own this.
Full breakdown of every metric and how to set it up — dropping this week.
Comment "DeepEval" if you want me to film it.

4 weeks ago | [YT] | 19

The Testing Academy

🚀 Let's Build 2+ AI Agents LIVE with LangFlow and n8n, Plus a 90-Day Roadmap to Become an AI-Powered QA

🔗 Join Here:
us06web.zoom.us/webinar/register/WN_ZJ3i_OqQSzG0HA…

📅 6 March 2026 (Friday)
⏰ 8:00 PM IST

In this LIVE session, you’ll learn how QA professionals are using AI to build powerful automation systems.

🔥 What you’ll see in the session:

✅ Build 2+ AI Agents LIVE
✅ Use LangFlow to design AI workflows
✅ Automate tasks using n8n
✅ Understand the 90-day roadmap to become an AI-powered QA
✅ Learn how AI can assist in testing, automation, and debugging

This will be a practical session with real demos, not just theory.

⚠️ Limited access for the live webinar.
⏳ Join 10–15 minutes early to secure your spot. 🚀

1 month ago | [YT] | 27

The Testing Academy

Your Selenium locators just became irrelevant.
Not fragile. Not flaky. Irrelevant.
Here's why:
Google just shipped WebMCP in Chrome 146.
It lets websites tell AI agents exactly what they can do.
No screenshots.
No DOM scraping.
No guessing which button is "Submit."
Just structured function calls.
How it works (stupidly simple):

Add toolname and tooldescription to your HTML form
Chrome auto-generates a tool schema
AI agent calls the function directly

That's it. 2 HTML attributes.
Or use the JavaScript API:
→ navigator.modelContext.registerTool()
→ Define name, description, input schema
→ Agent calls execute() with typed params
→ Gets structured JSON back
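
Based purely on the API shape listed above (the spec is early, so treat every name as provisional), a registered tool might look like this — the coupon tool and its cart logic are invented for illustration:

```javascript
// Hypothetical WebMCP tool registration, following the API shape in the
// early spec draft. navigator.modelContext is experimental and may change.
const couponTool = {
  name: "apply_coupon",
  description: "Apply a coupon code to the current cart and return the new total.",
  inputSchema: {
    type: "object",
    properties: {
      code: { type: "string", description: "The coupon code to apply" },
    },
    required: ["code"],
  },
  // The agent calls execute() with typed params and gets structured JSON back.
  async execute({ code }) {
    // In a real page this would call the site's own cart logic.
    const total = code === "SAVE10" ? 90 : 100;
    return { content: [{ type: "text", text: JSON.stringify({ total }) }] };
  },
};

// Feature-detect before registering: only Chrome Canary exposes this today.
if (typeof navigator !== "undefined" && navigator.modelContext) {
  navigator.modelContext.registerTool(couponTool);
}
```

Every such object is the Layer 3 "testable contract": a QA suite can call execute() directly with valid and invalid params, no DOM driving involved.
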
The old way: screenshot → vision model → guess element → click → pray
The new way: discover tool → call function → get response
67% less compute.
98% accuracy.
And here's what nobody is talking about:
This creates a THIRD test layer.
Layer 1: UI (what humans see)
Layer 2: API (what services call)
Layer 3: Agent Tools (what AI calls) ← NEW
Every registerTool() is a new endpoint.
Every tool description is a testable contract.
Every inputSchema needs validation.
QA just got a whole new job.
But honest take — it's early.
Chrome Canary only. Spec has TODOs. No cross-browser yet.
Learn it now. Don't rewrite yet.
The QA engineers who understand WebMCP today will lead tomorrow.
Full breakdown + testing framework + code examples on my channel.
Drop a 🔥 if you want a hands-on tutorial.

1 month ago | [YT] | 39