Your daily dose of #DotNet videos. We cover a wide variety of topics like #AspNetCore, #Blazor, Entity Framework Core, software architecture, and much more. You'll also find regular live coding sessions here.


Codewrinkles

Stop pushing back on everything. I learned this leading a team through chaos!

AI-powered features. Fast deadlines. Requirements that changed mid-week. The company wanted startup speed. The team wanted enterprise clarity.

Classic tension. The team started to default to "no":
→ "That's not possible"
→ "That will take too long"
→ "This isn't well-defined enough"

I understood the frustration. But I saw the damage as well. Stakeholders stopped coming to us with ideas. We became "the team that blocks everything."

So I reframed it for the team: delivery isn't a battle. It's a negotiation. And in any negotiation, you need leverage.

Here's how you get it: you can't push back on everything and expect anyone to listen when it actually matters. Pick your battles. The UI change that feels wasteful? Say yes. The architectural shortcut that creates real technical debt? That's where you fight!

We started saying yes to the 80% that didn't matter. We even showed enthusiasm. We delivered fast. We built trust.

Then when we pushed back on the 20% that actually mattered, the decisions that would create real problems, people listened. Within months, we went from "the blocking team" to one of the most respected teams in the division.

The lesson:
Constant pushback doesn't protect your team. It destroys your credibility. Strategic agreement creates the leverage to win the fights that matter.

Think about your last 5 pushbacks. How many actually mattered?

1 hour ago | [YT] | 0

Codewrinkles

The video on the production-ready AuthN and AuthZ setup I have recently worked on will be published on Wednesday, Nov 26th. I think it will be 🔥

3 days ago | [YT] | 19

Codewrinkles

💡Technical decision making isn’t about knowing answers. It’s about understanding trade-offs.

People often get frustrated when architects or other technical decision makers reply with “it depends.” But the truth is: it really does depend.

You can't just drop REST and go the gRPC way because it's “always better.” You can’t Google whether microservices are the “right” style. And you definitely can’t ask ChatGPT whether an auction system should use queues or a topic. Technical decision making isn’t about picking the “best practice.”

It’s about understanding the context: deployment environment, team skills, budgets, business drivers, constraints, timelines, and even company culture.

Let's walk through an example: a Bid Producer service needs to send bid data to three other services. You could use a topic or queues. Topics give you extensibility and decoupling. Add a new service? Just subscribe. No changes to the producer. Queues give you isolation, security boundaries, heterogeneous contracts, and better control over scaling.
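To make the contrast concrete, here's a minimal in-memory sketch in plain Python. No real message broker involved; the `Topic`, `Queue`, and service names are purely illustrative:

```python
class Topic:
    """Topic style: producer publishes once; subscribers can attach later."""
    def __init__(self):
        self.subscribers = []

    def subscribe(self, handler):
        self.subscribers.append(handler)

    def publish(self, message):
        for handler in self.subscribers:  # fan-out is the topic's job
            handler(message)


class Queue:
    """Queue style: one dedicated channel per consumer, with its own contract."""
    def __init__(self):
        self.messages = []

    def send(self, message):
        self.messages.append(message)


# --- Topic: the producer doesn't know (or care) who listens ---
bids = Topic()
bids.subscribe(lambda bid: print(f"auditing {bid}"))
bids.subscribe(lambda bid: print(f"notifying {bid}"))
bids.publish({"item": "painting", "amount": 500})
# Adding a fourth service is just another subscribe(); the producer never changes.

# --- Queues: the producer targets each consumer explicitly ---
audit_q, notify_q = Queue(), Queue()
bid = {"item": "painting", "amount": 500}
audit_q.send(bid)                     # full payload for the audit service
notify_q.send({"item": bid["item"]})  # trimmed, heterogeneous contract
```

Notice the trade-off in miniature: the topic version centralizes fan-out but gives every subscriber the same payload, while the queue version lets each consumer get its own contract at the cost of touching the producer for every new consumer.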

Which one is “better”? Well… it depends.

❓Do you value extensibility or security more?

❓Are heterogeneous contracts a requirement or not?

❓Do you need independent auto-scaling?

❓What does the organization care about most?


Technical decision making is about the critical thinking skills that allow you to put everything into a decision matrix and come up with the right answer for that specific context. If you want to become a technical decision maker, the key is to start practicing this skill every single day.

Practice it on every decision in the software you create, even the smallest ones, like choosing between an if/else and a ternary. With enough practice, you’ll eventually master it, and you’ll notice that people around you start asking for your advice more and more.
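A decision matrix for the bid fan-out question above can be as simple as weighted scores. The criteria, weights, and 1-to-5 scores below are made up for illustration; the point is that the weights encode what a specific organization cares about:

```python
# Hypothetical weighted decision matrix: topic vs. queues for bid fan-out.
# Weights reflect THIS context's priorities; scores run 1 (weak) to 5 (strong).
weights = {"extensibility": 0.4, "security": 0.3, "independent_scaling": 0.3}

scores = {
    "topic":  {"extensibility": 5, "security": 2, "independent_scaling": 3},
    "queues": {"extensibility": 2, "security": 5, "independent_scaling": 5},
}

def total(option):
    """Weighted sum of an option's scores across all criteria."""
    return sum(weights[c] * scores[option][c] for c in weights)

best = max(scores, key=total)
print(f"topic={total('topic'):.1f} queues={total('queues'):.1f} -> {best}")
```

Change the weights (i.e., the context) and the “right” answer flips. That's “it depends” made explicit.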

#SoftwareArchitecture #TechnicalLeadership #SystemDesign #EngineeringExcellence #TechStrategy

5 days ago (edited) | [YT] | 10

Codewrinkles

What would you like the topic of the video on Monday to be?

1 week ago | [YT] | 3

Codewrinkles

It's been almost 4 weeks since I started my newsletter on Substack.
So far, I have sent out 4 newsletters in which I:

1️⃣Talked about the technical foundation we're building at Atherio, focusing on quality, security and reliability from the ground up
2️⃣Described my AI-assisted engineering framework consisting of 4 pillars: context, scope, focus and guardrails
3️⃣Explained why my CTO journey began in Excel instead of an IDE and why this actually matters a lot in software architecture and technical decision making
4️⃣Told the story of an AI feature that didn't need AI, but just a smarter query

Next Saturday, a new newsletter will land in the inboxes of over 300 people. Based on the feedback I've received, the content is perceived as high quality. That's why I'm fairly sure YOU don't want to miss any of the future newsletters.

I'm sure you know what to do. Here's the SUBSCRIBE link: architecttocto.substack.com/subscribe

🙏If you are one of the 304 recipients of my last newsletter and you find it useful, it would be a tremendous help if you could LIKE and REPOST this post. Thank you!

1 week ago (edited) | [YT] | 10

Codewrinkles

‼️You’re probably using only 20% of your coding agent’s real power. Here’s the decision framework I wish I had on day one. 👇

Most developers still use their AI coding agent like a smarter autocomplete: prompt → code → repeat.

But modern agents can automate workflows, enforce architecture, standardize procedures, and integrate directly with your tools if you use the right capability for the right type of task.

Here’s the decision tree I now apply with Claude Code:

1️⃣ Repeated tasks → Use Slash Commands
Any action you run constantly, like tests, linting, builds, formatting, or PR checks, should become a slash command. They execute repeatable work instantly, prevent mistakes, and eliminate the “did I forget something?” problem.

2️⃣ Multi-step procedures → Use Skills
Whenever a workflow must be performed the same way every time (e.g., creating components, scaffolding features, setting up modules), define a skill. It’s a step-by-step sequence your agent follows with perfect consistency, giving you standardized outputs across your entire codebase.

3️⃣ Deep analysis or architectural enforcement → Use Specialized Agents
For architecture rules, code quality evaluation, domain-specific validation, or expert-level reasoning, create focused agents. They behave like embedded specialists who enforce your standards, catch violations early, and guide less-experienced developers toward correct patterns.

4️⃣ External tool checks → Use MCPs
If you often jump to dashboards, cloud portals, quality reports, or monitoring tools, connect them to your agent through MCPs. This gives the agent direct access to your systems, removes context switching, and lets it provide recommendations based on real, live data.

5️⃣ Complex workflows → Combine them
Your most powerful automations appear when commands, skills, agents, and MCPs work together. A command can orchestrate a workflow, trigger an expert agent, and use external data to produce a complete, end-to-end analysis without manual intervention.


Why this matters:

Used well, these tools let you institutionalize team knowledge. Architecture rules, quality gates, scaffolding patterns, and infrastructure insights stop living in people’s heads and instead become reusable, automated, enforceable assets your whole team benefits from.

If you want folder structures or practical examples for commands/skills/agents/MCPs, drop a comment. Happy to share.


#AIEngineering #AIForDevelopers #DeveloperExperience #AIAutomation #CodingAgents #SoftwareArchitecture #DevTools #EngineeringExcellence #SoftwareDevelopment

1 week ago (edited) | [YT] | 14

Codewrinkles

🌶️If AI adoption doesn’t change your processes, or even your entire operating model, then you’re not doing AI right.

McKinsey’s State of AI 2025 report has one number that completely reframes the conversation:

👉 Only 6% of companies qualify as “AI high performers.”

What does “high performer” actually mean?

These are the companies that aren’t just using AI. They’re getting real, measurable value from it. Things like >5% EBITA uplift, accelerated innovation, and a stronger competitive position. And they behave very differently from everyone else.

Here’s what sets them apart:

✅High performers are 3.6× more likely to pursue AI as a way to fundamentally redesign the business, not just automate a few tasks.
✅They’re 2.8× more likely to restructure processes so AI becomes an integrated part of how work gets done.
✅Senior leadership is 3× more involved: directing, sponsoring, prioritizing, and aligning the entire organization behind it.
✅High performers push AI into critical workflows, the places where accuracy and reliability actually matter. That pressure forces them to build better guardrails, stronger validation, and more mature operational practices.

🧠 So most companies use AI, but the top 6% change because of AI. That's the real gap where AI fails, and it's the gap all the skeptics point to nowadays. A 94% gap. Huge!

So if your AI work hasn’t forced you to rethink workflows, governance, processes, incentives, maybe even org structure, then AI will definitely fail you.

That’s also my biggest takeaway as a CTO building Atherio, an AI-native product: the hard part isn’t building AI features. The hard part is building the business around what AI makes possible.

Where do you think your org sits today?

#AI #Artificialintelligence #FutureOfWork #AIAdoption #Productivity #Technology #Engineering #CTO

1 week ago (edited) | [YT] | 3

Codewrinkles

Here’s how I lost 2 DAYS to vibe coding 👇

Recently, we worked with a creative to get some cool stuff out. This person did an amazing job and always wanted to give us more than we had agreed on. He even provided us with an entire page built with HTML, CSS, and JavaScript, even though he had no experience in that area.

The truth? It looked AMAZING! So I said to myself: “Cool, let’s just use it as is!” That’s when everything started to fall apart.

I wanted to split the page into different React components and make it work with our design system. And the first challenge became apparent: the HTML, CSS, and part of the JavaScript were all minified into a single, extremely long line. As a human, it was impossible to understand any of it.

Claude impressed me again. It came up with an interesting solution: a Python script that read the string and looked for known HTML and CSS patterns. It managed to identify portions of the minified line where different sections were present. That’s how I was able to remove some parts of the page we didn’t want anymore.
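I don't have the exact script Claude generated, but the idea was roughly this kind of pattern scan: find known structural markers in the one giant minified line, slice the string into named sections, and drop the ones you don't want. A simplified sketch with made-up markup:

```python
import re

# One minified line standing in for the real page (illustrative content).
minified = (
    '<style>.hero{color:red}</style>'
    '<section id="hero"><h1>Big Sale</h1></section>'
    '<section id="pricing"><p>From $9</p></section>'
)

# A marker pattern we expect each page section to start with
# (the real script looked for more HTML/CSS patterns than this).
section_re = re.compile(r'<section id="([^"]+)">')

# Slice the string between consecutive marker positions.
sections = {}
starts = [(m.start(), m.group(1)) for m in section_re.finditer(minified)]
for (start, name), nxt in zip(starts, starts[1:] + [(len(minified), None)]):
    sections[name] = minified[start:nxt[0]]  # one section's markup

# Unwanted sections can now simply be dropped before re-assembly.
del sections["pricing"]
print("".join(sections.values()))
```

Crude, but enough to carve out the parts of the page we no longer wanted, which is exactly where this approach got us before the hydrated-JavaScript surprise.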

But the CSS was totally messed up. In our app it looked awful. Nothing like the original version we had gotten from the AI app.

So the next cheap workaround: use the HTML file as is and display it inside an iframe. This actually worked. However, I then noticed that the mobile version looked awful: buttons were missing, text overlapped visuals, nothing had structure. Truly awful. Easy fix, right?

Well… no.

It turned out the implementation was extremely strange: responsiveness was achieved mostly by removing every single element and replacing it with a new one specific to each viewport. But where were those elements? They weren’t in the string Claude had extracted with the Python script.

It turned out those elements were hydrated through JavaScript. On closer inspection, I realized a lot of the HTML was being continuously hydrated from the server. We were blocked again.

But that’s not all.

Since I stick to a trunk-based development approach, I pushed the changes we did have (the iframe version) and went for a run. I didn’t even get 1 km in before I received an email (saw it on my Garmin) saying our build had broken. After another 15 km, back at my computer, I was baffled to see:

1️⃣ Two major security issues marked as blockers by SonarQube
2️⃣ 5 reliability issues marked as blockers and several non-blockers
3️⃣ 18 maintainability issues

And all of this for a page with just a few elements.

The next day I said: ENOUGH!
Scrapped everything, took a screenshot of the page, and rebuilt it with Claude. This time using my proper AI-assisted engineering workflow.

Now I’m curious:
Am I the only one running into these kinds of issues with vibe-coded apps/pages?
Is this a “me” problem? Am I missing something?

#vibeCoding #AI #AIAgents #AICoding #softwareEngineering #softwareDevelopment #softwareDesign

1 week ago (edited) | [YT] | 6

Codewrinkles

Hot take: Not every problem needs AI. In fact, sometimes AI makes things slower, worse, and more expensive.

I wrote about a real example from my own experience and why at Atherio we’re building differently.

Curious where the line is between “smart use of AI” and “AI for the sake of AI”?

Join my Substack newsletter: open.substack.com/pub/architecttocto/p/when-not-to…

1 week ago (edited) | [YT] | 8

Codewrinkles

🌶️Everyone is using AI. But almost nobody is using it well! If you're not one of the few who do, you're already behind!

It's not me saying this. Instead, that’s the harsh truth in McKinsey’s State of AI 2025 report.

88% of companies now use AI in at least one business function. AI is everywhere.

☑️Every team has a “pilot.”
☑️Every leadership team has a slide deck.
☑️Every engineer has played with a model.

But here’s the plot twist:
👉 Only about a quarter to a third of companies (23–33%) have actually scaled AI.

The rest? They’re trapped in the business equivalent of tutorial hell: trying things, experimenting, running POCs… but never turning AI into real, repeatable ROI.

And the irony?
Leaders believe they’re “ahead” just because they’re doing something with AI.

Actually, it's exactly the opposite. Doing "AI projects" is not innovation anymore. It's business as usual.


❗Doing AI projects ≠ Scaling AI.
❗Experimentation ≠ Transformation.
❗Pilots ≠ Value.

2026 will be the turning point. The companies that succeed next year will be the ones that finally break out of pilot mode and move into scaling mode: embedding AI into processes, redesigning workflows, and investing at the capability level, not the project level.

Everyone else, even the ones who say “we’re doing AI”, is already behind. The real competitive advantage is no longer adopting AI. It’s scaling it.

2 weeks ago (edited) | [YT] | 6