Our team at RebelCode builds internal AI tools constantly, from small utilities to workflow apps, and everyone on the team uses AI to build them. The tools all work fine, but most of them start out looking exactly the same.
Mine usually don’t, and it’s not because I’m a better developer or a design expert. It’s because I’ve spent the past 12 years writing briefs for designers and developers, getting them wrong, learning the hard way what “make it look good” actually communicates (which is nothing), and slowly figuring out what specificity really means.
AI just made that gap impossible to ignore.

When I prompt AI to build something, I upload screenshots of app designs I like, point to specific elements I want replicated, and give it concrete design language to follow. “Use Apple’s Liquid Glass styling for the card backgrounds. Match this spacing. Reference this color palette.” Structured, visual, and specific.
Others on the team might prompt the AI to “make it look good.” I know because those are the same words I would have used myself not long ago. The same words that produce the same generic output every time.
That gap between “make it look good” and actually showing what good looks like is the most important management lesson AI has accidentally taught me.

The Failure Mechanism Is Identical
I’ve been thinking about this a lot since reading Mike Taylor’s column in Every about what he calls “New Taylorism,” the idea that the techniques making AI reliable are identical to the techniques making human teams effective. (Note: the full piece requires a free signup to read, but it’s worth it.)
Taylor ran a 50-person marketing agency before becoming an AI engineer, and his observation hit close to home.
Watching our team work through AI prompts was like watching a replay of every vague client brief I’ve ever written myself. “Make it modern.” “We want it clean and professional.” These instructions feel meaningful to the person giving them, but they’re practically useless to anyone receiving them, whether that’s an AI model or a junior developer.

The reason my internal tools look and flow better isn’t taste, although that helps. It’s specificity.
I give the AI reference images, design constraints, and named styles. It’s the same reason experienced project managers pull up reference sites when a client says “professional” and ask “which of these captures what you mean?” Anchoring to something concrete kills the ambiguity.
AI has no ego, so you can give it terrible instructions fifty times and it won’t update its resume. It’s a mirror that shows you exactly how unclear your communication is, without the social cost of someone actually quitting on you.
That’s what makes it such an effective training tool, because every bad prompt is a bad brief you get to see the consequences of in thirty seconds instead of three weeks.
Why WordPress Pros Should Care Right Now
Here’s the part that most WordPress professionals aren’t talking about yet.
AI is making execution cheap and judgment expensive.
If your value proposition is “I write code” or “I build WordPress sites”, you’re now competing with tools that are effectively free and instant. We’ve already seen this with vibe coding, where anyone can describe what they want in plain language and get working software back. You cannot win a price war against zero.
I’m not saying this to be dramatic. I’m saying it because I run a WordPress product company and I see the same pressure on my own business. The WordPress professionals who thrive from here will be the ones who charge for judgment, not keystrokes. Not for 100 hours of work, but for the insurance that the right thing gets built the first time.
This is already showing up in job postings. “Prompt engineering” keeps appearing in descriptions, and people assume it means knowing ChatGPT cheat codes.
What they’re really asking for is someone who can decompose a problem, set clear constraints, and judge the result. Those are management skills with a new label.
This isn’t just my observation. The World Management Survey, an 18-year research project across 35 countries, found that bad management accounts for up to a third of why some companies and countries are less productive than others. Managing well makes you more money, and AI is now giving everyone a free place to practice it.

The Messy Middle Is Where the Money Lives
Taylor describes something he calls the “NCO gap,” borrowed from military structure. Officers set strategy, soldiers execute orders, and the sergeants in between translate strategy into action under messy, real-world conditions.
AI is increasingly good at both extremes. It can synthesize 50 competitor sites and spot patterns. It can generate pixel-perfect code from a design spec. The top and the bottom are getting covered.
The middle is where WordPress professionals live, and that’s where the value is.
You see it on projects all the time. A client sends an email saying “Quick question, can we add a blog section?” AI sees a task, but you see an iceberg. There is no content strategy, no editorial calendar, and no discussion about who’s actually writing. That “quick question” is easily $15,000 of unbilled work if you don’t catch it.
Or a client asks for a pop-up newsletter signup that fires the instant someone lands on their homepage. Give that prompt to an AI and it executes perfectly: clean code, working pop-up, job done.
Here’s what a WordPress professional with judgment sees instead: that pop-up will spike bounce rates, Google has penalized intrusive interstitials on mobile since 2017, and the client’s actual goal is list growth, not annoying first-time visitors. A slide-in after 30 seconds of reading would perform better. The AI built exactly what was asked for, it just didn’t build what was needed.
AI can build anything you ask for, but the premium is on knowing what to ask for.
What This Actually Looks Like in Practice
I’m not going to repackage Taylor’s five prompting principles here. Go read his piece if you want the full framework. What I will share is the pattern I noticed in my own work, once I started paying attention to what I was actually doing differently from the rest of the team.
It came down to two habits.
I show instead of describe
Instead of writing a paragraph explaining what I want a dashboard to look like, I screenshot one that does it right and upload it. One image replaces ten back-and-forth prompts.
Where do those reference images come from? I browse Mobbin for real app UI patterns, Dribbble for visual inspiration, and sometimes 21st.dev for component-level references. It takes five minutes to find a screenshot that communicates more than any paragraph could. Lazar Jovanovic, Lovable’s first professional vibe coder, made this point on Lenny’s Podcast (you can also watch it on YouTube). He stressed that design skills and taste are becoming the most valuable skills in an AI-first workflow, and most of your time should go into planning, not prompting.
Sometimes you don’t even need an image. Pasting a code snippet of the component structure you want, or a CSS block showing your preferred spacing and colors, can be more precise than any screenshot. AI models parse code perfectly, so if you can describe what you want in a few lines of CSS, do that instead.
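To make that concrete, here’s the kind of snippet I mean. The class names and values below are illustrative, not from a real project; the point is that a dozen lines of CSS pin down spacing and color far more precisely than a paragraph of adjectives:

```css
/* Illustrative example: spacing and color tokens to paste into a prompt */
:root {
  --space-sm: 0.5rem;
  --space-md: 1rem;
  --space-lg: 2rem;
  --color-accent: #6366f1;   /* indigo accent */
  --color-surface: #f5f7fa;  /* light neutral background */
}

.card {
  padding: var(--space-lg);
  margin-bottom: var(--space-md);
  background: var(--color-surface);
  border-radius: 0.75rem;
}
```

Paste something like this into the prompt and the model no longer has to guess what “generous spacing” or “a muted palette” means.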
This works identically with clients. Stop writing novels describing a layout; just send the picture. When one of our team members started pulling design cues directly from the WP RSS Aggregator website for an internal app, the results improved immediately. Same principle: anchor to something concrete instead of hoping “make it look good” just lands.
I define the box before anyone fills it
“Build me an internal tool” is not a brief. “Build a candidate screening dashboard with a sortable table, status filters, and a detail panel that slides in from the right” is the start of one. The gap between those two statements is where scope creep lives, whether you’re prompting AI or scoping a client project.
Just take a look at the design system specs below that I gave Claude Code when building PRISM, an internal analytics dashboard for one of our e-commerce products. These instructions were generated by Claude itself in an earlier conversation where I prompted it to create a design system based on two screenshots of two different app designs from Dribbble.
Design System Specifications

Color Palette

Primary Colors:
- Indigo: rgb(99, 102, 241) / #6366f1
- Indigo Dark: rgb(79, 70, 229) / #4f46e5

Status Colors (Subdued):
- Active/Success: rgba(16, 185, 129, 0.1) bg, rgb(5, 150, 105) text
- Warning: rgba(245, 158, 11, 0.1) bg, rgb(180, 83, 9) text
- Danger/Error: rgba(239, 68, 68, 0.1) bg, rgb(185, 28, 28) text
- Inactive: rgba(107, 114, 128, 0.1) bg, rgb(75, 85, 99) text

Neutrals:
- Background: linear-gradient(135deg, #f5f7fa 0%, #e8ebef 100%)
- Slate 50-900 scale from Tailwind
Glass Card Specifications

.glass-card {
  background: rgba(255, 255, 255, 0.7);
  backdrop-filter: blur(20px) saturate(180%);
  -webkit-backdrop-filter: blur(20px) saturate(180%);
  border: 1px solid rgba(209, 213, 219, 0.3);
  border-radius: 0.75rem; /* rounded-xl */
}

.glass-card:hover {
  background: rgba(255, 255, 255, 0.75);
  transform: translateY(-2px);
  box-shadow: 0 8px 24px rgba(0, 0, 0, 0.06);
}
This took a vague idea of the design style I wanted and turned it into clear, concise instructions the AI could apply and get right the first time. Feel free to use it as inspiration for your next project.
None of this is new; it’s basic project management. AI just forces you to practice it on every single interaction, dozens of times a day, until it becomes instinct. Our team is learning the same lesson now, one ugly vibe-coded app at a time. Myself included.
The Gap We Need to Fill
WordPress training focuses almost entirely on technical execution: how to build things. There’s a massive gap in education around the judgment layer: when to build, what to build, and how to push back when a client asks for the wrong thing. We wrote about a similar shift happening on the marketing and visibility side, where AI is rewriting the rules faster than most businesses can adapt.
Those who figure this out in 2026 will have a significant edge over the next twelve months. The ones who don’t might find themselves competing on price with tools that work for free.
Most people won’t notice they’re getting management training every time they open ChatGPT or Claude. They’ll think they’re just getting better at AI.
So the next time you’re about to prompt an AI to “make it look good”, try something different. Upload a screenshot, name a specific style, or define the constraints. Then try the exact same approach the next time you brief your team or scope a client project.
The skills are identical. The only question is whether you’re paying attention.