This is a submission for the GitHub Copilot CLI Challenge
This post is for developers who like writing in WordPress but want the speed and safety of static sites — and the freedom to publish anywhere from a single editorial hub.
What I Built
I spent years wrestling with WordPress performance and security issues.
Optimizing caching layers, hardening installations, fighting plugin bloat — all to keep public-facing sites running acceptably.
Then I discovered static site generators. Hugo. Astro. Fast, secure, elegant.
But months of tweaking themes, debugging build pipelines, and fighting with deployment workflows taught me something: I'd just traded one set of problems for another.
Today, I use both. Not as competitors, but as partners.
Here's the paradox I kept running into: WordPress is probably the best writing environment ever built. The interface is mature, the editor works, and you can focus on what matters — writing.
Static site generators are probably the best deployment target ever built. Fast, secure, cheap to host, and scalable by default.
So why do we keep choosing between them?
In a comment on one of my previous posts, @juliecodestack captured this tension perfectly:
"I spent quite a lot of time tweaking Hugo sites instead of writing, and I'm afraid I'll do the same thing if I transfer to Astro."
That's the real problem. Not WordPress. Not Hugo. The friction between writing and deploying.
The problem with keeping WordPress public-facing isn't the editor — it's everything else.
Exposing a complete WordPress site to the public means decent hosting, heavy dependency on plugins — at minimum for security and SEO — and serious maintenance.
And by default, performance is variable, to say the least.
The Search for an Alternative
This observation gradually led me to look for a different solution.
For some time now, I've been moving my content publishing to static sites.
But what I really wanted was simpler: keep WordPress as a writing environment while completely removing its public presence.
At a time when static sites can be deployed in seconds on almost any infrastructure, keeping WordPress as a frontend doesn't always make much sense anymore — as long as you have a robust deployment solution.
Write normally. Publish automatically.
No manual export, no scripts to run, no friction.
Existing Tools: Effective but Heavy
Having dozens of articles on my WordPress blogs, I developed a suite of Python tools capable of exporting a complete site to Hugo or Astro.
Functional, reliable, but based on a global export logic: complete site generation, transformation, then deployment.
An effective process, but heavy.
And especially unnatural in a daily writing workflow.
This search for a more fluid editorial workflow gave birth to the project.
The solution: a WordPress plugin implementing per-post, multi-target publishing strategies — GitHub (static), dev.to (API), or both — with atomic commits, structured Markdown conversion, and deployment automation.
It converts posts to Markdown with generator-compatible front matter, optimizes images (WebP/AVIF), commits atomically to GitHub, and can trigger static deployments via GitHub Actions.
Five publishing strategies:
- WordPress Only – Pure WordPress site, plugin inactive
- WordPress + dev.to – Public WordPress site, optional dev.to syndication per post
- GitHub Only – Headless WordPress → static generator → GitHub Pages
- Dev.to Only – Headless WordPress → direct dev.to publishing
- Dual (GitHub + dev.to) – Static site as canonical + dev.to as controlled syndication
Tech Stack
WordPress Core:
- WordPress 6.9+ (PHP 8.1+)
- Action Scheduler (async background processing)
- Native WordPress APIs (WordPress.org compliant)
Image Processing:
- Intervention Image (AVIF + WebP optimization)
- Local processing before upload (reduces GitHub Actions cost)
Publishing Destinations:
- GitHub API (Trees API for atomic commits)
- dev.to API (Forem REST API for direct publishing)
Static Site Generators (via GitHub):
- Hugo (YAML/TOML front matter)
- Jekyll (different conventions)
- Astro (content collections)
- Eleventy (custom structures)
Deployment:
- GitHub Actions (automated Hugo builds)
- GitHub Pages (free static hosting)
Architecture:
- Universal adapter pattern (SSG-agnostic)
- Async queue system (Action Scheduler)
- Atomic commits (all-or-nothing sync)
- Strategy-based routing (5 publishing modes)
- Post-level sync control (per-post checkboxes)
- Headless mode with 301 redirects
Development Timeline
- Core plugin development: 48 hours of active coding across 8 sessions
- WordPress.org compliance: 3 hours (code review + fixes)
- Documentation & testing: 3 hours
- Total: ~54 hours over 9 days (Feb 6-14, 2026)
The 23 checkpoints represent iterative development: each is a working, tested increment. Copilot CLI contributed ~70% of implementation time.
Key Features:
✅ Fully asynchronous sync (no admin blocking)
✅ Atomic commits (Markdown + all images in one commit)
✅ Native WordPress APIs only (WordPress.org compliant)
✅ Multi-format image optimization (AVIF → WebP → Original)
✅ Zero shell commands (100% GitHub API)
✅ HTTPS + Personal Access Token authentication
✅ WP-CLI commands for bulk operations
Demo
The workflow is now operational.
Quick Test (3 minutes)
- Visit: githubcopilotchallenge.tsw.ovh/wp-admin
- Login: tester / Password: Github~Challenge/2k26
- Create a post, click Publish → watch the GitHub commit + Hugo deploy
Result: WordPress → Hugo in ~30 seconds.
Demo version: the live demo runs on commit da4e82f (frozen at the challenge deadline, Feb 15, 2026). The main branch continues active development; Astro support was added post-challenge.
The result is visible on the demo website, as you can see below.
You can also see the committed files in the website repository, where you'll also find the workflow in .github/workflows.
Articles are written in WordPress, as before.
When publishing or updating, a dedicated plugin automatically triggers synchronization to a GitHub repository.
Each piece of content is converted to Markdown with Hugo-specific front matter, along with optimized images (WebP and AVIF).
Everything is sent in a single commit via the GitHub API.
A GitHub Actions workflow then takes over: static site generation, then deployment to GitHub Pages.
Concretely, publishing in WordPress is now enough to put a complete static version of the site online, without manual export or additional intervention.
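For readers who want to see what that takeover step can look like, here is a minimal sketch of a hugo.yml workflow along these lines. This is an illustrative example, not the repository's actual file: action versions, the pinned Hugo version, and paths are assumptions.

```yaml
# Hypothetical minimal workflow: build the Hugo site on push, deploy to Pages.
name: Deploy Hugo site
on:
  push:
    branches: [main]
permissions:
  contents: read
  pages: write
  id-token: write
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: peaceiris/actions-hugo@v3
        with:
          hugo-version: '0.146.0'   # Pinned explicitly (see the version issues below).
          extended: true
      - run: hugo --minify
      - uses: actions/upload-pages-artifact@v3
        with:
          path: ./public
  deploy:
    needs: build
    runs-on: ubuntu-latest
    environment:
      name: github-pages
    steps:
      - uses: actions/deploy-pages@v4
```

Since the plugin has already optimized the images, the build step only has to run Hugo and copy files.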
The Real-World Gauntlet
This wasn't a smooth 48-hour sprint. The project survived several reality checks:
Hugo Theme Version Hell: The theme I wanted required Hugo 0.146.0 minimum. My local install was 0.139.0. GitHub Actions defaulted to 0.128.0. Each environment needed explicit version pinning, and debugging failures meant decoding cryptic TOML errors across three different build contexts.
GitHub Pages URL Stuttering: The deployed site initially rendered with broken internal links because Hugo's baseURL configuration didn't match GitHub Pages' expectations. Pages built locally worked fine. CI builds deployed with relative paths pointing to void. Solution: hardcode the production URL in the workflow, accept that local previews would have slightly broken navigation.
Image Pipeline Memory Limits: Processing 10+ images per post with AVIF encoding pushed PHP's memory limits on shared hosting. First attempt: fatal errors. Second attempt: disable AVIF, keep WebP. Final solution: increase memory_limit to 512M and batch-process images sequentially instead of in parallel.
Action Scheduler Race Conditions: Early versions created duplicate commits when saving a post multiple times quickly. WordPress's save_post hook fires on autosaves, manual saves, and quick edits. Needed: debouncing logic, transient locks, and post meta flags to prevent redundant syncs.
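A minimal sketch of that debouncing idea, assuming Action Scheduler is loaded (function and key names here are hypothetical, not the plugin's actual identifiers):

```php
// Hypothetical sketch: debounce save_post with a transient acting as a short lock.
add_action( 'save_post', function ( $post_id ) {
	// save_post also fires for autosaves and revisions; ignore those.
	if ( wp_is_post_autosave( $post_id ) || wp_is_post_revision( $post_id ) ) {
		return;
	}
	// If a sync was queued in the last 30 seconds, skip this save.
	if ( get_transient( 'myplugin_sync_lock_' . $post_id ) ) {
		return;
	}
	set_transient( 'myplugin_sync_lock_' . $post_id, 1, 30 );

	// Queue exactly one background job via Action Scheduler.
	as_enqueue_async_action( 'myplugin_sync_post', array( 'post_id' => $post_id ) );
} );
```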
PHP 8.1 Strictness: A single explode() call on a null value was enough to freeze the entire sync pipeline. We had to implement a try-catch-finally pattern to guarantee that even on crash, the sync lock is released and the UI updated. No more hung admin screens.
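The shape of that guarantee can be sketched as follows (names are hypothetical; the point is that the finally block runs even when the sync throws):

```php
// Hypothetical sketch: the lock is released no matter how the sync ends.
function myplugin_run_sync( $post_id ) {
	update_post_meta( $post_id, '_myplugin_sync_status', 'running' );
	try {
		myplugin_do_sync( $post_id ); // May throw: null values, API failures, etc.
		update_post_meta( $post_id, '_myplugin_sync_status', 'done' );
	} catch ( \Throwable $e ) {
		update_post_meta( $post_id, '_myplugin_sync_status', 'failed' );
		error_log( 'Sync failed: ' . $e->getMessage() );
	} finally {
		// Always release the lock so the admin UI never stays frozen.
		delete_transient( 'myplugin_sync_lock_' . $post_id );
	}
}
```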
Git Line Ending Hell (LF vs CRLF): GitHub Actions Linux runners rejected files modified on Windows because of line ending mismatches. Solution: enforce LF via .gitattributes globally. One config file, zero cross-platform headaches.
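The config file in question is roughly this (a plausible sketch; the actual file may declare more patterns):

```gitattributes
# Normalize all text files to LF so Windows checkouts don't break Linux CI.
* text eol=lf

# Keep binary assets untouched.
*.webp binary
*.avif binary
```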
The Partial Save Trap: WordPress tabbed interfaces only submit visible fields. When updating the Front Matter template, the GitHub PAT field wasn't sent, resulting in accidental deletion. Fix: array_merge() logic to preserve existing values during partial updates.
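The merge-on-save pattern can be sketched like this (option and function names hypothetical):

```php
// Hypothetical sketch: only overwrite the keys the current tab actually submitted.
function myplugin_save_settings( array $submitted ) {
	$existing = get_option( 'myplugin_settings', array() );

	// Fields absent from this tab's POST data (e.g. the GitHub PAT when the
	// Front Matter tab is saved) keep their previously stored values.
	$merged = array_merge( $existing, $submitted );

	update_option( 'myplugin_settings', $merged );
}
```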
None of this was in the initial specifications. All of it was mandatory to ship.
Where Copilot CLI Accelerated Development
Copilot CLI's impact was most visible on structured, repetitive work—precisely where manual development is most tedious.
Measured Example: WordPress.org Compliance Refactoring
When WordPress.org review flagged 6 compliance issues, the fixes required:
- Global plugin renaming (30+ files)
- Intervention Image API migration (v2 → v3)
- Asset restructuring (inline scripts → proper enqueuing)
- README updates (external services documentation)
Time with Copilot CLI: 3 hours 8 minutes (measured)
Estimated manual time: 7-10 hours (based on scope)
Acceleration: ~3× faster
This wasn't the only acceleration—it was the only one with precise timestamps.
Other Notable Accelerations (estimated based on comparable WordPress development):
- GitHub Trees API integration: ~90 minutes with CLI vs estimated 4-6 hours manual (API docs, trial/error, debugging)
- Admin UI components: ~1 hour with CLI vs estimated 3-4 hours manual (WordPress Codex research, boilerplate)
- Image optimization pipeline: ~2 hours with CLI vs estimated 5-6 hours manual (library selection, testing, error handling)
The pattern: Copilot CLI consistently provided 3-4× acceleration on structured tasks where the goal was clear and the implementation was well-documented.
What didn't accelerate: Architecture decisions, integration debugging, WordPress.org submission process, edge case discovery.
The real value wasn't just speed—it was eliminating context switching. No searching documentation, no hunting for syntax examples, no copy-pasting boilerplate from other projects.
Why Local Image Optimization Matters
The plugin processes images on the WordPress server before uploading to GitHub. This is crucial:
Without local optimization:
- Upload 5MB original JPEGs to GitHub
- GitHub Actions must download, process (ImageMagick/Sharp), then deploy
- Build time: 2-3 minutes per post
- GitHub Actions runner minutes consumed: high
- Failed builds leave orphaned large files in Git history
With local optimization (current approach):
- WordPress generates AVIF (50-150KB) + WebP (100-300KB) + original
- Upload ~500KB total per post to GitHub
- GitHub Actions just copies files, no processing
- Build time: 15-30 seconds
- Clean Git history, minimal runner usage
The trade-off: PHP memory limits and processing time on the WordPress side. But WordPress is idle 99% of the time. GitHub Actions runners cost money per minute.
Processing locally shifts the bottleneck to where it's free.
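As an illustration, the local optimization step could look like this with Intervention Image v3 (a sketch under assumptions: variable names and quality settings are mine, and the plugin's real pipeline adds validation and error handling):

```php
// Hypothetical sketch of local image optimization with Intervention Image v3.
use Intervention\Image\ImageManager;

$manager = ImageManager::gd(); // Or ImageManager::imagick() where available.

foreach ( $attachment_paths as $path ) {
	$image = $manager->read( $path );
	$image->scaleDown( width: 1200 ); // Resize down only, never upscale.

	// Encode each format and write next to the original.
	$image->toWebp( 80 )->save( $path . '.webp' );
	$image->toAvif( 60 )->save( $path . '.avif' ); // Heaviest step: run sequentially.
}
```

Processing images one at a time, as above, is what keeps the pipeline inside shared-hosting memory limits.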
Architecture
What's Currently Handled:
✅ Posts and Pages: Both sync automatically with proper Hugo front matter
✅ Deletions: Trashing a post/page in WordPress triggers file deletion in GitHub
✅ Updates: Editing content re-syncs, overwriting existing files
✅ Categories and Tags: Converted to Hugo taxonomies in front matter
✅ Featured Images: Optimized and linked in front matter (featured_image field)
✅ Custom Fields: Basic fields map to front matter (extensible via adapter)
Current Limitations (MVP scope):
⚠️ Draft Handling: Drafts stay in WordPress, never sync (intentional)
⚠️ Revisions: Only published versions sync, revision history stays local
⚠️ Complex Blocks: Gutenberg blocks convert to HTML, then basic Markdown (no advanced block preservation)
⚠️ Shortcodes: Rendered to HTML before conversion (loses original shortcode)
⚠️ ACF/Meta Boxes: Only standard custom fields supported (ACF requires custom adapter extension)
⚠️ Author Pages: Not yet implemented (single-author blogs work fine)
Deliberate Trade-offs:
WordPress remains the source of truth. The plugin doesn't sync bidirectionally. If you edit Markdown directly in GitHub, those changes won't flow back to WordPress. This is intentional — simplicity over complexity.
Theme Changes and SSG Migration:
Thanks to the universal front matter template system, changing Hugo themes or even migrating to a different SSG is now straightforward:
- Update the front matter template in plugin settings (no code changes)
- Bulk re-sync all posts via WP-CLI (wp jamstack sync --all)
- Optional cleanup of the old file structure in Git (if directory paths changed)
The adapter pattern is already in place. Adding support for Jekyll, Eleventy, or Astro means implementing a new adapter class — the core sync engine remains untouched.
What's not yet automated: migrating between SSGs with fundamentally different content structures (e.g., Hugo's content/posts/ vs Astro's src/content/blog/). This would require a bulk file move operation in Git, which is currently manual.
But changing front matter conventions within the same SSG? That's now a settings change, not a refactoring project.
The repository contains:
- WordPress plugin (WordPress.org compliant)
- GitHub API integration (atomic commits)
- Asynchronous sync management (Action Scheduler)
- Hugo-compatible Markdown generation
- GitHub Actions workflow for deployment
How It Works
The process in detail:
1. Writing: Standard WordPress interface, no change in the writing experience
2. Automatic Commit: The GitHub repository receives Markdown, optimized images, and front matter

You can see the .github/workflows folder, where you can find the hugo.yml file (1), the content folder (2), the static/images folder (3), and the latest deployment status (4).
3. Hugo Structure: content/posts/ structure automatically generated with correct naming
4. Deployed Site: Static version online via GitHub Pages, optimal performance
The whole thing forms a simple publishing chain: write in WordPress, publish, and let the rest execute.
Repository: ajc-bridge
Production Update: Multi-Destination Publishing
While building this plugin, I realized something obvious in hindsight: the dev.to API has existed for years.
I'd been so focused on static site generators that I missed the simpler path: publish directly to dev.to via API.
So I added it.
Five Publishing Strategies
The plugin now supports five distinct workflows, each solving different use cases:
1. WordPress Only
WordPress (public site)
Plugin configured but sync disabled. For teams evaluating the plugin or running pure WordPress sites.
2. WordPress + dev.to Syndication
WordPress (public, canonical) → dev.to (syndication)
WordPress remains your public site. Optionally syndicate posts to dev.to with canonical_url pointing back to WordPress. Perfect for established WordPress sites with existing audiences who want dev.to reach.
Per-post control: Checkbox in post sidebar: "☐ Publish to dev.to"
3. GitHub Only (Headless WordPress)
WordPress (headless) → Hugo/Jekyll → GitHub Pages
Traditional JAMstack workflow. WordPress is admin-only, frontend redirects to your static site. All published posts sync automatically.
4. Dev.to Only (Headless WordPress)
WordPress (headless) → dev.to
Zero infrastructure. WordPress writes, dev.to publishes, WordPress frontend redirects. For developers who want WordPress's editor but dev.to's community without managing static sites.
This is the mode I use. All my articles are published exclusively on dev.to.
5. Dual Publishing (GitHub + dev.to)
WordPress (headless) → GitHub (canonical) + dev.to (syndication)
Best of both worlds. Hugo site = canonical source (your domain, your control). Dev.to = syndication (massive audience, zero SEO penalty via canonical_url). WordPress frontend redirects to Hugo.
Per-post control: GitHub always syncs. Checkbox controls dev.to: "☐ Publish to dev.to"
Five publishing strategies covering WordPress-only, headless, and hybrid workflows
API key configuration
Per-post checkbox in sidebar: decide which posts syndicate to dev.to
Why This Matters
For established WordPress sites: Keep your public WordPress site (audience, SEO, landing pages) but syndicate blog posts to dev.to for community reach.
For JAMstack purists: Go fully headless with Hugo, optionally syndicate to dev.to.
For dev.to community members: Use WordPress as your writing environment, publish exclusively to dev.to.
For migrations: Start with WordPress Only, test strategies incrementally, migrate when ready. Zero lock-in.
Technical Implementation
The architecture supports all five modes through strategy-based routing:
switch ( $strategy ) {
	case 'wordpress_only':
		return; // No sync.

	case 'wordpress_devto':
		if ( get_post_meta( $post->ID, '_wpjamstack_publish_devto', true ) === '1' ) {
			sync_to_devto( $post, canonical_url: get_permalink( $post ) );
		}
		break;

	case 'github_only':
		sync_to_github( $post );
		break;

	case 'devto_only':
		sync_to_devto( $post, canonical_url: null );
		break;

	case 'dual_github_devto':
		sync_to_github( $post );
		if ( get_post_meta( $post->ID, '_wpjamstack_publish_devto', true ) === '1' ) {
			sync_to_devto( $post, canonical_url: $hugo_url );
		}
		break;
}
Headless mode automatically redirects WordPress frontend (301) to the canonical destination (Hugo or dev.to) in headless strategies. WordPress admin remains fully functional.
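A minimal sketch of how such a redirect can be wired in WordPress (option name hypothetical; the plugin's real implementation handles per-strategy targets):

```php
// Hypothetical sketch: redirect all front-end requests to the canonical site.
add_action( 'template_redirect', function () {
	$target = get_option( 'myplugin_canonical_base' ); // e.g. the Hugo site URL.
	if ( ! $target ) {
		return; // Headless mode not configured: serve WordPress normally.
	}
	// Preserve the requested path so deep links keep working.
	$path = ltrim( $_SERVER['REQUEST_URI'], '/' );
	wp_redirect( trailingslashit( $target ) . $path, 301 );
	exit;
} );
```

Because template_redirect only fires on front-end requests, wp-admin remains untouched.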
Per-post control in WordPress + dev.to and Dual modes: a checkbox in the post sidebar lets authors decide which posts syndicate externally.
Canonical URL handling:
- WordPress + dev.to: canonical_url → WordPress permalink
- Dual (GitHub + dev.to): canonical_url → Hugo site URL
- Dev.to Only: no canonical_url (dev.to is primary)
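For illustration, a dev.to publish call through WordPress's HTTP API might look like this. The endpoint, api-key header, and article fields come from the public Forem REST API; the variable names are hypothetical:

```php
// Hypothetical sketch of a Forem API call via wp_remote_post().
$response = wp_remote_post( 'https://dev.to/api/articles', array(
	'headers' => array(
		'api-key'      => $devto_api_key,
		'Content-Type' => 'application/json',
	),
	'body' => wp_json_encode( array(
		'article' => array(
			'title'         => get_the_title( $post ),
			'body_markdown' => $markdown,
			'published'     => true,
			'canonical_url' => $canonical_url, // Null in Dev.to Only mode.
		),
	) ),
) );

if ( is_wp_error( $response ) ) {
	error_log( 'dev.to publish failed: ' . $response->get_error_message() );
}
```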
Real-World Usage
This article was published using dev.to's rich editor before I implemented the adapter.
Future articles will be published via the plugin: I write in WordPress, click Publish, and the plugin handles the rest via dev.to's API.
The dev.to API isn't new. The Forem platform has supported it for years.
What's new is the integration: WordPress as the writing environment, dev.to as the publishing platform, zero manual steps.
That's the difference between a demo and production: the tool becomes part of the workflow, not just a talking point.
What's Next
The plugin ships with Hugo and dev.to adapters, but the architecture supports more:
Additional platforms ready:
- Jekyll (GitHub Pages native SSG)
- Hashnode (GraphQL API)
- Ghost (Admin API)
AI-native architecture: Compatible with emerging frameworks like WordPress/agent-skills for future voice-controlled publishing.
WordPress.org submission: Submitted February 6, 2026. Pending review for public distribution to the WordPress plugin directory.
The adapter pattern means adding new destinations is straightforward. The hard part — WordPress integration, async processing, error handling — is done.
Technical Highlights
1. Universal Front Matter Engine
Instead of hardcoding the plugin for a single Hugo theme, we built a raw template system. Users define their own YAML (or TOML) with custom delimiters and placeholders such as {{id}}, {{title}}, or {{image_avif}}.
This means the same plugin can adapt to any SSG convention:
- Hugo with YAML front matter
- Jekyll with different taxonomy names
- Eleventy with custom data structures
You control the output format. The plugin just fills in the blanks.
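The fill step can be as simple as a token substitution. A sketch of the idea (template contents and variable names are illustrative, not the plugin's actual code):

```php
// Hypothetical sketch: the user-defined template is raw text; the plugin
// only substitutes {{...}} tokens, so any SSG convention works unchanged.
$template = <<<'YAML'
---
title: "{{title}}"
date: {{date}}
featured_image: "{{image_avif}}"
---
YAML;

$front_matter = strtr( $template, array(
	'{{title}}'      => $post_title,
	'{{date}}'       => $post_date,
	'{{image_avif}}' => $avif_path,
) );
```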
2. Asset Management by WordPress ID
To guarantee unbreakable links, optimized images (WebP and AVIF) are stored in folders named by WordPress ID: static/images/1460/.
Rename your post slug for SEO ten times? Your images never break. The ID is immutable. The file paths are permanent.
3. Native WordPress Integration
The plugin integrates as a first-class citizen with its own sidebar menu and tabbed navigation.
Role-based security: Authors only see their own sync history. Critical settings (GitHub PAT) remain admin-only.
Responsible cleanup: A "Clean Uninstall" option removes all plugin traces (options and post meta) on uninstall, leaving zero database pollution.
4. Atomic Commits via GitHub Trees API
Instead of multiple sequential commits (one per image, one for Markdown), the plugin uses GitHub's Git Data API to create a single commit containing all files:
// Collect all files (Markdown + images)
$all_files = [
	'content/posts/2026-02-07-this-is-a-post.md'          => $markdown_content,
	'static/images/1447/featured.webp'                    => $webp_binary,
	'static/images/1447/featured.avif'                    => $avif_binary,
	'static/images/1447/wordpress-to-hugo-1024x587.webp'  => $webp_binary,
	'static/images/1447/wordpress-to-hugo-1024x587.avif'  => $avif_binary,
];

// Single atomic commit
$git_api->create_atomic_commit( $all_files, 'Publish: This is a Post' );
This approach is transactional: either everything commits or nothing does. No partial states, cleaner history.
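Under the hood, an atomic commit with the Git Data API follows a fixed five-step sequence. This outline uses the documented GitHub REST endpoints; how create_atomic_commit() maps onto them internally is my sketch, not the plugin's verbatim code:

```php
// Hypothetical outline of the Git Data API sequence behind create_atomic_commit():
//
// 1. GET   /repos/{owner}/{repo}/git/ref/heads/{branch}   -> current head commit SHA
// 2. POST  /repos/{owner}/{repo}/git/blobs                -> one blob per file
//          (binary images are sent base64-encoded)
// 3. POST  /repos/{owner}/{repo}/git/trees                -> one tree referencing
//          every blob SHA, based on the head commit's tree
// 4. POST  /repos/{owner}/{repo}/git/commits              -> commit pointing at the
//          new tree, with the old head as parent
// 5. PATCH /repos/{owner}/{repo}/git/refs/heads/{branch}  -> advance the branch
//
// Only step 5 is visible to watchers of the repository: until the ref moves,
// nothing has "happened", which is what makes the operation all-or-nothing.
```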
5. Beyond Hugo: Multi-SSG Architecture
While this demo targets Hugo, the adapter pattern isn't locked to a single SSG. The same codebase can support:
- Hugo (YAML/TOML front matter, content/posts/)
- Jekyll (different taxonomy conventions, _posts/)
- Eleventy (custom data structures, src/content/)
- Astro (content collections, src/content/blog/)
Adding a new SSG means writing one adapter class — the sync engine, image optimization, and GitHub integration remain untouched.
This architectural choice transforms the plugin from "Hugo-only" to a platform for any static site workflow. The 43 million WordPress sites aren't just potential Hugo users — they're potential static site adopters, period.
6. WordPress-Native Compliance
To meet WordPress.org requirements, the plugin uses exclusively native WordPress APIs:
- wp_remote_post() instead of curl
- WP_Filesystem instead of file_put_contents()
- $wpdb prepared statements
- No exec(), shell_exec(), or Git CLI
This makes it suitable for publication in the official WordPress plugin repository.
My Experience with GitHub Copilot CLI
When I started this project, I wasn't looking for a tool to code for me.
I was looking for a way to accelerate the execution of a project whose architecture was already clear.
Having already used GitHub Copilot CLI, Gemini CLI, and various LLMs on other projects, I knew these tools could produce code quickly.
But I also knew that without a precise framework, they mainly produce... code.
Not necessarily a coherent system.
Note: This isn't the autocomplete in the editor, but a command-line tool capable of generating complete files from structured prompts.
Specification First, Code Second
The first step wasn't to code. The first step was to write specifications. Define the scope. Break down the project into functional blocks.
Identify non-negotiable constraints: WordPress native only, no shell execution, reliable async processing, atomic GitHub commits, WordPress.org compliance to publish the plugin in the official repository and benefit the community.
Then organize development into successive stages.
Development unfolded in structured phases: bootstrap, core architecture, async queue system, media pipeline, atomic GitHub commits, deletion lifecycle, bulk operations, and admin-side hardening.
Copilot CLI was used as an execution partner — implementing each layer under explicit constraints (WordPress native APIs only, async safety, atomic operations, repository compliance). The focus was not just faster coding, but building a production-ready system.
This approach is very similar to what a technical project manager or lead engineer would do before entrusting implementation to a team. The difference here is that the “team” consisted of a tool capable of producing code extremely quickly — but only if instructions were precise and constraints well defined.
So I didn’t write code in the traditional sense. I wrote functional and technical specifications, prompts, refined instructions, corrected trajectories, and validated outputs against the intended architecture.
Each step consisted of describing what needed to be built, verifying what was produced, and adjusting accordingly.
Sometimes Copilot proposed a relevant structure on the first try. Sometimes it required reworking, clarification, or tighter constraints.
The Work Pattern
Very quickly, a work pattern emerged: specification → generation → verification → correction → iteration.
In this process, Copilot behaves less like a magic generator than like a fast executor.
It can structure an entire class in seconds, propose a coherent implementation, or refactor a complete block.
But it can also forget an essential hook, overwrite an existing method, or produce functional code that doesn't comply with initial constraints.
Real Examples of Issues:
- Method replaced by an incomplete stub
- Hook not registered, causing silent failures
- File generated but not actually written to disk
- Fatal error on activation, typical of strict WordPress environments
Each incident required going back to fundamentals: verify, understand, correct, reformulate.
The Prompt Gallery: Steering the CLI
To move from concept to production-grade code, I steered Copilot CLI through complex engineering hurdles. These aren't just snippets; they are the instructions that shaped the architecture.
1. The Atomic Shift (Core Logic)
Context: Moving from simple file uploads to the GitHub Trees API to ensure images and Markdown commit simultaneously.
Refactor the Git_API class to use the Trees API. I need a single atomic commit containing the Markdown file and all processed images. Use the SHA of the base branch to create the tree.
2. The Stateful Sync (Lifecycle Management)
Context: Preventing duplicates on Dev.to by persisting remote IDs.
Update the Sync_Runner to check for '_atomic_jamstack_devto_id'. If it exists, use a PUT request to update the existing article; otherwise, POST a new one and save the returned ID.
3. The Security Audit (Hardening)
Context: Passing the official WordPress 'Plugin Check' tool.
Scan admin/class-settings.php. Identify all missing nonces and un-sanitized $_POST variables. Apply wp_verify_nonce and sanitize_text_field according to WP.org standards.
What This Actually Means
This is probably the most interesting aspect of the experience.
Using Copilot effectively doesn't mean writing one prompt and waiting for a result.
It's much more like continuous piloting, where the quality of instructions directly conditions the quality of what's produced.
In this context, the tool becomes particularly effective for accelerating everything that's structured: class creation, file organization, repetitive function implementation, refactoring, documentation.
As soon as the objective is clearly defined, execution can become very fast.
But the responsibility for architecture, technical choices, and overall coherence remains entirely human.
The Real Value
In the end, the experience is less like "AI-assisted development" than a form of assisted technical direction.
Code is produced quickly, but it must be thought out, supervised, and validated continuously.
This project was built in less than two days. Not because the tool replaces design work, but because once that work is done, execution can be considerably accelerated.
This is probably where GitHub Copilot CLI becomes most interesting: it's not a substitute for development, but an accelerator for an already thought-out and structured project.
The Development Rhythm
Each feature followed a consistent workflow that kept development focused and auditable:
1. Planning: Describe the goal in natural language
Example: Add atomic commit support using GitHub Trees API
2. Proposal: Copilot suggests implementation approach
Copilot outlines: file changes, API calls, error handling
3. Review: Validate architecture before generation
Check: Does this align with WordPress standards? Any edge cases?
4. Generation: Multi-file code updates with consistent patterns
Copilot writes across 5-10 files simultaneously
5. Testing: Manual verification and integration testing
Test: WordPress admin, GitHub API, Hugo deployment
6. Checkpoint: Document working increment for audit trail
Create: checkpoint file with context and decisions
This rhythm repeated 23 times throughout development. Each checkpoint represents a tested, working state—not just code, but verified functionality.
The checkpoints weren't documentation overhead. They were the development cadence.
Real Example: AVIF Generation Fix
Here's what an actual development session looked like:
Detailed problem description → Copilot proposes complete solution with code, validation, and error handling
The prompt describes:
- What to fix: AVIF generation failures
- How to fix it: Use explicit AvifEncoder, add file validation
- How to verify: Check encoder usage, test file creation
Copilot responds with a 281-line implementation plan covering:
- Code changes across 2 methods
- Import statements updates
- Error logging improvements
- Verification commands
After applying changes: verification confirms correct implementation, Before/After shows API migration
Time: ~15 minutes from problem to verified solution
Manual estimate: 1-2 hours (research v3 API docs, update all calls, test each format)
Acceleration: ~4-6× faster
This pattern—detailed prompt, comprehensive response, systematic verification—repeated throughout development. The 23 checkpoints represent 23 iterations of this cycle.
The Audit Trail Advantage
Beyond just writing code, GitHub Copilot CLI acts as a technical scribe.
My session history evolved through 23 distinct checkpoints, documenting every architectural pivot from the initial foundation to the final security hardening:
001-wordpress-plugin-foundation
002-media-processing-with-avif-support
003-deletion-and-bulk-sync
004-atomic-commits-and-monitoring
...
019-fix-plugin-check-errors
020-add-nonce-security
021-uninstall-api-compliance
022-fix-nonce-sanitization-warnings
023-fix-token-double-encryption
You can browse the complete checkpoint history here: https://github.com/pcescato/ajc-bridge/tree/main/docs, including the initial specifications.
Each checkpoint includes context files like wordpress-api-compliance-guide.md, token-preservation-fix.md, or settings-merge-test-plan.md.
This isn't just a side effect — it's a massive win for maintainability.
It transforms the "black box" of AI generation into a transparent, step-by-step engineering log. Six months from now, when I need to understand why a particular decision was made, I won't be guessing. The checkpoint history tells the story.
What worked well:
✅ Rapid scaffolding of classes following WordPress standards
✅ Boilerplate code generation (hooks, filters, nonces)
✅ Refactoring large blocks (sequential commits → atomic commits)
✅ Documentation generation from inline comments
What required constant supervision:
⚠️ Architecture decisions (adapter pattern, async queues)
⚠️ WordPress.org compliance verification
⚠️ Error handling and edge cases
⚠️ Integration testing across components
Code Review: Validating Production Readiness
Once the plugin was functional and submitted to WordPress.org (February 6), I wanted to validate its code quality independently—without waiting for the review team's feedback.
The question: Is this truly production-ready, or did fast development introduce critical bugs?
The Review Process
I used GitHub Copilot CLI to conduct a comprehensive security and compliance audit.
The prompt: Review this WordPress plugin for critical blockers. Focus on security vulnerabilities, WordPress.org compliance, and data integrity issues.
Review completed in 10 minutes with an 800-line report covering:
- Security (authentication, sanitization, secrets handling)
- Compliance (coding standards, uninstall cleanup, version consistency)
- Correctness (race conditions, SQL queries, data loss risks)
Findings: 3 Critical Blockers
Grade: C+ (production-ready with critical fixes needed)
The review validated the architecture ("excellent design, clean separation") but identified 3 blockers that would likely trigger WordPress.org rejection:
1. Secret Logging (Security - CRITICAL)
Issue: GitHub token previews logged to debug files.
Risk: Partial token exposure in database logs and files.
Fix: Removed all secret previews (30 minutes).
2. Version Mismatch (Compliance - CRITICAL)
Issue: Plugin header showed 1.1.0, readme.txt showed 1.2.0.
WordPress.org automated checks reject version inconsistencies.
Fix: Updated plugin header and constant to 1.2.0 (5 minutes).
3. Incomplete Uninstall (Compliance - CRITICAL)
Issue: Plugin created 13 post meta keys but only cleaned up 5 in uninstall.php.
Why this matters: WordPress.org reviewers manually check uninstall cleanup. Incomplete removal is a rejection reason.
Fix: Added 7 missing meta keys + 1 option cleanup (15 minutes).
Fixing the Blockers: 20 Minutes
I created a surgical prompt for Copilot CLI:
Fix ONLY the 3 critical blockers.
Do NOT refactor anything else.
Changes made:
✅ Removed secret logging (verified with grep)
✅ Synced all versions to 1.2.0
✅ Completed uninstall cleanup (11 meta keys total)
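The grep verification can be generalized into a small script that scans the plugin source for leftover secret logging. This is an illustrative sketch, not the check actually run: both patterns are assumptions — a hypothetical token_preview log key standing in for whatever the plugin logged, plus the well-known ghp_ prefix of GitHub personal access tokens.

```python
import re
from pathlib import Path

# Hypothetical patterns; the plugin's real log keys may differ.
SECRET_PATTERNS = [
    re.compile(r"token[_ ]?preview", re.IGNORECASE),
    re.compile(r"ghp_[A-Za-z0-9]+"),  # GitHub personal access token prefix
]

def scan_for_secret_logging(root: str) -> list[tuple[str, int, str]]:
    """Return (file, line number, stripped line) for every suspicious line
    in any .php file under root."""
    hits = []
    for path in sorted(Path(root).rglob("*.php")):
        text = path.read_text(errors="ignore")
        for lineno, line in enumerate(text.splitlines(), start=1):
            if any(p.search(line) for p in SECRET_PATTERNS):
                hits.append((str(path), lineno, line.strip()))
    return hits
```

Wiring a check like this into CI turns a one-off audit finding into a permanent guardrail: the build fails if a secret-preview log ever sneaks back in.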
WordPress.org Submission: The Road to Approval
Moving code from a local environment to the official WordPress.org repository is the ultimate moment of truth. For this project, I implemented a two-tier quality assurance strategy that transformed potential rejection into a near-immediate technical validation.
Process Timeline
- February 6: Initial plugin submission.
- February 12: Internal review via Copilot CLI. Identified and fixed 3 critical logic and security bugs in 20 minutes.
- February 14 (00:00): Received official WordPress.org review. Identified 6 specific compliance and ecosystem issues.
- February 14 (03:08): Within 3 hours, all 6 issues were addressed and version 1.2.0 re-submitted.
Current Status: ✅ Pending Final Manual Approval (Typical queue wait: 7–14 days).
The Refactoring Challenge: 3 Hours Instead of 10
The 6 compliance issues required substantial code changes:
What needed to be done:
- Global renaming: 30+ files, hundreds of references
- API migration: intervention/image v2 → v3
- Asset restructuring: Extract 6 inline scripts, implement wp_enqueue
- Documentation: External services section with API details
Manual estimate (senior developer): 7-10 hours
Actual time with Copilot CLI: 3 hours 8 minutes
The difference? Copilot CLI excels at systematic refactoring:
- Global renaming: One prompt replaced hundreds of references across 30+ files without missing edge cases
- API migration: Copilot read the intervention/image v3 changelog and refactored all image processing calls automatically
- Pattern extraction: Identified all inline <script>/<style> tags and generated proper enqueue functions with correct hooks
This wasn't about generating new code—it was about surgical precision at scale. The kind of work that's technically straightforward but humanly tedious and error-prone.
The result: A compliant, tested plugin ready for re-submission in the time it would have taken to just complete the renaming manually.
The Power of Dual Review (AI + Human)
This workflow demonstrates that even "production-ready" code benefits from an external perspective. The two audits served very different, yet complementary, purposes:
1. Internal Audit (Copilot CLI): Security & Logic
Copilot acted as a tactical second pair of eyes, catching issues that fast-paced development often misses:
- Security Vulnerabilities: Identified partial exposure of sensitive secrets.
- Operational Integrity: Ensured rigorous cleanup during activation/uninstallation.
- Version Hygiene: Fixed inconsistent constants that would have triggered automated rejection.
- Result: Transitioned the code from "fragile" to a robust, enterprise-grade architecture.
2. WordPress.org Review: Compliance & Ecosystem
Since writing this, the plugin has passed the official WordPress.org review. The review (a mix of automated checks and human oversight) focused on how the plugin lives within the WordPress ecosystem:
- Intellectual Property: Renamed "Atomic Jamstack Connector" to "AJC Bridge" to eliminate trademark confusion.
- Technical Standards: Replaced inline <script> and <style> tags with the proper wp_enqueue system.
- Transparency: Documented external service usage (GitHub and dev.to APIs) in the readme.txt to comply with Guideline 6.
- Dependency Hygiene: Forced an upgrade of the intervention/image library (v2.7 → v3.11) to patch known vulnerabilities.
Takeaway: Production Readiness over Theoretical Perfection
The plugin moved from "Blocked" to "Repository Ready" in record time.
- Surgical Fixes: We didn't perform a radical rewrite. Instead, we used AI to target specific files flagged in the reviews, applying and testing patches immediately.
- The Value of Audit Trails: The detailed history of architectural decisions made during development allowed the AI to understand the "why" behind the code, proposing fixes that maintained the plugin's logic.
- Engineering Pragmatism: We prioritized critical blockers for the submission while deferring minor optimizations (like edge-case race conditions) to the v1.3 roadmap.
The Lesson: AI doesn't replace official validation; it prepares you for it. By using Copilot to eliminate logical bugs early, you clear the path to focus on the platform's specific quirks — the trademark checks, the enqueue rules, the dependency audits that only humans (or their algorithms) flag.
Next Step: Official launch on the WordPress.org repository under the slug ajc-bridge.
Addendum: Automating the Future with GitHub Actions
To wrap up this intensive session, I implemented a professional CI/CD pipeline to ensure that every future release of AJC Bridge is as clean and reliable as this one.
The "Release-on-Demand" Machine
In just a few minutes, I used Copilot CLI to generate a GitHub Actions workflow that automates the entire packaging process:
- Trigger: The workflow springs into action the moment a new version tag (e.g., v1.2.0) is pushed, or via manual trigger.
- Clean Packaging: It automatically builds a production-ready ZIP file, strictly excluding development overhead like .git, .github configurations, composer.json, and local documentation.
- Standardized Deployment: The ZIP is structured specifically for WordPress.org standards (internal folder named ajc-bridge), ensuring a seamless installation for users.
- Automated Releases: The workflow creates an official GitHub Release and attaches the optimized plugin archive as a primary asset.
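For readers who want to reproduce this setup, here is a minimal sketch of what such a workflow can look like. This is not the actual AJC Bridge workflow: the exclusion list, the build paths, and the choice of the softprops/action-gh-release action are illustrative assumptions.

```yaml
name: Build plugin release

on:
  push:
    tags: ['v*']        # fires on version tags like v1.2.0
  workflow_dispatch:     # allows a manual trigger from the Actions tab

jobs:
  package:
    runs-on: ubuntu-latest
    permissions:
      contents: write    # required to create a GitHub Release
    steps:
      - uses: actions/checkout@v4

      # Copy the plugin into a folder named after its WordPress.org slug,
      # excluding development files, then zip it with that internal folder.
      - name: Build clean ZIP with WordPress.org folder layout
        run: |
          mkdir -p build/ajc-bridge
          rsync -a --exclude='.git*' --exclude='composer.json' \
                --exclude='docs/' --exclude='build/' ./ build/ajc-bridge/
          cd build && zip -r ajc-bridge.zip ajc-bridge

      - name: Create GitHub Release with the ZIP attached
        uses: softprops/action-gh-release@v2
        with:
          files: build/ajc-bridge.zip
```

The key detail is the internal folder name matching the plugin slug: WordPress expects the ZIP to unpack into a single directory, and a mismatched name breaks in-place updates.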
Why this Matters
By automating the release process, I’ve eliminated the risk of human error—like forgetting to remove a sensitive config file or misnaming a folder.
This completes the transition from a solo dev project to a professionally maintained bridge. Whether it’s a minor patch or a major feature update (like the upcoming v1.3), I can now ship a compliant, high-quality version to the community in seconds with a single Git command.
Conclusion
GitHub Copilot CLI didn't replace development.
It didn't eliminate the need to think, architect, or decide.
But used as an execution partner rather than an automatic generator, it made it possible to quickly transform a clear idea into a functional system.
That's perhaps where these tools really make sense: they don't change the way we build, but they reduce the distance between what we imagine and what we put into production.
In this specific case, they made it possible to solve a real tension: write comfortably in WordPress while publishing to a high-performance static site.
No friction. No compromises.
Just a workflow that works.
Looking ahead: v1.3 will add smart Table of Contents generation—automatically detecting long-form content and generating a dev.to / SSG-compatible ToC with configurable thresholds (>600 words, 2+ H2 headings) to ensure it only triggers when useful.
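The threshold logic above can be sketched in a few lines. The >600-word and 2+ H2 numbers come from the roadmap description; the function name and the assumption that content arrives as Markdown are mine, not the plugin's actual (still unwritten) v1.3 implementation.

```python
import re

def should_generate_toc(markdown: str, min_words: int = 600, min_h2: int = 2) -> bool:
    """Decide whether a post is long and structured enough to deserve a ToC.

    Illustrative sketch only: thresholds mirror the ones mentioned in the
    post (>600 words, 2+ H2 headings), defaults are configurable.
    """
    # Count H2 headings: lines starting with exactly "## " (not "### ...").
    h2_count = len(re.findall(r"^##\s+\S", markdown, flags=re.MULTILINE))
    # Rough word count over the whole document.
    word_count = len(re.findall(r"\b\w+\b", markdown))
    return word_count > min_words and h2_count >= min_h2
```

A short post with two headings stays ToC-free, while a 700-word post with two H2 sections triggers generation.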
Final Thought
WordPress isn't the enemy; it's the most powerful editorial engine we have. By decoupling it from the frontend, we don't just fix performance—we future-proof the web.
Ready to see it in action? Try the live demo here or explore the code on GitHub.
Top comments (25)
Pascal, this is brilliant! Years ago I used to do some side work building WordPress sites for clients. That’s exactly why I chose WordPress — it was convenient, intuitive, and familiar to clients, and they never had trouble editing their own content.
But building the frontend always gave me this inner resistance: we have so many great tools now, so why are we still doing PHP on the frontend like it’s 20 years ago? 😄 Back then people talked about headless CMS and React frontends, but it all felt a bit clunky and immature.
So it’s nice to see I wasn’t the only one feeling this way! This is a wonderful solution. I don’t really build sites anymore, even for family (simply no time), but I’ll definitely be recommending and promoting approaches like this!
Thanks, Sylwia!
That inner resistance you describe — "why are we still doing PHP on the frontend like it's 20 years ago?" — I felt that hard for years. 😄
The headless CMS wave of the early 2010s promised to solve this, but you're right: it always felt clunky. GraphQL APIs, complex build pipelines, client authentication headaches... it was technically "better" but practically worse for solo developers or small teams.
What changed for me was realizing I didn't need a headless CMS. I just needed WordPress to stop being public-facing. Keep the admin, ditch the frontend. Once that clicked, the solution became obvious: let WordPress manage content, let static generators handle delivery.
The funny thing is, this workflow would've been impossible five years ago. GitHub Actions didn't exist, GitHub API rate limits were lower, and PHP image optimization libraries were primitive. Sometimes the right solution just needs the right moment in the ecosystem.
Really appreciate you sharing your experience — always good to know others were feeling the same friction!
I built Sanity-on-Netlify systems back then. Most sites only required a single PHP script that fetched data from the Sanity API. I didn't even need an SSG.
The clunky part was developer error: not finding the simplest solution.
CI/CD systems like Hudson (later Jenkins) existed long before GitHub Actions.
What does that even mean? Frontend frameworks are basically template engines, and Smarty has existed since the early 2000s. UI frameworks have existed since the introduction of GUIs on operating systems. Web UI frameworks didn't invent something new; they are just an adaptation for a specific use case.
I find it sad that people don't recognize the history of the work their opinions are based on.
Wonderful, Pascal!
So now you write a draft in WordPress, then hit the Publish button, and the post is published on dev.to and the static website at the same time. Wow! It really saves a lot of time on uploading and editing front matter.
I haven't used WordPress yet. Do you write in rich content format instead of markdown format in WordPress?
Your work inspired me to think about simplifying my workflow. I'd also like to write in markdown and then deploy or upload in one command.
Thanks Julie! Yes, exactly—WordPress editor (Gutenberg) + one click = published everywhere.
On your question: WordPress uses Gutenberg (block editor), but it has native Markdown shortcuts! When you type:
- ## Title → converts to an H2 automatically
- > Quote → becomes a quote block
- - List → creates a bullet list
So you get the best of both: Markdown speed with visual editing.
It's not pure Markdown editing (like Obsidian), but the shortcuts make it feel natural if you're used to Markdown syntax. The visual feedback helps catch formatting issues before publishing.
For your workflow: If you prefer pure Markdown, Obsidian + git works great. But if you want visual feedback while keeping Markdown habits, Gutenberg's shortcuts might surprise you.
What's your current workflow? Always curious how others solve this!
The Gutenberg shortcuts are the same as standard Markdown syntax. Pretty good!
I really admire your workflow, very smooth!
I used Typora + git commands to publish. I also use Obsidian, but I prefer writing in Typora because it looks better. By "Obsidian + git", do you mean integrating git into Obsidian?
I haven't tried publishing via the dev.to API yet. From your post it looks very convenient.
Yes! The Gutenberg shortcuts are surprisingly good — I was pleasantly surprised when I discovered them. Makes the transition from pure Markdown editors much smoother than expected.
On Obsidian + git: I meant using Obsidian with manual git commands (like your Typora setup), not a built-in integration. Though there are Obsidian plugins that can automate git commits if you want that workflow.
Typora is beautiful — totally understand preferring it for visual appeal! The live preview is unmatched.
On the dev.to API: It's surprisingly easy; the plugin handles it end to end. The nice part: you can preview on WordPress (visual editor), then publish to dev.to with one click. No copy/paste, no manual image uploads.
Your current workflow sounds solid though — if Typora + git works for you, that's the best setup! The plugin is really for people who want WordPress as their writing hub but static sites as deployment targets.
Curious: what do you use Obsidian for if Typora is your main writing tool? Knowledge base / notes?
Your guess is right!
Sometimes I use Obsidian as the local version of Evernote, because it's convenient to take notes and create a new file in it.
I mainly use Obsidian as the knowledge base. Sometimes I want to make an outline of the related articles I've written and find some missing topics to write, and Obsidian has the links to do it. What's more, the search function in Obsidian is very good. In our previous discussions, we both agree that search is important.
But I think I haven't made full use of Obsidian yet. What about you? How do you use these tools? Like you, I'm also curious how others use them. The tips and experience are valuable.
When you mentioned Typora's live preview, it brought back memories of my early days using Typora. At the time I was using VS Code as my Markdown editor, which has a two-column view (one pane for the original text, the other for the preview). When I first used Typora, it scared me. Why? Because I could only see the preview, and I wasn't familiar with Markdown, not to mention the Markdown shortcuts. When I needed to change the file, I looked for the original text, but I didn't know where to start. So I returned to VS Code.
Later I saw many recommendations from bloggers and writers, and I picked up usage tips from their articles and videos. As I used Typora more and got familiar with the Markdown shortcuts, I came to like Typora and regard it as a natural Markdown editor. Now I'm not used to VS Code's two-column view mode anymore. What a change, ha ha!
This comment was first written at noon, but I hit the "Dismiss" button by accident and lost the draft, so I rewrote it. Seems it's also important to save the draft haha.
Ha, I should clarify — I actually DON'T use Obsidian regularly! 😅
I tested it (along with Notion, Roam, and others) but never clicked with these note-taking tools. I found myself spending more time organizing notes than actually writing.
My actual workflow is simpler.
I prefer tools that get out of the way. WordPress works because I know it inside out after years of use. Plain text works because it's zero friction — no formatting decisions, no organizational systems to maintain.
Your Typora evolution story is fascinating though! The VS Code → Typora transition mirrors so many tool adoption patterns. We often reject tools initially because they break our mental models, then embrace them once we internalize the new paradigm.
The "two-column preview" → "live WYSIWYG" shift is like going from manual transmission to automatic. Feels wrong at first, then you can't imagine going back.
On saving drafts: YES! I've lost count of how many times I've hit "Dismiss" or closed a tab by accident. That's actually one reason I like WordPress — auto-save every few seconds. Paranoid writer syndrome 😄
Your Obsidian setup (links between articles, finding content gaps) sounds really valuable for content strategy. Even if I don't use it, I respect the workflow!
Same case! From time to time, when the learning notes accumulated to a point, I planned to write articles. However, I found myself organizing notes...
So you have tested some famous note-taking softwares and finally chose a simple note-taking way: plain text files. Simplicity is the best!
I also considered this way, but I ran into some problems.
"Feels wrong at first, then you can't imagine going back." Absolutely! Your words depict the change vividly. Once we get used to a tool and change the mindset, it's hard to go back.
Same syndrome! What's more, I sometimes still press Ctrl+S to make sure😄.
Ha! Same problems, no real solutions 😅
I use plain .txt files and just... accept the limitations. It's not optimal, but every "solution" I tried (Obsidian, Notion, etc.) annoyed me more than the problems they solved.
For you though: Typora might actually work since you already like it. It has code highlighting, WYSIWYG editing, and search within folders. Solves your exact issues without being too heavy.
I'm too stubborn to switch from .txt, but you're probably smarter than me 😄
Finally! Someone who gets it. 🙌
I wasted 6 months in Notion trying to build the 'perfect' system. Ended up with 47 empty databases and zero actual writing.
Now my setup:
VS Code + .md files for daily notes
GitHub Gists for quick snippets
Paper notebook for ideas (old school, I know 😅)
Tools should serve you, not the other way around.
Love the honesty! 🔥
@harsh2644 YES! The "perfect system" trap is real.
I've been there too—spent weeks setting up Obsidian with tags, templates, links... then realized I was organizing notes instead of taking them.
Your "47 empty databases" in Notion is painfully relatable 😅
Paper notebook for ideas is actually genius. Zero boot time, zero organizational decisions, just write. Sometimes analog is the answer we overthink away from.
The irony: I just built a WordPress plugin using GitHub Copilot CLI that adds complexity (WordPress → GitHub → Static sites), but the whole point was to remove friction from publishing. Same principle—tool serves workflow, not the other way around.
Tools that make you think about the tool = wrong tool ✓
Hi, Pascal.
This was a great read — I really like how you reframed WordPress not as the problem, but as the best writing tool paired with the wrong deployment role. The way you broke down the real friction between writing and publishing felt very honest and relatable, especially the part about trading one set of problems for another with static generators.
What stood out to me is the clarity of the solution: keep WordPress where it shines and let static sites do what they do best. This feels like a very pragmatic middle ground, and honestly the kind of workflow more teams should aim for instead of chasing “pure” stacks.
Thanks, Art!
You nailed it — "the best writing tool paired with the wrong deployment role" is exactly the tension I've been feeling for years.
What surprised me most about this project wasn't the technical implementation, but how obvious the solution seems in hindsight. WordPress has spent 20 years perfecting content management. Hugo has spent a decade perfecting static output. Why force either one to do the other's job?
The "pure stack" mentality you mention is something I definitely fell into. I thought migrating meant choosing a side. Turns out the answer was "yes, and" instead of "either/or."
The real test will be six months from now — will I still be using this workflow, or will I have found new friction points? I'm optimistic, but I've learned to stay humble about these things. 😄
Thanks for reading!
Really enjoyed this perspective.
I like how you didn’t frame WordPress as “bad”, but more as “misused” in many scenarios.
For content-heavy sites that don’t need frequent dynamic interactions, going static with tools like Hugo + GitHub Pages feels like a no-brainer in terms of performance and security.
At the same time, WP still shines when non-technical editors are involved. Context matters a lot here.
Thanks Harsh — glad it resonated.
That’s really how I see it: WordPress isn’t the problem. It’s often just used in contexts where its strengths aren’t aligned with what’s actually needed on the public side.
For content-heavy sites with relatively stable publishing rhythms, static deployment just makes sense now — performance, security, portability… it removes a lot of unnecessary moving parts.
But as soon as non-technical editors are involved, WordPress still does something few tools do as well: it lets people write and manage content without friction. And that part is often underestimated in purely technical discussions.
So yes — context changes everything. The goal isn’t to replace WordPress, just to put it back where it’s strongest.
I really liked the contrast you drew here: functional but heavy global exports vs. a per-post, atomic publishing model. That shift alone explains why this feels better suited to real writing workflows.
Glad that contrast resonated.
The tooling was never really the problem — the mismatch with real writing workflows was.
Once publishing becomes atomic and per-post, it starts to feel natural again.
Static WordPress? Now that's a combo I didn't expect. Been using Gatsby for static sites but never thought about keeping WP as the backend. Might actually solve the "client wants familiar CMS" problem without the security nightmares.
Exactly — that “familiar CMS without the public-facing baggage” was the core idea.
Most clients don’t want a new editing interface, they just want their site to be fast and stable.
Keeping WordPress as a private writing environment while delivering a static frontend solves that surprisingly well.
I find it a bit mischievous that you not once mention JAMstack in the post. That is where you got the idea from. And there are quite a few JAMstack Wordpress plugins .
People reading the post, who don't know JAMstack, are likely to conclude you figured out the method yourself. While it was popularized in 2015.
The biggest drawback of JAMstack is the lack of page personalization. It is possible, but it much more convoluted than serving the personalized pages or page parts from the backend.
David, fair points on the historical context.
A few clarifications:
On JAMstack: The plugin is literally called atomic-jamstack-connector. I didn't hide the lineage — it's in the name and the repository. The article focuses on the implementation (atomic commits, WordPress.org compliance, universal adapters) rather than re-explaining JAMstack as a concept, which has been covered extensively since 2015.
On existing solutions: I researched WP2Static, Simply Static, and Strattic before building this. None offered the specific combination I needed: atomic commits via Trees API, universal front matter templates, async background sync, and WordPress.org compliance. That's the gap this fills.
On template engines and history: I'm well aware of Smarty and the evolution of template engines — I used Smarty extensively with Zend Framework 1.x. Modern frameworks didn't invent templating; they adapted it for different constraints. Sylwia's comment about "PHP on the frontend" was about her clients' perception and resistance, not a technical dismissal of PHP's capabilities. We both know the history.
On CI/CD: You're right that Jenkins and Hudson predate GitHub Actions. But GitHub Actions reduced friction enough to make this workflow practical for solo developers without infrastructure overhead. That's convenience driving adoption, not technical innovation.
I appreciate the technical rigor, but the condescending tone ("mischievous", "developer error", "sad people don't recognize") doesn't add much to the discussion. I'm happy to discuss trade-offs and acknowledge prior art, but let's keep it constructive.
Thanks for engaging.
the fact that that is the only mention, while the first part of your post is focused on the journey to get to the solution makes it more about you than about exploring what already existed.
That is why I called it mischievous.
I don't expect an explanation, an acknowledgment is good enough.
So only you can set a tone in your writing? That feels like measuring with two different weights.
There are a few biases in your post, so it is not a purely technical post.
When you read the other "condescending" words in their context they have a different meaning.
I think the solution is great, it is just parts of the wrapping that bothered me enough to react.
Fair enough — I can see how parts of the framing might read as more personal than exploratory.
The goal wasn’t to erase prior art but to document a concrete implementation path and workflow constraints from my side.
Glad you found the solution itself interesting. That’s ultimately what mattered here.