DEV Community

Justin Poehnelt for Google Workspace Developers

Posted on • Originally published at justin.poehnelt.com on

How to Connect PostgreSQL to Google Apps Script (JDBC Guide)

Apps Script now supports PostgreSQL through Jdbc.getConnection(). The catch: you can’t use the modern postgres:// connection string format — you must convert it to JDBC’s jdbc:postgresql:// format.

PostgreSQL connected to Google Apps Script

Many of you have been waiting for this one. Google Apps Script’s Jdbc service has quietly added PostgreSQL support, and it opens up a huge range of possibilities for connecting your spreadsheets, forms, and automations directly to one of the most popular relational databases in the world — no middleware required.

But before you copy your provider’s connection string and paste it in, there’s a gotcha you need to know about.

Converting your PostgreSQL connection string for Apps Script

Every modern Postgres provider gives you a connection string that looks like this:

postgres://user:pass@your-host.example.com/mydb?sslmode=require

This will not work in Apps Script. If you paste it directly into Jdbc.getConnection(), you’ll get an unhelpful error.

The fix is to convert it to the JDBC format that Apps Script expects:

jdbc:postgresql://your-host.example.com:5432/mydb?user=user&password=pass&ssl=true

Here’s the full breakdown of what changes:

| Component | Modern Format | Apps Script (JDBC) |
| --- | --- | --- |
| Protocol | `postgres://` or `postgresql://` | `jdbc:postgresql://` |
| Auth | Inline: `user:password@host` | Parameters: `?user=x&password=y` |
| Port | Often implicit (defaults to 5432) | Must be explicit: `:5432` |
| SSL | `sslmode=require` | `ssl=true` (JDBC doesn’t support `sslmode`) |
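If you convert these strings often, the rewrite can be automated. Here’s a minimal sketch — the helper name is ours, not part of any official API — written in plain ECMAScript so it works both in Node and in Apps Script’s V8 runtime. It assumes simple, non-percent-encoded credentials and at most an `sslmode` query parameter:

```javascript
// Hypothetical helper: convert a modern postgres:// connection string
// to the jdbc:postgresql:// form Apps Script expects.
function toJdbcUrl(pgUrl) {
  const m = pgUrl.match(
    /^postgres(?:ql)?:\/\/([^:]+):([^@]+)@([^/:]+)(?::(\d+))?\/([^?]+)(?:\?(.*))?$/
  );
  if (!m) throw new Error("Unrecognized connection string: " + pgUrl);
  const [, user, pass, host, port, db, query] = m;
  // JDBC wants auth as query parameters, not inline user:pass@host.
  const params = ["user=" + user, "password=" + pass];
  // The JDBC driver uses ssl=true rather than sslmode=require.
  if (query && query.indexOf("sslmode=require") !== -1) params.push("ssl=true");
  // JDBC also needs the port to be explicit; default to 5432.
  return "jdbc:postgresql://" + host + ":" + (port || "5432") + "/" + db +
    "?" + params.join("&");
}
```

Run it once against your provider’s string, then store the result in Script Properties rather than calling it at runtime.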

Store your JDBC URL in Script Properties (Project Settings > Script Properties), not in your source code. Never hardcode credentials. See managing secrets in Apps Script for more.

Setting up the connection

Here’s how I configure the connection. The JDBC URL is stored in Script Properties under the key DB_URL:

/**
 * CONFIGURATION
 * Set 'DB_URL' in Project Settings > Script Properties.
 * Format:
 * jdbc:postgresql://HOST:5432/DB
 * ?user=USER&password=PASS&ssl=true
 */
const DB_URL = PropertiesService
  .getScriptProperties().getProperty("DB_URL");

/**
 * HELPER: Centralized Connection Logic
 */
function getDbConnection() {
  if (!DB_URL) throw new Error("DB_URL Script Property is missing.");
  return Jdbc.getConnection(DB_URL);
}


Testing PostgreSQL from Apps Script

I put together a test suite to validate that the full PostgreSQL stack actually works from Apps Script. These aren’t just “hello world” queries — each test targets a specific failure mode.

Here’s why I test these specific things:

  1. Connectivity — Validates that the SSL handshake, credentials, and network path are all correct.
  2. Modern Types — Apps Script’s JDBC driver fails on JSONB and UUID unless you cast to ::text. This test proves the workaround.
  3. Parameterized Queries — Proof that prepareStatement works, protecting against SQL injection.
  4. Transactions — Proof that if your script times out (a common occurrence in Apps Script), the database isn’t left in a corrupted state.

Test 1: Basic connectivity

The simplest possible query — SELECT version(). If this passes, your SSL handshake, credentials, and network path are all correct.

function testConnection() {
  console.log("[1/4] Testing Basic Connection...");
  const conn = getDbConnection();
  const stmt = conn.createStatement();
  const rs = stmt.executeQuery("SELECT version()");

  if (rs.next()) {
    const version = rs.getString(1);
    console.log(" -> Connected: " + version.substring(0, 40) + "...");
  }

  rs.close();
  stmt.close();
  conn.close();
}


Test 2: UUID and JSONB support

This is the test that will save you hours of potential debugging. Apps Script’s JDBC driver doesn’t know how to deserialize Postgres’s JSONB and UUID types natively. The fix is simple but non-obvious: cast everything to ::text in your SELECT statement.

function testModernTypes() {
  console.log("[2/4] Testing UUID & JSONB Support...");
  const conn = getDbConnection();
  const stmt = conn.createStatement();

  // Setup: Create a table with modern types
  stmt.execute(`
    CREATE TABLE IF NOT EXISTS gas_test_types (
      id UUID DEFAULT gen_random_uuid() PRIMARY KEY,
      data JSONB,
      created_at TIMESTAMPTZ DEFAULT NOW()
    );
  `);

  // Cleanup old test data
  stmt.execute("DELETE FROM gas_test_types");

  const testData = '{"test": "json_parsing", "works": true}';
  const sql =
    "INSERT INTO gas_test_types (data) VALUES (?::jsonb)";
  const ps = conn.prepareStatement(sql);
  ps.setString(1, testData);
  ps.execute();
  ps.close();

  // FETCH: strictly cast to ::text to avoid JDBC driver errors
  const rs = stmt.executeQuery(
    "SELECT id::text, data::text FROM gas_test_types LIMIT 1",
  );

  if (rs.next()) {
    const uuid = rs.getString(1);
    const jsonStr = rs.getString(2);
    const jsonObj = JSON.parse(jsonStr);

    if (jsonObj.works === true) {
      console.log(" -> UUID fetched: " + uuid);
      console.log(" -> JSON parsed successfully: " + jsonStr);
    } else {
      throw new Error("JSON parsing mismatch");
    }
  } else {
    throw new Error("No data returned from insert");
  }

  rs.close();
  stmt.close();
  conn.close();
}


The key line is:

SELECT id::text, data::text FROM gas_test_types LIMIT 1

Without ::text, you get a cryptic JDBC error. With it, you get clean strings that JSON.parse() handles perfectly.

Test 3: Parameterized queries

If you’re inserting user-generated data, you must use prepareStatement with ? placeholders instead of string concatenation. This is the same pattern used in any JDBC application — the driver handles escaping for you.

function testParameterizedInsert() {
  console.log("[3/4] Testing Parameterized (Secure) Inserts...");
  const conn = getDbConnection();

  const sql = "INSERT INTO gas_test_types (data) VALUES (?::jsonb)";
  const stmt = conn.prepareStatement(sql);

  // Bind variable to the first '?'
  // We stringify because JDBC doesn't know what a JS Object is
  const data = { user: "Secure User", role: "admin" };
  stmt.setString(1, JSON.stringify(data));

  const rows = stmt.executeUpdate();

  if (rows !== 1)
    throw new Error("Parameterized insert failed to affect 1 row.");
  console.log(" -> Secure insert successful.");

  stmt.close();
  conn.close();
}


Note the ?::jsonb cast in the SQL. The ? is the JDBC placeholder, and ::jsonb tells Postgres to treat the bound string as JSON. This way you can pass a JSON.stringify()‘d object directly.

Test 4: Transaction rollback

Apps Script has a 6-minute execution limit. If your script is in the middle of a multi-step database operation when it times out, you need to know that your data is safe.

This test proves that conn.setAutoCommit(false) plus conn.rollback() works as expected — a valid insert followed by an invalid one results in neither being committed.

function testTransactionRollback() {
  console.log("[4/4] Testing Transaction Rollback...");
  const conn = getDbConnection();

  // Disable auto-commit to start transaction mode
  conn.setAutoCommit(false);

  try {
    const stmt = conn.createStatement();

    // 1. Valid Insert
    stmt.execute(
      "INSERT INTO gas_test_types (data) " +
        'VALUES (\'{"step": "transaction_start"}\')',
    );

    // 2. Simulate Error (e.g., bad SQL syntax or script logic error)
    // This SQL is invalid because column 'fake_col' doesn't exist
    stmt.execute(
      "INSERT INTO gas_test_types (fake_col) VALUES ('fail')"
    );

    conn.commit(); // Should not be reached
  } catch (e) {
    console.log(
      " -> Caught expected error: " +
        e.message.substring(0, 50) + "...",
    );
    conn.rollback();
    console.log(" -> Rollback executed.");
  } finally {
    conn.close();
  }

  // Verification: Ensure the first insert is NOT in DB
  const verifyConn = getDbConnection();
  const verifyStmt = verifyConn.createStatement();
  const rs = verifyStmt.executeQuery(
    "SELECT count(*) FROM gas_test_types " +
      "WHERE data->>'step' = 'transaction_start'",
  );

  rs.next();
  const count = rs.getInt(1);
  if (count === 0) {
    console.log(" -> Rollback verified: No partial data exists.");
  } else {
    throw new Error("Rollback failed! Partial data found in DB.");
  }

  rs.close();
  verifyStmt.close();
  verifyConn.close();
}


Test 5: Batch read/write performance

How fast is the JDBC bridge, really? This test measures cold-connection overhead for single reads and writes, then inserts 100 rows using addBatch()/executeBatch() and reads them back, logging per-row timing so you know what to expect.

function testPerformance() {
  console.log("[perf] Testing Read/Write Performance...");

  const ROWS = 100;
  const insertSql = "INSERT INTO gas_test_perf (value) VALUES (?)";

  // --- Setup ---
  let setupConn, setupStmt;
  try {
    setupConn = getDbConnection();
    setupStmt = setupConn.createStatement();
    setupStmt.execute(`
      CREATE TABLE IF NOT EXISTS gas_test_perf (
        id SERIAL PRIMARY KEY,
        value TEXT,
        created_at TIMESTAMPTZ DEFAULT NOW()
      );
    `);
    // TRUNCATE is faster than DELETE
    setupStmt.execute(
      "TRUNCATE TABLE gas_test_perf " + "RESTART IDENTITY CASCADE",
    );
  } catch (e) {
    console.error("Setup failed: " + e.message);
    return;
  } finally {
    if (setupStmt) setupStmt.close();
    if (setupConn) setupConn.close();
  }

  // --- New connection, single write (n=1) ---
  let conn1, ps1;
  try {
    const t1ConnStart = Date.now();
    conn1 = getDbConnection();
    const t1ConnMs = Date.now() - t1ConnStart;

    const t1WriteStart = Date.now();
    ps1 = conn1.prepareStatement(insertSql);
    ps1.setString(1, "cold-write");
    ps1.executeUpdate();
    const t1WriteMs = Date.now() - t1WriteStart;

    console.log(
      " new conn + write (n=1): " +
        "conn: " +
        t1ConnMs +
        "ms | " +
        "write: " +
        t1WriteMs +
        "ms",
    );
  } catch (e) {
    console.error("Write n=1 failed:", e);
  } finally {
    if (ps1) ps1.close();
    if (conn1) conn1.close();
  }

  // --- New connection, single read (n=1) ---
  let conn2, stmt2, rs2;
  try {
    const t2ConnStart = Date.now();
    conn2 = getDbConnection();
    const t2ConnMs = Date.now() - t2ConnStart;

    const t2ReadStart = Date.now();
    stmt2 = conn2.createStatement();
    const readSql =
      "SELECT id, value FROM gas_test_perf LIMIT 1";
    rs2 = stmt2.executeQuery(readSql);

    if (rs2.next()) {
      // Extract data to mimic real workload
      rs2.getString("value");
    }
    const t2ReadMs = Date.now() - t2ReadStart;

    console.log(
      " new conn + read (n=1): " +
        "conn: " +
        t2ConnMs +
        "ms | " +
        "read: " +
        t2ReadMs +
        "ms",
    );
  } catch (e) {
    console.error("Read n=1 failed:", e);
  } finally {
    if (rs2) rs2.close();
    if (stmt2) stmt2.close();
    if (conn2) conn2.close();
  }

  // --- Existing connection, batch write & read ---
  let conn3, cleanStmt, ps3, stmt4, rs4;
  try {
    conn3 = getDbConnection();

    cleanStmt = conn3.createStatement();
    cleanStmt.execute(
      "TRUNCATE TABLE gas_test_perf " + "RESTART IDENTITY CASCADE",
    );
    cleanStmt.close();
    cleanStmt = null; // Prevent double-close in finally block

    // -- BATCH WRITE --
    // Disable auto-commit for batch perf
    conn3.setAutoCommit(false);
    ps3 = conn3.prepareStatement(insertSql);

    const t3Start = Date.now();
    for (let i = 0; i < ROWS; i++) {
      ps3.setString(1, "row-" + i);
      ps3.addBatch();
    }
    ps3.executeBatch();
    conn3.commit(); // Explicitly commit the transaction
    const t3Ms = Date.now() - t3Start;

    console.log(
      " batch write (n=" +
        ROWS +
        "): " +
        (t3Ms / ROWS).toFixed(2) +
        "ms/row" +
        " (Total: " +
        t3Ms +
        "ms)",
    );

    // Restore default state before reading
    conn3.setAutoCommit(true);

    // -- BATCH READ --
    stmt4 = conn3.createStatement();

    // Start timer BEFORE executeQuery
    const t4Start = Date.now();
    rs4 = stmt4.executeQuery(
      "SELECT id, value " + "FROM gas_test_perf ORDER BY id",
    );

    let count = 0;
    while (rs4.next()) {
      count++;
      // Extract data to mimic real workload
      rs4.getString("value");
    }
    const t4Ms = Date.now() - t4Start;

    if (count === 0) {
      throw new Error("Batch read returned 0 rows");
    }

    console.log(
      " batch read (n=" +
        count +
        "): " +
        (t4Ms / count).toFixed(2) +
        "ms/row" +
        " (Total: " +
        t4Ms +
        "ms)",
    );
  } catch (e) {
    console.error("Batch test failed:", e);
  } finally {
    if (rs4) rs4.close();
    if (stmt4) stmt4.close();
    if (ps3) ps3.close();
    if (cleanStmt) cleanStmt.close();
    if (conn3) {
      // Best effort pool restore
      try {
        conn3.setAutoCommit(true);
      } catch (e) {}
      conn3.close();
    }
  }
}


Running the full suite

Wire it all up with a single entry point:

function runAllTests() {
  console.log("=== STARTING POSTGRES TESTS ===");

  try {
    testConnection();
    testModernTypes();
    testParameterizedInsert();
    testTransactionRollback();
    testPerformance();
    console.log("=== ALL TESTS PASSED SUCCESSFULLY ===");
  } catch (e) {
    console.error("!!! TEST SUITE FAILED !!!");
    console.error(e.message);
  }
}


If everything is configured correctly, you should see:

=== STARTING POSTGRES TESTS ===
[1/4] Testing Basic Connection...
   -> Connected: PostgreSQL 18.1 (a027103) on aarch64-unk...
[2/4] Testing UUID & JSONB Support...
   -> UUID fetched: 543cd4a1-6e72-4fd8-b492-497df26ce5b7
   -> JSON parsed successfully: {"test": "json_parsing", "works": true}
[3/4] Testing Parameterized (Secure) Inserts...
   -> Secure insert successful.
[4/4] Testing Transaction Rollback...
   -> Caught expected error: ERROR: column "fake_col" of relation "gas_test_typ...
   -> Rollback executed.
   -> Rollback verified: No partial data exists.
[perf] Testing Read/Write Performance...
   new conn + write (n=1): conn: 248ms | write: 116ms
   new conn + read (n=1): conn: 251ms | read: 120ms
   batch write (n=100): 51.02ms/row (Total: 5102ms)
   batch read (n=100): 51.36ms/row (Total: 5136ms)
=== ALL TESTS PASSED SUCCESSFULLY ===

Your numbers will vary depending on the region of your database.

Bonus: PostGIS spatial queries

If your Postgres provider supports PostGIS, you get full spatial query support from Apps Script. That means distance calculations, proximity searches, and GeoJSON output — all in a server-side script.

This test enables PostGIS, inserts two points using WKT (Well-Known Text), and then runs a proximity query that calculates distances and returns GeoJSON:

function testPostGIS() {
  console.log("=== STARTING POSTGIS TESTS ===");
  const conn = Jdbc.getConnection(DB_URL);

  try {
    const stmt = conn.createStatement();

    // 1. SETUP: Enable PostGIS & Create Table
    // Note: 'CREATE EXTENSION' might require admin privileges.
    console.log("[1/3] Setting up PostGIS...");
    stmt.execute("CREATE EXTENSION IF NOT EXISTS postgis");

    stmt.execute(`
      CREATE TABLE IF NOT EXISTS spatial_test (
        id SERIAL PRIMARY KEY,
        name TEXT,
        geom GEOMETRY(Point, 4326) -- Standard WGS84 (Lat/Lon)
      )
    `);
    stmt.execute("DELETE FROM spatial_test"); // Clean slate

    // 2. INSERT: Using WKT (Well-Known Text)
    // We use a PreparedStatement to safely insert coordinates
    console.log("[2/3] Inserting Spatial Data...");
    const insertSql =
      "INSERT INTO spatial_test (name, geom) " +
      "VALUES (?, ST_GeomFromText(?, 4326))";
    const ps = conn.prepareStatement(insertSql);

    // Point A: The White House (-77.0365, 38.8977)
    ps.setString(1, "White House");
    ps.setString(2, "POINT(-77.0365 38.8977)");
    ps.addBatch();

    // Point B: The Washington Monument (-77.0353, 38.8895) ~1km away
    ps.setString(1, "Washington Monument");
    ps.setString(2, "POINT(-77.0353 38.8895)");
    ps.addBatch();

    ps.executeBatch();
    ps.close();

    // 3. QUERY: Spatial Math & GeoJSON
    // Ask Postgres to calculate distance
    // and format the result as JSON
    console.log("[3/3] Running Spatial Query...");
    const query = `
      SELECT 
        name, 
        ST_Distance(
          geom::geography, 
          ST_GeomFromText('POINT(-77.0365 38.8977)', 4326)::geography
        ) as meters_away,
        ST_AsGeoJSON(geom)::text as geojson 
      FROM spatial_test
      WHERE ST_DWithin(
        geom::geography, 
        ST_GeomFromText('POINT(-77.0365 38.8977)', 4326)::geography, 
        2000 -- Look for points within 2000 meters
      )
    `;

    const rs = stmt.executeQuery(query);

    while (rs.next()) {
      const name = rs.getString(1);
      const dist = parseFloat(rs.getString(2)).toFixed(0);
      const json = rs.getString(3); // Grab the GeoJSON string

      console.log(` -> Found: ${name}`);
      console.log(` Distance: ${dist} meters`);
      console.log(` GeoJSON: ${json}`);
    }

    rs.close();
    stmt.close();
  } catch (e) {
    console.error("PostGIS Test Failed: " + e.message);
    console.error(
      "Ensure your database user has " +
        "permission to 'CREATE EXTENSION postgis'",
    );
  } finally {
    conn.close();
  }
}


The ST_Distance function with ::geography casting gives you real-world meters (not degrees), and ST_AsGeoJSON produces standard GeoJSON you can drop straight into a map library. The ST_DWithin filter keeps the query efficient by only looking at points within a 2 km radius.

The CREATE EXTENSION postgis command may require admin/superuser privileges. Most managed Postgres providers pre-enable PostGIS or let you enable it from their dashboard.

Common PostgreSQL + Apps Script problems

Once you’ve confirmed everything works, here are the four things that will bite you in production.

1. The firewall allow-listing nightmare

Google Apps Script does not run on a static IP address. It runs on a massive, dynamic range of Google IPs that change frequently.

  • The trap: You try to secure your database by only allowing connections from your server’s IP. Your script fails immediately.
  • The failed fix: You try to allow-list Google’s IP ranges. The list is huge, changes often, and is a maintenance burden.
  • The real fix:
    • Option A (cloud providers): Rely on SSL/TLS authentication rather than IP allow-listing. Configure your firewall to accept connections from any IP, but enforce ssl=true in your JDBC URL and use a strong, unique password.
    • Option B (enterprise/on-prem): If you must restrict access to a static IP (e.g., for a corporate database), Apps Script can’t connect directly. Route connections through an intermediary that does have a static IP, such as a small proxy server or an HTTP API in front of the database.

Opening your database to all IPs is a security tradeoff. Only do this if SSL/TLS is enforced at the server level (not just in your connection string) and you use long, random credentials. Most managed Postgres providers enforce SSL by default, but verify this in your provider’s settings. If your database contains sensitive data, consider Option B with a proxy instead.

2. The “connection storm”

Apps Script is serverless in the truest sense. Every time your script runs — a form submission trigger, a scheduled job, a menu click — it spins up a fresh instance and opens a new connection to Postgres.

  • The trap: If 100 people submit your form in 1 minute, Apps Script attempts 100 simultaneous connections.
  • The result: FATAL: remaining connection slots are reserved for non-replication superuser roles. Your app crashes.
  • The fix: Use a connection pooler. Some providers include this out of the box — look for the -pooler suffix in your connection URL or enable it in your provider’s dashboard. The pooler funnels thousands of incoming requests into a few stable connections to the actual database. Always use the pooled connection string for Apps Script, never the direct one. PgBouncer is a popular open-source connection pooler if you need to set this up yourself.
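As an illustration, the pooled and direct connection strings typically differ only in the host. The hostnames below are hypothetical — check your own provider’s dashboard for the exact naming:

```
-- Direct connection: fine for one-off manual runs
jdbc:postgresql://db.example.com:5432/mydb?user=app&password=secret&ssl=true

-- Pooled connection (note the "-pooler" host): use this for triggers
jdbc:postgresql://db-pooler.example.com:5432/mydb?user=app&password=secret&ssl=true
```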

3. The cold start timeout

Apps Script has a strict 6-minute runtime limit. Serverless databases often “scale to zero” when idle to save costs.

  • The trap: Your nightly script tries to connect, but the database takes 5–10 seconds to wake up. The JDBC driver times out before the database is ready.
  • The fix: Implement a retry loop in your connection logic:
function getDbConnection() {
  const MAX_RETRIES = 3;
  for (let i = 0; i < MAX_RETRIES; i++) {
    try {
      return Jdbc.getConnection(DB_URL);
    } catch (e) {
      console.log("Connection failed (sleeping?): " + e.message);
      // Wait 5 seconds before retrying (skip the sleep after the last attempt)
      if (i < MAX_RETRIES - 1) Utilities.sleep(5000);
    }
  }
  throw new Error("DB unreachable after " + MAX_RETRIES + " attempts");
}


4. The silent data corruption (timezones)

Apps Script (JavaScript) and your database (Postgres) might disagree on what time it is.

  • The trap: You insert new Date() from Apps Script. It sends 2026-02-17 10:00:00. Is that UTC? EST? PST?
  • The result: Your “Daily Report” runs at midnight but misses the last 4 hours of data because Postgres thinks those records are from “tomorrow.”
  • The fix:
    • Database side: Always use TIMESTAMPTZ (Timestamp with Time Zone) columns, never bare TIMESTAMP.
    • Script side: Let Postgres handle timestamp generation using NOW() or CURRENT_TIMESTAMP in the SQL query itself, rather than passing a JavaScript Date object.
-- Safe: let Postgres generate the timestamp
INSERT INTO logs (message, created_at) VALUES (?, NOW())
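If you do need to generate a timestamp on the script side, a defensive sketch is to bind an explicit UTC ISO-8601 string. The helper name here is ours, not from the Jdbc API — the point is simply that a string with a trailing "Z" leaves nothing for Postgres to guess:

```javascript
// Hypothetical helper: format a JS Date as an unambiguous UTC string
// suitable for binding into a TIMESTAMPTZ column with setString().
function utcTimestamp(date) {
  // toISOString() always emits UTC with a trailing "Z",
  // e.g. "2026-02-17T10:00:00.000Z" -- no timezone guessing needed.
  return date.toISOString();
}

// Usage with a prepared statement (sketch):
// ps.setString(2, utcTimestamp(new Date()));
```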

What this unlocks

With a real PostgreSQL database behind Apps Script, you’re no longer limited to the 1000-item ceiling of PropertiesService or the 10 MB cap on Sheets. You can now build Apps Script automations that:

  • Store structured data with proper schemas, indexes, and constraints.
  • Run complex queries — joins, aggregations, window functions — directly from your script.
  • Scale with your Postgres provider’s infrastructure instead of fighting Apps Script storage limits.
  • Share data between Apps Script projects, web apps, and backend services through a single database.

The combination of Apps Script’s deep Google Workspace integration and PostgreSQL’s power as a general-purpose database is genuinely useful. I’m excited to see what people build with it.

Complete code

Here’s everything in a single file you can paste into the Apps Script editor:

/**
 * CONFIGURATION
 * Set 'DB_URL' in Project Settings > Script Properties.
 * Format:
 * jdbc:postgresql://HOST:5432/DB
 * ?user=USER&password=PASS&ssl=true
 */
const DB_URL = PropertiesService
  .getScriptProperties().getProperty("DB_URL");

/**
 * HELPER: Centralized Connection Logic
 */
function getDbConnection() {
  if (!DB_URL) throw new Error("DB_URL Script Property is missing.");
  return Jdbc.getConnection(DB_URL);
}

function runAllTests() {
  console.log("=== STARTING POSTGRES TESTS ===");

  try {
    testConnection();
    testModernTypes();
    testParameterizedInsert();
    testTransactionRollback();
    testPerformance();
    console.log("=== ALL TESTS PASSED SUCCESSFULLY ===");
  } catch (e) {
    console.error("!!! TEST SUITE FAILED !!!");
    console.error(e.message);
  }
}

function testConnection() {
  console.log("[1/4] Testing Basic Connection...");
  const conn = getDbConnection();
  const stmt = conn.createStatement();
  const rs = stmt.executeQuery("SELECT version()");

  if (rs.next()) {
    const version = rs.getString(1);
    console.log(" -> Connected: " + version.substring(0, 40) + "...");
  }

  rs.close();
  stmt.close();
  conn.close();
}

function testModernTypes() {
  console.log("[2/4] Testing UUID & JSONB Support...");
  const conn = getDbConnection();
  const stmt = conn.createStatement();

  // Setup: Create a table with modern types
  stmt.execute(`
    CREATE TABLE IF NOT EXISTS gas_test_types (
      id UUID DEFAULT gen_random_uuid() PRIMARY KEY,
      data JSONB,
      created_at TIMESTAMPTZ DEFAULT NOW()
    );
  `);

  // Cleanup old test data
  stmt.execute("DELETE FROM gas_test_types");

  const testData = '{"test": "json_parsing", "works": true}';
  const sql =
    "INSERT INTO gas_test_types (data) VALUES (?::jsonb)";
  const ps = conn.prepareStatement(sql);
  ps.setString(1, testData);
  ps.execute();
  ps.close();

  // FETCH: strictly cast to ::text to avoid JDBC driver errors
  const rs = stmt.executeQuery(
    "SELECT id::text, data::text FROM gas_test_types LIMIT 1",
  );

  if (rs.next()) {
    const uuid = rs.getString(1);
    const jsonStr = rs.getString(2);
    const jsonObj = JSON.parse(jsonStr);

    if (jsonObj.works === true) {
      console.log(" -> UUID fetched: " + uuid);
      console.log(" -> JSON parsed successfully: " + jsonStr);
    } else {
      throw new Error("JSON parsing mismatch");
    }
  } else {
    throw new Error("No data returned from insert");
  }

  rs.close();
  stmt.close();
  conn.close();
}

function testParameterizedInsert() {
  console.log("[3/4] Testing Parameterized (Secure) Inserts...");
  const conn = getDbConnection();

  const sql = "INSERT INTO gas_test_types (data) VALUES (?::jsonb)";
  const stmt = conn.prepareStatement(sql);

  // Bind variable to the first '?'
  // We stringify because JDBC doesn't know what a JS Object is
  const data = { user: "Secure User", role: "admin" };
  stmt.setString(1, JSON.stringify(data));

  const rows = stmt.executeUpdate();

  if (rows !== 1)
    throw new Error("Parameterized insert failed to affect 1 row.");
  console.log(" -> Secure insert successful.");

  stmt.close();
  conn.close();
}

function testTransactionRollback() {
  console.log("[4/4] Testing Transaction Rollback...");
  const conn = getDbConnection();

  // Disable auto-commit to start transaction mode
  conn.setAutoCommit(false);

  try {
    const stmt = conn.createStatement();

    // 1. Valid Insert
    stmt.execute(
      "INSERT INTO gas_test_types (data) " +
        'VALUES (\'{"step": "transaction_start"}\')',
    );

    // 2. Simulate Error (e.g., bad SQL syntax or script logic error)
    // This SQL is invalid because column 'fake_col' doesn't exist
    stmt.execute(
      "INSERT INTO gas_test_types (fake_col) VALUES ('fail')"
    );

    conn.commit(); // Should not be reached
  } catch (e) {
    console.log(
      " -> Caught expected error: " +
        e.message.substring(0, 50) + "...",
    );
    conn.rollback();
    console.log(" -> Rollback executed.");
  } finally {
    conn.close();
  }

  // Verification: Ensure the first insert is NOT in DB
  const verifyConn = getDbConnection();
  const verifyStmt = verifyConn.createStatement();
  const rs = verifyStmt.executeQuery(
    "SELECT count(*) FROM gas_test_types " +
      "WHERE data->>'step' = 'transaction_start'",
  );

  rs.next();
  const count = rs.getInt(1);
  if (count === 0) {
    console.log(" -> Rollback verified: No partial data exists.");
  } else {
    throw new Error("Rollback failed! Partial data found in DB.");
  }

  rs.close();
  verifyStmt.close();
  verifyConn.close();
}

function testPerformance() {
  console.log("[perf] Testing Read/Write Performance...");

  const ROWS = 100;
  const insertSql = "INSERT INTO gas_test_perf (value) VALUES (?)";

  // --- Setup ---
  let setupConn, setupStmt;
  try {
    setupConn = getDbConnection();
    setupStmt = setupConn.createStatement();
    setupStmt.execute(`
      CREATE TABLE IF NOT EXISTS gas_test_perf (
        id SERIAL PRIMARY KEY,
        value TEXT,
        created_at TIMESTAMPTZ DEFAULT NOW()
      );
    `);
    // TRUNCATE is faster than DELETE
    setupStmt.execute(
      "TRUNCATE TABLE gas_test_perf " + "RESTART IDENTITY CASCADE",
    );
  } catch (e) {
    console.error("Setup failed: " + e.message);
    return;
  } finally {
    if (setupStmt) setupStmt.close();
    if (setupConn) setupConn.close();
  }

  // --- New connection, single write (n=1) ---
  let conn1, ps1;
  try {
    const t1ConnStart = Date.now();
    conn1 = getDbConnection();
    const t1ConnMs = Date.now() - t1ConnStart;

    const t1WriteStart = Date.now();
    ps1 = conn1.prepareStatement(insertSql);
    ps1.setString(1, "cold-write");
    ps1.executeUpdate();
    const t1WriteMs = Date.now() - t1WriteStart;

    console.log(
      " new conn + write (n=1): " +
        "conn: " +
        t1ConnMs +
        "ms | " +
        "write: " +
        t1WriteMs +
        "ms",
    );
  } catch (e) {
    console.error("Write n=1 failed:", e);
  } finally {
    if (ps1) ps1.close();
    if (conn1) conn1.close();
  }

  // --- New connection, single read (n=1) ---
  let conn2, stmt2, rs2;
  try {
    const t2ConnStart = Date.now();
    conn2 = getDbConnection();
    const t2ConnMs = Date.now() - t2ConnStart;

    const t2ReadStart = Date.now();
    stmt2 = conn2.createStatement();
    const readSql =
      "SELECT id, value FROM gas_test_perf LIMIT 1";
    rs2 = stmt2.executeQuery(readSql);

    if (rs2.next()) {
      // Extract data to mimic real workload
      rs2.getString("value");
    }
    const t2ReadMs = Date.now() - t2ReadStart;

    console.log(
      " new conn + read (n=1): " +
        "conn: " +
        t2ConnMs +
        "ms | " +
        "read: " +
        t2ReadMs +
        "ms",
    );
  } catch (e) {
    console.error("Read n=1 failed:", e);
  } finally {
    if (rs2) rs2.close();
    if (stmt2) stmt2.close();
    if (conn2) conn2.close();
  }

  // --- Existing connection, batch write & read ---
  let conn3, cleanStmt, ps3, stmt4, rs4;
  try {
    conn3 = getDbConnection();

    cleanStmt = conn3.createStatement();
    cleanStmt.execute(
      "TRUNCATE TABLE gas_test_perf " + "RESTART IDENTITY CASCADE",
    );
    cleanStmt.close();
    cleanStmt = null; // Prevent double-close in finally block

    // -- BATCH WRITE --
    // Disable auto-commit for batch perf
    conn3.setAutoCommit(false);
    ps3 = conn3.prepareStatement(insertSql);

    const t3Start = Date.now();
    for (let i = 0; i < ROWS; i++) {
      ps3.setString(1, "row-" + i);
      ps3.addBatch();
    }
    ps3.executeBatch();
    conn3.commit(); // Explicitly commit the transaction
    const t3Ms = Date.now() - t3Start;

    console.log(
      " batch write (n=" +
        ROWS +
        "): " +
        (t3Ms / ROWS).toFixed(2) +
        "ms/row" +
        " (Total: " +
        t3Ms +
        "ms)",
    );

    // Restore default state before reading
    conn3.setAutoCommit(true);

    // -- BATCH READ --
    stmt4 = conn3.createStatement();

    // Start timer BEFORE executeQuery
    const t4Start = Date.now();
    rs4 = stmt4.executeQuery(
      "SELECT id, value " + "FROM gas_test_perf ORDER BY id",
    );

    let count = 0;
    while (rs4.next()) {
      count++;
      // Extract data to mimic real workload
      rs4.getString("value");
    }
    const t4Ms = Date.now() - t4Start;

    if (count === 0) {
      throw new Error("Batch read returned 0 rows");
    }

    console.log(
      " batch read (n=" +
        count +
        "): " +
        (t4Ms / count).toFixed(2) +
        "ms/row" +
        " (Total: " +
        t4Ms +
        "ms)",
    );
  } catch (e) {
    console.error("Batch test failed:", e);
  } finally {
    if (rs4) rs4.close();
    if (stmt4) stmt4.close();
    if (ps3) ps3.close();
    if (cleanStmt) cleanStmt.close();
    if (conn3) {
      // Best effort pool restore
      try {
        conn3.setAutoCommit(true);
      } catch (e) {}
      conn3.close();
    }
  }
}
