Updated JDBC not working after modification
a week ago
I have a SQL statement in a JDBC action that was working, but when I add a column to the SELECT I get the error:
Schema did not match. If the table schema is altered on the target database, please test the JDBC step.
Even after clicking "Test JDBC step" it still fails. What could it be? The SQL runs fine both in the database and in the "Test JDBC Step".
- Labels: Connection, JDBC, Query
a week ago
ServiceNow – JDBC: “Schema did not match” after adding a column (even when SQL tests fine)
Problem
You have a JDBC step/action that worked. After adding a new column to the SELECT, runtime fails with:
Schema did not match. If the table schema is altered on the target database, please test the JDBC step.
But the SQL runs fine both in the DB and in “Test JDBC step”.
Why this happens
ServiceNow validates the result set against a cached/declared schema:
1) IntegrationHub Flow/Action (JDBC step) — the step’s output schema is fixed from the last test/generation. If the live query returns more/fewer/different columns than the declared outputs, the engine throws “Schema did not match” even if SQL is valid.
2) JDBC Data Source → Import Set table — the import set table structure (columns/dictionary) must match the incoming result set. If you add/remove columns in SQL, the import set table must be updated (new Dictionary entries) or auto-generated again; otherwise, mismatch occurs.
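In both paths the failure mode is the same: a declared-vs-live column comparison. The sketch below is hypothetical (ServiceNow's actual validation is internal, and the column names are made up), but it reproduces the behavior you're seeing:

```python
# Hypothetical sketch of the schema check; ServiceNow's real logic is internal,
# but the failure mode is equivalent to comparing declared vs. live columns.
def check_schema(declared_columns, live_columns):
    """Raise if the live result set doesn't match the cached schema."""
    if list(declared_columns) != list(live_columns):
        missing = set(declared_columns) - set(live_columns)
        extra = set(live_columns) - set(declared_columns)
        raise ValueError(
            f"Schema did not match. Missing: {sorted(missing)}, extra: {sorted(extra)}"
        )

# Schema cached when the step was last tested/generated:
declared = ["device_id", "device_name"]
# What the modified SELECT now returns:
live = ["device_id", "device_name", "os_version"]

try:
    check_schema(declared, live)
except ValueError as e:
    print(e)  # the runtime error fires even though the SQL itself is valid
```

This is why re-running the SQL never helps by itself: the live side changes, but the declared side stays frozen until you regenerate it.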
Identify which path you are using
A) Flow Designer / Action Designer “JDBC” step (store results in outputs)
B) Scheduled Import → Data Source (Type: JDBC) that loads into an Import Set table and then a Transform Map
Fixes
A) IntegrationHub (Flow/Action) JDBC step
1. Open the Action/Flow → the JDBC step → click **Test** with your current query so it returns a fresh sample.
2. In Action Designer: click **Use sample output to generate outputs** (or **Generate outputs from sample**) to refresh the step output schema. Ensure the new column appears under Outputs.
3. If you mapped those outputs into variables or subflow outputs, update those mappings as well.
4. Re-publish the Action/Subflow/Flow.
Common gotchas:
- The “Test” alone is not enough; you must regenerate the **Outputs** from the sample, or manually add the new field to step outputs.
- If you are using a Subflow that returns a fixed data pill schema, update the Subflow Outputs too.
- If you’re using “Store result as Records” (Data Stream), validate the record definition aligns with the new column set.
B) JDBC Data Source → Import Set
1. Open the **Data Source** (Type: JDBC).
2. Click **Test Load 20 records** (or similar) to fetch the new result set.
3. In the Data Source (or the Import Set table form), click **Auto-generate columns** / **Create table columns** / **Update source columns** (name varies by release) to add any new columns to the Import Set table. Verify the import set table (e.g., u_intune_device_import) now has the new field as a Dictionary entry.
4. Review the **Transform Map**:
- Add a Field Map for the new column (if needed).
- If “Enforce source schema” is enabled, disable it or ensure the source column set matches.
5. Run the import again (Load Data → Transform).
Common gotchas:
- Import Set table is fixed; new SELECT columns require new dictionary fields.
- If your SQL aliases changed (e.g., renamed a column), update Field Maps and any coalesce keys.
- If reconcile rules expect that column, add it before transform to avoid aborts.
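When a source column is renamed upstream, aliasing it back to the name your Field Maps expect keeps the result-set schema stable, so neither the import set table nor the maps need changes. A minimal illustration using SQLite (table and column names are made up):

```python
import sqlite3

# In-memory database standing in for the remote JDBC source (names are made up).
conn = sqlite3.connect(":memory:")
# Suppose the source column was renamed dev_label -> dev_label_v2 upstream.
conn.execute("CREATE TABLE devices (dev_serial TEXT, dev_label_v2 TEXT)")
conn.execute("INSERT INTO devices VALUES ('SN-001', 'Laptop 12')")

# Aliasing the renamed column back to the old consumer-facing name keeps the
# result-set schema unchanged from the integration's point of view.
cur = conn.execute(
    "SELECT dev_serial AS serial_number, dev_label_v2 AS device_name FROM devices"
)
columns = [d[0] for d in cur.description]
print(columns)  # ['serial_number', 'device_name']
```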
Other checks (applies to both)
- Clear output caching: close/re-open the Action; sometimes the designer cache needs a fresh “Generate outputs from sample”.
- Watch for driver normalization: some JDBC drivers change case or pad spaces; match your column aliases exactly.
- If using **MID Server**: ensure the MID used by “Test” and by execution is the same; different MIDs may see different DB schemas.
- Use stable aliasing in your SQL: always alias computed fields (e.g., SELECT COUNT(*) AS total_count) and avoid duplicate column names.
- Avoid “SELECT *” — specify columns and aliases so the schema is deterministic.
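The last two points can be demonstrated directly: `SELECT *` picks up whatever columns the table currently has, while an explicit, aliased column list keeps the result-set schema fixed. A SQLite sketch (table and column names are illustrative):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE assets (id INTEGER, name TEXT)")

def result_columns(sql):
    """Return the column names the query produces, as the JDBC layer sees them."""
    return [d[0] for d in conn.execute(sql).description]

before = result_columns("SELECT * FROM assets")
conn.execute("ALTER TABLE assets ADD COLUMN os_version TEXT")  # DB-side change
after = result_columns("SELECT * FROM assets")
print(before, after)  # SELECT * silently picked up the new column -> schema drift

# An explicit column list with aliases is unaffected by the new column:
stable = result_columns("SELECT id AS asset_id, name AS asset_name FROM assets")
print(stable)  # ['asset_id', 'asset_name']
```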
Quick diagnostic checklist
- Action/Flow route: Did you regenerate Outputs from sample after the SQL change? (This is the #1 cause)
- Data Source route: Did you re-auto-generate columns for the Import Set table and update Field Maps?
- Are column aliases unique and consistent?
- Are you executing on the same MID Server as your test?
- Do you have “Enforce source schema” enabled anywhere?
TL;DR
The SQL works, but ServiceNow is enforcing a saved/cached schema. Refresh the declared outputs (IH Action) or update the Import Set table columns (Data Source/Transform Map). Simply “Test JDBC step” isn’t enough—you must regenerate outputs or recreate import-set columns to match the new SELECT.
