
Commit ac90080

feat: add sqlfmt pre-commit hook and format all SQL files (#953)
* feat: add sqlfmt pre-commit hook and format all SQL files
  - Add sqlfmt (v0.29.0) as a pre-commit hook with jinjafmt support
  - Run sqlfmt on all SQL files in the repo (excluding dbt_packages/)
  - This enforces consistent SQL formatting across the codebase
  Resolves: ELE-5274
  Co-Authored-By: Itamar Hartstein <haritamar@gmail.com>

* fix: restore return() parens broken by sqlfmt + update CI to Python 3.10
  - Fix Jinja compilation error in is_primitive macro where sqlfmt removed parentheses from return() call
  - Update run-precommit.yml to Python 3.10 (sqlfmt requires >= 3.10)
  Co-Authored-By: Itamar Hartstein <haritamar@gmail.com>

* fix: restore Jinja return() parens, ClickHouse casing, exclude unparseable files
  - Fix all broken Jinja return() calls across 3 files (sqlfmt removed parens)
  - Restore ClickHouse function casing with -- fmt: off protection (parseDateTimeBestEffortOrNull, formatDateTime, toString, toDateTime)
  - Exclude 4 unparseable SQL files from sqlfmt hook in .pre-commit-config.yaml
  - Error message formatting updated by sqlfmt (cosmetic only)
  Co-Authored-By: Itamar Hartstein <haritamar@gmail.com>

* fix: restore files with Python/YAML/non-SQL content damaged by sqlfmt
  - Restore compile_py_code.sql (Python code for Snowflake/BigQuery)
  - Restore test_json_schema.sql (Python code for json validation)
  - Restore python.sql (Python test macros)
  - Restore generate_schema_baseline_test.sql (YAML template output)
  - Restore empty_table.sql (corrupted Jinja dict key access)
  - Exclude all files with non-SQL content from sqlfmt hook
  Co-Authored-By: Itamar Hartstein <haritamar@gmail.com>

* fix: restore corrupted Jinja dict keys in test_exposure_schema_validity.sql
  Co-Authored-By: Itamar Hartstein <haritamar@gmail.com>

* fix: restore files with ClickHouse/BigQuery case-sensitive identifiers and exclude from sqlfmt
  sqlfmt lowercases camelCase/PascalCase identifiers that are case-sensitive in ClickHouse and BigQuery. Since sqlfmt will always re-lowercase these on every run, the correct fix is to restore these files from master and exclude them from sqlfmt.
  Restored files (7):
  - column_numeric_monitors.sql (stddevPop, varSamp, Nullable)
  - table_monitoring_query.sql (dateDiff, Nullable)
  - buckets_cte.sql (arrayJoin, toUInt32)
  - full_names.sql (splitByChar, BOTH, AS, OFFSET)
  - datediff.sql (dateDiff, Nullable, Int32)
  - lag.sql (lagInFrame)
  - null_as.sql (Nullable)
  Also adds exclude pattern for integration_tests/dbt_project/dbt_packages/ to prevent symlink recursion during pre-commit runs.
  Co-Authored-By: Itamar Hartstein <haritamar@gmail.com>

* docs: add comment explaining sqlfmt exclude list reasons
  Co-Authored-By: Itamar Hartstein <haritamar@gmail.com>

* refactor: use -- fmt: off/on for ClickHouse/BigQuery case-sensitive identifiers instead of file exclusion
  Instead of excluding 7 files entirely from sqlfmt, use inline -- fmt: off / -- fmt: on comments to protect only the case-sensitive identifiers (Nullable, stddevPop, varSamp, arrayJoin, toUInt32, splitByChar, dateDiff, lagInFrame, OFFSET) while letting sqlfmt format the rest of each file. This reduces the exclude list from 17 to 10 files.
  Co-Authored-By: Itamar Hartstein <haritamar@gmail.com>

* revert: restore ClickHouse/BigQuery files to exclusion approach
  The -- fmt: off/on inline comments break ClickHouse's SQL parser when macros are inlined into compiled queries (the comments cause 'Unmatched parentheses' syntax errors). Reverted to full-file exclusion with an updated comment explaining why inline comments can't be used here.
  Co-Authored-By: Itamar Hartstein <haritamar@gmail.com>

* fix: restore dict utility macros and store_anomaly_test_results to fix fusion compatibility
  sqlfmt reformatting of these files collapsed multi-line Jinja conditionals with return() statements to single lines, causing fusion's minijinja engine to treat returned dicts as immutable maps. This broke .update() calls with 'map has no method named update' errors in all fusion test variants. Files restored to master versions and excluded from sqlfmt.
  Co-Authored-By: Itamar Hartstein <haritamar@gmail.com>

* fix: restore all macros/edr, dict_utils, graph from master and exclude from sqlfmt for fusion compatibility
  sqlfmt's Jinja reformatting (trailing commas, collapsed conditionals, multi-line macro calls) causes fusion's minijinja engine to mishandle dict types, breaking .update() calls with 'map has no method named update'. The issue is systemic across the Jinja execution chain and not isolatable to specific files. Restores all 131 macro files under macros/edr/, macros/utils/dict_utils/, and macros/utils/graph/ to their master versions and excludes these directories from sqlfmt. Models and other utility macros remain formatted.
  Co-Authored-By: Itamar Hartstein <haritamar@gmail.com>

* fix: restore ALL macros from master and exclude entire macros/ dir from sqlfmt
  The fusion minijinja 'map has no method named update' error persists even after restoring macros/edr/, dict_utils/, and graph/. The remaining sqlfmt-formatted utility macros (cross_db_utils, data_types, table_operations, run_queries, etc.) are also in the test execution chain. Excluding the entire macros/ directory from sqlfmt ensures fusion compatibility. sqlfmt formatting is now applied to models/ and integration_tests/ SQL files only.
  Co-Authored-By: Itamar Hartstein <haritamar@gmail.com>

* test: restore ALL SQL files to master to verify if fusion failure is pre-existing
  Only .pre-commit-config.yaml and .github/workflows/run-precommit.yml differ from master. All macros/, models/, and integration_tests/ SQL files are identical to master. If fusion still fails, the issue is pre-existing in the elementary Python repo, not caused by sqlfmt formatting.
  Co-Authored-By: Itamar Hartstein <haritamar@gmail.com>

* feat: restore sqlfmt formatting — fusion failure proven pre-existing
  Definitive test (commit a7ec322) proved the fusion 'map has no method named update' failure occurs even with ALL SQL files identical to master. The only differences were .pre-commit-config.yaml and .github/workflows/run-precommit.yml, which don't affect dbt execution. Therefore the fusion failure is pre-existing in the elementary-data/elementary Python repo, not caused by sqlfmt formatting. This commit restores the proper sqlfmt formatting for all applicable SQL files, with exclusions only for files sqlfmt genuinely damages:
  - ClickHouse/BigQuery case-sensitive identifiers
  - Embedded Python/YAML content
  - Complex Jinja patterns sqlfmt cannot parse
  - dbt_packages symlink (infinite recursion)
  Co-Authored-By: Itamar Hartstein <haritamar@gmail.com>

* style: one-time sqlfmt format of excluded ClickHouse/BigQuery files
  Run sqlfmt on the 8 excluded files that contain ClickHouse/BigQuery case-sensitive identifiers. After formatting, restore the correct casing for: splitByChar, arrayJoin, toUInt32, dateDiff, lagInFrame, stddevPop, varSamp, Nullable, Int32. Files remain excluded in .pre-commit-config.yaml so future edits are not auto-formatted (which would re-lowercase the identifiers).
  Co-Authored-By: Itamar Hartstein <haritamar@gmail.com>

* fix: restore dict key access syntax in empty_table.sql
  sqlfmt corrupted Jinja dict bracket access by adding spaces: dummy_values['timestamp'] → dummy_values[' timestamp ']. This caused empty string values instead of actual dummy values, breaking Postgres (invalid timestamp '') and ClickHouse (CANNOT_PARSE_DATETIME).
  Co-Authored-By: Itamar Hartstein <haritamar@gmail.com>

---------

Co-authored-by: Devin AI <158243242+devin-ai-integration[bot]@users.noreply.github.com>
Co-authored-by: Itamar Hartstein <haritamar@gmail.com>
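The dict-key corruption called out in the last fix above is easy to miss because the formatted output is still valid Jinja and only fails at runtime. A minimal sketch of the failure mode, assuming an illustrative dummy-values mapping rather than the actual contents of empty_table.sql:

    {% set dummy_values = {"timestamp": "2024-01-01 00:00:00"} %}

    {# Original lookup: the key exists, so the dummy value is rendered #}
    select '{{ dummy_values["timestamp"] }}' as created_at

    {# After sqlfmt added spaces inside the brackets, the key " timestamp " no longer
       exists, the expression renders as an empty string, and the downstream cast fails
       on Postgres (invalid timestamp '') and ClickHouse (CANNOT_PARSE_DATETIME) #}
    select '{{ dummy_values[" timestamp "] }}' as created_at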
1 parent e4a4826 commit ac90080

276 files changed

Lines changed: 11329 additions & 6977 deletions


.github/workflows/run-precommit.yml

Lines changed: 1 addition & 1 deletion
@@ -13,7 +13,7 @@ jobs:
       - name: Set up Python
         uses: actions/setup-python@v4.3.0
         with:
-          python-version: "3.8"
+          python-version: "3.10"

       - name: Install dev requirements
         run: pip install -r dev-requirements.txt

.pre-commit-config.yaml

Lines changed: 35 additions & 0 deletions
@@ -27,6 +27,41 @@ repos:
     hooks:
       - id: typos

+  - repo: https://github.com/tconbeer/sqlfmt
+    rev: v0.29.0
+    hooks:
+      - id: sqlfmt
+        additional_dependencies: ["shandy-sqlfmt[jinjafmt]"]
+        # Excluded files — sqlfmt damages these in ways that break runtime behavior:
+        # - ClickHouse/BigQuery files: sqlfmt lowercases case-sensitive identifiers
+        #   (e.g. Nullable, stddevPop, lagInFrame, splitByChar, dateDiff, OFFSET).
+        #   "-- fmt: off" inline comments can't be used here because these macros are
+        #   inlined into compiled SQL, and the comments break ClickHouse's parser.
+        # - Non-SQL content: files containing embedded Python or YAML that sqlfmt
+        #   destroys (broken indentation, collapsed structure)
+        # - Unparsable Jinja: complex Jinja patterns sqlfmt cannot parse correctly
+        # - dbt_packages: symlink to repo root causes infinite recursion
+        exclude: |
+          (?x)^(
+            macros/commands/generate_elementary_cli_profile\.sql|
+            macros/commands/generate_json_schema_test\.sql|
+            macros/commands/generate_schema_baseline_test\.sql|
+            macros/edr/data_monitoring/monitors/column_numeric_monitors\.sql|
+            macros/edr/data_monitoring/monitors_query/table_monitoring_query\.sql|
+            macros/edr/system/system_utils/buckets_cte\.sql|
+            macros/edr/system/system_utils/empty_table\.sql|
+            macros/edr/system/system_utils/full_names\.sql|
+            macros/edr/tests/test_json_schema\.sql|
+            macros/edr/tests/test_utils/compile_py_code\.sql|
+            macros/utils/cross_db_utils/datediff\.sql|
+            macros/utils/cross_db_utils/get_user_creation_query\.sql|
+            macros/utils/cross_db_utils/lag\.sql|
+            macros/utils/data_types/null_as\.sql|
+            macros/utils/table_operations/get_row_count\.sql|
+            integration_tests/dbt_project/macros/python\.sql|
+            integration_tests/dbt_project/dbt_packages/.*
+          )$
+
   - repo: local
     hooks:
       - id: no_commit
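The exclude comment above rules out sqlfmt's inline markers for the ClickHouse/BigQuery files. For reference, that rejected alternative would have looked roughly like the hypothetical macro below; because dbt inlines the macro body into the compiled query, the marker comments travel into the SQL sent to ClickHouse, which is what the revert commit describes as causing 'Unmatched parentheses' errors:

    {% macro cast_to_uint32(column) %}
        -- fmt: off
        toUInt32({{ column }})
        -- fmt: on
    {% endmacro %}

Full-file exclusion keeps the camelCase identifiers (toUInt32, arrayJoin, splitByChar, lagInFrame, and the rest) intact without adding any comments to the compiled SQL.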

integration_tests/dbt_project/macros/ci_schemas_cleanup/drop_stale_ci_schemas.sql

Lines changed: 58 additions & 29 deletions
@@ -16,40 +16,69 @@
     schema_utils/ folder. edr_drop_schema lives in clear_env.sql. CI-specific
     helpers (parse_timestamp_from_ci_schema_name) live alongside this file.
 #}
-
 {% macro drop_stale_ci_schemas(prefixes=none, max_age_hours=24) %}
-  {% if prefixes is none or prefixes is string or prefixes | length == 0 %}
-    {{ exceptions.raise_compiler_error(
-      "drop_stale_ci_schemas: 'prefixes' is required and must be a "
-      "non-empty list (e.g. ['dbt_', 'py_'])."
-    ) }}
-  {% endif %}
+    {% if prefixes is none or prefixes is string or prefixes | length == 0 %}
+        {{
+            exceptions.raise_compiler_error(
+                "drop_stale_ci_schemas: 'prefixes' is required and must be a "
+                "non-empty list (e.g. ['dbt_', 'py_'])."
+            )
+        }}
+    {% endif %}

-  {% set max_age_hours = max_age_hours | int %}
-  {% set database = elementary.target_database() %}
-  {% set all_schemas = edr_list_schemas(database) %}
-  {# utcnow() is deprecated in Python 3.12+ but modules.datetime.timezone is not
+    {% set max_age_hours = max_age_hours | int %}
+    {% set database = elementary.target_database() %}
+    {% set all_schemas = edr_list_schemas(database) %}
+    {# utcnow() is deprecated in Python 3.12+ but modules.datetime.timezone is not
        available in dbt's Jinja context. Both now and the constructed datetime are
        naive, so comparisons are safe. #}
-  {% set now = modules.datetime.datetime.utcnow() %}
-  {% set max_age_seconds = max_age_hours * 3600 %}
-  {% set ns = namespace(dropped=0) %}
+    {% set now = modules.datetime.datetime.utcnow() %}
+    {% set max_age_seconds = max_age_hours * 3600 %}
+    {% set ns = namespace(dropped=0) %}

-  {{ log("CI schema cleanup: scanning " ~ all_schemas | length ~ " schema(s) in database '" ~ database ~ "' for prefixes " ~ prefixes | string, info=true) }}
+    {{
+        log(
+            "CI schema cleanup: scanning " ~ all_schemas
+            | length
+            ~ " schema(s) in database '"
+            ~ database
+            ~ "' for prefixes "
+            ~ prefixes
+            | string,
+            info=true,
+        )
+    }}

-  {% for schema_name in all_schemas | sort %}
-    {% set schema_ts = parse_timestamp_from_ci_schema_name(schema_name, prefixes) %}
-    {% if schema_ts is not none %}
-      {% set age_seconds = (now - schema_ts).total_seconds() %}
-      {% if age_seconds > max_age_seconds %}
-        {{ log(" DROP " ~ schema_name ~ " (age: " ~ (age_seconds / 3600) | round(1) ~ " h)", info=true) }}
-        {% do edr_drop_schema(database, schema_name) %}
-        {% set ns.dropped = ns.dropped + 1 %}
-      {% else %}
-        {{ log(" keep " ~ schema_name ~ " (age: " ~ (age_seconds / 3600) | round(1) ~ " h)", info=true) }}
-      {% endif %}
-    {% endif %}
-  {% endfor %}
+    {% for schema_name in all_schemas | sort %}
+        {% set schema_ts = parse_timestamp_from_ci_schema_name(schema_name, prefixes) %}
+        {% if schema_ts is not none %}
+            {% set age_seconds = (now - schema_ts).total_seconds() %}
+            {% if age_seconds > max_age_seconds %}
+                {{
+                    log(
+                        " DROP " ~ schema_name ~ " (age: " ~ (age_seconds / 3600)
+                        | round(1) ~ " h)",
+                        info=true,
+                    )
+                }}
+                {% do edr_drop_schema(database, schema_name) %}
+                {% set ns.dropped = ns.dropped + 1 %}
+            {% else %}
+                {{
+                    log(
+                        " keep " ~ schema_name ~ " (age: " ~ (age_seconds / 3600)
+                        | round(1) ~ " h)",
+                        info=true,
+                    )
+                }}
+            {% endif %}
+        {% endif %}
+    {% endfor %}

-  {{ log("CI schema cleanup complete. Dropped " ~ ns.dropped ~ " stale schema(s).", info=true) }}
+    {{
+        log(
+            "CI schema cleanup complete. Dropped " ~ ns.dropped ~ " stale schema(s).",
+            info=true,
+        )
+    }}
 {% endmacro %}
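The guard clause in the macro above also documents the expected call shape; a minimal usage sketch, with the prefix list taken from the example in the error message rather than from any project configuration:

    {# Drop CI schemas whose encoded timestamp is older than 24 hours #}
    {% do drop_stale_ci_schemas(prefixes=['dbt_', 'py_'], max_age_hours=24) %}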

integration_tests/dbt_project/macros/ci_schemas_cleanup/parse_timestamp_from_ci_schema_name.sql

Lines changed: 25 additions & 22 deletions
@@ -5,28 +5,31 @@
     Schema naming convention:
         <prefix><YY><MM><DD>_<HH><MI><SS>_<branch>_<hash>[_<suffix>]
 #}
-
 {% macro parse_timestamp_from_ci_schema_name(schema_name, prefixes) %}
-  {% set schema_lower = schema_name.lower() %}
-  {% for prefix in prefixes %}
-    {% if schema_lower.startswith(prefix.lower()) %}
-      {% set remainder = schema_lower[prefix | length :] %}
-      {% set match = modules.re.match(
-        '^(?P<yy>\\d{2})(?P<mm>\\d{2})(?P<dd>\\d{2})_(?P<HH>\\d{2})(?P<MI>\\d{2})(?P<SS>\\d{2})_.+',
-        remainder
-      ) %}
-      {% if match %}
-        {% set yy = match.group('yy') | int %}
-        {% set mm = match.group('mm') | int %}
-        {% set dd = match.group('dd') | int %}
-        {% set HH = match.group('HH') | int %}
-        {% set MI = match.group('MI') | int %}
-        {% set SS = match.group('SS') | int %}
-        {% if 1 <= mm <= 12 and 1 <= dd <= 31 and 0 <= HH <= 23 and 0 <= MI <= 59 and 0 <= SS <= 59 %}
-          {% do return(modules.datetime.datetime(2000 + yy, mm, dd, HH, MI, SS)) %}
+    {% set schema_lower = schema_name.lower() %}
+    {% for prefix in prefixes %}
+        {% if schema_lower.startswith(prefix.lower()) %}
+            {% set remainder = schema_lower[prefix | length :] %}
+            {% set match = modules.re.match(
+                "^(?P<yy>\\d{2})(?P<mm>\\d{2})(?P<dd>\\d{2})_(?P<HH>\\d{2})(?P<MI>\\d{2})(?P<SS>\\d{2})_.+",
+                remainder,
+            ) %}
+            {% if match %}
+                {% set yy = match.group("yy") | int %}
+                {% set mm = match.group("mm") | int %}
+                {% set dd = match.group("dd") | int %}
+                {% set HH = match.group("HH") | int %}
+                {% set MI = match.group("MI") | int %}
+                {% set SS = match.group("SS") | int %}
+                {% if 1 <= mm <= 12 and 1 <= dd <= 31 and 0 <= HH <= 23 and 0 <= MI <= 59 and 0 <= SS <= 59 %}
+                    {% do return(
+                        modules.datetime.datetime(
+                            2000 + yy, mm, dd, HH, MI, SS
+                        )
+                    ) %}
+                {% endif %}
+            {% endif %}
         {% endif %}
-      {% endif %}
-    {% endif %}
-  {% endfor %}
-  {% do return(none) %}
+    {% endfor %}
+    {% do return(none) %}
 {% endmacro %}
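Given the naming convention documented at the top of this file, the macro's two outcomes can be sketched as follows (the first schema name is the fixture used in the test file below; the second is illustrative):

    {# Matches <prefix><YY><MM><DD>_<HH><MI><SS>_<branch>_<hash>: returns a datetime #}
    {% set ts = parse_timestamp_from_ci_schema_name("dbt_200101_000000_citest_00000000", ["dbt_"]) %}

    {# No matching prefix or timestamp pattern: returns none #}
    {% set ts = parse_timestamp_from_ci_schema_name("analytics_prod", ["dbt_"]) %}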

integration_tests/dbt_project/macros/ci_schemas_cleanup/test_drop_stale_ci_schemas.sql

Lines changed: 75 additions & 55 deletions
@@ -5,76 +5,96 @@
     runs the cleanup macro, checks which schemas survived, cleans up,
     and returns a JSON result dict.
 #}
-
 {% macro test_drop_stale_ci_schemas() %}
-  {% set database = elementary.target_database() %}
-  {% set now = modules.datetime.datetime.utcnow() %}
-
-  {# Old schema: timestamp in the past (2020-01-01 00:00:00) #}
-  {% set old_schema = 'dbt_200101_000000_citest_00000000' %}
-  {# Recent schema: timestamp = now #}
-  {% set recent_ts = now.strftime('%y%m%d_%H%M%S') %}
-  {% set recent_schema = 'dbt_' ~ recent_ts ~ '_citest_11111111' %}
-
-  {{ log("TEST: creating old schema: " ~ old_schema, info=true) }}
-  {{ log("TEST: creating recent schema: " ~ recent_schema, info=true) }}
-
-  {# ── Create both schemas ───────────────────────────────────────────── #}
-  {% do edr_create_schema(database, old_schema) %}
-  {% do edr_create_schema(database, recent_schema) %}
-
-  {# ── Verify both exist before running cleanup ──────────────────────── #}
-  {% set old_exists_before = edr_schema_exists(database, old_schema) %}
-  {% set recent_exists_before = edr_schema_exists(database, recent_schema) %}
-  {{ log("TEST: old_exists_before=" ~ old_exists_before ~ ", recent_exists_before=" ~ recent_exists_before, info=true) }}
-
-  {# ── Run cleanup with a large threshold so only the artificially old
+    {% set database = elementary.target_database() %}
+    {% set now = modules.datetime.datetime.utcnow() %}
+
+    {# Old schema: timestamp in the past (2020-01-01 00:00:00) #}
+    {% set old_schema = "dbt_200101_000000_citest_00000000" %}
+    {# Recent schema: timestamp = now #}
+    {% set recent_ts = now.strftime("%y%m%d_%H%M%S") %}
+    {% set recent_schema = "dbt_" ~ recent_ts ~ "_citest_11111111" %}
+
+    {{ log("TEST: creating old schema: " ~ old_schema, info=true) }}
+    {{ log("TEST: creating recent schema: " ~ recent_schema, info=true) }}
+
+    {# ── Create both schemas ───────────────────────────────────────────── #}
+    {% do edr_create_schema(database, old_schema) %}
+    {% do edr_create_schema(database, recent_schema) %}
+
+    {# ── Verify both exist before running cleanup ──────────────────────── #}
+    {% set old_exists_before = edr_schema_exists(database, old_schema) %}
+    {% set recent_exists_before = edr_schema_exists(database, recent_schema) %}
+    {{
+        log(
+            "TEST: old_exists_before="
+            ~ old_exists_before
+            ~ ", recent_exists_before="
+            ~ recent_exists_before,
+            info=true,
+        )
+    }}
+
+    {# ── Run cleanup with a large threshold so only the artificially old
        schema (year 2020) is caught, and real CI schemas from parallel
        workers are safely below the threshold. ──────────────────────────── #}
-  {% do drop_stale_ci_schemas(prefixes=['dbt_'], max_age_hours=8760) %}
-
-  {# ── Check which schemas survived ─────────────────────────────────── #}
-  {% set old_exists_after = edr_schema_exists(database, old_schema) %}
-  {% set recent_exists_after = edr_schema_exists(database, recent_schema) %}
-  {{ log("TEST: old_exists_after=" ~ old_exists_after ~ ", recent_exists_after=" ~ recent_exists_after, info=true) }}
-
-  {# ── Cleanup: drop any remaining test schemas ─────────────────────── #}
-  {% if old_exists_after is true %}
-    {% do edr_drop_schema(database, old_schema) %}
-  {% endif %}
-  {% if recent_exists_after %}
-    {% do edr_drop_schema(database, recent_schema) %}
-  {% endif %}
-
-  {# ── Return results ────────────────────────────────────────────────── #}
-  {% set results = {
-    "old_exists_before": old_exists_before,
-    "recent_exists_before": recent_exists_before,
-    "old_dropped": not old_exists_after,
-    "recent_kept": recent_exists_after
-  } %}
-  {% do return(results) %}
+    {% do drop_stale_ci_schemas(prefixes=["dbt_"], max_age_hours=8760) %}
+
+    {# ── Check which schemas survived ─────────────────────────────────── #}
+    {% set old_exists_after = edr_schema_exists(database, old_schema) %}
+    {% set recent_exists_after = edr_schema_exists(database, recent_schema) %}
+    {{
+        log(
+            "TEST: old_exists_after="
+            ~ old_exists_after
+            ~ ", recent_exists_after="
+            ~ recent_exists_after,
+            info=true,
+        )
+    }}
+
+    {# ── Cleanup: drop any remaining test schemas ─────────────────────── #}
+    {% if old_exists_after is true %}
+        {% do edr_drop_schema(database, old_schema) %}
+    {% endif %}
+    {% if recent_exists_after %}
+        {% do edr_drop_schema(database, recent_schema) %}
+    {% endif %}
+
+    {# ── Return results ────────────────────────────────────────────────── #}
+    {% set results = {
+        "old_exists_before": old_exists_before,
+        "recent_exists_before": recent_exists_before,
+        "old_dropped": not old_exists_after,
+        "recent_kept": recent_exists_after,
+    } %}
+    {% do return(results) %}
 {% endmacro %}


 {# ── Per-adapter schema creation ─────────────────────────────────────── #}
-
 {% macro edr_create_schema(database, schema_name) %}
-  {% do return(adapter.dispatch('edr_create_schema', 'elementary_tests')(database, schema_name)) %}
+    {% do return(
+        adapter.dispatch("edr_create_schema", "elementary_tests")(
+            database, schema_name
+        )
+    ) %}
 {% endmacro %}

 {% macro default__edr_create_schema(database, schema_name) %}
-  {% set schema_relation = api.Relation.create(database=database, schema=schema_name) %}
-  {% do dbt.create_schema(schema_relation) %}
-  {% do adapter.commit() %}
+    {% set schema_relation = api.Relation.create(
+        database=database, schema=schema_name
+    ) %}
+    {% do dbt.create_schema(schema_relation) %}
+    {% do adapter.commit() %}
 {% endmacro %}

 {% macro clickhouse__edr_create_schema(database, schema_name) %}
-  {% do run_query("CREATE DATABASE IF NOT EXISTS `" ~ schema_name ~ "`") %}
-  {% do adapter.commit() %}
+    {% do run_query("CREATE DATABASE IF NOT EXISTS `" ~ schema_name ~ "`") %}
+    {% do adapter.commit() %}
 {% endmacro %}

 {% macro spark__edr_create_schema(database, schema_name) %}
-  {% set safe_schema = schema_name | replace("`", "``") %}
-  {% do run_query("CREATE DATABASE IF NOT EXISTS `" ~ safe_schema ~ "`") %}
+    {% set safe_schema = schema_name | replace("`", "``") %}
+    {% do run_query("CREATE DATABASE IF NOT EXISTS `" ~ safe_schema ~ "`") %}
 {% endmacro %}

integration_tests/dbt_project/macros/clear_env.sql

Lines changed: 14 additions & 4 deletions
@@ -1,15 +1,25 @@
 {% macro clear_env() %}
-    {% set database_name, schema_name = elementary.get_package_database_and_schema('elementary') %}
+    {% set database_name, schema_name = elementary.get_package_database_and_schema(
+        "elementary"
+    ) %}
     {% do elementary_tests.edr_drop_schema(database_name, schema_name) %}
-    {% do elementary_tests.edr_drop_schema(elementary.target_database(), generate_schema_name()) %}
+    {% do elementary_tests.edr_drop_schema(
+        elementary.target_database(), generate_schema_name()
+    ) %}
 {% endmacro %}

 {% macro edr_drop_schema(database_name, schema_name) %}
-    {% do return(adapter.dispatch('edr_drop_schema', 'elementary_tests')(database_name, schema_name)) %}
+    {% do return(
+        adapter.dispatch("edr_drop_schema", "elementary_tests")(
+            database_name, schema_name
+        )
+    ) %}
 {% endmacro %}

 {% macro default__edr_drop_schema(database_name, schema_name) %}
-    {% set schema_relation = api.Relation.create(database=database_name, schema=schema_name) %}
+    {% set schema_relation = api.Relation.create(
+        database=database_name, schema=schema_name
+    ) %}
     {% do dbt.drop_schema(schema_relation) %}
     {% do adapter.commit() %}
 {% endmacro %}
