feat: implement SQLite Adapter (#5)
Conversation
Important: Review skipped. Auto reviews are disabled on base/target branches other than the default branch. Please check the settings in the CodeRabbit UI or the ⚙️ Run configuration. Configuration used: defaults. Review profile: CHILL. Plan: Pro.
📝 Walkthrough

This PR introduces a complete SQLite adapter implementation with comprehensive test coverage, and updates the core Adapter interface to support JSON-path filtering through a new optional …

Changes
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~75 minutes

Possibly related PRs
🚥 Pre-merge checks | ✅ 2 | ❌ 1

❌ Failed checks (1 warning)
✅ Passed checks (2 passed)

✏️ Tip: You can configure your own custom pre-merge checks in the settings.

✨ Finishing Touches: 🧪 Generate unit tests (beta)
Actionable comments posted: 8
🧹 Nitpick comments (1)
src/adapters/sqlite.test.ts (1)
408-432: Consider adding coverage for multi-field cursors and JSON-path sort/select.

Current pagination tests only exercise a single sort field (`age`). Given `FieldName<T>` advertises `->>` paths and the cursor can carry multiple keys, tests that fail today would help surface the gaps in the adapter (multi-field keyset ordering and JSON-path handling in `sortBy`/`select`/`cursor`). Tying these to the bugs flagged in `sqlite.ts` would lock in the fixes.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/adapters/sqlite.test.ts` around lines 408 - 432, Add tests that exercise multi-field keyset pagination and JSON-path sort/select to cover gaps in the adapter: extend the existing adapter.findMany<User> tests (the "should handle cursor pagination ascending/descending" cases) to include sortBy with multiple fields (e.g., [{ field: "age", direction: "asc" }, { field: "id", direction: "asc" }]) and cursor.after carrying multiple keys (e.g., { age: X, id: Y }), and add cases where sort/select use JSON-path style FieldName (->> paths) to ensure sqlite.ts code handling sortBy, select and cursor extraction supports JSON-paths and multi-key comparisons; update or add expectations for ordering and lengths to validate correct multi-field keyset behavior.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@src/adapters/sqlite.test.ts`:
- Around line 358-360: Update the misleading inline comment near the assertions
that check outer1?.name and outer2 to explain that the inner transaction's
UPDATE was discarded by the rollback to the savepoint (not "reverted"
afterwards); locate the assertion using outer1 and outer2 in the test and
replace "Reverted the update from inner tx" with wording like "Inner
transaction's UPDATE was discarded by rollback to savepoint (never applied to
outer scope)" so it clearly states the inner tx effects were not committed.
In `@src/adapters/sqlite.ts`:
- Around line 238-240: The update currently issues UPDATE ... WHERE ... then
calls this.find({ model, where }) which can return null if the update changed a
column used in the original where predicate; modify the update logic in the
update method to refetch by the record's primary key instead: either derive the
PK from data if it includes the primary key, or perform a pre-update read (using
the same model/where) to capture the primary key before running await
this.db.run(`UPDATE ${this.quote(model)} SET ${setClause} WHERE ${whereSql}`,
params), then call this.find({ model, where: { [primaryKeyName]: primaryKeyValue
} }) to reliably return the updated record (refer to symbols update, this.find,
quote(model), setClause, whereSql, params).
- Around line 142-152: The create method currently returns the caller-supplied
data when select is omitted, which bypasses DB-side defaults/coercions; update
create to always refetch the persisted row from the DB (use this.find) after
insertion using the new record's primary key (use model PK info to build the
where clause) and return that fetched result instead of the original input;
ensure the code path that currently does "if (result === null) throw ..."
remains but trigger the find by PK regardless of whether select was provided so
returned object reflects actual persisted state.
- Around line 445-456: The current multi-field cursor predicate builds a simple
AND chain using cursorDirection derived from sortBy[0], which is incorrect for
lexicographic keyset pagination; update the logic in the cursor handling block
(where cursor, cursorDirection, this.quote, mapWhereValue, parts, and params are
used) to generate a lexicographic OR-chain: for n cursor fields produce clauses
like (f1 > ? ) OR (f1 = ? AND f2 > ? ) OR ... (or use a row-value comparison if
preferred), pushing appropriate mapped params for each clause, and ensure you
respect each field's individual sort direction from sortBy (not just sortBy[0])
when choosing '>' vs '<' for that field.
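A minimal, stand-alone sketch of the lexicographic OR-chain this comment asks for. The `Sort` shape, `?` placeholders, and naive double-quote identifier quoting are illustrative assumptions, not the adapter's actual API:

```typescript
// Sketch only: lexicographic keyset predicate for multi-field cursors.
// Each clause pins the earlier fields with equality, then applies a strict
// inequality on one field, honoring that field's own direction.
type Sort = { field: string; direction: "asc" | "desc" };

function keysetPredicate(
  sortBy: Sort[],
  cursor: Record<string, unknown>,
): { sql: string; params: unknown[] } {
  const params: unknown[] = [];
  const clauses: string[] = [];
  for (let i = 0; i < sortBy.length; i++) {
    const terms: string[] = [];
    // Equality on every field before position i...
    for (let j = 0; j < i; j++) {
      const prev = sortBy[j]!;
      terms.push(`"${prev.field}" = ?`);
      params.push(cursor[prev.field]);
    }
    // ...then a strict inequality on field i, using its own direction
    // (not sortBy[0]'s) to choose '>' vs '<'.
    const s = sortBy[i]!;
    const op = s.direction === "asc" ? ">" : "<";
    terms.push(`"${s.field}" ${op} ?`);
    params.push(cursor[s.field]);
    clauses.push(`(${terms.join(" AND ")})`);
  }
  return { sql: `(${clauses.join(" OR ")})`, params };
}
```

For two sort fields this yields `(f1 > ?) OR (f1 = ? AND f2 > ?)`; a row-value comparison `(f1, f2) > (?, ?)` is equivalent only when all fields share one direction.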
- Around line 499-509: The JSON path construction in the block handling
field.includes("->>") builds jsonPath via `'$.${parts.slice(1).join(".")}'`,
which can be injected if a segment contains a quote or invalid characters; in
the function where quotedField is built (look for variables: field, parts,
topLevelColumn, jsonPath, this.quote(topLevelColumn), and json_extract),
validate each segment parts.slice(1) against /^[A-Za-z_][A-Za-z0-9_]*$/ and
throw an Error for invalid segments, and/or escape single quotes by replacing
any "'" with "''" before joining so the final string passed into json_extract is
safe. Ensure the topLevelColumn continues to use this.quote(topLevelColumn) and
return the sanitized quotedField.
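A sketch of the segment-validation approach the comment describes. The helper name, regex, and error message are assumptions; only the idea (reject unsafe segments before they reach the `json_extract` path literal) comes from the review:

```typescript
// Sketch only: validate each JSON-path segment before embedding it in a
// json_extract() path literal, so a segment containing a quote cannot
// break out of the '$.a.b' string.
function jsonPathExpr(column: string, segments: string[]): string {
  for (const seg of segments) {
    if (!/^[A-Za-z_][A-Za-z0-9_]*$/.test(seg)) {
      throw new Error(`Invalid JSON path segment: ${seg}`);
    }
  }
  // Column identifier quoting is simplified here; the adapter would use
  // its own quote() helper.
  return `json_extract("${column}", '$.${segments.join(".")}')`;
}
```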
- Around line 384-396: The transaction method uses SAVEPOINTs on a shared
connection which allows concurrent top-level adapter.transaction(...) calls to
interleave and corrupt each other; fix by serializing top-level transactions
(e.g., add an internal mutex/queue like this.txMutex and acquire/release it
around the SAVEPOINT/RELEASE/ROLLBACK block in transaction so only one top-level
transaction runs at a time while nested calls still use savepoints), replace the
ad-hoc sp generation with a monotonic counter (e.g., this._spCounter++) to
guarantee uniqueness, and quote the savepoint identifier when executing
`SAVEPOINT`, `RELEASE SAVEPOINT`, and `ROLLBACK TO SAVEPOINT` (use a safe quoted
form like double-quotes) so identifiers are safe against refactors.
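The mutex-plus-counter idea can be sketched with a promise chain serving as the queue. `TxQueue`, `spCounter`, and the quoted name format are illustrative names, not the adapter's real fields:

```typescript
// Sketch only: serialize top-level transactions by chaining them on a
// promise "tail", and mint unique, quoted savepoint names from a
// monotonic counter.
class TxQueue {
  private tail: Promise<unknown> = Promise.resolve();
  private spCounter = 0;

  // Monotonic counter guarantees uniqueness; double quotes keep the
  // identifier safe in SAVEPOINT / RELEASE / ROLLBACK statements.
  nextSavepoint(): string {
    return `"sp_${this.spCounter++}"`;
  }

  // Each call starts only after the previous one settles, so two
  // concurrent top-level transactions cannot interleave savepoints.
  run<T>(fn: () => Promise<T>): Promise<T> {
    const result = this.tail.then(fn, fn);
    this.tail = result.catch(() => undefined); // keep the chain alive on error
    return result;
  }
}
```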
- Around line 325-382: The upsert implementation in upsert(...) ignores the
passed where clause and always keys ON CONFLICT/refetch on
modelSpec.primaryKey.fields and uses create[f] for PK refetch, which is unsafe;
change behavior to enforce that args.where is a simple set of equality
conditions covering every PK field (throw if not) and then use those equality
values (not create or update) when building the conflict/refetch key and
pkValuesWhere for the subsequent find call; also disallow or ignore PK fields in
update (or throw) so an update cannot change the PK used for refetch; update
references: upsert, args.where, modelSpec.primaryKey.fields, ON CONFLICT(...),
pkValuesWhere, mappedCreate, mappedUpdate, and find.
- Around line 195-200: The ORDER BY, buildSelect, and cursor construction use
this.quote(...) which incorrectly wraps JSON-path field names like
"metadata->>theme"; add a helper method (e.g., quoteField(field: FieldName<T>))
that detects JSON path syntax (-> and ->>) and rewrites it into
json_extract(...) expressions with proper quoted identifiers and unwrapping for
->> while leaving plain identifiers quoted via this.quote, then replace calls to
this.quote(s.field)/this.quote(f) in the sortBy mapper (ORDER BY builder),
buildSelect, and the cursor loop with this.quoteField(...) so JSON paths are
handled consistently (also keep buildWhereRecursive unchanged which already
handles JSON paths).
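The `quoteField` helper proposed above might dispatch like this. The split-on-`->>` handling and quoting are simplified assumptions for illustration:

```typescript
// Sketch only: route JSON-path field names to json_extract() and plain
// field names to ordinary identifier quoting.
function quoteField(field: string): string {
  if (field.includes("->>")) {
    const parts = field.split("->>");
    const column = parts[0]!; // top-level column keeps identifier quoting
    const path = parts.slice(1).join(".");
    return `json_extract("${column}", '$.${path}')`;
  }
  return `"${field}"`;
}
```

Calling this from the ORDER BY builder, `buildSelect`, and the cursor loop keeps JSON-path handling in one place, matching what `buildWhereRecursive` already does.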
---
Nitpick comments:
In `@src/adapters/sqlite.test.ts`:
- Around line 408-432: Add tests that exercise multi-field keyset pagination and
JSON-path sort/select to cover gaps in the adapter: extend the existing
adapter.findMany<User> tests (the "should handle cursor pagination
ascending/descending" cases) to include sortBy with multiple fields (e.g., [{
field: "age", direction: "asc" }, { field: "id", direction: "asc" }]) and
cursor.after carrying multiple keys (e.g., { age: X, id: Y }), and add cases
where sort/select use JSON-path style FieldName (->> paths) to ensure sqlite.ts
code handling sortBy, select and cursor extraction supports JSON-paths and
multi-key comparisons; update or add expectations for ordering and lengths to
validate correct multi-field keyset behavior.
🪄 Autofix (Beta)
Fix all unresolved CodeRabbit comments on this PR:
- Push a commit to this branch (recommended)
- Create a new PR with the fixes
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: a0fe2be9-040b-4d5a-b06f-85fdb56a57b8
📒 Files selected for processing (4)

- README.md
- src/adapters/sqlite.test.ts
- src/adapters/sqlite.ts
- src/core.ts
@coderabbitai review

✅ Actions performed: Review triggered.
Actionable comments posted: 1
🧹 Nitpick comments (1)
src/adapters/sqlite.test.ts (1)
372-432: Pagination `beforeEach` is declared after the first test, masking shared seed data.

The `beforeEach` at line 420 is registered for the whole Pagination `describe` (including the multi-field keyset pagination test above it), so that test actually runs against 10 rows (p1-p5 + m1-m5). It currently passes because the cursor `{age: 30, name: "B"}` happens to exclude all `p*` rows, but the test reads as if only m1-m5 exist and is fragile to future cursor changes.

Suggestion: move the `beforeEach` hook to the top of the Pagination `describe`, and either isolate the multi-field test into its own `describe` without the shared seed or extend the inline commentary to document that p1-p5 are also in scope.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/adapters/sqlite.test.ts` around lines 372 - 432, The test "should handle multi-field keyset pagination correctly" runs before the file-scoped beforeEach that seeds p1-p5, causing it to see extra rows; either move the existing beforeEach (the Promise.all seeding loop) to the top of the Pagination describe so it runs for all tests consistently, or wrap this multi-field test in its own describe block (or add a dedicated beforeEach inside that describe) to isolate seeding for m1-m5; update/remove comments to document which seed set applies accordingly and ensure references to adapter.create remain unchanged.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@src/adapters/sqlite.ts`:
- Around line 602-623: Handle empty arrays in the "in" and "not_in" branches:
detect when list.length === 0 after building list (in the case "in" and case
"not_in" blocks that call this.mapWhereValue and use quotedField), and
short-circuit to a safe always-false or always-true predicate instead of
emitting IN ()/NOT IN (); specifically return an SQL fragment like "1=0" (no
params) for empty "in" and "1=1" (no params) for empty "not_in" so
mapWhereValue, params array, and quotedField usage remain correct and no invalid
SQL is produced.
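The empty-list guard can be sketched as a small fragment builder. The `inClause` name and fragment shape are illustrative, not the adapter's actual code:

```typescript
// Sketch only: IN () / NOT IN () is invalid SQL, so short-circuit empty
// lists to an always-false (IN) or always-true (NOT IN) predicate.
function inClause(
  quotedField: string,
  list: unknown[],
  negate = false,
): { sql: string; params: unknown[] } {
  if (list.length === 0) {
    // Empty IN matches nothing; empty NOT IN matches everything.
    return { sql: negate ? "1=1" : "1=0", params: [] };
  }
  const placeholders = list.map(() => "?").join(", ");
  return { sql: `${quotedField} ${negate ? "NOT IN" : "IN"} (${placeholders})`, params: list };
}
```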
---
Nitpick comments:
In `@src/adapters/sqlite.test.ts`:
- Around line 372-432: The test "should handle multi-field keyset pagination
correctly" runs before the file-scoped beforeEach that seeds p1-p5, causing it
to see extra rows; either move the existing beforeEach (the Promise.all seeding
loop) to the top of the Pagination describe so it runs for all tests
consistently, or wrap this multi-field test in its own describe block (or add a
dedicated beforeEach inside that describe) to isolate seeding for m1-m5;
update/remove comments to document which seed set applies accordingly and ensure
references to adapter.create remain unchanged.
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: 6b46ff4b-a7a0-4534-9976-668446b44f28
📒 Files selected for processing (4)

- README.md
- src/adapters/sqlite.test.ts
- src/adapters/sqlite.ts
- src/core.ts
🚧 Files skipped from review as they are similar to previous changes (1)
- README.md
- Replace unsafe as FieldName<T> with 'in' checks and keyof T indexed accesses in getIdentityValues - Add // eslint-disable-next-line comments for remaining necessary assertions in buildIdentityFilter - Fix overly broad cast in postgres.ts from 'as Record<string, unknown>[]' to 'as Array<Record<string, unknown>>' Co-authored-by: opencode <opencode@anomaly.co>
…into no-orm-sqlite-adapter
…tifier quoting - Replaced .map() and .forEach() with indexed loops in SQL utilities and adapters to reduce iterator overhead. - Enhanced id() helper with regex-based escaping and removed magic * fallback for better contextual safety. - Moved SELECT * default logic into adapters for explicit query orchestration. - Improved identifier quoting consistency across Postgres and SQLite adapters. - Fixed minor linting issues and redundant type assertions.
…plication Add buildSelectSql, buildInsertSql, buildUpdateSql, buildDeleteSql, buildUpsertSql, and buildCountSql to src/adapters/utils/statements.ts. Both SqliteAdapter and PostgresAdapter now delegate SQL construction to these shared builders; each adapter retains ownership of row marshaling, dialect-specific column expressions, and post-query error handling. Co-Authored-By: Claude Sonnet 4.6 (1M context) <noreply@anthropic.com>
Tighten SyncStatement return types so casts in createSyncSqliteExecutor are no longer needed. Restructure id() to use typeof narrowing instead of Array.isArray, which TypeScript narrows correctly for readonly string[]. Disable unicorn/no-array-sort at config level instead of per-line. Co-Authored-By: Claude Sonnet 4.6 (1M context) <noreply@anthropic.com>
…utils Addresses promise-based error handling in memory adapter, postgres JSON operator collisions, sqlite regressions, and sort() type hint for JSON path ORDER BY. Co-Authored-By: Claude Sonnet 4.6 (1M context) <noreply@anthropic.com>
Co-Authored-By: Claude Sonnet 4.6 (1M context) <noreply@anthropic.com>
…3 cache across transactions Split createSyncSqliteExecutor into createBunSqliteExecutor and createBetterSqlite3Executor so each executor is named after its driver rather than its implementation detail. Bun delegates caching to driver.query() which is connection-scoped, removing the manual Map entirely. better-sqlite3 keeps the FIFO Map but threads it as a default parameter so transaction executors share the parent cache instead of starting cold. Renames the async sqlite executor to createSqliteExecutor and the router to createExecutor, completing the driver-named convention. Co-Authored-By: Claude Sonnet 4.6 (1M context) <noreply@anthropic.com>
…d @types/pg - SQLite/Postgres update(): wrap WHERE in rowid/ctid subquery so only the first matching row is mutated; updateMany() is unchanged - memory upsert(): call assertNoPrimaryKeyUpdates on the update payload to prevent pkIndexes corruption when a PK field is passed in update - package.json: add @types/pg to devDependencies so tsc passes on clean installs - sql.ts id(): remove quote-char escaping (identifiers are dev-controlled) Co-Authored-By: Claude Sonnet 4.6 (1M context) <noreply@anthropic.com>
const query = buildSelectSql({
  table: modelName,
  select,
  whereClause: where(args.where, { model, columnExpr: toColumnExpr, cursor, sortBy }),
  orderByClause: sortBy && sortBy.length > 0 ? sort(model, sortBy, toColumnExpr) : undefined,
  limit,
  offset,
});
Why are some functions called buildSelectSql vs others just where / sort ... Why are some parameters called whereClause, orderByClause, ... vs others just select? General rule: we are not writing Java code, keep naming simple.
    : v,
);

const name = `q_${createHash("sha1").update(text).digest("hex").slice(0, 16)}`;
This seems to be a very inefficient function; is that really necessary?
For pg, prepared plans are cached server-side (per connection). That requires a query name; to ensure a cache hit, the SQL query must be hashed.

The createHash() call can be replaced with an inline hash() to avoid object allocation, speeding it up by about 2-3x.
  driver: PgClient | PgPool | PgPoolClient,
  inTransaction = false,
): QueryExecutor {
  function getPrepared(query: Sql) {
There was a problem hiding this comment.
1/ Creating $1, $2, ... placeholders
2/ Mapping parameters that are not supported natively
This should be split out; most of it is probably generic functionality that can be reused across drivers in the future.
}

function createBunSqlExecutor(bunSql: BunSQL, inTransaction = false): QueryExecutor {
  const runQuery = (query: Sql): Promise<BunSqlResult> => {
Bun already has its own sql, which is native and highly optimized. How do we avoid creating another layer of duplication here? (Might apply to other drivers as well.)
Our sql tagged template is designed to be composable and driver-agnostic, with its main focus on query building (where(), set(), etc.). We eventually hand off to the driver (Bun SQL) to handle execution.

If we replaced it with Bun's native sql or another driver's, we would need to re-implement the query-building layer per driver and negotiate driver-specific types, such as Fragments in postgres.js or SQL.Query in Bun; pg does not use tagged templates at all.
This whole module is SQL-specific; move it into sql.ts.
  this.executor = isQueryExecutor(driver) ? driver : createExecutor(driver);
}

async migrate(): Promise<void> {
Lots of duplicate code between sqlite and postgres here.
Addressed, moved it to migrateSqls in sql.ts
const input = mapToRecord(model, data);
const fields = Object.keys(input);
const values: unknown[] = [];
for (let i = 0; i < fields.length; i++) {
  values.push(input[fields[i]!]);
}
Why do you need to convert things into an intermediate representation, instead of just passing it to your build helpers? Every additional conversion costs CPU & memory and duplicates code across adapters. (applies to many other places as well).
Addressed, passing mapValue function directly into set() in sql.ts
…pters Rename buildXxxSql functions and xxxClause/xxxAction parameters to use shorter, less Java-like names. Aligns with the terse naming style already used in sql.ts (where, sort, set, join, etc.). Changes: - statements.ts: buildSelectSql → selectSql, buildInsertSql → insertSql, etc. - statements.ts: whereClause → where, orderByClause → orderBy, setClause → set, onConflictAction → onConflict - sqlite.ts, postgres.ts: update all callsites to new function and param names - sqlite.ts: inline whereClause in update() method - postgres.ts: inline whereClause in update() method - sql.ts: remove redundant 'op' alias in buildWhere (access exhaustively checked default case safely) - sql.ts: remove redundant 'in' checks in isQueryExecutor (typeof check implies property existence) All tests pass; no logic changes. Co-Authored-By: Claude Sonnet 4.6 (1M context) <noreply@anthropic.com>
Co-Authored-By: Claude Sonnet 4.6 (1M context) <noreply@anthropic.com>
Move selectSql, insertSql, updateSql, deleteSql, upsertSql, and countSql from statements.ts into sql.ts and update imports. Co-Authored-By: Claude Sonnet 4.6 (1M context) <noreply@anthropic.com>
…operations Pass data directly to SQL builders instead of converting to an intermediate record first. set() now accepts a mapValue callback and skips undefined values internally. mapSqliteValue extended to handle JSON objects (typeof heuristic), replacing the sqlite-specific mapToRecord conversion. Co-Authored-By: Claude Sonnet 4.6 (1M context) <noreply@anthropic.com>
… in sqlite and postgres adapters Co-Authored-By: Claude Sonnet 4.6 (1M context) <noreply@anthropic.com>
…cy fixes - Extract extractFields() helper to sql.ts, eliminating 4 duplicate field-collection loops across create/upsert in sqlite and postgres adapters - Replace globalLRU.set() with globalLRU.get() in mutation paths (update, updateMany, upsert) to avoid full set() bookkeeping when only recency promotion is needed - Guard removeFromTable swap-and-pop to skip no-op writes when removing the last heap element - Remove what-comments and rename op -> opStr in matchesWhere default branch Co-Authored-By: Claude Sonnet 4.6 (1M context) <noreply@anthropic.com>
Actionable comments posted: 7
🤖 Prompt for all review comments with AI agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.
Inline comments:
In `@src/adapters/memory.test.ts`:
- Around line 66-72: The tests are incorrectly using synchronous expect(() =>
...) wrappers for promise-returning methods; update the assertions to await
promise-based rejection checks by replacing patterns like expect(() =>
adapter.create(...)).toThrow("...") with await
expect(adapter.create(...)).rejects.toThrow("..."), and do the same for
adapter.update(...) and adapter.upsert(...) occurrences (also update the similar
assertions around the other affected blocks referenced in the comment).
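The distinction can be shown without a test framework. `failing` and `rejectsWith` below are hypothetical stand-ins; the `expect(...).rejects.toThrow` line in the comments is the jest/bun:test idiom the review recommends:

```typescript
// Sketch only: why a synchronous toThrow wrapper misses async rejections.
async function failing(): Promise<never> {
  throw new Error("duplicate key");
}

// Wrong: the arrow function returns a (rejected) promise and never throws
// synchronously, so this assertion passes vacuously and leaks a rejection:
//   expect(() => failing()).toThrow("duplicate key");
//
// Right (jest / bun:test style): await the async rejection matcher:
//   await expect(failing()).rejects.toThrow("duplicate key");

// Framework-free equivalent of rejects.toThrow for illustration:
async function rejectsWith(p: Promise<unknown>, msg: string): Promise<boolean> {
  try {
    await p;
    return false; // resolved: no rejection was observed
  } catch (e) {
    return e instanceof Error && e.message.includes(msg);
  }
}
```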
In `@src/adapters/memory.ts`:
- Around line 329-337: The current composite PK string builder using a "|"
delimiter (variables primaryKeyFieldNames, primaryKeyValues, and res) can
collide when values contain "|"—replace this fragile join with a deterministic,
unambiguous serialization: build an array of primaryKeyFieldNames.map(name =>
primaryKeyValues[name]) and serialize it (e.g., JSON.stringify the array or use
length-prefixed encoding) so different tuples cannot produce the same key;
ensure you preserve the same deterministic order (primaryKeyFieldNames) when
serializing and when reading back for lookup.
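The collision and its fix can be shown in a few lines. `compositeKey` is an illustrative name; the point is that JSON-serializing the ordered tuple is unambiguous where a `"|"` join is not:

```typescript
// Sketch only: a "|" join collides (["x|y","z"] vs ["x","y|z"] produce the
// same string), while JSON.stringify of the ordered value tuple cannot.
function compositeKey(fieldNames: string[], values: Record<string, unknown>): string {
  // Deterministic order comes from fieldNames, used both when writing
  // and when looking the key back up.
  return JSON.stringify(fieldNames.map((name) => values[name]));
}
```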
In `@src/adapters/postgres.ts`:
- Around line 104-111: The code is inlining user-controlled JSON path segments
(pathLiterals) which allows SQL injection; instead, stop using raw(pathLiterals)
and parameterize each segment and join them using the provided SQL helper:
import join from ./utils/sql, map the path array to parameterized SQL fragments
(one per segment) and pass the joined result into the jsonb_extract_path_text
call alongside id(fieldName) so the expression becomes
jsonb_extract_path_text(${id(fieldName)}${join(/* paramized segments */));
ensure no direct string interpolation of path elements and that join is used to
combine the parameterized segments.
In `@src/adapters/sqlite.test.ts`:
- Around line 72-78: The test is asserting a synchronous throw but
adapter.update(...) returns a rejected Promise (assertNoPrimaryKeyUpdates causes
an async rejection); change the test to assert the async rejection by either
awaiting the call in a try/catch and asserting the caught error message, or
using your test runner's async assertion (e.g., await
expect(adapter.update<{...}>({...})).rejects.toThrow("Primary key updates are
not supported.") ). Update the assertion around adapter.update and reference
adapter.update and assertNoPrimaryKeyUpdates so the test properly observes the
rejection.
In `@src/adapters/sqlite.ts`:
- Around line 223-229: The run implementation for the Sql adapter drops
affected-row counts for parameterless mutations by using driver.exec when
query.params.length === 0; instead, always use driver.run so you can read the
returned changes. Update the run function (the async run: (query: Sql) => { ...
} block) to call driver.run(query.compile("?")) when there are no params and
return { changes: res.changes ?? 0 } (same as the paramized branch) rather than
calling driver.exec which loses the change count.
In `@src/adapters/utils/common.ts`:
- Around line 98-104: The function assertNoPrimaryKeyUpdates currently checks
primary key updates by testing data[field] !== undefined which allows keys with
explicit undefined to slip through; update the guard to detect presence of the
PK field in the patch object itself (use
Object.prototype.hasOwnProperty.call(data, field) or the "field in data" check)
inside assertNoPrimaryKeyUpdates (which uses getPrimaryKeyFieldNames(model)) and
throw the same Error when the PK field key exists in data regardless of its
value.
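A minimal sketch of the key-presence check, assuming the guard receives the PK field names directly (the real `assertNoPrimaryKeyUpdates` derives them via `getPrimaryKeyFieldNames(model)`):

```typescript
// Sketch only: check key *presence*, not value, so a patch like
// { id: undefined } is still rejected as a primary-key update.
function assertNoPrimaryKeyUpdates(pkFields: string[], data: Record<string, unknown>): void {
  for (const field of pkFields) {
    if (Object.prototype.hasOwnProperty.call(data, field)) {
      throw new Error("Primary key updates are not supported.");
    }
  }
}
```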
In `@src/adapters/utils/sql.ts`:
- Around line 86-98: id() wraps identifiers but doesn't escape embedded quote
characters, so identifiers like my"field produce invalid SQL; update the
function (referencing id and raw and the quoteChar parameter) to escape any
instances of quoteChar inside each identifier by doubling them (e.g., replace
quoteChar with quoteChar+quoteChar) before wrapping with quoteChar, and apply
this both in the string branch and the array loop while keeping the existing
empty-value handling intact.
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: 692a6583-af0a-45a9-b32d-e4c7bf452ee5
⛔ Files ignored due to path filters (1)

- `bun.lock` is excluded by `!**/*.lock`
📒 Files selected for processing (17)

- .oxlintrc.json
- AGENTS.md
- README.md
- package.json
- src/adapters/memory.test.ts
- src/adapters/memory.ts
- src/adapters/postgres.ts
- src/adapters/sqlite.test.ts
- src/adapters/sqlite.ts
- src/adapters/utils/common.ts
- src/adapters/utils/sql.test.ts
- src/adapters/utils/sql.ts
- src/core.ts
- src/index.ts
- src/types.test.ts
- src/types.ts
- tsconfig.json
💤 Files with no reviewable changes (1)
- src/core.ts
✅ Files skipped from review due to trivial changes (3)
- .oxlintrc.json
- AGENTS.md
- src/types.ts
export function id(val: string | readonly string[], quoteChar = '"'): Sql {
  if (val === "" || (Array.isArray(val) && val.length === 0)) return raw("");

  if (typeof val === "string") {
    return raw(quoteChar + val + quoteChar);
  }

  let res = "";
  for (let i = 0; i < val.length; i++) {
    if (i > 0) res += ", ";
    res += quoteChar + val[i]! + quoteChar;
  }
  return raw(res);
}
Escape embedded quote characters in identifiers.
id() currently just wraps the identifier, so a schema field like my"field produces invalid SQL everywhere this helper is used.
💡 Suggested fix
export function id(val: string | readonly string[], quoteChar = '"'): Sql {
if (val === "" || (Array.isArray(val) && val.length === 0)) return raw("");
if (typeof val === "string") {
- return raw(quoteChar + val + quoteChar);
+ return raw(quoteChar + val.replaceAll(quoteChar, quoteChar + quoteChar) + quoteChar);
}
let res = "";
for (let i = 0; i < val.length; i++) {
if (i > 0) res += ", ";
- res += quoteChar + val[i]! + quoteChar;
+ res += quoteChar + val[i]!.replaceAll(quoteChar, quoteChar + quoteChar) + quoteChar;
}
return raw(res);
}🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.
In `@src/adapters/utils/sql.ts` around lines 86 - 98, id() wraps identifiers but
doesn't escape embedded quote characters, so identifiers like my"field produce
invalid SQL; update the function (referencing id and raw and the quoteChar
parameter) to escape any instances of quoteChar inside each identifier by
doubling them (e.g., replace quoteChar with quoteChar+quoteChar) before wrapping
with quoteChar, and apply this both in the string branch and the array loop
while keeping the existing empty-value handling intact.
This case would not exist, as it is not a valid schema; the TypeScript compiler would complain about it.
@buibaoanh That's a fair point — since schema field names are constrained by TypeScript's type system as object property keys, a value like my"field would be highly unusual and effectively rejected at compile time. The concern is moot in this tightly-typed schema context. I'll defer to your judgment here.
(╯°□°)╯ ┬─┬
✏️ Learnings added
Learnt from: buibaoanh
Repo: 8monkey-ai/no-orm PR: 5
File: src/adapters/utils/sql.ts:86-98
Timestamp: 2026-05-13T08:54:46.856Z
Learning: In the `no-orm` codebase (`src/adapters/utils/sql.ts`), the `id()` SQL identifier helper does not need to escape embedded quote characters because all schema field names originate from TypeScript-typed object property keys (defined in the `Schema` type from `src/types`). The TypeScript compiler enforces that such identifiers are valid and would not contain characters like `"` that could break SQL identifier quoting. Raising this escaping concern is not applicable for this codebase.
Note: Learnings are effective only in the context of similar code segments. To apply general coding guidelines, please configure review instructions. You can manage existing learnings in the app.
- Block PK-field presence by key (hasOwnProperty) rather than by value,
so `{ id: undefined }` no longer bypasses the guard
- Always use driver.run() for parameterless mutations so changes count
is not silently dropped (fixes deleteMany returning changes: 0)
- Use JSON.stringify(tuple) for composite PK hashing to eliminate "|"
delimiter collisions
- Escape single quotes in postgres JSON path literals to prevent injection
- Fix async rejection assertions in tests: use rejects.toThrow() instead
of synchronous toThrow() for promise-returning adapter methods
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
…PaginationCriteria - MemoryAdapter now rejects unknown fields on create/update/updateMany/upsert - Export getPaginationCriteria from common.ts (was unexported) Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
Actionable comments posted: 6
🧹 Nitpick comments (2)
src/adapters/memory.ts (2)
309-313: ⚡ Quick win: Use indexed loop in hot-path field validation.

`assertNoUnknownFields()` is called from create/update/upsert paths; use an indexed loop instead of `for...of`.

Proposed refactor:

 private assertNoUnknownFields(model: keyof S & string, data: Record<string, unknown>): void {
   const knownFields = this.schema[model]!.fields;
-  for (const key of Object.keys(data)) {
+  const keys = Object.keys(data);
+  for (let i = 0; i < keys.length; i++) {
+    const key = keys[i]!;
     if (!(key in knownFields)) {
       throw new Error(`Unknown field "${key}" in model "${model}"`);
     }
   }
 }

As per coding guidelines: "Prefer indexed `for` loops over `for...of` or `.forEach` in hot paths to avoid iterator protocol overhead".

🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the rest with a brief reason, keep changes minimal, and validate. In `@src/adapters/memory.ts` around lines 309 - 313, The validation loop in assertNoUnknownFields should use an indexed for loop for performance: capture Object.keys(data) into a local variable (e.g., keys), iterate with a classic for (let i = 0; i < keys.length; i++) and use keys[i] when checking membership against knownFields, retaining the same throw new Error(`Unknown field "${key}" in model "${model}"`) behavior; update the function assertNoUnknownFields and its local variables (knownFields, data, keys) accordingly to remove the for...of iterator overhead.
176-177: ⚡ Quick win: Annotate or eliminate unsafe adapter-boundary assertions.
These casts are currently unannotated. Either remove them via tighter helper typing or add `eslint-disable-next-line` with a short reason per occurrence.
As per coding guidelines: "Use `eslint-disable-next-line` with a short, specific reason for unavoidable type assertions at adapter boundaries".
Also applies to: 199-200, 230-232
🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the rest with a brief reason, keep changes minimal, and validate. In `@src/adapters/memory.ts` around lines 176 - 177, The two unsafe casts on adapter boundaries (the casts of data to expected types used in assertNoPrimaryKeyUpdates(this.schema[model]!, data) and this.assertNoUnknownFields(model, data as Record<string, unknown>), and the similar occurrences at the other spots referenced) must be annotated or removed: either tighten the helper/type signatures so data already has the correct typed shape before calling assertNoPrimaryKeyUpdates and assertNoUnknownFields (e.g., adjust the method signatures to accept the precise Record/Schema types), or keep the casts but add a single-line eslint-disable-next-line with a concise reason immediately above each cast (e.g., // eslint-disable-next-line `@typescript-eslint/no-explicit-any` — adapter boundary: validated shape asserted here) so the unsafe assertion is documented; update the calls in the functions/methods named assertNoPrimaryKeyUpdates and assertNoUnknownFields and the other two occurrences accordingly.
🤖 Prompt for all review comments with AI agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.
Inline comments:
In `@src/adapters/memory.ts`:
- Around line 108-109: The primary-key fast-path in find() is using
String(where.value) to lookup pkIndexes, but pkIndexes are keyed by the tuple
JSON hash used when rows are indexed; replace the String conversion with the
same tuple-hash function used at insertion (the hashing routine that populates
pkIndexes at index-time) so find() computes the identical key before calling
this.pkIndexes.get(model)!.get(...), ensuring PK lookups hit the map instead of
falling back to full scans.
In `@src/adapters/postgres.ts`:
- Around line 285-290: The migrate() implementation always starts a new
transaction via this.executor.transaction(...) even when already inside a
transaction; change it to check this.executor.inTransaction first (like
transaction() does) and, if true, run the migration statements directly on the
existing executor (e.g., loop calling this.executor.run or exec.run) instead of
opening a nested transaction, otherwise call this.executor.transaction(...) as
currently done; reference migrate(), this.executor.transaction,
this.executor.inTransaction, and transaction() when making the change.
In `@src/adapters/sqlite.ts`:
- Around line 266-271: The migrate() method currently always calls
this.executor.transaction(...) and opens a new transaction even when already
inside one; change migrate() to check this.executor.inTransaction and if true
run the migration statements directly via this.executor.run(...) (or exec.run
within the existing executor) instead of wrapping them in a new transaction,
otherwise fall back to calling this.executor.transaction(...) as it does
now—update the logic around async migrate() and the loop over stmts so it reuses
the active transaction when inTransaction is true, mirroring the guard used in
transaction().
- Around line 103-121: toJsonPath currently emits non-numeric segments as
.{segment} which breaks/changes JSON path interpretation for keys containing
dots, brackets or quotes; update toJsonPath to wrap any non-index/key segments
in bracketed quoted form (e.g. ["..."]) and escape any backslashes and double
quotes inside the segment (also handle empty-string keys as [""]). Preserve the
numeric-segment-as-index logic (keep [N] for pure-digit segments) and ensure all
non-digit segments are escaped and quoted before concatenation so json_extract()
reads the correct object key.
In `@src/adapters/utils/sql.ts`:
- Around line 190-228: The JSON-path array IN/NOT IN branches in the leaf
function pass the whole array to options.columnExpr, losing scalar type hints
and causing Postgres to treat values as text; update the "in" and "not_in" cases
inside leaf(c) (used by buildWhere) to call options.columnExpr with a scalar or
explicit type hint instead of the full array (e.g., pass c.value[0] or an
explicit { type: ... } hint derived from the field) so toColumnExpr can apply
numeric/boolean casts correctly before generating the IN/NOT IN clause with
placeholders(params).
In `@src/types.ts`:
- Around line 195-204: Cursor.after's current map shape cannot represent
multiple SortBy entries that share the same top-level field but different
SortBy.path values; update the public Cursor type so it is an ordered array
(e.g., items with { field: FieldName<T>, path?: string[], value: unknown }) to
preserve the exact SortBy ordering and values, then update any keyset helper
signatures that reference Cursor or Cursor.after to consume the ordered array
and reconstruct the original ordering predicate using SortBy entries
(references: SortBy, SortBy.path, Cursor, Cursor.after, and the keyset helper
functions).
---
Nitpick comments:
In `@src/adapters/memory.ts`:
- Around line 309-313: The validation loop in assertNoUnknownFields should use
an indexed for loop for performance: capture Object.keys(data) into a local
variable (e.g., keys), iterate with a classic for (let i = 0; i < keys.length;
i++) and use keys[i] when checking membership against knownFields, retaining the
same throw new Error(`Unknown field "${key}" in model "${model}"`) behavior;
update the function assertNoUnknownFields and its local variables (knownFields,
data, keys) accordingly to remove the for...of iterator overhead.
- Around line 176-177: The two unsafe casts on adapter boundaries (the casts of
data to expected types used in assertNoPrimaryKeyUpdates(this.schema[model]!,
data) and this.assertNoUnknownFields(model, data as Record<string, unknown>),
and the similar occurrences at the other spots referenced) must be annotated or
removed: either tighten the helper/type signatures so data already has the
correct typed shape before calling assertNoPrimaryKeyUpdates and
assertNoUnknownFields (e.g., adjust the method signatures to accept the precise
Record/Schema types), or keep the casts but add a single-line
eslint-disable-next-line with a concise reason immediately above each cast
(e.g., // eslint-disable-next-line `@typescript-eslint/no-explicit-any` — adapter
boundary: validated shape asserted here) so the unsafe assertion is documented;
update the calls in the functions/methods named assertNoPrimaryKeyUpdates and
assertNoUnknownFields and the other two occurrences accordingly.
🪄 Autofix (Beta)
Fix all unresolved CodeRabbit comments on this PR:
- Push a commit to this branch (recommended)
- Create a new PR with the fixes
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: dd68dac8-c280-4ff6-9104-05000fa52fb1
⛔ Files ignored due to path filters (1)
`bun.lock` is excluded by `!**/*.lock`
📒 Files selected for processing (17)
- .oxlintrc.json
- AGENTS.md
- README.md
- package.json
- src/adapters/memory.test.ts
- src/adapters/memory.ts
- src/adapters/postgres.ts
- src/adapters/sqlite.test.ts
- src/adapters/sqlite.ts
- src/adapters/utils/common.ts
- src/adapters/utils/sql.test.ts
- src/adapters/utils/sql.ts
- src/core.ts
- src/index.ts
- src/types.test.ts
- src/types.ts
- tsconfig.json
💤 Files with no reviewable changes (1)
- src/core.ts
✅ Files skipped from review due to trivial changes (2)
- .oxlintrc.json
- README.md
```typescript
function toJsonPath(path: string[]): string {
  let jsonPath = "$";
  for (let i = 0; i < path.length; i++) {
    const segment = path[i]!;
    let isIndex = true;
    if (segment.length === 0) isIndex = false;
    else {
      for (let j = 0; j < segment.length; j++) {
        const c = segment.codePointAt(j);
        if (c === undefined || c < 48 || c > 57) {
          isIndex = false;
          break;
        }
      }
    }
    if (isIndex) jsonPath += `[${segment}]`;
    else jsonPath += `.${segment}`;
  }
  return jsonPath;
```
Literal JSON keys need quoting in SQLite paths.
toJsonPath() emits non-numeric segments as .${segment}. Keys like a.b, x]y, or quote"key are valid JSON object names but are parsed here as path syntax, so json_extract() reads the wrong location or errors. Quote and escape object-key segments when building the path string.
🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.
In `@src/adapters/sqlite.ts` around lines 103 - 121, toJsonPath currently emits
non-numeric segments as .{segment} which breaks/changes JSON path interpretation
for keys containing dots, brackets or quotes; update toJsonPath to wrap any
non-index/key segments in bracketed quoted form (e.g. ["..."]) and escape any
backslashes and double quotes inside the segment (also handle empty-string keys
as [""]). Preserve the numeric-segment-as-index logic (keep [N] for pure-digit
segments) and ensure all non-digit segments are escaped and quoted before
concatenation so json_extract() reads the correct object key.
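For contrast with the shipped implementation, here is a hedged sketch of what quoted-segment emission could look like. The PR intentionally leaves segments unquoted (plain alphanumeric keys by design), and the escape rules below are an assumption about SQLite's path grammar, not verified against its parser:

```typescript
// Sketch only: quote non-numeric segments using SQLite's `$."key"` syntax
// so keys containing dots or brackets resolve as literal object keys.
function toQuotedJsonPath(path: string[]): string {
  let jsonPath = "$";
  for (let i = 0; i < path.length; i++) {
    const segment = path[i]!;
    if (/^\d+$/.test(segment)) {
      jsonPath += `[${segment}]`; // pure digits stay array indexes
    } else {
      const escaped = segment.replace(/\\/g, "\\\\").replace(/"/g, '\\"');
      jsonPath += `."${escaped}"`; // quoted keys survive dots and brackets
    }
  }
  return jsonPath;
}

console.log(toQuotedJsonPath(["meta", "a.b", "0"])); // $."meta"."a.b"[0]
```
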
This case would not exist in practice.
@buibaoanh, understood — if the path segments are guaranteed to be simple alphanumeric keys in your use case, the current implementation is fine. It may be worth adding a brief comment in toJsonPath() to document this assumption (e.g., that path segments are expected to be plain alphanumeric keys without dots, brackets, or quotes), so future contributors are aware of the constraint.
ʕ •`ᴥ•´ʔ
✏️ Learnings added
Learnt from: buibaoanh
Repo: 8monkey-ai/no-orm PR: 5
File: src/adapters/sqlite.ts:103-121
Timestamp: 2026-05-13T08:56:37.904Z
Learning: In `src/adapters/sqlite.ts`, the `toJsonPath()` function does not quote/escape non-numeric JSON path segments (emits them as `.segment`). The author has confirmed this is intentional: path segments are expected to be plain alphanumeric keys only. JSON keys containing dots, brackets, or quotes are not supported by design and are not expected to appear in practice.
Note: Learnings are effective only in the context of similar code segments. To apply general coding guidelines, please configure review instructions. You can manage existing learnings in the app.
```typescript
leaf: (c) => {
  const expr = options.columnExpr(options.model, c.field as string, c.path, c.value);
  const field = options.model.fields[c.field as string];
  const mapped = options.mapValue ? options.mapValue(c.value, field) : c.value;

  switch (c.op) {
    case "eq":
      return c.value === null ? sql`${expr} IS NULL` : sql`${expr} = ${mapped}`;
    case "ne":
      return c.value === null ? sql`${expr} IS NOT NULL` : sql`${expr} != ${mapped}`;
    case "gt":
      return sql`${expr} > ${mapped}`;
    case "gte":
      return sql`${expr} >= ${mapped}`;
    case "lt":
      return sql`${expr} < ${mapped}`;
    case "lte":
      return sql`${expr} <= ${mapped}`;
    case "in": {
      if (c.value.length === 0) return sql`1=0`;
      let params: unknown[] = c.value;
      if (options.mapValue) {
        params = [];
        for (let i = 0; i < c.value.length; i++) {
          params.push(options.mapValue(c.value[i], field));
        }
      }
      return sql`${expr} IN (${placeholders(params)})`;
    }
    case "not_in": {
      if (c.value.length === 0) return sql`1=1`;
      let params: unknown[] = c.value;
      if (options.mapValue) {
        params = [];
        for (let i = 0; i < c.value.length; i++) {
          params.push(options.mapValue(c.value[i], field));
        }
      }
      return sql`${expr} NOT IN (${placeholders(params)})`;
```
Preserve type hints for JSON-path IN / NOT IN.
buildWhere() passes the whole array to columnExpr(). Postgres only applies numeric/boolean casts in toColumnExpr() when that hint is a scalar, so a clause like { type: "number", path, op: "in", value: [1, 2] } becomes a text comparison instead of a numeric one. Pass a scalar or explicit type hint for array operators too, or typed JSON-path IN filters will fail on Postgres.
🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.
In `@src/adapters/utils/sql.ts` around lines 190 - 228, The JSON-path array IN/NOT
IN branches in the leaf function pass the whole array to options.columnExpr,
losing scalar type hints and causing Postgres to treat values as text; update
the "in" and "not_in" cases inside leaf(c) (used by buildWhere) to call
options.columnExpr with a scalar or explicit type hint instead of the full array
(e.g., pass c.value[0] or an explicit { type: ... } hint derived from the field)
so toColumnExpr can apply numeric/boolean casts correctly before generating the
IN/NOT IN clause with placeholders(params).
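The hint-selection logic the review asks for can be sketched in isolation. `hintFor` is an illustrative helper, not the PR's real code, which would inline this in `leaf`:

```typescript
// Pass a sample element to the column-expression hook for array operators
// so scalar type hints (number/boolean) survive for JSON-path casts.
function hintFor(c: { op: string; value: unknown }): unknown {
  if ((c.op === "in" || c.op === "not_in") && Array.isArray(c.value)) {
    return c.value.length > 0 ? c.value[0] : undefined; // scalar hint
  }
  return c.value; // scalar operators already carry the right hint
}

console.log(hintFor({ op: "in", value: [1, 2, 3] })); // 1
console.log(hintFor({ op: "eq", value: true })); // true
```
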
```typescript
export interface SortBy<T = Record<string, unknown>> {
  field: FieldName<T>;
  path?: string[];
  type?: FieldType;
  direction?: "asc" | "desc";
}

export interface Cursor<T = Record<string, unknown>> {
  after: Partial<{ [K in FieldName<T>]: unknown }>;
}
```
Cursor keys are ambiguous for JSON-path sorting.
Cursor.after stores one value per top-level field, but SortBy can distinguish the same field by different paths. A cursor for sortBy: [{ field: "meta", path: ["a"] }, { field: "meta", path: ["b"] }] can only carry one meta value, so the keyset helpers cannot reconstruct the original ordering predicate correctly. This public shape needs an ordered { field, path, value }[]-style cursor to make path-based pagination correct.
🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.
In `@src/types.ts` around lines 195 - 204, Cursor.after's current map shape cannot
represent multiple SortBy entries that share the same top-level field but
different SortBy.path values; update the public Cursor type so it is an ordered
array (e.g., items with { field: FieldName<T>, path?: string[], value: unknown
}) to preserve the exact SortBy ordering and values, then update any keyset
helper signatures that reference Cursor or Cursor.after to consume the ordered
array and reconstruct the original ordering predicate using SortBy entries
(references: SortBy, SortBy.path, Cursor, Cursor.after, and the keyset helper
functions).
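The ordered-cursor shape the review proposes can be sketched as follows. This is a hypothetical type, not the one currently in src/types.ts, and `field` stands in for `FieldName<T>`:

```typescript
// One entry per SortBy item, preserving order and allowing the same
// top-level field to appear under different JSON paths.
interface CursorEntry {
  field: string;
  path?: string[];
  value: unknown;
}
interface OrderedCursor {
  after: CursorEntry[];
}

const cursor: OrderedCursor = {
  after: [
    { field: "meta", path: ["a"], value: 1 },
    { field: "meta", path: ["b"], value: "x" }, // same field, different path
  ],
};
console.log(cursor.after.length); // 2
```

With the map-based shape, the second `meta` entry would overwrite the first; the array keeps both, in `sortBy` order.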
- Fix async .toThrow() assertions in memory tests — update/upsert threw synchronously before returning a Promise, violating the Adapter contract; make both methods async so guard throws become rejections, and update test assertions to await expect(...).rejects.toThrow()
- Fix PK fast-path hash mismatch in MemoryAdapter.find() — lookup used String(value) while the index is keyed by JSON.stringify([value])
- Parameterize JSON path segments in postgres toColumnExpr — remove raw(pathLiterals) string building in favour of sql template fragments joined via join(), eliminating the manual quote-escaping
- Fix IN/NOT IN losing numeric/boolean type hint for JSON paths in postgres — derive cast hint from first array element when value is an array

Co-Authored-By: Claude Sonnet 4.6 (1M context) <noreply@anthropic.com>
62cec53 to f97069d (Compare)
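The PK fast-path hash mismatch mentioned in the commit message above can be reproduced in miniature. The `Map` here is a stand-in for the adapter's `pkIndexes`:

```typescript
// Index keys use JSON.stringify of the PK tuple, but the broken lookup
// used String(value): "42" never matches "[42]".
const index = new Map<string, { id: number }>();
const row = { id: 42 };
index.set(JSON.stringify([row.id]), row); // how rows are indexed

console.log(index.get(String(42))); // undefined: wrong key shape
console.log(index.get(JSON.stringify([42]))); // the row
```
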
Actionable comments posted: 4
♻️ Duplicate comments (2)
src/adapters/sqlite.ts (1)
266-271: ⚠️ Potential issue | 🟠 Major | ⚡ Quick win: Reuse the current transaction in `migrate()`.
Line 268 always opens a new transaction. If `tx.migrate()` is called from inside `adapter.transaction(...)`, this issues a second `BEGIN` on the same connection instead of reusing the active executor, which can fail outright or roll back the outer unit of work.
💡 Minimal fix

```diff
 async migrate(): Promise<void> {
   const stmts = migrateSqls(this.schema, { sqlType });
-  await this.executor.transaction(async (exec) => {
+  const run = async (exec: QueryExecutor) => {
     // eslint-disable-next-line no-await-in-loop -- DDL is intentionally sequential
     for (let i = 0; i < stmts.length; i++) await exec.run(stmts[i]!);
-  });
+  };
+  if (this.executor.inTransaction) {
+    await run(this.executor);
+    return;
+  }
+  await this.executor.transaction(run);
 }
```

🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the rest with a brief reason, keep changes minimal, and validate. In `@src/adapters/sqlite.ts` around lines 266 - 271, The migrate() implementation currently always calls this.executor.transaction, which starts a new BEGIN even when called from inside an existing adapter.transaction; change migrate to accept an optional executor/transaction context (e.g., param exec?: Executor) and, if exec is provided, run the DDL statements with the provided exec.run(...) loop, otherwise fall back to this.executor.transaction(...). Update any callers (notably adapter.transaction(...) usage) to pass the active exec into migrate so the existing transaction is reused and no nested BEGIN is emitted.
src/adapters/utils/sql.ts (1)
190-228: ⚠️ Potential issue | 🟠 Major | ⚡ Quick win: Preserve the element type hint for IN/NOT IN.
Line 191 builds `expr` before the operator switch, so the `in`/`not_in` branches pass the whole array into `columnExpr()`. Adapters that infer JSON-path casts from the sample value lose the scalar hint here, and numeric/boolean path filters can fall back to text comparison.
💡 Minimal fix

```diff
 leaf: (c) => {
-  const expr = options.columnExpr(options.model, c.field as string, c.path, c.value);
+  const valueHint =
+    (c.op === "in" || c.op === "not_in") && c.value.length > 0 ? c.value[0] : c.value;
+  const expr = options.columnExpr(options.model, c.field as string, c.path, valueHint);
   const field = options.model.fields[c.field as string];
   const mapped = options.mapValue ? options.mapValue(c.value, field) : c.value;
```

🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the rest with a brief reason, keep changes minimal, and validate. In `@src/adapters/utils/sql.ts` around lines 190 - 228, The bug is that expr is computed once using the entire array value so columnExpr loses the scalar element type for "in" / "not_in"; update the leaf function in src/adapters/utils/sql.ts so the "in" and "not_in" branches call options.columnExpr(...) using a sample element (e.g. c.value[0]) or otherwise defer/compute an element-specific expr instead of the full array, then use that element-specific expr in the IN/NOT IN SQL templates and keep the existing per-item options.mapValue logic for params.
🤖 Prompt for all review comments with AI agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.
Inline comments:
In `@src/adapters/memory.ts`:
- Around line 170-188: The update path mutates stored rows with
Object.assign(value, data) which uses undefined fields to clear columns; change
this by first building a cleaned patch object that excludes keys whose value ===
undefined (preserving null), then call Object.assign(value, cleaned) and use
cleaned for assertNoPrimaryKeyUpdates/assertNoUnknownFields/matching flow; apply
the same undefined-filtering fix to the other similar methods that merge partial
patches (the other update/patch/updateMany helpers noted in this diff) so all
MemoryAdapter update-like functions ignore undefined fields the same way SQL
adapters do.
- Around line 61-62: The transaction method currently calls fn(this) directly so
writes are immediately applied and not rolled back on error; change it to run
the callback against a temporary transactional adapter backed by a deep copy of
the in-memory state (e.g., clone the internal store used by the MemoryAdapter),
pass that transactional Adapter<S> into transaction(fn), and on successful
resolution replace/commit the real adapter state with the transactional state;
on rejection discard the transactional state and rethrow the error so the
original memory remains unchanged. Ensure you locate and use the transaction
method and the internal store structure (the Memory adapter class and its data
store fields) when creating the cloned transactional adapter and performing the
commit/rollback.
In `@src/adapters/postgres.ts`:
- Line 1: Remove the Node-only import "createHash" and replace its usage with a
small portable string-hash implementation (e.g., DJB2 or FNV-1a) defined in this
module; locate where createHash(...) is used to generate deterministic
prepared-statement names (the prepared-statement name generation code) and swap
in the new hash function, return a short stable string (hex/base36) from that
function to form the prepared-statement name so behavior stays deterministic but
no Node builtin is required.
In `@src/types.ts`:
- Around line 73-77: The exposed CRUD signatures (e.g., create<K extends keyof S
& string, T extends Record<string, unknown> = InferModel<S[K]>>(...) and the
similar overloads at the other ranges) allow callers to override the row shape
T; change them to derive the row type directly from the schema (use
InferModel<S[K]> or an alias like RowFor<K> = InferModel<S[K]>) and remove the
caller-supplied T generic, keeping generics only for projected field sets or
predicate keys; update each affected method signature (create, update, upsert,
find, etc.) to accept model: K and data/where/select typed against the derived
RowFor<K> and to return the appropriately projected result type based on select
rather than returning full T, and apply the same pattern to the other signatures
noted (lines 84–88, 95–102, 110–116, 121–145, 150–166).
---
Duplicate comments:
In `@src/adapters/sqlite.ts`:
- Around line 266-271: The migrate() implementation currently always calls
this.executor.transaction, which starts a new BEGIN even when called from inside
an existing adapter.transaction; change migrate to accept an optional
executor/transaction context (e.g., param exec?: Executor) and, if exec is
provided, run the DDL statements with the provided exec.run(...) loop, otherwise
fall back to this.executor.transaction(...). Update any callers (notably
adapter.transaction(...) usage) to pass the active exec into migrate so the
existing transaction is reused and no nested BEGIN is emitted.
In `@src/adapters/utils/sql.ts`:
- Around line 190-228: The bug is that expr is computed once using the entire
array value so columnExpr loses the scalar element type for "in" / "not_in";
update the leaf function in src/adapters/utils/sql.ts so the "in" and "not_in"
branches call options.columnExpr(...) using a sample element (e.g. c.value[0])
or otherwise defer/compute an element-specific expr instead of the full array,
then use that element-specific expr in the IN/NOT IN SQL templates and keep the
existing per-item options.mapValue logic for params.
🪄 Autofix (Beta)
Fix all unresolved CodeRabbit comments on this PR:
- Push a commit to this branch (recommended)
- Create a new PR with the fixes
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: 15fdb474-4abd-4a4d-bff1-12c3450e765f
⛔ Files ignored due to path filters (1)
`bun.lock` is excluded by `!**/*.lock`
📒 Files selected for processing (17)
- .oxlintrc.json
- AGENTS.md
- README.md
- package.json
- src/adapters/memory.test.ts
- src/adapters/memory.ts
- src/adapters/postgres.ts
- src/adapters/sqlite.test.ts
- src/adapters/sqlite.ts
- src/adapters/utils/common.ts
- src/adapters/utils/sql.test.ts
- src/adapters/utils/sql.ts
- src/core.ts
- src/index.ts
- src/types.test.ts
- src/types.ts
- tsconfig.json
💤 Files with no reviewable changes (1)
- src/core.ts
✅ Files skipped from review due to trivial changes (3)
- .oxlintrc.json
- AGENTS.md
- README.md
```typescript
transaction<T>(fn: (tx: Adapter<S>) => Promise<T>): Promise<T> {
  return fn(this);
```
transaction() is not atomic right now.
This just forwards `this` into the callback, so any write that happens before a thrown error stays committed. That makes failure semantics diverge from the SQL adapters and breaks the contract implied by `Adapter.transaction`.
🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.
In `@src/adapters/memory.ts` around lines 61 - 62, The transaction method
currently calls fn(this) directly so writes are immediately applied and not
rolled back on error; change it to run the callback against a temporary
transactional adapter backed by a deep copy of the in-memory state (e.g., clone
the internal store used by the MemoryAdapter), pass that transactional
Adapter<S> into transaction(fn), and on successful resolution replace/commit the
real adapter state with the transactional state; on rejection discard the
transactional state and rethrow the error so the original memory remains
unchanged. Ensure you locate and use the transaction method and the internal
store structure (the Memory adapter class and its data store fields) when
creating the cloned transactional adapter and performing the commit/rollback.
```typescript
update<K extends keyof S & string, T extends Record<string, unknown> = InferModel<S[K]>>(args: {
  model: K;
  where: Where<T>;
  data: Partial<T>;
}): Promise<T | null> {
  const { model, where, data } = args;
  assertNoPrimaryKeyUpdates(this.schema[model]!, data);
  this.assertNoUnknownFields(model, data as Record<string, unknown>);
  const heap = this.tables.get(model)!;

  for (let i = 0; i < heap.length; i++) {
    const value = heap[i]!;
    if (this.matchesWhere(where, value)) {
      const updated: RowData = Object.assign(value, data);
      this.globalLRU.get(updated); // Touch for LRU
      return Promise.resolve(this.mapFromRecord<T>(updated));
    }
  }
  return Promise.resolve(null);
```
Filter out undefined patch fields before mutating stored rows.
These paths feed raw partial objects into Object.assign, so { age: undefined } clears the value in MemoryAdapter while the SQL adapters treat undefined as “do not update this column”. That gives the same API different behavior depending on the backend.
Suggested direction
+ private definedPatch(data: Record<string, unknown>): RowData {
+ const patch: RowData = {};
+ const keys = Object.keys(data);
+ for (let i = 0; i < keys.length; i++) {
+ const key = keys[i]!;
+ const value = data[key];
+ if (value !== undefined) patch[key] = value;
+ }
+ return patch;
+ }
+
update<K extends keyof S & string, T extends Record<string, unknown> = InferModel<S[K]>>(args: {
model: K;
where: Where<T>;
data: Partial<T>;
}): Promise<T | null> {
const { model, where, data } = args;
+ const patch = this.definedPatch(data as Record<string, unknown>);
assertNoPrimaryKeyUpdates(this.schema[model]!, data);
this.assertNoUnknownFields(model, data as Record<string, unknown>);
const heap = this.tables.get(model)!;
for (let i = 0; i < heap.length; i++) {
const value = heap[i]!;
if (this.matchesWhere(where, value)) {
- const updated: RowData = Object.assign(value, data);
+ const updated: RowData = Object.assign(value, patch);
this.globalLRU.get(updated); // Touch for LRU
return Promise.resolve(this.mapFromRecord<T>(updated));
}
}Also applies to: 194-212, 222-244
🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.
In `@src/adapters/memory.ts` around lines 170 - 188, The update path mutates
stored rows with Object.assign(value, data) which uses undefined fields to clear
columns; change this by first building a cleaned patch object that excludes keys
whose value === undefined (preserving null), then call Object.assign(value,
cleaned) and use cleaned for
assertNoPrimaryKeyUpdates/assertNoUnknownFields/matching flow; apply the same
undefined-filtering fix to the other similar methods that merge partial patches
(the other update/patch/updateMany helpers noted in this diff) so all
MemoryAdapter update-like functions ignore undefined fields the same way SQL
adapters do.
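The `definedPatch` idea from the suggested direction can be exercised standalone. This is a minimal sketch, detached from the adapter class:

```typescript
// Drop keys whose value is undefined, keep explicit nulls, so a partial
// patch matches SQL semantics ("undefined" means "do not touch this column").
function definedPatch(data: Record<string, unknown>): Record<string, unknown> {
  const patch: Record<string, unknown> = {};
  const keys = Object.keys(data);
  for (let i = 0; i < keys.length; i++) {
    const key = keys[i]!;
    const value = data[key];
    if (value !== undefined) patch[key] = value;
  }
  return patch;
}

const row: Record<string, unknown> = { id: 1, age: 30, bio: "x" };
Object.assign(row, definedPatch({ age: undefined, bio: null }));
console.log(row); // { id: 1, age: 30, bio: null }: age untouched, bio cleared
```
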
```diff
@@ -0,0 +1,512 @@
+import { createHash } from "node:crypto";
```
🧩 Analysis chain
🏁 Script executed:
#!/bin/bash
rg -n --type ts 'from "node:' src

Repository: 8monkey-ai/no-orm
Length of output: 130
🏁 Script executed:
wc -l src/adapters/postgres.ts

Repository: 8monkey-ai/no-orm
Length of output: 90
🏁 Script executed:
cat -n src/adapters/postgres.ts | head -30

Repository: 8monkey-ai/no-orm
Length of output: 832
🏁 Script executed:
cat -n src/adapters/postgres.ts | sed -n '210,220p'

Repository: 8monkey-ai/no-orm
Length of output: 509
🏁 Script executed:
grep -n "createHash" src/adapters/postgres.ts

Repository: 8monkey-ai/no-orm
Length of output: 193
🏁 Script executed:
ls -la src/adapters/

Repository: 8monkey-ai/no-orm
Length of output: 544
🏁 Script executed:
head -20 src/adapters/sqlite.ts

Repository: 8monkey-ai/no-orm
Length of output: 468
🏁 Script executed:
git ls-files src/adapters | head -20

Repository: 8monkey-ai/no-orm
Length of output: 274
🏁 Script executed:
# Check if there's any runtime detection or dynamic imports pattern
rg -n "typeof.*require|import\.meta|process\.env|globalThis" src/adapters --type ts | head -20

Repository: 8monkey-ai/no-orm
Length of output: 43
🏁 Script executed:
# Check the main export and how these adapters are used
cat -n src/index.ts | head -30

Repository: 8monkey-ai/no-orm
Length of output: 93
🏁 Script executed:
grep -n "createHash\|hash\|digest" src/adapters/sqlite.ts | head -20

Repository: 8monkey-ai/no-orm
Length of output: 43
🏁 Script executed:
# Check memory.ts as well
grep -n "createHash\|node:" src/adapters/memory.ts | head -20

Repository: 8monkey-ai/no-orm
Length of output: 43
🏁 Script executed:
# Check if there's any documentation or README about cross-runtime support
cat README.md | head -50

Repository: 8monkey-ai/no-orm
Length of output: 1120
🏁 Script executed:
# Check package.json to understand the project's intent
cat package.json | head -40

Repository: 8monkey-ai/no-orm
Length of output: 942
🏁 Script executed:
# Check if postgres adapter uses any conditional imports or dynamic requires
grep -n "dynamic\|require\|import(" src/adapters/postgres.ts | head -10

Repository: 8monkey-ai/no-orm
Length of output: 43
🏁 Script executed:
# Check the exports in package.json more carefully for conditional exports
cat package.json | jq .exports

Repository: 8monkey-ai/no-orm
Length of output: 498
🏁 Script executed:
# Check if other adapters have any Node builtins
rg -n 'import.*from.*"(node|bun|deno):' src/adapters --type ts
Length of output: 547
🏁 Script executed:
# Check if there's a pattern of how adapters handle driver-specific imports
head -50 src/adapters/sqlite.ts
Length of output: 1016
Remove the Node-only `node:crypto` import and replace it with a portable hash function.
The `createHash` import at line 1 makes this adapter Node-specific and blocks its use in Bun, Deno, and Edge environments. It is only used at line 216 to generate deterministic prepared-statement names, not for cryptographic security, so a simple portable string hash (e.g., DJB2 or FNV-1a) would serve the same purpose without pulling in a Node builtin.
🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.
In `@src/adapters/postgres.ts` at line 1, Remove the Node-only import "createHash"
and replace its usage with a small portable string-hash implementation (e.g.,
DJB2 or FNV-1a) defined in this module; locate where createHash(...) is used to
generate deterministic prepared-statement names (the prepared-statement name
generation code) and swap in the new hash function, return a short stable string
(hex/base36) from that function to form the prepared-statement name so behavior
stays deterministic but no Node builtin is required.
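A minimal sketch of such a replacement, assuming the hash only needs to be deterministic and short; the `statementName` helper and `stmt_` prefix here are illustrative, not the adapter's actual code:

```typescript
// FNV-1a (32-bit): a tiny, dependency-free string hash. Not cryptographic,
// but prepared-statement names only need determinism and low collision odds.
function fnv1a(input: string): string {
  let hash = 0x811c9dc5; // FNV offset basis
  for (let i = 0; i < input.length; i++) {
    hash ^= input.charCodeAt(i);
    hash = Math.imul(hash, 0x01000193); // multiply by the FNV prime, mod 2^32
  }
  // Force unsigned, then render in base36 to keep the name compact.
  return (hash >>> 0).toString(36);
}

// Hypothetical use: derive a stable statement name from the SQL text.
function statementName(sql: string): string {
  return `stmt_${fnv1a(sql)}`;
}
```

The same SQL text always maps to the same name, so statement caching still works, and nothing outside the language runtime is imported, so the sketch runs unchanged in Node, Bun, and Deno.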
create<K extends keyof S & string, T extends Record<string, unknown> = InferModel<S[K]>>(args: {
  model: K;
  data: T;
  select?: Select<T>;
}): Promise<T>;
Keep row types schema-derived instead of caller-supplied.
These signatures let callers override `T`, so they can misstate model fields and bypass schema-safe `where`/`select` typing entirely. They also return the full `T` even when `select` projects fewer columns. Derive the row type from `S[K]` inside the signature, and reserve generics for things like projected field sets rather than the whole row shape.
Based on learnings: Maintain schema as the source of truth with zero-cost type inference.
Also applies to: 84-88, 95-102, 110-116, 121-145, 150-166
🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.
In `@src/types.ts` around lines 73 - 77, The exposed CRUD signatures (e.g.,
create<K extends keyof S & string, T extends Record<string, unknown> =
InferModel<S[K]>>(...) and the similar overloads at the other ranges) allow
callers to override the row shape T; change them to derive the row type directly
from the schema (use InferModel<S[K]> or an alias like RowFor<K> =
InferModel<S[K]>) and remove the caller-supplied T generic, keeping generics
only for projected field sets or predicate keys; update each affected method
signature (create, update, upsert, find, etc.) to accept model: K and
data/where/select typed against the derived RowFor<K> and to return the
appropriately projected result type based on select rather than returning full
T, and apply the same pattern to the other signatures noted (lines 84–88,
95–102, 110–116, 121–145, 150–166).
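A minimal sketch of the suggested shape, assuming a schema of literal field-type tags; the names `FieldTag`, `RowFor`, and the in-memory `create` below are illustrative stand-ins, not the library's actual API:

```typescript
// Map field-type tags to TypeScript types. The schema stays the single
// source of truth; callers never name a row type themselves.
type FieldTag = "text" | "integer";
type InferField<F extends FieldTag> = F extends "text" ? string : number;
type InferModel<M extends Record<string, FieldTag>> = {
  [K in keyof M]: InferField<M[K]>;
};

const schema = {
  user: { id: "integer", name: "text" },
} as const;

type S = typeof schema;
type RowFor<K extends keyof S> = InferModel<S[K]>;

// No caller-supplied T: both `data` and the return type derive from S[K].
function create<K extends keyof S & string>(args: {
  model: K;
  data: RowFor<K>;
}): RowFor<K> {
  return { ...args.data }; // in-memory stand-in for the real adapter
}

const row = create({ model: "user", data: { id: 1, name: "Ada" } });
// row is typed { id: number; name: string }; data: { id: "1" } would not compile.
```

Projection can then be handled by a separate generic constrained to `keyof RowFor<K>`, so `select` narrows the return type instead of the whole row shape being negotiable.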
Summary by CodeRabbit
New Features
Documentation
Tests
Chores