
refactor(read-state): persist local/remote completion flags in pending ops#70

Open
johan456789 wants to merge 1 commit into main from refactor-after-69-pending-read-state

Conversation

Owner
@johan456789 commented May 5, 2026

Summary

Refs #69

Refactors the read-state persistence layer to track local and remote completion separately using two new boolean flags on pending_read_state_op:

  • localCommitted — set when the article table is updated
  • remoteSynced — set when the remote server acknowledges the change (pre-set to true for local accounts)

This replaces the previous approach of deleting pending ops immediately after local commit, which could lose state if the app crashed before remote sync completed.
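The two-flag scheme can be illustrated with a minimal in-memory simulation. This is a sketch only: the real implementation persists these flags via Room, and everything here other than the flag and DAO-style method names (which follow the PR description) is an assumption.

```kotlin
// Illustrative in-memory model of a pending read-state op with the two new flags.
data class PendingOp(
    val articleId: String,
    val isRead: Boolean,
    var localCommitted: Boolean = false,
    var remoteSynced: Boolean = false,
)

// Hypothetical store mimicking the DAO lifecycle described in the PR.
class PendingOpStore(private val isLocalAccount: Boolean) {
    private val ops = mutableMapOf<String, PendingOp>()

    fun upsert(articleId: String, isRead: Boolean) {
        // Local accounts never run remote sync, so remoteSynced is pre-set to true.
        ops[articleId] = PendingOp(articleId, isRead, remoteSynced = isLocalAccount)
    }

    fun markLocalCommitted(articleId: String) { ops[articleId]?.localCommitted = true }

    fun markRemoteSynced(articleId: String) { ops[articleId]?.remoteSynced = true }

    // An op is only deleted once BOTH sides have completed; a crash before
    // remote sync therefore leaves the op in place for replay.
    fun deleteCompleted() { ops.values.removeAll { it.localCommitted && it.remoteSynced } }

    fun pendingIds(): Set<String> = ops.keys.toSet()
}
```

The key property: an op committed locally but not yet acknowledged remotely survives `deleteCompleted()`, which is exactly the state the old delete-immediately approach could lose.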

Changes

  • Add localCommitted and remoteSynced columns to PendingReadStateOp entity
  • Add Room migration (v10 → v11) with default values
  • Add new DAO methods: queryLocalPending, queryRemotePending, markLocalCommitted, markRemoteSynced, deleteCompleted
  • Rework DiffMapHolder to use the new flags for startup replay and sync coordination
  • Add unit tests for the new pending read state behavior
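Based on the bullets above, the entity and migration presumably look something like the following sketch. Only the table name, column names, and version numbers come from the PR; the entity shape, key, and defaults are assumptions.

```kotlin
@Entity(tableName = "pending_read_state_op")
data class PendingReadStateOp(
    @PrimaryKey val articleId: String,
    val isRead: Boolean,
    @ColumnInfo(defaultValue = "0") val localCommitted: Boolean = false,
    @ColumnInfo(defaultValue = "0") val remoteSynced: Boolean = false,
)

// v10 -> v11: add both flags with DEFAULT 0 so existing pending rows load as
// "not yet committed / not yet synced" and get replayed.
val MIGRATION_10_11 = object : Migration(10, 11) {
    override fun migrate(db: SupportSQLiteDatabase) {
        db.execSQL("ALTER TABLE pending_read_state_op ADD COLUMN localCommitted INTEGER NOT NULL DEFAULT 0")
        db.execSQL("ALTER TABLE pending_read_state_op ADD COLUMN remoteSynced INTEGER NOT NULL DEFAULT 0")
    }
}
```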

Test plan

Made with Cursor


@chatgpt-codex-connector (Bot) left a comment


💡 Codex Review

Here are some automated review suggestions for this pull request.

Reviewed commit: f336fa5eaf

ℹ️ About Codex in GitHub

Your team has set up Codex to review pull requests in this repo. Reviews are triggered when you

  • Open a pull request for review
  • Mark a draft as ready
  • Comment "@codex review".

If Codex has suggestions, it will comment; otherwise it will react with 👍.

Codex can also answer questions or update the PR. Try commenting "@codex address that feedback".

Comment on lines 230 to +231

    applicationScope.launch(ioDispatcher) {
    -     commitAppliedDiffsToDb(diffsToCommitNow)
    +     commitAppliedDiffsToDb(appliedDiffs.associateBy { it.articleId })

P2: Serialize persist and local-commit writes

updateDiff now launches persistPendingReadStateOps and commitAppliedDiffsToDb in separate fire-and-forget coroutines, so when deferDbCommits is false the commit path can run before the upsert finishes. In that ordering, markLocalCommitted(...) updates zero rows (because the row does not exist yet), then the later upsert re-inserts the op with localCommitted=0, causing already-applied changes to be replayed again on restart/flush and leaving incorrect pending state until another flush happens.
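One way to make the zero-row failure mode observable: Room `UPDATE` queries can return the number of rows affected. A hypothetical DAO sketch (method, table, and column names follow the PR description; the rest is an assumption):

```kotlin
@Dao
interface PendingReadStateOpDao {
    // Returns the number of rows updated. A result of 0 for ids that were just
    // applied means the upsert from persistPendingReadStateOps has not landed
    // yet -- exactly the race described above.
    @Query("UPDATE pending_read_state_op SET localCommitted = 1 WHERE articleId IN (:articleIds)")
    suspend fun markLocalCommitted(articleIds: Set<String>): Int
}
```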

Useful? React with 👍 / 👎.

    )
    database.execSQL(
        """
        ALTER TABLE pending_read_state_op ADD COLUMN remoteSynced INTEGER NOT NULL DEFAULT 0

P2: Backfill remoteSynced for local-account pending rows

The 10→11 migration sets remoteSynced to 0 for all existing pending_read_state_op rows, but local accounts never run remote sync and therefore never call markRemoteSynced(...). After replay, those rows become localCommitted=1, remoteSynced=0, so deleteCompleted() can never remove them; users upgrading with preexisting local pending ops will accumulate orphaned rows permanently.
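One possible shape for a fix is a one-time backfill at startup rather than in the migration itself. This is a sketch under stated assumptions: `markAllRemoteSynced`, `backfillAfterUpgrade`, and the `isLocalAccount` flag are hypothetical names; only the table and column names come from the PR.

```kotlin
// Hypothetical DAO method: flip every pending row to remoteSynced so that
// deleteCompleted() can reclaim rows that will never see a server ack.
@Query("UPDATE pending_read_state_op SET remoteSynced = 1")
suspend fun markAllRemoteSynced(): Int

// Run once after upgrading past the 10 -> 11 migration, only for local accounts.
suspend fun backfillAfterUpgrade(isLocalAccount: Boolean, dao: PendingReadStateOpDao) {
    if (isLocalAccount) dao.markAllRemoteSynced()
}
```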

Useful? React with 👍 / 👎.


@gemini-code-assist (Bot) left a comment


Code Review

This pull request refactors the article read-state synchronization logic by introducing "localCommitted" and "remoteSynced" flags to the "PendingReadStateOp" database entity, replacing the previous in-memory deferred diff tracking. Key changes include a database migration to version 11 and updated DAO queries to manage local and remote pending operations. Feedback identifies a high-severity race condition in "updateDiff" where persistence and database commitment are launched in parallel coroutines. Additionally, several improvements are suggested to use "partition" for more efficient collection handling and to add "isNotEmpty" guards before batch service calls to prevent redundant operations.

Comment on lines +224 to 235

    applicationScope.launch(ioDispatcher) {
        persistPendingReadStateOps(appliedDiffs)
    }

    if (!deferDbCommits) {
        Timber.tag("DiffMapHolder").d("Immediately committing ${appliedDiffs.size} articles to DB")
        applicationScope.launch(ioDispatcher) {
    -         commitAppliedDiffsToDb(diffsToCommitNow)
    +         commitAppliedDiffsToDb(appliedDiffs.associateBy { it.articleId })
        }
    } else {
        Timber.tag("DiffMapHolder").d("Deferring DB commits for ${appliedDiffs.size} articles")
    }

high

Launching persistPendingReadStateOps and commitAppliedDiffsToDb in separate coroutines on the same dispatcher creates a race condition. If commitAppliedDiffsToDb executes its markLocalCommitted query before persistPendingReadStateOps has finished its upsertAll, the update will affect zero rows. This results in the pending operation remaining in the database with localCommitted = false, leading to redundant replay attempts on the next app startup. These operations should be sequenced within a single coroutine to ensure correctness.

        applicationScope.launch(ioDispatcher) {
            persistPendingReadStateOps(appliedDiffs)
            if (!deferDbCommits) {
                Timber.tag("DiffMapHolder").d("Immediately committing ${appliedDiffs.size} articles to DB")
                commitAppliedDiffsToDb(appliedDiffs.associateBy { it.articleId })
            } else {
                Timber.tag("DiffMapHolder").d("Deferring DB commits for ${appliedDiffs.size} articles")
            }
        }

Comment on lines +258 to +262

    val markAsReadIds = pendingOps.filter { it.isRead }.map { it.articleId }.toSet()
    val markAsUnreadIds = pendingOps.filter { !it.isRead }.map { it.articleId }.toSet()

    rssService.get().batchMarkAsRead(articleIds = markAsReadIds, markRead = true)
    rssService.get().batchMarkAsRead(articleIds = markAsUnreadIds, markRead = false)

medium

The current implementation iterates over pendingOps twice to separate read and unread IDs. Using partition is more idiomatic and efficient. Additionally, it is best practice to guard the batchMarkAsRead service calls with isNotEmpty() checks to avoid unnecessary network or database overhead when one of the sets is empty.

        val (readOps, unreadOps) = pendingOps.partition { it.isRead }
        val markAsReadIds = readOps.map { it.articleId }.toSet()
        val markAsUnreadIds = unreadOps.map { it.articleId }.toSet()

        if (markAsReadIds.isNotEmpty()) {
            rssService.get().batchMarkAsRead(articleIds = markAsReadIds, markRead = true)
        }
        if (markAsUnreadIds.isNotEmpty()) {
            rssService.get().batchMarkAsRead(articleIds = markAsUnreadIds, markRead = false)
        }

Comment on lines 297 to 298

    rssService.get().batchMarkAsRead(articleIds = diffBatch.markReadIds, markRead = true)
    rssService.get().batchMarkAsRead(articleIds = diffBatch.markUnreadIds, markRead = false)

medium

It is recommended to guard the batchMarkAsRead calls with isNotEmpty() checks to prevent no-op service invocations.

Suggested change

    - rssService.get().batchMarkAsRead(articleIds = diffBatch.markReadIds, markRead = true)
    - rssService.get().batchMarkAsRead(articleIds = diffBatch.markUnreadIds, markRead = false)
    + if (diffBatch.markReadIds.isNotEmpty()) {
    +     rssService.get().batchMarkAsRead(articleIds = diffBatch.markReadIds, markRead = true)
    + }
    + if (diffBatch.markUnreadIds.isNotEmpty()) {
    +     rssService.get().batchMarkAsRead(articleIds = diffBatch.markUnreadIds, markRead = false)
    + }

Comment on lines +373 to +377

    val markAsReadIds = pendingOps.filter { it.isRead }.map { it.articleId }.toSet()
    val markAsUnreadIds = pendingOps.filter { !it.isRead }.map { it.articleId }.toSet()

    rssService.get().batchMarkAsRead(articleIds = markAsReadIds, markRead = true)
    rssService.get().batchMarkAsRead(articleIds = markAsUnreadIds, markRead = false)

medium

Similar to flushLocalPendingOps, using partition here is more efficient than multiple filters. Also, ensure batchMarkAsRead is only called when the corresponding ID sets are not empty.

            val (readOps, unreadOps) = pendingOps.partition { it.isRead }
            val markAsReadIds = readOps.map { it.articleId }.toSet()
            val markAsUnreadIds = unreadOps.map { it.articleId }.toSet()

            if (markAsReadIds.isNotEmpty()) {
                rssService.get().batchMarkAsRead(articleIds = markAsReadIds, markRead = true)
            }
            if (markAsUnreadIds.isNotEmpty()) {
                rssService.get().batchMarkAsRead(articleIds = markAsUnreadIds, markRead = false)
            }

Comment on lines 438 to 439

    val markReadIds = diffs.filter { it.isRead }.map { it.articleId }.toSet()
    val markUnreadIds = diffs.filter { !it.isRead }.map { it.articleId }.toSet()

medium

Using partition here avoids double iteration over the diffs collection.

        val (readDiffs, unreadDiffs) = diffs.partition { it.isRead }
        val markReadIds = readDiffs.map { it.articleId }.toSet()
        val markUnreadIds = unreadDiffs.map { it.articleId }.toSet()
