Add get_last_cursor_for_ids DB query #2535
Claude finished @neekolas's task — View job

**Code Review Complete ✓**

Review Date: October 21, 2025, 6:43 AM UTC

**Overview**

The implementation of `get_last_cursor_for_ids` is reviewed below.

**Code Quality & Best Practices ✅**

Strengths: the chunking logic and test coverage are well structured (see the overall assessment below).
**Potential Issues 🔍**

**1. Schema Field Reference Issue (`xmtp_db/src/encrypted_store/refresh_state.rs:358`)**

The query selects a `cursor` column that the `RefreshState` struct does not declare:

```rust
// Line 358 - Current code:
.select((dsl::entity_id, dsl::cursor))

// The RefreshState struct (lines 105-110) shows:
pub struct RefreshState {
    pub entity_id: Vec<u8>,
    pub entity_kind: EntityKind,
    pub sequence_id: i64,
    pub originator_id: i32,
}
```

Expected behavior: the select should reference the fields the struct actually declares (`sequence_id` and `originator_id`) rather than `cursor`.

**2. Test Code Uses Non-Existent Fields (Lines 1237-1468)**

Multiple tests reference a `cursor` field that does not exist on `RefreshState`:

```rust
// Line 1237, 1248, 1271, etc.:
let entity_kind = EntityKind::Group;

// Lines 1254, 1282, etc.:
RefreshState {
    entity_id: id.clone(),
    entity_kind,
    cursor: 456, // Should be: sequence_id and originator_id
}
```

Impact: These tests won't compile. The code should use the struct's `sequence_id` and `originator_id` fields instead.

**Performance Considerations ⚡**

Good:
Minor optimization opportunity: pre-size the `HashMap` when the number of ids is known up front:

```rust
// Current (line 362):
.map(|vecs| vecs.into_iter().flatten().collect::<HashMap<_, _>>())

// Suggested:
.map(|vecs| {
    let mut map = HashMap::with_capacity(ids.len());
    map.extend(vecs.into_iter().flatten());
    map
})
```

However, this is a micro-optimization and the current code is fine.

**Security Concerns 🔒**

No security issues identified in the implementation.
**Test Coverage 📊**

Excellent coverage with 7 comprehensive test cases.

However: the tests won't compile due to the issues mentioned above (the non-existent `cursor` field on `RefreshState`).

**Recommendations**

Overall Assessment: The core implementation is well-designed, with good chunking logic and comprehensive tests. The compilation issues in the tests need to be resolved before merge.
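As a side note on the pre-sizing suggestion above, the two merge strategies produce identical maps. The sketch below is plain Rust, independent of the Diesel code; `merge_collect` and `merge_with_capacity` are illustrative names, not the PR's API:

```rust
use std::collections::HashMap;

// Merge per-chunk result vectors into one map via flatten + collect,
// letting the map grow as needed.
fn merge_collect(vecs: Vec<Vec<(u32, i64)>>) -> HashMap<u32, i64> {
    vecs.into_iter().flatten().collect()
}

// Same merge, but with the map pre-sized to the expected entry count,
// avoiding intermediate reallocations.
fn merge_with_capacity(vecs: Vec<Vec<(u32, i64)>>, expected: usize) -> HashMap<u32, i64> {
    let mut map = HashMap::with_capacity(expected);
    map.extend(vecs.into_iter().flatten());
    map
}

fn main() {
    let chunks = vec![vec![(1, 10), (2, 20)], vec![(3, 30)]];
    let a = merge_collect(chunks.clone());
    let b = merge_with_capacity(chunks, 3);
    // Both strategies yield the same contents; only allocation behavior differs.
    assert_eq!(a, b);
    assert_eq!(a.get(&2), Some(&20));
}
```

This illustrates why the suggestion is a micro-optimization: the observable result is unchanged, and the savings only matter for large result sets.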
**Codecov Report**

❌ Patch coverage details are in the diff below.

```
@@            Coverage Diff             @@
##             main    #2535      +/-   ##
==========================================
+ Coverage   76.98%   77.12%   +0.14%
==========================================
  Files         283      283
  Lines       37386    37639     +253
==========================================
+ Hits        28781    29029     +248
- Misses       8605     8610       +5
```

☔ View full report in Codecov by Sentry.
TL;DR
Added a new method to batch fetch cursor values for multiple entity IDs.
What changed?
Added a new `get_last_cursor_for_ids` method to the `QueryRefreshState` trait that fetches cursor values for multiple entity IDs in a single operation. The implementation processes IDs in chunks of 900 to stay under SQLite's default bind limit of 999, and returns a `HashMap` mapping entity IDs to their cursor values.

How to test?
Why make this change?
This change improves efficiency when fetching cursor values for multiple entities at once. Instead of making an individual database query per entity ID, we can now batch the lookups together, reducing the number of database operations and improving performance, especially when dealing with many entities.
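The chunk-and-merge approach described above can be sketched in plain Rust. This is a minimal illustration, not the PR's actual code: the `fetch_chunk` closure stands in for the real Diesel query, and the function signature is assumed for demonstration only:

```rust
use std::collections::HashMap;

// SQLite's default bind-parameter limit is 999; chunking at 900 stays safely below it.
const CHUNK_SIZE: usize = 900;

// Batch-fetch cursors for many entity ids, issuing at most CHUNK_SIZE ids
// per query and merging the per-chunk rows into a single map.
fn get_last_cursor_for_ids<F>(ids: &[Vec<u8>], mut fetch_chunk: F) -> HashMap<Vec<u8>, i64>
where
    F: FnMut(&[Vec<u8>]) -> Vec<(Vec<u8>, i64)>,
{
    let mut out = HashMap::with_capacity(ids.len());
    for chunk in ids.chunks(CHUNK_SIZE) {
        out.extend(fetch_chunk(chunk));
    }
    out
}

fn main() {
    // Mock "database": every id maps to cursor 42.
    let ids: Vec<Vec<u8>> = (0..2000u16).map(|i| i.to_be_bytes().to_vec()).collect();
    let result = get_last_cursor_for_ids(&ids, |chunk| {
        // Each chunk respects the bind-parameter budget.
        assert!(chunk.len() <= CHUNK_SIZE);
        chunk.iter().map(|id| (id.clone(), 42)).collect()
    });
    assert_eq!(result.len(), 2000);
}
```

With 2000 ids, this issues three queries (900 + 900 + 200 ids) instead of 2000 individual ones, which is the efficiency gain the description refers to.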