> **Note: Reviews paused.** It looks like this branch is under active development. To avoid overwhelming you with review comments due to an influx of new commits, CodeRabbit has automatically paused this review. You can configure this behavior in the settings.
📝 **Walkthrough**

Replaces custom OTP/email/crypto auth with Convex Auth, migrates auth identity fields to strings, removes `sellerEmail`, adds cursor pagination and validation in listings, expands backend/frontend tests and test utilities, and updates frontend auth gating and pagination UX.
**Sequence Diagram(s):** three Mermaid diagrams (not reproduced here).

**Estimated code review effort:** 🎯 4 (Complex) | ⏱️ ~60 minutes
🚥 Pre-merge checks: ✅ 1 passed | ❌ 2 failed

❌ Failed checks (1 warning, 1 inconclusive)
✅ Passed checks (1 passed)

✏️ Tip: You can configure your own custom pre-merge checks in the settings.
Thanks for using CodeRabbit! It's free for OSS, and your support helps us grow. If you like it, consider giving us a shout-out.
Actionable comments posted: 7
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (3)
backend/convex/reports.ts (1)
28-89: ⚠️ Potential issue | 🟡 Minor

Stale comment numbering: steps 3–4 missing, step 6 duplicated.
The validation steps are numbered 1, 2, 5, 6, 6, 7, 8. Steps 3–4 were presumably the old user-lookup logic that was removed. The duplicate "6" appears at Lines 60 and 73.
✏️ Proposed fix: renumber steps
```diff
- // 2. Validate notes length if provided
+ // 2. Validate notes length
...
- // 5. Validate target exists and handle malformed IDs
+ // 3. Validate target exists and handle malformed IDs
...
- // 6. Check for duplicate report (same user + same target)
+ // 4. Check for duplicate report (same user + same target)
...
- // 6. Check rate limiting (max 10 reports per day)
+ // 5. Check rate limiting (max 10 reports per day)
...
- // 7. Create the report
+ // 6. Create the report
...
- // 8. Check if content should be auto-hidden
+ // 7. Check if content should be auto-hidden
```

backend/convex/messages.ts (1)
48-69: ⚠️ Potential issue | 🟡 Minor

Fix type validators in `debugCreateConversationID`: use `v.string()` for `buyerId` and `sellerId`.

The schema defines `buyerId` and `sellerId` as `v.string()` with explicit comments noting they store auth identity subjects. However, `debugCreateConversationID` validates these args as `v.id('users')`, which is incompatible with the actual format stored in production (`identity.subject` strings from `getOrCreateConversation`). Change both to `v.string()` to match the schema and prevent test/production behavior divergence.

backend/convex/__tests__/listings.test.ts (1)
358-368: ⚠️ Potential issue | 🟡 Minor

Test name says "updateListing" but it actually tests `createListing`.

This test is titled "updateListing fails when tag is >20 characters long" but only calls `createListing` (line 363). It should first create a listing, then attempt `updateListing` with an oversized tag to match its description.

Proposed fix:
```diff
 it('updateListing fails when tag is >20 characters long', async () => {
   const t = convexTest(schema as any, modules);
   const asOwner = t.withIdentity(ownerIdentity);
+  const listingId = await asOwner.mutation(api.listings.createListing, baseArgs);
+
   await expect(async () => {
-    await asOwner.mutation(api.listings.createListing, {
-      ...baseArgs,
+    await asOwner.mutation(api.listings.updateListing, {
+      id: listingId,
       tags: ['supercalifragilisticexpialidocious'],
     });
   }).rejects.toThrowError('Tags must be 20 characters or less');
 });
```
🤖 Fix all issues with AI agents
In @.github/workflows/ci.yml:
- Around line 59-61: The CI currently runs a redundant frontend build: the root
"npm run build --if-present" (which runs with --workspaces) already builds the
frontend, and the separate "npm run build --workspace=frontend" step should be
removed or consolidated; if you choose to keep the explicit frontend step, set
the required environment variable EXPO_PUBLIC_CONVEX_URL for the job (used by
frontend/app/_layout.tsx) so the build won't fail—update the workflow to either
delete the workspace-specific run or add env: EXPO_PUBLIC_CONVEX_URL with an
appropriate value to the build job.
In `@backend/convex/listings.ts`:
- Around line 341-377: The manual pagination can falsely report isDone when
query.take(MAX_COLLECT) truncates results; update the pagination logic around
MAX_COLLECT, allResults, filtered, and the returned isDone/continueCursor so
that if allResults.length === MAX_COLLECT (meaning the backend truncation
occurred) you treat the page as potentially incomplete: set isDone to false
(instead of true) when the source was truncated and compute continueCursor so
clients can request the next batch, and also propagate this same change in
getListings; specifically adjust the code that computes
hasMore/nextCursor/isDone after slicing filtered to check allResults.length ===
MAX_COLLECT and override isDone to false (and provide a nextCursor) when
truncation happened so callers are aware there may be more matching items.
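The truncation check this prompt describes can be sketched as a small pure function. This is a hedged, framework-free sketch: `paginateFiltered` and the plain-array inputs are illustrative, while the real logic operates on Convex query results inside `searchAndFilterListings`/`getListings`.

```typescript
// Sketch of the fix described above (plain arrays stand in for Convex reads).
// If the backend read was truncated at MAX_COLLECT, the current page may be
// incomplete even when the slice ran off the end, so isDone must stay false.
const MAX_COLLECT = 5;

function paginateFiltered<T>(
  allResults: T[], // result of query.take(MAX_COLLECT)
  keep: (item: T) => boolean,
  startIndex: number,
  numItems: number
) {
  const wasTruncated = allResults.length === MAX_COLLECT;
  const filtered = allResults.filter(keep);
  const page = filtered.slice(startIndex, startIndex + numItems);
  const reachedEnd = startIndex + numItems >= filtered.length;
  return {
    page,
    // Done only when we ran off the end AND the source read was complete.
    isDone: reachedEnd && !wasTruncated,
    continueCursor: String(startIndex + page.length),
  };
}

const res = paginateFiltered([1, 2, 3, 4, 5], (n) => n % 2 === 1, 0, 3);
console.log(res); // isDone is false: 5 items === MAX_COLLECT, so more may exist
```

The key design point is that `isDone` must be ANDed with "the source read was not truncated", not computed from the filtered slice alone.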
In `@backend/convex/schema.ts`:
- Around line 70-85: Replace the manual auth table definitions by importing and
spreading authTables from `@convex-dev/auth/server` into your schema (use
defineSchema) and then override the users table with defineTable to add only
your custom fields (emailVerified, createdAt) while keeping the library-required
fields as authTables provides; specifically remove the manual users block,
import authTables, spread ...authTables in the schema object, and then declare
users: defineTable({...}) to match the reviewer example (use
v.optional(v.string()) for name and add the phone/email indexes as shown).
In `@frontend/app/index.tsx`:
- Around line 82-86: The reset effect clears cursor/allListings but doesn't
prevent an in-flight or immediately re-run listingsResult handler from
repopulating allListings with stale data; modify the code that processes
listingsResult (the effect handling listingsResult in lines ~67-79) to check a
currentFilter marker before applying results: introduce a stable filter
token/ref (e.g., currentFilterRef or filterVersion) updated inside the
filter-change effect that calls setCursor/setAllListings/setIsDone, and have the
listingsResult handler compare that token to the token captured when the query
was issued (or guard by matching filters) before merging results—alternatively
coordinate resets and query processing via a single state update (a
filterVersion number) so stale listingsResult are ignored.
- Around line 67-79: The effect is appending pages twice because it runs on
cursor changes before listingsResult updates; change the effect to depend only
on listingsResult and guard appends by tracking which page/cursor was already
processed (e.g., keep a ref/Set like lastProcessedCursor or processedCursors);
inside the effect check listingsResult exists and that listingsResult.cursor (or
a page identifier from listingsResult) is not in processedCursors before calling
setAllListings, setIsDone and setIsLoadingMore, then add that cursor to
processedCursors; keep handleLoadMore updating cursor as before but remove
cursor from the effect dependency so only new query results drive the update.
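The guard described in this prompt amounts to a "process each cursor at most once" check. A minimal framework-free sketch (names such as `processedCursors`, `handleResult`, and `PageResult` are illustrative, not the actual component code):

```typescript
// Illustrative sketch: ignore a query result whose cursor was already
// processed, so effect re-runs cannot append the same page twice.
type PageResult = { cursor: string | null; page: string[] };

const processedCursors = new Set<string | null>();

function handleResult(result: PageResult | undefined, applyPage: (p: string[]) => void): boolean {
  if (!result || processedCursors.has(result.cursor)) return false;
  processedCursors.add(result.cursor);
  applyPage(result.page);
  return true;
}

const all: string[] = [];
handleResult({ cursor: null, page: ['a', 'b'] }, (p) => all.push(...p));
// The effect re-runs with the same query result: the append is skipped.
handleResult({ cursor: null, page: ['a', 'b'] }, (p) => all.push(...p));
console.log(all); // ['a', 'b']
```

In the React component this would live in a ref (e.g., `useRef(new Set())`) so the set survives re-renders without triggering them.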
In `@frontend/app/listings/new.tsx`:
- Around line 36-57: The current useEffect shows an Alert when !isLoading &&
!isAuthenticated but the Alert can be dismissed leaving the component to return
null and produce a blank screen; update the logic in the useEffect and render
path so unauthenticated users are always redirected: make the Alert
non-cancelable by passing { cancelable: false } to Alert.alert (in useEffect)
and immediately call router.replace('/auth/login') outside or after Alert.alert
so navigation happens even if the alert is dismissed, and remove the render path
that returns null (or replace it with a fallback UI) to ensure the user is never
left on a blank page; reference functions/variables: useEffect, Alert.alert,
isLoading, isAuthenticated, router.replace, and the current `if
(!isAuthenticated) return null` branch.
In `@frontend/package.json`:
- Line 11: The frontend build outputs to frontend/dist as defined by the "build"
script in frontend/package.json, but frontend/.gitignore is missing an entry for
dist; update frontend/.gitignore to include a dist entry (e.g., add "dist" or
"/dist") so the frontend/dist build artifacts are ignored and commit that
change.
🧹 Nitpick comments (14)
docs/SELF_HOSTED_CONVEX.md (2)
86-88: Consider clarifying the create-data example intent.

The comment states creating data "requires authentication" but then provides an example payload. While the payload structure itself is accurate and matches the `createListing` mutation signature, the phrasing could be clearer that the payload is for reference when using the dashboard or authenticated client, not for CLI execution.

Suggested rewording:
```diff
-# Create data (requires authentication - use Convex dashboard or authenticated client instead)
-# Example payload structure:
+# Create data (requires authentication - use Convex dashboard or authenticated client)
+# Example payload structure for reference:
 # {"title":"Test Book","description":"Great condition","price":25,"category":"textbooks","condition":"used","images":["https://example.com/image.jpg"]}
```
77-85: Function signatures verified; consider adding a cursor pagination example and clarifying function use cases.

The examples accurately reflect the current implementation. Both `listings:getListings` and `listings:searchAndFilterListings` and their `paginationOpts` structure (`numItems` and `cursor`) match the backend signatures.

Two documentation improvements would benefit users:
1. Add a cursor pagination example: all examples show `"cursor":null` for the first page. Include a follow-up example demonstrating how to fetch the next page using the cursor value from the previous response.
2. Clarify function differences:
   - `getListings`: filters by category, price range, and tags; simpler use case for browsing.
   - `searchAndFilterListings`: full-text search on titles with additional sorting options (`newest`, `oldest`, `price_asc`, `price_desc`); better for search-driven UX.

📝 Suggested addition: cursor pagination example
Add after line 84:
```diff
 # Search and filter listings
 npx convex run listings:searchAndFilterListings '{"filters":{"category":"textbooks"},"paginationOpts":{"numItems":20,"cursor":null}}'
+
+# Fetch next page using cursor from previous response
+npx convex run listings:getListings '{"category":"textbooks","paginationOpts":{"numItems":20,"cursor":"20"}}'
```

__mocks__/@convex-dev/auth/server.js (1)
1-16: Duplicate mock: this file and `jest.setup.js` define the same mock.

This `__mocks__/@convex-dev/auth/server.js` manual mock is superseded by the explicit `jest.mock('@convex-dev/auth/server', ...)` call in `jest.setup.js` (which runs via `setupFilesAfterEnv`). The explicit `jest.mock()` takes priority over the `__mocks__` directory convention. Having both creates a maintenance burden — if one is updated, the other must be kept in sync.

Consider removing one of these two duplicate mock definitions. If the explicit `jest.mock()` in `jest.setup.js` is the intended approach (it's more visible), this file can be deleted. If you prefer the `__mocks__` approach, remove the `jest.mock()` from `jest.setup.js`.

backend/convex/__tests__/testUtils.ts (1)
66-95: `createTestListing` doesn't allow overriding `createdAt`/`postedOn` timestamps.

Pagination and sort-order tests often need deterministic control over timestamps. The current overrides type omits `createdAt` and `postedOn`, so every listing gets `Date.now()`. Tests that create multiple listings in rapid succession may get identical timestamps, making ordering non-deterministic.

Proposed fix:
```diff
 export async function createTestListing(
   t: any,
   sellerId: Id<'users'>,
   overrides?: Partial<{
     title: string;
     description: string;
     price: number;
     category: 'textbooks' | 'electronics' | 'furniture' | 'tickets' | 'other';
     images: string[];
     condition: 'new' | 'used' | 'refurbished';
     tags: string[];
     status: 'active' | 'sold' | 'inactive' | 'deleted';
+    createdAt: number;
+    postedOn: number;
   }>
 ): Promise<Id<'listings'>> {
   return await t.run(async (ctx: any) => {
     return await ctx.db.insert('listings', {
       sellerId,
       title: overrides?.title ?? 'Test Listing',
       description: overrides?.description ?? 'Test description',
       price: overrides?.price ?? 100,
       category: overrides?.category ?? 'textbooks',
       images: overrides?.images ?? ['https://example.com/image.jpg'],
       condition: overrides?.condition ?? 'new',
       tags: overrides?.tags,
       status: overrides?.status ?? 'active',
-      createdAt: Date.now(),
-      postedOn: Date.now(),
+      createdAt: overrides?.createdAt ?? Date.now(),
+      postedOn: overrides?.postedOn ?? Date.now(),
     });
   });
 }
```

backend/convex/profiles.ts (2)
36-44: Post-pagination filtering of hidden profiles produces inconsistent page sizes.

`paginate()` returns a fixed-size page, then `filter` removes hidden profiles in-memory. If half the page is hidden, the client receives half the requested items despite `isDone` being false. Consider either:
- Adding a composite index (e.g., `by_isHidden`) and filtering at the query level before pagination, or
- Using `.filter((q) => q.neq(q.field('isHidden'), true))` before `.paginate()` so the DB engine handles it (as done in `listings.ts` Lines 331, 346, 437, 452).

♻️ Proposed fix: filter before paginate
```diff
 handler: async (ctx, args) => {
-  const result = await ctx.db.query('profiles').paginate(args.paginationOpts);
-
-  // Filter out hidden profiles and sanitize
-  const publicProfiles = result.page.filter((profile) => !profile.isHidden).map(toPublicProfile);
-
-  return {
-    ...result,
-    page: publicProfiles,
-  };
+  const result = await ctx.db
+    .query('profiles')
+    .filter((q) => q.neq(q.field('isHidden'), true))
+    .paginate(args.paginationOpts);
+
+  return {
+    ...result,
+    page: result.page.map(toPublicProfile),
+  };
 },
```
67-105: Consider adding validation for the `year` field.

All other fields (`name`, `bio`, `major`) are validated against bounds, but `year` (Line 73) has no validation. A reasonable range check (e.g., 2000–2100) would prevent nonsensical values and align with the validation pattern used for the other fields.

backend/convex/users.ts (3)
78-81: Stale name and docstring: `getOrCreateUser` never creates a user.

With the migration to Convex Auth, the user record is created by the auth layer. This mutation only normalizes and patches the existing record. The name `getOrCreateUser` and the docstring referencing "Get or create user profile" / "Called when user successfully authenticates via OTP" are misleading.

Consider renaming to something like `normalizeAuthUser` or `ensureUserNormalized` and updating the docstring.
55-74: Redundant normalization write on every `updateUserProfile` call.

Every invocation patches the email (Line 62) even when it hasn't changed, and backfills `createdAt`/`emailVerified` even after they've already been set. After the first normalization pass (via `getOrCreateUser`), these fields are stable.

Consider only patching fields that actually changed to avoid unnecessary writes:
♻️ Sketch: skip no-op patches
```diff
+  const updates: Record<string, unknown> = { name: args.name };
+  if (authUser.email !== normalizedUser.email) {
+    updates.email = normalizedUser.email;
+  }
+  if (authUser.createdAt === undefined) {
+    updates.createdAt = authUser._creationTime;
+  }
+  if (authUser.emailVerified === undefined && authUser.emailVerificationTime) {
+    updates.emailVerified = true;
+  }
-  await ctx.db.patch(userId, {
-    email: normalizedUser.email,
-    name: args.name,
-    createdAt: authUser.createdAt ?? authUser._creationTime,
-    emailVerified: authUser.emailVerified ?? Boolean(authUser.emailVerificationTime),
-  });
+  await ctx.db.patch(userId, updates);
```
6-24: `toPublicUser` silently returns `null` for users without email.

If a user record exists but has no email, the helper returns `null`, which causes callers like `getOrCreateUser` and `updateUserProfile` to throw "User not found" / "Auth user not found". This masks the real problem (missing email). Consider throwing a descriptive error inside the helper or having callers differentiate "no user" from "user exists but email missing."

backend/convex/listings.ts (1)
232-243: Offset-based pagination on search results is unstable across data mutations.

The cursor is an integer offset into a `.collect()` result set. If listings are created, deleted, or change status between paginated requests, items will be skipped or duplicated. The inline comment (Lines 187-189) acknowledges the Convex search limitation, but this is worth tracking as a known issue for production.

No immediate fix needed since Convex search indexes don't support cursor-based pagination, but consider logging a TODO or tracking this limitation.
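The skip behavior is easy to demonstrate with a framework-free sketch (plain TypeScript; `pageByOffset` and the sample listings are illustrative, not the actual listings code):

```typescript
// Illustrative only: offset cursors over a mutable list skip items when an
// earlier item is deleted between page fetches.
function pageByOffset<T>(items: T[], offset: number, numItems: number) {
  return { page: items.slice(offset, offset + numItems), nextOffset: offset + numItems };
}

let listings = ['a', 'b', 'c', 'd'];
const first = pageByOffset(listings, 0, 2); // page: ['a', 'b']

// 'a' is deleted before the client requests the next page...
listings = listings.filter((l) => l !== 'a');

// ...so offset 2 now lands past 'c', which the client never sees.
const second = pageByOffset(listings, first.nextOffset, 2);
console.log(second.page); // ['d']
```

The symmetric failure (an insert before the cursor) duplicates an item instead of skipping one.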
backend/convex/__tests__/reports.test.ts (1)
151-178: Unnecessary `createTestUser` calls are dead code.

In several tests (lines 154, 183, 215, 235, 255, 279, 309-311, 362-363, 400-402, 654-656), `createTestUser` is called but the returned user ID is never referenced. Since `reporterId` is now derived from `identity.subject` (not a DB user ID), these calls have no effect on test outcomes and can be removed.
93-146: Consider verifying the final page and `isDone` flag.

The tag pagination test verifies 2 pages of 2 items each from 5 matching listings, but doesn't fetch the 3rd page to confirm the 5th item is returned and `isDone` becomes `true`. This would strengthen confidence in cursor termination. Optional improvement.
99-106: Hardcoded test count will go stale.

Line 103 states "All 109 tests passing" — this will become outdated as the test suite evolves. Consider replacing it with a generic statement like "All tests passing" or removing the count.
backend/convex/__tests__/messages.test.ts (1)
96-115: `setTimeout` for timestamp ordering can be fragile.

Lines 103, 452, and 482 use `setTimeout(resolve, 10)` to ensure timestamp differences. In fast test runners this could occasionally be insufficient. A more robust approach is to mock `Date.now()` or explicitly set timestamps when inserting test data.
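The mocking alternative can be sketched without a framework (plain stub-and-restore; in the actual Jest suite one would typically use `jest.spyOn(Date, 'now')` so restoration is automatic):

```typescript
// Sketch: deterministic timestamps by stubbing Date.now instead of sleeping.
const realNow = Date.now;
let fakeClock = 1_000;
Date.now = () => fakeClock;

const first = Date.now();
fakeClock += 10; // advance "time" explicitly; no setTimeout needed
const second = Date.now();

Date.now = realNow; // always restore the real clock
console.log(second - first); // 10
```

This removes the race entirely: the gap between timestamps is exact regardless of how fast the test runner executes.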
Actionable comments posted: 3
🤖 Fix all issues with AI agents
In `@backend/convex/listings.ts`:
- Around line 328-338: The pagination branch that runs when needsPostFiltering
is false uses query.filter to remove hidden listings after fetching, causing
scans of hidden items; update the index range used on the query (the withIndex
call that targets by_status_createdAt) to include isHidden:false in the range
expression (i.e., add an .eq('isHidden', false) alongside .eq('status','active')
in the withIndex) so the DB range excludes hidden listings, or alternatively
update the by_status_createdAt index definition to include isHidden so it can be
used in the range expression; ensure the change affects the same query path that
currently calls query.filter and paginate so pages contain the requested
numItems.
In `@frontend/app/index.tsx`:
- Around line 76-98: The useEffect updating listingsResult intentionally omits
cursor and queryFilterVersion from its dependency array but lacks an eslint
suppression comment; add a targeted eslint disable with an explanatory comment
(e.g. // eslint-disable-next-line react-hooks/exhaustive-deps -- cursor and
queryFilterVersion intentionally omitted to avoid re-running on cursor
transitions and to use render-time snapshot) immediately above the useEffect
dependency line for the effect that references listingsResult, cursor,
queryFilterVersion, processedCursorsRef, setAllListings, setIsDone and
setIsLoadingMore so CI stops flagging the missing dependencies while preserving
the intended behavior.
In `@frontend/app/listings/new.tsx`:
- Around line 36-46: The Alert.alert is non-blocking so
router.replace('/auth/login') runs immediately and the alert is never seen;
update the useEffect so the navigation happens inside the Alert button callback
or remove the alert entirely: either move the router.replace call into the
Alert.alert button onPress handler (inside the useEffect that contains useEffect
/ Alert.alert) to navigate only after the user taps OK, or delete the
Alert.alert call and keep the immediate router.replace in the useEffect if you
want a silent redirect.
🧹 Nitpick comments (7)
frontend/.gitignore (1)
7-8: LGTM! Build output correctly ignored.

The addition of `dist` to `.gitignore` aligns with the updated Expo export build process mentioned in the PR objectives.

Optional refinement: consider using `/dist` instead of `dist` to match only the root-level build directory and avoid accidentally ignoring any file or folder named "dist" elsewhere in the project tree.

frontend/app/index.tsx (1)
41-46: `currentFilterVersionRef` is redundant — a single ref plus a local variable suffices.

`filterVersionRef` and `currentFilterVersionRef` are always set to the same value (lines 103-104). The only distinction is that `queryFilterVersion` (line 65) snapshots `.current` at render time while the effect reads `filterVersionRef.current` at execution time. You can drop `currentFilterVersionRef` entirely and just capture `filterVersionRef.current` into `queryFilterVersion` directly — the behavior is identical.

Suggested simplification:
```diff
-  // Filter versioning to prevent stale results
-  const filterVersionRef = useRef(0);
-  const currentFilterVersionRef = useRef(0);
+  // Filter versioning to prevent stale results
+  const filterVersionRef = useRef(0);
```

Line 65:

```diff
-  const queryFilterVersion = currentFilterVersionRef.current;
+  const queryFilterVersion = filterVersionRef.current;
```

Lines 103-104:

```diff
   filterVersionRef.current += 1;
-  currentFilterVersionRef.current = filterVersionRef.current;
```

backend/convex/schema.ts (1)
20-21: `sellerEmail` kept as optional — verify cleanup intent.

The PR objectives state that `sellerEmail` is removed from shared types and the frontend, and `createListing` no longer accepts it. However, the schema still defines it as optional. If no code path ever writes this field going forward, consider removing it from the schema (or documenting it as a legacy/migration artifact) to avoid confusion.

backend/convex/listings.ts (4)
53-59: Empty descriptions are allowed — is that intentional?

`validateDescription` only checks the max length but has no minimum. An empty string (`""`) passes validation. The `title` validator enforces a minimum of 5 characters — consider whether descriptions should also have a minimum length.
190-203: Unbounded `collect()` on search results is a scalability risk.

The search path calls `.collect()` with no limit, meaning all matching documents are loaded into memory. While the comment acknowledges this limitation, consider adding a `.take()` ceiling (e.g., 10,000) as a safety net to prevent OOM on large datasets.

♻️ Suggested safeguard:
```diff
-  let results = await searchQuery.collect();
+  // Cap search results to prevent unbounded memory usage
+  const SEARCH_MAX = 10_000;
+  let results = await searchQuery.take(SEARCH_MAX);
```
232-243: Offset-based cursors are unstable across data mutations.

The manual pagination uses integer offsets (`startIndex`) as cursors. If listings are created or deleted between page fetches, users may see duplicated or skipped items. This is inherent to offset pagination but worth noting — for the search path it's unavoidable, but for the index-based post-filter path (Lines 364–381), a document-ID-based cursor would be more robust.

Not urgent since the `wasTruncated` safeguard is in place and the primary path uses Convex's native `.paginate()`, but keep this in mind if users report pagination glitches.

Also applies to: 364-381
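A document-ID cursor, as suggested for the index-based path, can be sketched like this (framework-free TypeScript; `pageAfter` and the `Doc` shape are hypothetical, not the actual listings code):

```typescript
// Hypothetical sketch: resume after the last-seen document id instead of a
// numeric offset, so deletions before the cursor do not shift the next page.
type Doc = { id: string; createdAt: number };

function pageAfter(docs: Doc[], afterId: string | null, numItems: number) {
  const sorted = [...docs].sort((a, b) => b.createdAt - a.createdAt);
  const start = afterId === null ? 0 : sorted.findIndex((d) => d.id === afterId) + 1;
  const page = sorted.slice(start, start + numItems);
  return { page, continueCursor: page.length > 0 ? page[page.length - 1].id : afterId };
}

let docs: Doc[] = [
  { id: 'a', createdAt: 3 },
  { id: 'b', createdAt: 2 },
  { id: 'c', createdAt: 1 },
];
const first = pageAfter(docs, null, 2); // page: a, b

// 'a' is deleted between requests; the id cursor still resumes correctly.
docs = docs.filter((d) => d.id !== 'a');
const second = pageAfter(docs, first.continueCursor, 2);
console.log(second.page.map((d) => d.id)); // ['c']
```

Unlike an integer offset, the resume point is anchored to a document rather than a position, which is the property Convex's native cursors also provide.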
341-381: Duplicate manual pagination logic — extract a helper.

The in-memory pagination pattern (take → filter → slice → compute cursor/isDone/wasTruncated) is duplicated between `searchAndFilterListings` (Lines 341–381) and `getListings` (Lines 451–484). Extracting a shared helper would reduce duplication and ensure future fixes apply in both places.

Also applies to: 451-484
```typescript
// If no post-filtering needed, use direct pagination (most efficient)
if (!needsPostFiltering) {
  const paginationResult = await query
    .filter((q) => q.neq(q.field('isHidden'), true))
    .paginate(args.paginationOpts);

  return {
    page: paginationResult.page,
    continueCursor: paginationResult.continueCursor,
    isDone: paginationResult.isDone,
  };
```
🧩 Analysis chain
🌐 Web query:
Convex paginate filter behavior numItems ordering
💡 Result:
In Convex, pagination is defined over the ordered, filtered query you build (conceptually: order(...) → filter(...) → paginate(...)). The cursor is only valid for that exact query definition (same table/index, same ordering, same filter logic). (docs.convex.dev)
numItems vs actual number of items returned
- `paginationOpts.numItems` is not a hard promise of how many items you’ll get back in a reactive query. (docs.convex.dev)
- It’s only an initial page size. After the first run, Convex may return more or fewer than `numItems` because it “pins” the page’s end position and then returns all items in the original query range to keep pages adjacent and non-overlapping under live updates (inserts/deletes). (docs.convex.dev)
Ordering rules (what pagination is “over”)
- By default, queries are ordered by the system field `_creationTime` (ascending unless you specify otherwise). (docs.convex.dev)
- If you use an index, results are ordered by the index key fields, and `order("asc" | "desc")` applies to that index ordering. (docs.convex.dev)
If you share the exact query shape you’re using (including withIndex/withSearchIndex, order, and filter), I can tell you precisely how numItems and cursor behavior will play out for it.
Citations:
- 1: https://docs.convex.dev/api/interfaces/server.QueryInitializer?utm_source=openai
- 2: https://docs.convex.dev/api/interfaces/server.Query?utm_source=openai
- 3: https://docs.convex.dev/api/interfaces/server.Query?utm_source=openai
- 4: https://docs.convex.dev/database/reading-data?utm_source=openai
- 5: https://docs.convex.dev/database/reading-data?utm_source=openai
🌐 Web query:
Convex filter before paginate index scan efficiency performance
💡 Result:
In Convex, a .filter() does not reduce what gets scanned—it only reduces what gets returned. Scan cost is determined by whether you’re doing a full table scan vs. an indexed scan and (for indexed scans) how tight your .withIndex(..., rangeExpr) is. (docs.convex.dev)
What happens with `filter().paginate(...)`:

- `paginate()` limits how many items come out of the pipeline (the page size). (docs.convex.dev)
- If your filter is selective, Convex may need to read many rows into the pipeline to find enough passing rows to fill the page; i.e., it can still scan lots of rows from the underlying scan/range. (docs.convex.dev)
Efficiency rule of thumb
- Use `.withIndex(...)` with the most specific equality/range constraints you can (this is what actually shrinks the scanned range). (docs.convex.dev)
- Then use `.filter(...)` only for predicates you can’t express in the index range. (docs.convex.dev)
- End with `.paginate(...)`/`.take(...)`/`.first()` to avoid scanning the entire range. (docs.convex.dev)
If filters are very selective
Consider setting/using `maximumRowsRead` for paginated queries to cap how much scanning happens “trying to fill” a page (otherwise you can paginate through a filter that matches rarely and still read a huge number of rows). (glama.ai)
Practical implication for your question
Putting a filter “before paginate” (i.e., ...filter(...).paginate(...)) is useful for correct page filling (you get numItems after filtering), but it does not make the underlying index scan more efficient unless the condition is moved into the .withIndex range expression (or a new/compound index is added). (docs.convex.dev)
Citations:
- 1: https://docs.convex.dev/database/reading-data/indexes/?utm_source=openai
- 2: https://docs.convex.dev/api/interfaces/server.QueryInitializer?utm_source=openai
- 3: https://docs.convex.dev/database/reading-data/indexes/?utm_source=openai
- 4: https://docs.convex.dev/database/reading-data/indexes/?utm_source=openai
- 5: https://docs.convex.dev/database/reading-data/indexes/?utm_source=openai
- 6: https://docs.convex.dev/database/reading-data/indexes/?utm_source=openai
- 7: https://glama.ai/mcp/servers/%40get-convex/convex-backend/blob/21ce1750222851a141a2bd1156665009bcbd4a8c/npm-packages/convex/src/server/pagination.ts?utm_source=openai
- 8: https://docs.convex.dev/understanding/best-practices?utm_source=openai
🏁 Script executed:
```shell
cat -n backend/convex/listings.ts | sed -n '320,360p'
```

Repository: codebox-calpoly/PolyBuys
Length of output: 1813
🏁 Script executed:
```shell
cat -n backend/convex/listings.ts | sed -n '430,460p'
```

Repository: codebox-calpoly/PolyBuys
Length of output: 1325
🏁 Script executed:
```shell
rg -t typescript "by_status_createdAt" backend/convex/
```

Repository: codebox-calpoly/PolyBuys
Length of output: 102
🏁 Script executed:
```shell
rg "by_status_createdAt" backend/convex/
```

Repository: codebox-calpoly/PolyBuys
Length of output: 454
Add `isHidden` to the index range expression to avoid scanning hidden listings.

The `isHidden` filter is not part of the `by_status_createdAt` index range expression, so Convex scans all active listings (regardless of hidden status) and then filters out hidden ones. If a significant fraction become hidden, pages may contain fewer items than the requested `numItems` because the filter reduces the result set. Instead, add `isHidden: false` to the index range:

```typescript
.withIndex('by_status_createdAt', (q) => q.eq('status', 'active').eq('isHidden', false))
```

Alternatively, add `isHidden` to the index definition itself so it can be used in range expressions.
🤖 Prompt for AI Agents
In `@backend/convex/listings.ts` around lines 328 - 338, The pagination branch
that runs when needsPostFiltering is false uses query.filter to remove hidden
listings after fetching, causing scans of hidden items; update the index range
used on the query (the withIndex call that targets by_status_createdAt) to
include isHidden:false in the range expression (i.e., add an .eq('isHidden',
false) alongside .eq('status','active') in the withIndex) so the DB range
excludes hidden listings, or alternatively update the by_status_createdAt index
definition to include isHidden so it can be used in the range expression; ensure
the change affects the same query path that currently calls query.filter and
paginate so pages contain the requested numItems.
```tsx
useEffect(() => {
  if (!isLoading && !isAuthenticated) {
    Alert.alert(
      'Sign In Required',
      'Please sign in to create a listing',
      [{ text: 'OK', onPress: () => {} }],
      { cancelable: false }
    );
    router.replace('/auth/login');
  }
}, [isLoading, isAuthenticated, router]);
```
Alert may never be seen — router.replace fires immediately.
`Alert.alert` is non-blocking; `router.replace('/auth/login')` on line 44 executes immediately after, navigating away before the user can read (or tap) the alert. The alert either flashes momentarily or is dismissed by the navigation.
If the intent is to inform the user, either wait for the alert button press to navigate, or drop the alert entirely since the redirect is the primary action.
Option A: Navigate only on button press (simpler)
```diff
 useEffect(() => {
   if (!isLoading && !isAuthenticated) {
     Alert.alert(
       'Sign In Required',
       'Please sign in to create a listing',
-      [{ text: 'OK', onPress: () => {} }],
+      [{ text: 'OK', onPress: () => router.replace('/auth/login') }],
       { cancelable: false }
     );
-    router.replace('/auth/login');
   }
 }, [isLoading, isAuthenticated, router]);
```

Option B: Remove the alert, just redirect
```diff
 useEffect(() => {
   if (!isLoading && !isAuthenticated) {
-    Alert.alert(
-      'Sign In Required',
-      'Please sign in to create a listing',
-      [{ text: 'OK', onPress: () => {} }],
-      { cancelable: false }
-    );
     router.replace('/auth/login');
   }
 }, [isLoading, isAuthenticated, router]);
```

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```tsx
useEffect(() => {
  if (!isLoading && !isAuthenticated) {
    Alert.alert(
      'Sign In Required',
      'Please sign in to create a listing',
      [{ text: 'OK', onPress: () => router.replace('/auth/login') }],
      { cancelable: false }
    );
  }
}, [isLoading, isAuthenticated, router]);
```
🤖 Prompt for AI Agents
In `@frontend/app/listings/new.tsx` around lines 36 - 46, The Alert.alert is
non-blocking so router.replace('/auth/login') runs immediately and the alert is
never seen; update the useEffect so the navigation happens inside the Alert
button callback or remove the alert entirely: either move the router.replace
call into the Alert.alert button onPress handler (inside the useEffect that
contains useEffect / Alert.alert) to navigate only after the user taps OK, or
delete the Alert.alert call and keep the immediate router.replace in the
useEffect if you want a silent redirect.
Actionable comments posted: 2
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
backend/convex/listings.ts (1)
187-243: ⚠️ Potential issue | 🟠 Major

Unbounded `collect()` on search results is a scalability risk.

`searchQuery.collect()` at Line 203 fetches all matching documents into memory. For a popular search term matching thousands of listings, this will consume significant memory and may hit Convex function limits. Consider applying `.take(MAX_COLLECT)` here as well (consistent with the non-search path) and signaling to the client when results are truncated.

♻️ Suggested mitigation

```diff
-  let results = await searchQuery.collect();
+  const MAX_SEARCH_RESULTS = 1000;
+  let results = await searchQuery.take(MAX_SEARCH_RESULTS);
```
🤖 Fix all issues with AI agents
In `@backend/convex/listings.ts`:
- Around line 45-51: validateTitle currently only checks title.length so a
whitespace-only string like " " can pass; update validateTitle to trim the
input (or check for non-whitespace content) before validating length and throw
the same ConvexError if the trimmed title is empty or outside
PAYLOAD_BOUNDS.TITLE_MIN/TITLE_MAX. Ensure you reference and validate the
trimmed value when computing length and in the error message produced by
validateTitle.
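The trim-first validation the prompt describes can be sketched in isolation. This is a minimal illustration, not the repo's code: the real `validateTitle` throws `ConvexError` and reads its bounds from `PAYLOAD_BOUNDS`, so the plain `Error` and the numeric bounds below are stand-in assumptions.

```typescript
// Sketch: validate the trimmed title so whitespace-only input fails.
// Bounds are illustrative; the real code uses PAYLOAD_BOUNDS.TITLE_MIN/TITLE_MAX
// and throws ConvexError instead of Error.
const TITLE_MIN = 3;   // assumed bound
const TITLE_MAX = 100; // assumed bound

function validateTitle(title: string): string {
  const trimmed = title.trim();
  if (trimmed.length < TITLE_MIN || trimmed.length > TITLE_MAX) {
    throw new Error(
      `Title must be between ${TITLE_MIN} and ${TITLE_MAX} characters`
    );
  }
  // Return the trimmed value so callers persist what was validated
  return trimmed;
}
```

Returning the trimmed string (rather than just validating it) is what lets `createListing` store the same value that passed the check.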
In `@backend/convex/schema.ts`:
- Around line 8-16: The overridden users table defined via defineTable currently
omits Convex Auth required fields and uses the wrong index name; update the
users table schema in the users definition to include image,
emailVerificationTime, phoneVerificationTime, and isAnonymous (with the same
types Convex Auth expects) and rename the index 'by_email' to 'email' so Convex
Auth lookups use the correct index; ensure the existing optional fields (name,
email, phone, emailVerified, createdAt) remain and that the .index calls include
.index('phone', ['phone']) and .index('email', ['email']).
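For reference, a sketch of what the corrected `users` table could look like. Field names follow the users table documented by `@convex-dev/auth`; verify types against the installed version before adopting, and note the app-specific fields are carried over from the current schema as assumptions.

```typescript
// Sketch only: users table aligned with Convex Auth's expected fields and
// index names ('email' / 'phone' rather than 'by_email').
import { defineSchema, defineTable } from 'convex/server';
import { v } from 'convex/values';

export default defineSchema({
  users: defineTable({
    name: v.optional(v.string()),
    image: v.optional(v.string()),
    email: v.optional(v.string()),
    emailVerificationTime: v.optional(v.number()),
    phone: v.optional(v.string()),
    phoneVerificationTime: v.optional(v.number()),
    isAnonymous: v.optional(v.boolean()),
    // App-specific extras retained from the existing schema
    emailVerified: v.optional(v.boolean()),
    createdAt: v.optional(v.number()),
  })
    .index('email', ['email'])
    .index('phone', ['phone']),
  // ...other tables (listings, profiles, remaining authTables, etc.)
});
```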
🧹 Nitpick comments (4)
frontend/app/index.tsx (1)
41-44: Nit: `filterVersionRef` and `currentFilterVersionRef` are always identical — one ref suffices.

Both refs are set to the same value on lines 103-104 and never diverge. The stale-result guard works because `queryFilterVersion` captures the render-time snapshot while the single ref holds the latest version. A second ref adds indirection without benefit.

Proposed simplification

```diff
- // Filter versioning to prevent stale results
- const filterVersionRef = useRef(0);
- const currentFilterVersionRef = useRef(0);
+ // Filter versioning to prevent stale results
+ const filterVersionRef = useRef(0);
```

Line 65:

```diff
- const queryFilterVersion = currentFilterVersionRef.current;
+ const queryFilterVersion = filterVersionRef.current;
```

Lines 103-104:

```diff
  filterVersionRef.current += 1;
- currentFilterVersionRef.current = filterVersionRef.current;
```

backend/convex/listings.ts (1)
497-531: `sellerEmail` is never set in `createListing` but remains in the schema as optional.

The `createListing` mutation no longer writes `sellerEmail` to the inserted document (Lines 518–530), yet the schema still defines it as `v.optional(v.string())`. If no other code path populates it, this is a dead field. Consider removing it from the schema to avoid confusion, or, if it's needed for migration compatibility, add a comment explaining the plan.

backend/convex/users.ts (2)
82-110: `getOrCreateUser` never creates — the name is misleading.

This mutation throws if the user doesn't exist (Lines 92–94) instead of creating one. Since `authTables` manages user creation during the auth flow, this function only normalizes an existing record. Consider renaming to `ensureUserNormalized` or `syncUserProfile` to match the actual behavior and avoid confusion.
61-66: `updateUserProfile` writes `email`, `createdAt`, and `emailVerified` on every name change.

Patching these fields is redundant when the user only wants to update their name. The normalization write (lowercasing email) is a useful side effect on the first call, but on subsequent calls it's wasted work. Consider only patching the fields the caller actually changed, plus a one-time normalization if needed.
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (2)
backend/convex/listings.ts (1)
192-249: ⚠️ Potential issue | 🟠 Major

Unbounded `collect()` on search results risks OOM for popular search terms.

Line 208 does `searchQuery.collect()` with no limit. If a search term matches thousands of listings, all are loaded into memory on every reactive query evaluation (and re-collected for each paginated page). Consider adding `.take(MAX_COLLECT)` to cap memory usage, similar to the non-search path.

Proposed fix

```diff
+  const MAX_SEARCH_COLLECT = 1000;
   // For search queries, we must collect since search indexes don't support paginate()
-  let results = await searchQuery.collect();
+  let results = await searchQuery.take(MAX_SEARCH_COLLECT);
```

backend/convex/__tests__/listings.test.ts (1)

374-384: ⚠️ Potential issue | 🟡 Minor

Test exercises `createListing` instead of `updateListing` — copy-paste bug.

The test is named "updateListing fails when tag is >20 characters long" but it only calls `createListing` (Line 379), never `updateListing`. It should first create a listing with valid tags, then attempt to update it with an oversized tag.

Proposed fix

```diff
 it('updateListing fails when tag is >20 characters long', async () => {
   const t = convexTest(schema as any, modules);
   const asOwner = t.withIdentity(ownerIdentity);
+  const listingId = await asOwner.mutation(api.listings.createListing, baseArgs);
+
   await expect(async () => {
-    await asOwner.mutation(api.listings.createListing, {
-      ...baseArgs,
+    await asOwner.mutation(api.listings.updateListing, {
+      id: listingId,
       tags: ['supercalifragilisticexpialidocious'],
     });
   }).rejects.toThrowError('Tags must be 20 characters or less');
 });
```
🧹 Nitpick comments (3)
backend/convex/listings.ts (3)
58-64: `validateDescription` doesn't trim, unlike `validateTitle`.

`validateTitle` trims and validates the trimmed length, but `validateDescription` validates the raw length and doesn't trim. A whitespace-padded description could pass the length check but contain less meaningful content. In `createListing` (Line 525), `args.description` is stored as-is rather than trimmed.

This is a minor inconsistency — consider trimming for parity or documenting why descriptions are treated differently.
254-258: `needsPostFiltering` logic has a subtle gap with `maxPrice` + price-sorted queries.

When `sortBy` is `price_asc` or `price_desc` and `maxPrice` is set, `needsPostFiltering` is `true` (because `maxPrice !== undefined`). This forces the query into the `take(MAX_COLLECT)` path even though you could potentially use the price index's range to apply an upper bound (`.lte('price', maxPrice)`) directly. This would allow database-level pagination instead of in-memory post-filtering.

Not a bug, but a missed optimization opportunity for a common filter combination.
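One way to close that gap can be sketched as follows. This assumes the listings table has a price index defined as `.index('by_price', ['price'])` — the index name and surrounding argument shapes are illustrative, not the repo's actual code:

```typescript
// Sketch: push the maxPrice bound into the index range so Convex paginates at
// the database level instead of post-filtering in memory. Assumes an index
// .index('by_price', ['price']) exists on the listings table.
const page = await ctx.db
  .query('listings')
  .withIndex('by_price', (q) => q.lte('price', args.maxPrice!))
  .order(args.sortBy === 'price_desc' ? 'desc' : 'asc')
  .paginate(args.paginationOpts);
```

Because the index is ordered by `price`, ordering over it yields the price sort for free, and the `lte` range bound replaces the post-filter for `maxPrice`.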
634-647: Legacy `searchListings` has no collection limit and no deprecation marker.

This endpoint calls `.collect()` without any bound, and unlike `searchAndFilterListings`, it has no pagination. Consider either:

- Adding `.take(MAX_COLLECT)` as a safety bound, or
- Marking it as `@deprecated` or removing it if `searchAndFilterListings` fully replaces it.
Actionable comments posted: 2
🤖 Fix all issues with AI agents
In `@backend/convex/listings.ts`:
- Around line 238-250: The cursor parsing currently uses
parseInt(args.paginationOpts.cursor, 10) which accepts negative numbers and
yields a negative startIndex that makes results.slice count from the end; fix by
parsing and normalizing the cursor to a non-negative integer (e.g., const parsed
= Number.parseInt(args.paginationOpts.cursor, 10); const startIndex =
Number.isNaN(parsed) ? 0 : Math.max(0, parsed);) so startIndex is always >= 0,
and use that startIndex for paginatedResults, hasMore, and nextCursor; apply the
exact same normalization to the other two cursor-parsing sites that use
args.paginationOpts.cursor and results.slice.
- Around line 509-516: The extra check for identity.email inside the handler is
unnecessary because identity.email isn't used when creating a listing (you use
identity.subject as sellerId); either remove the `if (!identity.email)` block
entirely, or replace it with an explicit profile-existence check (e.g., query
the profiles table for `identity.subject`) so the intent—"user must have a
created profile"—is enforced; update the thrown error message accordingly and
keep references to `identity.subject`/`sellerId` in the listing creation logic.
🧹 Nitpick comments (3)
backend/convex/listings.ts (3)
58-64: Description is not trimmed, unlike title.

`validateTitle` trims input before validation and returns the trimmed result, but `validateDescription` validates `description.length` on the raw input. Leading/trailing whitespace in descriptions would count toward the limit and be persisted. Consider trimming for consistency.

♻️ Suggested change

```diff
-function validateDescription(description: string) {
-  if (description.length > PAYLOAD_BOUNDS.DESCRIPTION_MAX) {
+function validateDescription(description: string): string {
+  const trimmed = description.trim();
+  if (trimmed.length > PAYLOAD_BOUNDS.DESCRIPTION_MAX) {
     throw new ConvexError(
       `Description must be ${PAYLOAD_BOUNDS.DESCRIPTION_MAX} characters or less`
     );
   }
+  return trimmed;
 }
```
530-542: `args.description` stored raw while `args.title` uses the validated (trimmed) value.

Line 531 uses `validatedTitle` (trimmed), but Line 532 persists `args.description` without trimming. If `validateDescription` is updated to return a trimmed value, use it here.
667-672: `getMyHiddenListings` uses a full table scan via `.filter().collect()`.

This query filters on `sellerId` and `isHidden` without an index, scanning all listings. For a small dataset this is fine, but if listings grow, consider adding a compound index (e.g., `by_seller_isHidden`).
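The compound-index suggestion above can be sketched concretely. Both the index name and the query shape below are illustrative assumptions, not existing repo code:

```typescript
// Sketch: in schema.ts, add a compound index on the listings table:
//   listings: defineTable({ ... }).index('by_seller_isHidden', ['sellerId', 'isHidden'])
//
// getMyHiddenListings can then read only the matching rows instead of
// scanning the whole table:
const hidden = await ctx.db
  .query('listings')
  .withIndex('by_seller_isHidden', (q) =>
    q.eq('sellerId', identity.subject).eq('isHidden', true)
  )
  .collect();
```

The equality chain matches the index's field order (`sellerId`, then `isHidden`), which is what lets Convex serve the query from the index range.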
```ts
// Manual pagination for search results
const cursor = args.paginationOpts.cursor ? parseInt(args.paginationOpts.cursor, 10) : 0;
const startIndex = isNaN(cursor) ? 0 : cursor;
const paginatedResults = results.slice(startIndex, startIndex + args.paginationOpts.numItems);
const hasMore = startIndex + args.paginationOpts.numItems < results.length;
const nextCursor = hasMore ? String(startIndex + args.paginationOpts.numItems) : null;

return {
  page: paginatedResults,
  continueCursor: nextCursor,
  isDone: !hasMore,
};
```
Negative or non-numeric cursor values can cause unexpected pagination behavior.
parseInt(args.paginationOpts.cursor, 10) silently accepts negative strings like "-5". A negative startIndex fed to Array.slice counts from the end, returning wrong results. The same issue exists at Lines 378–379 and 480–481.
🛡️ Proposed fix (apply to all three cursor-parsing sites)
```diff
- const cursor = args.paginationOpts.cursor ? parseInt(args.paginationOpts.cursor, 10) : 0;
- const startIndex = isNaN(cursor) ? 0 : cursor;
+ const cursor = args.paginationOpts.cursor ? parseInt(args.paginationOpts.cursor, 10) : 0;
+ const startIndex = isNaN(cursor) || cursor < 0 ? 0 : cursor;
```

🤖 Prompt for AI Agents
In `@backend/convex/listings.ts` around lines 238 - 250, The cursor parsing
currently uses parseInt(args.paginationOpts.cursor, 10) which accepts negative
numbers and yields a negative startIndex that makes results.slice count from the
end; fix by parsing and normalizing the cursor to a non-negative integer (e.g.,
const parsed = Number.parseInt(args.paginationOpts.cursor, 10); const startIndex
= Number.isNaN(parsed) ? 0 : Math.max(0, parsed);) so startIndex is always >= 0,
and use that startIndex for paginatedResults, hasMore, and nextCursor; apply the
exact same normalization to the other two cursor-parsing sites that use
args.paginationOpts.cursor and results.slice.
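The normalization described above can be factored into one helper shared by all three cursor-parsing sites. The helper name is hypothetical, but it implements exactly the clamping the fix calls for:

```typescript
// Hypothetical helper: parse an opaque offset cursor into a safe start index.
// Missing, non-numeric, and negative cursors all normalize to 0, so
// Array.prototype.slice never receives a negative index (which would count
// from the end of the results array).
function parseCursor(cursor: string | null | undefined): number {
  if (!cursor) return 0;
  const parsed = Number.parseInt(cursor, 10);
  return Number.isNaN(parsed) ? 0 : Math.max(0, parsed);
}
```

Each site would then reduce to `const startIndex = parseCursor(args.paginationOpts.cursor);`.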
```diff
 handler: async (ctx, args) => {
   const identity = await ctx.auth.getUserIdentity();
   if (!identity) {
-    throw new Error('You must be logged in to create a listing');
+    throw new ConvexError('You must be logged in to create a listing');
   }
+  if (!identity.email) {
+    throw new ConvexError('Authenticated user email is required to create a listing');
+  }
```
🧩 Analysis chain
🏁 Script executed:

```shell
rg -n "identity\.email" backend/convex/
```

Repository: codebox-calpoly/PolyBuys
Length of output: 267

🏁 Script executed:

```shell
sed -n '505,540p' backend/convex/listings.ts
```

Repository: codebox-calpoly/PolyBuys
Length of output: 1281

🏁 Script executed:

```shell
rg -n "notification" backend/convex/ -i --max-count=20
```

Repository: codebox-calpoly/PolyBuys
Length of output: 50

🏁 Script executed:

```shell
rg -n "email" backend/convex/ -B 2 -A 2 | head -60
```

Repository: codebox-calpoly/PolyBuys
Length of output: 3146

🏁 Script executed:

```shell
sed -n '70,90p' backend/convex/schema.ts
```

Repository: codebox-calpoly/PolyBuys
Length of output: 734

🏁 Script executed:

```shell
sed -n '75,110p' backend/convex/profiles.ts
```

Repository: codebox-calpoly/PolyBuys
Length of output: 1375
Remove the unnecessary identity.email check or clarify its purpose.
The check at line 514 blocks users without an email claim, but identity.email is never used in the listing creation—only identity.subject is stored as sellerId. Email is a required field in the profiles table and validated during profile creation, not listing creation. If this check exists as a pre-condition ensuring users have completed profile setup before listing, that intent should be explicitly enforced; otherwise, remove the validation.
🤖 Prompt for AI Agents
In `@backend/convex/listings.ts` around lines 509 - 516, The extra check for
identity.email inside the handler is unnecessary because identity.email isn't
used when creating a listing (you use identity.subject as sellerId); either
remove the `if (!identity.email)` block entirely, or replace it with an explicit
profile-existence check (e.g., query the profiles table for `identity.subject`)
so the intent—"user must have a created profile"—is enforced; update the thrown
error message accordingly and keep references to `identity.subject`/`sellerId`
in the listing creation logic.
Summary
Adding tests and updating schema
How to Test
Scope
Replaces custom OTP auth wiring with Convex Auth routes (auth.addHttpRoutes) and removes legacy OTP files (otpAuth.ts, email.ts, crypto.ts).
Migrates identity usage to Convex Auth subjects (string) across listings/profiles/reports/messages/conversations.
Adds Convex Auth tables to schema (authSessions, authAccounts, authRefreshTokens, authVerificationCodes, authVerifiers, authRateLimits).
Removes sellerEmail from shared listing types and frontend create/detail UI flow; listing creation now relies on authenticated identity.
Refactors listings queries to pagination object shape (paginationOpts) and adds richer indexed search/filter/sort behavior.
Adds payload validation and consistent ConvexError usage in listings/messages/profiles.
Tightens visibility/security rules (non-owners only see active + non-hidden listings; conversation creation blocked for inactive/hidden listings).
Sanitizes profile responses to avoid exposing PII fields publicly.
Refactors users API to normalize and return auth-backed user records; removes checkEmailExists.
Refactors reports to use identity.subject directly for reporterId.
Adds large backend test coverage expansion: new suites for messages, users, profiles, listings-pagination, plus shared test utilities.
Updates Jest config/setup for auth mocking and broader coverage collection.
Updates CI to verify frontend web export build.
Updates docs and env guidance, including new schema migration doc.
Removes unused backend deps (bcryptjs, @types/bcryptjs) and updates frontend build script to real Expo export.
Checklist
(`npm run lint`) · `dev`

Screenshots / Demos
(if UI or visible behavior - attach images, videos, or GIFs)
Summary by CodeRabbit
New Features
Changes
Documentation
Tests