job page integrated #132
Conversation
📝 Walkthrough

The PR integrates matched jobs from the backend into the frontend job listing UI. Backend changes add job URL extraction and include full job metadata in the match response.
Sequence Diagram

```mermaid
sequenceDiagram
    participant User
    participant JobPage
    participant JobService
    participant Backend
    participant JobCard
    User->>JobPage: Page loads with resume
    JobPage->>JobService: fetchMatchedJobs()
    JobService->>Backend: POST /jobs/match
    Backend->>Backend: Extract job URLs, build jobsMap
    Backend-->>JobService: Response with job metadata + URL
    JobService->>JobService: Map to Job[] objects
    JobService-->>JobPage: Resolved jobs array
    JobPage->>JobPage: Update jobs state, isLoading=false
    JobPage->>JobCard: Render with job={job}
    User->>JobCard: Click job
    JobCard->>JobPage: handleJobClick(job)
    JobPage->>JobPage: Open job.jobUrl in new tab
    JobPage->>JobPage: Navigate to job detail
```
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~50 minutes
🚥 Pre-merge checks
❌ Failed checks (1 warning, 1 inconclusive)
✅ Passed checks (1 passed)
@coderabbitai post your findings in actionable comments for improving latency
```ts
const loadJobs = async () => {
  setIsLoading(true);
  setError(null);
```
🐢 Latency: Sequential waterfall — resume check blocks job fetch
The two useEffect hooks create a serial waterfall: the page first resolves checkResume() and sets hasResume, then a second render cycle triggers fetchMatchedJobs. This adds one full network round-trip of latency before jobs are fetched.
Suggestion: Collapse both into a single useEffect and run them in parallel with Promise.all:
```ts
useEffect(() => {
  const controller = new AbortController();
  const init = async () => {
    setIsCheckingResume(true);
    setIsLoading(true);
    try {
      const [resumeExists, matchedJobs] = await Promise.all([
        checkResumeExists(fetchRef.current),
        fetchMatchedJobs(fetchRef.current, { signal: controller.signal }),
      ]);
      setHasResume(resumeExists);
      if (resumeExists) setJobs(matchedJobs);
    } catch (err) {
      if ((err as Error).name !== 'AbortError') setError((err as Error).message);
    } finally {
      setIsCheckingResume(false);
      setIsLoading(false);
    }
  };
  init();
  return () => controller.abort();
}, []);
```

This saves one full RTT on every page load.
```ts
 * Fetch matched jobs from the backend
 * Returns ranked jobs with match scores based on user's resume
 */
export const fetchMatchedJobs = async (
```
🐢 Latency: No client-side caching — full AI pipeline re-runs on every page visit
fetchMatchedJobs fires POST /jobs/match on every JobsPage mount, even if the user's resume hasn't changed. Since the backend runs the full AI-matching pipeline each time, users pay the full latency penalty on every navigation back to the page.
Suggestion: Cache results in sessionStorage keyed by a resume fingerprint, with a short TTL:
```ts
const CACHE_KEY = 'cvpilot_matched_jobs';
const CACHE_TTL_MS = 5 * 60 * 1000; // 5 min

export const fetchMatchedJobs = async (
  fetchWithAuth,
  { forceRefresh = false, signal }: { forceRefresh?: boolean; signal?: AbortSignal } = {}
): Promise<Job[]> => {
  if (!forceRefresh) {
    const cached = sessionStorage.getItem(CACHE_KEY);
    if (cached) {
      const { jobs, ts } = JSON.parse(cached);
      if (Date.now() - ts < CACHE_TTL_MS) return jobs;
    }
  }
  // ... existing fetch logic ...
  sessionStorage.setItem(CACHE_KEY, JSON.stringify({ jobs, ts: Date.now() }));
  return jobs;
};
```

Pass `forceRefresh: true` only on manual Refetch Jobs clicks.
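The sketch above uses a single static cache key; the "resume fingerprint" keying it mentions could be added with a small helper like the following. This is illustrative only: `resumeFingerprint`, `cacheKeyFor`, and the FNV-1a hash choice are assumptions, not code from this PR.

```typescript
// Hypothetical helper: derive a short, stable fingerprint from the resume
// text so the sessionStorage cache invalidates when the resume changes.
// FNV-1a (32-bit) is chosen only for illustration; any stable hash works.
const resumeFingerprint = (resumeText: string): string => {
  let hash = 0x811c9dc5; // FNV-1a 32-bit offset basis
  for (let i = 0; i < resumeText.length; i++) {
    hash ^= resumeText.charCodeAt(i);
    hash = Math.imul(hash, 0x01000193) >>> 0; // multiply by FNV prime, keep unsigned
  }
  return hash.toString(16).padStart(8, "0");
};

// Cache key embedding the fingerprint (prefix is illustrative).
const cacheKeyFor = (resumeText: string): string =>
  `cvpilot_matched_jobs:${resumeFingerprint(resumeText)}`;
```

With this, an edited resume naturally misses the old cache entry instead of serving stale matches until the TTL expires.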
```ts
} catch (err) {
  const errorMessage =
    err instanceof Error ? err.message : "Failed to fetch jobs";
  setError(errorMessage);
```
🐢 Latency / Race-condition: Stale in-flight requests are never cancelled
If the user navigates away while fetchMatchedJobs is in flight, the response arrives and setJobs is called on an unmounted component (React warning + potential state corruption). If they navigate back quickly, two concurrent matching pipelines race to update state.
Suggestion: Wire an AbortController into the effect cleanup and forward its signal to the fetch:
```ts
useEffect(() => {
  if (!hasResume || isCheckingResume) return;
  const controller = new AbortController();
  const loadJobs = async () => {
    setIsLoading(true);
    try {
      const matchedJobs = await fetchMatchedJobs(fetchRef.current, {
        signal: controller.signal,
      });
      if (!controller.signal.aborted) setJobs(matchedJobs);
    } catch (err) {
      if (!controller.signal.aborted && (err as Error).name !== 'AbortError')
        setError((err as Error).message);
    } finally {
      if (!controller.signal.aborted) setIsLoading(false);
    }
  };
  loadJobs();
  return () => controller.abort();
}, [hasResume, isCheckingResume]);
```

And update fetchMatchedJobs to accept and forward a signal option to fetchWithAuth.
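That signal-forwarding change might look like the sketch below. The `FetchWithAuth` and response shapes are narrowed assumptions for illustration; the project's real `fetchWithAuth` signature and `Job` type may differ.

```typescript
// Assumed minimal shapes; the real codebase's types may differ.
type MatchResponse = { ok: boolean; status: number; json(): Promise<any> };
type FetchWithAuth = (
  url: string,
  init?: { method?: string; signal?: AbortSignal }
) => Promise<MatchResponse>;

const fetchMatchedJobs = async (
  fetchWithAuth: FetchWithAuth,
  { signal }: { signal?: AbortSignal } = {}
): Promise<any[]> => {
  // Forward the caller's AbortSignal so effect cleanup cancels the request.
  const res = await fetchWithAuth("/jobs/match", { method: "POST", signal });
  if (!res.ok) throw new Error(`Failed to fetch jobs: ${res.status}`);
  const data = await res.json();
  return data.results ?? [];
};
```

When the component unmounts, `controller.abort()` makes the underlying fetch reject with an `AbortError`, which the effect above deliberately swallows.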
```ts
  duration,
});

// Create a map for quick job lookup
```
🐢 Latency: No server-side caching — full AI matching pipeline runs on every request
Every POST to /jobs/match runs the complete job-fetch + AI-scoring pipeline from scratch. For the same user with an unchanged resume this is pure redundant work and is almost certainly the dominant latency source (potentially several seconds per call).
Suggestion: Cache results server-side (Redis recommended; in-memory LRU as a quick win), keyed by userId + resumeHash, with a short TTL:
```ts
import { createHash } from 'crypto';

// Inside matchJobsHandler, after resolving userId and resumeText:
const resumeHash = createHash('sha256').update(resumeText).digest('hex').slice(0, 16);
const cacheKey = `jobs:${userId}:${resumeHash}`;

const cached = await cache.get(cacheKey);
if (cached) return res.json(JSON.parse(cached));

// ... run full pipeline ...
await cache.set(cacheKey, JSON.stringify(responsePayload), { ttl: 600 }); // 10 min
res.json(responsePayload);
```

This can reduce p50 latency from seconds to single-digit milliseconds for repeat visits with the same resume.
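For the in-memory "quick win" variant, the `cache.get`/`cache.set` calls above could be backed by a tiny TTL map like this sketch. `TtlCache` is a name invented here, and a production version would want a size bound or LRU eviction on top; Redis remains the right choice once multiple backend instances are involved.

```typescript
// Minimal in-memory TTL cache, the quick-win alternative to Redis.
// The clock is injectable so expiry is testable; it defaults to Date.now.
class TtlCache<V> {
  private store = new Map<string, { value: V; expiresAt: number }>();
  constructor(private now: () => number = Date.now) {}

  get(key: string): V | undefined {
    const entry = this.store.get(key);
    if (!entry) return undefined;
    if (this.now() >= entry.expiresAt) {
      this.store.delete(key); // lazily evict expired entries on read
      return undefined;
    }
    return entry.value;
  }

  set(key: string, value: V, ttlMs: number): void {
    this.store.set(key, { value, expiresAt: this.now() + ttlMs });
  }
}
```

Unlike Redis, this cache is per-process: entries vanish on restart and are not shared across instances, which is acceptable for a latency optimization but not for correctness-critical state.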
```ts
// TODO: Implement API call to refetch jobs based on current resume
// This would call the backend to re-run the matching algorithm
setTimeout(() => {
  setError(null);
```
🐢 Latency: Refetch button lacks an early-exit guard against concurrent pipeline calls
While isRefetching is set to true during a request, the button's disabled prop is not tied to isLoading, so overlapping calls from rapid clicks can trigger multiple concurrent POST /jobs/match pipeline runs.
Suggestion: Add an early-exit guard and disable the button across both loading states:
```ts
const handleRefetchJobs = async () => {
  if (isRefetching || isLoading) return; // guard concurrent calls
  setIsRefetching(true);
  // ...existing logic...
};
```

```tsx
<Button
  onClick={handleRefetchJobs}
  disabled={isRefetching || isLoading}
>
  {isRefetching ? 'Refreshing…' : 'Refetch Jobs'}
</Button>
```
The highest-priority fix is #4 (server-side caching) — caching per …
Actionable comments posted: 3
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@backend/controller/jobsController.ts`:
- Around line 230-245: The mapping over finalResults currently creates
placeholder records when jobsMap lacks a job for a MatchResult.jobId, leaking
invalid IDs into the API; change the pipeline to first filter finalResults to
only entries whose job exists in jobsMap (e.g., jobsMap.has(r.jobId) or check
that jobsMap.get(r.jobId) is non-null) and then take the first 5 (apply slice
after filtering) before mapping to the response shape; keep the same fields
(title, company, location, description, skills, jobUrl, score, confidence,
reason) and use extractJobUrl(job.rawData) on the validated job.
In `@frontend/src/pages/JobPage.tsx`:
- Around line 115-118: The handleJobClick handler currently calls
window.open(job.jobUrl, "_blank") which leaves window.opener intact; update the
call in the handleJobClick function to pass the third argument with
"noopener,noreferrer" (e.g., window.open(job.jobUrl, "_blank",
"noopener,noreferrer")) so external job URLs (job.jobUrl) are opened without
allowing reverse-tabnabbing via window.opener.
In `@frontend/src/pages/SavedPage.tsx`:
- Around line 20-23: handleJobClick opens job.jobUrl using window.open without
severing the opener reference; update the call in handleJobClick to prevent
window.opener access by adding the noopener and noreferrer flags (e.g. pass
"noopener,noreferrer" as the third argument to window.open) or replace the open
with creating an <a> element that sets rel="noopener noreferrer" before
programmatically clicking it; modify the window.open/job.jobUrl usage so the new
tab cannot access or redirect the original page.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Path: .coderabbit.yaml
Review profile: ASSERTIVE
Plan: Pro
Run ID: 7c9e65ce-93cf-458c-bbe4-93e10e9c70f1
📒 Files selected for processing (7)

- backend/controller/jobsController.ts
- frontend/src/components/Jobs/JobCard.tsx
- frontend/src/components/Jobs/JobList.tsx
- frontend/src/data/MockJobs.ts
- frontend/src/pages/JobPage.tsx
- frontend/src/pages/SavedPage.tsx
- frontend/src/services/jobs.ts
```ts
results: finalResults.slice(0, 5).map((r) => {
  const job = jobsMap.get(r.jobId);
  const jobUrl = extractJobUrl(job?.rawData as any);
  return {
    jobId: r.jobId,
    title: job?.title || "Job Title",
    company: job?.company || "Company",
    location: job?.location || "Remote",
    description: job?.description || "",
    skills: job?.skills || [],
    jobUrl,
    score: r.score,
    confidence: r.confidence,
    reason: r.reason,
  };
}),
```
Filter out match results whose jobId is missing from jobsMap.
Line 231 currently turns an unknown matcher ID into a fake "Job Title"/"Company" record, so invalid MatchResult.jobId values leak into the API response instead of being dropped. As per coding guidelines, backend reviews must flag "Incorrect logic or data handling (wrong business logic, data corruption risk, incorrect query behavior)".
Suggested fix
```diff
- results: finalResults.slice(0, 5).map((r) => {
-   const job = jobsMap.get(r.jobId);
-   const jobUrl = extractJobUrl(job?.rawData as any);
-   return {
-     jobId: r.jobId,
-     title: job?.title || "Job Title",
-     company: job?.company || "Company",
-     location: job?.location || "Remote",
-     description: job?.description || "",
-     skills: job?.skills || [],
-     jobUrl,
-     score: r.score,
-     confidence: r.confidence,
-     reason: r.reason,
-   };
- }),
+ results: finalResults
+   .filter((r) => jobsMap.has(r.jobId))
+   .slice(0, 5)
+   .map((r) => {
+     const job = jobsMap.get(r.jobId)!;
+     return {
+       jobId: r.jobId,
+       title: job.title,
+       company: job.company,
+       location: job.location,
+       description: job.description,
+       skills: job.skills || [],
+       jobUrl: extractJobUrl(job.rawData as any),
+       score: r.score,
+       confidence: r.confidence,
+       reason: r.reason,
+     };
+   }),
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@backend/controller/jobsController.ts` around lines 230 - 245, The mapping
over finalResults currently creates placeholder records when jobsMap lacks a job
for a MatchResult.jobId, leaking invalid IDs into the API; change the pipeline
to first filter finalResults to only entries whose job exists in jobsMap (e.g.,
jobsMap.has(r.jobId) or check that jobsMap.get(r.jobId) is non-null) and then
take the first 5 (apply slice after filtering) before mapping to the response
shape; keep the same fields (title, company, location, description, skills,
jobUrl, score, confidence, reason) and use extractJobUrl(job.rawData) on the
validated job.
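The filter-then-slice ordering from the suggested fix can be isolated as a pure function, sketched here with simplified stand-ins for the controller's real `MatchResult` and job types:

```typescript
// Simplified stand-ins for the controller's real types.
type MatchResult = { jobId: string; score: number };
type JobMeta = { title: string; company: string };

// Drop results whose jobId has no backing job, then take the top N.
// Filtering before slicing matters: slicing first could return fewer
// than N valid jobs even when enough valid matches exist further down.
const topValidResults = (
  results: MatchResult[],
  jobsMap: Map<string, JobMeta>,
  limit = 5
) =>
  results
    .filter((r) => jobsMap.has(r.jobId))
    .slice(0, limit)
    .map((r) => ({ ...r, ...jobsMap.get(r.jobId)! }));
```

Because unknown IDs are removed before the slice, no placeholder "Job Title"/"Company" records can reach the API response.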
```ts
const handleJobClick = (job: Job) => {
  if (job.jobUrl) {
    window.open(job.jobUrl, "_blank");
  } else {
```
Add noopener,noreferrer when opening external job URLs to prevent reverse-tabnabbing attacks.
Line 117 opens backend-provided URLs without severing the window.opener connection, allowing the opened page to redirect the original tab to a malicious site. Use the third parameter to add these security flags.
Suggested fix
```diff
- window.open(job.jobUrl, "_blank");
+ window.open(job.jobUrl, "_blank", "noopener,noreferrer");
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@frontend/src/pages/JobPage.tsx` around lines 115 - 118, The handleJobClick
handler currently calls window.open(job.jobUrl, "_blank") which leaves
window.opener intact; update the call in the handleJobClick function to pass the
third argument with "noopener,noreferrer" (e.g., window.open(job.jobUrl,
"_blank", "noopener,noreferrer")) so external job URLs (job.jobUrl) are opened
without allowing reverse-tabnabbing via window.opener.
```ts
const handleJobClick = (job: Job) => {
  if (job.jobUrl) {
    window.open(job.jobUrl, "_blank");
  } else {
```
Add noopener,noreferrer when opening job URLs to prevent window.opener access.
The code opens URLs without severing the opener reference. While the current mock data is safe, this pattern should be corrected before connecting to backend data to prevent the opened page from accessing or redirecting the original tab.
Suggested fix
```diff
- window.open(job.jobUrl, "_blank");
+ window.open(job.jobUrl, "_blank", "noopener,noreferrer");
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@frontend/src/pages/SavedPage.tsx` around lines 20 - 23, handleJobClick opens
job.jobUrl using window.open without severing the opener reference; update the
call in handleJobClick to prevent window.opener access by adding the noopener
and noreferrer flags (e.g. pass "noopener,noreferrer" as the third argument to
window.open) or replace the open with creating an <a> element that sets
rel="noopener noreferrer" before programmatically clicking it; modify the
window.open/job.jobUrl usage so the new tab cannot access or redirect the
original page.
@coderabbitai create these 5 different latency based issues assign to none
@coderabbitai please create separate issues for these two
LGTM !!
Creating both issues now!
Summary by CodeRabbit

- New Features
- Bug Fixes