
feat(agents,skills,rules): add Rust, Java, mobile, DevOps, and performance content#298

Open
janmaaarc wants to merge 5 commits into affaan-m:main from janmaaarc:feat/expand-language-devops-coverage

Conversation


@janmaaarc janmaaarc commented Feb 26, 2026

Summary

Adds 35 new files that expand coverage across languages, platforms, and domains identified as gaps in the repository.

Agents (3): rust-reviewer, java-reviewer, mobile-reviewer
Rules (10): rules/rust/ and rules/java/ (5 each)
Skills (14): rust-patterns, rust-testing, kotlin-patterns, react-native-patterns, flutter-patterns, data-engineering-patterns, mlops-patterns, terraform-patterns, kubernetes-patterns, cicd-patterns, frontend-performance, database-performance, api-performance, e2e-patterns
Context Templates (4): debug, deploy, refactor, planning
Examples (4): 3 codemaps (Next.js, Python API, Go microservice) + coverage config

Type

  • Skill
  • Agent
  • Hook
  • Command

Testing

  • 991/992 tests pass (node tests/run-all.js) — 1 pre-existing failure in hooks.json regex unrelated to this PR
  • All validation scripts pass: validate-agents.js (16 agents), validate-skills.js (64 skills), validate-rules.js (40 rules)

Checklist

  • Follows format guidelines
  • Tested with Claude Code
  • No sensitive info (API keys, paths)
  • Clear descriptions

Summary by CodeRabbit

  • Documentation
    • Added an extensive set of new guides and skill libraries: language- and platform-specific reviewer guides, debugging/deploy/planning/refactor contexts, multiple architecture examples and codemaps, expanded rules/hooks/security/testing/style for Java, Rust, and others, plus broad skill collections covering performance, CI/CD, data engineering, MLOps, frontend/mobile, Kubernetes, and testing best practices.

…e content

- 3 new agents: rust-reviewer, java-reviewer, mobile-reviewer
- 10 new rules: rules/rust/ and rules/java/ (5 each)
- 14 new skills: rust, kotlin, react-native, flutter, data-engineering,
  mlops, terraform, kubernetes, cicd, frontend-performance,
  database-performance, api-performance, e2e-patterns
- 4 new context templates: debug, deploy, refactor, planning
- 4 new examples: 3 codemaps + coverage config

coderabbitai bot commented Feb 26, 2026

Note

Reviews paused

It looks like this branch is under active development. To avoid overwhelming you with review comments due to an influx of new commits, CodeRabbit has automatically paused this review. You can configure this behavior by changing the reviews.auto_review.auto_pause_after_reviewed_commits setting.

Use the following commands to manage reviews:

  • @coderabbitai resume to resume automatic reviews.
  • @coderabbitai review to trigger a single review.

Use the checkboxes below for quick actions:

  • ▶️ Resume reviews
  • 🔍 Trigger review

No actionable comments were generated in the recent review. 🎉

ℹ️ Recent review info

Configuration used: defaults

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 5994135 and bdbcc78.

📒 Files selected for processing (3)
  • skills/cicd-patterns/SKILL.md
  • skills/mlops-patterns/SKILL.md
  • skills/terraform-patterns/SKILL.md

📝 Walkthrough

Walkthrough

Adds a large set of new documentation: reviewer agent specs, operational contexts, Java/Kotlin and Rust-specific rules/hooks, many skill pattern guides (performance, infra, data, ML, mobile, frontend), and example codemaps to standardize review, testing, deployment, and architecture practices.

Changes

Cohort / File(s) Summary
Agent Specifications
agents/java-reviewer.md, agents/mobile-reviewer.md, agents/rust-reviewer.md
New reviewer role docs specifying invocation/workflow, prioritized checks (CRITICAL/HIGH/MEDIUM), diagnostic commands, framework/tool checks, and approval criteria for Java/Kotlin, mobile, and Rust reviews.
Operational Contexts
contexts/debug.md, contexts/deploy.md, contexts/planning.md, contexts/refactor.md
New context guides covering debugging, deployment/rollback, planning/decision records, and refactor processes with rules, checklists, and verification steps.
Examples / Codemaps & Coverage
examples/codemap-go-microservice.md, examples/codemap-nextjs.md, examples/codemap-python-api.md, examples/coverage-config.md
Architectural codemaps for Go/Next.js/Python services and a cross-language coverage/configuration reference with CI snippets and thresholds.
Java/Kotlin Rules & Hooks
rules/java/coding-style.md, rules/java/hooks.md, rules/java/patterns.md, rules/java/security.md, rules/java/testing.md
Adds Java/Kotlin coding style, hooks (format/lint/CI), design patterns, security guidance (SQL, XXE, deserialization), and testing guidance (JUnit5/MockK/Testcontainers).
Rust Rules & Hooks
rules/rust/coding-style.md, rules/rust/hooks.md, rules/rust/patterns.md, rules/rust/security.md, rules/rust/testing.md
Adds Rust coding style, hook configs (cargo fmt/clippy), patterns (newtype/typestate), unsafe/dependency policies, security scanning, and Rust testing practices.
Skill Patterns — Performance & Database
skills/api-performance/SKILL.md, skills/database-performance/SKILL.md, skills/frontend-performance/SKILL.md
Comprehensive guides for API, database, and frontend performance patterns with examples, commands, and checklists.
Skill Patterns — CI/CD & Infra
skills/cicd-patterns/SKILL.md, skills/kubernetes-patterns/SKILL.md, skills/terraform-patterns/SKILL.md
CI/CD, Kubernetes, and Terraform pattern guides covering workflows, security, module design, and deployment strategies with examples.
Skill Patterns — Data / ML / E2E
skills/data-engineering-patterns/SKILL.md, skills/mlops-patterns/SKILL.md, skills/e2e-patterns/SKILL.md
Data engineering, MLOps, and end-to-end testing patterns (pipelines, experiment tracking, Playwright) with practical examples.
Skill Patterns — Mobile & Language-specific
skills/flutter-patterns/SKILL.md, skills/react-native-patterns/SKILL.md, skills/kotlin-patterns/SKILL.md, skills/rust-patterns/SKILL.md, skills/rust-testing/SKILL.md
Platform- and language-specific idioms and testing guides: Flutter, React Native, Kotlin, Rust patterns and Rust testing best practices.

Sequence Diagram(s)

(omitted — changes are documentation only; no new multi-component runtime control flow to visualize)

Estimated code review effort

🎯 3 (Moderate) | ⏱️ ~45 minutes


Suggested reviewers

  • affaan-m

Poem

🐇
I hopped through pages, line by line,
New guides and patterns, neat design.
Java, Rust, mobile in my paw—
Docs to guard the code we saw.
A rabbit cheers: review and shine!

🚥 Pre-merge checks | ✅ 3
✅ Passed checks (3 passed)
  • Description Check — ✅ Passed — Check skipped: CodeRabbit’s high-level summary is enabled.
  • Title Check — ✅ Passed — The title accurately reflects the main changes: adding Rust, Java, mobile, DevOps, and performance content across agents, skills, and rules.
  • Docstring Coverage — ✅ Passed — No functions found in the changed files to evaluate docstring coverage; skipping docstring coverage check.

✏️ Tip: You can configure your own custom pre-merge checks in the settings.




@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 20

Note

Due to the large number of review comments, Critical and Major severity comments were prioritized as inline comments.

🟡 Minor comments (9)
skills/terraform-patterns/SKILL.md-227-236 (1)

227-236: ⚠️ Potential issue | 🟡 Minor

Fix file name inconsistency in plan workflow.

The workflow outputs the plan to plan.tfplan (line 228) but attempts to read from plan.txt (line 236). This mismatch will cause the workflow to fail.

🔧 Proposed fix

Either change the output file:

       - name: Plan
-        run: terraform plan -no-color -out=plan.tfplan
+        run: terraform plan -no-color -out=plan.txt
         env:
           AWS_ROLE_ARN: ${{ secrets.TF_ROLE_ARN }}

Or change the read operation:

           script: |
-            const plan = require('fs').readFileSync('plan.txt', 'utf8');
+            const plan = require('fs').readFileSync('plan.tfplan', 'utf8');
             github.rest.issues.createComment({

Note: .tfplan files are binary, so you may need to convert to text format first with terraform show plan.tfplan > plan.txt or use terraform plan -no-color | tee plan.txt.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@skills/terraform-patterns/SKILL.md` around lines 227 - 236, The workflow
currently writes the plan to plan.tfplan in the "Plan" step but the "Comment
Plan" step reads plan.txt, causing a mismatch; fix by making the two steps use
the same artifact: either change the "Plan" step to produce a text plan (e.g.,
run terraform plan -no-color | tee plan.txt) so the "Comment Plan" script can
read plan.txt, or keep plan.tfplan and update the "Comment Plan" step to
convert/read it via terraform show plan.tfplan > plan.txt (or run terraform show
-no-color plan.tfplan and read that output) so the file names and formats align
with the const plan = require('fs').readFileSync(...) in the Comment Plan step.
rules/java/security.md-10-10 (1)

10-10: ⚠️ Potential issue | 🟡 Minor

Use hyphenated compound adjective for consistency.

Use “Java/Kotlin-specific content” instead of “Java/Kotlin specific content.”

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@rules/java/security.md` at line 10, Update the sentence "This file extends
[common/security.md](../common/security.md) with Java/Kotlin specific content."
to use a hyphenated compound adjective: change it to "This file extends
[common/security.md](../common/security.md) with Java/Kotlin-specific content."
This edits the header/first line in rules/java/security.md to ensure consistent
hyphenation.
rules/rust/patterns.md-9-9 (1)

9-9: ⚠️ Potential issue | 🟡 Minor

Hyphenate “Rust-specific” in the intro sentence.

Line 9 currently uses “Rust specific”; this should be “Rust-specific”.

Suggested text fix
-> This file extends [common/patterns.md](../common/patterns.md) with Rust specific content.
+> This file extends [common/patterns.md](../common/patterns.md) with Rust-specific content.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@rules/rust/patterns.md` at line 9, Update the intro sentence in
rules/rust/patterns.md by replacing the phrase "Rust specific" with the
hyphenated form "Rust-specific" so the sentence reads: "This file extends
[common/patterns.md](../common/patterns.md) with Rust-specific content." Locate
the exact sentence in the file and make the single-word change to add the
hyphen.
rules/rust/hooks.md-9-9 (1)

9-9: ⚠️ Potential issue | 🟡 Minor

Minor wording fix: use “Rust-specific”.

Please hyphenate the compound adjective in Line 9.

Suggested text fix
-> This file extends [common/hooks.md](../common/hooks.md) with Rust specific content.
+> This file extends [common/hooks.md](../common/hooks.md) with Rust-specific content.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@rules/rust/hooks.md` at line 9, Change the phrase "Rust specific" to the
hyphenated compound adjective "Rust-specific" in the sentence that reads "This
file extends [common/hooks.md](../common/hooks.md) with Rust specific content."
Update that exact text in rules/rust/hooks.md so it becomes "This file extends
[common/hooks.md](../common/hooks.md) with Rust-specific content." Ensure only
the hyphenation is changed and wording/links remain identical otherwise.
rules/rust/coding-style.md-9-9 (1)

9-9: ⚠️ Potential issue | 🟡 Minor

Use hyphenated compound adjective (Rust-specific).

Line 9 should read “with Rust-specific content” for correct grammar.

Suggested text fix
-> This file extends [common/coding-style.md](../common/coding-style.md) with Rust specific content.
+> This file extends [common/coding-style.md](../common/coding-style.md) with Rust-specific content.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@rules/rust/coding-style.md` at line 9, The sentence that currently reads
"This file extends [common/coding-style.md](../common/coding-style.md) with Rust
specific content" should use a hyphenated compound adjective; change "Rust
specific" to "Rust-specific" so the sentence becomes "with Rust-specific
content" to correct the grammar.
skills/mlops-patterns/SKILL.md-210-223 (1)

210-223: ⚠️ Potential issue | 🟡 Minor

Drift detection snippet is missing required imports.

pd.DataFrame and np.number are used without importing pandas as pd and numpy as np in this example block.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@skills/mlops-patterns/SKILL.md` around lines 210 - 223, The snippet uses
pandas and numpy types but omits imports; add the missing imports at the top of
this example so pd and np are defined (e.g., import pandas as pd and import
numpy as np) before the existing imports and definitions such as DriftResult and
detect_drift to ensure pd.DataFrame and np.number resolve properly.
rules/rust/testing.md-9-9 (1)

9-9: ⚠️ Potential issue | 🟡 Minor

Hyphenate compound adjective in description sentence.

Use “Rust-specific content” instead of “Rust specific content.”

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@rules/rust/testing.md` at line 9, Update the description sentence string
"This file extends [common/testing.md](../common/testing.md) with Rust specific
content." to hyphenate the compound adjective, changing "Rust specific content"
to "Rust-specific content" so the sentence reads "This file extends
[common/testing.md](../common/testing.md) with Rust-specific content."
rules/java/testing.md-10-10 (1)

10-10: ⚠️ Potential issue | 🟡 Minor

Use hyphenated compound adjective in prose.

Change “Java/Kotlin specific content” to “Java/Kotlin-specific content” for cleaner grammar.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@rules/java/testing.md` at line 10, Update the prose on the line containing
"This file extends [common/testing.md](../common/testing.md) with Java/Kotlin
specific content" to use a hyphenated compound adjective: change "Java/Kotlin
specific content" to "Java/Kotlin-specific content" so the sentence reads "This
file extends [common/testing.md](../common/testing.md) with Java/Kotlin-specific
content".
rules/java/coding-style.md-10-10 (1)

10-10: ⚠️ Potential issue | 🟡 Minor

Minor wording fix for compound adjective.

Use “Java/Kotlin-specific content” instead of “Java/Kotlin specific content.”

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@rules/java/coding-style.md` at line 10, Update the wording to use the
compound adjective form: replace the phrase "Java/Kotlin specific content" with
"Java/Kotlin-specific content" in the sentence that reads "This file extends ...
with Java/Kotlin specific content" so the hyphenated form is used consistently
in the document.
🧹 Nitpick comments (6)
skills/terraform-patterns/SKILL.md (1)

11-16: Consider adding "How It Works" and "Examples" sections.

Similar to the rust-patterns skill, this file has "When to Activate" but lacks explicit "How It Works" and "Examples" section headers. While the content is well-organized with examples throughout, the guideline specifies these sections for consistency.

As per coding guidelines: Skills should be formatted with clear sections for When to Use, How It Works, and Examples.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@skills/terraform-patterns/SKILL.md` around lines 11 - 16, Add explicit "How
It Works" and "Examples" section headers to SKILL.md to match the rust-patterns
structure: insert a "How It Works" section after the "When to Activate" header
summarizing the skill's internal approach (e.g., what Terraform
concepts/patterns it leverages and typical workflow) and add a separate
"Examples" section that groups the concrete usage snippets already present;
update any existing dispersed examples to live under the new "Examples" header
and ensure headings match the guideline names ("How It Works", "Examples") for
consistency with other skills.
skills/rust-patterns/SKILL.md (2)

11-16: Consider adding "How It Works" and "Examples" sections.

The skill structure includes "When to Activate" but doesn't have explicit "How It Works" and "Examples" section headers. While the content is comprehensive with examples integrated throughout, the guideline states skills should have "clear sections for When to Use, How It Works, and Examples."

Consider either:

  1. Adding these sections with the current pattern-based content organized under them
  2. Or confirming that pattern-based organization is an accepted alternative structure

As per coding guidelines: Skills should be formatted with clear sections for When to Use, How It Works, and Examples.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@skills/rust-patterns/SKILL.md` around lines 11 - 16, The SKILL.md currently
has a "When to Activate" section but lacks explicit "How It Works" and
"Examples" headers; update the document by adding clear "How It Works" and
"Examples" sections (or rename "When to Activate" to "When to Use" to match
guidelines), then reorganize the existing pattern-based content under these
headers so each section contains the relevant descriptions and sample patterns;
alternatively, if you prefer the pattern-based layout, add a brief note at the
top explicitly stating that this file uses a pattern-based organization that
maps to "When to Use", "How It Works", and "Examples" to satisfy the guideline.

210-215: Document panic behavior or handle errors in builder.

The header() method uses .unwrap() which will panic if the key or value are invalid HTTP header strings. While panicking in builder methods is a common pattern in Rust, it's worth documenting this behavior or considering alternatives.

Options:

  1. Document that invalid header names/values will panic
  2. Return Result<Self, Error> for fallible operations
  3. Accept pre-validated HeaderName/HeaderValue types
📝 Example: Document the panic behavior
     pub fn header(mut self, key: &str, value: &str) -> Self {
+        // Panics if key or value are invalid HTTP header strings
         self.headers.insert(
             HeaderName::from_str(key).unwrap(),
             HeaderValue::from_str(value).unwrap(),
         );
         self
     }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@skills/rust-patterns/SKILL.md` around lines 210 - 215, The header() method
currently calls HeaderName::from_str(...).unwrap() and
HeaderValue::from_str(...).unwrap(), which panics on invalid input; change the
str-based header API to be fallible and add a validated variant: update pub fn
header(mut self, key: &str, value: &str) -> Result<Self, Box<dyn
std::error::Error>> and replace the unwraps with the ? operator on
HeaderName::from_str(key) and HeaderValue::from_str(value), and add a second
convenience method pub fn header_pair(mut self, name: HeaderName, value:
HeaderValue) -> Self that inserts directly for callers who already have
validated HeaderName/HeaderValue; document both methods with doc comments
stating the fallible behavior of header() and the safe alternative
header_pair().
skills/kubernetes-patterns/SKILL.md (1)

50-50: Avoid :latest in examples to stay consistent with immutable deployment guidance.

The document says to use immutable deployments, but fluent/fluent-bit:latest and tag: latest contradict that. Prefer pinned, versioned tags in examples.

Also applies to: 340-340

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@skills/kubernetes-patterns/SKILL.md` at line 50, Replace the use of the
floating tag "fluent/fluent-bit:latest" (and any other "tag: latest"
occurrences) with a pinned, versioned tag to match the immutable deployment
guidance; update the example image reference "fluent/fluent-bit:latest" to a
specific release (e.g., a stable semantic version) and change the other instance
noted ("tag: latest") likewise so both examples show immutable, versioned tags.
contexts/refactor.md (1)

20-25: Add a context-window safety guard for large refactors.

Consider adding a rule to stop before the last ~20% of context window and split work into smaller passes.

Based on learnings, "Avoid using the last 20% of context window for large-scale refactoring, feature implementation spanning multiple files, or debugging complex interactions".

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@contexts/refactor.md` around lines 20 - 25, Add a new safety rule to the
"Safety Rules" section (the list under "## Safety Rules") that instructs
contributors to stop before using the final ~20% of the model's context window
for large refactors or cross-file changes and to split such work into smaller
passes; update the list to include a concise bullet like "Avoid using the last
20% of the context window for large-scale refactors or multi-file changes—split
work into smaller passes" so the policy is explicit alongside the existing rules
(Tests must pass..., Never change behavior..., etc.).
skills/cicd-patterns/SKILL.md (1)

210-213: Avoid mutable action refs in CI examples.

@main is not reproducible and can drift unexpectedly; use a pinned release tag or commit SHA.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@skills/cicd-patterns/SKILL.md` around lines 210 - 213, Replace the mutable
action ref "uses: trufflesecurity/trufflehog@main" with a pinned release tag or
commit SHA in the CI example (e.g., "uses: trufflesecurity/trufflehog@vX.Y.Z" or
a specific commit hash) so the workflow is reproducible; update the example line
that contains uses: trufflesecurity/trufflehog@main to the chosen stable tag/sha
and, if present elsewhere in SKILL.md, make the same replacement.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@rules/java/hooks.md`:
- Around line 25-29: The two hook command entries containing "command": "if echo
'$TOOL_INPUT' | grep -qE '\\.java$'; then ..." and "command": "if echo
'$TOOL_INPUT' | grep -qE '\\.kt$'; then ..." use single quotes around
$TOOL_INPUT which prevents shell variable expansion; update both command strings
to use double quotes for $TOOL_INPUT (e.g., echo "$TOOL_INPUT") so the variable
expands and the subsequent grep and file extraction (grep -oE '[^ ]+\\.java' /
'[^ ]+\\.kt') receive actual file paths.

In `@rules/rust/hooks.md`:
- Around line 24-29: The hook matcher commands use single quotes around
$TOOL_INPUT so the shell sees the literal string '$TOOL_INPUT' instead of
expanding the variable; update the "command" values (the cargo fmt and cargo
clippy entries) to use double-quoted expansion (e.g. "$TOOL_INPUT") so grep
actually inspects the tool input, making sure to escape quotes correctly for
JSON string boundaries in the "command" property.

In `@skills/api-performance/SKILL.md`:
- Around line 11-301: The document is missing the required top-level sections
"When to Use", "How It Works", and "Examples"; update SKILL.md by (1) adding a
"When to Use" heading and moving/renaming the existing "When to Activate" bullet
list under it, (2) adding a "How It Works" heading that provides a short
conceptual summary of the mechanisms used (HTTP caching/ETag, Redis stampede
prevention, pagination, compression, async processing, rate limiting token
bucket, graceful shutdown, batch APIs, monitoring), and (3) adding an "Examples"
heading that contains the existing code snippets (cacheControl, withETag,
cachedQuery, pagination, compression, BullMQ, rate-limiter Lua, graceful
shutdown, batch API, Prometheus) so the file matches the repository skill
format; ensure headings are top-level Markdown (##) and preserve the code blocks
and interfaces (e.g., cacheControl, withETag, cachedQuery, checkRateLimit) under
Examples.
- Around line 71-87: The cache lock currently writes a constant value and
unconditionally deletes the lock (lockKey) and recursively retries without
bounds; change cachedQuery to generate a unique ownership token (e.g., UUID)
when attempting the lock, pass that token as the value in redis.set(lockKey,
token, 'NX', 'EX', 30) and replace the unconditional redis.del(lockKey) with an
atomic Lua release that checks the token before deleting (use a EVAL script that
compares ARGV[1] to GET(key) then DEL only on match). Also remove the unbounded
recursive retry in cachedQuery and implement a bounded retry loop with a
maxAttempts and exponential/backoff sleep between attempts before falling back
to fetcher, while still respecting ttl and using fetcher for the actual fetch.
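The locking scheme the prompt describes (unique ownership token, compare-and-delete release, bounded retry) can be sketched in Python. This is a minimal sketch against an in-memory stand-in for Redis, since the original file is TypeScript and not fully shown here; in production the release step would be the atomic Lua `EVAL` the review describes, and `set_nx` would be `SET key token NX EX 30`.

```python
# Sketch of a stampede lock with token ownership and bounded retries.
# FakeRedis is a stand-in: just enough of SET NX / GET / DEL to run.
import time
import uuid


class FakeRedis:
    def __init__(self):
        self.store = {}

    def set_nx(self, key, value):  # SET key value NX (EX elided)
        if key in self.store:
            return False
        self.store[key] = value
        return True

    def set(self, key, value):
        self.store[key] = value

    def get(self, key):
        return self.store.get(key)

    def delete(self, key):
        self.store.pop(key, None)


def cached_query(redis, key, fetcher, max_attempts=3, backoff=0.01):
    cached = redis.get(key)
    if cached is not None:
        return cached
    lock_key = f"lock:{key}"
    for attempt in range(max_attempts):
        token = str(uuid.uuid4())          # unique ownership token
        if redis.set_nx(lock_key, token):
            try:
                value = fetcher()
                redis.set(key, value)
                return value
            finally:
                # Real Redis: atomic Lua compare-and-delete on the token.
                if redis.get(lock_key) == token:
                    redis.delete(lock_key)
        time.sleep(backoff * (2 ** attempt))  # bounded exponential backoff
        cached = redis.get(key)               # another worker may have filled it
        if cached is not None:
            return cached
    return fetcher()  # give up on the lock and fetch directly
```

The token prevents one worker from deleting a lock that a slower worker still holds, and the loop replaces the unbounded recursion with a capped retry before falling back to `fetcher`.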

In `@skills/cicd-patterns/SKILL.md`:
- Around line 11-286: The SKILL.md is missing the required top-level headings;
add explicit "When to Use", "How It Works", and "Examples" sections and
reorganize existing content accordingly: move the current "When to Activate"
content under the new "When to Use" heading, convert "Core Principles", "GitHub
Actions", "Complete Pipeline", "Deployment Strategies", "Security in CI/CD",
"Caching Strategies", "Monorepo CI", and "Semantic Versioning" into the "How It
Works" section (keeping subsections and snippets), and place one or more
practical pipeline YAML excerpts (e.g., the reusable workflow and complete
pipeline) under the "Examples" section; ensure headings are top-level Markdown
headers and the file matches the repository skill format.

In `@skills/data-engineering-patterns/SKILL.md`:
- Around line 80-86: The incremental_sync function is vulnerable to SQL
injection because it interpolates the table and watermark.column identifiers
directly into the query; instead, validate or whitelist the table and column
names before use (e.g., check against a known schema or
allowed_tables/allowed_columns list returned/used by get_watermark), and build
the query using safe identifier quoting utilities from your DB library or an
explicit mapping from allowed names to safe identifiers; keep the parameterized
value for the watermark.value passed as a bound parameter to
source_conn.execute; update incremental_sync (and any callers like
get_watermark/load_records) to perform the whitelist/validation and use the safe
identifier, not raw user input.
- Around line 74-78: The get_watermark function currently interpolates the table
name into SQL using "%" which allows SQL injection via the table parameter;
change it to build the query with a safe identifier instead of string formatting
— for example use the DB driver's identifier compositing (e.g., psycopg2.sql.SQL
and psycopg2.sql.Identifier) or validate/whitelist the table name before use,
then execute the composed query and keep returning Watermark(table=table,
column='updated_at', value=result[0]).
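The whitelist alternative mentioned above can be sketched as follows. `ALLOWED_TABLES` and `build_watermark_query` are illustrative assumptions (the original file's schema and DB driver are not shown in this review); the key idea is that only a vetted identifier is interpolated, while the watermark value stays a bound parameter.

```python
# Whitelist-based identifier handling for get_watermark (sketch).
ALLOWED_TABLES = {"orders", "users", "events"}  # assumed sync targets


def safe_table_identifier(table: str) -> str:
    """Reject any table name not in the whitelist before it reaches SQL."""
    if table not in ALLOWED_TABLES:
        raise ValueError(f"unknown table: {table!r}")
    return f'"{table}"'  # quote the vetted identifier


def build_watermark_query(table: str) -> str:
    # Only the vetted identifier is interpolated; the watermark value is
    # passed separately as a bound parameter (%s). With psycopg2 you could
    # instead compose:
    #   sql.SQL("SELECT max(updated_at) FROM {}").format(sql.Identifier(table))
    return f"SELECT max(updated_at) FROM {safe_table_identifier(table)}"
```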

In `@skills/database-performance/SKILL.md`:
- Around line 11-283: The document is missing the required explicit headings;
add top-level sections "When to Use", "How It Works", and "Examples" and
reorganize existing content under them: move the current "When to Activate"
items into "When to Use"; create a "How It Works" section summarizing the key
techniques (EXPLAIN ANALYZE, Index Selection, N+1 detection/ORM fixes,
Connection Pooling, Caching, Materialized Views, Partitioning, Monitoring) with
brief explanatory sentences; create an "Examples" section containing the
concrete snippets (the EXPLAIN ANALYZE query, index examples, N+1 bad/good
snippets, pool and cache functions, pagination examples, materialized view SQL,
partitioning SQL, monitoring queries). Ensure the new headings are Markdown
H2/H3 as appropriate and keep existing code blocks unchanged.

In `@skills/e2e-patterns/SKILL.md`:
- Around line 11-370: Update the top-level headings in SKILL.md to match the
mandated template: rename the "When to Activate" section header to "When to
Use", insert a new "How It Works" section summarizing the core principles and
configuration (you can repurpose the existing "Core Principles" and
"Configuration" content under this new header), and add an "Examples" section
that consolidates representative snippets from the existing Common Test
Scenarios / Visual Regression / Mobile Testing blocks; keep the existing
Playwright config and page-object examples but move them under "How It Works" or
"Examples" as appropriate so the file contains explicit "When to Use", "How It
Works", and "Examples" top-level headings.
- Line 66: The BasePage constructor currently declares the Playwright Page as a
protected member (constructor(protected page: Page) {}), but tests access
loginPage.page and authenticatedPage.page directly; change the constructor
parameter to be public and readonly (constructor(public readonly page: Page) {})
so .page is externally accessible and immutable; update any derived classes or
usages that relied on protected visibility to use the new public readonly page
property (e.g., references in loginPage.page.getByText(...) and
authenticatedPage.page.toHaveURL(...)).

In `@skills/flutter-patterns/SKILL.md`:
- Around line 11-375: The SKILL.md currently uses "When to Activate" and "Core
Principles" but lacks the standardized sections required by the guidelines; add
explicit top-level Markdown sections titled "When to Use", "How It Works", and
"Examples" (replace or move content from the existing "When to Activate" into
"When to Use", move the current "Core Principles" and short explanations into
"How It Works"), and consolidate the concrete code snippets (UserCard, AppState,
Riverpod provider/UserNotifier, AuthBloc, GoRouter, BatteryService,
PaginatedListView, tests, etc.) into a new "Examples" section with brief
captions for each so the file follows the required template and clearly
separates intent, explanation, and sample usage.
- Around line 297-302: The snippet in build() uses an undefined variable items
causing a compile error; fix by referencing the correct data source or declaring
the list in scope (e.g., use the widget property or a state field like
List<Item> items) and ensure ItemTile consumes the same item type; update the
ListView.builder call in the build method (and, if needed, initialize or pass
_items into the widget/state) so itemCount and items[index] refer to a defined
symbol.

In `@skills/frontend-performance/SKILL.md`:
- Around line 11-313: Replace the current top-level "When to Activate" header
with "When to Use", add a new top-level "How It Works" section that concisely
explains the performance principles covered (e.g., LCP, INP, CLS, React
rendering optimization, bundle splitting, caching, virtualization, font loading
and measurement) and why they improve UX, and create a top-level "Examples"
section that contains the existing code snippets (move the LCP/INP/CLS, React
Optimization, Bundle Optimization, Virtualization, Font Loading, Caching, and
Measuring Performance code blocks under this "Examples" header); ensure headings
are Markdown top-level (## or ### as appropriate) and update any internal
references from "When to Activate" to "When to Use".
- Around line 283-290: The SWR config in the useSWR call uses the invalid option
staleTime (in the call assigning { revalidateOnFocus, dedupingInterval,
staleTime }) so remove staleTime and replace it with an appropriate SWR option
such as revalidateIfStale, revalidateOnMount, or refreshInterval depending on
desired behavior; update the object passed to useSWR (the options for useSWR in
the same component/function) to use one of these valid keys and remove staleTime
so SWR will honor the intended caching/revalidation logic.
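As a sketch, the corrected options object might look like the following — `staleTime` removed, with `refreshInterval` chosen here as one possible replacement (the interval values are illustrative, not from the doc):

```javascript
// Hypothetical SWR options after the fix: staleTime is not a valid SWR
// key, so it is dropped; refreshInterval re-polls on a timer instead.
const swrOptions = {
  revalidateOnFocus: false,
  dedupingInterval: 60_000,   // collapse duplicate requests within 60s
  refreshInterval: 300_000,   // re-poll every 5 minutes
};
// Usage would be: useSWR(key, fetcher, swrOptions)
```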

In `@skills/kotlin-patterns/SKILL.md`:
- Around line 11-343: Rename the top-level "When to Activate" header to "When to
Use", add a new "How It Works" section that summarizes the core principles (Null
Safety, Immutability, Extension Functions, Coroutines, Sealed Classes/State
Machines, DSLs, Value Classes, Scope Functions, Error Handling) and references
the existing conceptual items (e.g., functions like getUserCity, process,
withdraw, fetchDashboard, syncAll, observePrices, CartViewModel, ApiResult.fold,
OrderStatus.canCancel, html/ServerConfig builders, UserId/Money value classes)
to guide readers, and create an "Examples" section that moves all concrete code
blocks (the Kotlin snippets under Null Safety, Immutability, Extension
Functions, Coroutine Patterns, DSL Builders, Value Classes, Scope Functions,
Error Handling, and Testing with MockK) into that section so the doc matches the
required top-level structure When to Use / How It Works / Examples while
preserving the existing examples and snippets.

In `@skills/kubernetes-patterns/SKILL.md`:
- Around line 11-360: Rename and restructure the SKILL.md to the required
top-level headings: replace "When to Activate" with "When to Use", add a new
top-level "How It Works" section that consolidates "Core Principles" and the
explanatory notes (e.g., Declarative configuration, Immutable deployments,
Health checks, Resource limits, Pod Design Patterns, Deployment Strategies,
Autoscaling, Secrets Management, Security, Observability, Helm Chart Patterns),
and move all YAML snippets and concrete examples under a top-level "Examples"
section (keeping subsections like Sidecar Pattern, Init Containers, Rolling
Update, Canary with Argo Rollouts, HPA, KEDA, External Secrets, Network Policy,
ServiceMonitor, PrometheusRule, Helm usage). Ensure headings are top-level (##)
and the document order is: When to Use, How It Works, Examples, preserving
existing content and examples verbatim.

In `@skills/mlops-patterns/SKILL.md`:
- Around line 11-283: The SKILL.md lacks the required standardized headings; add
top-level Markdown sections titled "When to Use", "How It Works", and
"Examples". Move or duplicate the existing "When to Activate" content under the
new "When to Use" heading (or rename it), create a "How It Works" section
summarizing core principles (Reproducibility, Versioning, Monitoring,
Automation) and the typical workflow (seed/config management, experiment
tracking, training → validation → deployment → monitoring) referencing
ExperimentConfig, set_all_seeds, tracked_experiment, and EarlyStopping, and
create an "Examples" section that collects the concrete code snippets
(reproducibility seed code, MLflow tracked_experiment, Optuna objective,
EarlyStopping, FastAPI predict, get_model_variant, detect_drift, check_gates,
and tests) so readers can find hands-on examples in one place.
- Around line 128-137: The should_stop implementation incorrectly instantiates a
new EarlyStopping and always returns False on improvement; instead update the
instance state: in EarlyStopping.should_stop, check improvement with the
existing logic (e.g., if score < self.best_score - self.min_delta for a "lower
is better" metric), then set self.best_score = score and self.counter = 0 and
return False; otherwise do self.counter += 1 and return self.counter >=
self.patience. Modify the method on the EarlyStopping class (should_stop,
self.best_score, self.counter, self.min_delta, self.patience) to maintain and
use instance state rather than creating a new object.

In `@skills/react-native-patterns/SKILL.md`:
- Around line 11-324: The document uses non-standard top-level headings (e.g.,
"When to Activate") and is missing the required sections; change the top-level
headings to "When to Use", add a new "How It Works" section that consolidates
"Core Principles", "Navigation Patterns", "State Management", and "Performance
Patterns" content, and add an "Examples" section that contains the sample code
blocks (FlatList, Zustand, React Query, Navigation, Animations, Image caching,
Testing, Detox). Specifically replace the "When to Activate" heading with "When
to Use", move or rename the "Core Principles" and subsequent pattern subsections
under a single "How It Works" header, and group all runnable code snippets under
a top-level "Examples" header to match the required SKILL structure while
keeping existing text and code intact.

In `@skills/rust-testing/SKILL.md`:
- Around line 11-344: Rename the top-level heading "When to Activate" to "When
to Use" and add a new "How It Works" heading that summarizes the testing
workflow and tooling (move or condense the TDD Workflow, Unit Test Module, Async
Testing, Property-Based Testing, Doc Tests, and Benchmarking sections under it),
then create an explicit "Examples" heading and place the concrete code blocks
(the RED/GREEN/REFACTOR TDD snippets, Unit Test Module, Integration Tests,
Shared Test Helpers, Mock Traits, proptest examples, Doc Tests, Async Testing,
and Benchmarking examples) beneath it; ensure headings like "TDD Workflow",
"Unit Test Module", and "Integration Tests" are promoted or folded under the
appropriate required sections so the file contains clear top-level "When to
Use", "How It Works", and "Examples" sections.

---

Minor comments:
In `@rules/java/coding-style.md`:
- Line 10: Update the wording to use the compound adjective form: replace the
phrase "Java/Kotlin specific content" with "Java/Kotlin-specific content" in the
sentence that reads "This file extends ... with Java/Kotlin specific content" so
the hyphenated form is used consistently in the document.

In `@rules/java/security.md`:
- Line 10: Update the sentence "This file extends
[common/security.md](../common/security.md) with Java/Kotlin specific content."
to use a hyphenated compound adjective: change it to "This file extends
[common/security.md](../common/security.md) with Java/Kotlin-specific content."
This edits the header/first line in rules/java/security.md to ensure consistent
hyphenation.

In `@rules/java/testing.md`:
- Line 10: Update the prose on the line containing "This file extends
[common/testing.md](../common/testing.md) with Java/Kotlin specific content" to
use a hyphenated compound adjective: change "Java/Kotlin specific content" to
"Java/Kotlin-specific content" so the sentence reads "This file extends
[common/testing.md](../common/testing.md) with Java/Kotlin-specific content".

In `@rules/rust/coding-style.md`:
- Line 9: The sentence that currently reads "This file extends
[common/coding-style.md](../common/coding-style.md) with Rust specific content"
should use a hyphenated compound adjective; change "Rust specific" to
"Rust-specific" so the sentence becomes "with Rust-specific content" to correct
the grammar.

In `@rules/rust/hooks.md`:
- Line 9: Change the phrase "Rust specific" to the hyphenated compound adjective
"Rust-specific" in the sentence that reads "This file extends
[common/hooks.md](../common/hooks.md) with Rust specific content." Update that
exact text in rules/rust/hooks.md so it becomes "This file extends
[common/hooks.md](../common/hooks.md) with Rust-specific content." Ensure only
the hyphenation is changed and wording/links remain identical otherwise.

In `@rules/rust/patterns.md`:
- Line 9: Update the intro sentence in rules/rust/patterns.md by replacing the
phrase "Rust specific" with the hyphenated form "Rust-specific" so the sentence
reads: "This file extends [common/patterns.md](../common/patterns.md) with
Rust-specific content." Locate the exact sentence in the file and make the
single-word change to add the hyphen.

In `@rules/rust/testing.md`:
- Line 9: Update the description sentence string "This file extends
[common/testing.md](../common/testing.md) with Rust specific content." to
hyphenate the compound adjective, changing "Rust specific content" to
"Rust-specific content" so the sentence reads "This file extends
[common/testing.md](../common/testing.md) with Rust-specific content."

In `@skills/mlops-patterns/SKILL.md`:
- Around line 210-223: The snippet uses pandas and numpy types but omits
imports; add the missing imports at the top of this example so pd and np are
defined (e.g., import pandas as pd and import numpy as np) before the existing
imports and definitions such as DriftResult and detect_drift to ensure
pd.DataFrame and np.number resolve properly.

In `@skills/terraform-patterns/SKILL.md`:
- Around line 227-236: The workflow currently writes the plan to plan.tfplan in
the "Plan" step but the "Comment Plan" step reads plan.txt, causing a mismatch;
fix by making the two steps use the same artifact: either change the "Plan" step
to produce a text plan (e.g., run terraform plan -no-color | tee plan.txt) so
the "Comment Plan" script can read plan.txt, or keep plan.tfplan and update the
"Comment Plan" step to convert/read it via terraform show plan.tfplan > plan.txt
(or run terraform show -no-color plan.tfplan and read that output) so the file
names and formats align with the const plan = require('fs').readFileSync(...) in
the Comment Plan step.
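One way to reconcile the two steps is to keep the binary plan and derive plan.txt from it; a sketch of the aligned workflow steps (step names and paths follow the ones mentioned above, not the exact file):

```yaml
- name: Plan
  run: |
    terraform plan -no-color -out=plan.tfplan
    terraform show -no-color plan.tfplan > plan.txt

- name: Comment Plan
  uses: actions/github-script@v7
  with:
    script: |
      const plan = require('fs').readFileSync('plan.txt', 'utf8');
      // ...post `plan` as a PR comment...
```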

---

Nitpick comments:
In `@contexts/refactor.md`:
- Around line 20-25: Add a new safety rule to the "Safety Rules" section (the
list under "## Safety Rules") that instructs contributors to stop before using
the final ~20% of the model's context window for large refactors or cross-file
changes and to split such work into smaller passes; update the list to include a
concise bullet like "Avoid using the last 20% of the context window for
large-scale refactors or multi-file changes—split work into smaller passes" so
the policy is explicit alongside the existing rules (Tests must pass..., Never
change behavior..., etc.).

In `@skills/cicd-patterns/SKILL.md`:
- Around line 210-213: Replace the mutable action ref "uses:
trufflesecurity/trufflehog@main" with a pinned release tag or commit SHA in the
CI example (e.g., "uses: trufflesecurity/trufflehog@vX.Y.Z" or a specific commit
hash) so the workflow is reproducible; update the example line that contains
uses: trufflesecurity/trufflehog@main to the chosen stable tag/sha and, if
present elsewhere in SKILL.md, make the same replacement.

In `@skills/kubernetes-patterns/SKILL.md`:
- Line 50: Replace the use of the floating tag "fluent/fluent-bit:latest" (and
any other "tag: latest" occurrences) with a pinned, versioned tag to match the
immutable deployment guidance; update the example image reference
"fluent/fluent-bit:latest" to a specific release (e.g., a stable semantic
version) and change the other instance noted ("tag: latest") likewise so both
examples show immutable, versioned tags.

In `@skills/rust-patterns/SKILL.md`:
- Around line 11-16: The SKILL.md currently has a "When to Activate" section but
lacks explicit "How It Works" and "Examples" headers; update the document by
adding clear "How It Works" and "Examples" sections (or rename "When to
Activate" to "When to Use" to match guidelines), then reorganize the existing
pattern-based content under these headers so each section contains the relevant
descriptions and sample patterns; alternatively, if you prefer the pattern-based
layout, add a brief note at the top explicitly stating that this file uses a
pattern-based organization that maps to "When to Use", "How It Works", and
"Examples" to satisfy the guideline.
- Around line 210-215: The header() method currently calls
HeaderName::from_str(...).unwrap() and HeaderValue::from_str(...).unwrap(),
which panics on invalid input; change the str-based header API to be fallible
and add a validated variant: update pub fn header(mut self, key: &str, value:
&str) -> Result<Self, Box<dyn std::error::Error>> and replace the unwraps with
the ? operator on HeaderName::from_str(key) and HeaderValue::from_str(value),
and add a second convenience method pub fn header_pair(mut self, name:
HeaderName, value: HeaderValue) -> Self that inserts directly for callers who
already have validated HeaderName/HeaderValue; document both methods with doc
comments stating the fallible behavior of header() and the safe alternative
header_pair().
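A std-only sketch of the pattern (the real code uses `http::HeaderName` / `http::HeaderValue`; a tiny `validate()` stands in for their `FromStr` impls here so the example is self-contained):

```rust
use std::collections::HashMap;

// Stand-in for HeaderName/HeaderValue parsing; the real impls are stricter.
fn validate(s: &str) -> Result<String, String> {
    if s.is_empty() || !s.is_ascii() {
        return Err(format!("invalid header token: {s:?}"));
    }
    Ok(s.to_string())
}

#[derive(Default)]
struct RequestBuilder {
    headers: HashMap<String, String>,
}

impl RequestBuilder {
    /// Fallible: propagates validation errors with `?` instead of panicking.
    fn header(mut self, key: &str, value: &str) -> Result<Self, String> {
        let k = validate(key)?;
        let v = validate(value)?;
        self.headers.insert(k, v);
        Ok(self)
    }

    /// Infallible convenience for callers holding already-validated pairs.
    fn header_pair(mut self, key: String, value: String) -> Self {
        self.headers.insert(key, value);
        self
    }
}
```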

In `@skills/terraform-patterns/SKILL.md`:
- Around line 11-16: Add explicit "How It Works" and "Examples" section headers
to SKILL.md to match the rust-patterns structure: insert a "How It Works"
section after the "When to Activate" header summarizing the skill's internal
approach (e.g., what Terraform concepts/patterns it leverages and typical
workflow) and add a separate "Examples" section that groups the concrete usage
snippets already present; update any existing dispersed examples to live under
the new "Examples" header and ensure headings match the guideline names ("How It
Works", "Examples") for consistency with other skills.

ℹ️ Review info

Configuration used: defaults

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between d70bab8 and 63b3c1f.

📒 Files selected for processing (35)
  • agents/java-reviewer.md
  • agents/mobile-reviewer.md
  • agents/rust-reviewer.md
  • contexts/debug.md
  • contexts/deploy.md
  • contexts/planning.md
  • contexts/refactor.md
  • examples/codemap-go-microservice.md
  • examples/codemap-nextjs.md
  • examples/codemap-python-api.md
  • examples/coverage-config.md
  • rules/java/coding-style.md
  • rules/java/hooks.md
  • rules/java/patterns.md
  • rules/java/security.md
  • rules/java/testing.md
  • rules/rust/coding-style.md
  • rules/rust/hooks.md
  • rules/rust/patterns.md
  • rules/rust/security.md
  • rules/rust/testing.md
  • skills/api-performance/SKILL.md
  • skills/cicd-patterns/SKILL.md
  • skills/data-engineering-patterns/SKILL.md
  • skills/database-performance/SKILL.md
  • skills/e2e-patterns/SKILL.md
  • skills/flutter-patterns/SKILL.md
  • skills/frontend-performance/SKILL.md
  • skills/kotlin-patterns/SKILL.md
  • skills/kubernetes-patterns/SKILL.md
  • skills/mlops-patterns/SKILL.md
  • skills/react-native-patterns/SKILL.md
  • skills/rust-patterns/SKILL.md
  • skills/rust-testing/SKILL.md
  • skills/terraform-patterns/SKILL.md

- Rename 'When to Activate' to 'When to Use' in all 14 skills
- Add required 'How It Works' and 'Examples' section headers
- Fix SQL injection in data-engineering-patterns (psycopg2.sql.Identifier + allowlist)
- Fix EarlyStopping logic bug in mlops-patterns (mutable state updates)
- Fix cache stampede in api-performance (UUID lock token + atomic Lua release)
- Fix SWR staleTime to refreshInterval in frontend-performance
- Fix protected→public readonly page in e2e-patterns
- Fix unsafe .unwrap() to ? operator in rust-patterns
- Fix terraform plan filename consistency (plan.txt)
- Pin versions: trufflehog@v3.88.0, fluent-bit:3.2
- Fix hyphenation: Rust-specific, Java/Kotlin-specific across all rules
- Add missing pandas/numpy imports in mlops-patterns drift detection
- Add context window awareness warning to refactor context

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 11

♻️ Duplicate comments (5)
skills/kotlin-patterns/SKILL.md (1)

114-117: ⚠️ Potential issue | 🟠 Major

Nest example topics under the Examples section.

## Examples at Line 114 is immediately followed by another ## heading at Line 116, so the examples are siblings, not clearly inside the required section. Please make example topics subheadings (e.g., ###) under ## Examples.

As per coding guidelines, "Skills should be formatted as Markdown with clear sections for When to Use, How It Works, and Examples".

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@skills/kotlin-patterns/SKILL.md` around lines 114 - 117, The Examples section
currently has sibling-level headings (e.g., "## Coroutine Patterns") so move all
example topics under the main "## Examples" heading by demoting their headings
to subheadings (change "## Coroutine Patterns" and any other example-level "##"
headings to "###" or appropriate subheading level), ensuring "When to Use" and
"How It Works" remain top-level sections and that every example topic is nested
as a subsection of the "## Examples" heading.
skills/kubernetes-patterns/SKILL.md (1)

121-124: ⚠️ Potential issue | 🟠 Major

Keep all concrete patterns nested under ## Examples.

At Line 121 and Line 123, both headings are level-2, which makes Pod Design Patterns a peer of Examples instead of content within it. Please demote example subsections to ### under ## Examples.

As per coding guidelines, "Skills should be formatted as Markdown with clear sections for When to Use, How It Works, and Examples".

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@skills/kubernetes-patterns/SKILL.md` around lines 121 - 124, The "Pod Design
Patterns" heading is currently a level-2 peer of "## Examples" instead of being
nested inside it; change the "## Pod Design Patterns" heading to a level-3
heading ("### Pod Design Patterns") so it and any concrete pattern subsections
are nested under the existing "## Examples" section, and similarly demote any
other example-level "##" headings under Examples to "###" to comply with the
Skills Markdown structure.
skills/rust-testing/SKILL.md (1)

23-26: ⚠️ Potential issue | 🟠 Major

Place TDD and test topics inside ## Examples (as subheadings).

## Examples (Line 23) is immediately followed by another level-2 heading (Line 25), so examples are not structurally nested under the required section.

As per coding guidelines, "Skills should be formatted as Markdown with clear sections for When to Use, How It Works, and Examples".

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@skills/rust-testing/SKILL.md` around lines 23 - 26, The "## TDD Workflow for
Rust" heading (and any test-related headings) must be nested under the existing
"## Examples" section; change those level-2 headings to level-3 (e.g., rename
"## TDD Workflow for Rust" to "### TDD Workflow for Rust") and move any test
topic content so they appear as subsections under "## Examples" to satisfy the
required When to Use / How It Works / Examples structure.
rules/java/hooks.md (1)

32-37: ⚠️ Potential issue | 🟠 Major

Quote $TOOL_INPUT in hook commands to avoid path splitting bugs.

Wrap $TOOL_INPUT in double quotes on Line 32 and Line 36; otherwise paths with spaces/globs break and commands may lint/format the wrong files.

Suggested fix
- "command": "if echo $TOOL_INPUT | grep -q '\\.java'; then cd $(git rev-parse --show-toplevel) && mvn spotless:apply -q 2>/dev/null || gradle spotlessApply -q 2>/dev/null; fi"
+ "command": "if echo \"$TOOL_INPUT\" | grep -q '\\.java'; then cd \"$(git rev-parse --show-toplevel)\" && mvn spotless:apply -q 2>/dev/null || gradle spotlessApply -q 2>/dev/null; fi"

- "command": "if echo $TOOL_INPUT | grep -q '\\.kt'; then cd $(git rev-parse --show-toplevel) && ktlint --format $(echo $TOOL_INPUT | grep -o '[^ ]*\\.kt') 2>/dev/null; fi"
+ "command": "if echo \"$TOOL_INPUT\" | grep -q '\\.kt'; then cd \"$(git rev-parse --show-toplevel)\" && ktlint --format $(echo \"$TOOL_INPUT\" | grep -o '[^ ]*\\.kt') 2>/dev/null; fi"
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@rules/java/hooks.md` around lines 32 - 37, Wrap the TOOL_INPUT variable in
double quotes in both hook command strings to prevent path-splitting and
globbing: update the Java command's usages of $TOOL_INPUT (both in the echo and
grep pipeline inside the "command" value around mvn/gradle) and the Kotlin
command's usages of $TOOL_INPUT (the echo piped into ktlint and the grep -o
extraction) so each echo or grep reads "$TOOL_INPUT" instead of $TOOL_INPUT;
keep the rest of the commands and quoting/escaping intact.
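A quick shell illustration of why the quoting matters (the filename is made up; runs of whitespace survive only inside quotes, and unquoted expansion would also glob on `*`):

```shell
# Hypothetical tool input whose path contains a double space.
TOOL_INPUT="src/My  File.java"

# Unquoted expansion word-splits, so echo rejoins the words with a
# single space and the original path is lost.
unquoted=$(echo $TOOL_INPUT)

# Quoted expansion passes the string through intact.
quoted=$(echo "$TOOL_INPUT")
```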
skills/data-engineering-patterns/SKILL.md (1)

34-52: ⚠️ Potential issue | 🟠 Major

SQL injection risk in the idempotent load example.

The load_data function uses f-string interpolation to build SQL with untrusted table names and column names from the DataFrame (lines 39-52). While this may be intended as a simplified conceptual example, it demonstrates an unsafe pattern that could be copied into production code.

Consider either:

  1. Adding a clear warning comment that this is unsafe without input validation
  2. Showing the safe version with identifier escaping and validation
🔒 Suggested improvement with validation
 # CORRECT: Idempotent upsert pattern
-def load_data(df: pd.DataFrame, table: str, key_columns: list[str]) -> None:
+def load_data(df: pd.DataFrame, table: str, key_columns: list[str], allowed_tables: frozenset[str]) -> None:
+    # Validate table name against allowlist to prevent SQL injection
+    if table not in allowed_tables:
+        raise ValueError(f"Table {table!r} is not in allowed_tables")
+    
     staging_table = f"{table}_staging"

Or add a comment at the top:

 # CORRECT: Idempotent upsert pattern
+# WARNING: This example omits table/column validation for brevity.
+# In production, validate table names against an allowlist before use.
 def load_data(df: pd.DataFrame, table: str, key_columns: list[str]) -> None:
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@skills/data-engineering-patterns/SKILL.md` around lines 34 - 52, The SQL
construction in load_data (merge_sql, staging_table) interpolates untrusted
table/column names via f-strings and is vulnerable to SQL injection; replace
that by either (A) adding a clear top-of-file and function-level warning that
this example is unsafe for production unless identifiers are validated/escaped,
or (B) implement a safe variant: validate table and column names against an
allowlist or regex, quote identifiers using the engine/dialect identifier
preparer (or psycopg2.sql.Identifier for Postgres) when building staging_table
and merge_sql, and only use bind parameters (not f-strings) for values; update
load_data, merge_sql, and the conn.execute calls to use the safe identifier
quoting/validation approach instead of raw f-string interpolation.
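A stdlib-only sketch of option (B) — identifiers are validated before they ever reach an f-string (the table names and function names are illustrative; in production, prefer psycopg2.sql.Identifier or the dialect's identifier preparer, and bind parameters for values):

```python
import re

# Allowlist of tables this loader may touch, plus a conservative
# identifier pattern for column names.
ALLOWED_TABLES = frozenset({"events", "users"})
IDENTIFIER_RE = re.compile(r"^[A-Za-z_][A-Za-z0-9_]*$")

def safe_identifier(name: str) -> str:
    """Reject anything that is not a plain SQL identifier."""
    if not IDENTIFIER_RE.match(name):
        raise ValueError(f"unsafe SQL identifier: {name!r}")
    return name

def staging_table_for(table: str) -> str:
    """Only build the staging-table name for allowlisted tables."""
    if table not in ALLOWED_TABLES:
        raise ValueError(f"table {table!r} is not in ALLOWED_TABLES")
    return f"{table}_staging"
```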
🧹 Nitpick comments (3)
skills/database-performance/SKILL.md (1)

493-496: Prefer SCAN over KEYS for cache invalidation patterns.

Using redis.keys('products:list:*') can block Redis on large keyspaces. Use cursor-based SCAN (or tracked index keys) in this example to model production-safe behavior.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@skills/database-performance/SKILL.md` around lines 493 - 496, The code uses
redis.keys('products:list:*') which blocks on large keyspaces; replace this with
a cursor-based SCAN loop using redis.scan to iterate matching keys for pattern
'products:list:*', accumulate matches (or stream them) and call redis.del in
controlled batches (e.g., 100-1000 keys per del) to avoid large argument lists;
update references to listKeys and the redis.keys(...) call to use the new
scan-based logic and ensure the deletion uses batched redis.del calls.
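A sketch of the cursor-based replacement for redis.keys(); the client is faked with a dict so the example runs standalone, but with redis-py the same loop would use client.scan(cursor, match=..., count=...) (or scan_iter) and client.delete():

```python
import fnmatch

class FakeRedis:
    """Minimal stand-in for a Redis client, for illustration only."""

    def __init__(self, keys):
        self.store = dict.fromkeys(keys, b"1")

    def scan(self, cursor, match="*", count=100):
        keys = sorted(k for k in self.store if fnmatch.fnmatch(k, match))
        nxt = cursor + count
        return (0 if nxt >= len(keys) else nxt), keys[cursor:nxt]

    def delete(self, *keys):
        for k in keys:
            self.store.pop(k, None)
        return len(keys)

def delete_matching(client, pattern, batch_size=500):
    """Collect matching keys via SCAN, then delete in bounded batches."""
    cursor, to_delete = 0, []
    while True:
        cursor, keys = client.scan(cursor, match=pattern, count=batch_size)
        to_delete.extend(keys)
        if cursor == 0:
            break
    deleted = 0
    for i in range(0, len(to_delete), batch_size):
        deleted += client.delete(*to_delete[i : i + batch_size])
    return deleted
```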
skills/e2e-patterns/SKILL.md (1)

1202-1207: Avoid fixed sleeps in examples; they undermine the anti-flake guidance.

The doc recommends avoiding sleep, but these examples still use waitForTimeout. Prefer waiting on UI/network conditions (toBeVisible, toHaveURL, waitForResponse, etc.).

Also applies to: 1325-1325

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@skills/e2e-patterns/SKILL.md` around lines 1202 - 1207, Replace the fixed
sleeps (page.waitForTimeout) used before taking screenshots with deterministic
waits on UI/network conditions: instead of page.waitForTimeout(300) before
expect(page).toHaveScreenshot('products-tablet.png') and before the mobile
block, wait for a stable DOM or network state such as using
page.waitForLoadState('networkidle'), waiting for a specific locator to be
visible via locator.waitFor() or expect(locator).toBeVisible(), or waiting for
an important API via page.waitForResponse(); keep the page.setViewportSize call
but remove the sleeps and use these condition-based waits around
expect(page).toHaveScreenshot and after page.setViewportSize so screenshots run
only after the UI is ready.
skills/data-engineering-patterns/SKILL.md (1)

817-817: Minor: Simplify boolean comparison in Spark filter.

Line 817 compares F.col("is_valid") == True. In PySpark, when a column is already boolean, you can use it directly: F.col("is_valid").

♻️ Suggested simplification
     cleaned = (
         raw
-        .filter(F.col("is_valid") == True)
+        .filter(F.col("is_valid"))
         .dropDuplicates(["event_id"])
         .withColumn("processed_at", F.current_timestamp())
     )
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@skills/data-engineering-patterns/SKILL.md` at line 817, The filter uses an
explicit boolean comparison with F.col("is_valid") == True; change it to use the
boolean column directly by replacing the expression in the .filter call (the
F.col("is_valid") == True usage) with just F.col("is_valid") so Spark treats the
column as a boolean predicate without an unnecessary equality check.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@rules/java/testing.md`:
- Around line 67-102: The snippet is missing several imports so the example
compiles when copy-pasted; add the appropriate imports for JUnit and AssertJ
symbols used (e.g., org.junit.jupiter.api.DisplayName,
org.junit.jupiter.params.ParameterizedTest,
org.junit.jupiter.params.provider.CsvSource,
org.junit.jupiter.params.provider.MethodSource,
org.junit.jupiter.params.provider.EnumSource,
org.junit.jupiter.api.extension.ExtendWith if used), plus
java.util.stream.Stream and org.junit.jupiter.params.provider.Arguments, and the
assertion import org.assertj.core.api.Assertions.assertThat (or static import
for assertThat); also add any Spring/testing imports referenced elsewhere like
org.springframework.boot.test.context.SpringBootTest and
org.springframework.test.context.DynamicPropertyRegistry/DynamicPropertySource
where those symbols appear so classes such as EmailValidatorTest and methods
like provideOrderStatuses compile.

In `@skills/api-performance/SKILL.md`:
- Around line 196-210: The getOrSetWithLock function calls crypto.randomUUID()
but crypto is not imported; add the appropriate Node crypto import (e.g., import
crypto from 'crypto' or import { randomUUID } from 'crypto' and call
randomUUID()) at the top of the module so getOrSetWithLock can generate
lockToken without runtime failure; update any callsite in getOrSetWithLock that
references crypto.randomUUID to match the chosen import form.
- Around line 1074-1085: The current health calculation makes 'degraded'
unreachable because checks only have 'pass'/'fail'; update the logic so
'degraded' is used when there are non-passing non-failing checks (e.g., 'warn')
or when not allPassing and no anyFailing; specifically add a predicate like
anyWarning = Object.values(checks).some(c => c.status === 'warn') (or use
!allPassing && !anyFailing) and then set health.status = allPassing ? 'healthy'
: anyFailing ? 'unhealthy' : 'degraded'; finally adjust statusCode assignment to
treat 'degraded' as 200 (use health.status to derive code) and ensure variables
referenced are checks, health and statusCode.
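The three-way status described above can be sketched as a pure function (check shape and status names follow the comment; 'warn' is the assumed non-pass/non-fail state):

```javascript
// Derive overall health from individual checks. 'degraded' now covers the
// middle ground: no failures, but not everything passing (e.g. 'warn').
function summarizeHealth(checks) {
  const statuses = Object.values(checks).map((c) => c.status);
  const allPassing = statuses.every((s) => s === 'pass');
  const anyFailing = statuses.some((s) => s === 'fail');
  const status = allPassing ? 'healthy' : anyFailing ? 'unhealthy' : 'degraded';
  // Degraded still serves traffic, so it maps to 200 like healthy.
  const statusCode = status === 'unhealthy' ? 503 : 200;
  return { status, statusCode };
}
```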

In `@skills/cicd-patterns/SKILL.md`:
- Around line 114-119: The examples violate the doc's own guidance: replace
every use of the literal "aquasecurity/trivy-action@master" (and any other
actions using branch tags) with pinned action references using commit SHAs, and
replace any static AWS credential usage (the static AWS_ACCESS_KEY_ID /
AWS_SECRET_ACCESS_KEY examples) with OIDC-based authentication patterns; locate
occurrences of "aquasecurity/trivy-action@master" and the static AWS key
snippets (e.g., the block around the static AWS keys shown near lines 1091-1093)
and update them to use a SHA-pinned action ref and the repository-level OIDC
setup (request id-token permissions and use the OIDC token exchange) so all
examples follow SHA pinning and keyless OIDC auth.

In `@skills/data-engineering-patterns/SKILL.md`:
- Around line 126-190: The code block is missing imports for pandas and datetime
used in APIExtractor.extract; add "import pandas as pd" and "from datetime
import datetime" at the top of the snippet so pd.DataFrame and datetime.utcnow()
resolve correctly and update any other references to pd or datetime in
Extractor/APIExtractor if needed.
- Around line 278-332: The code calls logger.info()/logger.warning() in
run_backfill but never sets up logging; add an import for the logging module and
initialize a module-level logger (e.g., assign logger =
logging.getLogger(__name__)) near the top of the snippet so run_backfill can use
logger; update the BackfillConfig/generate_partitions/run_backfill block to
reference that logger when handling checkpointing and retry logging.
- Around line 196-272: The code uses datetime in the Watermark dataclass and
update_watermark method but never imports it; fix this by adding the missing
import (e.g., from datetime import datetime) at the top of the snippet so
references in Watermark and update_watermark resolve correctly.

In `@skills/mlops-patterns/SKILL.md`:
- Around line 798-813: pd.read_parquet does not accept a chunksize and the
current loop using pd.read_parquet(config.input_path,
chunksize=config.batch_size) will fail; replace the chunked reading with
PyArrow's ParquetFile.iter_batches to stream batches by batch_size and selected
columns (use pyarrow.parquet.ParquetFile and iter_batches with
columns=list(config.feature_columns)), convert each batch to a pandas DataFrame
before calling model.predict(features) (keep using model.predict and
chunk.assign to add prediction, model_path and scored_at), append to results and
then concat into output_df and write with
output_df.to_parquet(config.output_path, index=False).

In `@skills/rust-patterns/SKILL.md`:
- Around line 125-128: The headings under Examples are at the same level (##)
which prevents the patterns from being nested; change the subsequent "## Error
Handling Patterns" (and any other pattern blocks intended as examples) to "###
Error Handling Patterns" so they become subsections of the existing "##
Examples" section, and ensure each pattern block uses "###" for its title while
leaving "## When to Use" and "## How It Works" as top-level section headings
elsewhere in SKILL.md.

In `@skills/terraform-patterns/SKILL.md`:
- Around line 147-150: The "## Examples" heading should be the parent section;
update subordinate example headings (currently using "##") to "###" so they
become children under the "## Examples" section; locate the headings named "##
Examples" and any other same-level example topic headings in SKILL.md (e.g.,
other "## ..." headings that are actual example entries) and change them to
"###" and adjust any adjacent content indentation if needed to ensure the
Examples section contains When to Use / How It Works / individual examples as
subheadings.
- Around line 858-861: The git diff call that detects changed environments uses
unavailable variables for PR events (git diff --name-only ${{
github.event.before }} HEAD); update the detection to use the pull request SHAs
instead by replacing ${{ github.event.before }} with ${{
github.event.pull_request.base.sha }} and HEAD with ${{
github.event.pull_request.head.sha }} (keep the surrounding logic that builds
the environments variable and the for loop over dev staging production so
behavior remains the same).

---

Duplicate comments:
In `@rules/java/hooks.md`:
- Around line 32-37: Wrap the TOOL_INPUT variable in double quotes in both hook
command strings to prevent path-splitting and globbing: update the Java
command's usages of $TOOL_INPUT (both in the echo and grep pipeline inside the
"command" value around mvn/gradle) and the Kotlin command's usages of
$TOOL_INPUT (the echo piped into ktlint and the grep -o extraction) so each echo
or grep reads "$TOOL_INPUT" instead of $TOOL_INPUT; keep the rest of the
commands and quoting/escaping intact.

In `@skills/data-engineering-patterns/SKILL.md`:
- Around line 34-52: The SQL construction in load_data (merge_sql,
staging_table) interpolates untrusted table/column names via f-strings and is
vulnerable to SQL injection; replace that by either (A) adding a clear
top-of-file and function-level warning that this example is unsafe for
production unless identifiers are validated/escaped, or (B) implement a safe
variant: validate table and column names against an allowlist or regex, quote
identifiers using the engine/dialect identifier preparer (or
psycopg2.sql.Identifier for Postgres) when building staging_table and merge_sql,
and only use bind parameters (not f-strings) for values; update load_data,
merge_sql, and the conn.execute calls to use the safe identifier
quoting/validation approach instead of raw f-string interpolation.
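As a sketch of variant (B), identifier validation plus quoting can be done with a small helper; the allowlist regex and function names below are illustrative, not part of the original example:

```python
# Validate-then-quote approach for SQL identifiers: reject anything that is
# not a plain identifier, then double-quote it before interpolation. Values
# must still go through bind parameters, never f-strings.
import re

_IDENTIFIER_RE = re.compile(r"^[A-Za-z_][A-Za-z0-9_]*$")


def quote_identifier(name: str) -> str:
    """Raise on unsafe names, otherwise return a double-quoted identifier."""
    if not _IDENTIFIER_RE.match(name):
        raise ValueError(f"unsafe SQL identifier: {name!r}")
    return f'"{name}"'


def build_merge_sql(target_table: str, staging_table: str,
                    columns: list[str], key: str) -> str:
    """Build an upsert statement from validated, quoted identifiers only."""
    cols = ", ".join(quote_identifier(c) for c in columns)
    updates = ", ".join(
        f"{quote_identifier(c)} = EXCLUDED.{quote_identifier(c)}" for c in columns
    )
    return (
        f"INSERT INTO {quote_identifier(target_table)} ({cols}) "
        f"SELECT {cols} FROM {quote_identifier(staging_table)} "
        f"ON CONFLICT ({quote_identifier(key)}) DO UPDATE SET {updates}"
    )
```

For Postgres specifically, `psycopg2.sql.Identifier` (or the engine's identifier preparer in SQLAlchemy) does the quoting step for you; the regex allowlist remains a useful belt-and-braces check.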

In `@skills/kotlin-patterns/SKILL.md`:
- Around line 114-117: The Examples section currently has sibling-level headings
(e.g., "## Coroutine Patterns") so move all example topics under the main "##
Examples" heading by demoting their headings to subheadings (change "##
Coroutine Patterns" and any other example-level "##" headings to "###" or
appropriate subheading level), ensuring "When to Use" and "How It Works" remain
top-level sections and that every example topic is nested as a subsection of the
"## Examples" heading.

In `@skills/kubernetes-patterns/SKILL.md`:
- Around line 121-124: The "Pod Design Patterns" heading is currently a level-2
peer of "## Examples" instead of being nested inside it; change the "## Pod
Design Patterns" heading to a level-3 heading ("### Pod Design Patterns") so it
and any concrete pattern subsections are nested under the existing "## Examples"
section, and similarly demote any other example-level "##" headings under
Examples to "###" to comply with the Skills Markdown structure.

In `@skills/rust-testing/SKILL.md`:
- Around line 23-26: The "## TDD Workflow for Rust" heading (and any
test-related headings) must be nested under the existing "## Examples" section;
change those level-2 headings to level-3 (e.g., rename "## TDD Workflow for
Rust" to "### TDD Workflow for Rust") and move any test topic content so they
appear as subsections under "## Examples" to satisfy the required When to Use /
How It Works / Examples structure.

---

Nitpick comments:
In `@skills/data-engineering-patterns/SKILL.md`:
- Line 817: The filter uses an explicit boolean comparison with
F.col("is_valid") == True; change it to use the boolean column directly by
replacing the expression in the .filter call (the F.col("is_valid") == True
usage) with just F.col("is_valid") so Spark treats the column as a boolean
predicate without an unnecessary equality check.

In `@skills/database-performance/SKILL.md`:
- Around line 493-496: The code uses redis.keys('products:list:*') which blocks
on large keyspaces; replace this with a cursor-based SCAN loop using redis.scan
to iterate matching keys for pattern 'products:list:*', accumulate matches (or
stream them) and call redis.del in controlled batches (e.g., 100-1000 keys per
del) to avoid large argument lists; update references to listKeys and the
redis.keys(...) call to use the new scan-based logic and ensure the deletion
uses batched redis.del calls.
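A minimal sketch of that scan-and-batch logic, assuming a client exposing redis-py's `scan_iter`/`delete` interface:

```python
# Cursor-based replacement for redis.keys(): scan_iter streams matching keys
# without blocking the server, and deletions go out in bounded batches so no
# single DEL command gets an oversized argument list.
from itertools import islice


def delete_by_pattern(client, pattern: str, batch_size: int = 500) -> int:
    """Delete all keys matching `pattern` in batches; return keys deleted."""
    deleted = 0
    it = client.scan_iter(match=pattern, count=batch_size)
    while True:
        batch = list(islice(it, batch_size))
        if not batch:
            break
        deleted += client.delete(*batch)  # one DEL per bounded batch
    return deleted
```

SCAN holds the server for only a short slice per call, so this stays safe on large keyspaces where `KEYS` would block.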

In `@skills/e2e-patterns/SKILL.md`:
- Around line 1202-1207: Replace the fixed sleeps (page.waitForTimeout) used
before taking screenshots with deterministic waits on UI/network conditions:
instead of page.waitForTimeout(300) before
expect(page).toHaveScreenshot('products-tablet.png') and before the mobile
block, wait for a stable DOM or network state such as using
page.waitForLoadState('networkidle'), waiting for a specific locator to be
visible via locator.waitFor() or expect(locator).toBeVisible(), or waiting for
an important API via page.waitForResponse(); keep the page.setViewportSize call
but remove the sleeps and use these condition-based waits around
expect(page).toHaveScreenshot and after page.setViewportSize so screenshots run
only after the UI is ready.

ℹ️ Review info

Configuration used: defaults

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 63b3c1f and 451a087.

📒 Files selected for processing (25)
  • contexts/refactor.md
  • rules/java/coding-style.md
  • rules/java/hooks.md
  • rules/java/patterns.md
  • rules/java/security.md
  • rules/java/testing.md
  • rules/rust/coding-style.md
  • rules/rust/hooks.md
  • rules/rust/patterns.md
  • rules/rust/security.md
  • rules/rust/testing.md
  • skills/api-performance/SKILL.md
  • skills/cicd-patterns/SKILL.md
  • skills/data-engineering-patterns/SKILL.md
  • skills/database-performance/SKILL.md
  • skills/e2e-patterns/SKILL.md
  • skills/flutter-patterns/SKILL.md
  • skills/frontend-performance/SKILL.md
  • skills/kotlin-patterns/SKILL.md
  • skills/kubernetes-patterns/SKILL.md
  • skills/mlops-patterns/SKILL.md
  • skills/react-native-patterns/SKILL.md
  • skills/rust-patterns/SKILL.md
  • skills/rust-testing/SKILL.md
  • skills/terraform-patterns/SKILL.md
🚧 Files skipped from review as they are similar to previous changes (7)
  • rules/rust/security.md
  • skills/react-native-patterns/SKILL.md
  • rules/rust/testing.md
  • contexts/refactor.md
  • rules/java/security.md
  • skills/frontend-performance/SKILL.md
  • rules/rust/patterns.md

Comment on lines +196 to +210
export async function getOrSetWithLock<T>(
  key: string,
  fetchFn: () => Promise<T>,
  ttlSeconds: number,
  maxRetries: number = 5
): Promise<T> {
  for (let attempt = 0; attempt < maxRetries; attempt++) {
    const cached = await redis.get(key)
    if (cached !== null) {
      return (JSON.parse(cached) as CacheEntry<T>).data
    }

    const lockKey = `lock:${key}`
    const lockToken = crypto.randomUUID()
    const lockAcquired = await redis.set(lockKey, lockToken, 'EX', 10, 'NX')

⚠️ Potential issue | 🔴 Critical

crypto is missing in the lock example and causes a runtime failure.

Line 209 uses crypto.randomUUID() but this snippet never imports crypto.

Minimal fix
+import crypto from 'node:crypto'
 import Redis from 'ioredis'
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@skills/api-performance/SKILL.md` around lines 196 - 210, The getOrSetWithLock
function calls crypto.randomUUID() but crypto is not imported; add the
appropriate Node crypto import (e.g., import crypto from 'crypto' or import {
randomUUID } from 'crypto' and call randomUUID()) at the top of the module so
getOrSetWithLock can generate lockToken without runtime failure; update any
callsite in getOrSetWithLock that references crypto.randomUUID to match the
chosen import form.

- Restructure all 14 skills: demote headings after ## Examples to ###
  so pattern sections are properly nested under Examples
- Add missing imports: crypto in api-performance, pandas/datetime/logging
  in data-engineering, pyarrow in mlops, Spring/JUnit in java testing
- Fix unreachable 'degraded' health status (count-based logic)
- Pin trivy-action@0.29.0 (was @master), add OIDC preference comment
- Fix terraform PR detection: github.event.before → PR base/head SHAs
- Quote $TOOL_INPUT in java hooks to prevent path splitting
- Replace redis.keys() with cursor-based SCAN in db-performance
- Replace pd.read_parquet chunksize with PyArrow iter_batches
- Replace waitForTimeout with waitForLoadState in e2e-patterns
- Add SQL injection warning to load_data example
- Simplify PySpark boolean filter
@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 9

🧹 Nitpick comments (2)
skills/cicd-patterns/SKILL.md (2)

1088-1095: Consider using WRONG/CORRECT format for the anti-pattern example.

The NOTE at lines 1088-1090 correctly directs users to prefer OIDC authentication, but the static credentials example that follows might confuse readers about which pattern to use. The document's "Fast Feedback" section (lines 29-68) uses a clearer "WRONG/CORRECT" format for anti-patterns.

Consider reformatting this section to explicitly label the static credentials as "WRONG" and reference the OIDC section as "CORRECT" for improved clarity.

Optional: Reformat as explicit anti-pattern comparison
-    # NOTE: Prefer OIDC authentication over static AWS credentials.
-    # See the "OIDC Authentication" section in this file for the recommended pattern
-    # using aws-actions/configure-aws-credentials with role-to-assume.
-    - uses: docker/login-action@v3
-      with:
-        registry: ${{ secrets.AWS_ACCOUNT_ID }}.dkr.ecr.us-east-1.amazonaws.com
-        username: ${{ secrets.AWS_ACCESS_KEY_ID }}
-        password: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
+    # WRONG: Static AWS credentials (use OIDC instead)
+    # - uses: docker/login-action@v3
+    #   with:
+    #     registry: ${{ secrets.AWS_ACCOUNT_ID }}.dkr.ecr.us-east-1.amazonaws.com
+    #     username: ${{ secrets.AWS_ACCESS_KEY_ID }}
+    #     password: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
+    #
+    # CORRECT: Use OIDC authentication (see "OIDC Authentication" section, lines 701-735)
+    # First configure AWS credentials with OIDC, then login to ECR without static keys
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@skills/cicd-patterns/SKILL.md` around lines 1088 - 1095, Reformat the static
AWS credentials example (the docker/login-action@v3 block) into an explicit
WRONG/CORRECT anti-pattern pair: mark the current snippet that uses username:
${{ secrets.AWS_ACCESS_KEY_ID }} and password: ${{ secrets.AWS_SECRET_ACCESS_KEY
}} as "WRONG" and add a "CORRECT" pointer that references the existing OIDC
Authentication section (the NOTE that recommends
aws-actions/configure-aws-credentials with role-to-assume). Keep the NOTE text
but add clear labels and a short one-line CORRECT hint showing OIDC as the
recommended pattern to match the Fast Feedback WRONG/CORRECT style used earlier.

75-119: Inconsistent action versioning: guidance recommends SHA pinning but examples use version tags.

Lines 75-76 state "Pin all action versions to SHA for reproducibility and security" and demonstrate SHA-pinned actions (lines 77-78, 87). However, the Security Gates example at line 114 uses aquasecurity/trivy-action@0.29.0 (a version tag, not a SHA).

For consistency with the stated best practice, either:

  1. Update line 114 (and similar instances throughout the document) to use SHA pinning, or
  2. Revise the guidance to acknowledge that version tags are acceptable alternatives
Option 1: Update to SHA pinning for consistency
     - name: Container scanning
-      uses: aquasecurity/trivy-action@0.29.0
+      uses: aquasecurity/trivy-action@6e7b7d1fd3e4fef0c5fa8cce1229c54b2c9bd0d8  # v0.29.0
       with:
         image-ref: '${{ env.IMAGE_NAME }}:${{ github.sha }}'
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@skills/cicd-patterns/SKILL.md` around lines 75 - 119, The Security Gates
example is inconsistent with the guidance to "Pin all action versions to SHA":
replace the version-tagged usage of aquasecurity/trivy-action@0.29.0 in the
"Security Gates" example with its corresponding commit SHA (the same format used
for actions/checkout and actions/setup-node earlier) so all action usages in
SKILL.md consistently use SHA pinning; update any other version-tagged actions
in the document (e.g., trufflesecurity/trufflehog@v3.88.0,
github/codeql-action/analyze@v3) to their SHA equivalents as well to match the
stated best practice.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@rules/java/hooks.md`:
- Line 32: The hook command currently uses "mvn ... || gradle ..." which masks
real Maven failures and incorrectly falls back; change the logic in the command
string to explicitly detect the build tool by checking for the presence of
pom.xml (run mvn spotless:apply) or build.gradle / build.gradle.kts (run gradle
spotlessApply) in the repository root (use git rev-parse --show-toplevel to
locate the repo root) and run only the appropriate formatter; do not rely on ||
fallback and avoid suppressing stderr for the actual build invocation so genuine
errors surface.
- Line 47: Update the PreToolUse Bash hook's "command" regex so it uses POSIX
ERE-compatible syntax and doesn't treat "./gradlew" as any-character match: in
the "command" string that currently contains
'(mvn|gradle|./gradlew).*?(install|build|test|verify|package)', replace the
non-greedy '.*?' with '.*' and escape the dot in './gradlew' (e.g., use
'\./gradlew') so the pattern becomes something like
'(mvn|gradle|\.\/gradlew).*(install|build|test|verify|package)' to correctly
match intended tool names.
- Line 36: The command string that builds and runs ktlint is using an unquoted
command substitution $(echo "$TOOL_INPUT" | grep -o '[^ ]*\.kt') which causes
word-splitting when TOOL_INPUT contains multiple files; update the "command" to
either quote the substitution results or pipe the grep output into xargs so
ktlint receives each filename safely (e.g., use quoted expansion of the
substitution or use grep -o ... | xargs --no-run-if-empty ktlint --format) and
ensure TOOL_INPUT and the grep output are properly quoted to avoid splitting and
globbing issues.

In `@skills/database-performance/SKILL.md`:
- Around line 635-636: In the TypeScript code block where the SQL snippet
appears, replace SQL-style comments that start with `--` with
TypeScript/JavaScript single-line comments `//`; specifically change `--
Supporting index for cursor-based pagination` and the commented `-- CREATE INDEX
idx_orders_user_cursor ON orders (user_id, created_at DESC, id DESC);` to use
`//` so the `CREATE INDEX idx_orders_user_cursor ON orders (user_id, created_at
DESC, id DESC);` line is not left as an invalid SQL-style comment inside a TS
snippet.

In `@skills/mlops-patterns/SKILL.md`:
- Around line 799-815: When no batches are read, pd.concat(results, ...) will
raise ValueError; update the code around parquet_file.iter_batches and results
to handle an empty results list: after the loop check if results is empty and if
so create an empty DataFrame with the expected columns (feature columns from
config.feature_columns plus "prediction", "model_path", and "scored_at") or
otherwise skip concat, then proceed to write that DataFrame to
config.output_path using output_df.to_parquet; reference the variables/functions
parquet_file.iter_batches, results, model.predict, config.feature_columns, and
output_df.to_parquet when making the change.
- Around line 987-1021: The MAE/RMSE calculation in get_metrics currently slices
self._predictions[:len(self._actuals)], which can mismatch because
record_prediction only appends to self._actuals for some requests; fix by
tracking paired prediction/actuals together: update record_prediction to append
the prediction to a new paired list (e.g., self._paired_predictions) whenever
actual is provided (alongside appending to self._actuals), and then change
get_metrics to compute mae/rmse from np.array(self._actuals) and
np.array(self._paired_predictions) instead of slicing self._predictions;
reference record_prediction, get_metrics, self._actuals, self._predictions, and
introduce self._paired_predictions.
- Around line 236-242: The return type annotation on init_wandb_experiment is
incorrect (uses lowercase wandb.run); update the signature to use the public Run
type—either change the annotation to wandb.Run or import Run from wandb and
annotate -> Run—so the function returning wandb.init(...) is typed correctly;
adjust any imports accordingly.
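The paired-buffer fix for `record_prediction`/`get_metrics` described above can be sketched as follows; the class body is illustrative, and already uses a bounded deque so the history cannot grow without limit:

```python
# Track each prediction that arrives with an actual as a pair, so MAE/RMSE
# are computed from aligned values instead of slicing self._predictions.
import math
from collections import deque
from typing import Optional


class PredictionMonitor:
    def __init__(self, window_size: int = 1000) -> None:
        self._predictions: deque = deque(maxlen=window_size)
        # pairs are bounded by the same window_size as the other buffers
        self._paired: deque = deque(maxlen=window_size)

    def record_prediction(self, prediction: float,
                          actual: Optional[float] = None) -> None:
        self._predictions.append(prediction)
        if actual is not None:
            self._paired.append((prediction, actual))  # keep the pair together

    def get_metrics(self) -> dict:
        if not self._paired:
            return {"mae": 0.0, "rmse": 0.0}
        errors = [p - a for p, a in self._paired]
        mae = sum(abs(e) for e in errors) / len(errors)
        rmse = math.sqrt(sum(e * e for e in errors) / len(errors))
        return {"mae": mae, "rmse": rmse}
```

Because `record_prediction` appends the pair only when an actual arrives, `get_metrics` never has to slice `self._predictions` down to a possibly mismatched length.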

In `@skills/terraform-patterns/SKILL.md`:
- Around line 835-861: The workflow assumes pull-request SHAs; change the
detect-changes step (id: changes) to set BASE_SHA and HEAD_SHA conditionally
based on github.event_name: when github.event_name == 'pull_request' use
github.event.pull_request.base.sha and github.event.pull_request.head.sha,
otherwise (push) use github.event.before as BASE_SHA and github.sha as HEAD_SHA;
then use git diff --name-only $BASE_SHA $HEAD_SHA (the current git diff
invocation) so the job works for both pull_request and push events without
referencing undefined pull_request fields.
- Around line 979-987: The script currently treats any non-2 exit code
(including 1) as "no drift" and also suppresses failures with continue-on-error;
change the logic so terraform plan -detailed-exitcode is handled explicitly:
capture exit_code=${PIPESTATUS[0]}, if exit_code == 2 write
"drift_detected=true" to GITHUB_OUTPUT, elif exit_code == 0 write
"drift_detected=false" to GITHUB_OUTPUT, else print an error with the exit code
and exit with that code (do not mark drift as false); also remove or set
continue-on-error to false so real terraform errors (exit code 1 or others) fail
the job rather than being silenced. Ensure you reference the terraform plan
command, exit_code variable, and GITHUB_OUTPUT when making these changes.

---

Nitpick comments:
In `@skills/cicd-patterns/SKILL.md`:
- Around line 1088-1095: Reformat the static AWS credentials example (the
docker/login-action@v3 block) into an explicit WRONG/CORRECT anti-pattern pair:
mark the current snippet that uses username: ${{ secrets.AWS_ACCESS_KEY_ID }}
and password: ${{ secrets.AWS_SECRET_ACCESS_KEY }} as "WRONG" and add a
"CORRECT" pointer that references the existing OIDC Authentication section (the
NOTE that recommends aws-actions/configure-aws-credentials with role-to-assume).
Keep the NOTE text but add clear labels and a short one-line CORRECT hint
showing OIDC as the recommended pattern to match the Fast Feedback WRONG/CORRECT
style used earlier.
- Around line 75-119: The Security Gates example is inconsistent with the
guidance to "Pin all action versions to SHA": replace the version-tagged usage
of aquasecurity/trivy-action@0.29.0 in the "Security Gates" example with its
corresponding commit SHA (the same format used for actions/checkout and
actions/setup-node earlier) so all action usages in SKILL.md consistently use
SHA pinning; update any other version-tagged actions in the document (e.g.,
trufflesecurity/trufflehog@v3.88.0, github/codeql-action/analyze@v3) to their
SHA equivalents as well to match the stated best practice.


📥 Commits

Reviewing files that changed from the base of the PR and between 451a087 and 299e756.

📒 Files selected for processing (16)
  • rules/java/hooks.md
  • rules/java/testing.md
  • skills/api-performance/SKILL.md
  • skills/cicd-patterns/SKILL.md
  • skills/data-engineering-patterns/SKILL.md
  • skills/database-performance/SKILL.md
  • skills/e2e-patterns/SKILL.md
  • skills/flutter-patterns/SKILL.md
  • skills/frontend-performance/SKILL.md
  • skills/kotlin-patterns/SKILL.md
  • skills/kubernetes-patterns/SKILL.md
  • skills/mlops-patterns/SKILL.md
  • skills/react-native-patterns/SKILL.md
  • skills/rust-patterns/SKILL.md
  • skills/rust-testing/SKILL.md
  • skills/terraform-patterns/SKILL.md
✅ Files skipped from review due to trivial changes (1)
  • skills/api-performance/SKILL.md
🚧 Files skipped from review as they are similar to previous changes (6)
  • rules/java/testing.md
  • skills/react-native-patterns/SKILL.md
  • skills/kubernetes-patterns/SKILL.md
  • skills/data-engineering-patterns/SKILL.md
  • skills/flutter-patterns/SKILL.md
  • skills/rust-patterns/SKILL.md

Comment on lines 799 to 815
parquet_file = pq.ParquetFile(config.input_path)
for batch in parquet_file.iter_batches(batch_size=config.batch_size, columns=list(config.feature_columns)):
    chunk = batch.to_pandas()
    predictions = model.predict(chunk)

    output_chunk = chunk.assign(
        prediction=predictions,
        model_path=config.model_path,
        scored_at=pd.Timestamp.utcnow(),
    )
    results.append(output_chunk)
    total_rows += len(chunk)
    logger.info("Scored %d rows (total: %d)", len(chunk), total_rows)

output_df = pd.concat(results, ignore_index=True)
output_df.to_parquet(config.output_path, index=False)


⚠️ Potential issue | 🟠 Major

Handle empty batch input before concatenation.

If no batches are read, pd.concat(results, ...) at Line 813 raises ValueError. This breaks valid empty-input runs.

Suggested fix
-    output_df = pd.concat(results, ignore_index=True)
+    if not results:
+        empty_df = pd.DataFrame(columns=[*config.feature_columns, "prediction", "model_path", "scored_at"])
+        empty_df.to_parquet(config.output_path, index=False)
+        return {
+            "total_rows": 0,
+            "output_path": config.output_path,
+            "prediction_distribution": {"mean": 0.0, "std": 0.0, "min": 0.0, "max": 0.0},
+        }
+
+    output_df = pd.concat(results, ignore_index=True)
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@skills/mlops-patterns/SKILL.md` around lines 799 - 815, When no batches are
read, pd.concat(results, ...) will raise ValueError; update the code around
parquet_file.iter_batches and results to handle an empty results list: after the
loop check if results is empty and if so create an empty DataFrame with the
expected columns (feature columns from config.feature_columns plus "prediction",
"model_path", and "scored_at") or otherwise skip concat, then proceed to write
that DataFrame to config.output_path using output_df.to_parquet; reference the
variables/functions parquet_file.iter_batches, results, model.predict,
config.feature_columns, and output_df.to_parquet when making the change.

- rules/java/hooks.md: use explicit pom.xml/build.gradle detection instead
  of mvn||gradle fallback, pipe ktlint args through xargs, fix POSIX regex
- skills/database-performance: change SQL -- comments to // in TS code block
- skills/mlops-patterns: fix wandb.Run return type, guard empty batches in
  batch inference, use paired prediction/actual list for accurate metrics
- skills/terraform-patterns: handle both PR and push events for SHA detection,
  fix terraform plan exit code handling (exit 1 = error, not no-drift)
- skills/cicd-patterns: replace static AWS creds with OIDC pattern,
  SHA-pin actions in security gates section
@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 2

♻️ Duplicate comments (1)
skills/cicd-patterns/SKILL.md (1)

75-79: ⚠️ Potential issue | 🟠 Major

Keep action pinning consistent with your own SHA-pinning rule.

After introducing SHA pinning guidance, several later examples revert to mutable @v* tags. Please keep examples consistent with the rule to avoid supply-chain drift.

Also applies to: 166-169, 263-266, 327-330, 373-377

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@skills/cicd-patterns/SKILL.md` around lines 75 - 79, Several workflow
examples revert to mutable `@v`* tags (e.g., the lines with "uses:
actions/checkout@..." and "uses: actions/setup-node@...") after we introduced
SHA pinning guidance; update every example that uses a version tag to instead
pin to the corresponding commit SHA so examples are consistent with the
SHA-pinning rule (replace occurrences of actions/checkout@v* and
actions/setup-node@v* and any other "uses: <owner>/<repo>@v*" instances with
their verified commit SHAs in the same examples referenced).
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@skills/mlops-patterns/SKILL.md`:
- Around line 985-989: The _paired history is an unbounded list causing
potential memory growth; change its type to a bounded deque using the same
window_size as the other buffers (replace self._paired: list[tuple[float,
float]] = [] with a deque(maxlen=window_size) of tuple[float,float]) in the
class __init__ (and apply the same change to the other occurrence around the
_paired initialization at the later block noted), ensuring all history buffers
use window_size for consistent, bounded memory usage.

In `@skills/terraform-patterns/SKILL.md`:
- Around line 897-902: The GitHub Actions job "Terraform Plan" (id: plan) is
currently using continue-on-error: true which masks terraform plan failures;
change this by removing continue-on-error or setting continue-on-error: false so
the job fails on real terraform plan errors and surfaces them on the PR; update
the workflow step referencing run: terraform plan -no-color -out=tfplan and
working-directory: infrastructure/environments/${{ matrix.environment }}
accordingly so failures are not suppressed.

---

Duplicate comments:
In `@skills/cicd-patterns/SKILL.md`:
- Around line 75-79: Several workflow examples revert to mutable `@v`* tags (e.g.,
the lines with "uses: actions/checkout@..." and "uses: actions/setup-node@...")
after we introduced SHA pinning guidance; update every example that uses a
version tag to instead pin to the corresponding commit SHA so examples are
consistent with the SHA-pinning rule (replace occurrences of actions/checkout@v*
and actions/setup-node@v* and any other "uses: <owner>/<repo>@v*" instances with
their verified commit SHAs in the same examples referenced).


📥 Commits

Reviewing files that changed from the base of the PR and between 299e756 and 5994135.

📒 Files selected for processing (5)
  • rules/java/hooks.md
  • skills/cicd-patterns/SKILL.md
  • skills/database-performance/SKILL.md
  • skills/mlops-patterns/SKILL.md
  • skills/terraform-patterns/SKILL.md
🚧 Files skipped from review as they are similar to previous changes (1)
  • rules/java/hooks.md

Comment on lines 985 to 989
def __init__(self, window_size: int = 1000) -> None:
    self._predictions: deque = deque(maxlen=window_size)
    self._paired: list[tuple[float, float]] = []
    self._latencies: deque = deque(maxlen=window_size)
    self._errors: deque = deque(maxlen=window_size)

⚠️ Potential issue | 🟠 Major

Bound _paired history to avoid unbounded memory growth.

_paired is currently an unbounded list while all other buffers use window_size. Over time this can grow without limit.

Suggested fix
 class PredictionMonitor:
     def __init__(self, window_size: int = 1000) -> None:
         self._predictions: deque = deque(maxlen=window_size)
-        self._paired: list[tuple[float, float]] = []
+        self._paired: deque[tuple[float, float]] = deque(maxlen=window_size)
         self._latencies: deque = deque(maxlen=window_size)
         self._errors: deque = deque(maxlen=window_size)

Also applies to: 999-1001

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@skills/mlops-patterns/SKILL.md` around lines 985 - 989, The _paired history
is an unbounded list causing potential memory growth; change its type to a
bounded deque using the same window_size as the other buffers (replace
self._paired: list[tuple[float, float]] = [] with a deque(maxlen=window_size) of
tuple[float,float]) in the class __init__ (and apply the same change to the
other occurrence around the _paired initialization at the later block noted),
ensuring all history buffers use window_size for consistent, bounded memory
usage.

- mlops-patterns: bound _paired to deque(maxlen=window_size) to prevent
  unbounded memory growth
- terraform-patterns: remove continue-on-error from terraform plan step
  so failures surface properly on PRs
- cicd-patterns: add note that examples use @v* tags for readability
  and should be SHA-pinned in production
