Role-Model Checklist: GitHub File Check with Rubrics (ChatGPT)
Test a file from your monolith or module and give it a score.
1. Role-Model Source File Checklist (Yes / No)
You can literally print this and go line by line.
A. File Structure
- Does the file have a clear top-to-bottom order (for example: imports → constants → types → internal functions → exported functions)?
- Is everything in the file clearly part of a single module or concern?
- Is the file short enough that you can scroll through it without feeling lost (for example, roughly one screenful or a few screens, not hundreds of lines of unrelated code)?
- Are there no obvious “god functions” doing too many different jobs?
B. Naming
- Do function names describe their purpose, not their implementation details?
- Do variable names reflect domain concepts instead of single letters (except in tiny scopes like loop counters)?
- Is naming consistent with the rest of the project (same terms for same concepts)?
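A small before/after sketch of the naming points above, using an invented invoice example (the names and dictionary keys are hypothetical):

```python
# Implementation-flavored, cryptic names — hard to review:
def proc(d, f):
    return [x for x in d if x["amount"] > f]

# Domain-flavored names that say *what*, not *how*:
def invoices_above(invoices, minimum_amount):
    """Return the invoices whose amount exceeds minimum_amount."""
    return [inv for inv in invoices if inv["amount"] > minimum_amount]
```

Both functions do the same thing; only the second one can be read aloud in a code review without explanation.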
C. Interfaces and Behavior
- For each exported function or class, can you describe in one short sentence what it does?
- Do exported functions always return the same kind of result for the same kind of input (no random type changes, no “sometimes null, sometimes list, sometimes error string” surprises)?
- Is error handling consistent (same style of return values, exceptions, or result objects)?
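One way to get the predictability described above is a uniform result object, so every call site handles success and failure the same way. A minimal sketch (the `Result` shape and `parse_port` are hypothetical, not a standard API):

```python
from dataclasses import dataclass
from typing import Any

@dataclass
class Result:
    ok: bool
    value: Any = None
    error: str = ""

def parse_port(text: str) -> Result:
    """Always returns a Result — never sometimes-int, sometimes-None,
    sometimes-raise."""
    try:
        port = int(text)
    except ValueError:
        return Result(ok=False, error=f"not a number: {text!r}")
    if not 0 < port < 65536:
        return Result(ok=False, error=f"out of range: {port}")
    return Result(ok=True, value=port)
```

Callers can then write `if result.ok:` everywhere instead of guessing whether a given function returns `None`, raises, or hands back an error string.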
D. Internal Discipline (Cleanliness)
- Are there no unused variables, dead code blocks, or commented-out “junk” left in the file?
- Is code duplication minimized (no copy-pasted logic sprinkled around)?
- Is each function as small as reasonably possible while staying readable?
E. Documentation and Comments
- Does the file have a short header comment explaining its purpose in the project?
- Do non-obvious sections have comments explaining why they exist or why the logic is written that way?
- Are comments up to date (no clearly wrong or misleading comments)?
F. Tests and Examples
- Are there tests directly related to this file’s exported functions (unit tests, integration tests, etc.)?
- Do the tests cover normal usage and at least a few edge cases or failure paths?
- Is there at least one example of typical usage (in tests, documentation, or comments)?
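A test file that satisfies all three points above might look like this sketch (the `clamp` function is an invented stand-in for one of your exports):

```python
import unittest

def clamp(value, low, high):
    """Clamp value into the inclusive range [low, high]."""
    return max(low, min(value, high))

class TestClamp(unittest.TestCase):
    def test_typical_usage(self):
        # Doubles as the "example of typical usage" the checklist asks for.
        self.assertEqual(clamp(5, 0, 10), 5)

    def test_values_outside_the_range(self):
        # Edge/failure paths: inputs below and above the range.
        self.assertEqual(clamp(-3, 0, 10), 0)
        self.assertEqual(clamp(99, 0, 10), 10)

    def test_boundary_values(self):
        self.assertEqual(clamp(0, 0, 10), 0)
        self.assertEqual(clamp(10, 0, 10), 10)
```

Well-named test methods like these also double as behavior documentation for the next reader.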
G. Simplicity and Stability
- Does the file avoid unnecessary abstractions (no extra classes or layers “just in case”)?
- Has the file’s public interface (exports) remained mostly stable across recent changes (no constant breakage for users of the module)?
- Is performance reasonable and not obviously wasteful for its purpose?
H. Teachable Shape
- Could a newcomer reasonably copy this file’s layout and style as a template for a new module?
- If you left the project, would you feel confident that others could maintain this file easily?
You can do a quick pass: count how many “Yes” answers you get. Then use the detailed rubric below for a more systematic score.
2. Evaluation Rubric (Scoring Existing Files)
Rate each dimension from 0 to 3:
- 0 = Poor
- 1 = Weak
- 2 = Good
- 3 = Role-model
Then sum to get a total score.
Dimension 1: Structure and Cohesion (0–3)
0: File is chaotic; unrelated concerns mixed; no clear order.
1: Some structure but inconsistent; several unrelated responsibilities.
2: Mostly clean order and single main concern, with minor mixing.
3: Very clear, intentional order; file feels like one coherent module.
Dimension 2: Naming and Domain Clarity (0–3)
0: Cryptic names; single letters; inconsistent terminology.
1: Mixed quality; some good names but several confusing or misleading ones.
2: Generally clear names with domain meaning, minor inconsistencies.
3: Names are consistently meaningful, domain-aligned, and easy to understand.
Dimension 3: Interface Design and Predictability (0–3)
0: Exported functions or classes behave unpredictably; inconsistent return types or side effects.
1: Some exports are predictable, but error handling and return types vary a lot.
2: Mostly predictable behavior; consistent patterns with small exceptions.
3: Clean, stable interface; clear contract; consistent types and error handling.
Dimension 4: Internal Discipline and Cleanliness (0–3)
0: Many unused variables, dead code, commented-out blocks, and duplication.
1: Some cleanup done, but noticeable junk remains and duplication is common.
2: Fairly clean; small patches of duplication or forgotten leftovers.
3: Very disciplined; no junk; duplication explicitly refactored; everything present has a clear reason.
Dimension 5: Documentation and Comments (0–3)
0: No useful comments; or comments are misleading or out of date.
1: Some comments, but they mostly repeat what the code says or are sparse and uneven.
2: Good comments on tricky parts and a short explanation of the file’s purpose.
3: Excellent minimal commentary focused on intent and assumptions; the file “explains itself” with just enough text.
Dimension 6: Testing and Example Coverage (0–3)
0: No tests related to this file; behavior unverified.
1: Some basic tests exist but miss core edge cases or main scenarios.
2: Solid tests for main use cases and some edge cases.
3: Comprehensive, well-named tests that document behavior; clear examples of usage and failure scenarios.
Dimension 7: Simplicity and Absence of Over-Engineering (0–3)
0: Overly complex; unnecessary abstractions; hard to follow.
1: Mixed; some simple parts, but several “clever” areas that reduce readability.
2: Mostly simple, with a few rough or complex patches.
3: As simple as it can be while still being robust; minimal abstractions, maximum clarity.
Dimension 8: Stability Over Time (0–3)
0: File changes constantly; its public interface keeps breaking callers.
1: Some churn; interfaces change more often than necessary.
2: Mostly stable; occasional breaking changes with justification.
3: Highly stable interface; internal changes rarely break dependents; changes are carefully managed.
Dimension 9: Developer Experience / Teachable Template (0–3)
0: No one should copy this file as an example; it would spread bad practices.
1: Parts are useful, but copying the whole pattern would carry forward issues.
2: Good enough example with minor caveats.
3: Excellent “gold standard” example; maintainers encourage newcomers to model new files on it.
Dimension 10: Efficiency, Safety, and Ethics of Use (0–3)
0: Code is obviously wasteful, risky (e.g., unsafe patterns, no checks), or careless with resources.
1: Some effort toward efficiency or safety, but many rough edges (no bounds checks, etc.).
2: Reasonable efficiency and safety; potential issues are limited and manageable.
3: Thoughtful about resource usage, safety, and security; avoids unnecessary load and risky patterns.
3. Overall Scores and Interpretation
You can sum the 10 dimensions:
- Minimum total = 0
- Maximum total = 30
Suggested interpretation:
- 0–10: Needs rescue. The file is actively harmful as a model; plan a refactor or rewrite.
- 11–18: Growing but fragile. Some good parts, but not yet a safe example; target the 2–3 weakest dimensions first.
- 19–25: Solid contributor. The file is good and can become a model with a bit of polishing in a few areas.
- 26–30: Role-model file. This is a “reference” module; encourage others to copy its structure, naming, and testing style.
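The scoring and interpretation above are simple enough to automate. A minimal sketch (the function names and band labels follow this post's rubric; nothing here is a standard tool):

```python
# Interpretation bands from the rubric: (low, high, label), inclusive ranges.
BANDS = [
    (0, 10, "Needs rescue"),
    (11, 18, "Growing but fragile"),
    (19, 25, "Solid contributor"),
    (26, 30, "Role-model file"),
]

def total_score(dimension_scores):
    """Sum the ten 0–3 dimension scores into a 0–30 total."""
    if len(dimension_scores) != 10:
        raise ValueError("expected exactly 10 dimension scores")
    if any(s not in (0, 1, 2, 3) for s in dimension_scores):
        raise ValueError("each dimension must score 0, 1, 2, or 3")
    return sum(dimension_scores)

def interpretation(total):
    """Map a 0–30 total onto the rubric's interpretation bands."""
    for low, high, label in BANDS:
        if low <= total <= high:
            return label
    raise ValueError("total must be between 0 and 30")
```

For example, a file scoring 2 on every dimension totals 20 and lands in the "Solid contributor" band.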