Role-Model Checklist: GitHub File Check with Rubrics (ChatGPT)

Score any file from your monolith or module against the checklist and rubric below.


1. Role-Model Source File Checklist (Yes / No)

You can print this checklist and work through it line by line.

A. File Structure

  1. Does the file have a clear top-to-bottom order
    (for example: imports → constants → types → internal functions → exported functions, as sketched after this section)?

  2. Is everything in the file clearly part of a single module or concern?

  3. Is the file short enough that you can scroll through it without feeling lost
    (for example, roughly one screenful or a few screens, not hundreds of lines of unrelated stuff)?

  4. Are there no obvious “god functions” doing too many different jobs?
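
Here is a minimal sketch of the ordering item A.1 describes, in TypeScript; the module, names, and logic are invented purely for illustration:

```typescript
// invoice.ts (hypothetical) -- top-to-bottom order from item A.1.

// 1. Imports
import { randomUUID } from "node:crypto";

// 2. Constants
const TAX_RATE = 0.2;

// 3. Types
export interface Invoice {
  id: string;
  net: number;
  gross: number;
}

// 4. Internal functions (not exported)
function applyTax(net: number): number {
  return net * (1 + TAX_RATE);
}

// 5. Exported functions
export function createInvoice(net: number): Invoice {
  return { id: randomUUID(), net, gross: applyTax(net) };
}
```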

B. Naming

  1. Do function names describe their purpose, not their implementation details? (See the contrast sketched after this section.)

  2. Do variable names reflect domain concepts instead of single letters (except for tiny scopes like loops)?

  3. Is naming consistent with the rest of the project (same terms for same concepts)?
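
To make the purpose-versus-implementation distinction in B.1 concrete, here is a small contrast; both functions are invented:

```typescript
// Named after how it works: rename the loop or the map and the name breaks.
function loopRowsAndBuildMap(r: string[][]): Map<string, number> {
  const m = new Map<string, number>();
  for (const row of r) m.set(row[0], (m.get(row[0]) ?? 0) + 1);
  return m;
}

// Named after what it means in the domain: the implementation can change freely.
function countOrdersByCustomer(orderRows: string[][]): Map<string, number> {
  const counts = new Map<string, number>();
  for (const [customerId] of orderRows) {
    counts.set(customerId, (counts.get(customerId) ?? 0) + 1);
  }
  return counts;
}
```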

C. Interfaces and Behavior

  1. For each exported function or class, can you describe in one short sentence what it does?

  2. Do exported functions always return the same kind of result for the same kind of input
    (no random type changes, no “sometimes null, sometimes list, sometimes error string” surprises)?

  3. Is error handling consistent (same style of return values, exceptions, or result objects)? One such pattern is sketched after this section.
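
One way to satisfy C.2 and C.3 together is a single result shape shared by every export. This is a sketch under assumed conventions, not a prescription, and all names are invented:

```typescript
// One predictable shape for success and failure across the whole module.
type Result<T> =
  | { ok: true; value: T }
  | { ok: false; error: string };

// Same kind of input, same kind of output -- never "sometimes null,
// sometimes a list, sometimes an error string".
export function parsePort(raw: string): Result<number> {
  const port = Number(raw);
  if (!Number.isInteger(port) || port < 1 || port > 65535) {
    return { ok: false, error: `invalid port: ${raw}` };
  }
  return { ok: true, value: port };
}
```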

D. Internal Discipline (Cleanliness)

  1. Are there no unused variables, dead code blocks, or commented-out “junk” left in the file?

  2. Is code duplication minimized (no copy-pasted logic sprinkled around)? See the example after this section.

  3. Is each function as small as reasonably possible while staying readable?
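
For the duplication point in D.2, the usual remedy is to give the repeated logic one name; a hypothetical example:

```typescript
const users = new Set<string>();

// The normalization lives in one named helper instead of being
// copy-pasted into each call site below.
function normalizeEmail(raw: string): string {
  return raw.trim().toLowerCase();
}

export function registerUser(email: string): void {
  users.add(normalizeEmail(email));
}

export function isRegistered(email: string): boolean {
  return users.has(normalizeEmail(email));
}
```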

E. Documentation and Comments

  1. Does the file have a short header comment explaining its purpose in the project? (An example follows this section.)

  2. Do non-obvious sections have comments explaining why they exist or why the logic is written that way?

  3. Are comments up to date (no clearly wrong or misleading comments)?
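
A header of the kind E.1 and E.2 describe might look like this; the file name, purpose, and numbers are all invented:

```typescript
/**
 * rate-limit.ts -- token-bucket rate limiting for the public HTTP API.
 */

const BUCKET_SIZE = 20;

// Why, not what: the upstream gateway retries after 500 ms, so refilling
// faster than that would hide the limiter from misbehaving clients.
const REFILL_INTERVAL_MS = 500;
```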

F. Tests and Examples

  1. Are there tests directly related to this file’s exported functions (unit tests, integration tests, etc.)? A sample test file follows this section.

  2. Do the tests cover normal usage and at least a few edge cases or failure paths?

  3. Is there at least one example of typical usage (in tests, documentation, or comments)?
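
Here is what such a test file might look like for the parsePort sketch from section C, assuming a Jest-style runner; the import path is hypothetical:

```typescript
import { describe, expect, it } from "@jest/globals";
import { parsePort } from "./config"; // hypothetical path to the module under test

describe("parsePort", () => {
  // Normal usage (F.2).
  it("accepts a typical port", () => {
    expect(parsePort("8080")).toEqual({ ok: true, value: 8080 });
  });

  // Failure paths (F.2).
  it("rejects a non-numeric value", () => {
    expect(parsePort("eighty").ok).toBe(false);
  });

  it("rejects an out-of-range port", () => {
    expect(parsePort("70000").ok).toBe(false);
  });
});
```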

G. Simplicity and Stability

  1. Does the file avoid unnecessary abstractions (no extra classes or layers “just in case”)?

  2. Has the file’s public interface (exports) remained mostly stable across recent changes
    (no constant breakage for users of the module)?

  3. Is performance reasonable and not obviously wasteful for its purpose?

H. Teachable Shape

  1. Could a newcomer reasonably copy this file’s layout and style as a template for a new module?

  2. If you left the project, would you feel confident that others could maintain this file easily?

You can do a quick pass: count how many “Yes” answers you get. Then use the detailed rubric below for a more systematic score.


2. Evaluation Rubric (Scoring Existing Files)

Rate each dimension from 0 to 3:

  • 0 = Poor

  • 1 = Weak

  • 2 = Good

  • 3 = Role-model

Then sum to get a total score.

Dimension 1: Structure and Cohesion (0–3)

0: File is chaotic; unrelated concerns mixed; no clear order.
1: Some structure but inconsistent; several unrelated responsibilities.
2: Mostly clean order and single main concern, with minor mixing.
3: Very clear, intentional order; file feels like one coherent module.


Dimension 2: Naming and Domain Clarity (0–3)

0: Cryptic names; single letters; inconsistent terminology.
1: Mixed quality; some good names but several confusing or misleading ones.
2: Generally clear names with domain meaning, minor inconsistencies.
3: Names are consistently meaningful, domain-aligned, and easy to understand.


Dimension 3: Interface Design and Predictability (0–3)

0: Exported functions or classes behave unpredictably; inconsistent return types or side effects.
1: Some exports are predictable, but error handling and return types vary a lot.
2: Mostly predictable behavior; consistent patterns with small exceptions.
3: Clean, stable interface; clear contract; consistent types and error handling.


Dimension 4: Internal Discipline and Cleanliness (0–3)

0: Many unused variables, dead code, commented-out blocks, and duplication.
1: Some cleanup done, but noticeable junk remains and duplication is common.
2: Fairly clean; small patches of duplication or forgotten leftovers.
3: Very disciplined; no junk; duplication explicitly refactored; everything present has a clear reason.


Dimension 5: Documentation and Comments (0–3)

0: No useful comments, or the comments that exist are misleading or out of date.
1: Some comments, but they mostly repeat what the code says or are sparse and uneven.
2: Good comments on tricky parts and a short explanation of the file’s purpose.
3: Excellent minimal commentary focused on intent and assumptions; the file “explains itself” with just enough text.


Dimension 6: Testing and Example Coverage (0–3)

0: No tests related to this file; behavior unverified.
1: Some basic tests exist but miss core edge cases or main scenarios.
2: Solid tests for main use cases and some edge cases.
3: Comprehensive, well-named tests that document behavior; clear examples of usage and failure scenarios.


Dimension 7: Simplicity and Absence of Over-Engineering (0–3)

0: Overly complex; unnecessary abstractions; hard to follow.
1: Mixed; some simple parts, but several “clever” areas that reduce readability.
2: Mostly simple, with a few rough or complex patches.
3: As simple as it can be while still being robust; minimal abstractions, maximum clarity.


Dimension 8: Stability Over Time (0–3)

0: File changes constantly; its public interface keeps breaking callers.
1: Some churn; interfaces change more often than necessary.
2: Mostly stable; occasional breaking changes with justification.
3: Highly stable interface; internal changes rarely break dependents; changes are carefully managed.


Dimension 9: Developer Experience / Teachable Template (0–3)

0: No one should copy this file as an example; it would spread bad practices.
1: Parts are useful, but copying the whole pattern would carry forward issues.
2: Good enough example with minor caveats.
3: Excellent “gold standard” example; maintainers encourage newcomers to model new files on it.


Dimension 10: Efficiency, Safety, and Ethics of Use (0–3)

0: Code is obviously wasteful, risky (e.g., unsafe patterns, no checks), or careless with resources.
1: Some effort toward efficiency or safety, but many rough edges (no bounds checks, etc.).
2: Reasonable efficiency and safety; potential issues are limited and manageable.
3: Thoughtful about resource usage, safety, and security; avoids unnecessary load and risky patterns.


3. Overall Scores and Interpretation

You can sum the 10 dimensions (a small scoring sketch follows the interpretation bands):

  • Minimum total = 0

  • Maximum total = 30

Suggested interpretation:

  • 0–10: Needs rescue
    File is actively harmful as a model. Plan a refactor or rewrite.

  • 11–18: Growing but fragile
    Some good parts, but not yet a safe example. Target 2–3 weakest dimensions first.

  • 19–25: Solid contributor
    File is good; can be a model with a bit of polishing in a few areas.

  • 26–30: Role-model file
    This is a “reference” module. Encourage others to copy its structure, naming, and testing style.
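
If you want the arithmetic automated, here is a minimal scoring sketch; the dimension keys are abbreviations of the headings above and the sample scores are made up:

```typescript
// One 0-3 score per dimension; ten dimensions, so the total is 0-30.
const scores: Record<string, number> = {
  structure: 2, naming: 3, interfaces: 2, discipline: 2, documentation: 1,
  testing: 2, simplicity: 3, stability: 2, teachability: 2, efficiency: 2,
};

const total = Object.values(scores).reduce((sum, s) => sum + s, 0);

// Interpretation bands from section 3.
let verdict: string;
if (total <= 10) verdict = "Needs rescue";
else if (total <= 18) verdict = "Growing but fragile";
else if (total <= 25) verdict = "Solid contributor";
else verdict = "Role-model file";

console.log(`${total}/30: ${verdict}`); // prints "21/30: Solid contributor"
```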


