Fix: handle precomputed crops and safe no-grad cleanup in auto3dseg#8803
bluehyena wants to merge 3 commits into Project-MONAI:dev from
Conversation
📝 Walkthrough
Estimated code review effort: 🎯 3 (Moderate) | ⏱️ ~25 minutes
Pre-merge checks: ✅ 4 passed | ❌ 1 failed (warning)
🧹 Nitpick comments (2)
monai/auto3dseg/analyzer.py (2)
236-252: Misplaced docstring. The docstring block is placed after the validation code. Move it to immediately follow the `def __call__(self, data):` line for proper function documentation.
Proposed fix: the docstring (lines 236-252) should be moved to right after line 219 (`def __call__(self, data):`), before the validation code.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@monai/auto3dseg/analyzer.py` around lines 236 - 252, The function-level docstring for __call__ is misplaced after validation; move the entire docstring block currently referencing ImageStatsKeys, SampleOperations, and report_format to immediately follow the def __call__(self, data): signature so it becomes the function's official docstring (before any validation or logic), preserving the existing text and formatting.
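To illustrate why placement matters here, a minimal sketch (illustrative functions, not the analyzer's actual code): Python only treats a string literal as the docstring when it is the first statement in the body, so a string placed after validation code is just a discarded expression.

```python
# Sketch: a string literal after other statements is NOT a docstring.

def misplaced(data):
    if data is None:  # validation runs first
        raise ValueError("data is required")
    """Compute stats."""  # a bare expression here, not a docstring
    return data

def documented(data):
    """Compute stats."""  # first statement, so it becomes __doc__
    if data is None:
        raise ValueError("data is required")
    return data

print(misplaced.__doc__)   # None
print(documented.__doc__)  # Compute stats.
```

This is why tools such as `help()` and doc generators see nothing for the misplaced variant.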
271-273: Consider `strict=True` for zip. Adding `strict=True` catches length mismatches early if SHAPE and SPACING ever diverge unexpectedly.

- report[ImageStatsKeys.SIZEMM] = [
-     a * b for a, b in zip(report[ImageStatsKeys.SHAPE][0], report[ImageStatsKeys.SPACING])
- ]
+ report[ImageStatsKeys.SIZEMM] = [
+     a * b for a, b in zip(report[ImageStatsKeys.SHAPE][0], report[ImageStatsKeys.SPACING], strict=True)
+ ]

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@monai/auto3dseg/analyzer.py` around lines 271 - 273, The current computation of ImageStatsKeys.SIZEMM uses zip(report[ImageStatsKeys.SHAPE][0], report[ImageStatsKeys.SPACING]) which can silently truncate if lengths differ; update the comprehension in analyzer.py to use zip(..., strict=True) so mismatched lengths raise immediately (ensure environment uses Python 3.10+); reference the report dict keys ImageStatsKeys.SHAPE, ImageStatsKeys.SPACING and the target ImageStatsKeys.SIZEMM when making this change.
📒 Files selected for processing (2)
monai/auto3dseg/analyzer.py, tests/apps/test_auto3dseg.py
I noticed a couple of technical differences worth flagging between this approach and the fixes in #8809 and #8810. On nda_croppeds handling: the ternary here has no validation on the precomputed value; if a caller passes a wrong type or a list with the wrong number of entries, the error surfaces downstream as a confusing shape mismatch rather than a clear message. #8809 adds explicit type and length validation with a descriptive ValueError. On grad state: torch.no_grad() always restores grad to enabled after the block, regardless of what state the caller was in before. If the caller had grad disabled before calling the analyzer, this silently re-enables it, which is a subtle semantic change. The try/finally pattern in #8810 uses torch.is_grad_enabled() to capture and restore the exact pre-call state, so both enabled→enabled and disabled→disabled are handled correctly.
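The capture-and-restore pattern referenced from #8810 can be sketched as follows (a hypothetical helper for illustration, not MONAI's actual code; it assumes PyTorch is installed):

```python
import torch

def run_without_grad(fn, *args, **kwargs):
    """Run fn with autograd disabled, then restore the caller's exact
    pre-call grad state -- even if fn raises."""
    prev = torch.is_grad_enabled()  # capture enabled OR disabled
    try:
        torch.set_grad_enabled(False)
        return fn(*args, **kwargs)
    finally:
        torch.set_grad_enabled(prev)  # restore, exception or not

# Whatever state the caller was in survives the call:
torch.set_grad_enabled(False)
run_without_grad(lambda: None)
print(torch.is_grad_enabled())  # False -- a disabled caller stays disabled
torch.set_grad_enabled(True)
```

Because the restore sits in `finally`, an exception inside the analyzer body cannot leave the process stuck with grad disabled.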
ericspod left a comment:
Hi @bluehyena thanks for looking into this. I have minor comments to make; we should also look at @williams145's PRs, though I commented there that using no_grad should work correctly, so I'm waiting to hear back from them.
        f"Image data under '{self.image_key}' must have at least 3 dimensions, but got shape {image.shape}."
    )
    # --- End of validation ---
    """
This docstring comment appears spurious here; although it was in the original code, please place it at the top of the method and check its contents are correct.
    if isinstance(data[self.image_key], MetaTensor)
    else [1.0] * min(3, data[self.image_key].ndim)
)
with torch.no_grad():
One other thing to note is that no_grad could be used as a method decorator as well. This might be cleaner for this code but you would have to check it is working as intended.
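The decorator form mentioned above can be sketched like this (a hypothetical class, not the analyzer's real signature; assumes PyTorch is installed):

```python
import torch

# torch.no_grad can decorate a method so its whole body runs with autograd
# disabled, instead of wrapping the body in a `with` block.
class Stats:
    @torch.no_grad()
    def __call__(self, data):
        return {"grad_enabled_inside": torch.is_grad_enabled()}

print(Stats()({}))  # {'grad_enabled_inside': False}
print(torch.is_grad_enabled())  # the caller's prior state is restored after the call
```

As the comment says, this would need checking against the actual method, since decorating changes where the no-grad scope begins and ends.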
Signed-off-by: Jun Hyeok Lee <bluehyena123@naver.com>
Force-pushed from d9a2bc4 to c921e78.
Hi @ericspod thanks for the review! I moved the docstring to the top of the method. I also checked the current PyTorch behavior and ran local tests to make sure this works as intended. In addition, I incorporated the suggested changes.
Actionable comments posted: 1
🧹 Nitpick comments (1)
monai/auto3dseg/analyzer.py (1)
221-242: Add an `Args:` section for `data` in the method docstring. The docstring has Returns/Raises, but no `Args` section for the input payload. As per coding guidelines, "Docstrings should be present for all definition which describe each variable, return value, and raised exception in the appropriate section of the Google-style of docstrings."
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@monai/auto3dseg/analyzer.py` around lines 221 - 242, Add a Google-style "Args:" section to the method docstring that documents the input parameter `data` (the payload) used by this callable (the method that references self.image_key and self.report_format); state that `data` must be a dict containing the image under the key `self.image_key` (value must be numpy.ndarray, torch.Tensor, or MetaTensor), optionally `nda_croppeds` as a list/tuple with one entry per image channel when precomputed, and any other expected keys used by the method (e.g., entries that produce ImageStatsKeys.INTENSITY); include types and brief descriptions and mention that missing/invalid types will raise KeyError/TypeError/ValueError as already listed in Raises.
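A sketch of the requested shape (the class and parameter details below are inferred from the review text, not MONAI's actual docstring):

```python
class ImageStatsSketch:
    """Hypothetical stand-in illustrating the requested docstring layout."""

    image_key = "image"

    def __call__(self, data):
        """Compute image statistics for the given payload.

        Args:
            data: dict containing the image under ``self.image_key`` as a
                numpy.ndarray, torch.Tensor, or MetaTensor; may optionally
                include ``nda_croppeds``, a list/tuple with one precomputed
                crop per image channel.

        Returns:
            dict: the statistics report following ``self.report_format``.

        Raises:
            KeyError: if the image key is missing from ``data``.
            ValueError: if ``nda_croppeds`` is malformed.
        """
        return {}

doc = ImageStatsSketch.__call__.__doc__
print("Args:" in doc and "Returns:" in doc and "Raises:" in doc)  # True
```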
📒 Files selected for processing (1)
monai/auto3dseg/analyzer.py
if "nda_croppeds" in d:
    nda_croppeds = d["nda_croppeds"]
    if not isinstance(nda_croppeds, (list, tuple)) or len(nda_croppeds) != len(ndas):
        raise ValueError(
            "Pre-computed 'nda_croppeds' must be a list or tuple with one entry per image channel "
            f"(expected {len(ndas)})."
        )
Validate each precomputed crop entry explicitly.
Line 263 only validates container type/length. Malformed entries (e.g., scalars/None) will fail later at Line 275/287 with less clear errors.
Proposed fix

  if "nda_croppeds" in d:
      nda_croppeds = d["nda_croppeds"]
      if not isinstance(nda_croppeds, (list, tuple)) or len(nda_croppeds) != len(ndas):
          raise ValueError(
              "Pre-computed 'nda_croppeds' must be a list or tuple with one entry per image channel "
              f"(expected {len(ndas)})."
          )
+     valid_crop_types = (np.ndarray, torch.Tensor, MetaTensor)
+     if not all(isinstance(crop, valid_crop_types) for crop in nda_croppeds):
+         raise ValueError(
+             "Each entry in pre-computed 'nda_croppeds' must be a numpy array, torch.Tensor, or MetaTensor."
+         )

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@monai/auto3dseg/analyzer.py` around lines 261 - 267, The current check only
verifies nda_croppeds is a sequence of the right length; extend validation to
iterate over each element of nda_croppeds (the variable in analyzer.py) and
ensure every entry is non-None and is array-like (e.g., has a shape/ndim
attribute or is an instance of expected array types used in this module) before
proceeding to use them with ndas; if any entry is invalid raise a clear
ValueError referencing the offending index and expected array-like type so later
code paths (lines that consume nda_croppeds) don't fail with unclear errors.
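The per-index validation the prompt asks for can be sketched in plain Python (a hypothetical helper with a duck-typed array check, not MONAI's actual code):

```python
# Validate every precomputed crop entry and name the offending index.
def validate_crops(nda_croppeds, n_channels):
    if not isinstance(nda_croppeds, (list, tuple)) or len(nda_croppeds) != n_channels:
        raise ValueError(
            f"'nda_croppeds' must be a list or tuple with {n_channels} entries."
        )
    for i, crop in enumerate(nda_croppeds):
        # array-like duck check: ndarray, torch.Tensor and MetaTensor all have .shape
        if crop is None or not hasattr(crop, "shape"):
            raise ValueError(
                f"'nda_croppeds[{i}]' must be array-like, got {type(crop).__name__}."
            )

class FakeArray:  # minimal array-like stand-in for the demo
    shape = (2, 4, 4)

validate_crops([FakeArray(), FakeArray()], 2)  # passes silently
try:
    validate_crops([FakeArray(), None], 2)
except ValueError as exc:
    print(exc)  # 'nda_croppeds[1]' must be array-like, got NoneType.
```

Pointing at the offending index keeps the failure at the validation site instead of a shape mismatch several lines later.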
Fixes #8802
Also addresses #5889
Description
This pull request fixes two issues in `monai.auto3dseg.analyzer`:

- `ImageStats.__call__()` now correctly handles the optional precomputed `nda_croppeds` input path instead of raising `UnboundLocalError`.
- `ImageStats`, `FgImageStats`, and `LabelStats` now use an exception-safe no-grad context so analyzer failures do not leak the global PyTorch grad-enabled state, which is related to "data analyzer set_grad_enabled" #5889.

I also added tests covering both behaviors.
Types of changes

- Integration tests passed locally by running `./runtests.sh -f -u --net --coverage`.
- Quick tests passed locally by running `./runtests.sh --quick --unittests --disttests`.
- Documentation built with the `make html` command in the `docs/` folder.
./runtests.sh -f -u --net --coverage../runtests.sh --quick --unittests --disttests.make htmlcommand in thedocs/folder.