Fix for issue #21118: inconsistent behavior across callbacks #21275
fchollet merged 10 commits into keras-team:master
Conversation
…d logic used in EarlyStopping.
Codecov Report
Additional details and impacted files:
@@ Coverage Diff @@
## master #21275 +/- ##
==========================================
+ Coverage 82.57% 82.60% +0.02%
==========================================
Files 564 565 +1
Lines 54677 54772 +95
Branches 8500 8508 +8
==========================================
+ Hits 45152 45243 +91
- Misses 7435 7439 +4
Partials 2090 2090
| f"filepath={self.filepath}"
| )
|
| def _set_monitor_op(self):
Can we refactor this logic into a standalone function that we could reuse across all callbacks that need this functionality?
I added a MonitorCallback base class for all the callbacks that use this functionality (currently EarlyStopping, ReduceLROnPlateau and ModelCheckpoint). This ensures consistent behavior across callbacks and reduces code duplication.
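To illustrate the kind of shared logic being factored out: the names `MonitorCallback` and `_set_monitor_op` come from the PR discussion, but the body below is a hedged sketch of how such a base class could resolve the comparison op for a monitored metric, not the merged Keras code. The "auto" heuristic (accuracy-like metrics should increase, losses decrease) and the signed `min_delta` trick are assumptions modeled on how `EarlyStopping` has historically behaved.

```python
import numpy as np


class MonitorCallback:
    """Sketch of a shared base class for metric-monitoring callbacks.

    Illustrative only: mirrors the `_set_monitor_op` name from the PR
    discussion, but is not the actual merged implementation.
    """

    def __init__(self, monitor="val_loss", mode="auto", min_delta=0.0):
        if mode not in ("auto", "min", "max"):
            raise ValueError(f"Unknown mode: {mode!r}")
        self.monitor = monitor
        self.mode = mode
        self.min_delta = abs(min_delta)
        self.monitor_op = None

    def _set_monitor_op(self):
        # "auto" infers the direction from the metric name:
        # accuracy-like metrics should increase, losses decrease.
        if self.mode == "min":
            self.monitor_op = np.less
        elif self.mode == "max":
            self.monitor_op = np.greater
        elif "acc" in self.monitor:
            self.monitor_op = np.greater
        else:
            self.monitor_op = np.less
        # Sign min_delta so `monitor_op(current - min_delta, best)`
        # expresses "improved by at least min_delta" in both directions.
        if self.monitor_op is np.less:
            self.min_delta *= -1

    def _is_improvement(self, current, best):
        if self.monitor_op is None:
            self._set_monitor_op()
        if best is None:
            return True
        return self.monitor_op(current - self.min_delta, best)
```

With a helper like `_is_improvement`, `EarlyStopping`, `ReduceLROnPlateau`, and `ModelCheckpoint` could all share one definition of "the monitored metric got better", which is exactly the inconsistency the issue describes.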
…ss all callbacks that need it
| from keras.src.trainers import compile_utils
|
| @keras_export("keras.callbacks.MonitorCallback")
The refactoring into a shared base class makes sense, but please do not export it to the public API.
I removed it. However, I do think it could be useful as a public API, perhaps in a follow-up PR with some adjustments. It would be helpful for users who want to create custom callbacks that monitor a metric, for example plotting something whenever the loss decreases.
fchollet left a comment:
LGTM, thank you for the updates!
#21118