Fix shared callback state in parallel OptunaSearchCV with LightGBM #260
Motivation
When using `OptunaSearchCV` with `n_jobs > 1`, users may pass stateful callback objects (e.g., LightGBM early stopping callbacks) via `fit_params`. Because `fit_params` is shared across parallel workers, callback objects inside it can be shared by reference, leading to unintended cross-trial interference.

In practice, this can cause multiple parallel trials to observe and mutate the same callback state (such as `best_iteration`), resulting in incorrect or identical early stopping behavior across trials.
This issue was reported in #240 and can silently affect optimization results when running
parallel cross-validation with callbacks.
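
The snippet below is a minimal sketch of the kind of usage that can trigger the issue; the dataset, estimator, and parameter choices are illustrative and not taken from the original report.

```python
import lightgbm as lgb
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split

from optuna.distributions import FloatDistribution
from optuna.integration import OptunaSearchCV

X, y = make_classification(n_samples=500, n_features=20, random_state=0)
X_train, X_valid, y_train, y_valid = train_test_split(X, y, random_state=0)

# A single stateful early-stopping callback instance.
early_stopping = lgb.early_stopping(stopping_rounds=10)

search = OptunaSearchCV(
    lgb.LGBMClassifier(n_estimators=1000),
    {"learning_rate": FloatDistribution(1e-3, 1e-1, log=True)},
    n_trials=20,
    n_jobs=4,  # trials run in parallel and share fit_params by reference
)

# Without per-trial isolation, the callback object passed via fit_params is
# shared by all parallel trials, so they can observe and mutate each other's
# early-stopping state.
search.fit(
    X_train,
    y_train,
    eval_set=[(X_valid, y_valid)],
    callbacks=[early_stopping],
)
```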
Description of the changes
This PR ensures proper isolation of callback state across parallel trials by:
- Copying `fit_params` at objective execution time.
- Deep-copying the `callbacks` entry (if present) to provide each trial with an independent callback instance.
- Leaving the rest of `fit_params` unchanged to avoid unnecessary duplication of large or expensive objects.
The change is localized to the objective execution path and does not affect existing
behavior for single-threaded runs or workflows that do not use callbacks.
This prevents shared mutable state between parallel trials while keeping memory and
performance overhead minimal.
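
Below is a minimal sketch of the copying approach described above, assuming it is applied where the objective builds the per-trial fit parameters; the helper name `_isolate_fit_params` is illustrative and not the exact code in this PR.

```python
import copy


def _isolate_fit_params(fit_params):
    """Return a per-trial copy of fit_params with isolated callbacks."""
    # Shallow-copy the mapping so the caller's fit_params stays untouched
    # and large entries (e.g., eval_set arrays) are not duplicated.
    params = dict(fit_params)
    # Deep-copy only the callbacks entry (if present) so stateful callback
    # objects such as LightGBM early stopping are not shared by reference
    # across parallel trials.
    if "callbacks" in params:
        params["callbacks"] = copy.deepcopy(params["callbacks"])
    return params
```

Each parallel trial would then call the estimator's `fit` with the returned dictionary, so mutations of callback state stay local to that trial.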