
Conversation

Contributor

@ryxli ryxli commented Dec 4, 2025

What does this PR do?

Allow subclass to set reward_manager_worker

Checklist Before Starting

  • Search for similar PRs. Paste at least one query link here: ...
  • Format the PR title as [{modules}] {type}: {description} (This will be checked by the CI)
    • {modules} include fsdp, megatron, sglang, vllm, rollout, trainer, ci, training_utils, recipe, hardware, deployment, ray, worker, single_controller, misc, perf, model, algo, env, tool, ckpt, doc, data
    • If this PR involves multiple modules, separate them with , like [megatron, fsdp, doc]
    • {type} is in feat, fix, refactor, chore, test
    • If this PR breaks any API (CLI arguments, config, function signature, etc.), add [BREAKING] to the beginning of the title.
    • Example: [BREAKING][fsdp, megatron] feat: dynamic batching

Test

For changes that can not be tested by CI (e.g., algorithm implementation, new model support), validate by experiment(s) and show results like training curve plots, evaluation results, etc.

API and Usage Example

Demonstrate how the API changes if any, and provide usage example(s) if possible.

# Add code snippet or script demonstrating how to use this
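
A minimal sketch of how a subclass might take advantage of this change. MyRewardLoopWorker, its constructor arguments, and the base __init__ signature are illustrative assumptions, not part of this PR:

    import ray

    # Assumes AgentLoopWorkerBase has been imported from verl's agent loop module.

    class MyRewardLoopWorker:
        """Hypothetical custom reward worker with its own initialization logic."""
        def __init__(self, config):
            self.config = config

    class MyAgentLoopWorker(AgentLoopWorkerBase):
        def __init__(self, config, *args, **kwargs):
            # Assign the attribute before calling the base __init__ so that its
            # hasattr(self, "reward_manager_worker") guard keeps this instance
            # instead of creating the default one.
            self.reward_manager_worker = ray.remote(MyRewardLoopWorker).remote(config)
            super().__init__(config, *args, **kwargs)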

Design & Code Changes

Demonstrate the high-level design if this PR is complex, and list the specific changes.

Checklist Before Submitting

Important

Please check all the following items before requesting a review, otherwise the reviewer might deprioritize this PR for review.

Contributor

@gemini-code-assist gemini-code-assist bot left a comment


Code Review

This pull request allows subclasses of AgentLoopWorkerBase to provide their own reward_manager_worker instance. The change wraps the default initialization of self.reward_manager_worker in a hasattr check, which is a clean and effective way to allow for this customization. This pattern is already used for server_manager in the same class, making the change consistent with the existing codebase. The modification is straightforward and improves the flexibility of the agent loop. I find no issues with this implementation.
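
A minimal sketch of the guard pattern described above (illustrative only, not the actual verl source; _create_default_reward_manager_worker is a hypothetical helper name):

    class AgentLoopWorkerBase:
        def __init__(self, config):
            self.config = config
            # Create the default worker only if a subclass has not already
            # assigned one before calling super().__init__(), mirroring the
            # existing server_manager pattern.
            if not hasattr(self, "reward_manager_worker"):
                self.reward_manager_worker = self._create_default_reward_manager_worker(config)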

@wuxibin89 wuxibin89 requested a review from yyDing1 December 8, 2025 06:37
Collaborator

yyDing1 commented Dec 8, 2025

Could you share a bit more about the scenarios where you feel it might be necessary to override the reward manager worker? These workers only maintain the initialization and computation logic for the reward components.

Contributor Author

ryxli commented Dec 9, 2025

@yyDing1

This is mainly for flexibility.

RewardLoopWorker is decorated with ray.remote, so it can't be subclassed directly; one option would be to provide a base class for it.

The alternative is to let us define our own. We find that most of the "computation logic for the reward components" doesn't need to change at all, but we did need to change the initialization logic.

For example, the default implementation:

    def _init_reward_fn(self):
        # Load the rollout model's tokenizer for decoding prompts/responses.
        input_tokenizer_local_path = copy_to_local(self.config.actor_rollout_ref.model.path)
        self.input_tokenizer = hf_tokenizer(input_tokenizer_local_path, trust_remote_code=True)
        self.reward_model_tokenizer = None
        # Only load a separate reward-model tokenizer when a reward model is enabled.
        if self.config.reward_model.enable:
            reward_model_tokenizer_local_path = copy_to_local(self.config.reward_model.model.path)
            self.reward_model_tokenizer = hf_tokenizer(reward_model_tokenizer_local_path, trust_remote_code=True)
        # Resolve the custom reward function and the reward loop manager class from config.
        self.reward_fn = get_custom_reward_fn(self.config)
        reward_loop_manager_cls = get_reward_loop_manager_cls(self.config.reward_model.reward_manager)
        self.reward_loop = reward_loop_manager_cls(
            self.config, self.input_tokenizer, self.reward_fn, self.reward_router_address, self.reward_model_tokenizer
        )

Two things:

  1. The OmegaConf container requirement for reward_arguments in get_custom_reward_fn is a bit restricting; for example, the sandbox fusion arguments can pass concurrent_semaphore, which is not a Python primitive and which OmegaConf refuses to serialize.
  2. We may want to pass additional args/kwargs to the reward_loop_manager_cls.

While these could be added to verl, allowing the RewardLoopWorker class itself to be overridden lets us stay unblocked in our own code base while reducing the need to modify native verl components and fork away from the main branch.
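
To illustrate the kind of custom initialization being described, here is a hypothetical variant of _init_reward_fn; my_custom_reward_fn, my_semaphore_handle, and the concurrent_semaphore keyword argument are illustrative and assume the reward loop manager were extended to accept extra kwargs:

    def _init_reward_fn(self):
        # Tokenizer setup stays the same as the default implementation.
        input_tokenizer_local_path = copy_to_local(self.config.actor_rollout_ref.model.path)
        self.input_tokenizer = hf_tokenizer(input_tokenizer_local_path, trust_remote_code=True)
        self.reward_model_tokenizer = None
        if self.config.reward_model.enable:
            reward_model_tokenizer_local_path = copy_to_local(self.config.reward_model.model.path)
            self.reward_model_tokenizer = hf_tokenizer(reward_model_tokenizer_local_path, trust_remote_code=True)
        # Build the reward fn directly instead of via get_custom_reward_fn, so
        # non-primitive arguments (e.g. a semaphore handle) are not blocked by
        # OmegaConf serialization.
        self.reward_fn = my_custom_reward_fn
        reward_loop_manager_cls = get_reward_loop_manager_cls(self.config.reward_model.reward_manager)
        # Forward an extra keyword argument that the default path does not pass.
        self.reward_loop = reward_loop_manager_cls(
            self.config,
            self.input_tokenizer,
            self.reward_fn,
            self.reward_router_address,
            self.reward_model_tokenizer,
            concurrent_semaphore=my_semaphore_handle,
        )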

Collaborator

yyDing1 commented Dec 9, 2025

Understood. Users indeed need a high degree of flexibility for customizing the reward worker.

Collaborator

yyDing1 commented Dec 9, 2025

But I also wonder whether there might be a unified approach that allows the existing reward-loop worker to cover most (if not all) use cases, while leaving the flexibility primarily to the user-customized reward manager rather than the worker. Considering the concrete scenarios you mention:

  • Semaphores: This is a crucial point. We may create a global Ray remote actor to maintain the semaphore value and pass the actor handle to each reward worker (see the sketch after this comment). This would allow us to enforce a global limit on the sandbox’s execution concurrency.
  • I agree with your idea of letting the worker receive args/kwargs and directly forward them to the user-customized reward manager.

Future PRs may start addressing these aspects. Thanks, and contributions are welcome.
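
A minimal sketch of the global-semaphore idea mentioned above: a single Ray actor holds the concurrency budget, and its handle is passed to every reward worker. GlobalSemaphore and max_concurrency are illustrative names, not an existing verl component:

    import asyncio
    import ray

    @ray.remote
    class GlobalSemaphore:
        """Single shared concurrency budget for all sandbox executions."""

        def __init__(self, max_concurrency: int):
            self._sem = asyncio.Semaphore(max_concurrency)

        async def acquire(self):
            await self._sem.acquire()

        def release(self):
            self._sem.release()

    # Created once by the driver; each reward worker receives the handle and
    # brackets every sandbox execution with acquire/release, enforcing a
    # global concurrency limit across all workers.
    semaphore = GlobalSemaphore.remote(max_concurrency=8)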

@yyDing1 yyDing1 changed the title feat: allow override for reward_manager_worker in agent loop [trainer] feat: allow override for reward_manager_worker in agent loop Dec 9, 2025
@wuxibin89 wuxibin89 merged commit 7417d88 into volcengine:main Dec 9, 2025
91 of 95 checks passed
