@none0663 (Contributor) commented Dec 4, 2025

What does this PR do?

Optimize the _validate method by executing tokenizer decode operations only when their results are actually needed.

Changes:

  • Added conditional check for val_data_dir and generations_to_log to determine if decoding is required
  • Made input_texts and output_texts decoding conditional
  • Removed duplicate val_data_dir definition

Performance Impact:
When val_data_dir is not set and log_val_generations is 0, tokenizer decode operations are skipped entirely, reducing unnecessary computational overhead during validation.
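The logic described above can be sketched as follows. This is a minimal, self-contained illustration of the conditional-skip pattern, not the trainer's actual code: `StubTokenizer` and `validate_step` are hypothetical names, and the real method works on `self.tokenizer` and the trainer config rather than plain arguments.

```python
class StubTokenizer:
    """Minimal stand-in for a tokenizer, counting decode calls."""
    def __init__(self):
        self.decode_calls = 0

    def batch_decode(self, ids, skip_special_tokens=True):
        self.decode_calls += 1
        return [f"text-{i}" for i in ids]


def validate_step(tokenizer, input_ids, val_data_dir=None, log_val_generations=0):
    sample_inputs = []
    # Decode only when the texts will actually be used: either dumped
    # to val_data_dir or logged as sample generations.
    need_decode = bool(val_data_dir) or log_val_generations > 0
    if need_decode:
        sample_inputs.extend(
            tokenizer.batch_decode(input_ids, skip_special_tokens=True)
        )
    return sample_inputs


tok = StubTokenizer()
validate_step(tok, [1, 2, 3])                         # neither sink enabled
assert tok.decode_calls == 0                          # decoding skipped

validate_step(tok, [1, 2, 3], log_val_generations=2)  # logging enabled
assert tok.decode_calls == 1                          # decoding runs once
```

The key point is that the decode condition is computed once up front, so the validation loop pays no tokenizer cost when both sinks are disabled.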

Checklist Before Starting

  • Search for similar PRs. Paste at least one query link here: ...
  • Format the PR title as [{modules}] {type}: {description} (This will be checked by the CI)
    • {modules} include fsdp, megatron, sglang, vllm, rollout, trainer, ci, training_utils, recipe, hardware, deployment, ray, worker, single_controller, misc, perf, model, algo, env, tool, ckpt, doc, data
    • If this PR involves multiple modules, separate them with , like [megatron, fsdp, doc]
    • {type} is in feat, fix, refactor, chore, test
    • If this PR breaks any API (CLI arguments, config, function signature, etc.), add [BREAKING] to the beginning of the title.
    • Example: [BREAKING][fsdp, megatron] feat: dynamic batching

Test

For changes that cannot be tested by CI (e.g., algorithm implementation, new model support), validate by experiment(s) and show results such as training curve plots, evaluation results, etc.

API and Usage Example

Demonstrate how the API changes if any, and provide usage example(s) if possible.

# Add code snippet or script demonstrating how to use this

Design & Code Changes

Demonstrate the high-level design if this PR is complex, and list the specific changes.

Checklist Before Submitting

Important

Please check all the following items before requesting a review, otherwise the reviewer might deprioritize this PR for review.

@gemini-code-assist (bot) left a comment

Code Review

This pull request introduces a valuable optimization by making tokenizer decode operations conditional within the _validate method. The logic to skip decoding when val_data_dir is not set and log_val_generations is zero is sound and should reduce computational overhead during validation. The removal of the duplicate val_data_dir definition is also a good cleanup.

I've added a couple of suggestions to further improve performance and code consistency by using tokenizer.batch_decode instead of iterating with tokenizer.decode. This is more efficient and aligns with practices in other parts of the file.

Comment on lines +569 to +570
input_texts = [self.tokenizer.decode(ids, skip_special_tokens=True) for ids in input_ids]
sample_inputs.extend(input_texts)
Severity: high

For better performance and consistency with other parts of the codebase (e.g., _log_rollout_data), it's recommended to use tokenizer.batch_decode instead of a list comprehension with tokenizer.decode. batch_decode is generally more efficient for decoding batches of sequences. You can also directly extend sample_inputs to make the code more concise.

Suggested change
input_texts = [self.tokenizer.decode(ids, skip_special_tokens=True) for ids in input_ids]
sample_inputs.extend(input_texts)
sample_inputs.extend(self.tokenizer.batch_decode(input_ids, skip_special_tokens=True))
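To see why the two forms are interchangeable, here is a toy illustration (not the real Hugging Face tokenizer; `ToyTokenizer` and its vocabulary are invented for this sketch): `batch_decode` simply applies `decode` across the whole batch, so the list comprehension and the single batched call produce identical output.

```python
class ToyTokenizer:
    """Tiny id->token lookup mimicking the decode/batch_decode pair."""
    vocab = {0: "<pad>", 1: "hello", 2: "world"}

    def decode(self, ids, skip_special_tokens=True):
        toks = [self.vocab[i] for i in ids]
        if skip_special_tokens:
            # Drop special tokens such as <pad>.
            toks = [t for t in toks if not t.startswith("<")]
        return " ".join(toks)

    def batch_decode(self, batch, skip_special_tokens=True):
        # One call over the whole batch instead of one call per sequence.
        return [self.decode(ids, skip_special_tokens) for ids in batch]


tok = ToyTokenizer()
batch = [[1, 2, 0], [2, 1]]

per_item = [tok.decode(ids, skip_special_tokens=True) for ids in batch]
batched = tok.batch_decode(batch, skip_special_tokens=True)
assert per_item == batched  # same texts either way
```

With a real fast tokenizer the batched call also amortizes per-call overhead across the batch, which is where the performance benefit comes from.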

Comment on lines +609 to +610
output_texts = [self.tokenizer.decode(ids, skip_special_tokens=True) for ids in output_ids]
sample_outputs.extend(output_texts)
Severity: high

Similar to the input decoding, using tokenizer.batch_decode here is more performant and consistent with other parts of the code. The intermediate output_texts variable can also be removed for conciseness.

Suggested change
output_texts = [self.tokenizer.decode(ids, skip_special_tokens=True) for ids in output_ids]
sample_outputs.extend(output_texts)
sample_outputs.extend(self.tokenizer.batch_decode(output_ids, skip_special_tokens=True))

@wuxibin89 wuxibin89 changed the title feat: Optimize tokenizer decode calls in validation [trainer] feat: Optimize tokenizer decode calls in validation Dec 8, 2025