
Conversation

@rikimasan
Contributor

Takes all the magic numbers used for balancing and combines them into an object that can optionally be passed to the performance calculator, enabling quick recalculation with different balancing parameters from external programs like osu tools. A quick example of what this enables is the automated rework balancing tool. Happy to address any concerns. To keep review easy, this PR changes no behaviour on its own.
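As a rough sketch of the shape being described (the property names and default values below are illustrative assumptions, not the actual constants from this PR), the tuning object might look something like:

```csharp
namespace osu.Game.Rulesets.Osu.Difficulty
{
    // Sketch only: these members are hypothetical stand-ins for the
    // magic numbers the PR moves out of the difficulty calculators.
    public record OsuDifficultyTuning
    {
        // Default instance reproducing current live balancing.
        public static readonly OsuDifficultyTuning DEFAULT = new();

        public double AimMultiplier { get; init; } = 1.0;
        public double SpeedMultiplier { get; init; } = 1.0;
    }
}
```

Because records support non-destructive mutation, an external tool could try alternative balancing without touching game code, e.g. `OsuDifficultyTuning.DEFAULT with { AimMultiplier = 1.05 }`, and hand the result to the calculator it drives.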

@rikimasan rikimasan force-pushed the rikimasan/osu-difficulty-tuning branch from 0c6b44a to 5f38d33 Compare December 20, 2025 07:20

namespace osu.Game.Rulesets.Osu.Difficulty
{
public record OsuDifficultyTuning
Member

not sure about the naming of this. i'd be happier with something like OsuDifficultyConstants (but wait for others' opinions before renaming).

@stanriders
Member

I'm not a big fan of having a giant diffcalc config like that - it'd be a really convenient thing tools-wise, but at the same time a massive eyesore code-wise

@rikimasan
Contributor Author

rikimasan commented Dec 21, 2025

I'm not a big fan of having a giant diffcalc config like that - it'd be a really convenient thing tools-wise, but at the same time a massive eyesore code-wise

Yeah, I understand this is a regression in ergonomics, though I don't think it's a major one. I still think the tools it enables are worth the cost, and the other approaches for getting those tools online unfortunately fall short.

I see two major beneficiaries of the change.

  1. Rework devs will be able to have a tighter feedback loop between making system changes and evaluating the real effects of those changes post-balancing, with a more consistent baseline.

  2. I see this as an avenue for better community involvement, and with it better perception of and alignment with the pp system among users, in a way that isn't just people arguing about the values themselves. Data could now be collected and used to support a measurable target for how well the pp system aligns with player expectations.

@Natelytle
Contributor

Just wanted to add my 2 cents on this, not conceptually but after having watched it get tested for a bit.

This change is a strong increase in the quality of performance point development. Typically, balancing manually is a multi-week gruelfest of changing values a little bit (while practically shooting in the dark) and waiting for tools to reload, which can be really demotivating when trying to make more drastic changes than five-liners. One of the biggest problems I've had with this is that humans just don't have a good intuitive sense for these multi-dimensional optimization problems, so trying to fix problem maps by hand can cause really unfortunate nerfs to some really good values.

After testing with a really large-scope rework (branch is located here), the workflow shifts towards modelling pp for most of the time, using a parameter optimizer with target values for scores you think are representative, and adding problem maps as they come up in profiles. Iterating becomes about 100x faster, not exaggerating, and that speed-up is what makes the more drastic structural changes in the attached branch go from insurmountable to fairly easy.
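The parameter-optimizer workflow described above can be sketched with a toy example. Everything here is hypothetical: the real setup would run the actual performance calculator over a corpus of scores, whereas `PredictPp` below is just a stand-in function, and the coordinate search is one of many optimizers that could be used.

```csharp
using System;

class TuningOptimizerSketch
{
    // Hypothetical stand-in for the real calculator: maps two tuning
    // multipliers to a predicted pp value for one representative score.
    static double PredictPp(double aim, double speed) => 400 * aim + 300 * speed;

    static void Main()
    {
        double targetPp = 750;          // target value you believe the score deserves
        double aim = 1.0, speed = 1.0;  // starting parameters (current live values)
        double step = 0.05;

        double Error(double a, double s) => Math.Pow(PredictPp(a, s) - targetPp, 2);

        // Naive coordinate search: nudge each parameter while doing so
        // reduces the squared error against the target, shrinking the
        // step size once no nudge helps.
        for (int iter = 0; iter < 200 && step > 1e-6; iter++)
        {
            bool improved = false;

            foreach (double delta in new[] { step, -step })
            {
                if (Error(aim + delta, speed) < Error(aim, speed)) { aim += delta; improved = true; }
                if (Error(aim, speed + delta) < Error(aim, speed)) { speed += delta; improved = true; }
            }

            if (!improved)
                step /= 2;
        }

        Console.WriteLine($"aim={aim:F3} speed={speed:F3} pp={PredictPp(aim, speed):F1}");
    }
}
```

With a tuning object exposed to external programs, a tool like this can re-run balancing in a tight loop instead of waiting for a rebuild per value change, which is the feedback-loop speed-up being described.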

I do really think this has potential, and I'd love to see that taken into consideration instead of disregarding this solely because ergonomics/aesthetics are a little worse.

@tsunyoku tsunyoku moved this to Pending Review in Difficulty calculation changes Dec 28, 2025


4 participants