
Conversation

@jodavies (Collaborator) commented on Oct 9, 2025

This improves performance for things like "gcd_(1-x^20,1-x^1000000)", since FLINT poly has a dense representation whereas mpoly is sparse. "Real-life" benchmarks such as forcer or minceex are unchanged; they have dense polynomials.


Any thoughts on this? I have determined the threshold "experimentally" by benchmarking gcd_(1-x^20,1-x^N) for a range of N. Of course, different computations probably have a different threshold; this kind of heuristic usually has tricky cases.
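
Roughly, the idea is the following. This is an illustrative sketch against the FLINT C API only, not the code in this PR: the helper name, the threshold value, and the exact sparsity criterion are placeholders. A univariate polynomial with far fewer terms than its degree is better routed to the sparse fmpz_mpoly type than to the dense fmpz_poly one.

```c
/* Illustrative sketch only, not the PR's implementation: route a univariate
 * GCD either to FLINT's dense fmpz_poly or to the sparse fmpz_mpoly,
 * depending on how sparse the inputs are. The threshold is a made-up value. */
#include <flint/fmpz_poly.h>
#include <flint/fmpz_mpoly.h>

/* Hypothetical helper: a polynomial whose degree greatly exceeds its term
 * count is sparse, so prefer the mpoly representation. */
static int use_sparse_mpoly(slong nterms, slong degree)
{
    const slong DENSITY_THRESHOLD = 10; /* illustrative; would be tuned by benchmarking */
    return degree / nterms > DENSITY_THRESHOLD;
}

int main(void)
{
    /* gcd_(1-x^20,1-x^1000000): the second argument has 2 terms but degree
     * 10^6, so the dense route would allocate ~10^6 coefficients. */
    if (use_sparse_mpoly(2, 1000000))
    {
        const char *vars[] = {"x"};
        fmpz_mpoly_ctx_t ctx;
        fmpz_mpoly_t a, b, g;
        fmpz_mpoly_ctx_init(ctx, 1, ORD_LEX);
        fmpz_mpoly_init(a, ctx);
        fmpz_mpoly_init(b, ctx);
        fmpz_mpoly_init(g, ctx);
        fmpz_mpoly_set_str_pretty(a, "1-x^20", vars, ctx);
        fmpz_mpoly_set_str_pretty(b, "1-x^1000000", vars, ctx);
        fmpz_mpoly_gcd(g, a, b, ctx);            /* sparse GCD: only 2 terms per input */
        fmpz_mpoly_clear(a, ctx);
        fmpz_mpoly_clear(b, ctx);
        fmpz_mpoly_clear(g, ctx);
        fmpz_mpoly_ctx_clear(ctx);
    }
    else
    {
        fmpz_poly_t a, b, g;
        fmpz_poly_init(a);
        fmpz_poly_init(b);
        fmpz_poly_init(g);
        fmpz_poly_set_coeff_si(a, 0, 1);
        fmpz_poly_set_coeff_si(a, 20, -1);       /* 1 - x^20 */
        fmpz_poly_set_coeff_si(b, 0, 1);
        fmpz_poly_set_coeff_si(b, 1000000, -1);  /* 1 - x^1000000 */
        fmpz_poly_gcd(g, a, b);                  /* dense GCD */
        fmpz_poly_clear(a);
        fmpz_poly_clear(b);
        fmpz_poly_clear(g);
    }
    return 0;
}
```

For dense inputs, where the degree is comparable to the term count, the heuristic keeps the existing dense fmpz_poly route, which is why forcer and minceex are unaffected.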

@coveralls commented on Oct 9, 2025

Coverage Status: 53.373% (+0.03%) from 53.34%, when pulling f3f1e23 on jodavies:flint-sparse into 73a6a14 on form-dev:master.

@tueda (Collaborator) commented on Oct 9, 2025

I'm not sure if users should be able to choose the threshold, but in principle, it could be a setup parameter that they can adjust if needed.

@jodavies (Collaborator, Author) commented on Oct 9, 2025

Right, this is an option.

For the record, forcer and minceex are ~25% slower if the use of mpoly is forced unconditionally.

@jodavies (Collaborator, Author) commented

Updated: the term counting was in fact not correct.

I also ran a few benchmarks for mul_ and div_; the optimal threshold for those is not exactly the same (for the tested polynomials) as for gcd_, but the value I chose before is broadly OK.

@jodavies (Collaborator, Author) commented on Jan 8, 2026

Updated: removed the "dummy variable" which forces the use of mpoly in sparse univariate scenarios. It is a performance improvement for the mpoly routines not to have the unused dummy variable in their context.

Edit: actually I had my numbers back to front. Removing the dummy variable before creating the FLINT contexts seems to cause that to take significantly more time, for some reason. For now I will just leave the dummy variable there.

These cases are really not important for real physics computations anyway.
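
For reference, at the FLINT level the difference is just the number of generators in the mpoly context; the sketch below assumes the dummy variable is nothing more than an extra unused generator, and the names are illustrative. As far as I understand, every generator in the context costs an entry in each term's packed exponent vector, which is why carrying an unused one around slows the mpoly routines down.

```c
/* Sketch only, assuming the dummy variable is just an extra generator in the
 * fmpz_mpoly context. Variable counts and names are illustrative. */
#include <flint/fmpz_mpoly.h>

int main(void)
{
    fmpz_mpoly_ctx_t with_dummy, without_dummy;

    /* Context padded with an unused dummy generator: x plus one spare.
     * Every term then also stores an exponent for the spare variable. */
    fmpz_mpoly_ctx_init(with_dummy, 2, ORD_LEX);

    /* Genuinely univariate context: x only. */
    fmpz_mpoly_ctx_init(without_dummy, 1, ORD_LEX);

    fmpz_mpoly_ctx_clear(with_dummy);
    fmpz_mpoly_ctx_clear(without_dummy);
    return 0;
}
```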
