Add Caching to Avoid Re-evaluating Solutions #184

@yhamadi75

Description

Dear Antonio,

I am using jMetalPy for multi-objective optimization with NSGAII, SPEA2, and HYPE. My application requires calling an external simulator for evaluation. Today, I filter out already-seen solutions and return their previously computed evaluations at the application level. However, with several workers this requires shared memory for the cache, which impacts performance.

Ideally, such a caching mechanism, which filters already-seen solutions out of solution_list, could be implemented at the library level, i.e., before calling any specific evaluator (sequential, multiprocess, dask, spark). Doing that would avoid distributed caching problems.

Where is the best place to implement this filtering so it works across different modes (sequential, multiprocess, dask, spark)?

How can we do this without hacking the existing library code, ideally as an extension through inheritance?

I am not sure evaluator.py is the place to start; I guess a higher level in the code, where solution_list is computed, would be better. By design, it should be independent of the number of objectives.
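For reference, here is a minimal sketch of the inheritance-based extension I have in mind: a caching wrapper that dedupes solutions before delegating to any concrete evaluator. The `Solution` class below is a simplified stand-in for jMetalPy's solution types (only `variables` and `objectives` are assumed), and `CachingEvaluator` assumes the evaluator interface is `evaluate(solution_list, problem)`; exact names may differ in the library.

```python
from typing import Dict, List, Tuple


class Solution:
    """Simplified stand-in for a jMetalPy solution (variables + objectives)."""

    def __init__(self, variables: List[float], number_of_objectives: int):
        self.variables = variables
        self.objectives = [0.0] * number_of_objectives


class CachingEvaluator:
    """Wraps any evaluator; only solutions never seen before are evaluated.

    The cache key is the tuple of decision variables, so this works for any
    number of objectives. The wrapped evaluator can be sequential,
    multiprocess, dask, or spark: duplicates never reach the workers, so no
    shared-memory cache is needed on their side.
    """

    def __init__(self, inner_evaluator):
        self.inner = inner_evaluator
        self.cache: Dict[Tuple[float, ...], List[float]] = {}

    def evaluate(self, solution_list, problem):
        # Collect one representative per unseen key (dedupes within the
        # batch as well as against previous generations).
        unseen: Dict[Tuple[float, ...], Solution] = {}
        for s in solution_list:
            key = tuple(s.variables)
            if key not in self.cache and key not in unseen:
                unseen[key] = s
        if unseen:
            # Only the expensive call (external simulator) happens here.
            self.inner.evaluate(list(unseen.values()), problem)
            for key, s in unseen.items():
                self.cache[key] = list(s.objectives)
        # Copy cached objectives back onto every solution in the batch.
        for s in solution_list:
            s.objectives = list(self.cache[tuple(s.variables)])
        return solution_list
```

If the algorithm constructors accept an evaluator parameter (as I believe NSGAII does), this wrapper could simply be passed in place of the default evaluator, with no change to library code.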

Would love to hear your thoughts and happy to help out!

Thanks!
