It currently takes around 2 seconds to load the pirl module. Most of the time this isn't a big deal, but because of a combination of two factors:
- we restart each Ray worker after a single call to work around TensorFlow/CUDA brokenness;
- a complete experiment can consist of hundreds of calls;

this adds up to a significant overhead when the results are mostly cached. (The overhead is still there for non-cached calls, but it is negligible compared to the cost of e.g. running an RL/IRL algorithm.)
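
One way to see where the ~2 seconds go (just a suggestion, not something we already do) is to time the import directly; CPython's `python -X importtime` flag (3.7+) also gives a per-module breakdown:

```python
import time

start = time.perf_counter()
import pirl  # noqa: E402 -- imported mid-script on purpose so we can time it
print(f"import pirl took {time.perf_counter() - start:.2f}s")
```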
I'm not sure how much we can realistically shave off. Just importing Ray takes ~0.5s, and we can't avoid that. Importing TensorFlow seems to be the biggest culprit; we could delay it, but we'd need to make the imports in pirl.agents and pirl.irl on-demand (currently they all get loaded by the config), roughly as sketched below.
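
A minimal sketch of what on-demand loading could look like, assuming the config can refer to agents by dotted path instead of importing them eagerly; the `AGENTS` registry and the `pirl.agents.ppo:PPOAgent` entry are hypothetical, only the idea of deferring the TensorFlow-heavy imports comes from this issue:

```python
import importlib

# Hypothetical registry: the config stores dotted paths rather than the
# imported objects themselves, so nothing TensorFlow-heavy runs at import time.
AGENTS = {
    "ppo": "pirl.agents.ppo:PPOAgent",  # hypothetical module/class names
}

def load_agent(name):
    """Resolve an agent lazily; TensorFlow only gets imported here."""
    module_name, _, attr = AGENTS[name].partition(":")
    module = importlib.import_module(module_name)
    return getattr(module, attr)
```

The same pattern could apply to pirl.irl, so that a worker only pays the TensorFlow import cost when it actually runs a non-cached call.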