Fix LMMAES #56
base: main
Conversation
Thank you very much for the PR! I left some comments and recommendations.
```diff
@@ -158,7 +138,7 @@ def ask_strategy(
             state.c_d,
             state.gen_counter,
         )
-        return x, state
+        return x, state.replace(z=z)
```
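For context, a minimal sketch of how an LM-MA-ES candidate x is produced from its standard-normal draw z in the ask step (my own illustration based on this discussion, not the actual evosax code; the function and field names beyond those visible in the diff are assumptions):

```python
import jax
import jax.numpy as jnp


def ask_single_sketch(key, mean, sigma, M, c_d, gen_counter):
    """Hypothetical sketch of sampling one LM-MA-ES candidate.

    mean: (num_dims,) search distribution mean
    sigma: scalar step size
    M: (memory_size, num_dims) stored direction vectors
    c_d: (memory_size,) learning rates for the rank-one transforms
    """
    z = jax.random.normal(key, shape=mean.shape)  # standard normal draw
    d = z
    # Apply up to min(gen_counter, memory_size) rank-one transforms:
    # d <- (1 - c_d[j]) * d + c_d[j] * M[j] * (M[j] @ d)
    for j in range(min(gen_counter, M.shape[0])):
        d = (1.0 - c_d[j]) * d + c_d[j] * M[j] * jnp.dot(M[j], d)
    x = mean + sigma * d
    return x, z  # z is what the PR keeps in the state via state.replace(z=z)
```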
Maybe I am missing it, but where is the z needed? An array of shape (popsize, num_dims) will require quite a lot of memory for many applications. Can we reconstruct it from the sampled x candidates instead?
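For a rough sense of scale (the numbers below are purely illustrative, not taken from this PR):

```python
# Hypothetical sizing of the extra state from storing z with shape (popsize, num_dims).
popsize, num_dims = 256, 1_000_000          # e.g. a neuroevolution-sized parameter vector
z_gigabytes = popsize * num_dims * 4 / 1e9  # 4 bytes per float32 entry
print(f"z buffer: {z_gigabytes:.1f} GB")    # ~1.0 GB of extra state per generation
```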
Yes, this modification is the largest difference in memory usage between the original evosax code and the numpy code. We have to use z to update p_sigma and M (please see the update functions), but to get z back from the solution vectors we would have to calculate inverse matrices, which may cause both numerical errors and extra cost. I think the procedure for getting z from the solutions would be: iteratively apply the inverse of (1 - c_d[j]) * I + c_d[j] * jnp.outer(M[j, :], M[j, :]), like in the sample function, selecting j by checking the generation. Are there any linear algebra tricks that avoid calculating inverse matrices? I can't come up with any.
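A minimal sketch of that reconstruction procedure, under the same assumed field names as the ask-step sketch above (again my own illustration, not the evosax API); each step forms and solves an n x n system, which is where the cost and numerical-error concern comes from:

```python
import jax.numpy as jnp


def reconstruct_z_sketch(x, mean, sigma, M, c_d, gen_counter):
    """Hypothetical sketch: recover z from a sampled x by undoing the
    rank-one transforms of the ask step in reverse order."""
    num_dims = mean.shape[0]
    d = (x - mean) / sigma
    # Invert A_j = (1 - c_d[j]) * I + c_d[j] * jnp.outer(M[j], M[j]) for
    # j = k - 1, ..., 0, where k = min(gen_counter, memory_size).
    for j in reversed(range(min(gen_counter, M.shape[0]))):
        A_j = (1.0 - c_d[j]) * jnp.eye(num_dims) + c_d[j] * jnp.outer(M[j], M[j])
        d = jnp.linalg.solve(A_j, d)  # same effect as applying inv(A_j), a bit more stable
    return d  # the reconstructed z
```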
Mhmm, I am not sure -- but I guess this is a bit of a problem. It will probably be hard to scale to neuroevolution even though it is the limited-memory version :( I will give this some more thought.
Hello. If possible, could you please approve the workflow and run the tests before working out a new implementation? I am wondering whether there are any implementation mistakes at this point.
add update best fitness and best member.
updating best_fitness and best_member is done in the Strategy's tell function, so we don't need it here.
fix type check typo
Codecov Report
```diff
@@            Coverage Diff             @@
##             main      #56      +/-   ##
==========================================
- Coverage   88.86%   88.85%   -0.01%
==========================================
  Files          76       76
  Lines        4796     4793       -3
==========================================
- Hits         4262     4259       -3
  Misses        534      534
```
This pull request is for this issue.
Compared with the numpy implementation, the original LM_MA_ES in evosax seems to be incorrect with respect to the default parameter settings and the parameter update expressions, so I wrote code that corresponds to the numpy implementation.
I used the black linter and tested with evosax/tests/test_strategy_api.py. This is my first contribution, so please tell me if there are any problems.