The distributed example doesn't work #251
I did get it to work, but indeed it's a bit unreliable, as the docs say. I would be really excited to see this being picked up, however. :)
Hi, I am the guy who wrote Regarding performance: how much
In fact, I'm trying to optimize a function that takes a few minutes to complete, so running this distributed would speed things up a lot and I wouldn't need to wait half an hour for one generation. Is PyPy in combination with distributed computing advised? I will try to take a look at the code tomorrow.
I'm having trouble merging the repositories, as I have almost never done it before. The problem is mainly: how can I merge this so I can start selecting which code should stay and which shouldn't? Is there any other way I can contact you?
I haven't tested it. In theory it should work as long as you set
For anyone else reading this: I've responded to a separate issue in my fork here.
The example code at https://github.com/CodeReclaimers/neat-python/blob/master/examples/xor/evolve-feedforward-distributed.py doesn't seem to work, and I haven't been able to get it running.
```
lib\multiprocessing\reduction.py", line 60, in dump
    ForkingPickler(file, protocol).dump(obj)
AttributeError: Can't pickle local object '_ExtendedManager._get_manager_class.<locals>._EvaluatorSyncManager'
```
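For context on that traceback: on Windows, `multiprocessing` starts child processes with the `spawn` method, which pickles objects to send them to the worker, and classes defined inside a function (like `_EvaluatorSyncManager`, which the traceback shows is created inside `_ExtendedManager._get_manager_class`) cannot be pickled by reference. A minimal sketch reproducing the same class of failure (the nested class here is a stand-in, not the actual neat-python code):

```python
import pickle

def make_manager_class():
    # Nested class, analogous in spirit to _EvaluatorSyncManager being
    # defined inside _ExtendedManager._get_manager_class: its qualified
    # name contains '<locals>', so pickle cannot look it up by name.
    class _LocalManager:
        pass
    return _LocalManager

def try_pickle(obj):
    """Return the error message if pickling fails, or None on success."""
    try:
        pickle.dumps(obj)
        return None
    except (AttributeError, pickle.PicklingError) as exc:
        return str(exc)

# Instances of a locally defined class fail to pickle, just like the
# objects ForkingPickler tries to send to the spawned worker process.
error = try_pickle(make_manager_class()())
print(error)
```

The usual fixes are to define such classes at module top level, or to run on a platform where `fork` is the start method (so nothing needs to be pickled); which of these applies to neat-python's internals would be up to the maintainers.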
It would be really cool to run this on multiple devices and have it train a lot quicker.