added makefile #34
base: master
Conversation
Hi @de-code, thanks! You don't need to remove Question: we don't want to use flake8 for undefined variables?
It failed for me, trying to find a CUDA binary. Maybe if we created a Docker container with the CUDA binary, it would automatically use the CPU if no GPU was found?
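A containerised setup along those lines could look like the following sketch. This is hypothetical (the base image tag, file layout, and entry point are assumptions, not part of this PR); the idea, per the discussion, is that the image bundles the CUDA runtime libraries so the same image can be deployed everywhere, with TensorFlow hopefully falling back to the CPU when no GPU is visible:

```dockerfile
# Hypothetical Dockerfile sketch; tag and layout are illustrative only.
# The official GPU images ship the CUDA runtime libraries, so the CUDA
# binaries TensorFlow looks for are present even on CPU-only hosts.
FROM tensorflow/tensorflow:1.12.0-gpu-py3

WORKDIR /app
COPY requirements.txt .
RUN pip install -r requirements.txt
COPY . .

# Placeholder entry point; the actual delft invocation would go here.
CMD ["python", "-m", "delft"]
```

The trade-off is a noticeably larger image, and whether the CPU fallback actually works still depends on the CUDA driver library being resolvable inside the container.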
I am not sure I understand you correctly here. I usually don't have many flake8 exceptions; I tend to have more pylint exceptions. It should certainly highlight undefined variables and fail on them. Sometimes it gets it wrong with third-party libraries. My recommendation is to use the default rules with the fewest possible exceptions, make the build fail, and revise the code or rules to make the checks pass (that could be a separate PR to keep this PR small).
With all CUDA libs installed, I think you need to add as an env variable. But the CPU fallback works fine for me; I am commonly deploying on servers without a GPU and there is no issue with the
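One lightweight way to pick between the two builds at install time, without importing TensorFlow at all, is to probe for the NVIDIA driver tooling and choose the package accordingly. This is a hypothetical sketch (the function name and the `nvidia-smi` heuristic are assumptions, not anything delft actually does):

```python
import shutil


def pick_tensorflow_package():
    """Return the TensorFlow pip package name to install.

    Heuristic: if nvidia-smi is on the PATH, assume a usable GPU and
    prefer the GPU build; otherwise fall back to the plain CPU package.
    The package names match the TF 1.x CPU/GPU split discussed here.
    """
    if shutil.which("nvidia-smi"):
        return "tensorflow-gpu"
    return "tensorflow"


print(pick_tensorflow_package())
```

A makefile or install script could call this to decide which package to pass to `pip install`, instead of requiring the user to set a flag by hand.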
Sorry about flake8, I overlooked the corresponding part in the makefile; there is no difference from the current setting in .travis.yml
This is what I am getting when trying to use
Setting
Yes, it should be the same settings at the moment. Going forward
I looked a bit. It appears that So I guess either we could strongly indicate that DeLFT should be installed with conda, or we indeed need a mechanism to either install
Since TensorFlow provides two separate Python packages, it seems to make sense to provide a mechanism to use either of the two. It would have been better if they provided GPU as an extra (e.g. The mechanism seems easy when using delft as an application, as the user can just pass it in. When using it as a library it might be slightly more complicated. I wouldn't want to install CUDA libraries globally. Some libraries built from source might use the presence of the libraries as a clue to build for GPU. I never actually used
Thanks Daniel, that all makes sense indeed. About conda, see this article about the tensorflow install: the idea is to work in an environment anyway, so there is no system-wide impact of installing CUDA/cuDNN. Given that, we can then simply use the
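Along those lines, a minimal conda environment could be declared as follows (a sketch; the environment name and version pins are assumptions). The point is that installing tensorflow-gpu from conda resolves matching cudatoolkit and cudnn packages inside the environment, with no system-wide install:

```yaml
# Hypothetical environment.yml sketch; names and pins are illustrative.
name: delft-gpu
channels:
  - defaults
dependencies:
  - python=3.6
  - tensorflow-gpu=1.12   # conda pulls in compatible cudatoolkit + cudnn
  - pip
  - pip:
      - delft             # assuming a pip-installable delft
```

It would be created with `conda env create -f environment.yml` and activated before running anything.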
Are you suggesting switching to conda for development? Another thing to consider is that someone might already have an optimized version of TensorFlow installed, e.g. Google ML Engine will already have TensorFlow installed with GPU enabled. Or at least in early times it recommended compiling the CPU version (not sure if they have baked that in; compiling TF was no fun). Which would be an argument not to have TensorFlow as a direct dependency of the delft library itself (
Yes, apparently it solves the issue of I think we don't want to use an already installed version of TensorFlow? First, it would be a bit of a miracle to have the right cocktail of compatible delft, tensorflow, CUDA and cuDNN versions, and even then it would likely break as soon as there is an update somewhere (in particular in delft). I think working only within an environment is the only manageable solution.
Personally I would be in favour of leaving it more up to the user. It's easy to install TensorFlow; it's more difficult to skip installing it while installing the other dependencies. In any case, in the interest of avoiding long-running branches / PRs, would you accept the PR if I removed the switch for the CPU / GPU version and added that back to
My plan for this was:
This is work-in-progress.
It introduces a makefile which seems to be a good way to encapsulate commands (I would also be in favour of dockerizing it in the future).

I don't have a GPU on my laptop, therefore I moved the tensorflow_gpu dependency out. You can run make dev-venv USE_GPU=1 to install it with that dependency.

It might be good to enforce flake8 and pylint and add exceptions where necessary. Otherwise nobody is probably checking the warnings.

Using .flake8 (and .pylintrc) instead of command line arguments would also help other editor integrations of those tools to pick up the same rules.

/cc @kermitt2