AttributeError: 'HParams' object has no attribute 'builder' #32
The posted solution is to use the vocoder from AutoVC. The two projects share the same vocoder, so it is not included in this repo.
It is not clear which files must be extracted from AutoVC into speechsplit-master/assets. The README, your comment above, and your solution to #1 all mention using 'the vocoder from AutoVC', but many files in AutoVC fit that description. For instance, AutoVC contains the files:
And the AutoVC README links three 'models':
I have tried running demo.ipynb with different combinations of these files moved to speechsplit-master/assets, and have even tried moving the entire contents of AutoVC into speechsplit-master/assets. I also ran vocoder.ipynb in the AutoVC-master folder and then tried running the demo with vocoder-checkpoint.ipynb placed in different folders (speechsplit-master, speechsplit-master/assets, speechsplit-master/.ipynb_checkpoints). All of these attempts produced the same AttributeError. Could you please be clearer about which files must be placed where, or at least post a screenshot of the speechsplit-master and speechsplit-master/assets directories with the required files in the appropriate folders?
First of all, you need to install the appropriate version of r9y9's WaveNet vocoder, which is a large and delicate repo in itself. We did not include it in our repo in order to keep ours simple and clear. In our project, the vocoder is the part that converts the spectrogram to audio, and not many files in AutoVC fit that description. demo.ipynb has two cells: the first cell does not require the WaveNet vocoder, only the second cell does.
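For reference, a minimal sketch of what that second cell typically does, following the same pattern as AutoVC's vocoder.ipynb (the paths and the spectrogram file name below are assumptions, not something fixed by this repo; adjust them to your own layout):

```python
# Sketch of the spectrogram-to-waveform step, assuming r9y9's wavenet_vocoder is
# installed and AutoVC's synthesis.py / hparams are importable. Paths are assumptions.
import numpy as np
import torch
import soundfile as sf
from synthesis import build_model, wavegen  # from the AutoVC repo

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# Build the WaveNet vocoder and load the pretrained weights downloaded to assets/.
model = build_model().to(device)
checkpoint = torch.load("assets/checkpoint_step001000000_ema.pth", map_location=device)
model.load_state_dict(checkpoint["state_dict"])

# Convert a mel-spectrogram produced by the first (conversion) cell into audio.
spect = np.load("assets/example_spect.npy")  # hypothetical path to a saved spectrogram
waveform = wavegen(model, c=spect)
sf.write("results.wav", waveform, 16000)
```

Note that synthesis.py reads a builder field from the hparams it imports; if a different hparams module without that field ends up on the Python path, you will most likely see exactly the 'HParams' object has no attribute 'builder' error from the title of this issue.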
Thank you for the more detailed explanation. However, I followed your instructions with a clean install and received new errors, including, in the # demo conversion cell:
and in the # spectrogram to waveform cell:
I am sorry to be pedantic about the phrasing of these steps, but there seem to be instructions necessary to run the demo that are missing from the README and scattered among solved issues in this repo. I would like to help make the README clearer by adding the missing steps and clarifying the ones that may be difficult to interpret. Again, a screenshot of the directories would be extremely helpful and would give context to your steps. I am happy to move this to a separate issue if necessary. Here are the steps you've written in the speechsplit-master README and what I have inferred from them:
“Download pre-trained models to assets”:
“Download the same WaveNet vocoder model as in AutoVC to assets”
In issue #1, you said
Does this mean downloading AutoVC, running 'conversion.ipynb' and 'vocoder.ipynb', and then moving the file ‘vocoder-checkpoint’ into speechsplit-master/assets? Or does it mean moving ‘checkpoint_step001000000_ema.pth’? Those are the only two steps stated in the README before the step that says “Run demo.ipynb”. I can tell I have misunderstood step 2, though no steps in the README mention ‘synthesis.py’ or ‘hparams’ either; see the small sanity check at the end of this comment for what I am currently assuming. So, to clarify the other steps:
Thank you for taking the time to answer. I understand that it is difficult to troubleshoot this software for inexperienced users.
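To make my assumption concrete, here is the small sanity check I run before the demo. The file name is my guess from the AutoVC README, not something this repo's README states:

```python
# Check for the files I *assume* need to be in assets/ (my guess, not documented).
import os

expected = [
    "assets/checkpoint_step001000000_ema.pth",  # WaveNet vocoder weights from AutoVC
    # plus the SpeechSplit pre-trained models from step 1, whatever their exact names are
]
for path in expected:
    print(path, "found" if os.path.exists(path) else "MISSING")
```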
I followed both of the steps mentioned by @leijue222:
I am still receiving the same error. Please help.
When running demo.ipynb, I am presented with this error:
This has been mentioned before in #1, but no solution has been posted. This repo has very few instructions, and the ones that exist are vague and lack detail. It would be helpful to have a more comprehensive installation tutorial.
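As far as I can tell, the message itself just means that the hyper-parameter object being used has no builder field where one is expected. A tiny standalone illustration of the same failure mode (not this repo's actual code; the attribute names are only for illustration):

```python
# Standalone illustration of the failure mode, not the project's code:
# an hparams-style object without a 'builder' attribute is used where the
# WaveNet vocoder's hparams (which define 'builder') are expected.
class HParams:
    def __init__(self, **kwargs):
        self.__dict__.update(kwargs)

vocoder_hparams = HParams(builder="wavenet", sample_rate=16000)
other_hparams = HParams(dim_neck=8, freq=8)  # illustrative fields only, no 'builder'

print(vocoder_hparams.builder)  # prints "wavenet"
print(other_hparams.builder)    # AttributeError: 'HParams' object has no attribute 'builder'
```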