About read data of the model #4
In fact, it is common for training data to contain noise. Here, we would like to examine the impact of observational noise on different RC methods. If this step is not needed, the noise intensity can simply be set to 0. Of course, during testing, if noise-free data are available, they should be treated as the ground truth.
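For concreteness, a minimal sketch of the noise-injection idea described above. The helper name `add_observational_noise` and its signature are illustrative assumptions, not the repository's actual API:

```python
import numpy as np

def add_observational_noise(data, noise_intensity=0.0, seed=None):
    """Add zero-mean Gaussian observational noise to a trajectory.

    noise_intensity=0 leaves the data unchanged; the clean copy should be
    kept aside and used as the ground truth when scoring predictions.
    (Illustrative helper, not the repository's actual API.)
    """
    rng = np.random.default_rng(seed)
    return data + noise_intensity * rng.standard_normal(data.shape)

clean = np.sin(np.linspace(0.0, 10.0, 500))[:, None]  # toy 1-D trajectory
noisy = add_observational_noise(clean, noise_intensity=0.01, seed=0)
# Train the RC model on `noisy`; evaluate forecasts against `clean`.
```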
Thank you very much for your response; it has resolved my confusion. Your work is truly outstanding.
Dear author,
In fact, when the amount of training data and the dimension of the reservoir network are sufficient, the randomness of the matrices W and A has relatively little impact on the prediction and detection results, which is why we fix a random seed. For the dynamical predictions, we selected multiple starting points, so the reported result is the average over multiple predictions. Certainly, in more demanding training settings, running the experiment with multiple random realizations of W and A and averaging the results is also worthwhile; this approach can improve the model's robustness to some extent.
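A hedged sketch of the two points above: a fixed seed makes W and A reproducible across runs, and results can be averaged over several random realizations. The sampling scheme and the scoring function here are placeholders, not the repository's actual construction (which may, e.g., use a sparse A rescaled to a target spectral radius):

```python
import numpy as np

def make_reservoir(seed, n_res=100, n_in=3):
    """Build the input matrix W and reservoir adjacency matrix A from a seed.
    (Illustrative sampling; the repository's actual scheme may differ.)"""
    rng = np.random.default_rng(seed)
    W = rng.uniform(-1.0, 1.0, size=(n_res, n_in))
    A = rng.uniform(-1.0, 1.0, size=(n_res, n_res))
    return W, A

# A fixed seed makes W and A reproducible across runs:
W1, A1 = make_reservoir(seed=42)
W2, A2 = make_reservoir(seed=42)

def score_for(seed):
    """Hypothetical stand-in for training/predicting with one realization."""
    W, A = make_reservoir(seed)
    return float(np.abs(A).mean())  # placeholder score, not a real RMSE

# Averaging over several random realizations, as suggested above:
mean_score = np.mean([score_for(s) for s in [0, 1, 2, 3]])
```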
Thank you for your detailed response; I really appreciate it.
Dear author,
I am deeply thankful for the code you have generously shared, and I have learned a great deal from your paper. However, while studying your code, I encountered a question about the data loading process. Specifically, in the read_data function of Model_HoGRC.py, I noticed that noise is added to input_data at the final step. I am unsure about the rationale behind this and its intended purpose. Wouldn't the training process and the error computations, which seemingly rely on this noise-added data, ideally compare against the original, pristine input_data for prediction accuracy? I observe the same treatment in Model_RC and Model_PRC.
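The pattern being asked about can be sketched as follows. This `read_data` is a hypothetical simplification of the repository's function, not its actual implementation; it returns both the noisy and clean copies so that prediction errors can be scored against the clean series:

```python
import numpy as np

def read_data(noise_intensity=0.01, seed=0):
    """Hypothetical simplification of the loading step in question:
    noise is added to the input data as the final step, while the clean
    copy is also kept so errors can be computed against it."""
    # Stand-in for loading a trajectory from disk.
    t = np.linspace(0.0, 20.0, 1000)
    clean = np.stack([np.sin(t), np.cos(t)], axis=1)
    rng = np.random.default_rng(seed)
    noisy = clean + noise_intensity * rng.standard_normal(clean.shape)
    return noisy, clean

noisy, clean = read_data()
# RMSE between the two copies is roughly the injected noise level:
noise_rmse = np.sqrt(np.mean((noisy - clean) ** 2))
```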