input data #26
Comments
Hi, you can refer to the pre-processing code.
Hi, after trying to interpret the code in sep_HHAR_data.py (and seeing the previous comment), I understand that the 120 values of training data come from the FFT of the interpolated signal, but I don't understand the eval set. In the code you hold one user out and can generate 200 eval files — can you explain this part? Also, if my sampling rate is already fixed regardless of which device I use to collect the data, should I still perform interpolation during preprocessing? I really hope you will reply to this question; just ignore the previous one. Thank you
Hi, the data was measured with different people holding different devices. In the "one_user_out" mode, the code simply picks the data from one particular user as the eval data, with the rest used for training. For your own data, I think interpolation is still needed. Even if you fix your sampling rate, the timestamp attached to each measurement may not follow the exact sampling rate (due to background workload on the device). Interpolation helps to get "uniformly sampled" data for the FFT.
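The interpolation step described above can be sketched roughly as follows. This is not the repo's actual code from sep_HHAR_data.py; the 50 Hz target rate, function name, and column layout are assumptions for illustration:

```python
import numpy as np

def resample_uniform(timestamps, values, target_rate=50.0):
    """Linearly interpolate irregularly timed sensor readings onto a
    uniform time grid so the signal is suitable for an FFT."""
    t0, t1 = timestamps[0], timestamps[-1]
    n = int((t1 - t0) * target_rate) + 1
    uniform_t = t0 + np.arange(n) / target_rate
    # Interpolate each sensor axis independently onto the uniform grid.
    uniform_v = np.stack(
        [np.interp(uniform_t, timestamps, values[:, axis])
         for axis in range(values.shape[1])],
        axis=1)
    return uniform_t, uniform_v

# Fake timestamps that jitter around a nominal 50 Hz (0.02 s) interval,
# as real device timestamps tend to do.
t = np.cumsum(np.full(100, 0.02) + np.random.uniform(-0.002, 0.002, 100))
x = np.random.randn(100, 3)  # 3-axis accelerometer readings
ut, uv = resample_uniform(t, x)
```

After this step the samples in `uv` are evenly spaced at exactly 1/50 s, even though the raw timestamps drifted.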
So from the selected user a, it will generate 200 eval files — will the null files be processed too? And what exactly happens in this code: `for idx in range(len(X)): for idx in range(len(evalX)):`
Input data needs to have the same shape. Some data samples are short, so we pad these samples with zeros at the end, and we mark those zeros during training.
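The pad-and-mark idea described above can be sketched like this. The variable names and shapes are illustrative, not the repo's; the point is that a mask records which positions hold real data so padding can be ignored during training:

```python
import numpy as np

def pad_with_mask(samples, max_len):
    """Zero-pad variable-length samples to a common length and return a
    mask that is 1 where real data sits and 0 over the padding."""
    n_features = samples[0].shape[1]
    padded = np.zeros((len(samples), max_len, n_features))
    mask = np.zeros((len(samples), max_len))
    for i, s in enumerate(samples):
        padded[i, :len(s)] = s       # copy the real measurements
        mask[i, :len(s)] = 1.0       # mark them as valid
    return padded, mask

# Two samples of different lengths, each with 6 sensor channels.
samples = [np.ones((3, 6)), np.ones((5, 6))]
padded, mask = pad_with_mask(samples, max_len=5)
```

During training, a loss or attention computation can multiply by `mask` so the appended zeros never contribute.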
Thank you for replying. One more thing about the data preprocessing: user a has 28 files, e.g. a-gear_1-bike, a-gear_1-null, and so on. I still can't understand how 28 files can generate 200 eval files — what is the idea behind this preprocessing step? And do you also include files such as a-gear_1-null, a-gear_2-null, etc. in the eval set? I haven't touched the DeepSense framework yet because I want to understand how you preprocess the data first. I am currently studying CNNs on Coursera, so I hope you won't mind if I ask questions about the DeepSense framework later on.
I have a similar dataset collected at 100 Hz. Can you tell me how you preprocess your data?
Hi, I want to learn DeepSense using my own gait data collection, but for gait recognition rather than activity recognition. The data was recorded at 50 Hz in CSV files from 150 participants; the format is a timestamp followed by 3-axis accelerometer and 3-axis gyroscope values. How should I preprocess the data so it matches your eval or train dataset?
Since my sampling rate is fixed at 50 samples per second, should I still perform the time interpolation?
I would really appreciate an explanation of your dataset. Thank you
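Based only on the steps the maintainer described earlier in this thread (interpolate, split into short windows, take the FFT of each window), one could sketch the windowing-plus-FFT step for such a timestamp + 6-channel CSV like this. The window length, column order, and function name are assumptions, not the repo's code; with a 0.2 s window at 50 Hz each window holds 10 samples, giving 2 × 10 × 6 = 120 values, which happens to match the 120 mentioned earlier:

```python
import numpy as np

def rows_to_fft_features(data, rate=50, window_s=0.2):
    """Split (timestamp, ax, ay, az, gx, gy, gz) rows into short windows
    and take the FFT of each axis, keeping real and imaginary parts."""
    win = int(rate * window_s)      # samples per window (10 here)
    sensors = data[:, 1:]           # drop the timestamp column -> 6 channels
    n_win = len(sensors) // win
    feats = []
    for w in range(n_win):
        seg = sensors[w * win:(w + 1) * win]    # (win, 6) time-domain slice
        spec = np.fft.fft(seg, axis=0)          # complex spectrum per axis
        # Stack real and imaginary parts, then flatten to one vector.
        feats.append(np.concatenate([spec.real, spec.imag], axis=0).T.ravel())
    return np.array(feats)

rows = np.random.randn(500, 7)  # 10 s of fake 50 Hz readings
features = rows_to_fft_features(rows)
```

Each row of `features` is one window's 120-dimensional frequency-domain vector; a fixed 50 Hz recording would skip (or trivially pass through) the interpolation step before this.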