Initially, the data and the script were deleted when the user deleted the HPC-Workflow-Manager job they were tied to. This behaviour has been disabled because uploading huge datasets is time-consuming and users may want to re-use datasets in other jobs. Deleting a job currently removes the local data but keeps the data on the remote cluster.
Currently, users can delete their data manually by connecting to the cluster over SSH.
I have been thinking about adding a separate way to upload and download data, independently of the scripts. This could be a new window titled "Data Manager" listing the datasets (remote directories), which the user could upload or delete one by one. These datasets would then be available for use and re-use by scripts uploaded separately by the users.
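To make the "Data Manager" idea more concrete, here is a rough Swing sketch of the proposed window: a list of remote dataset names with upload and delete buttons. The dataset names and the action bodies are placeholders; wiring to the actual transfer code is deliberately left out.

```java
import java.awt.BorderLayout;
import javax.swing.DefaultListModel;
import javax.swing.JButton;
import javax.swing.JFrame;
import javax.swing.JList;
import javax.swing.JOptionPane;
import javax.swing.JPanel;
import javax.swing.JScrollPane;
import javax.swing.SwingUtilities;

// Rough UI sketch of the proposed "Data Manager" window: a list of remote
// datasets (directories) with upload/delete actions. Names and action
// bodies are placeholders, not the real transfer code.
public class DataManagerFrame extends JFrame {

    private final DefaultListModel<String> datasets = new DefaultListModel<>();
    private final JList<String> datasetList = new JList<>(datasets);

    public DataManagerFrame() {
        super("Data Manager");
        datasets.addElement("dataset-2020-01-stitching");
        datasets.addElement("dataset-2020-02-deconvolution");

        JButton upload = new JButton("Upload new dataset...");
        upload.addActionListener(e -> {
            // Placeholder: open a directory chooser and start the upload.
        });

        JButton delete = new JButton("Delete selected");
        delete.addActionListener(e -> {
            String selected = datasetList.getSelectedValue();
            if (selected != null && JOptionPane.showConfirmDialog(this,
                    "Delete \"" + selected + "\" from the cluster?",
                    "Delete dataset",
                    JOptionPane.YES_NO_OPTION) == JOptionPane.YES_OPTION) {
                // Placeholder: remove the remote directory, then refresh.
                datasets.removeElement(selected);
            }
        });

        JPanel buttons = new JPanel();
        buttons.add(upload);
        buttons.add(delete);

        add(new JScrollPane(datasetList), BorderLayout.CENTER);
        add(buttons, BorderLayout.SOUTH);
        setDefaultCloseOperation(DISPOSE_ON_CLOSE);
        pack();
    }

    public static void main(String[] args) {
        SwingUtilities.invokeLater(() -> new DataManagerFrame().setVisible(true));
    }
}
```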
Alternatively, a dialog could be added when a job is deleted, asking the user whether they also wish to delete the data associated with it. This solution would not be ideal, however: if they chose not to the first time, they would still have to delete the job's data manually later.
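If the dialog route were taken instead, the check itself would be small; a minimal sketch is below, where the job name and the cleanup call are placeholders standing in for whatever actually removes the job's remote directory.

```java
import javax.swing.JOptionPane;

// Sketch of the alternative approach: when the user deletes a job, ask once
// whether the remote data should be removed as well.
public class DeleteJobPrompt {

    /** Returns true if the user also wants the remote data removed. */
    public static boolean confirmRemoteDataDeletion(String jobName) {
        int choice = JOptionPane.showConfirmDialog(null,
                "Also delete the data of job \"" + jobName + "\" on the cluster?",
                "Delete job", JOptionPane.YES_NO_OPTION);
        return choice == JOptionPane.YES_OPTION;
    }

    public static void main(String[] args) {
        if (confirmRemoteDataDeletion("job_42")) {
            // Placeholder: call the remote cleanup here.
            System.out.println("User chose to delete the remote data too.");
        }
    }
}
```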
At NEUBIAS, people asked how long their data stays with us:
For users' comfort and peace of mind, we should add a button that clears up the user's space on the cluster.
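A sketch only: such a button could delegate to a small helper that removes the user's remote directory over SSH. The example below uses the JSch library; the host name, user, key path and remote directory are hypothetical placeholders, and it is not assumed here that HPC-Workflow-Manager actually uses JSch for its transfers.

```java
import com.jcraft.jsch.ChannelExec;
import com.jcraft.jsch.JSch;
import com.jcraft.jsch.Session;

// Sketch: remove a remote directory over SSH with JSch, as a cleanup button
// could do. Host, user, key path and remote directory are placeholders.
public class RemoteCleanup {

    public static void deleteRemoteDirectory(String host, String user,
            String privateKeyPath, String remoteDir) throws Exception {
        JSch jsch = new JSch();
        jsch.addIdentity(privateKeyPath);
        Session session = jsch.getSession(user, host, 22);
        session.setConfig("StrictHostKeyChecking", "no");
        session.connect();
        try {
            ChannelExec channel = (ChannelExec) session.openChannel("exec");
            // Quote the path to survive spaces; assumes a POSIX shell remotely.
            channel.setCommand("rm -rf '" + remoteDir + "'");
            channel.connect();
            while (!channel.isClosed()) {
                Thread.sleep(100);
            }
            channel.disconnect();
        } finally {
            session.disconnect();
        }
    }

    public static void main(String[] args) throws Exception {
        deleteRemoteDirectory("cluster.example.org", "alice",
                "/home/alice/.ssh/id_rsa", "/scratch/alice/job_42");
    }
}
```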