Zero Input Size #18
Before images are passed to SMIRK, they are tightly cropped around the face using facial landmarks. The error indicates that this step failed and returned a 0x0 crop box.
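For illustration, this is the kind of guard that would catch the failure earlier; it is a minimal sketch with hypothetical names (safe_crop, margin), not the actual cropping code in crop_dataset.py:

```python
import torch

def safe_crop(image: torch.Tensor, landmarks: torch.Tensor, margin: float = 0.2):
    """Crop an (H, W, 3) image around (N, 2) pixel landmarks; return None if the box is degenerate."""
    xs, ys = landmarks[:, 0], landmarks[:, 1]
    x_min, x_max = float(xs.min()), float(xs.max())
    y_min, y_max = float(ys.min()), float(ys.max())
    # Expand the landmark bounding box by a relative margin, then clamp it to the image bounds.
    w, h = x_max - x_min, y_max - y_min
    x0 = max(0, int(x_min - margin * w))
    y0 = max(0, int(y_min - margin * h))
    x1 = min(image.shape[1], int(x_max + margin * w))
    y1 = min(image.shape[0], int(y_max + margin * h))
    if x1 - x0 <= 0 or y1 - y0 <= 0:
        return None  # degenerate (0x0 or inverted) box: let the caller skip this frame
    return image[y0:y1, x0:x1]
```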
Most of the face, except for one of the ears, is within the frame in the original image, but only half of the face remains in the cropped image. Is it possible to skip these problematic images, or to improve the cropping?
Another error occurred. Is this error also the result of bad input data? I have checked the original image: every part of the face is within the frame, and the face is sharp and well lit.
@stas-polukeev Thanks for the advice! Your instructions are clear to me, but it would be great if you could point to the exact files and code lines so that I can verify my understanding.
@stas-polukeev Feel free to make a pull request, by the way; I'll gladly review and merge it.
@KelianB Thanks, I've submitted a pull request! It's not very elegant, but it works. Maybe it would be better to add a failure flag to the dict instead of replacing it with None, but since a non-empty image is central to this pipeline, crashing loudly on a failed sample is probably better for debugging.
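For anyone finding this later, the general shape of the fix looks roughly like the following; this is a simplified sketch of the approach (the dataset returns None for a failed crop, and a custom collate_fn drops those samples), not the exact code from the merged PR:

```python
from torch.utils.data import DataLoader
from torch.utils.data.dataloader import default_collate

def skip_none_collate(batch):
    # Drop samples for which cropping failed (i.e. __getitem__ returned None).
    batch = [sample for sample in batch if sample is not None]
    if not batch:
        raise RuntimeError("every sample in this batch failed face cropping")
    return default_collate(batch)

# batch_size is illustrative; dataset_train is the training set built in train.py
dataloader_train = DataLoader(dataset_train, batch_size=8, collate_fn=skip_none_collate)
```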
Closing this now as the PR has been merged. @Michael-wzl let me know if you still run into issues. |
When running python train.py --config configs/example_smirk.txt, the following error occurred:
```
Epoch 1, Iter 24: 0%| | 0/45 [00:41<?, ?it/s, loss=4.09]
Traceback (most recent call last):
  File "/home/zlwang/SPARK/TrackerAdaptation/train.py", line 344, in <module>
    adapt_tracker(args, wrapper, dataset_train, dataset_val, dataset_test)
  File "/home/zlwang/SPARK/TrackerAdaptation/train.py", line 108, in adapt_tracker
    for views in dataloader_train:
  File "/home/zlwang/SPARK/TrackerAdaptation/../MultiFLARE/utils/dataset.py", line 123, in __iter__
    for batch in super().__iter__():
  File "/home/zlwang/miniforge3/envs/SPARK/lib/python3.9/site-packages/torch/utils/data/dataloader.py", line 628, in __next__
    data = self._next_data()
  File "/home/zlwang/miniforge3/envs/SPARK/lib/python3.9/site-packages/torch/utils/data/dataloader.py", line 671, in _next_data
    data = self._dataset_fetcher.fetch(index)  # may raise StopIteration
  File "/home/zlwang/miniforge3/envs/SPARK/lib/python3.9/site-packages/torch/utils/data/_utils/fetch.py", line 58, in fetch
    data = [self.dataset[idx] for idx in possibly_batched_index]
  File "/home/zlwang/miniforge3/envs/SPARK/lib/python3.9/site-packages/torch/utils/data/_utils/fetch.py", line 58, in <listcomp>
    data = [self.dataset[idx] for idx in possibly_batched_index]
  File "/home/zlwang/miniforge3/envs/SPARK/lib/python3.9/site-packages/torch/utils/data/dataset.py", line 295, in __getitem__
    return self.dataset[self.indices[idx]]
  File "/home/zlwang/SPARK/TrackerAdaptation/../MultiFLARE/utils/dataset.py", line 146, in __getitem__
    self.cache[idx] = self.dataset[idx]
  File "/home/zlwang/SPARK/TrackerAdaptation/adapt/crop_dataset.py", line 135, in __getitem__
    crop = self.transforms(crop.permute(2,0,1))  # HW3 to 3HW
  File "/home/zlwang/miniforge3/envs/SPARK/lib/python3.9/site-packages/torchvision/transforms/transforms.py", line 95, in __call__
    img = t(img)
  File "/home/zlwang/miniforge3/envs/SPARK/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1190, in _call_impl
    return forward_call(*input, **kwargs)
  File "/home/zlwang/miniforge3/envs/SPARK/lib/python3.9/site-packages/torchvision/transforms/transforms.py", line 346, in forward
    return F.resize(img, self.size, self.interpolation, self.max_size, self.antialias)
  File "/home/zlwang/miniforge3/envs/SPARK/lib/python3.9/site-packages/torchvision/transforms/functional.py", line 476, in resize
    return F_t.resize(img, size=output_size, interpolation=interpolation.value, antialias=antialias)
  File "/home/zlwang/miniforge3/envs/SPARK/lib/python3.9/site-packages/torchvision/transforms/functional_tensor.py", line 469, in resize
    img = interpolate(img, size=size, mode=interpolation, align_corners=align_corners, antialias=antialias)
  File "/home/zlwang/miniforge3/envs/SPARK/lib/python3.9/site-packages/torch/nn/functional.py", line 3950, in interpolate
    return torch._C._nn.upsample_bilinear2d(input, output_size, align_corners, scale_factors)
RuntimeError: Input and output sizes should be greater than 0, but got input (H: 0, W: 0) output (H: 224, W: 224)
```
Why does this happen, and how can I solve it?
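For reference, the final RuntimeError is torchvision's generic failure when asked to resize an empty image, and it can be reproduced in isolation (a minimal standalone snippet, independent of SPARK):

```python
import torch
from torchvision import transforms

empty = torch.zeros(3, 0, 0)  # a 0x0 crop, as produced by the failed cropping step
transforms.Resize((224, 224))(empty)
# RuntimeError: Input and output sizes should be greater than 0,
# but got input (H: 0, W: 0) output (H: 224, W: 224)
```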