ValueError: Layer model expects 1 input(s), but it received 2 input tensors. Inputs received: [<tf.Tensor 'IteratorGetNext:0' shape=(None, None, None, None) dtype=uint8>, <tf.Tensor 'IteratorGetNext:1' shape=(None, None, None) dtype=float32>] #380
Comments
Hi, existing dataset files found -> loading.... Epoch 00001: LearningRateScheduler reducing learning rate to 0.001.
Hello. I had the same problem and racked my brain over it for days. By the way, in my case,
@bfhaha I tried your fix, but it did not solve the issue for me. Running the code history = model.fit(x=train_generator, still results in:

ValueError Traceback (most recent call last)
c:\users\dolphin48.conda\envs\tensorflow-gpu\lib\site-packages\tensorflow\python\keras\engine\training.py in fit(self, x, y, batch_size, epochs, verbose, callbacks, validation_split, validation_data, shuffle, class_weight, sample_weight, initial_epoch, steps_per_epoch, validation_steps, validation_batch_size, validation_freq, max_queue_size, workers, use_multiprocessing)
c:\users\dolphin48.conda\envs\tensorflow-gpu\lib\site-packages\tensorflow\python\eager\def_function.py in call(self, *args, **kwds)
c:\users\dolphin48.conda\envs\tensorflow-gpu\lib\site-packages\tensorflow\python\eager\def_function.py in _call(self, *args, **kwds)
c:\users\dolphin48.conda\envs\tensorflow-gpu\lib\site-packages\tensorflow\python\eager\function.py in call(self, *args, **kwargs)
c:\users\dolphin48.conda\envs\tensorflow-gpu\lib\site-packages\tensorflow\python\eager\function.py in _maybe_define_function(self, args, kwargs)
c:\users\dolphin48.conda\envs\tensorflow-gpu\lib\site-packages\tensorflow\python\eager\function.py in _define_function_with_shape_relaxation(self, args, kwargs, flat_args, filtered_flat_args, cache_key_context)
c:\users\dolphin48.conda\envs\tensorflow-gpu\lib\site-packages\tensorflow\python\eager\function.py in _create_graph_function(self, args, kwargs, override_flat_arg_shapes)
c:\users\dolphin48.conda\envs\tensorflow-gpu\lib\site-packages\tensorflow\python\framework\func_graph.py in func_graph_from_py_func(name, python_func, args, kwargs, signature, func_graph, autograph, autograph_options, add_control_dependencies, arg_names, op_return_value, collections, capture_by_value, override_flat_arg_shapes)
c:\users\dolphin48.conda\envs\tensorflow-gpu\lib\site-packages\tensorflow\python\eager\def_function.py in wrapped_fn(*args, **kwds)
c:\users\dolphin48.conda\envs\tensorflow-gpu\lib\site-packages\tensorflow\python\framework\func_graph.py in wrapper(*args, **kwargs)
ValueError: in user code:
@suprateem48 Sorry. I really don't know where the problem is.
@bfhaha Weird thing: I managed to reproduce your memory issue even on my 8 GB RTX 2070 Super, but this error appears only the first time the kernel runs model.fit(). Every subsequent rerun of model.fit() on the same kernel throws the old tuple-related error.
@bfhaha Thanks for your fix. I've tried it as well. Same here: memory error (32 GB RAM Predator, GeForce 1070). I gave it a second try with a reduced dataset of just 8 images, but got the same result.
Hello. I have tried to run the notebook on a Google Compute Engine instance (E2 series, e2-highmem-16, 16 vCPU, 128 GB memory, 80 GB disk). It also crashed. Note that I was running ssd7_training.ipynb, not ssd300.
I had rented a Google Compute Engine instance (N2 series, custom 8 vCPU, 640 GB memory, 200 GB disk) yesterday, and it showed the following message after running for one hour.
@bfhaha Yes, this is the exact memory-related issue I faced as well.
@suprateem48 Have you ever tried this solution? |
Same issue right here; will update if I can find something. Edit: It seems like calling next(val_generator) never terminates? Not quite sure why. But calling tuple() on an infinite generator will cause a memory error.
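The memory error above follows from how tuple() works: it consumes its argument until exhaustion, so on a never-ending generator (which Keras-style data generators are, since they loop forever) it keeps accumulating batches until memory runs out. A small framework-free illustration under that assumption, with itertools.islice as a safe way to peek at a few items (the batch contents here are placeholder strings, not real data):

```python
import itertools

def endless_batches():
    """Toy stand-in for a data generator that loops forever,
    as Keras-style generators are expected to."""
    i = 0
    while True:
        yield (f"batch_X_{i}", f"batch_y_{i}")
        i += 1

# tuple(endless_batches()) would never return: tuple() keeps pulling
# items until the generator is exhausted, which never happens, so
# memory grows until the process is killed.
# To inspect a few items safely, bound the generator first:
first_three = list(itertools.islice(endless_batches(), 3))
print(first_three[0])  # ('batch_X_0', 'batch_y_0')
```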
@JuliusJacobitz Sorry, I don't understand what you mean by "how" I used model.fit. So I just changed
Hi, The reason was that the return of the data_generator was [batch_X, batch_y_encoded].
FYI: Here is my model.fit.
@pirolone888 Thanks. But it showed
Hello, I also converted the code from TensorFlow 1.x to TensorFlow 2.4. I fixed the problem you are having by changing the DataGenerator in object_detection_2d_data_generator.py as follows: ret = []
I simply changed yield ret to yield tuple(ret).
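For readers hitting the same thing, here is a minimal, framework-free sketch of the shape of that change in DataGenerator.generate(); the batch contents below are placeholders, not the real image and encoded-label arrays built in object_detection_2d_data_generator.py:

```python
def generate(batch_size=2):
    """Simplified stand-in for DataGenerator.generate()."""
    while True:
        batch_X = [[0] * 4 for _ in range(batch_size)]            # stand-in images
        batch_y_encoded = [[0.0] * 6 for _ in range(batch_size)]  # stand-in encoded labels
        ret = [batch_X, batch_y_encoded]
        # TF 1.x tolerated a yielded list here; TF 2.x's model.fit treats
        # a yielded list as a multi-input structure, while a tuple is
        # unpacked as (inputs, targets) -- hence the one-line fix:
        yield tuple(ret)  # was: yield ret

item = next(generate())
print(type(item))  # <class 'tuple'>
```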
@daviddanialy Thanks. So just place the following code under the function
It still showed the original error message (Layer model expects 1 input(s), but it received 2 input tensors...). I have already given up on this project and am trying Matterport's Mask R-CNN for object detection instead.
That code is already in the generate function; you just change
@daviddanialy Thanks. It doesn't work for me.
Any solutions? Here is my code:
I have tried
Tried that. It doesn't seem to work for me...
I solved the same issue. My generator output was: I changed it to:
Hello! I am using a customized dataset generator (I modified this Python code: https://github.com/wjddyd66/Tensorflow2.0/blob/master/SSD/voc_data.py), where I send the generated dataset to the training code below. This way I was able to fix the problem above. Hope it helps you guys!
TensorFlow V2 (latest)
Keras (latest)
ssd300_training.ipynb
I have managed to convert most of the V1 code to V2 and successfully run it. I have made changes to all the python files as necessary too. However, this issue occurs on the line
history = model.fit_generator(generator=train_generator,
steps_per_epoch=steps_per_epoch,
epochs=final_epoch,
callbacks=callbacks,
validation_data=val_generator,
validation_steps=ceil(val_dataset_size/batch_size),
initial_epoch=initial_epoch)
Entire error:
Epoch 1/120
Epoch 00001: LearningRateScheduler reducing learning rate to 0.001.
ValueError Traceback (most recent call last)
in
4 steps_per_epoch = 1000
5
----> 6 history = model.fit_generator(generator=train_generator,
7 steps_per_epoch=steps_per_epoch,
8 epochs=final_epoch,
c:\users\dolphin48.conda\envs\tensorflow-gpu\lib\site-packages\tensorflow\python\keras\engine\training.py in fit_generator(self, generator, steps_per_epoch, epochs, verbose, callbacks, validation_data, validation_steps, validation_freq, class_weight, max_queue_size, workers, use_multiprocessing, shuffle, initial_epoch)
1844 'will be removed in a future version. '
1845 'Please use `Model.fit`, which supports generators.')
-> 1846 return self.fit(
1847 generator,
1848 steps_per_epoch=steps_per_epoch,
c:\users\dolphin48.conda\envs\tensorflow-gpu\lib\site-packages\tensorflow\python\keras\engine\training.py in fit(self, x, y, batch_size, epochs, verbose, callbacks, validation_split, validation_data, shuffle, class_weight, sample_weight, initial_epoch, steps_per_epoch, validation_steps, validation_batch_size, validation_freq, max_queue_size, workers, use_multiprocessing)
1097 _r=1):
1098 callbacks.on_train_batch_begin(step)
-> 1099 tmp_logs = self.train_function(iterator)
1100 if data_handler.should_sync:
1101 context.async_wait()
c:\users\dolphin48.conda\envs\tensorflow-gpu\lib\site-packages\tensorflow\python\eager\def_function.py in call(self, *args, **kwds)
782 tracing_count = self.experimental_get_tracing_count()
783 with trace.Trace(self._name) as tm:
--> 784 result = self._call(*args, **kwds)
785 compiler = "xla" if self._experimental_compile else "nonXla"
786 new_tracing_count = self.experimental_get_tracing_count()
c:\users\dolphin48.conda\envs\tensorflow-gpu\lib\site-packages\tensorflow\python\eager\def_function.py in _call(self, *args, **kwds)
825 # This is the first call of call, so we have to initialize.
826 initializers = []
--> 827 self._initialize(args, kwds, add_initializers_to=initializers)
828 finally:
829 # At this point we know that the initialization is complete (or less
c:\users\dolphin48.conda\envs\tensorflow-gpu\lib\site-packages\tensorflow\python\eager\def_function.py in _initialize(self, args, kwds, add_initializers_to)
679 self._graph_deleter = FunctionDeleter(self._lifted_initializer_graph)
680 self._concrete_stateful_fn = (
--> 681 self._stateful_fn._get_concrete_function_internal_garbage_collected( # pylint: disable=protected-access
682 *args, **kwds))
683
c:\users\dolphin48.conda\envs\tensorflow-gpu\lib\site-packages\tensorflow\python\eager\function.py in _get_concrete_function_internal_garbage_collected(self, *args, **kwargs)
2995 args, kwargs = None, None
2996 with self._lock:
-> 2997 graph_function, _ = self._maybe_define_function(args, kwargs)
2998 return graph_function
2999
c:\users\dolphin48.conda\envs\tensorflow-gpu\lib\site-packages\tensorflow\python\eager\function.py in _maybe_define_function(self, args, kwargs)
3387
3388 self._function_cache.missed.add(call_context_key)
-> 3389 graph_function = self._create_graph_function(args, kwargs)
3390 self._function_cache.primary[cache_key] = graph_function
3391
c:\users\dolphin48.conda\envs\tensorflow-gpu\lib\site-packages\tensorflow\python\eager\function.py in _create_graph_function(self, args, kwargs, override_flat_arg_shapes)
3222 arg_names = base_arg_names + missing_arg_names
3223 graph_function = ConcreteFunction(
-> 3224 func_graph_module.func_graph_from_py_func(
3225 self._name,
3226 self._python_function,
c:\users\dolphin48.conda\envs\tensorflow-gpu\lib\site-packages\tensorflow\python\framework\func_graph.py in func_graph_from_py_func(name, python_func, args, kwargs, signature, func_graph, autograph, autograph_options, add_control_dependencies, arg_names, op_return_value, collections, capture_by_value, override_flat_arg_shapes)
994 _, original_func = tf_decorator.unwrap(python_func)
995
--> 996 func_outputs = python_func(*func_args, **func_kwargs)
997
998 # invariant: `func_outputs` contains only Tensors, CompositeTensors,
c:\users\dolphin48.conda\envs\tensorflow-gpu\lib\site-packages\tensorflow\python\eager\def_function.py in wrapped_fn(*args, **kwds)
588 xla_context.Exit()
589 else:
--> 590 out = weak_wrapped_fn().wrapped(*args, **kwds)
591 return out
592
c:\users\dolphin48.conda\envs\tensorflow-gpu\lib\site-packages\tensorflow\python\framework\func_graph.py in wrapper(*args, **kwargs)
981 except Exception as e: # pylint:disable=broad-except
982 if hasattr(e, "ag_error_metadata"):
--> 983 raise e.ag_error_metadata.to_exception(e)
984 else:
985 raise
ValueError: in user code:
This Stack Overflow post (https://stackoverflow.com/questions/61586981/valueerror-layer-sequential-20-expects-1-inputs-but-it-received-2-input-tensor#) suggests it has something to do with the validation_data parameter of fit(). It points to a change in the required structure, from lists to tuples, between TF 1.x and TF 2.x. However, we are not passing such a structure directly at all, but a generator, so I don't understand what is going wrong.
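The generator case is consistent with that tuple-vs-list explanation: in TF 2.x, model.fit unpacks each item the generator yields, reading a tuple as (inputs, targets) but treating a list as a nested structure of inputs, which produces exactly the "expects 1 input(s), but it received 2 input tensors" error. A minimal, framework-free sketch of adapting a legacy list-yielding generator (as_tuples is a hypothetical helper name, not part of Keras; the x/y strings are placeholders):

```python
def as_tuples(gen):
    """Wrap a legacy generator that yields [x, y] lists so that it
    yields (x, y) tuples, which TF 2.x's model.fit unpacks as
    (inputs, targets)."""
    for item in gen:
        yield tuple(item)

def legacy_gen():
    # Stand-in for the old list-yielding data generator (the real one
    # loops forever; this one is finite so the demo terminates).
    for i in range(3):
        yield [f"x{i}", f"y{i}"]

fixed = list(as_tuples(legacy_gen()))
print(fixed[0])  # ('x0', 'y0')
```

With this wrapper, model.fit(as_tuples(train_generator), ...) would receive tuples without touching the generator's own code.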