Is your feature request related to a problem or challenge?
The FileFormat trait has infer_schema and infer_stats methods that are given ObjectMetas, after which create_physical_plan returns an ExecutionPlan.
First, ObjectMeta has a required size, implying that we either know the object size externally or make a HEAD request, even though we are often able to issue a ranged request against the end of the object to read stats + metadata directly.
Second, infer_schema and infer_stats each independently open and read the file metadata (at least in the Parquet implementation), and without a custom ParquetFileReaderFactory, ParquetExecBuilder will open and read the metadata a third time.
What would be the recommended way to carry the metadata through from the initial infer_schema call and reuse it for infer_stats and inside the execution plan? Should we have a session-scoped cache inside our FileFormat impl keyed by ObjectMeta? What would the recommended cache key be since that type doesn't impl Hash?
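To make the cache-key question concrete, here is a minimal sketch of what I mean by a session-scoped cache. Since ObjectMeta doesn't impl Hash, the key could be derived from its hashable fields (location, size, e_tag). `MetaKey`, `FileMetadata`, and `MetadataCache` are hypothetical names standing in for the real object_store/parquet types, not DataFusion API:

```rust
use std::collections::HashMap;
use std::sync::{Arc, Mutex};

// Stand-in key derived from ObjectMeta's hashable fields.
#[derive(Clone, PartialEq, Eq, Hash)]
struct MetaKey {
    location: String,      // ObjectMeta::location.to_string()
    size: u64,             // ObjectMeta::size
    e_tag: Option<String>, // ObjectMeta::e_tag
}

// Stand-in for the parsed Parquet footer metadata.
struct FileMetadata {
    num_rows: i64,
}

// Session-scoped cache shared by infer_schema, infer_stats, and planning.
#[derive(Default)]
struct MetadataCache {
    inner: Mutex<HashMap<MetaKey, Arc<FileMetadata>>>,
}

impl MetadataCache {
    // Return the cached metadata, loading (and storing) it on first access.
    fn get_or_insert_with(
        &self,
        key: MetaKey,
        load: impl FnOnce() -> FileMetadata,
    ) -> Arc<FileMetadata> {
        let mut map = self.inner.lock().unwrap();
        map.entry(key).or_insert_with(|| Arc::new(load())).clone()
    }
}
```

With something like this, infer_schema would populate the cache and infer_stats / plan construction would hit it instead of re-reading the footer — assuming location + size + e_tag is a sufficient identity for an immutable object.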
Would it be better to pass PartitionedFile into infer_schema and infer_stats so we can stash the metadata inside the extensions field?
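For the extensions-field idea, this is roughly the pattern I have in mind. `PartitionedFileLike` and `CachedParquetMetadata` are hypothetical stand-ins; the real PartitionedFile::extensions is an `Option<Arc<dyn Any + Send + Sync>>`, so stashing and retrieving would go through a downcast:

```rust
use std::any::Any;
use std::sync::Arc;

// Stand-in for parsed Parquet metadata we want to carry along.
struct CachedParquetMetadata {
    num_row_groups: usize,
}

// Mirrors the shape of PartitionedFile's extensions field
// without pulling in DataFusion itself.
struct PartitionedFileLike {
    extensions: Option<Arc<dyn Any + Send + Sync>>,
}

// Stash metadata read during infer_schema into the file entry.
fn stash(file: &mut PartitionedFileLike, meta: CachedParquetMetadata) {
    file.extensions = Some(Arc::new(meta));
}

// Later (infer_stats / the physical plan), recover it via downcast.
fn retrieve(file: &PartitionedFileLike) -> Option<&CachedParquetMetadata> {
    file.extensions
        .as_deref()
        .and_then(|ext| ext.downcast_ref::<CachedParquetMetadata>())
}
```

The open question is that infer_schema and infer_stats currently receive ObjectMeta rather than PartitionedFile, so this pattern only works if those signatures change.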
Or should we avoid FileFormat entirely and go the route of a custom TableProvider?
Thank you for your thoughts!
Describe the solution you'd like
No response
Describe alternatives you've considered
No response
Additional context
No response