Tags: boncheolgu/iree
Merge google -> main (iree-org#6965)

* edc2354 Merge pull request iree-org#6963 from not-jenni:main-to-google
* 0afdbc2 Synchronize submodules with LLVM at llvm/llvm-project@79d58b4d3017
* 321fd94 Integrate LLVM at llvm/llvm-project@79d58b4d3017
* d0f8307 Integrate LLVM at llvm/llvm-project@25765d860d60
* 42de164 Integrate LLVM at llvm/llvm-project@3f1f08f0ed6a
Fix iree_hal_allocator memory statistics format (iree-org#6954)

The memory statistics are tracked in bytes; fix the printed unit accordingly.

```
EXEC @simple_mul
result[0]: hal.buffer_view
4xf32=1 4 9 16
[[ iree_hal_allocator_t memory statistics ]]
HOST_LOCAL: 32B peak / 32B allocated / 32B freed / 0B live
DEVICE_LOCAL: 16B peak / 16B allocated / 16B freed / 0B live
```
Plumb dynamic shape support through for Vulkan and VMVX (iree-org#6917)

This commit plumbs dynamic shape support through for both Vulkan and VMVX. They rely on 1-D MemRefs and running `FlattenMemRefSubspanPass` in advance, instead of MemRef descriptors.

To enable dynamic shape support, we need to carry the SSA values for dynamic dimensions down the CodeGen pipeline so that we can linearize the index calculation in `FlattenMemRefSubspanPass`. That information is tightly associated with various ops at the Flow level, but when outlining executables and materializing the HAL interface, the association is broken and `tie_shape` ops are used to carry it instead, which is structurally difficult to maintain and convert. So this commit changes `hal.interface.binding.subspan` to carry the dynamic dimension SSA values itself, like many other ops in Flow/HAL. It's a natural change that simplifies a lot of analysis and transformation. For example, we no longer need the two-step conversion on the CPU side (first generating an undefined MemRef descriptor when handling the `subspan` op, then filling in its contents when handling the `tie_shape` op). It also makes the internals of HAL more akin to Flow on this front.

The other changes mostly follow from that:

* `MaterializeInterfaces` picks up the information from `tie_shape` ops and attaches it to `subspan` ops.
* `FlattenBindingSubspan` reads the dynamic dimensions to perform index linearization.
* `ConvertToLLVM` now generates the full MemRef descriptor from `subspan` ops.
* A new pass is added to fold `memref.dim`/`tensor.dim` ops over shape-carrying ops.

This puts IREE CodeGen dynamic shape support for Vulkan/VMVX in a very nice state. Because we run `FoldSubViewOpsPass` in advance, there are no intermediate MemRefs (coming from `subview` ops), so loads/stores directly consume HAL `subspan` ops. By definition IREE uses tightly packed buffers, so all MemRefs coming from subspans have strides equal to the total element count of their inner dimensions; the symbolic strides in subspan ops' AffineMaps therefore correspond to SSA values for dimension sizes (or their products). Offsets are attached to subspan ops as SSA values and then "transferred" to load/store ops during memref flattening, by becoming part of the index linearization calculation.
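As context for the index-linearization step described above: for a tightly packed row-major buffer, the per-dimension strides are just products of the inner dimension sizes, so a dynamic dimension simply contributes another SSA value to those products. Below is a minimal sketch in plain C++ (not IREE code; `linearizeIndex` is a hypothetical helper) of the arithmetic that `FlattenMemRefSubspanPass` materializes as index computations in IR:

```cpp
#include <cstdint>
#include <vector>

// Row-major (Horner-style) index linearization for a tightly packed buffer.
// In the pass this is emitted as index arithmetic in IR, with dynamic
// dimension sizes supplied as SSA values instead of constants.
int64_t linearizeIndex(const std::vector<int64_t>& indices,
                       const std::vector<int64_t>& sizes) {
  int64_t linear = 0;
  for (size_t d = 0; d < indices.size(); ++d) {
    linear = linear * sizes[d] + indices[d];
  }
  return linear;
}

// Example: a ?x4xf32 buffer whose dynamic dimension resolves to 8 at runtime;
// element (3, 2) lands at linear offset 3 * 4 + 2 = 14.
// int64_t offset = linearizeIndex({3, 2}, {8, 4});
```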
Merge google -> main (iree-org#6941)

* 09b94dd Synchronize submodules with LLVM at llvm/llvm-project@9b6c8132d378
* 825f80d Merge pull request iree-org#6939 from not-jenni:main-to-google
* 9c6ea05 Synchronize submodules with LLVM at llvm/llvm-project@b9db70369b77
* 09553cc Integrate LLVM at llvm/llvm-project@9b6c8132d378
* 3a6a1a8 Synchronize submodules with LLVM at llvm/llvm-project@b9db70369b77
* 5591b3d Integrate LLVM at llvm/llvm-project@b9db70369b77
* f65d829 Integrate LLVM at llvm/llvm-project@2dfb66833fd2
NFC: Remove crufty insertion point search. (iree-org#6937)

To start creating the memrefs for the output (when bufferizing a `flow.tensor.store`), the result buffer has to be allocated before any of its uses. Replace the crufty insertion point search with simply cloning the operations; the clones get CSE'd away anyway.
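For illustration only (not the code from this change), a minimal sketch of the cloning approach using MLIR's `OpBuilder` API; `cloneDefiningOpBefore` is a hypothetical helper and only handles one level of the def chain:

```cpp
#include "mlir/IR/Builders.h"
#include "mlir/IR/Operation.h"
#include "mlir/IR/Value.h"

using namespace mlir;

// Instead of walking the block to find an insertion point that dominates all
// uses, clone the value's defining op right before the point where it is
// needed; the duplicate is cleaned up by CSE later. (Sketch only: operands of
// the cloned op are assumed to already dominate `insertionPoint`.)
static Value cloneDefiningOpBefore(OpBuilder& builder, Value value,
                                   Operation* insertionPoint) {
  Operation* def = value.getDefiningOp();
  if (!def) return value;  // Block arguments already dominate their region.
  OpBuilder::InsertionGuard guard(builder);
  builder.setInsertionPoint(insertionPoint);
  return builder.clone(*def)->getResult(0);
}
```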