LET-NET: A lightweight CNN network for sparse corners extraction and tracking

LET-NET implements an extremely lightweight network for feature point extraction and image consistency computation. It can process a 240 x 320 image on a CPU in about 5 ms. Combined with LK optical flow, it removes the dependence on the brightness-constancy assumption and performs well under dynamic lighting as well as on blurred images.
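The tracking idea can be illustrated with a short sketch: instead of running pyramidal LK directly on the raw grayscale frames, the same OpenCV routine is run on the network's single-channel feature maps, which are far less sensitive to illumination changes. The snippet below is a minimal sketch of that idea using OpenCV only; the function run_letnet_feature_map is a hypothetical stand-in for the actual network inference and is not part of this repository's API.

```cpp
// Minimal sketch: pyramidal LK tracking on network feature maps instead of raw pixels.
// run_letnet_feature_map() is an assumed placeholder, not this repo's API.
#include <opencv2/opencv.hpp>
#include <vector>

// Placeholder: here it just converts to grayscale so the sketch runs;
// in the real pipeline this would be the LET-NET feature map (assumed interface).
static cv::Mat run_letnet_feature_map(const cv::Mat& bgr)
{
    cv::Mat gray;
    cv::cvtColor(bgr, gray, cv::COLOR_BGR2GRAY);
    return gray;
}

void track_pair(const cv::Mat& img0, const cv::Mat& img1)
{
    // Illumination-robust "images" produced by the network.
    cv::Mat feat0 = run_letnet_feature_map(img0);
    cv::Mat feat1 = run_letnet_feature_map(img1);

    // Detect sparse corners on the first feature map.
    std::vector<cv::Point2f> pts0, pts1;
    cv::goodFeaturesToTrack(feat0, pts0, 200, 0.01, 10);

    // Standard pyramidal LK, but on feature maps, so raw-pixel brightness
    // constancy between the two frames is no longer required.
    std::vector<uchar> status;
    std::vector<float> err;
    cv::calcOpticalFlowPyrLK(feat0, feat1, pts0, pts1, status, err);

    // Keep only successfully tracked points.
    std::vector<cv::Point2f> good0, good1;
    for (size_t i = 0; i < status.size(); ++i) {
        if (status[i]) { good0.push_back(pts0[i]); good1.push_back(pts1[i]); }
    }
}
```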

1. Prerequisites

Note: after installing ncnn, you need to point the ncnn path in CMakeLists.txt at your install:

set(ncnn_DIR "<your_path>/install/lib/cmake/ncnn" CACHE PATH "Directory that contains ncnnConfig.cmake")

2. Build

mkdir build && cd build
cmake .. && make -j4

3. Run demo

You can pass either the path to a video or the paths to two images.

./build/demo <path_param> <path_bin> <path_video>

or

./build/demo <path_param> <path_bin> <path_img_1> <path_img_2>

For example, using the data we provide:

./build/demo ../model/model.param ../model/model.bin ../assets/nyu_snippet.mp4

You should see the corner tracking output for the NYU sequence snippet.
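Under the hood, the demo loads the network from the .param/.bin pair via ncnn and runs inference on each frame. The following is a minimal sketch of that loading and inference step; the input/output blob names ("input", "score") and the normalization constants are assumptions for illustration, not taken from this repository.

```cpp
// Minimal ncnn inference sketch (blob names and normalization are assumed; see above).
#include <opencv2/opencv.hpp>
#include <cstdio>
#include "net.h"   // ncnn

int main(int argc, char** argv)
{
    if (argc < 4) return 1;

    ncnn::Net net;
    net.load_param(argv[1]);          // e.g. model/model.param
    net.load_model(argv[2]);          // e.g. model/model.bin

    cv::Mat bgr = cv::imread(argv[3]);
    // Convert the OpenCV image to an ncnn tensor and scale to [0, 1] (assumed preprocessing).
    ncnn::Mat in = ncnn::Mat::from_pixels(bgr.data, ncnn::Mat::PIXEL_BGR, bgr.cols, bgr.rows);
    const float norm_vals[3] = {1 / 255.f, 1 / 255.f, 1 / 255.f};
    in.substract_mean_normalize(nullptr, norm_vals);

    // Run the network and fetch the score / feature map.
    ncnn::Extractor ex = net.create_extractor();
    ex.input("input", in);            // blob name is an assumption
    ncnn::Mat score;
    ex.extract("score", score);       // blob name is an assumption

    std::printf("score map: %d x %d x %d\n", score.w, score.h, score.c);
    return 0;
}
```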

4. Examples

Dynamic lighting

The left is ours and the right is the original optical flow algorithm.

Underwater

The left is ours and the right is the original optical flow algorithm.

Active light source

TODO
