Using REBVO with realsense camera and librealsense? #20

Open
Userpc1010 opened this issue Aug 14, 2023 · 1 comment

Userpc1010 commented Aug 14, 2023

I found your project after some experience with ORB-SLAM 3. I came to the conclusion that its feature detector is not very robust to fast movements, and an edge/contour detector seemed like a promising alternative for improving the results on a car. But at the moment I'm having problems building VideoDecoder::VideoDecoder(AVCodecID codec_id, int w, int h) (undefined reference to avcodec_open2, ...); this code seems to be very outdated. Also, I'm sending data to REBVO::requestCustomCamBuffer(), but I don't seem to get any results.
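For reference: a plain "undefined reference to avcodec_open2" at link time usually means the FFmpeg libraries are missing from the link line (or appear before the objects that use them); if the error instead shows a C++-mangled signature, the headers were included without extern "C". A minimal sketch of the usual check, assuming a stock pkg-config FFmpeg setup rather than REBVO's actual Makefile:

// Hypothetical standalone check that the FFmpeg development setup links:
//   g++ ffcheck.cpp -o ffcheck $(pkg-config --cflags --libs libavcodec libavutil)
extern "C" {                     // FFmpeg headers are plain C; without this
#include <libavcodec/avcodec.h>  // wrapper the symbols get C++-mangled and
}                                // the linker reports them as undefined

#include <iostream>

int main(){
    const AVCodec *codec = avcodec_find_decoder(AV_CODEC_ID_H264);
    AVCodecContext *ctx = avcodec_alloc_context3(codec);
    std::cout << (avcodec_open2(ctx, codec, nullptr) == 0 ? "open ok" : "open failed") << "\n";
    avcodec_free_context(&ctx);
    return 0;
}

On Ubuntu 20.04 the relevant packages are libavcodec-dev and libavutil-dev; avcodec_open2 still exists in that FFmpeg version, so the failure is more likely a linking problem than a removed API.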

My configuration: Ubuntu 20.04, mobile Intel i7 (12th gen), 16 GB 3200 MHz dual-channel RAM, and a mobile 3050 Ti 4 GB.

rebvorun:
/******************************************************************************

   REBVO: RealTime Edge Based Visual Odometry For a Monocular Camera.
   Copyright (C) 2016  Juan José Tarrio

   Jose Tarrio, J., & Pedre, S. (2015). Realtime Edge-Based Visual Odometry
   for a Monocular Camera. In Proceedings of the IEEE International Conference
   on Computer Vision (pp. 702-710).

   This program is free software; you can redistribute it and/or modify
   it under the terms of the GNU General Public License as published by
   the Free Software Foundation; either version 3 of the License, or
   (at your option) any later version.
   This program is distributed in the hope that it will be useful,
   but WITHOUT ANY WARRANTY; without even the implied warranty of
   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
   GNU General Public License for more details.
   You should have received a copy of the GNU General Public License
   along with this program; if not, write to the Free Software Foundation,
   Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301  USA

 *******************************************************************************/

#include <iostream>
#include <vector>
#include <condition_variable>
#include <algorithm>
#include <mutex>    // std::mutex, std::unique_lock
#include <memory>   // std::shared_ptr, std::make_shared
#include <cmath>    // std::abs
#include <cstring>  // memcpy

#include "rebvo/rebvo.h"
#include "archimu.h"
#include "UtilLib/util.h"

#include "visualizer/visualizer.h"

#include <GLFW/glfw3.h>
#include <librealsense2/rs.hpp> // Include RealSense Cross Platform API
#include <librealsense2-gl/rs_processing_gl.hpp> // Include GPU-Processing API
#include <librealsense2/rsutil.h>

using namespace rebvo;

void PrintHelp(){

    std::cout<<R"help(
               REBVO Commands:
               q: Quit
               r: Reset
               s: Save KF and Quit
               p: Take Snapshot
               f: Frame by frame
               a: Advance frame
               )help";

}

class callclass{
public:

    bool callFunc(PipeBuffer &p){
        //do something on the timespan of the rebvo third thread
        return true;
    }
};

/*

int main(int argn,char ** argv)
{

    std::string ConfigName(argn>1?argv[1]:"GlobalConfig");

    callclass callme;


    REBVO cf(ConfigName.data());
	
    if(!cf.Init())
		return -1;

    archIMU *imu_dev=nullptr;

    if(cf.getParams().ImuMode==1){


        imu_dev=new archIMU("/dev/ttySAC2",cf);

        if(imu_dev->error()){
            std::cout << "main.cpp: Failed to initialize the imu device\n";

            cf.CleanUp();
            return -1;
        }

    }

    cf.setOutputCallback(&callclass::callFunc,&callme);

    PrintHelp();

    bool run=true;
    bool savekf=false;
    while(run && cf.Running()){


        char c;
        std::cin >> c;

        switch(c){
        case 'q':
            run=false;
            std::cout << "\nExiting ...\n";
            break;
        case 's':
            run=false;
            savekf=true;
            std::cout << "\nExiting ...\n";
            break;
        case 'p':
            cf.TakeSnapshot();
            break;
        case 'r':
            cf.Reset();
            break;
        case 'k':
            cf.toggleKeyFrames();
            break;
        case 'f':
            cf.toggleFrameByFrame();
            cf.advanceFrameByFrame();
            break;
        case 'a':
            cf.advanceFrameByFrame();
            break;
        default:
            PrintHelp();
            break;
        }


    }

    if(savekf){
        std::cout <<"Saving KF: "<<(keyframe::saveKeyframes2File("kf_list.kf",cf.kf_list)?"OK":"Error")<<"\n";
        std::cout <<"Saving PG: "<<(cf.poses.saveToFile("poses_list.ps")?"OK":"Error")<<"\n";

    }

    cf.CleanUp();
    return 0;
}

 */
rs2_stream find_stream_to_align(const std::vector<rs2::stream_profile>& streams);
bool profile_changed(const std::vector<rs2::stream_profile>& current, const std::vector<rs2::stream_profile>& prev);

// Note: interpolateData is declared here but never defined or used in this file.
void interpolateData(const std::vector<double> &vBase_times,
                     std::vector<rs2_vector> &vInterp_data, std::vector<double> &vInterp_times,
                     const rs2_vector &prev_data, const double &prev_time);

rs2_vector interpolateMeasure(const double target_time,
                              const rs2_vector current_data, const double current_time,
                              const rs2_vector prev_data, const double prev_time);

int main(int argn,char ** argv)
{
    std::string ConfigName(argn>1?argv[1]:"GlobalConfig");

    callclass callme;

    REBVO cf(ConfigName.data());

    if(!cf.Init())
        return -1;

    cf.setOutputCallback(&callclass::callFunc,&callme);

    PrintHelp();

    // Declare RealSense pipeline, encapsulating the actual device and sensors
    rs2::pipeline pipe;

    // Create a configuration for configuring the pipeline with a non default profile
    rs2::config cfg;

    // RGB stream
    cfg.enable_stream(RS2_STREAM_COLOR,848, 480, RS2_FORMAT_RGB8, 30);

    // Depth stream
    // cfg.enable_stream(RS2_STREAM_INFRARED, 1, 640, 480, RS2_FORMAT_Y8, 30);
    cfg.enable_stream(RS2_STREAM_DEPTH,848, 480, RS2_FORMAT_Z16, 30);

    // IMU stream
    cfg.enable_stream(RS2_STREAM_ACCEL, RS2_FORMAT_MOTION_XYZ32F, 200);
    cfg.enable_stream(RS2_STREAM_GYRO, RS2_FORMAT_MOTION_XYZ32F, 200);
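
    // Note: stream rates are requests, not guarantees. On a D435i, for
    // example, the accelerometer only supports 63 Hz and 250 Hz, so the
    // driver may substitute the nearest supported rate for the 200 Hz
    // requested here; the sync code below does not assume exact rates.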

    Size2D size_im;

    size_im.w = 848;
    size_im.h = 480;

    // IMU callback
    std::mutex imu_mutex;
    std::condition_variable cond_image_rec;

    std::vector<double> v_accel_timestamp;
    std::vector<rs2_vector> v_accel_data;
    std::vector<double> v_gyro_timestamp;
    std::vector<rs2_vector> v_gyro_data;

    double prev_accel_timestamp = 0;
    rs2_vector prev_accel_data;
    double current_accel_timestamp = 0;
    rs2_vector current_accel_data;
    std::vector<double> v_accel_timestamp_sync;
    std::vector<rs2_vector> v_accel_data_sync;

    int width_img, height_img;
    double timestamp_image = -1.0;
    bool image_ready = false;
    int count_im_buffer = 0; // count dropped frames

    // start and stop just to get necessary profile
    rs2::pipeline_profile pipe_profile = pipe.start(cfg);
    pipe.stop();

    // Align depth and RGB frames
    //Pipeline could choose a device that does not have a color stream
    //If there is no color stream, choose to align depth to another stream
    rs2_stream align_to = find_stream_to_align(pipe_profile.get_streams());

    // Create a rs2::align object.
    // rs2::align allows us to perform alignment of depth frames to others frames
    //The "align_to" is the stream type to which we plan to align depth frames.
    rs2::align align(align_to);
    rs2::frameset fsSLAM;

    auto imu_callback = [&](const rs2::frame& frame)
    {
        std::unique_lock<std::mutex> lock(imu_mutex);

        if(rs2::frameset fs = frame.as<rs2::frameset>())
        {
            count_im_buffer++;

            double new_timestamp_image = fs.get_timestamp()*1e-3;
            if(std::abs(timestamp_image-new_timestamp_image)<0.001){
                count_im_buffer--;
                return;
            }

            if (profile_changed(pipe.get_active_profile().get_streams(), pipe_profile.get_streams()))
            {
                //If the profile was changed, update the align object, and also get the new device's depth scale
                pipe_profile = pipe.get_active_profile();
                align_to = find_stream_to_align(pipe_profile.get_streams());
                align = rs2::align(align_to);
            }

            //Aligning depth and RGB takes a long time; do it outside the callback to avoid losing IMU measurements
            fsSLAM = fs;

            timestamp_image = fs.get_timestamp()*1e-3;
            image_ready = true;
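            // The gyro stream outruns the accel stream, so the loop below
            // pads the synced accel vector with the most recent accel sample
            // at each pending gyro timestamp (inside the image callback there
            // is no fresh accel pair to interpolate between).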

            while(v_gyro_timestamp.size() > v_accel_timestamp_sync.size())
            {
                int index = v_accel_timestamp_sync.size();
                double target_time = v_gyro_timestamp[index];

                v_accel_data_sync.push_back(current_accel_data);
                v_accel_timestamp_sync.push_back(target_time);
            }

            lock.unlock();
            cond_image_rec.notify_all();
        } else if (rs2::motion_frame m_frame = frame.as<rs2::motion_frame>())
        {
            if (m_frame.get_profile().stream_name() == "Gyro")
            {
                // It runs at 200Hz
                v_gyro_data.push_back(m_frame.get_motion_data());
                v_gyro_timestamp.push_back(m_frame.get_timestamp()*1e-3);
            }
            else if (m_frame.get_profile().stream_name() == "Accel")
            {
                // Accel stream; the device may deliver a different rate than the 200 Hz requested above
                prev_accel_timestamp = current_accel_timestamp;
                prev_accel_data = current_accel_data;

                current_accel_data = m_frame.get_motion_data();
                current_accel_timestamp = m_frame.get_timestamp()*1e-3;
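
                // Pad the synced accel vector up to the gyro count, linearly
                // interpolating between the previous and current accel
                // samples at each gyro timestamp.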

                while(v_gyro_timestamp.size() > v_accel_timestamp_sync.size())
                {
                    int index = v_accel_timestamp_sync.size();
                    double target_time = v_gyro_timestamp[index];

                    rs2_vector interp_data = interpolateMeasure(target_time, current_accel_data, current_accel_timestamp,
                                                                prev_accel_data, prev_accel_timestamp);

                    v_accel_data_sync.push_back(interp_data);
                    v_accel_timestamp_sync.push_back(target_time);
                }
            }
        }
    };


    pipe_profile = pipe.start(cfg, imu_callback);
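    // From here on, every frame (color, depth, gyro, accel) arrives on a
    // librealsense callback thread; the main loop below only consumes the
    // snapshots taken under imu_mutex.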
    rs2::stream_profile cam_stream = pipe_profile.get_stream(RS2_STREAM_COLOR);

    rs2::stream_profile imu_stream = pipe_profile.get_stream(RS2_STREAM_GYRO);
    rs2_extrinsics cam_to_imu = cam_stream.get_extrinsics_to(imu_stream);
    const float* Rbc = cam_to_imu.rotation;    // keep the struct alive: taking these
    const float* tbc = cam_to_imu.translation; // pointers from a temporary would dangle
    std::cout << "Tbc = " << std::endl;
    for(int i = 0; i<3; i++){
        for(int j = 0; j<3; j++)
            std::cout << Rbc[i*3 + j] << ", ";
        std::cout << tbc[i] << "\n";
    }

    rs2_intrinsics intrinsics_cam = cam_stream.as<rs2::video_stream_profile>().get_intrinsics();
    width_img = intrinsics_cam.width;
    height_img = intrinsics_cam.height;
    std::cout << " fx = " << intrinsics_cam.fx << std::endl;
    std::cout << " fy = " << intrinsics_cam.fy << std::endl;
    std::cout << " cx = " << intrinsics_cam.ppx << std::endl;
    std::cout << " cy = " << intrinsics_cam.ppy << std::endl;
    std::cout << " height = " << intrinsics_cam.height << std::endl;
    std::cout << " width = " << intrinsics_cam.width << std::endl;
    std::cout << " Coeff = " << intrinsics_cam.coeffs[0] << ", " << intrinsics_cam.coeffs[1] << ", " <<intrinsics_cam.coeffs[2] << ", " << intrinsics_cam.coeffs[3] << ", " << intrinsics_cam.coeffs[4] << ", " << std::endl;
    std::cout << " Model = " << intrinsics_cam.model << std::endl;

    double timestamp;

    // Clear IMU vectors
    v_gyro_data.clear();
    v_gyro_timestamp.clear();
    v_accel_data_sync.clear();
    v_accel_timestamp_sync.clear();

    bool run=true;
    bool savekf=false;
    while(run && cf.Running())
    {
        std::vector<rs2_vector> vGyro;
        std::vector<double> vGyro_times;
        std::vector<rs2_vector> vAccel;
        std::vector<double> vAccel_times;
        rs2::frameset fs;
        {
            std::unique_lock<std::mutex> lk(imu_mutex);
            if(!image_ready)
                cond_image_rec.wait(lk);

            fs = fsSLAM;

            if(count_im_buffer>1){
                //std::cout << count_im_buffer-1 << " dropped frames\n";
            }
            count_im_buffer = 0; // without the braces above, this reset only ran when frames were dropped

            while(v_gyro_timestamp.size() > v_accel_timestamp_sync.size())
            {
                int index = v_accel_timestamp_sync.size();
                double target_time = v_gyro_timestamp[index];

                rs2_vector interp_data = interpolateMeasure(target_time, current_accel_data, current_accel_timestamp, prev_accel_data, prev_accel_timestamp);

                v_accel_data_sync.push_back(interp_data);
                v_accel_timestamp_sync.push_back(target_time);
            }

            // Copy the IMU data
            vGyro = v_gyro_data;
            vGyro_times = v_gyro_timestamp;
            vAccel = v_accel_data_sync;
            vAccel_times = v_accel_timestamp_sync;

            // Image
            timestamp = timestamp_image;

            // Clear IMU vectors
            v_gyro_data.clear();
            v_gyro_timestamp.clear();
            v_accel_data_sync.clear();
            v_accel_timestamp_sync.clear();

            image_ready = false;

        }

        // Perform alignment here
        auto processed = align.process(fs);

        // Trying to get both other and aligned depth frames
        rs2::video_frame color_frame = processed.first(align_to);
        rs2::depth_frame depth_frame = processed.get_depth_frame();
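
        // depth_frame is retrieved but never used below: REBVO runs monocular
        // here, so only the color frame is pushed into the pipeline.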



        for(int i=0; i<vGyro.size(); ++i)
        {
           // ImuData lastPoint(vGyro_times[i], vAccel[i].x, vAccel[i].y, vAccel[i].z, vGyro[i].x, vGyro[i].y, vGyro[i].z);

            //cf.pushIMU(lastPoint);

            static ImuData data;

            data.tstamp=vGyro_times[i]; //*1e-9;
            data.giro[0]=vGyro[i].x;
            data.giro[1]=vGyro[i].y;
            data.giro[2]=vGyro[i].z;
            data.acel[0]=vAccel[i].x;
            data.acel[1]=vAccel[i].y;
            data.acel[2]=vAccel[i].z;


            cf.pushIMU(data);
        }

        // Copy the color frame into a REBVO image buffer and hand it over
        Image<RGB24Pixel> buff(size_im);

        const uint8_t* data = reinterpret_cast<const uint8_t*>(color_frame.get_data());

        memcpy(buff.Data(), data, size_im.w * size_im.h * sizeof(RGB24Pixel));

        std::shared_ptr<Image<RGB24Pixel>> image = std::make_shared<Image<RGB24Pixel>>(buff);

        cf.requestCustomCamBuffer(image, timestamp);
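        // Note (assumption, not verified against the REBVO sources): for
        // requestCustomCamBuffer() to have any effect, GlobalConfig must
        // select the custom-camera input (CameraType=3 in the sample config);
        // with a dataset/simulator camera selected, frames pushed here are
        // silently ignored, which would match the "no results" symptom above.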

        // Caution: this blocking read stalls the loop until a key (plus
        // Enter) is pressed, so the pipeline advances at keyboard speed,
        // not camera speed.
        char c;
        std::cin >> c;

        switch(c){
        case 'q':
            run=false;
            std::cout << "\nExiting ...\n";
            break;
        case 's':
            run=false;
            savekf=true;
            std::cout << "\nExiting ...\n";
            break;
        case 'p':
            cf.TakeSnapshot();
            break;
        case 'r':
            cf.Reset();
            break;
        case 'k':
            cf.toggleKeyFrames();
            break;
        case 'f':
            cf.toggleFrameByFrame();
            cf.advanceFrameByFrame();
            break;
        case 'a':
            cf.advanceFrameByFrame();
            break;
        default:
            PrintHelp();
            break;
        }



        cf.releaseCustomCamBuffer();

        
        color_frame.keep();
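        // Caution: rs2::frame::keep() tells librealsense never to recycle
        // this frame's memory back into its pool, so calling it every
        // iteration grows memory without bound; it is likely unnecessary
        // here since the pixels were already copied into buff.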
    }

    pipe.stop();

    if(savekf){
        std::cout <<"Saving KF: "<<(keyframe::saveKeyframes2File("kf_list.kf",cf.kf_list)?"OK":"Error")<<"\n";
        std::cout <<"Saving PG: "<<(cf.poses.saveToFile("poses_list.ps")?"OK":"Error")<<"\n";

    }

    cf.CleanUp();

    return EXIT_SUCCESS;
}

rs2_stream find_stream_to_align(const std::vector<rs2::stream_profile>& streams)
{
    //Given a vector of streams, we try to find a depth stream and another stream to align depth with.
    //We prioritize color streams to make the view look better.
    //If color is not available, we take another stream (other than depth).
    rs2_stream align_to = RS2_STREAM_ANY;
    bool depth_stream_found = false;
    bool color_stream_found = false;
    for (rs2::stream_profile sp : streams)
    {
        rs2_stream profile_stream = sp.stream_type();
        if (profile_stream != RS2_STREAM_DEPTH)
        {
            if (!color_stream_found)         //Prefer color
                align_to = profile_stream;

            if (profile_stream == RS2_STREAM_COLOR)
            {
                color_stream_found = true;
            }
        }
        else
        {
            depth_stream_found = true;
        }
    }

    if(!depth_stream_found)
        throw std::runtime_error("No Depth stream available");

    if (align_to == RS2_STREAM_ANY)
        throw std::runtime_error("No stream found to align with Depth");

    return align_to;
}


bool profile_changed(const std::vector<rs2::stream_profile>& current, const std::vector<rs2::stream_profile>& prev)
{
    for (auto&& sp : prev)
    {
        //Check whether the previous profile is still present in current (maybe another one was just added)
        auto itr = std::find_if(std::begin(current), std::end(current), [&sp](const rs2::stream_profile& current_sp) { return sp.unique_id() == current_sp.unique_id(); });
        if (itr == std::end(current)) //If the previous stream wasn't found in current
        {
            return true;
        }
    }
    return false;
}

rs2_vector interpolateMeasure(const double target_time,
                              const rs2_vector current_data, const double current_time,
                              const rs2_vector prev_data, const double prev_time)
{

    // If there is no previous information, the current data is propagated
    if(prev_time == 0)
    {
        return current_data;
    }

    rs2_vector increment;
    rs2_vector value_interp;

    if(target_time > current_time) {
        value_interp = current_data;
    }
    else if(target_time > prev_time)
    {
        increment.x = current_data.x - prev_data.x;
        increment.y = current_data.y - prev_data.y;
        increment.z = current_data.z - prev_data.z;

        double factor = (target_time - prev_time) / (current_time - prev_time);

        value_interp.x = prev_data.x + increment.x * factor;
        value_interp.y = prev_data.y + increment.y * factor;
        value_interp.z = prev_data.z + increment.z * factor;

        // NOTE: this line overwrites the linear interpolation above with the
        // current sample ("zero interpolation"), turning the result into a
        // zero-order hold; remove it if true interpolation is intended.
        value_interp = current_data;
    }
    else {
        value_interp = prev_data;
    }

    return value_interp;
}

[Screenshot from 2023-08-14 19-31-18]

UPD: I think I figured out why nothing was happening on the screen, but the visualizer's speed at 1280x720, 30 fps is depressingly low ((
[Screenshot from 2023-08-15 16-48-54]

Userpc1010 (Author) commented:

Tested mono on the car; the trajectory on the turn seems wrong:

https://youtu.be/UOnfpSzmOYw?si=_Px0dzInyXfXM8ZD
