
jnaor/wizzy


Installation

Prerequisites

  1. Install ROS1 (Melodic)
  2. pip install scikit-learn scikit-image
  3. Install xterm (used for controlling the chair in simulation): sudo apt install xterm

Install the code

  1. Clone the repository
  2. Run ./setup.bash

Current Status

Functionality

  1. What works - USB Relay, FLIC button, Joystick, Haptic
  2. Code Complete - Audio
  3. What partially works - LIDAR, cameras, decision maker
  4. What doesn't work - Phone interface, Obstacle classification

Physical Setup (Raspberry, Jetson, Arduino)

  1. A Raspberry Pi running the main control logic, the LIDAR, the haptic feedback and the FLIC emergency stop button. Login: wizzy-aux, password: 123456. It generates a Wi-Fi network named WizzyFI (password: wizzywizzy).
  2. An NVIDIA Jetson Nano processing the images from four RealSense depth cameras and the audio output. Login: wizzy, password: wizzy.
  3. An Arduino controlling the LED visual feedback.

The system is based on ROS1 Melodic (with the Raspberry Pi as ROS master) and nodes written in Python 2 (except for the LED component, which runs in C++ on the Arduino). The computers are connected directly by Ethernet.

USB Relay

The Wizzy CAN was reverse-engineered by the cyber team. Works.

Caretaker Emergency Stop Button

Connected via Bluetooth to the Raspberry Pi. One click locks the chair; two clicks release it. The module requires Python 3 and therefore runs independently rather than as a ROS node. If the system is ported to Noetic, it can be included in the main architecture. A minimal sketch of the click handling is shown below.
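
As an illustration, here is a minimal sketch of the click-to-lock logic, assuming the fliclib Python 3 client from the Flic Linux SDK; the button MAC address and the lock/release helpers are hypothetical placeholders for the actual stop mechanism (e.g., the USB relay).

    # Sketch of the FLIC click handling (Python 3, fliclib assumed available).
    # The button MAC address and lock/release helpers below are hypothetical.
    import fliclib

    def lock_chair():
        print("chair locked")      # placeholder for the real emergency-stop action

    def release_chair():
        print("chair released")    # placeholder for releasing the emergency stop

    def on_click(channel, click_type, was_queued, time_diff):
        if click_type == fliclib.ClickType.ButtonSingleClick:
            lock_chair()
        elif click_type == fliclib.ClickType.ButtonDoubleClick:
            release_chair()

    client = fliclib.FlicClient("localhost")
    channel = fliclib.ButtonConnectionChannel("80:e4:da:00:00:00")  # hypothetical MAC
    channel.on_button_single_or_double_click = on_click
    client.add_connection_channel(channel)
    client.handle_events()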

LEDs - Idan

Works.

Haptic

Works.

Audio

Noticeable latency. Seems better on the Jetson, but not thoroughly tested yet.

Sensing

LIDAR

Partial. Seems to work for obstacles; not tested enough for stairs.

Realsense

Obstacle detection based on depth alone worked the last time it was checked (a long time ago).

Decision Making

Not finished

Simulation

In Gazebo

Wizzybug Driver Assist

Proof of Concept Technical Specification

Note: Work in progress

Introduction

When a child begins walking, normally at age 9-16 months, they gain the power to explore, move around and impact their environment. For children with disabilities, it is not unusual to have to wait until the age of 5 years, due to the complicated process of learning to drive an electric wheelchair.

By connecting cameras and other sensors to the electric wheelchair and to the caregiver's application, we hope to close this gap, prevent the developmental deficits related to it, and allow simple and safe use of the electric wheelchair for very young children.

ADAS for the Wizzybug Electric Wheelchair

Feature Overview

Learning to drive the Wizzybug is hard. Starting at a late age (such as 5) delays the child's development. Technology can aid in speeding up the learning process and increase safety.

The system should allow the child to control and move the chair, but simultaneously have the ability to signal and react if the chair is approaching an obstacle such as a person, a wall, or any other object in the wheelchair's path.

Because the system interacts with children who have developmental disabilities, the signaling should occur in several modalities: visual, audio and haptic.

When the children first try to drive the electric wheelchair, they might bump into walls, toys and people around them. This is painful and problematic, especially when other handicapped children are involved (who mostly do not move fast enough to avoid the incoming wheelchair).

If the sensory/vocal signal does not help, the system should apply resistance to the wheelchair's problematic trajectory, or even bring the wheelchair to a full stop. Continued movement will be enabled only in an obstacle-free direction.

System Components

  1. Four RealSense 415 depth cameras providing depth and RGB images at 30 frames per second
  2. Inertial Measurement Unit, able to detect the chair's inclination
  3. Controller joystick, modified to read out chair velocity commands
  4. Chair motors – [how do we use them?]
  5. LaserScan LIDAR providing a one-dimensional range map in front of the chair
  6. Jetson Nano as the system central computing module

Software Architecture

The system components:

Figure 1: Software Architecture

The system has four main functional components:

  1. Perception - responsible for understanding the wheelchair's state within its environment and detecting objects surrounding it.
  2. Decision Making - responsible for assessing the safety of the chair's state and the measures required to mitigate a dangerous situation if one occurs.
  3. Control - decreases wheelchair speed, or brings it to a complete stop, according to commands from the Decision Making module.
  4. Human-Machine Interface - indicates the state of the wheelchair, appropriate warnings and required measures to the child and the attendant adult, using visual, audio and haptic cues.

The middleware software that integrates all the components is the Robot Operating System (ROS), which implements a Publisher-Subscriber architecture that enables synchronized data flow between the different components.
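
As an illustration of this publish/subscribe pattern, here is a minimal rospy sketch (Python 2 compatible); the topic name and message type are illustrative, not taken from the actual codebase.

    #!/usr/bin/env python
    # Minimal ROS publisher/subscriber sketch. The topic name and message type
    # below are illustrative only.
    import rospy
    from std_msgs.msg import Float32

    def on_distance(msg):
        # Subscriber callback: react to data published by another node.
        rospy.loginfo("closest obstacle at %.2f m", msg.data)

    def main():
        rospy.init_node("pubsub_demo")
        pub = rospy.Publisher("/wizzy/obstacle_distance", Float32, queue_size=10)
        rospy.Subscriber("/wizzy/obstacle_distance", Float32, on_distance)
        rate = rospy.Rate(10)  # publish at 10 Hz
        while not rospy.is_shutdown():
            pub.publish(Float32(data=1.25))  # dummy measurement
            rate.sleep()

    if __name__ == "__main__":
        main()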

Functionality

Figure 2: ADAS Functionality

The system shall provide the following main functions:

  1. Warn the child of a potential collision with an obstacle
  2. Warn the parent / caretaker of a potential collision with an obstacle
  3. Provide the parent / caretaker with the capability to remotely stop the chair

Safety

  1. The system is a proof of concept prototype to be used solely under adult attendant supervision.
  2. The system shall not have any adverse safety effect on the wheelchair's mobility; it may only decrease velocity in case of obstacle collision avoidance. In all other cases the system is advisory.
  3. The human machine interface shall not provide misleading safety information and shall not have any adverse effect on the child's cognitive capability to control the chair's movement.

Component Design

Perception

Depth Cameras

Depth and RGB images are acquired from four RealSense 415 depth cameras at 640x480 resolution, 30 frames per second. The processing steps are:

  1. Segment the depth images and extract objects within the warning zone. These are passed immediately to the Decision Making module in case immediate action is necessary (see the sketch after this list).
  2. Classify the close objects using a neural-network object detector, and pass this information to the Decision Maker (this might affect non-safety behavior, for example after an emergency stop).
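
A minimal sketch of the segmentation step, assuming the depth frame is a NumPy array in millimeters; the warning-zone threshold and minimum blob size are illustrative values, not the tuned system parameters.

    # Sketch: extract obstacle blobs inside a warning zone from one depth frame.
    # Assumes `depth` is a 480x640 uint16 NumPy array in millimeters.
    from skimage import measure

    WARNING_ZONE_MM = 1000   # illustrative: anything closer than 1 m
    MIN_BLOB_PIXELS = 200    # illustrative: ignore small specks of noise

    def find_obstacles(depth):
        close_mask = (depth > 0) & (depth < WARNING_ZONE_MM)  # valid and close pixels
        labels = measure.label(close_mask)                    # connected components
        obstacles = []
        for region in measure.regionprops(labels):
            if region.area < MIN_BLOB_PIXELS:
                continue
            rows, cols = region.coords[:, 0], region.coords[:, 1]
            obstacles.append({
                "bbox": region.bbox,                              # (min_row, min_col, max_row, max_col)
                "min_distance_mm": int(depth[rows, cols].min()),  # closest point of the blob
            })
        return obstacles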

LIDAR

Based on the RPLIDAR A2. The LIDAR transmits 360 measurements per scan cycle, at 5 Hz.

The LIDAR is mounted perpendicular to the floor, enabling it to identify walls and drops in the floor.
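
A minimal sketch of how a LaserScan could be screened for walls and drops under this mounting, assuming the scan plane intersects the floor ahead of the chair; the expected floor range and margins are illustrative, not calibrated values.

    #!/usr/bin/env python
    # Sketch: screen a LaserScan for walls (ranges much shorter than the floor)
    # and drops (ranges much longer than the floor). All thresholds are illustrative.
    import rospy
    from sensor_msgs.msg import LaserScan

    EXPECTED_FLOOR_RANGE = 1.0   # m, depends on mounting height and tilt
    WALL_MARGIN = 0.4            # much closer than the floor -> vertical obstacle
    DROP_MARGIN = 0.4            # much farther than the floor -> drop / stairs

    def on_scan(scan):
        for r in scan.ranges:
            if r <= scan.range_min or r >= scan.range_max:
                continue  # invalid measurement
            if r < EXPECTED_FLOOR_RANGE - WALL_MARGIN:
                rospy.logwarn("possible wall at %.2f m", r)
            elif r > EXPECTED_FLOOR_RANGE + DROP_MARGIN:
                rospy.logwarn("possible drop (range %.2f m)", r)

    rospy.init_node("lidar_screen_demo")
    rospy.Subscriber("/scan", LaserScan, on_scan)
    rospy.spin()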

IMU

Reads the Wizzybug's inclination and passes it on to the Decision Making (DM) module.

Figure 3: LIDAR scanning the front of the wheelchair path

Joystick

The joystick readings provide an estimate of the chair's velocity. This is used by the Decision Making module to estimate the Time to Collision (TTC) with detected obstacles.
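
A minimal sketch of the TTC estimate under the simplest assumption of constant velocity straight toward the obstacle; the numbers in the example are illustrative.

    # Sketch: Time to Collision under a constant-velocity assumption.
    # TTC = distance to the obstacle / closing speed.
    def time_to_collision(distance_m, speed_mps):
        if speed_mps <= 0.0:
            return float("inf")  # not closing in on the obstacle
        return distance_m / speed_mps

    # Example: an obstacle 1.5 m ahead with the chair moving at 0.5 m/s gives TTC = 3 s.
    assert abs(time_to_collision(1.5, 0.5) - 3.0) < 1e-9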

Decision Making

Algorithm Flow Chart:

TBD

Control

The Control module activates Numato Lab's 1-channel USB-powered relay module, which in turn activates the DX Control Power Module to enable and disable the Wizzybug's motors.
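
As an illustration, here is a sketch of driving a Numato USB relay over its virtual serial port using pyserial; the device path, baud rate, and the assumption that relay index 0 gates the power module are illustrative and must be checked against the actual wiring.

    # Sketch: toggle a Numato 1-channel USB relay over its virtual serial port.
    # The device path and relay index are assumptions, not the verified wiring.
    import serial

    def set_motor_power(enabled, port="/dev/ttyACM0"):
        with serial.Serial(port, 19200, timeout=1) as ser:
            command = "relay {} 0\r".format("on" if enabled else "off")
            ser.write(command.encode())

    set_motor_power(False)  # e.g., cut motor power on an emergency stop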

Figure 4: USB Controlled relay

The USB relay controls the DX2 Power Module of the wheelchair:

Figure 5: Power Module controlling the Motors

Figure 6: Controlling Speed Limit using DCI Connectors in the Power Module

HMI

The system has two main users: the child and the caregiver. The aim of the design is to give both types of users a set of tools for assessing the system's situational awareness at any given moment. The bug senses its environment and identifies the obstacles around it.

Educate & Protect - The design enables the child experiment with her environment, but also protects her when needed. We assume two operation areas. These are shown by figure 7. In the first, the inner area (marked in grey), the child is in control. The system should notify her when she approaches an obstacle and the notification should become more alerting as she approaches a problematic obstacle, but the child is in control over the bug. In the second, outer area (marked in white), control is taken from the child. The bug is now in control.

Privacy - Another principle the design wishes to promote is privacy. We do not want to impair the adorability of the bug by making it noisy and thereby deter other kids from approaching the child. The system should provide the child the required situation awareness through haptic stimulation, in a private manner. Sound feedback (using earcons or auditory icons) and spoken instructions should be used appropriately, and only when needed. The caregiver will be notified by visual feedback on an LED ring, and by the sound and speech delivered to the child.

Both types of feedback can be turned off in the system's set of preferences.

Figure 7. Areas of control

Obstacle classification scheme - The classification scheme refers to three types of obstacles, each defined by the potential harm to the child or the environment:

Type A events - still objects lower than 4-5 cm (e.g., soft or hard toys, a doorstep). The bug should indicate that it senses these types of obstacles, but shouldn't stop before them; it can drive over them. Since one cannot distinguish between the caregiver and other grown-ups, all adults are classified as A.

Type B events - These obstacles do not pose serious harm to the child, but the bug cannot drive over or through them (e.g., doorframes, walls, an ascending staircase, static objects higher than 5 cm). The bug should motivate the child to stop by sending notifications with an increasing sense of urgency as she approaches the obstacle. If the child does not stop the bug, the bug should stop driving just before the obstacle (a 50 cm margin allows the child to change the bug's position, for example to pass through a door frame in a smoother move). Experimentation - the stop-and-go mechanism will enable the child to continue driving toward the obstacle if she insists. The purpose of this feature is to teach the child what can and cannot be done with the bug, while supporting her wish to experiment with the world.

Type C events - These obstacles pose harm to the child or the environment (e.g., a descending staircase, a ramp, a glass wall, a wall with an adjacent tall obstacle that poses a danger of flipping over, babies, children, animals). The stop & guide/limit control feature makes the bug stop when type C objects are sensed (a 50 cm margin allows the child to change the bug's position). However, as opposed to B events, in type C events we cannot allow the child to experiment with the environment freely. In this case, the bug will not be able to proceed toward the dangerous area; it will be able to maneuver only in the directions defined as non-risky.

Type U - Unknowns - If the system fails to classify an obstacle that has the potential to harm the child the way type C obstacles do, the feedback should appear as for type C.
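
A minimal sketch of how this scheme could be encoded for the Decision Maker; the class labels follow the scheme above, while the behavior names and the lookup function are illustrative.

    # Sketch: encode the A/B/C/U obstacle classes and the reaction each implies.
    OBSTACLE_BEHAVIOR = {
        "A": "notify_only",          # drivable-over objects, adults
        "B": "stop_then_allow_go",   # walls, doorframes, ascending staircase
        "C": "stop_and_limit",       # descending staircase, glass wall, children, animals
        "U": "stop_and_limit",       # unknowns are treated like type C
    }

    def reaction_for(obstacle_class):
        # Fall back to the most conservative behavior for anything unexpected.
        return OBSTACLE_BEHAVIOR.get(obstacle_class, "stop_and_limit")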

Multimodal feedback

The set of feedback consists of synchronized multimodal visual-haptic feedback that will give the child a sense of directionality for the obstacles and their level of threat, plus sound alerts when needed.

Haptic Feedback

Vibration alerting is a great alternative to lights and beeps. Because the actuators are mounted on the body, the child's eyes are free, and the child can drive toward her areas of interest. The motor needs to be strong enough to overcome the damping of soft materials (nappies, coats and jumpers). This can be aided by ensuring the vibrations are directed towards the user. In our system we will use coin vibration motors.

Haptic stimulation will be given by actuators creating vibration. These will be placed along the upper seatbelt and the center straps to provide a sense of directionality for rear and front objects notification.

Figure 8. Placement of haptic actuators (Front, Left/Front, Right/Front, Left/Rear, Right/Rear, Rear)

The potential urgency of the event will be conveyed by manipulating the following features:

  • Intensity of the pulse
  • Pulse duration
  • Period (blinking pace)
  • Number of repetitions

A high-intensity pulse with a sharp attack and decay of the stimulus envelope, short pulses with short intervals, and a high number of repetitions will give the user a high sense of urgency.
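
As an illustration, here is a sketch mapping a normalized urgency level to these pulse parameters; the numeric ranges are illustrative and would need tuning with the actual coin motors.

    # Sketch: map a normalized urgency level (0.0 - 1.0) to haptic pulse parameters.
    # All numeric ranges are illustrative.
    def haptic_pattern(urgency):
        urgency = max(0.0, min(1.0, urgency))
        return {
            "intensity": 0.3 + 0.7 * urgency,          # fraction of full motor drive
            "pulse_duration_ms": 400 - 250 * urgency,  # shorter pulses feel more urgent
            "period_ms": 800 - 600 * urgency,          # faster pulsing pace
            "repetitions": int(1 + 5 * urgency),       # more repetitions at high urgency
        }

    # Example: a close obstacle might map to urgency ~0.9.
    print(haptic_pattern(0.9))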

Visual feedback

Visual feedback will be given by marking areas on an LED ring. A 24-LED ring placed around the joystick will be used to mark 6 areas, 4 LEDs per area: Front, Right/Front, Right/Rear, Rear, Left/Rear, and Left/Front.

Figure 9. Visual Feedback via LED ring display

Each event category is assigned a different color. The traffic-light metaphor was chosen for simplicity:

Type A is marked by Green – The bug can drive over the obstacle

Figure 10. Visual Feedback for Type A events

Type B is marked by Yellow – The bug can drive up to a point. No control in the next cycle.

Type C is marked by RED – The bug can drive up to a point. Control over the driving direction in the next cycle

Figure 11. Visual Feedback for Type B & C events
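
A minimal sketch of mapping the six sectors and the traffic-light colors onto the 24-LED ring; the assumption that the ring is wired as six contiguous groups of four LEDs starting at Front is illustrative, not the actual layout.

    # Sketch: map the six sectors to LED indices on a 24-LED ring and map the
    # event types to traffic-light colors. The contiguous-group layout is assumed.
    SECTOR_LEDS = {
        "front":       range(0, 4),
        "right_front": range(4, 8),
        "right_rear":  range(8, 12),
        "rear":        range(12, 16),
        "left_rear":   range(16, 20),
        "left_front":  range(20, 24),
    }

    TYPE_COLOR = {
        "A": (0, 255, 0),    # green: the bug can drive over the obstacle
        "B": (255, 255, 0),  # yellow: the bug can drive up to a point
        "C": (255, 0, 0),    # red: stop and limit the driving direction
        "U": (255, 0, 0),    # unknowns are shown like type C
    }

    def leds_for_event(sector, obstacle_type):
        # Return the (led_index, rgb) pairs to light for a detected event.
        color = TYPE_COLOR[obstacle_type]
        return [(i, color) for i in SECTOR_LEDS[sector]]

    # Example: a type B obstacle detected to the front-right.
    print(leds_for_event("right_front", "B"))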

As with the haptic stimuli, the potential urgency of the event will be conveyed by manipulating the same features: intensity of the pulse, pulse duration, period (blinking pace), and number of repetitions. A high-intensity pulse with a sharp attack and decay of the stimulus envelope, short pulses with short intervals, and a high number of repetitions will give the user a high sense of urgency.

Figure 12. Visual Feedback

Sound alerts

Sound alerts will play when the bug stops in type B & C/U events.

Type B - A mild alert will be given while stopping in B events (see Figure 7). The sound should give the child a very subtle sense of bumping into the "invisible wall" that made the bug stop.

Type C - In risky scenarios (C), an intense sound should play just before the bug stops. The sound file will be selectable by the caregiver, who can choose between an urgent alert and a softer alert sound (preference settings).

Verbal instructions will play after the chair has stopped, to provide the child with maneuvering guidance (these should be tailored to front/back alerts).

Caretaker Remote Application

The caretaker application runs on a mobile phone and provides notifications, connection monitoring, and an emergency remote stop for the chair.

Remote Notification

The system shall notify the caretaker of an imminent collision with obstacles; the same notifications the child receives (see Section 3.4) shall be broadcast to the caretaker's phone and trigger:

  1. Audible Notification
  2. Vibration
  3. Visual Notification that includes the direction of the object relative to the chair

Remote Emergency Stop

The caretaker application shall provide the capability to remotely stop the chair.

Remote Application Connection Status

The application shall indicate the status of the remote connection to the vehicle and notify the caretaker when there is no connection.
