Update code version to 0.2.0 for release (#212)
* remove advanced installation

* Update link and fix in install.md

* delete unnecessary stuff

* remove home_robot.md

* update docs

* update

* remove dex teleop

* update

* update readme

* update docs

* typo update

* Docs updates, fix dead links

Fixed dead links, markdown bugfixes, renamed install.md, removed some out of date info

* delete some bad tags

* Removed out of date information

Removed old install info from data_collection.md

---------

Co-authored-by: hello-peiqi <[email protected]>
Co-authored-by: bm-hellorobot <[email protected]>
3 people authored Dec 3, 2024
1 parent 310fc74 commit b15cb77
Showing 16 changed files with 63 additions and 381 deletions.
15 changes: 12 additions & 3 deletions README.md
@@ -122,8 +122,9 @@ Check out additional documentation for ways to use Stretch AI:
- [DynaMem](docs/dynamem.md) -- Run the LLM agent in dynamic scenes, meaning you can walk around and place objects as the robot explores
- [Data Collection for Learning from Demonstration](docs/data_collection.md) -- how to collect data for learning from demonstration
- [Learning from Demonstration](docs/learning_from_demonstration.md) -- how to train and evaluate policies with LfD
- [Apps](docs/apps.md) -- list of many different apps that you can run
- [Open-vocabulary mobile manipulation](docs/ovmm.md) -- experimental code which can handle more complex language commands
- [Apps](docs/apps.md) -- list of many different apps that you can run
- [Simple API](docs/simple_api.md) -- how to use the simple API to control the robot over wireless

## Development

@@ -135,11 +136,19 @@ pip install -e .[dev]
pre-commit install
```
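
If you want to run the hooks by hand before committing, a standard `pre-commit` invocation (not specific to this repository) is:

```bash
# Run every configured hook against all files in the repo
pre-commit run --all-files
```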

Then follow the quickstart section. See [CONTRIBUTING.md](CONTRIBUTING.md) for more information.
Then follow the quickstart section. See [CONTRIBUTING.md](CONTRIBUTING.md) for more information. There is some information on how to [debug](docs/debug.md) and [update](docs/update.md) the codebase.

You can test out most code in the [simulation](docs/simulation.md) environment, which is a good way to try things out without needing a robot.

### Updating Code on the Robot

See the [update guide](docs/update.md) for more information. There is an [update script](scripts.update.sh) which should handle some aspects of this. Code installed from git must be updated manually, including code from this repository.
See the [update guide](docs/update.md) for more information. Code installed from git must be updated manually, including code from this repository.

You can also pull the latest docker image on the robot with the following command:

```bash
./scripts/run_stretch_ai_ros2_bridge_server.sh --update
```
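
For code installed from git (including this repository), a manual update is a plain pull and reinstall. A minimal sketch, assuming the repo was cloned to `~/stretch_ai`:

```bash
# Path is an assumption -- use wherever you cloned stretch_ai
cd ~/stretch_ai
git pull
# Reinstall in editable mode so new dependencies are picked up
pip install -e .
```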

### Building Docker Images

64 changes: 0 additions & 64 deletions docs/about_advanced_installation.md

This file was deleted.

12 changes: 6 additions & 6 deletions docs/apps.md
@@ -7,7 +7,7 @@ Stretch AI comes with several apps that you can run to test the robot's capabilities:
- [View Images](#visualization-and-streaming-video) - View images from the robot's cameras.
- [Show Point Cloud](#show-point-cloud) - Show a joint point cloud from the end effector and head cameras.
- [Gripper](#use-the-gripper) - Open and close the gripper.
- [Rerun](#rerun) - Start a [rerun.io](https://rerun.io/)-based web server to visualize data from your robot.
- [Rerun](#rerun-web-server) - Start a [rerun.io](https://rerun.io/)-based web server to visualize data from your robot.
- [LLM Voice Chat](#voice-chat) - Chat with the robot using LLMs.

Advanced:
@@ -19,10 +19,10 @@ Advanced:
Finally:

- [Dex Teleop data collection](#dex-teleop-for-data-collection) - Dexterously teleoperate the robot to collect demonstration data.
- [Learning from Demonstration (LfD)](docs/learning_from_demonstration.md) - Train SOTA policies using [HuggingFace LeRobot](https://github.com/huggingface/lerobot)
- [Learning from Demonstration (LfD)](learning_from_demonstration.md) - Train SOTA policies using [HuggingFace LeRobot](https://github.com/huggingface/lerobot)
- [Dynamem OVMM system](dynamem.md) - Deploy open vocabulary mobile manipulation system [Dynamem](https://dynamem.github.io)

There are also some apps for [debugging](docs/debug.md).
There are also some apps for [debugging](debug.md).

## List of Apps

@@ -155,9 +155,9 @@ python -m pip install mediapipe
python -m stretch.app.dex_teleop.ros2_leader -i $ROBOT_IP --teleop-mode base_x --save-images --record-success --task-name default_task
```
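
As a usage sketch, the same leader command with a different task name (the name `pick_up_cup` here is purely illustrative) looks like:

```bash
python -m stretch.app.dex_teleop.ros2_leader -i $ROBOT_IP --teleop-mode base_x \
    --save-images --record-success --task-name pick_up_cup
```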

[Read the data collection documentation](docs/data_collection.md) for more details.
[Read the data collection documentation](data_collection.md) for more details.

After this, [read the learning from demonstration instructions](docs/learning_from_demonstration.md) to train a policy.
After this, [read the learning from demonstration instructions](learning_from_demonstration.md) to train a policy.

### Grasp an Object

@@ -189,7 +189,7 @@ Another useful flag when testing is the `--reset` flag, which will reset the rob

### Voxel Map Visualization

You can test the voxel code on a captured pickle file. We recommend trying with the included [hq_small.pkl](src/test/mapping/hq_small.pkl) or [hq_large](src/test/mapping/hq_large.pkl) files, which contain a short and a long captured trajectory from Hello Robot.
You can test the voxel code on a captured pickle file. We recommend trying with the included [hq_small.pkl](../src/test/mapping/hq_small.pkl) or [hq_large](../src/test/mapping/hq_large.pkl) files, which contain a short and a long captured trajectory from Hello Robot.

```bash
python -m stretch.app.read_map -i hq_small.pkl
```
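
The same app can load the longer included trajectory:

```bash
python -m stretch.app.read_map -i hq_large.pkl
```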
8 changes: 4 additions & 4 deletions docs/data_collection.md
@@ -16,10 +16,10 @@ python -m pip install webcam mediapipe

### On PC:

- Advanced installation is needed if you also want to train/evaluate policies with [LfD](learning_from_demonstration.md), see [advanced installation](../README.md#advanced-installation-pc-only)
-

- Linux instructions: if using a Linux PC, run `install_dex_teleop.sh` to update `udev` rules

```bash
cd /path/to/stretch_ai/scripts
./install_dex_teleop.sh
```

@@ -134,6 +134,6 @@ Therefore, it can be useful to tape out a reference position as a basis for vary

Episodes should reflect the way you want the robot to complete the task. Your first couple of runs at the task will not be consistent, but they will improve as you get a feel for the teleoperation. It is recommended to start collecting demonstrations used for training only after sufficient practice with the task.

| Task direction alignment | Taping for positional reference |
| :----------------------------------: | :-------------------------------: |
| Task direction alignment | Taping for positional reference |
|:------------------------------------:|:---------------------------------:|
| ![](./images/dex_teleop_example.jpg) | ![](./images/robot_alignment.jpg) |
58 changes: 0 additions & 58 deletions docs/dex_teleop.md

This file was deleted.

7 changes: 5 additions & 2 deletions docs/dynamem.md
@@ -17,7 +17,7 @@ However, there are some reasons why we might still want to use an AI agent:

_Click to follow the link to YouTube:_

[![Example of Dynamem in the wild](https://i9.ytimg.com/vi/oBHzOfUdRnE/mqdefault.jpg?sqp=CMD0nboG-oaymwEmCMACELQB8quKqQMa8AEB-AH-CYAC0AWKAgwIABABGGUgXShTMA8=&rs=AOn4CLAOlWNMyxe1WcShGRpP1BaH3wK2bg)](https://youtu.be/oBHzOfUdRnE)
[![Example of Dynamem in the wild](images/dynamem.png)](https://youtu.be/oBHzOfUdRnE)

[Above](https://youtu.be/oBHzOfUdRnE) shows Dynamem running in the NYU kitchen.

@@ -85,6 +85,9 @@ python -m stretch.app.run_dynamem --robot_ip $ROBOT_IP --server_ip $WORKSTATION_SERVER_IP
`robot_ip` is used to communicate with the robot and `server_ip` is used to communicate with the server where AnyGrasp runs. If you don't run AnyGrasp (e.g. navigation only, or running Stretch AI visual servoing manipulation instead), then set `server_ip` to `127.0.0.1`.
If you plan to run AnyGrasp on the same workstation, we highly recommend you find the IP of this workstation instead of naively setting `server_ip` to `127.0.0.1`.
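
As a sketch of that same-workstation setup (the `192.168.1.42` address below is a placeholder for your workstation's actual LAN IP):

```bash
# On Linux, list this workstation's addresses and pick the one on the robot's network
hostname -I

# Then pass that address explicitly instead of 127.0.0.1
python -m stretch.app.run_dynamem --robot_ip $ROBOT_IP --server_ip 192.168.1.42
```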

Once the robot starts doing OVMM, a Rerun window will pop up to visualize the robot's thoughts.
![Example of Dynamem in the wild](images/dynamem_rerun.png)

### Loading from previous semantic memory
Dynamem stores the semantic memory as a pickle file after the initial rotation in place and every time `navigate(A)` is executed. This allows Dynamem to read from a saved pickle file so that it can directly load semantic memory from previous runs without rotating in place and scanning its surroundings again.

@@ -97,7 +100,7 @@ By specifying `input-path`, the robot will first read semantic memory from spec
The command looks like this:

```
python -m stretch.app.run_dynamem --robot_ip $ROBOT_IP --server_ip $WORKSTATION_SERVER_IP --output-path #PICKLE_FILE_PATH --input-path $PICKLE_FILE_PATH
python -m stretch.app.run_dynamem --robot_ip $ROBOT_IP --server_ip $WORKSTATION_SERVER_IP --output-path $PICKLE_FILE_PATH --input-path $PICKLE_FILE_PATH
```

### Manipulation with AnyGrasp
101 changes: 0 additions & 101 deletions docs/home_robot.md

This file was deleted.

3 changes: 3 additions & 0 deletions docs/images/dynamem.png
3 changes: 3 additions & 0 deletions docs/images/dynamem_rerun.png
4 changes: 2 additions & 2 deletions docs/install.md → docs/install_details.md
@@ -1,8 +1,8 @@
# Stretch AI Installation

Stretch AI supports Python 3.10. We recommend using \[mamba\]https://mamba.readthedocs.io/en/latest/installation/mamba-installation.html) to manage dependencies, or [starting with Docker](./start_with_docker.md).
Stretch AI supports Python 3.10. We recommend using [mamba](https://mamba.readthedocs.io/en/latest/installation/mamba-installation.html) to manage dependencies, or [starting with Docker](./start_with_docker.md).

If you do not start with docker, follow the [install guide](docs/install.md).
If you do not start with docker, follow the [install guide](docs/install_details.md).

### System Dependencies

