Update links after renaming the repo from segment-anything-2 to `sam2` (facebookresearch#341)

This PR updates repo links after we renamed the repo from `segment-anything-2` to `sam2`. It also changes `NAME` in setup.py to `SAM-2`, which is already the name used for the pip package, since Python package names don't allow whitespace.
ronghanghu authored Oct 1, 2024
1 parent 05d9e57 commit 98fcb16
Showing 9 changed files with 28 additions and 28 deletions.
10 changes: 5 additions & 5 deletions INSTALL.md
@@ -48,7 +48,7 @@ I got `ImportError: cannot import name '_C' from 'sam2'`

This is usually because you haven't run the `pip install -e ".[notebooks]"` step above or the installation failed. Please install SAM 2 first, and see the other issues if your installation fails.

-In some systems, you may need to run `python setup.py build_ext --inplace` in the SAM 2 repo root as suggested in https://github.com/facebookresearch/segment-anything-2/issues/77.
+In some systems, you may need to run `python setup.py build_ext --inplace` in the SAM 2 repo root as suggested in https://github.com/facebookresearch/sam2/issues/77.
</details>

<details>
@@ -59,7 +59,7 @@ I got `MissingConfigException: Cannot find primary config 'configs/sam2.1/sam2.1

This is usually because you haven't run the `pip install -e .` step above, so `sam2` isn't in your Python's `sys.path`. Please run this installation step. In case it still fails after the installation step, you may try manually adding the root of this repo to `PYTHONPATH` via
```bash
-export SAM2_REPO_ROOT=/path/to/segment-anything-2 # path to this repo
+export SAM2_REPO_ROOT=/path/to/sam2 # path to this repo
export PYTHONPATH="${SAM2_REPO_ROOT}:${PYTHONPATH}"
```
to manually add `sam2_configs` into your Python's `sys.path`.
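Equivalently, if exporting environment variables is inconvenient, a minimal Python-side sketch of the same fix (the path below is a placeholder for wherever you cloned the repo):

```python
# Alternative to the PYTHONPATH export above: prepend the repo root to
# sys.path at the top of your script or notebook session.
import sys

sys.path.insert(0, "/path/to/sam2")  # adjust to your local clone of this repo
```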
@@ -84,7 +84,7 @@ from sam2.modeling import sam2_base

print(sam2_base.__file__)
```
-and check whether the content in the printed local path of `sam2/modeling/sam2_base.py` matches the latest one in https://github.com/facebookresearch/segment-anything-2/blob/main/sam2/modeling/sam2_base.py (e.g. whether your local file has `no_obj_embed_spatial`) to identify if you're still using a previous installation.
+and check whether the content in the printed local path of `sam2/modeling/sam2_base.py` matches the latest one in https://github.com/facebookresearch/sam2/blob/main/sam2/modeling/sam2_base.py (e.g. whether your local file has `no_obj_embed_spatial`) to identify if you're still using a previous installation.
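One way to run this check programmatically (a small sketch; it simply inspects the installed module's source for the newer attribute name):

```python
# Verify that the installed sam2_base.py is the current version by checking
# whether its source contains the `no_obj_embed_spatial` logic.
import inspect

from sam2.modeling import sam2_base

print(sam2_base.__file__)
print("no_obj_embed_spatial" in inspect.getsource(sam2_base))
```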

</details>

@@ -123,7 +123,7 @@ This usually happens because you have multiple versions of dependencies (PyTorch

In particular, if you have a lower PyTorch version than 2.3.1, it's recommended to upgrade to PyTorch 2.3.1 or higher first. Otherwise, the installation script will try to upgrade to the latest PyTorch using `pip`, which could sometimes lead to a duplicate PyTorch installation if you have previously installed another PyTorch version using `conda`.

-We have been building SAM 2 against PyTorch 2.3.1 internally. However, a few user comments (e.g. https://github.com/facebookresearch/segment-anything-2/issues/22, https://github.com/facebookresearch/segment-anything-2/issues/14) suggested that downgrading to PyTorch 2.1.0 might resolve this problem. In case the error persists, you may try changing the restriction from `torch>=2.3.1` to `torch>=2.1.0` in both [`pyproject.toml`](pyproject.toml) and [`setup.py`](setup.py) to allow PyTorch 2.1.0.
+We have been building SAM 2 against PyTorch 2.3.1 internally. However, a few user comments (e.g. https://github.com/facebookresearch/sam2/issues/22, https://github.com/facebookresearch/sam2/issues/14) suggested that downgrading to PyTorch 2.1.0 might resolve this problem. In case the error persists, you may try changing the restriction from `torch>=2.3.1` to `torch>=2.1.0` in both [`pyproject.toml`](pyproject.toml) and [`setup.py`](setup.py) to allow PyTorch 2.1.0.
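As a rough sketch of that change (the exact variable layout in the repository's `setup.py` may differ, and the torchvision pin below is an assumption chosen to match PyTorch 2.1):

```python
# setup.py (sketch): relax the minimum PyTorch requirement.
REQUIRED_PACKAGES = [
    "torch>=2.1.0",         # was "torch>=2.3.1"
    "torchvision>=0.16.0",  # assumed pin matching PyTorch 2.1
    # ... other dependencies unchanged ...
]
```

Remember to make the corresponding edit in `pyproject.toml` as well, since the build backend reads its own `requires` list.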
</details>

<details>
@@ -168,7 +168,7 @@ You may see an error log like:
> unsupported Microsoft Visual Studio version! Only the versions between 2017 and 2022 (inclusive) are supported! The nvcc flag '-allow-unsupported-compiler' can be used to override this version check; however, using an unsupported host compiler may cause compilation failure or incorrect run time execution. Use at your own risk.
This is probably because your versions of CUDA and Visual Studio are incompatible (see also https://stackoverflow.com/questions/78515942/cuda-compatibility-with-visual-studio-2022-version-17-10 for a related discussion on Stack Overflow).<br>
-You may be able to fix this by adding the `-allow-unsupported-compiler` argument to `nvcc` after L48 in the [setup.py](https://github.com/facebookresearch/segment-anything-2/blob/main/setup.py). <br>
+You may be able to fix this by adding the `-allow-unsupported-compiler` argument to `nvcc` after L48 in the [setup.py](https://github.com/facebookresearch/sam2/blob/main/setup.py). <br>
After adding the argument, `get_extensions()` will look like this:
```python
def get_extensions():
    ...  # remainder of the function collapsed in the original diff view
```
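For context, a hedged sketch of what the patched function could look like — the exact source list and compiler flags in the repository's `setup.py` may differ; only the added `-allow-unsupported-compiler` flag is the point here:

```python
from torch.utils.cpp_extension import CUDAExtension


def get_extensions():
    # Sketch only: sources and flags are assumptions modeled on a typical
    # CUDA-extension setup; check the actual setup.py in the repo.
    srcs = ["sam2/csrc/connected_components.cu"]
    compile_args = {
        "cxx": [],
        "nvcc": [
            "-DCUDA_HAS_FP16=1",
            "-D__CUDA_NO_HALF_OPERATORS__",
            "-D__CUDA_NO_HALF_CONVERSIONS__",
            "-D__CUDA_NO_HALF2_OPERATORS__",
            "-allow-unsupported-compiler",  # override the MSVC version check
        ],
    }
    return [CUDAExtension("sam2._C", srcs, extra_compile_args=compile_args)]
```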
10 changes: 5 additions & 5 deletions README.md
@@ -26,9 +26,9 @@
SAM 2 needs to be installed before use. The code requires `python>=3.10`, as well as `torch>=2.3.1` and `torchvision>=0.18.1`. Please follow the instructions [here](https://pytorch.org/get-started/locally/) to install both PyTorch and TorchVision dependencies. You can install SAM 2 on a GPU machine using:

```bash
-git clone https://github.com/facebookresearch/segment-anything-2.git
+git clone https://github.com/facebookresearch/sam2.git

-cd segment-anything-2 && pip install -e .
+cd sam2 && pip install -e .
```
If you are installing on Windows, it's strongly recommended to use [Windows Subsystem for Linux (WSL)](https://learn.microsoft.com/en-us/windows/wsl/install) with Ubuntu.

@@ -86,9 +86,9 @@ with torch.inference_mode(), torch.autocast("cuda", dtype=torch.bfloat16):
masks, _, _ = predictor.predict(<input_prompts>)
```

-Please refer to the examples in [image_predictor_example.ipynb](./notebooks/image_predictor_example.ipynb) (also in Colab [here](https://colab.research.google.com/github/facebookresearch/segment-anything-2/blob/main/notebooks/image_predictor_example.ipynb)) for static image use cases.
+Please refer to the examples in [image_predictor_example.ipynb](./notebooks/image_predictor_example.ipynb) (also in Colab [here](https://colab.research.google.com/github/facebookresearch/sam2/blob/main/notebooks/image_predictor_example.ipynb)) for static image use cases.

-SAM 2 also supports automatic mask generation on images just like SAM. Please see [automatic_mask_generator_example.ipynb](./notebooks/automatic_mask_generator_example.ipynb) (also in Colab [here](https://colab.research.google.com/github/facebookresearch/segment-anything-2/blob/main/notebooks/automatic_mask_generator_example.ipynb)) for automatic mask generation in images.
+SAM 2 also supports automatic mask generation on images just like SAM. Please see [automatic_mask_generator_example.ipynb](./notebooks/automatic_mask_generator_example.ipynb) (also in Colab [here](https://colab.research.google.com/github/facebookresearch/sam2/blob/main/notebooks/automatic_mask_generator_example.ipynb)) for automatic mask generation in images.
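To make the truncated snippet above concrete, here is a hedged end-to-end sketch for images — the checkpoint path, config name, sample image, and prompt point are all assumptions to adapt to your setup:

```python
# Image prediction sketch: load a SAM 2 checkpoint and prompt with one point.
import numpy as np
import torch
from PIL import Image

from sam2.build_sam import build_sam2
from sam2.sam2_image_predictor import SAM2ImagePredictor

checkpoint = "./checkpoints/sam2.1_hiera_large.pt"  # assumed download location
model_cfg = "configs/sam2.1/sam2.1_hiera_l.yaml"    # assumed config name
predictor = SAM2ImagePredictor(build_sam2(model_cfg, checkpoint))

image = np.array(Image.open("notebooks/images/truck.jpg").convert("RGB"))

with torch.inference_mode(), torch.autocast("cuda", dtype=torch.bfloat16):
    predictor.set_image(image)
    # A single positive click at pixel (x=500, y=375); label 1 = foreground.
    masks, scores, logits = predictor.predict(
        point_coords=np.array([[500, 375]]),
        point_labels=np.array([1]),
    )
```

The automatic mask generator follows the same loading pattern, with `SAM2AutomaticMaskGenerator(build_sam2(model_cfg, checkpoint)).generate(image)` in place of the prompted `predict` call.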

### Video prediction

@@ -113,7 +113,7 @@ with torch.inference_mode(), torch.autocast("cuda", dtype=torch.bfloat16):
...
```

-Please refer to the examples in [video_predictor_example.ipynb](./notebooks/video_predictor_example.ipynb) (also in Colab [here](https://colab.research.google.com/github/facebookresearch/segment-anything-2/blob/main/notebooks/video_predictor_example.ipynb)) for details on how to add click or box prompts, make refinements, and track multiple objects in videos.
+Please refer to the examples in [video_predictor_example.ipynb](./notebooks/video_predictor_example.ipynb) (also in Colab [here](https://colab.research.google.com/github/facebookresearch/sam2/blob/main/notebooks/video_predictor_example.ipynb)) for details on how to add click or box prompts, make refinements, and track multiple objects in videos.
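For completeness, a hedged sketch of the video workflow — the method names follow the predictor API excerpted in this README, while the frame directory, object id, and click coordinates are assumptions:

```python
# Video prediction sketch: prompt one object on frame 0, then propagate.
import numpy as np
import torch

from sam2.build_sam import build_sam2_video_predictor

checkpoint = "./checkpoints/sam2.1_hiera_large.pt"  # assumed download location
model_cfg = "configs/sam2.1/sam2.1_hiera_l.yaml"    # assumed config name
predictor = build_sam2_video_predictor(model_cfg, checkpoint)

with torch.inference_mode(), torch.autocast("cuda", dtype=torch.bfloat16):
    state = predictor.init_state("./videos/bedroom")  # directory of JPEG frames

    # Add a positive click for object 1 on the first frame.
    frame_idx, object_ids, masks = predictor.add_new_points_or_box(
        state,
        frame_idx=0,
        obj_id=1,
        points=np.array([[210, 350]], dtype=np.float32),
        labels=np.array([1], dtype=np.int32),
    )

    # Propagate the prompt through the rest of the video.
    for frame_idx, object_ids, masks in predictor.propagate_in_video(state):
        pass  # collect or visualize the per-frame masks here
```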

## Load from 🤗 Hugging Face

2 changes: 1 addition & 1 deletion demo/frontend/src/demo/DemoConfig.tsx
@@ -25,7 +25,7 @@ export const RESEARCH_BY_META_AI = 'By Meta FAIR';
export const DEMO_FRIENDLY_NAME = 'Segment Anything 2 Demo';
export const VIDEO_WATERMARK_TEXT = `Modified with ${DEMO_FRIENDLY_NAME}`;
export const PROJECT_GITHUB_URL =
-  'https://github.com/facebookresearch/segment-anything-2';
+  'https://github.com/facebookresearch/sam2';
export const AIDEMOS_URL = 'https://aidemos.meta.com';
export const ABOUT_URL = 'https://ai.meta.com/sam2';
export const EMAIL_ADDRESS = '[email protected]';
6 changes: 3 additions & 3 deletions notebooks/automatic_mask_generator_example.ipynb
@@ -34,7 +34,7 @@
"id": "4290fb06-a63f-4624-a70c-f7c9aae4b5d5",
"metadata": {},
"source": [
"<a target=\"_blank\" href=\"https://colab.research.google.com/github/facebookresearch/segment-anything-2/blob/main/notebooks/automatic_mask_generator_example.ipynb\">\n",
"<a target=\"_blank\" href=\"https://colab.research.google.com/github/facebookresearch/sam2/blob/main/notebooks/automatic_mask_generator_example.ipynb\">\n",
" <img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/>\n",
"</a>"
]
@@ -82,10 +82,10 @@
" print(\"CUDA is available:\", torch.cuda.is_available())\n",
" import sys\n",
" !{sys.executable} -m pip install opencv-python matplotlib\n",
" !{sys.executable} -m pip install 'git+https://github.com/facebookresearch/segment-anything-2.git'\n",
" !{sys.executable} -m pip install 'git+https://github.com/facebookresearch/sam2.git'\n",
"\n",
" !mkdir -p images\n",
" !wget -P images https://raw.githubusercontent.com/facebookresearch/segment-anything-2/main/notebooks/images/cars.jpg\n",
" !wget -P images https://raw.githubusercontent.com/facebookresearch/sam2/main/notebooks/images/cars.jpg\n",
"\n",
" !mkdir -p ../checkpoints/\n",
" !wget -P ../checkpoints/ https://dl.fbaipublicfiles.com/segment_anything_2/092824/sam2.1_hiera_large.pt"
10 changes: 5 additions & 5 deletions notebooks/image_predictor_example.ipynb
@@ -33,7 +33,7 @@
"id": "ee822903-7739-4c1b-941a-b292b6e89bcf",
"metadata": {},
"source": [
"<a target=\"_blank\" href=\"https://colab.research.google.com/github/facebookresearch/segment-anything-2/blob/main/notebooks/image_predictor_example.ipynb\">\n",
"<a target=\"_blank\" href=\"https://colab.research.google.com/github/facebookresearch/sam2/blob/main/notebooks/image_predictor_example.ipynb\">\n",
" <img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/>\n",
"</a>"
]
@@ -51,7 +51,7 @@
"id": "07fabfee",
"metadata": {},
"source": [
"If running locally using jupyter, first install `sam2` in your environment using the [installation instructions](https://github.com/facebookresearch/segment-anything-2#installation) in the repository.\n",
"If running locally using jupyter, first install `sam2` in your environment using the [installation instructions](https://github.com/facebookresearch/sam2#installation) in the repository.\n",
"\n",
"If running from Google Colab, set `using_colab=True` below and run the cell. In Colab, be sure to select 'GPU' under 'Edit'->'Notebook Settings'->'Hardware accelerator'. Note that it's recommended to use **A100 or L4 GPUs when running in Colab** (T4 GPUs might also work, but could be slow and might run out of memory in some cases)."
]
@@ -81,11 +81,11 @@
" print(\"CUDA is available:\", torch.cuda.is_available())\n",
" import sys\n",
" !{sys.executable} -m pip install opencv-python matplotlib\n",
" !{sys.executable} -m pip install 'git+https://github.com/facebookresearch/segment-anything-2.git'\n",
" !{sys.executable} -m pip install 'git+https://github.com/facebookresearch/sam2.git'\n",
"\n",
" !mkdir -p images\n",
" !wget -P images https://raw.githubusercontent.com/facebookresearch/segment-anything-2/main/notebooks/images/truck.jpg\n",
" !wget -P images https://raw.githubusercontent.com/facebookresearch/segment-anything-2/main/notebooks/images/groceries.jpg\n",
" !wget -P images https://raw.githubusercontent.com/facebookresearch/sam2/main/notebooks/images/truck.jpg\n",
" !wget -P images https://raw.githubusercontent.com/facebookresearch/sam2/main/notebooks/images/groceries.jpg\n",
"\n",
" !mkdir -p ../checkpoints/\n",
" !wget -P ../checkpoints/ https://dl.fbaipublicfiles.com/segment_anything_2/092824/sam2.1_hiera_large.pt"
8 changes: 4 additions & 4 deletions notebooks/video_predictor_example.ipynb
@@ -37,7 +37,7 @@
"id": "a887b90f-6576-4ef8-964e-76d3a156ccb6",
"metadata": {},
"source": [
"<a target=\"_blank\" href=\"https://colab.research.google.com/github/facebookresearch/segment-anything-2/blob/main/notebooks/video_predictor_example.ipynb\">\n",
"<a target=\"_blank\" href=\"https://colab.research.google.com/github/facebookresearch/sam2/blob/main/notebooks/video_predictor_example.ipynb\">\n",
" <img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/>\n",
"</a>"
]
@@ -55,7 +55,7 @@
"id": "8491a127-4c01-48f5-9dc5-f148a9417fdf",
"metadata": {},
"source": [
"If running locally using jupyter, first install `sam2` in your environment using the [installation instructions](https://github.com/facebookresearch/segment-anything-2#installation) in the repository.\n",
"If running locally using jupyter, first install `sam2` in your environment using the [installation instructions](https://github.com/facebookresearch/sam2#installation) in the repository.\n",
"\n",
"If running from Google Colab, set `using_colab=True` below and run the cell. In Colab, be sure to select 'GPU' under 'Edit'->'Notebook Settings'->'Hardware accelerator'. Note that it's recommended to use **A100 or L4 GPUs when running in Colab** (T4 GPUs might also work, but could be slow and might run out of memory in some cases)."
]
@@ -85,7 +85,7 @@
" print(\"CUDA is available:\", torch.cuda.is_available())\n",
" import sys\n",
" !{sys.executable} -m pip install opencv-python matplotlib\n",
" !{sys.executable} -m pip install 'git+https://github.com/facebookresearch/segment-anything-2.git'\n",
" !{sys.executable} -m pip install 'git+https://github.com/facebookresearch/sam2.git'\n",
"\n",
" !mkdir -p videos\n",
" !wget -P videos https://dl.fbaipublicfiles.com/segment_anything_2/assets/bedroom.zip\n",
@@ -1047,7 +1047,7 @@
"id": "e023f91f-0cc5-4980-ae8e-a13c5749112b",
"metadata": {},
"source": [
"Note that in addition to clicks or boxes, SAM 2 also supports directly using a **mask prompt** as input via the `add_new_mask` method in the `SAM2VideoPredictor` class. This can be helpful in e.g. semi-supervised VOS evaluations (see [tools/vos_inference.py](https://github.com/facebookresearch/segment-anything-2/blob/main/tools/vos_inference.py) for an example)."
"Note that in addition to clicks or boxes, SAM 2 also supports directly using a **mask prompt** as input via the `add_new_mask` method in the `SAM2VideoPredictor` class. This can be helpful in e.g. semi-supervised VOS evaluations (see [tools/vos_inference.py](https://github.com/facebookresearch/sam2/blob/main/tools/vos_inference.py) for an example)."
]
},
{
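The notebook cell above mentions mask prompts via `add_new_mask`. Below is a hedged sketch of that path, reusing a `predictor` and `state` initialized as in the video sketch earlier on this page; the argument names mirror the point-based call and the seed mask shape/values are placeholders, not from the source:

```python
# Mask-prompt sketch: seed object 1 on frame 0 with a boolean mask instead
# of clicks, then propagate through the video as usual.
import numpy as np
import torch

seed_mask = np.zeros((480, 854), dtype=bool)  # placeholder; use a real object mask
seed_mask[200:300, 300:500] = True

with torch.inference_mode(), torch.autocast("cuda", dtype=torch.bfloat16):
    frame_idx, object_ids, masks = predictor.add_new_mask(
        state, frame_idx=0, obj_id=1, mask=seed_mask
    )
    for frame_idx, object_ids, masks in predictor.propagate_in_video(state):
        pass  # per-frame masks for the seeded object
```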
2 changes: 1 addition & 1 deletion sam2/utils/misc.py
@@ -329,7 +329,7 @@ def fill_holes_in_mask_scores(mask, max_area):
f"{e}\n\nSkipping the post-processing step due to the error above. You can "
"still use SAM 2 and it's OK to ignore the error above, although some post-processing "
"functionality may be limited (which doesn't affect the results in most cases; see "
"https://github.com/facebookresearch/segment-anything-2/blob/main/INSTALL.md).",
"https://github.com/facebookresearch/sam2/blob/main/INSTALL.md).",
category=UserWarning,
stacklevel=2,
)
2 changes: 1 addition & 1 deletion sam2/utils/transforms.py
@@ -108,7 +108,7 @@ def postprocess_masks(self, masks: torch.Tensor, orig_hw) -> torch.Tensor:
f"{e}\n\nSkipping the post-processing step due to the error above. You can "
"still use SAM 2 and it's OK to ignore the error above, although some post-processing "
"functionality may be limited (which doesn't affect the results in most cases; see "
"https://github.com/facebookresearch/segment-anything-2/blob/main/INSTALL.md).",
"https://github.com/facebookresearch/sam2/blob/main/INSTALL.md).",
category=UserWarning,
stacklevel=2,
)
6 changes: 3 additions & 3 deletions setup.py
@@ -8,10 +8,10 @@
from setuptools import find_packages, setup

# Package metadata
NAME = "SAM 2"
NAME = "SAM-2"
VERSION = "1.0"
DESCRIPTION = "SAM 2: Segment Anything in Images and Videos"
URL = "https://github.com/facebookresearch/segment-anything-2"
URL = "https://github.com/facebookresearch/sam2"
AUTHOR = "Meta AI"
AUTHOR_EMAIL = "[email protected]"
LICENSE = "Apache 2.0"
@@ -79,7 +79,7 @@
"Failed to build the SAM 2 CUDA extension due to the error above. "
"You can still use SAM 2 and it's OK to ignore the error above, although some "
"post-processing functionality may be limited (which doesn't affect the results in most cases; "
"(see https://github.com/facebookresearch/segment-anything-2/blob/main/INSTALL.md).\n"
"(see https://github.com/facebookresearch/sam2/blob/main/INSTALL.md).\n"
)


