
Commit

Merge remote-tracking branch 'origin/main' into rope
irexyc committed Dec 9, 2024
2 parents 2beb46f + 47fa7cf commit d1eb613
Showing 5 changed files with 8 additions and 5 deletions.
2 changes: 1 addition & 1 deletion docs/en/get_started/installation.md
@@ -23,7 +23,7 @@ pip install lmdeploy
 The default prebuilt package is compiled on **CUDA 12**. If CUDA 11+ (>=11.3) is required, you can install lmdeploy by:
 
 ```shell
-export LMDEPLOY_VERSION=0.6.3
+export LMDEPLOY_VERSION=0.6.4
 export PYTHON_VERSION=38
 pip install https://github.com/InternLM/lmdeploy/releases/download/v${LMDEPLOY_VERSION}/lmdeploy-${LMDEPLOY_VERSION}+cu118-cp${PYTHON_VERSION}-cp${PYTHON_VERSION}-manylinux2014_x86_64.whl --extra-index-url https://download.pytorch.org/whl/cu118
 ```
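
After installing the wheel, the package's reported version should match the tag exported above; a minimal sanity-check sketch (not part of this commit):

```python
# Verify the installed wheel matches the exported LMDEPLOY_VERSION.
import lmdeploy

assert lmdeploy.__version__ == '0.6.4', lmdeploy.__version__
print(lmdeploy.__version__)
```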
Expand Down
2 changes: 1 addition & 1 deletion docs/zh_cn/get_started/installation.md
@@ -23,7 +23,7 @@ pip install lmdeploy
 The default prebuilt package is compiled on **CUDA 12**. If you need CUDA 11+ (>=11.3), you can install lmdeploy with the following command:
 
 ```shell
-export LMDEPLOY_VERSION=0.6.3
+export LMDEPLOY_VERSION=0.6.4
 export PYTHON_VERSION=38
 pip install https://github.com/InternLM/lmdeploy/releases/download/v${LMDEPLOY_VERSION}/lmdeploy-${LMDEPLOY_VERSION}+cu118-cp${PYTHON_VERSION}-cp${PYTHON_VERSION}-manylinux2014_x86_64.whl --extra-index-url https://download.pytorch.org/whl/cu118
 ```
Expand Down
5 changes: 4 additions & 1 deletion lmdeploy/pytorch/models/patch.py
@@ -8,6 +8,7 @@
 import torch
 from transformers.configuration_utils import PretrainedConfig
+from transformers.modeling_utils import load_state_dict
 
 from lmdeploy.utils import get_logger

@@ -295,7 +296,9 @@ def add_adapters(model: torch.nn.Module,
     for name, path in adapters.items():
         adapter_id = adapter_id_map[name]
         checkpoint_path = f'{path}/adapter_model.bin'
-        state_dict = torch.load(checkpoint_path, map_location=device)
+        if not osp.exists(checkpoint_path):
+            checkpoint_path = f'{path}/adapter_model.safetensors'
+        state_dict = load_state_dict(checkpoint_path, map_location=device)
 
         if hasattr(model, 'load_lora_weights'):
             model.load_lora_weights(state_dict.items(), adapter_id=adapter_id)
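
This hunk makes adapter loading tolerate both checkpoint layouts: older PEFT versions save `adapter_model.bin`, while newer ones write `adapter_model.safetensors` by default. A minimal standalone sketch of the same fallback, using `safetensors.torch.load_file` directly rather than transformers' `load_state_dict` helper (the function name and structure below are illustrative, not the repository's API):

```python
import os.path as osp

import torch
from safetensors.torch import load_file


def load_adapter_state_dict(path: str, device: str = 'cpu') -> dict:
    """Load a LoRA adapter, preferring the legacy .bin checkpoint and
    falling back to the safetensors file that newer PEFT saves by default."""
    checkpoint_path = osp.join(path, 'adapter_model.bin')
    if osp.exists(checkpoint_path):
        return torch.load(checkpoint_path, map_location=device)
    # safetensors stores plain tensors; load_file places them on `device`.
    return load_file(osp.join(path, 'adapter_model.safetensors'), device=device)
```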
2 changes: 1 addition & 1 deletion lmdeploy/version.py
@@ -1,7 +1,7 @@
 # Copyright (c) OpenMMLab. All rights reserved.
 from typing import Tuple
 
-__version__ = '0.6.3'
+__version__ = '0.6.4'
 short_version = __version__


2 changes: 1 addition & 1 deletion requirements/runtime_ascend.txt
@@ -1,5 +1,5 @@
 accelerate>=0.29.3
-dlinfer-ascend>=0.1.2
+dlinfer-ascend>=0.1.3
 einops
 fastapi
 fire
