
[Bug] Can't chat with Mistral-7B-Instruct-v0.3 using transformers>=4.52.0; raises error: unsupported operand type(s) for *: 'int' and 'NoneType' #3787

@zhulinJulia24

Description


Checklist

  • 1. I have searched related issues but cannot get the expected help.
  • 2. The bug has not been fixed in the latest version.
  • 3. Please note that if the bug-related issue you submitted lacks corresponding environment info and a minimal reproducible demo, it will be challenging for us to reproduce and resolve the issue, reducing the likelihood of receiving feedback.

Describe the bug

Chatting with Mistral-7B-Instruct-v0.3 fails when transformers>=4.52.0 is installed.
Both the TurboMind and PyTorch backends hit the error.
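
The failing value appears to come from the Hugging Face config rather than from lmdeploy itself: under transformers>=4.52.0 the Mistral config seems to report head_dim as None when config.json does not set it, whereas older releases derived it from hidden_size // num_attention_heads. A quick check (the default behaviour described here is a suspicion, not confirmed):

```python
from transformers import AutoConfig

# Suspected root cause: with transformers>=4.52.0 the loaded Mistral config may
# expose head_dim=None when config.json does not define it, while older releases
# computed hidden_size // num_attention_heads. Both lmdeploy backends read this field.
cfg = AutoConfig.from_pretrained('/nvme/qa_test_models/mistralai/Mistral-7B-Instruct-v0.3')
print(getattr(cfg, 'head_dim', None))              # None on the failing setup (suspected)
print(cfg.hidden_size // cfg.num_attention_heads)  # 128, the value both backends expect
```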

Reproduction

  1. lmdeploy chat /nvme/qa_test_models/mistralai/Mistral-7B-Instruct-v0.3

  2. lmdeploy chat /nvme/qa_test_models/mistralai/Mistral-7B-Instruct-v0.3 --backend pytorch

Environment

sys.platform: linux
Python: 3.10.18 (main, Jun  5 2025, 13:14:17) [GCC 11.2.0]
CUDA available: True
MUSA available: False
numpy_random_seed: 2147483648
GPU 0,1,2,3,4,5,6,7: NVIDIA A100-SXM4-80GB
CUDA_HOME: /usr/local/cuda
NVCC: Cuda compilation tools, release 11.7, V11.7.64
GCC: gcc (GCC) 10.1.0
PyTorch: 2.6.0+cu118
PyTorch compiling details: PyTorch built with:
  - GCC 9.3
  - C++ Version: 201703
  - Intel(R) oneAPI Math Kernel Library Version 2024.2-Product Build 20240605 for Intel(R) 64 architecture applications
  - Intel(R) MKL-DNN v3.5.3 (Git Hash 66f0cb9eb66affd2da3bf5f8d897376f04aae6af)
  - OpenMP 201511 (a.k.a. OpenMP 4.5)
  - LAPACK is enabled (usually provided by MKL)
  - NNPACK is enabled
  - CPU capability usage: AVX512
  - CUDA Runtime 11.8
  - NVCC architecture flags: -gencode;arch=compute_50,code=sm_50;-gencode;arch=compute_60,code=sm_60;-gencode;arch=compute_70,code=sm_70;-gencode;arch=compute_75,code=sm_75;-gencode;arch=compute_80,code=sm_80;-gencode;arch=compute_86,code=sm_86;-gencode;arch=compute_37,code=sm_37;-gencode;arch=compute_90,code=sm_90
  - CuDNN 90.1
  - Magma 2.6.1
  - Build settings: BLAS_INFO=mkl, BUILD_TYPE=Release, COMMIT_SHA=2236df1770800ffea5697b11b0bb0d910b2e59e1, CUDA_VERSION=11.8, CUDNN_VERSION=9.1.0, CXX_COMPILER=/opt/rh/devtoolset-9/root/usr/bin/c++, CXX_FLAGS= -D_GLIBCXX_USE_CXX11_ABI=0 -fabi-version=11 -fvisibility-inlines-hidden -DUSE_PTHREADPOOL -DNDEBUG -DUSE_KINETO -DLIBKINETO_NOROCTRACER -DLIBKINETO_NOXPUPTI=ON -DUSE_FBGEMM -DUSE_PYTORCH_QNNPACK -DUSE_XNNPACK -DSYMBOLICATE_MOBILE_DEBUG_HANDLE -O2 -fPIC -Wall -Wextra -Werror=return-type -Werror=non-virtual-dtor -Werror=bool-operation -Wnarrowing -Wno-missing-field-initializers -Wno-type-limits -Wno-array-bounds -Wno-unknown-pragmas -Wno-unused-parameter -Wno-strict-overflow -Wno-strict-aliasing -Wno-stringop-overflow -Wsuggest-override -Wno-psabi -Wno-error=old-style-cast -Wno-missing-braces -fdiagnostics-color=always -faligned-new -Wno-unused-but-set-variable -Wno-maybe-uninitialized -fno-math-errno -fno-trapping-math -Werror=format -Wno-stringop-overflow, LAPACK_INFO=mkl, PERF_WITH_AVX=1, PERF_WITH_AVX2=1, TORCH_VERSION=2.6.0, USE_CUDA=ON, USE_CUDNN=ON, USE_CUSPARSELT=1, USE_EXCEPTION_PTR=1, USE_GFLAGS=OFF, USE_GLOG=OFF, USE_GLOO=ON, USE_MKL=ON, USE_MKLDNN=ON, USE_MPI=OFF, USE_NCCL=1, USE_NNPACK=ON, USE_OPENMP=ON, USE_ROCM=OFF, USE_ROCM_KERNEL_ASSERT=OFF, 

TorchVision: 0.21.0+cu118
LMDeploy: 0.9.2+2fbe186
transformers: 4.52.0
gradio: Not Found
fastapi: 0.116.1
pydantic: 2.11.7
triton: 3.2.0
NVIDIA Topology: 
        GPU0    GPU1    GPU2    GPU3    GPU4    GPU5    GPU6    GPU7    CPU Affinity    NUMA Affinity
GPU0     X      NV12    NV12    NV12    NV12    NV12    NV12    NV12    0-27,56-83      0
GPU1    NV12     X      NV12    NV12    NV12    NV12    NV12    NV12    0-27,56-83      0
GPU2    NV12    NV12     X      NV12    NV12    NV12    NV12    NV12    0-27,56-83      0
GPU3    NV12    NV12    NV12     X      NV12    NV12    NV12    NV12    0-27,56-83      0
GPU4    NV12    NV12    NV12    NV12     X      NV12    NV12    NV12    28-55,84-111    1
GPU5    NV12    NV12    NV12    NV12    NV12     X      NV12    NV12    28-55,84-111    1
GPU6    NV12    NV12    NV12    NV12    NV12    NV12     X      NV12    28-55,84-111    1
GPU7    NV12    NV12    NV12    NV12    NV12    NV12    NV12     X      28-55,84-111    1

Legend:

  X    = Self
  SYS  = Connection traversing PCIe as well as the SMP interconnect between NUMA nodes (e.g., QPI/UPI)
  NODE = Connection traversing PCIe as well as the interconnect between PCIe Host Bridges within a NUMA node
  PHB  = Connection traversing PCIe as well as a PCIe Host Bridge (typically the CPU)
  PXB  = Connection traversing multiple PCIe bridges (without traversing the PCIe Host Bridge)
  PIX  = Connection traversing at most a single PCIe bridge
  NV#  = Connection traversing a bonded set of # NVLinks

Error traceback

1. lmdeploy chat /nvme/qa_test_models/mistralai/Mistral-7B-Instruct-v0.3
The error is:
chat_template_config:
ChatTemplateConfig(model_name='mistral', system=None, meta_instruction=None, eosys=None, user=None, eoh=None, assistant=None, eoa=None, tool=None, eotool=None, separator=None, capability='chat', stop_words=None)
engine_cfg:
TurbomindEngineConfig(dtype='auto', model_format=None, tp=1, dp=1, device_num=None, attn_tp_size=None, attn_dp_size=None, mlp_tp_size=None, mlp_dp_size=None, outer_dp_size=None, session_len=32768, max_batch_size=1, cache_max_entry_count=0.8, cache_chunk_size=-1, cache_block_seq_len=64, enable_prefix_caching=False, quant_policy=0, rope_scaling_factor=0.0, use_logn_attn=False, download_dir=None, revision=None, max_prefill_token_num=8192, num_tokens_per_iter=0, max_prefill_iters=1, devices=None, empty_init=False, communicator='nccl', hf_overrides=None)
Traceback (most recent call last):
  File "/home/zhulin1/miniconda3/envs/v92/bin/lmdeploy", line 8, in <module>
    sys.exit(run())
  File "/home/zhulin1/miniconda3/envs/v92/lib/python3.10/site-packages/lmdeploy/cli/entrypoint.py", line 39, in run
    args.run(args)
  File "/home/zhulin1/miniconda3/envs/v92/lib/python3.10/site-packages/lmdeploy/cli/cli.py", line 253, in chat
    run_chat(**kwargs)
  File "/home/zhulin1/miniconda3/envs/v92/lib/python3.10/site-packages/lmdeploy/turbomind/chat.py", line 139, in main
    tm_model = tm.TurboMind.from_pretrained(model_path, tokenizer=tokenizer, engine_config=engine_cfg)
  File "/home/zhulin1/miniconda3/envs/v92/lib/python3.10/site-packages/lmdeploy/turbomind/turbomind.py", line 386, in from_pretrained
    return cls(model_path=pretrained_model_name_or_path,
  File "/home/zhulin1/miniconda3/envs/v92/lib/python3.10/site-packages/lmdeploy/turbomind/turbomind.py", line 154, in __init__
    self.model_comm = self._from_hf(model_source=model_source,
  File "/home/zhulin1/miniconda3/envs/v92/lib/python3.10/site-packages/lmdeploy/turbomind/turbomind.py", line 276, in _from_hf
    tm_model = get_tm_model(model_path, self.model_name, self.chat_template_name, engine_config)
  File "/home/zhulin1/miniconda3/envs/v92/lib/python3.10/site-packages/lmdeploy/turbomind/deploy/converter.py", line 239, in get_tm_model
    output_model = OUTPUT_MODELS.get(output_model_name)(input_model=input_model,
  File "/home/zhulin1/miniconda3/envs/v92/lib/python3.10/site-packages/lmdeploy/turbomind/deploy/target_model/base.py", line 61, in __init__
    self.input_model_info = self.input_model.model_info()
  File "/home/zhulin1/miniconda3/envs/v92/lib/python3.10/site-packages/lmdeploy/turbomind/deploy/source_model/llama.py", line 134, in model_info
    rope_param = RopeParam(type='default', base=rope_theta, dim=head_dim)
  File "/home/zhulin1/miniconda3/envs/v92/lib/python3.10/site-packages/pydantic/_internal/_dataclasses.py", line 123, in __init__
    s.__pydantic_validator__.validate_python(ArgsKwargs(args, kwargs), self_instance=s)
pydantic_core._pydantic_core.ValidationError: 1 validation error for RopeParam
dim
  Input should be a valid integer [type=int_type, input_value=None, input_type=NoneType]
    For further information visit https://errors.pydantic.dev/2.11/v/int_type
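
The TurboMind path fails because model_info() in lmdeploy/turbomind/deploy/source_model/llama.py passes the config's head_dim straight into RopeParam. A minimal sketch of the kind of fallback that would avoid the None (illustrative names only, not the actual lmdeploy code; it assumes a missing head_dim equals hidden_size // num_attention_heads):

```python
# Hypothetical helper showing the fallback; dict keys mirror the HF config fields.
def resolve_head_dim(hf_config: dict) -> int:
    head_dim = hf_config.get('head_dim')
    if head_dim is None:
        # Assumption: an absent head_dim equals hidden_size // num_attention_heads.
        head_dim = hf_config['hidden_size'] // hf_config['num_attention_heads']
    return head_dim

# Mistral-7B-Instruct-v0.3 values: hidden_size=4096, num_attention_heads=32 -> 128
print(resolve_head_dim({'hidden_size': 4096, 'num_attention_heads': 32}))
```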

2. lmdeploy chat /nvme/qa_test_models/mistralai/Mistral-7B-Instruct-v0.3 --backend pytorch
The error is:
Traceback (most recent call last):
  File "/home/zhulin1/miniconda3/envs/v92/bin/lmdeploy", line 8, in <module>
    sys.exit(run())
  File "/home/zhulin1/miniconda3/envs/v92/lib/python3.10/site-packages/lmdeploy/cli/entrypoint.py", line 39, in run
    args.run(args)
  File "/home/zhulin1/miniconda3/envs/v92/lib/python3.10/site-packages/lmdeploy/cli/cli.py", line 244, in chat
    run_chat(args.model_path, engine_config, chat_template_config=chat_template_config)
  File "/home/zhulin1/miniconda3/envs/v92/lib/python3.10/site-packages/lmdeploy/pytorch/chat.py", line 93, in run_chat
    with pipeline(
  File "/home/zhulin1/miniconda3/envs/v92/lib/python3.10/site-packages/lmdeploy/api.py", line 83, in pipeline
    return pipeline_class(model_path,
  File "/home/zhulin1/miniconda3/envs/v92/lib/python3.10/site-packages/lmdeploy/serve/async_engine.py", line 286, in __init__
    self._build_pytorch(model_path=model_path, backend_config=backend_config, **kwargs)
  File "/home/zhulin1/miniconda3/envs/v92/lib/python3.10/site-packages/lmdeploy/serve/async_engine.py", line 351, in _build_pytorch
    self.engine = Engine.from_pretrained(model_path, tokenizer=self.tokenizer, engine_config=backend_config)
  File "/home/zhulin1/miniconda3/envs/v92/lib/python3.10/site-packages/lmdeploy/pytorch/engine/engine.py", line 430, in from_pretrained
    return cls(model_path=pretrained_model_name_or_path,
  File "/home/zhulin1/miniconda3/envs/v92/lib/python3.10/site-packages/lmdeploy/pytorch/engine/engine.py", line 366, in __init__
    self.executor.init()
  File "/home/zhulin1/miniconda3/envs/v92/lib/python3.10/site-packages/lmdeploy/pytorch/engine/executor/base.py", line 178, in init
    self.build_model()
  File "/home/zhulin1/miniconda3/envs/v92/lib/python3.10/site-packages/lmdeploy/pytorch/engine/executor/uni_executor.py", line 56, in build_model
    self.model_agent.build_model()
  File "/home/zhulin1/miniconda3/envs/v92/lib/python3.10/site-packages/lmdeploy/pytorch/engine/model_agent.py", line 836, in build_model
    self._build_model()
  File "/home/zhulin1/miniconda3/envs/v92/lib/python3.10/site-packages/lmdeploy/pytorch/engine/model_agent.py", line 821, in _build_model
    patched_model = build_patched_model(self.model_config,
  File "/home/zhulin1/miniconda3/envs/v92/lib/python3.10/site-packages/torch/utils/_contextlib.py", line 116, in decorate_context
    return func(*args, **kwargs)
  File "/home/zhulin1/miniconda3/envs/v92/lib/python3.10/site-packages/lmdeploy/pytorch/models/patch.py", line 231, in build_patched_model
    return build_model_from_hf_config(model_config, dtype=dtype, device=device, build_model_ctx=build_model_ctx)
  File "/home/zhulin1/miniconda3/envs/v92/lib/python3.10/site-packages/lmdeploy/pytorch/models/patch.py", line 200, in build_model_from_hf_config
    model = model_cls(model_config, ctx_mgr, dtype=dtype, device=device)
  File "/home/zhulin1/miniconda3/envs/v92/lib/python3.10/site-packages/lmdeploy/pytorch/models/mistral.py", line 306, in __init__
    self.model = MistralModel(config, dtype=dtype, device=device)
  File "/home/zhulin1/miniconda3/envs/v92/lib/python3.10/site-packages/lmdeploy/pytorch/models/mistral.py", line 219, in __init__
    self.layers = nn.ModuleList([
  File "/home/zhulin1/miniconda3/envs/v92/lib/python3.10/site-packages/lmdeploy/pytorch/models/mistral.py", line 220, in <listcomp>
    MistralDecoderLayer(config, layer_idx, dtype=dtype, device=device)
  File "/home/zhulin1/miniconda3/envs/v92/lib/python3.10/site-packages/lmdeploy/pytorch/models/mistral.py", line 154, in __init__
    self.self_attn = MistralAttention(config, dtype=dtype, device=device)
  File "/home/zhulin1/miniconda3/envs/v92/lib/python3.10/site-packages/lmdeploy/pytorch/models/mistral.py", line 30, in __init__
    self.qkv_proj = build_qkv_proj(
  File "/home/zhulin1/miniconda3/envs/v92/lib/python3.10/site-packages/lmdeploy/pytorch/nn/linear/__init__.py", line 301, in build_qkv_proj
    return QKVBaseLinear(in_features=in_features,
  File "/home/zhulin1/miniconda3/envs/v92/lib/python3.10/site-packages/lmdeploy/pytorch/nn/linear/default.py", line 202, in __init__
    QKVMixin.__init__(self,
  File "/home/zhulin1/miniconda3/envs/v92/lib/python3.10/site-packages/lmdeploy/pytorch/nn/linear/utils.py", line 50, in __init__
    qkv_split_section = self._get_qkv_out_features(num_q_heads, num_kv_heads, head_size, head_size_v,
  File "/home/zhulin1/miniconda3/envs/v92/lib/python3.10/site-packages/lmdeploy/pytorch/nn/linear/utils.py", line 72, in _get_qkv_out_features
    all_out_features = (num_q_heads * head_size, num_kv_heads_real * head_size, num_kv_heads_real * head_size_v)
TypeError: unsupported operand type(s) for *: 'int' and 'NoneType'
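
Until the loaders handle a missing head_dim, two workarounds look plausible (neither verified here): pin transformers below 4.52.0, or write an explicit head_dim into the local config.json so both backends read a concrete value. A sketch of the latter, assuming the missing value equals hidden_size // num_attention_heads (128 for Mistral-7B):

```python
import json
from pathlib import Path

# Hypothetical workaround: persist an explicit head_dim into the local config.json
# so neither backend sees None. The derivation below is an assumption.
cfg_path = Path('/nvme/qa_test_models/mistralai/Mistral-7B-Instruct-v0.3/config.json')
cfg = json.loads(cfg_path.read_text())
if cfg.get('head_dim') is None:
    cfg['head_dim'] = cfg['hidden_size'] // cfg['num_attention_heads']  # 4096 // 32 = 128
    cfg_path.write_text(json.dumps(cfg, indent=2))
    print('patched head_dim =', cfg['head_dim'])
```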
