Convert to GGUF #159

@yanxon

Could you please convert this model to GGUF?

I tried to use llama.cpp convert.py with the following command:

python convert.py pythia-12b/ --outfile pythia-12b/pythia-12b-f16.gguf --outtype f16

It gives me this error:

Loading model file ../pythia/pythia-hf/pytorch_model-00001-of-00003.bin
Traceback (most recent call last):
  File "/home/hyanxo/projects/llama.cpp/convert.py", line 1483, in <module>
    main()
  File "/home/hyanxo/projects/llama.cpp/convert.py", line 1419, in main
    model_plus = load_some_model(args.model)
                 ^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/hyanxo/projects/llama.cpp/convert.py", line 1278, in load_some_model
    models_plus.append(lazy_load_file(path))
                       ^^^^^^^^^^^^^^^^^^^^
  File "/home/hyanxo/projects/llama.cpp/convert.py", line 887, in lazy_load_file
    return lazy_load_torch_file(fp, path)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/hyanxo/projects/llama.cpp/convert.py", line 843, in lazy_load_torch_file
    model = unpickler.load()
            ^^^^^^^^^^^^^^^^
  File "/home/hyanxo/projects/llama.cpp/convert.py", line 832, in find_class
    return self.CLASSES[(module, name)]
           ~~~~~~~~~~~~^^^^^^^^^^^^^^^^
KeyError: ('torch', 'ByteStorage')
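Judging from the traceback, `convert.py` loads the checkpoint with a restricted unpickler whose `find_class` looks each `(module, name)` pair up in a fixed `CLASSES` whitelist, and `('torch', 'ByteStorage')` is simply not registered. The sketch below is a hypothetical, simplified reconstruction of that pattern (not the actual llama.cpp code) that reproduces the same `KeyError`:

```python
import io
import pickle

class LazyUnpickler(pickle.Unpickler):
    # Whitelist of (module, name) pairs the loader knows how to handle.
    # In this simplified sketch, ('torch', 'ByteStorage') is absent,
    # which is what triggers the KeyError seen in the traceback.
    CLASSES = {
        ("torch", "FloatStorage"): object,
        ("torch", "HalfStorage"): object,
    }

    def find_class(self, module, name):
        # Raises KeyError for any class not in the whitelist,
        # instead of importing arbitrary code from the pickle.
        return self.CLASSES[(module, name)]

unpickler = LazyUnpickler(io.BytesIO(b""))
try:
    unpickler.find_class("torch", "ByteStorage")
except KeyError as exc:
    print("KeyError:", exc)
```

If that reading is right, the checkpoint contains a tensor backed by `torch.ByteStorage` (uint8 data) that this version of `convert.py` does not know how to map, so the fix would likely be on the converter side (registering the missing storage type) rather than in the command invocation.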
