the output logits is a str · Issue #113 · evo-design/evo · GitHub
Open
@xuanwuji

Due to network restrictions, I loaded the Evo model locally. My code is as follows:

```python
import os
os.environ['TRANSFORMERS_OFFLINE'] = "1"

import torch
from transformers import AutoConfig, AutoModelForCausalLM
from stripedhyena.tokenizer import CharLevelTokenizer

model_name = './evo-1-131k-base'
device = "cuda:0"

config = AutoConfig.from_pretrained(model_name, trust_remote_code=True, revision="1.1_fix")
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    config=config,
    trust_remote_code=True,
    revision="1.1_fix",
)

tokenizer = CharLevelTokenizer(512)

model.to(device)
sequence = 'ACGT'
input_ids = torch.tensor(
    tokenizer.tokenize(sequence),
    dtype=torch.int,
).to(device).unsqueeze(0)

with torch.no_grad():
    logits, _ = model(input_ids)  # (batch, length, vocab)

print('Logits: ', logits)
print('Shape (batch, length, vocab): ', logits.shape)
```

But the `logits` I get when I run this is of type `str`. Does anyone know why this is happening?

My output log is:

```
Initializing inference params... Logits:  logits
```
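A likely explanation (an assumption on my part, since it depends on how the `1.1_fix` remote code wraps the model): when Evo is loaded through `AutoModelForCausalLM`, the forward pass may return a Hugging Face `ModelOutput`, which is a dict-like object. Tuple-unpacking a dict iterates its *keys*, so `logits, _ = model(input_ids)` binds the string `'logits'` rather than the tensor. A minimal sketch of the gotcha, using a plain `OrderedDict` with made-up values as a stand-in for the model's return value:

```python
from collections import OrderedDict

# Stand-in for a CausalLMOutput-style return value (values are hypothetical).
output = OrderedDict(logits=[[0.1, 0.2, 0.3]], past_key_values=None)

# Tuple-unpacking a dict-like object iterates its KEYS, not its values.
logits, _ = output
print(logits)            # -> logits  (the key, as a string)

# Access the field by key (or .logits attribute on a real ModelOutput) instead.
print(output["logits"])  # -> [[0.1, 0.2, 0.3]]  (the actual value)
```

If that is what is happening here, `outputs = model(input_ids)` followed by `outputs.logits` (or `outputs["logits"]`) should yield the actual tensor instead of the field name.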
