Description
Due to network restrictions, I loaded the Evo model from a local directory. My code is as follows:
```python
import os
os.environ['TRANSFORMERS_OFFLINE']="1"
import torch
from transformers import AutoConfig, AutoModelForCausalLM
from stripedhyena.tokenizer import CharLevelTokenizer
model_name = './evo-1-131k-base'
device = "cuda:0"
config = AutoConfig.from_pretrained(model_name, trust_remote_code=True, revision="1.1_fix")
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    config=config,
    trust_remote_code=True,
    revision="1.1_fix"
)
tokenizer = CharLevelTokenizer(512)
model.to(device)
sequence = 'ACGT'
input_ids = torch.tensor(
    tokenizer.tokenize(sequence),
    dtype=torch.int,
).to(device).unsqueeze(0)
with torch.no_grad():
    logits, _ = model(input_ids)  # (batch, length, vocab)
print('Logits: ', logits)
print('Shape (batch, length, vocab): ', logits.shape)
```
But when I run it, the `logits` I get back is a string rather than a tensor. Does anyone know why this is happening?
My output log is:

```
Initializing inference params... Logits: logits
```
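
If it helps with the diagnosis, here is a minimal, self-contained sketch of what I suspect is happening. My assumption (which I have not verified against the Evo remote code) is that the `AutoModelForCausalLM` wrapper returns a `transformers` `ModelOutput` such as `CausalLMOutputWithPast`, rather than the plain `(logits, state)` tuple that the standalone StripedHyena model returns. Since `ModelOutput` subclasses `OrderedDict`, tuple-unpacking it iterates over its string keys, not its values:

```python
# Minimal sketch of my suspicion -- NOT the Evo remote code itself.
# Assumption: the wrapper returns a ModelOutput (e.g. CausalLMOutputWithPast)
# instead of a plain (logits, inference_params) tuple.
import torch
from transformers.modeling_outputs import CausalLMOutputWithPast

out = CausalLMOutputWithPast(
    logits=torch.zeros(1, 4, 512),  # dummy tensor standing in for real logits
    past_key_values=(),             # dummy state so the output has two entries
)

# ModelOutput is a dict subclass, so iteration yields its string keys:
logits, _ = out
print(logits)  # prints the string 'logits', matching my log above

# Field access by attribute (or integer index) returns the actual tensor:
print(out.logits.shape)  # torch.Size([1, 4, 512])
print(out[0].shape)      # torch.Size([1, 4, 512])
```

If that is the explanation, replacing `logits, _ = model(input_ids)` with `logits = model(input_ids).logits` should give the tensor, but I would appreciate confirmation that this is the intended way to call the model when it is loaded through `AutoModelForCausalLM`.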