
Poor documentation #6

Open
raslenmtg opened this issue May 27, 2022 · 1 comment

Comments

@raslenmtg

How do I test it after installation?

@not-lain

Leaving this reply for future lurkers:
I finished registering the model on Hugging Face, so you can load the TunBERT model with the following code:

from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("not-lain/TunBERT")
model = AutoModelForSequenceClassification.from_pretrained("not-lain/TunBERT", trust_remote_code=True)

Note that you need to install the transformers library beforehand.
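
If you don't have it yet, a typical install (assuming a PyTorch backend, since the example below uses return_tensors='pt') looks like:

pip install transformers torch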

How to use the model:

text = "[insert text here]"
inputs = tokenizer(text,return_tensors='pt') # make sure you are using the `return_tensors='pt'` parameter
output = model(**inputs)
print(output.logits)
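
If you want class probabilities rather than raw logits, a minimal follow-up sketch (assuming a standard single-label classification head) is:

import torch

probs = torch.softmax(output.logits, dim=-1)   # convert logits to probabilities
predicted_class = probs.argmax(dim=-1).item()  # index of the highest-scoring label
print(probs, predicted_class)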

Sadly, the original implementation uses a head with 2 neurons and the bias on, so the model is showing 4 categories instead of 2 (or maybe I misunderstood how they labeled the data). In any case, I would love to get an update on this.
(screenshot of the model output attached for reference)
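
One generic way to check how many labels the head actually exposes (a transformers config sketch, not specific to TunBERT's custom remote code) is:

print(model.config.num_labels)  # number of output neurons in the classification head
print(model.config.id2label)    # label names, if they were set when the model was registered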
