SLIM-EMOTIONS

slim-emotions is part of the SLIM ("Structured Language Instruction Model") model series, consisting of small, specialized decoder-based models, fine-tuned for function-calling.

slim-emotions has been fine-tuned for emotion-analysis function calls, generating output consisting of a Python dictionary with the specified keys, e.g.:

    {"emotions": ["proud"]}

SLIM models are designed to generate structured outputs that can be used programmatically as part of a multi-step, multi-model LLM-based automation workflow.
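Because the output is a plain dictionary, downstream steps can branch on it directly. A minimal sketch of such routing (the emotion labels and routing rules below are illustrative assumptions, not part of the library):

```python
# Sketch: route a workflow step based on a SLIM-style structured output.
# The label set and step names here are hypothetical examples.
NEGATIVE = {"worried", "anxious", "angry", "sad"}

def route(llm_output: dict) -> str:
    """Decide the next workflow step from an emotions dictionary."""
    emotions = llm_output.get("emotions", [])
    if any(e in NEGATIVE for e in emotions):
        return "escalate_for_review"
    return "archive"

print(route({"emotions": ["worried"]}))  # escalate_for_review
print(route({"emotions": ["proud"]}))    # archive
```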

Each slim model has a 'quantized tool' version, e.g., 'slim-emotions-tool'.

Prompt format:

function = "classify"
params = "emotions"
prompt = "<human>: " + {text} + "\n" + f"<{function}> {params} </{function}>\n<bot>:"
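The format above can be wrapped in a small helper so the same template is reused consistently. A minimal sketch (the `build_prompt` function is an illustrative convenience, not part of the library):

```python
def build_prompt(text: str, function: str = "classify", params: str = "emotions") -> str:
    """Assemble a SLIM prompt of the form:
    <human>: {text}
    <{function}> {params} </{function}>
    <bot>:
    """
    return f"<human>: {text}\n<{function}> {params} </{function}>\n<bot>:"

print(build_prompt("The team celebrated after winning the championship."))
```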

Transformers Script

import ast
from transformers import AutoModelForCausalLM, AutoTokenizer

model = AutoModelForCausalLM.from_pretrained("llmware/slim-emotions")
tokenizer = AutoTokenizer.from_pretrained("llmware/slim-emotions")

function = "classify"
params = "emotions"

text = "The stock market declined yesterday as investors worried increasingly about the slowing economy."  

prompt = "<human>: " + text + "\n" + f"<{function}> {params} </{function}>\n<bot>:"

inputs = tokenizer(prompt, return_tensors="pt")
start_of_input = len(inputs.input_ids[0])

outputs = model.generate(
    inputs.input_ids.to('cpu'),
    eos_token_id=tokenizer.eos_token_id,
    pad_token_id=tokenizer.eos_token_id,
    do_sample=True,
    temperature=0.3,
    max_new_tokens=100
)

output_only = tokenizer.decode(outputs[0][start_of_input:], skip_special_tokens=True)

print("output only: ", output_only)

# here's the fun part - convert the string output into a python dictionary
try:
    output_dict = ast.literal_eval(output_only)
    print("success - converted to python dictionary automatically")
except (ValueError, SyntaxError):
    print("fail - could not convert to python dictionary automatically - ", output_only)
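The parse-and-check pattern above can be folded into one helper that also validates the dictionary's shape before downstream use. A minimal sketch (the `parse_emotions` function and its empty-list fallback are illustrative assumptions):

```python
import ast

def parse_emotions(llm_output: str, key: str = "emotions") -> list:
    """Parse the model's string output; return the list under `key`,
    or an empty list if the output is malformed or mis-shaped."""
    try:
        result = ast.literal_eval(llm_output)
    except (ValueError, SyntaxError):
        return []
    if isinstance(result, dict) and isinstance(result.get(key), list):
        return result[key]
    return []

print(parse_emotions('{"emotions": ["proud"]}'))  # ['proud']
print(parse_emotions("not a dict"))               # []
```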
Using as Function Call in LLMWare

from llmware.models import ModelCatalog

slim_model = ModelCatalog().load_model("llmware/slim-emotions")
response = slim_model.function_call(text, params=["emotions"], function="classify")

print("llmware - llm_response: ", response)

Model Card Contact

Darren Oberst & llmware team

Join us on Discord
