Mubin1917 committed on
Commit 1681022 · verified · 1 Parent(s): b783044

Update README.md

Files changed (1)
  1. README.md +52 -6
README.md CHANGED
@@ -10,13 +10,59 @@ tags:
  - llama
  - trl
  ---
-
- # Uploaded model
-
- - **Developed by:** Mubin1917
- - **License:** apache-2.0
- - **Finetuned from model :** unsloth/Phi-3.5-mini-instruct
-
- This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
-
- [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
+ # This page is a work in progress!
+
+ ## Overview
+
+ **Fhi-3.5-mini-instruct** is a fine-tuned version of the [unsloth/Phi-3.5-mini-instruct](https://huggingface.co/unsloth/Phi-3.5-mini-instruct) model, optimized for function calling. It returns fast, accurate, and structured responses based on the input query and the available APIs, adding enhanced function-calling support on top of the base Phi-3.5-mini-instruct capabilities.
+
+ ### Usage
+
+ Here's a basic example of how to use function calling with the Fhi-3.5-mini-instruct model:
+
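+ The example assumes `model` and `tokenizer` are already loaded. A minimal loading sketch is shown below; the checkpoint id is a placeholder for this repository and may differ, and the dtype/device settings should be adjusted to your hardware:
+
+ ```python
+ # Assumed setup: load the fine-tuned checkpoint with transformers (repo id is a placeholder)
+ import torch
+ from transformers import AutoModelForCausalLM, AutoTokenizer
+
+ model_id = "Mubin1917/Fhi-3.5-mini-instruct"  # placeholder; use this repository's actual id
+ tokenizer = AutoTokenizer.from_pretrained(model_id)
+ model = AutoModelForCausalLM.from_pretrained(
+     model_id,
+     torch_dtype=torch.bfloat16,  # or torch.float16 on GPUs without bf16 support
+     device_map="auto",
+ )
+ ```
+
+ With the model and tokenizer in place, define a tool and build the prompt:
+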
+ ```python
+ def get_current_temperature(location: str) -> float:
+     """
+     Get the current temperature at a location.
+
+     Args:
+         location: The location to get the temperature for, in the format "City, Country"
+     Returns:
+         The current temperature at the specified location, as a float.
+     """
+     return 22.0  # Dummy value; a real tool would query a weather API here
+
+ # Create the messages list
+ messages = [
+     {"role": "system", "content": "You are a helpful weather assistant."},
+     {"role": "user", "content": "What's the current weather in London and New York? Please use Celsius."}
+ ]
+
+ # Apply the chat template, passing the tool so its schema is included in the prompt
+ prompt = tokenizer.apply_chat_template(
+     messages,
+     tools=[get_current_temperature],  # Pass the custom tool
+     add_generation_prompt=True,
+     tokenize=False
+ )
+
+ # Tokenize, generate, and decode only the newly generated tokens
+ inputs = tokenizer([prompt], return_tensors="pt").to("cuda")
+ outputs = model.generate(
+     **inputs,
+     max_new_tokens=512,
+     do_sample=False,
+     num_return_sequences=1,
+     use_cache=True,
+     temperature=0.001,
+     top_p=1,
+     eos_token_id=[32007],
+ )
+ result = tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True)
+ print(result)
+ ```
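+
+ Here `eos_token_id=[32007]` is Phi-3.5's `<|end|>` special token, so generation stops cleanly at the end of the assistant turn.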
+
+ The result will look like this:
+
+ ```python
+ [
+     {'name': 'get_current_temperature', 'arguments': {'location': 'London, UK'}},
+     {'name': 'get_current_temperature', 'arguments': {'location': 'New York, USA'}}
+ ]
+ ```
+
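+ One way to act on this output is to parse the returned list and dispatch each call to the matching Python function. The sketch below assumes the decoded text is exactly the Python-style list shown above (single-quoted, so `ast.literal_eval` is used rather than `json.loads`); feeding tool results back to the model through a `tool`-role message is not shown here and depends on the chat template.
+
+ ```python
+ import ast
+
+ # Parse the model's tool-call list (Python-literal syntax, per the example output above)
+ tool_calls = ast.literal_eval(result)
+
+ # Dispatch each call to the matching local function
+ available_tools = {"get_current_temperature": get_current_temperature}
+ for call in tool_calls:
+     fn = available_tools[call["name"]]
+     value = fn(**call["arguments"])
+     print(call["name"], call["arguments"], "->", value)
+ ```
+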
+ ## Testing and Benchmarking
+ This model is still undergoing testing and evaluation; use it at your own risk until further validation is complete. Performance on benchmarks such as MMLU and MMLU-Pro will be added soon.
+
+ ## Credits
+ Will be updated soon.