---
license: mit
datasets:
  - Vezora/Tested-143k-Python-Alpaca
language:
  - en
widget:
  - messages:
      - role: system
        content: >-
          You are a career counselor. The user will provide you with an
          individual looking for guidance in their professional life, and your
          task is to assist them in determining what careers they are most
          suited for based on their skills, interests, and experience. You
          should also conduct research into the various options available,
          explain the job market trends in different industries, and advise on
          which qualifications would be beneficial for pursuing particular
          fields.
      - role: user
        content: Heya!
      - role: assistant
        content: Hi! How may I help you?
      - role: user
        content: >-
          I am interested in developing a career in software engineering. What
          would you recommend me to do?
  - messages:
      - role: system
        content: You are a knowledgeable assistant. Help the user as much as you can.
      - role: user
        content: How to become healthier?
  - messages:
      - role: system
        content: You are a helpful assistant who provides concise responses.
      - role: user
        content: Hi!
      - role: assistant
        content: Hello there! How may I help you?
      - role: user
        content: >-
          I need to build a simple website. Where should I start learning about
          web development?
  - messages:
      - role: system
        content: >-
          You are a very creative assistant. User will give you a task, which
          you should complete with all your knowledge.
      - role: user
        content: >-
          Write the background story of an RPG game about wizards and dragons in
          a sci-fi world.
tags:
  - text-generation-inference
inference:
  parameters:
    max_new_tokens: 250
    do_sample: false
pipeline_tag: text2text-generation
---

# Gemma-2B Fine-Tuned Python Model

## Overview

Gemma-2B Fine-Tuned Python Model is a causal language model based on the Gemma-2B architecture, fine-tuned on the Vezora/Tested-143k-Python-Alpaca dataset for Python programming tasks. It is designed to understand Python code and assist developers by suggesting code, completing snippets, and offering corrections that improve code quality and efficiency.

## Model Details

- Model Name: Gemma-2B Fine-Tuned Python Model
- Model Type: Causal language model
- Base Model: Gemma-2B
- Fine-Tuning Dataset: Vezora/Tested-143k-Python-Alpaca
- Language: Python
- Task: Python Code Understanding and Assistance
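
To double-check these details locally, you can inspect the hosted config; this is a minimal sketch assuming network access to the Hugging Face Hub, and the comment on the expected value is an assumption rather than a verified output:

```python
# Minimal sketch: confirm the base architecture from the hosted config.
# Assumes network access to the Hugging Face Hub.
from transformers import AutoConfig

config = AutoConfig.from_pretrained("suriya7/Gemma-2B-Finetuned-Python-Model")
print(config.model_type)  # expected to report the Gemma architecture family
```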

## Example Use Cases

- Code completion: Automatically completing code snippets based on partial inputs.
- Syntax correction: Identifying and suggesting corrections for syntax errors in Python code.
- Code quality improvement: Providing suggestions to enhance code readability, efficiency, and maintainability.
- Debugging assistance: Offering insights and suggestions to debug Python code by identifying potential errors or inefficiencies.
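
As a rough illustration, each use case maps to a plain-text instruction passed as the `query` in the Inference section below. The strings here are hypothetical examples, not prompts the model was evaluated on:

```python
# Hypothetical example queries for the use cases above.
# Each string would be passed as `query` in the Inference section below.
completion_query = "Complete this function:\ndef fibonacci(n):"
syntax_query = "Fix the syntax error in this code:\nprint('hello world'"
quality_query = "Refactor this loop for readability:\nfor i in range(len(xs)): print(xs[i])"
debug_query = "Why does this function raise ZeroDivisionError on an empty list?\ndef mean(xs): return sum(xs) / len(xs)"
```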

## How to Use

1. Install the required packages:

   ```bash
   pip install -q -U transformers==4.38.0
   pip install torch
   ```
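
A quick way to confirm the environment matches the pin above (the expected version in the comment is an assumption based on that pin):

```python
# Sanity check: confirm the pinned transformers version and the torch install
import torch
import transformers

print(transformers.__version__)   # expected: 4.38.0, per the pin above
print(torch.cuda.is_available())  # True if a CUDA GPU is visible
```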
    

## Inference

1. Load the model and run a query (for example, in a notebook):

```python
# Load model directly
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("suriya7/Gemma-2B-Finetuned-Python-Model")
model = AutoModelForCausalLM.from_pretrained("suriya7/Gemma-2B-Finetuned-Python-Model")

query = input('Enter a query: ')

# Wrap the query in the prompt format used during fine-tuning (kept verbatim)
prompt = f"""
<start_of_turn>user based on given instruction create a solution

here are the instruction {query}
<end_of_turn>
<start_of_turn>model
"""
encodeds = tokenizer(prompt, return_tensors="pt", add_special_tokens=True).input_ids

# Move the model and inputs to GPU if one is available
device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
model.to(device)
inputs = encodeds.to(device)

# Increase max_new_tokens if needed
generated_ids = model.generate(inputs, max_new_tokens=1000, do_sample=False, pad_token_id=tokenizer.eos_token_id)

# Decode only the newly generated tokens so the prompt is not echoed back
model_answer = tokenizer.decode(generated_ids[0][inputs.shape[-1]:], skip_special_tokens=True).strip()
print(model_answer)
```
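
Greedy decoding (`do_sample=False`) gives deterministic output. For more varied completions you can switch to sampling, reusing `model`, `tokenizer`, and `inputs` from the snippet above; the arguments below are standard `generate` parameters, but the specific values are illustrative assumptions rather than settings tuned for this model:

```python
# Sampling variant (illustrative values, not tuned for this model)
generated_ids = model.generate(
    inputs,
    max_new_tokens=500,
    do_sample=True,    # sample instead of greedy decoding
    temperature=0.7,   # lower values stay closer to greedy output
    top_p=0.9,         # nucleus-sampling cutoff
    pad_token_id=tokenizer.eos_token_id,
)
print(tokenizer.decode(generated_ids[0][inputs.shape[-1]:], skip_special_tokens=True).strip())
```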