
This model was released on August 24, 2023 and added to Hugging Face Transformers on August 25, 2023.

PyTorch

CodeLlama

Code Llama is a family of large language models specialized for coding tasks, built on top of Llama 2. It is available in several variants (general-purpose code, Python-specialized, and instruction-following), each in 7B, 13B, 34B, and 70B parameter sizes. Code Llama models can generate code, explain it, and fill in missing pieces of code, a capability called infilling. Although trained on sequences of 16K tokens, they handle long contexts and generate stably for up to 100K tokens.

You can find all the original Code Llama checkpoints in the Code Llama collection.

Click on the Code Llama models in the right sidebar for more examples of applying Code Llama to different coding tasks.

The examples below show how to generate code with Pipeline, AutoModel, and from the command line.

Pipeline
AutoModel
transformers CLI
import torch
from transformers import pipeline

pipe = pipeline(
    "text-generation",
    model="meta-llama/CodeLlama-7b-hf",
    torch_dtype=torch.float16,
    device_map=0
)

# Basic code generation
result = pipe("# Function to calculate the factorial of a number\ndef factorial(n):", max_new_tokens=256)
print(result[0]['generated_text'])

# Infilling
infill_result = pipe("def remove_non_ascii(s: str) -> str:\n    \"\"\" <FILL_ME>\n    return result", max_new_tokens=200)
print(infill_result[0]['generated_text'])
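
For the AutoModel tab, a minimal sketch that mirrors the Pipeline example above using AutoTokenizer and AutoModelForCausalLM directly (same checkpoint and prompt):

import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("meta-llama/CodeLlama-7b-hf")
model = AutoModelForCausalLM.from_pretrained(
    "meta-llama/CodeLlama-7b-hf",
    torch_dtype=torch.float16,
    device_map="auto"
)

# Same factorial prompt as in the Pipeline example
prompt = "# Function to calculate the factorial of a number\ndef factorial(n):"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)

output = model.generate(**inputs, max_new_tokens=256)
print(tokenizer.decode(output[0], skip_special_tokens=True))

For the transformers CLI tab, a sketch assuming the transformers run entry point shipped with recent releases (the flags shown are illustrative):

echo -e "# Function to calculate the factorial of a number\ndef factorial(n):" | transformers run --task text-generation --model meta-llama/CodeLlama-7b-hf --device 0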

Quantization reduces the memory burden of large models by representing the weights at lower precision. See the Quantization overview for more available quantization backends.

The example below uses bitsandbytes to quantize only the weights to 4 bits.

# Install bitsandbytes first: pip install bitsandbytes
import torch
from transformers import AutoModelForCausalLM, CodeLlamaTokenizer, BitsAndBytesConfig

bnb_config = BitsAndBytesConfig(load_in_4bit=True, bnb_4bit_compute_dtype=torch.bfloat16, bnb_4bit_quant_type="nf4", bnb_4bit_use_double_quant=True)
tokenizer = CodeLlamaTokenizer.from_pretrained("meta-llama/CodeLlama-34b-hf")
model = AutoModelForCausalLM.from_pretrained(
    "meta-llama/CodeLlama-34b-hf",
    torch_dtype=torch.bfloat16,
    device_map="auto",
    quantization_config=bnb_config
)

prompt = "# Write a Python function to check if a string is a palindrome\ndef is_palindrome(s):"
input_ids = tokenizer(prompt, return_tensors="pt").to("cuda")

output = model.generate(**input_ids, max_new_tokens=200, cache_implementation="static")
print(tokenizer.decode(output[0], skip_special_tokens=True))

AttentionMaskVisualizer helps you better understand which tokens a model can and cannot attend to.

from transformers.utils.attention_visualizer import AttentionMaskVisualizer

visualizer = AttentionMaskVisualizer("meta-llama/CodeLlama-7b-hf")
visualizer("""def func(a, b):
  return a + b""")

Notes

  • Infilling is only available in the 7B and 13B base models, not in the Python, Instruct, 34B, or 70B models.

  • Use the <FILL_ME> token where you want code to be filled in. The tokenizer splits this token to create a formatted input string that follows the original training pattern, which is more robust than preparing the pattern yourself.

    from transformers import LlamaForCausalLM, CodeLlamaTokenizer
    
    tokenizer = CodeLlamaTokenizer.from_pretrained("meta-llama/CodeLlama-7b-hf")
    model = LlamaForCausalLM.from_pretrained("meta-llama/CodeLlama-7b-hf")
    PROMPT = '''def remove_non_ascii(s: str) -> str:
        """ <FILL_ME>
        return result
    '''
    input_ids = tokenizer(PROMPT, return_tensors="pt")["input_ids"]
    generated_ids = model.generate(input_ids, max_new_tokens=128)
    
    filling = tokenizer.batch_decode(generated_ids[:, input_ids.shape[1]:], skip_special_tokens=True)[0]
    print(PROMPT.replace("<FILL_ME>", filling))
  • Use bfloat16 for further training or fine-tuning and float16 for inference.

  • The BOS character is not used for infilling when encoding the prefix or suffix; it only appears at the beginning of each prompt.

  • The tokenizer is a byte-pair encoding model based on SentencePiece. During decoding, if the first token is the start of a word (for example, "Banana"), the tokenizer does not prepend a prefix space to the string, as the sketch below shows.
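
    A quick check of this behavior (a minimal sketch; the exact token ids depend on the checkpoint):

    from transformers import CodeLlamaTokenizer

    tokenizer = CodeLlamaTokenizer.from_pretrained("meta-llama/CodeLlama-7b-hf")
    ids = tokenizer.encode("Banana")  # add_bos_token=True prepends <s>
    print(tokenizer.decode(ids, skip_special_tokens=True))  # prints "Banana" with no leading space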

CodeLlamaTokenizer

class transformers.CodeLlamaTokenizer


( vocab_file unk_token = '<unk>' bos_token = '<s>' eos_token = '</s>' prefix_token = '▁<PRE>' middle_token = '▁<MID>' suffix_token = '▁<SUF>' eot_token = '▁<EOT>' fill_token = '<FILL_ME>' suffix_first = False sp_model_kwargs: typing.Optional[dict[str, typing.Any]] = None add_bos_token = True add_eos_token = False clean_up_tokenization_spaces = False additional_special_tokens = None use_default_system_prompt = False **kwargs )

Parameters

  • vocab_file (str) — Path to the vocabulary file.
  • unk_token (str, optional, defaults to "<unk>") — The unknown token. A token that is not in the vocabulary cannot be converted to an ID and is set to be this token instead.
  • bos_token (str, optional, defaults to "<s>") — The beginning of sequence token that was used during pretraining. Can be used as a sequence classifier token.
  • eos_token (str, optional, defaults to "</s>") — The end of sequence token.

    When building a sequence using special tokens, this is not the token that is used for the end of sequence. The token used is the sep_token.

  • prefix_token (str, optional, defaults to "▁<PRE>") — Prefix token used for infilling.
  • middle_token (str, optional, defaults to "▁<MID>") — Middle token used for infilling.
  • suffix_token (str, optional, defaults to "▁<SUF>") — Suffix token used for infilling.
  • eot_token (str, optional, defaults to "▁<EOT>") — End of text token used for infilling.
  • fill_token (str, optional, defaults to "<FILL_ME>") — The token used to split the input between the prefix and suffix.
  • suffix_first (bool, optional, defaults to False) — Whether the input prompt and suffix should be formatted with the suffix first.
  • sp_model_kwargs (dict, optional) — Will be passed to the SentencePieceProcessor.__init__() method. The Python wrapper for SentencePiece can be used, among other things, to set:

    • enable_sampling: Enable subword regularization.

    • nbest_size: Sampling parameters for unigram. Invalid for BPE-Dropout.

      • nbest_size = {0,1}: No sampling is performed.
      • nbest_size > 1: samples from the nbest_size results.
      • nbest_size < 0: assuming that nbest_size is infinite and samples from the all hypothesis (lattice) using forward-filtering-and-backward-sampling algorithm.
    • alpha: Smoothing parameter for unigram sampling, and dropout probability of merge operations for BPE-dropout.

  • add_bos_token (bool, optional, defaults to True) — Whether to add a beginning of sequence token at the start of sequences.
  • add_eos_token (bool, optional, defaults to False) — Whether to add an end of sequence token at the end of sequences.
  • clean_up_tokenization_spaces (bool, optional, defaults to False) — Whether or not to clean up the tokenization spaces.
  • additional_special_tokens (list[str], optional) — Additional special tokens used by the tokenizer.
  • use_default_system_prompt (bool, optional, defaults to False) — Whether or not the default system prompt for Llama should be used.

Construct a CodeLlama tokenizer. Based on byte-level Byte-Pair-Encoding. The default padding token is unset as there is no padding token in the original model.

The default configuration matches that of codellama/CodeLlama-7b-Instruct-hf, which supports prompt infilling.
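
For example, a prompt containing fill_token is split and rearranged into the prefix/suffix/middle infilling pattern at tokenization time. A minimal sketch (exact tokens depend on the checkpoint):

from transformers import CodeLlamaTokenizer

tokenizer = CodeLlamaTokenizer.from_pretrained("meta-llama/CodeLlama-7b-hf")
prompt = 'def f(x):\n    """ <FILL_ME>\n    return x'
tokens = tokenizer.tokenize(prompt)
# The text before <FILL_ME> follows ▁<PRE>, the text after it follows ▁<SUF>,
# and ▁<MID> marks where the model should generate the missing span.
print(tokens[0])  # expected: ▁<PRE>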

build_inputs_with_special_tokens


( token_ids_0 token_ids_1 = None )

get_special_tokens_mask


( token_ids_0: list token_ids_1: typing.Optional[list[int]] = None already_has_special_tokens: bool = False ) list[int]

Parameters

  • token_ids_0 (list[int]) — List of IDs.
  • token_ids_1 (list[int], optional) — Optional second list of IDs for sequence pairs.
  • already_has_special_tokens (bool, optional, defaults to False) — Whether or not the token list is already formatted with special tokens for the model.

Returns

list[int]

A list of integers in the range [0, 1]: 1 for a special token, 0 for a sequence token.

Retrieve sequence ids from a token list that has no special tokens added. This method is called when adding special tokens using the tokenizer prepare_for_model method.
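
For instance, a minimal sketch assuming the 7B checkpoint's defaults (add_bos_token=True):

from transformers import CodeLlamaTokenizer

tokenizer = CodeLlamaTokenizer.from_pretrained("meta-llama/CodeLlama-7b-hf")
ids = tokenizer.encode("print('hi')")  # add_bos_token=True prepends <s>
mask = tokenizer.get_special_tokens_mask(ids, already_has_special_tokens=True)
print(mask)  # 1 marks the <s> position, 0 marks the regular tokens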

create_token_type_ids_from_sequences


( token_ids_0: list token_ids_1: typing.Optional[list[int]] = None ) list[int]

Parameters

  • token_ids_0 (list[int]) — List of ids.
  • token_ids_1 (list[int], optional) — Optional second list of IDs for sequence pairs.

Returns

list[int]

List of token type IDs according to the given sequence(s).

Creates a mask from the two sequences passed to be used in a sequence-pair classification task. An ALBERT sequence pair mask has the following format:

0 0 0 0 0 0 0 0 0 0 0 1 1 1 1 1 1 1 1 1
| first sequence    | second sequence |

If token_ids_1 is None, only returns the first portion of the mask (0s).

save_vocabulary


( save_directory filename_prefix: typing.Optional[str] = None ) Tuple(str)

Parameters

  • save_directory (str) — The directory in which to save the vocabulary.

Returns

Tuple(str)

Paths to the files saved.

Save the vocabulary and special tokens file to a directory.

CodeLlamaTokenizerFast

class transformers.CodeLlamaTokenizerFast


( vocab_file = None tokenizer_file = None clean_up_tokenization_spaces = False unk_token = '<unk>' bos_token = '<s>' eos_token = '</s>' prefix_token = '▁<PRE>' middle_token = '▁<MID>' suffix_token = '▁<SUF>' eot_token = '▁<EOT>' fill_token = '<FILL_ME>' additional_special_tokens = None add_bos_token = True add_eos_token = False use_default_system_prompt = False **kwargs )

Parameters

  • vocab_file (str, optional) — SentencePiece file (generally has a .model extension) that contains the vocabulary necessary to instantiate a tokenizer.
  • tokenizer_file (str, optional) — tokenizers file (generally has a .json extension) that contains everything needed to load the tokenizer.
  • clean_up_tokenization_spaces (bool, optional, defaults to False) — Whether to clean up spaces after decoding; cleanup consists of removing potential artifacts like extra spaces.
  • unk_token (str, optional, defaults to "<unk>") — The unknown token. A token that is not in the vocabulary cannot be converted to an ID and is set to be this token instead.
  • bos_token (str, optional, defaults to "<s>") — The beginning of sequence token that was used during pretraining. Can be used as a sequence classifier token.
  • eos_token (str, optional, defaults to "</s>") — The end of sequence token.
  • prefix_token (str, optional, defaults to "▁<PRE>") — Prefix token used for infilling.
  • middle_token (str, optional, defaults to "▁<MID>") — Middle token used for infilling.
  • suffix_token (str, optional, defaults to "▁<SUF>") — Suffix token used for infilling.
  • eot_token (str, optional, defaults to "▁<EOT>") — End of text token used for infilling.
  • fill_token (str, optional, defaults to "<FILL_ME>") — The token used to split the input between the prefix and suffix.
  • additional_special_tokens (list[str], optional) — Additional special tokens used by the tokenizer.
  • add_bos_token (bool, optional, defaults to True) — Whether to add a beginning of sequence token at the start of sequences.
  • add_eos_token (bool, optional, defaults to False) — Whether to add an end of sequence token at the end of sequences.
  • use_default_system_prompt (bool, optional, defaults to False) — Whether or not the default system prompt for Llama should be used.

Construct a Llama tokenizer. Based on byte-level Byte-Pair-Encoding.

Notably, this uses ByteFallback and no normalization.

>>> from transformers import CodeLlamaTokenizerFast

>>> tokenizer = CodeLlamaTokenizerFast.from_pretrained("hf-internal-testing/llama-tokenizer")
>>> tokenizer.encode("Hello this is a test")
[1, 15043, 445, 338, 263, 1243]

If you want to change the bos_token or the eos_token, make sure to specify them when initializing the model, or call tokenizer.update_post_processor() to make sure that the post-processing is correctly done (otherwise the values of the first token and final token of an encoded sequence will not be correct). For more details, check out the [post-processors](https://huggingface.co/docs/tokenizers/api/post-processors) documentation.
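
A minimal sketch of that workflow (the reassignment is illustrative; any token already in the vocabulary works):

from transformers import CodeLlamaTokenizerFast

tokenizer = CodeLlamaTokenizerFast.from_pretrained("meta-llama/CodeLlama-7b-hf")
tokenizer.bos_token = "<s>"  # hypothetical reassignment of a special token
tokenizer.update_post_processor()  # re-sync so encoded sequences use the current ids
print(tokenizer.encode("def add(a, b):")[0])  # the first id reflects the current bos_token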

This tokenizer inherits from PreTrainedTokenizerFast which contains most of the main methods. Users should refer to this superclass for more information regarding those methods. The default configuration matches that of meta-llama/CodeLlama-7b-Instruct-hf, which supports prompt infilling.

build_inputs_with_special_tokens


( token_ids_0: list token_ids_1: typing.Optional[list[int]] = None ) list[int]

Parameters

  • token_ids_0 (list[int]) — List of IDs to which the special tokens will be added.
  • token_ids_1 (list[int], optional) — Optional second list of IDs for sequence pairs.

Returns

list[int]

list of input IDs with the appropriate special tokens.

Build model inputs from a sequence or a pair of sequences by concatenating and adding special tokens. The tokens added depend on add_bos_token and add_eos_token. With the defaults (add_bos_token=True, add_eos_token=False), a sequence has the following format, where X represents the sequence:

  • single sequence: <s> X
  • pair of sequences: <s> A <s> B

Pairs of sequences are not the expected use case, but they are handled by repeating the same pattern for the second sequence.
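
A quick illustration with placeholder ids (with the defaults, bos_token_id is prepended and no EOS is appended):

from transformers import CodeLlamaTokenizerFast

tokenizer = CodeLlamaTokenizerFast.from_pretrained("meta-llama/CodeLlama-7b-hf")
print(tokenizer.build_inputs_with_special_tokens([100, 200]))         # [bos_id, 100, 200]
print(tokenizer.build_inputs_with_special_tokens([100, 200], [300]))  # [bos_id, 100, 200, bos_id, 300]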

get_special_tokens_mask


( token_ids_0: list token_ids_1: typing.Optional[list[int]] = None already_has_special_tokens: bool = False ) A list of integers in the range [0, 1]

Parameters

  • token_ids_0 (list[int]) — List of ids of the first sequence.
  • token_ids_1 (list[int], optional) — List of ids of the second sequence.
  • already_has_special_tokens (bool, optional, defaults to False) — Whether or not the token list is already formatted with special tokens for the model.

Returns

A list of integers in the range [0, 1]

1 for a special token, 0 for a sequence token.

Retrieves sequence ids from a token list that has no special tokens added. This method is called when adding special tokens using the tokenizer prepare_for_model or encode_plus methods.

create_token_type_ids_from_sequences


( token_ids_0: list token_ids_1: typing.Optional[list[int]] = None ) list[int]

Parameters

  • token_ids_0 (list[int]) — The first tokenized sequence.
  • token_ids_1 (list[int], optional) — The second tokenized sequence.

Returns

list[int]

The token type ids.

Create the token type IDs corresponding to the sequences passed. What are token type IDs?

Should be overridden in a subclass if the model has a special way of building those.

update_post_processor


( )

Updates the underlying post processor with the current bos_token and eos_token.

save_vocabulary


( save_directory: str filename_prefix: typing.Optional[str] = None )
