Dataset columns:
repo: stringclasses (1 value)
number: int64 (1 to 25.3k)
state: stringclasses (2 values)
title: stringlengths (1 to 487)
body: stringlengths (0 to 234k)
created_at: stringlengths (19 to 19)
closed_at: stringlengths (19 to 19)
comments: stringlengths (0 to 293k)
transformers
23,864
closed
Mychatter
### Model description I am still learning so the content is unclear even to me ### Open source status - [ ] The model implementation is available - [ ] The model weights are available ### Provide useful links for the implementation _No response_
05-30-2023 15:39:12
05-30-2023 15:39:12
transformers
23,863
closed
#23388 Issue: Update RoBERTa configuration
# What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Fixes # (issue) #23388 ## Before submitting - [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - text models: @ArthurZucker and @younesbelkada - vision models: @amyeroberts - speech models: @sanchit-gandhi - graph models: @clefourrier Library: - flax: @sanchit-gandhi - generate: @gante - pipelines: @Narsil - tensorflow: @gante and @Rocketknight1 - tokenizers: @ArthurZucker - trainer: @sgugger Integrations: - deepspeed: HF Trainer: @stas00, Accelerate: @pacman100 - ray/raytune: @richardliaw, @amogkam Documentation: @sgugger, @stevhliu and @MKhalusova HF projects: - accelerate: [different repo](https://github.com/huggingface/accelerate) - datasets: [different repo](https://github.com/huggingface/datasets) - diffusers: [different repo](https://github.com/huggingface/diffusers) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Maintained examples (not research project or legacy): - Flax: @sanchit-gandhi - PyTorch: @sgugger - TensorFlow: @Rocketknight1 -->
05-30-2023 14:07:39
05-30-2023 14:07:39
cc @ArthurZucker <|||||>_The documentation is not available anymore as the PR was closed or merged._<|||||>Hey! Thanks for opening the PR, let's just re-run the CI tests
transformers
23,862
closed
Update collating_graphormer.py
# What does this PR do? Fixes #23697 ## Who can review? @ydshieh ?
05-30-2023 13:59:27
05-30-2023 13:59:27
_The documentation is not available anymore as the PR was closed or merged._
transformers
23,861
closed
[from_pretrained] improve the error message when `_no_split_modules` is not defined
# What does this PR do? As a lot of issues related to this have appeared, the warning is improved. Addresses #23816
05-30-2023 13:31:21
05-30-2023 13:31:21
_The documentation is not available anymore as the PR was closed or merged._<|||||>Thanks, though black would format it! <|||||>Tests are again unrelated to the PR, will merge once the doc is built
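To illustrate the kind of guard this PR's improved message concerns, here is a minimal sketch (not the actual transformers implementation) of failing early with an actionable error when `device_map="auto"` is requested but the model class defines no `_no_split_modules`. The class name and wording are illustrative only.

```python
# Illustrative sketch: a clearer failure mode when a model class has not
# declared which modules must not be split across devices.
class DummyPreTrainedModel:
    _no_split_modules = None  # real model classes are expected to override this

    @classmethod
    def check_device_map_support(cls, device_map):
        if device_map is not None and cls._no_split_modules is None:
            raise ValueError(
                f"{cls.__name__} does not support `device_map='{device_map}'` yet "
                "because `_no_split_modules` is not defined on the class. Define it "
                "or load the model without a device map."
            )


if __name__ == "__main__":
    try:
        DummyPreTrainedModel.check_device_map_support("auto")
    except ValueError as err:
        print(err)
```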
transformers
23,860
closed
DeepSpeed unable to resume training with PEFT
this was my main code: ```python parser = argparse.ArgumentParser() parser.add_argument("--wandb", action="store_true", default=False) parser.add_argument("--prompt_type", type=str, default="chat") parser.add_argument("--data_path", type=str, default="merge.json") parser.add_argument("--output_path", type=str, default="out/lora-Vicuna-chat") parser.add_argument("--model_path", type=str, default="decapoda-research/llama-7b-hf") parser.add_argument("--num_epoch", type=int, default=3) parser.add_argument("--micro_batch", type=int, default=4) parser.add_argument("--total_batch", type=int, default=128) parser.add_argument("--log_steps", type=int, default=100) parser.add_argument("--eval_steps", type=int, default=200) parser.add_argument("--save_steps", type=int, default=200) parser.add_argument("--warmup_ratio", type=float, default=0.05) parser.add_argument("--test_size", type=int, default=10) parser.add_argument("--resume_from_checkpoint", type=str, default=None) parser.add_argument("--lora_remote_checkpoint", type=str, default=None) parser.add_argument("--ignore_data_skip", type=bool, default=False) parser.add_argument("--int8_train", type=bool, default=False) parser.add_argument("--deepspeed", type=str, default=False) args = parser.parse_args() if not args.wandb: os.environ["WANDB_MODE"] = "disable" MICRO_BATCH_SIZE = args.micro_batch # this could actually be 5 but i like powers of 2 BATCH_SIZE = args.total_batch MAX_STEPS = None GRADIENT_ACCUMULATION_STEPS = BATCH_SIZE // MICRO_BATCH_SIZE EPOCHS = args.num_epoch LEARNING_RATE = 3e-4 # the Karpathy constant # CUTOFF_LEN = 2048 CUTOFF_LEN = 512 LORA_R = 8 LORA_ALPHA = 16 LORA_DROPOUT = 0.05 VAL_SET_SIZE = args.test_size # 2000 TARGET_MODULES = [ "q_proj", "v_proj", "k_proj", "o_proj", "down_proj", "gate_proj", "up_proj", ] DATA_PATH = args.data_path OUTPUT_DIR = args.output_path # "lora-Vicuna" device_map = "auto" world_size = int(os.environ.get("WORLD_SIZE", 1)) ddp = world_size != 1 if ddp: device_map = {"": int(os.environ.get("LOCAL_RANK") or 0)} GRADIENT_ACCUMULATION_STEPS = GRADIENT_ACCUMULATION_STEPS // world_size # we must make sure batch_size and gradient_accumulation_steps not changed for resuming training. if args.resume_from_checkpoint: s_ = utils.check_args_on_resume(args) print(f'Resume args check status: {s_}') # checkpoint = os.path.join(args.resume_from_checkpoint, 'pytorch_model.bin') logger = utils.set_file_logger(__name__, OUTPUT_DIR) # 1. load dataset logger.info(f">>> processing data from {DATA_PATH}") logger.info(f">>> using {args}") train_tokenizer = LlamaTokenizer.from_pretrained(args.model_path, add_eos_token=True) assert train_tokenizer.eos_token_id == 2, "Tokenizer eos is wrong!!!" # unk. we want this to be different from the eos token train_tokenizer.pad_token_id = 0 # cannot use eos in generation! 
# tokenizer.padding_side = "left" # Allow batched inference test_tokenizer = LlamaTokenizer.from_pretrained(args.model_path) if args.prompt_type == "instruct": PROMPT = prompt.instruct_prompt(train_tokenizer, CUTOFF_LEN) elif args.prompt_type == "chat": PROMPT = prompt.chat_prompt(train_tokenizer, CUTOFF_LEN) else: raise Exception("not support") data = load_dataset("json", data_files=DATA_PATH) start = random.randint(1, 100) examples = Dataset.from_dict(data["train"][start : start + 5]).map( PROMPT.preprocess_train ) for example in examples: logger.info( f'>>> using prompt {args.prompt_type}, prompt example:\n { train_tokenizer.decode(example["input_ids"]) }' ) logger.info( f'>>> tokenizer labels: { train_tokenizer.decode([ 0 if l==-100 else l for l in example["labels"]])}' ) logger.info( f'>>> tokenizer example: { example["input_ids"][:10] }...{ example["input_ids"][-10:]}' ) num_proc = os.cpu_count() if VAL_SET_SIZE > 0: train_val = data["train"].train_test_split( test_size=VAL_SET_SIZE, shuffle=True, seed=42 ) train_data = ( train_val["train"].shuffle().map(PROMPT.preprocess_train, num_proc=num_proc) ) val_data = ( train_val["test"].shuffle().map(PROMPT.preprocess_train, num_proc=num_proc) ) else: train_data = data["train"].shuffle().map(PROMPT.preprocess_train, num_proc=num_proc) val_data = None now_max_steps = max((len(data["train"]) - VAL_SET_SIZE) // BATCH_SIZE * EPOCHS, EPOCHS) logger.info(f">>> load model from {args.model_path}") model = LlamaForCausalLM.from_pretrained( args.model_path, load_in_8bit=args.int8_train, device_map=device_map, torch_dtype=torch.float16, ) if args.int8_train: model = prepare_model_for_int8_training(model) config = LoraConfig( r=LORA_R, lora_alpha=LORA_ALPHA, target_modules=TARGET_MODULES, lora_dropout=LORA_DROPOUT, bias="none", task_type="CAUSAL_LM", ) model = get_peft_model(model, config) class CustomCallback(TrainerCallback): def __init__(self, trainer) -> None: super().__init__() self.trainer = trainer self.generation_config = GenerationConfig( temperature=1.0, top_p=0.75, top_k=40, num_beams=2, bos_token_id=train_tokenizer.bos_token_id, eos_token_id=train_tokenizer.eos_token_id, pad_token_id=train_tokenizer.pad_token_id, max_new_tokens=1024, # max_length=max_new_tokens+input_sequence min_new_tokens=1, # min_length=min_new_tokens+input_sequence bad_words_ids=test_tokenizer( ["\n\nUser:", "\n\nAssistant:"], add_special_tokens=False ).input_ids, ) self.repetition_penalty = 1.3 self.logger = utils.set_file_logger( "transformers.trainer", trainer.args.output_dir ) def on_log(self, args, state, control, logs, **kwargs): logger.info(logs) model.print_trainable_parameters() print(f"peft config of model: {model.peft_config}") logger.info(f"model.modules_to_save: {model.modules_to_save}") old_state_dict = model.state_dict model.state_dict = ( lambda self, *_, **__: get_peft_model_state_dict(self, old_state_dict()) ).__get__(model, type(model)) if torch.__version__ >= "2" and sys.platform != "win32": # model = torch.compile(model) pass model.save_pretrained(args.output_path) # print(f"now FUCK model s: {model.state_dict().keys()}") # print(f"{torch.load(os.path.join(args.resume_from_checkpoint, 'pytorch_model.bin')).keys()}") trainer = transformers.Trainer( model=model, train_dataset=train_data, eval_dataset=val_data, args=transformers.TrainingArguments( per_device_train_batch_size=MICRO_BATCH_SIZE, gradient_accumulation_steps=GRADIENT_ACCUMULATION_STEPS, warmup_ratio=args.warmup_ratio, num_train_epochs=EPOCHS, # max_steps=MAX_STEPS, 
learning_rate=LEARNING_RATE, fp16=True, logging_steps=args.log_steps, logging_first_step=True, # convenient evaluation_strategy="steps" if VAL_SET_SIZE > 0 else "no", save_strategy="steps", save_total_limit=2, eval_steps=args.eval_steps if VAL_SET_SIZE > 0 else None, save_steps=args.save_steps, output_dir=OUTPUT_DIR, load_best_model_at_end=True if VAL_SET_SIZE > 0 else False, ddp_find_unused_parameters=False if ddp else None, report_to="wandb" if args.wandb else [], ignore_data_skip=args.ignore_data_skip, deepspeed=args.deepspeed, ), data_collator=PROMPT.data_collator(), ) trainer.add_callback(CustomCallback(trainer)) model.config.use_cache = False trainer.train(resume_from_checkpoint=args.resume_from_checkpoint) model.save_pretrained(OUTPUT_DIR) ``` the model training is OK, model save is OK Got output like this: ``` (base) ➜ checkpoint-1200 git:(main) ll total 115M -rw-r--r-- 1 root root 77M May 30 20:37 aa drwxr-xr-x 2 root root 268M May 30 17:30 global_step1200 -rw-r--r-- 1 root root 15 May 30 17:30 latest -rw-r--r-- 1 root root 39M May 30 17:30 pytorch_model.bin -rw-r--r-- 1 root root 16K May 30 17:30 rng_state_0.pth -rw-r--r-- 1 root root 16K May 30 17:30 rng_state_1.pth -rw-r--r-- 1 root root 3.1K May 30 17:30 trainer_state.json -rw-r--r-- 1 root root 5.0K May 30 17:30 training_args.bin -rwxr--r-- 1 root root 19K May 30 17:30 zero_to_fp32.py ``` but somehow I can't resume the checkpoint, From my limited knowledge, resume should send same as output path, and inside output path, we might have checkpiint-800 checkpint-1600 etc. So I just resume from output_path. Then it says ValueError: Can't find a valid checkpoint at out/lora-Vicuna-chat/ Why????? I try to send a path like `out/lora-Vicuna-chat/checkpoint-600`, but also failed ,so strange
05-30-2023 12:43:37
05-30-2023 12:43:37
Hi, there, I got same error with deepspeed. training noramlly and resume got `Can't find a valid checkpoint` first I try resume_from_checkpoint with `out/lora-Vicuna-chat` (output_path) got `Can't find a valid checkpoint` then I send `out/lora-Vicuna-chat/checkpoint-6000` I can not load the lora weights........ ``` "base_model.model.model.layers.31.self_attn.k_proj.lora_B.default.weight", "base_model.model.model.layers.31.self_attn.v_proj.weight", "base_model.model.model.layers.31.self_attn.v_proj.lora_A.default.weight", "base_model.model.model.layers.31.self_attn.v_proj.lora_B.default.weight", "base_model.model.model.layers.31.self_attn.o_proj.weight", "base_model.model.model.layers.31.self_attn.o_proj.lora_A.default.weight", "base_model.model.model.layers.31.self_attn.o_proj.lora_B.default.weight", "base_model.model.model.layers.31.self_attn.rotary_emb.inv_freq", "base_model.model.model.layers.31.mlp.gate_proj.weight", "base_model.model.model.layers.31.mlp.gate_proj.lora_A.default.weight", "base_model.model.model.layers.31.mlp.gate_proj.lora_B.default.weight", "base_model.model.model.layers.31.mlp.down_proj.weight", "base_model.model.model.layers.31.mlp.down_proj.lora_A.default.weight", "base_model.model.model.layers.31.mlp.down_proj.lora_B.default.weight", "base_model.model.model.layers.31.mlp.up_proj.weight", "base_model.model.model.layers.31.mlp.up_proj.lora_A.default.weight", "base_model.model.model.layers.31.mlp.up_proj.lora_B.default.weight", "base_model.model.model.layers.31.input_layernorm.weight", Unexpected key(s) in state_dict:"base_model.model.model.layers.31.self_attn.q_proj.lora_A.weight", "base_model.model.model.layers.31.self_attn.q_proj.lora_B.weight", "base_model.model.model.layers.31.self_attn.k_proj.lora_A.weight", "base_model.model.model.layers.31.self_attn.k_proj.lora_B.weight", "base_model.model.model.layers.31.self_attn.v_proj.lora_A.weight", "base_model.model.model.layers.31.self_attn.v_proj.lora_B.weight", "base_model.model.model.layers.31.self_attn.o_proj.lora_A.weight", "base_model.model.model.layers.31.self_attn.o_proj.lora_B.weight", "base_model.model.model.layers.31.mlp.gate_proj.lora_A.weight", "base_model.model.model.layers.31.mlp.gate_proj.lora_B.weight", "base_model.model.model.layers.31.mlp.down_proj.lora_A.weight", "base_model.model.model.layers.31.mlp.down_proj.lora_B.weight", ``` the model with some suffix `default`, but samed model didn't have...... I am confused so much<|||||>cc @pacman100 <|||||>I am sorry but I found it might related about this: ``` model.state_dict = ( lambda self, *_, **__: get_peft_model_state_dict(self, old_state_dict()) ).__get__(model, type(model)) if torch.__version__ >= "2" and sys.platform != "win32": model = torch.compile(model) print("\n If there's a warning about missing keys above, please disregard :)") trainer.train(resume_from_checkpoint=args.resume_from_checkpoint) ``` I am replace model_stateduct after create Trainer, digging into code, found that `get_peft_model_state_dict` will replace the peft model sate keynname with some {adapter_name} as suffix. Does this line of code must before create trainer? There is really lack documentation mentationed about this. If so, then why must need users to do this manually when resume or same? And when using PeftModel.from_pretrained, it actually can set_peft_model_statedict automatically..... 
This behaviour really confuses me.<|||||>@sgugger Sorry to ping again, but this problem blocks me and leaves me very confused. Please help me clarify; I made a clear code analysis to address this problem: https://github.com/huggingface/peft/issues/746<|||||>Hello, looking into this and https://github.com/huggingface/peft/issues/746<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.
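A pattern commonly used to work around the LoRA key-name mismatch described in this thread (for example in alpaca-lora-style training scripts) is to restore the adapter weights manually with PEFT's `set_peft_model_state_dict` before calling `trainer.train()`, instead of letting the Trainer resolve the checkpoint. The sketch below assumes `model` is the `get_peft_model(...)` object built in the script above and uses a hypothetical checkpoint path; it restores only the adapter weights, not the DeepSpeed optimizer state.

```python
import os

import torch
from peft import set_peft_model_state_dict

# Hypothetical checkpoint directory written by the run above.
checkpoint_dir = "out/lora-Vicuna-chat/checkpoint-600"

# With the state_dict override used in the script, pytorch_model.bin holds only
# the adapter weights returned by get_peft_model_state_dict at save time.
adapter_weights = torch.load(
    os.path.join(checkpoint_dir, "pytorch_model.bin"), map_location="cpu"
)

# Re-attach the adapter weights to the PEFT-wrapped model; this helper maps the
# saved key names back onto the "default" adapter.
set_peft_model_state_dict(model, adapter_weights)

# Resume without resume_from_checkpoint, which is what raised
# "Can't find a valid checkpoint" in the report above.
# trainer.train()
```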
transformers
23,859
closed
❗ Bug for compute_transition_scores in generation
### System Info - `transformers` version: 4.30.0.dev0 - Platform: Linux-5.15.0-1038-azure-x86_64-with-glibc2.17 - Python version: 3.8.16 - Huggingface_hub version: 0.14.1 - Safetensors version: 0.3.1 - PyTorch version (GPU?): 2.0.1+cu117 (True) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: no - Using distributed or parallel set-up in script?: no ### Who can help? @gante ### Information - [ ] The official example scripts - [X] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [X] My own task or dataset (give details below) ### Reproduction ```python from transformers import AutoModelForSeq2SeqLM, AutoTokenizer src_lang = 'hye_Armn' trg_lang = 'eng_Latn' tokenizer = AutoTokenizer.from_pretrained("facebook/nllb-200-distilled-600M",src_lang='hye_Armn') model = AutoModelForSeq2SeqLM.from_pretrained("facebook/nllb-200-distilled-600M") batch = ['Հավելյալ 300-ը վագոնների թիվը դարձնում է 1,300, որոնց նպատակն է թեթևացնել գերծանրաբեռնվածությունը:'] inputs = tokenizer(batch,return_tensors="pt") outputs = model.generate( **inputs, forced_bos_token_id=tokenizer.lang_code_to_id[trg_lang], max_length=100, return_dict_in_generate=True, output_scores=True, num_return_sequences = 5, num_beams = 5, ) transition_scores = model.compute_transition_scores(outputs.sequences, outputs.scores, normalize_logits=True) ``` ### Expected behavior transition_scores are computed
05-30-2023 10:36:49
05-30-2023 10:36:49
@gante @ArthurZucker Hi, would you mind taking some time to check this?<|||||>Sure! Can you share the full traceback of the error that you stumbled upon? <|||||>Thanks for help. The full traceback is here: ``` RuntimeError Traceback (most recent call last) Cell In[1], line 19 9 inputs = tokenizer(batch,return_tensors="pt") 10 outputs = model.generate( 11 **inputs, 12 forced_bos_token_id=tokenizer.lang_code_to_id[trg_lang], (...) 17 num_beams = 5, 18 ) ---> 19 transition_scores = model.compute_transition_scores(outputs.sequences, outputs.scores, normalize_logits=True) File [/anaconda/envs/llmt/lib/python3.8/site-packages/transformers/generation/utils.py:1086](https://vscode-remote+ssh-002dremote-002b7b22686f73744e616d65223a2241313030227d.vscode-resource.vscode-cdn.net/anaconda/envs/llmt/lib/python3.8/site-packages/transformers/generation/utils.py:1086), in GenerationMixin.compute_transition_scores(self, sequences, scores, beam_indices, normalize_logits) 1084 # 7. Define which indices contributed to scores 1085 cut_idx = sequences.shape[-1] - max_beam_length -> 1086 indices = sequences[:, cut_idx:] + beam_sequence_indices 1088 # 8. Compute scores 1089 transition_scores = scores.gather(0, indices) RuntimeError: The size of tensor a (2) must match the size of tensor b (23) at non-singleton dimension 1 ```<|||||>BTW, this RuntimeError doesn't always happen. For example, for input like this, this snippet works fine. ``` batch = ['«Մենք հիմա ունենք 4 ամսական մկներ, որոնք, նախկինում շաքարային դիաբետ ունենալով, այժմ չունեն այն,- ավելացրեց նա։»'] ```<|||||>Hey @Hannibal046 👋 As stated in the [docs](https://huggingface.co/docs/transformers/v4.30.0/en/main_classes/text_generation#transformers.GenerationMixin.compute_transition_scores), when `num_beams>1`, you need to pass the `beam_indices` argument to `compute_transition_scores()`. `beam_indices` is part of the output in generate with beam search. Here's a working snippet: ```py from transformers import AutoModelForSeq2SeqLM, AutoTokenizer src_lang = 'hye_Armn' trg_lang = 'eng_Latn' tokenizer = AutoTokenizer.from_pretrained("facebook/nllb-200-distilled-600M", src_lang='hye_Armn') model = AutoModelForSeq2SeqLM.from_pretrained("facebook/nllb-200-distilled-600M") batch = ['Հավելյալ 300-ը վագոնների թիվը դարձնում է 1,300, որոնց նպատակն է թեթևացնել գերծանրաբեռնվածությունը:'] inputs = tokenizer(batch, return_tensors="pt") outputs = model.generate( **inputs, forced_bos_token_id=tokenizer.lang_code_to_id[trg_lang], max_length=100, return_dict_in_generate=True, output_scores=True, num_return_sequences = 5, num_beams = 5, ) transition_scores = model.compute_transition_scores(outputs.sequences, outputs.scores, outputs.beam_indices, normalize_logits=True) print(transition_scores) ```<|||||>@gante Thanks so much! Appreciate your work!<|||||>@gante we can maybe raise an error if indices are not properly passed? <|||||>@ArthurZucker Sadly that is not possible without an API change :( We do get an exception when `num_beams` and `num_return_sequences` are used together in `generate`, but not when `num_beams` is used alone -- the output format is the same as a batched input, no way to detect whether it comes from beam search or not. E.g. 
this snippet runs (and it should throw an error) ```py from transformers import AutoModelForCausalLM, AutoTokenizer tokenizer = AutoTokenizer.from_pretrained("distilgpt2") model = AutoModelForCausalLM.from_pretrained("distilgpt2") inputs = tokenizer(["The quick brown"], return_tensors="pt") gen_out = model.generate(**inputs, num_beams=5, do_sample=False, return_dict_in_generate=True, output_scores=True) transition_scores = model.compute_transition_scores(gen_out.sequences, gen_out.scores, normalize_logits=True) ``` The solution would be e.g. to pass the entire outputs of generate into `compute_transition_scores`. But that's a bigger change that implies a deprecation cycle (that I'm not sure is worth going through 🤔 )
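As a follow-up to the working beam-search snippet above, the per-step transition scores can be aggregated back into per-sequence scores. The sketch below reuses the `outputs` and `transition_scores` variables from that snippet; note the reconstruction is only guaranteed to match `outputs.sequences_scores` exactly when `normalize_logits=False` is used.

```python
import numpy as np

# transition_scores: one row per returned sequence, one column per generated
# step; padded positions carry 0, real token log-probs are negative.
scores = transition_scores.cpu().numpy()

# Number of generated tokens per sequence.
output_lengths = np.sum(scores < 0, axis=1)

# Sum of token log-probs divided by the length penalty (1.0 is the default).
length_penalty = 1.0
reconstructed_scores = scores.sum(axis=1) / (output_lengths**length_penalty)

print(reconstructed_scores)
print(outputs.sequences_scores)
```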
transformers
23,858
closed
Space CPU Basic and Nvidia T4 - small should be FREE FREE FREE
### Feature request Look at this horrible chart, from [this page](https://huggingface.co/pricing#spaces). ![image](https://github.com/huggingface/transformers/assets/45315076/7b781ad5-f5f6-4469-a845-c408fb126bd7) The CPU basic and T4 small tiers should be completely free. ### Motivation - I have a code example (a Gradio web app) and I want to upload it to a Hugging Face Space. The code is minimal, but an API-level issue prevents it from running on CPU, so I need at least a small GPU. Right now I can't run it on a Space because there is no free GPU tier. - But I can easily run this app on Google Colab, which provides a free T4: I simply git clone the Space repo into Colab and run my code there. - How is that good for Hugging Face's business? - How is that good for end-user demand? Please reconsider. ### Your contribution Don't ask me to sponsor ;)
05-30-2023 09:27:54
05-30-2023 09:27:54
transformers
23,857
closed
Added time-series blogs to the models
@kashif @NielsRogge
05-30-2023 09:16:42
05-30-2023 09:16:42
_The documentation is not available anymore as the PR was closed or merged._
transformers
23,856
closed
Adds AutoProcessor.from_pretrained support for MCTCTProcessor
# What does this PR do? Adds `MCTCTProcessor` to the mapping between model architectures and classes used by `AutoProcessor.from_pretrained`. <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Fixes #23853 ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [X] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [X] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - **See #23853** - [X] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - **The docs should get autoupdated via the `replace_list_option_in_docstrings` decorator.** - [X] Did you write any new necessary tests? - I don't know that relevant tests can be written without expanding the suite of internal models (`"hf-internal-testing/tiny-random-MCTCTModel` doesn't work because it doesn't have a tokenizer attached). - I did confirm the existing `AutoProcessor` tests still pass. ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. @sanchit-gandhi <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - text models: @ArthurZucker and @younesbelkada - vision models: @amyeroberts - speech models: @sanchit-gandhi - graph models: @clefourrier Library: - flax: @sanchit-gandhi - generate: @gante - pipelines: @Narsil - tensorflow: @gante and @Rocketknight1 - tokenizers: @ArthurZucker - trainer: @sgugger Integrations: - deepspeed: HF Trainer: @stas00, Accelerate: @pacman100 - ray/raytune: @richardliaw, @amogkam Documentation: @sgugger, @stevhliu and @MKhalusova HF projects: - accelerate: [different repo](https://github.com/huggingface/accelerate) - datasets: [different repo](https://github.com/huggingface/datasets) - diffusers: [different repo](https://github.com/huggingface/diffusers) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Maintained examples (not research project or legacy): - Flax: @sanchit-gandhi - PyTorch: @sgugger - TensorFlow: @Rocketknight1 -->
05-30-2023 09:12:50
05-30-2023 09:12:50
_The documentation is not available anymore as the PR was closed or merged._
transformers
23,855
closed
[LlamaTokenizerFast] nit update `post_processor` on the fly
# What does this PR do? This PR addresses #23833, where it appears that changing `add_eos_token` and `add_bos_token` should be made possible for easier use of the interface. The fix comes in three changes: - added an `_add_bos_token` attribute, as well as a setter and getter for `add_bos_token` - added a `self.update_post_processor` that updates the post-processor based on the current values of `add_eos_token` and `add_bos_token` - added a test to make sure this works properly
05-30-2023 08:28:44
05-30-2023 08:28:44
_The documentation is not available anymore as the PR was closed or merged._<|||||>Failing test is unrelated to the changes
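For readers unfamiliar with what updating the post-processor on the fly involves, the sketch below rebuilds a `TemplateProcessing` post-processor on a fast Llama-style tokenizer from the current BOS/EOS flags. It is a simplified illustration of the mechanism, not the code merged in this PR; the checkpoint name is a placeholder and the `tokenizers` package must be installed.

```python
from tokenizers import processors
from transformers import AutoTokenizer

# Placeholder checkpoint standing in for any Llama-style fast tokenizer.
tok = AutoTokenizer.from_pretrained("hf-internal-testing/llama-tokenizer", use_fast=True)
add_bos, add_eos = True, True  # the flags this PR makes settable on the fly

bos, eos = tok.bos_token, tok.eos_token
single = f"{bos + ':0 ' if add_bos else ''}$A:0{' ' + eos + ':0' if add_eos else ''}"
pair = f"{single} {bos + ':1 ' if add_bos else ''}$B:1{' ' + eos + ':1' if add_eos else ''}"

special_tokens = []
if add_bos:
    special_tokens.append((bos, tok.bos_token_id))
if add_eos:
    special_tokens.append((eos, tok.eos_token_id))

# Swap in the rebuilt post-processor on the backend tokenizer.
tok.backend_tokenizer.post_processor = processors.TemplateProcessing(
    single=single, pair=pair, special_tokens=special_tokens
)

print(tok("Hello world")["input_ids"])  # should now end with the EOS id
```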
transformers
23,854
closed
:grey_question: Custom tool creation and pip requirements :grey_question:
# :grey_question: About I'm currently working on [Creating a new tool](https://huggingface.co/docs/transformers/en/custom_tools#creating-a-new-tool), ... and this tool will rely on a custom `pypi` package. In the documentation, you show some classic imports but not custom ones. # :pray: Question If I create a custom tool, how can I make sure that the final user won't be bothered by my internal package? I.e., I would like the end user to only have to import my custom tool without having to install the `pypi` package, so here comes the (newbie/noob) question: > "How do you package a custom tool that itself relies on a custom `pypi` package?" Thank you in advance for your help.
05-30-2023 08:03:32
05-30-2023 08:03:32
It's not possible, they will need to install the packages required by your tool.<|||||>Ok, thanks a lot for your answer @sgugger :pray:
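Given the answer above (the dependency cannot be hidden from the end user), a common mitigation is to fail at call time with an actionable install hint. The sketch below assumes the `Tool` base class from transformers' agents and a hypothetical package named `my_internal_pkg`; it is illustrative, not an official pattern.

```python
from transformers import Tool


class MyCustomTool(Tool):
    name = "my_custom_tool"
    description = "Runs a computation that relies on the (hypothetical) my_internal_pkg package."
    inputs = ["text"]
    outputs = ["text"]

    def __call__(self, text: str) -> str:
        try:
            # Hypothetical dependency; the end user still has to pip-install it.
            import my_internal_pkg
        except ImportError as err:
            raise ImportError(
                "my_custom_tool requires `my_internal_pkg`. "
                "Install it with `pip install my_internal_pkg`."
            ) from err
        return my_internal_pkg.run(text)
```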
transformers
23,853
closed
AutoProcessor.from_pretrained doesn't support MCTCT Models
### System Info Not actually relevant, but included for completeness: - `transformers` version: 4.29.1 - Platform: Linux-4.18.0-348.23.1.el8_5.x86_64-x86_64-with-glibc2.28 - Python version: 3.10.11 - Huggingface_hub version: 0.14.1 - Safetensors version: not installed - PyTorch version (GPU?): 2.0.1 (True) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): 0.6.1 (cpu) - Jax version: 0.4.9 - JaxLib version: 0.4.9 - Using GPU in script?: Yes - Using distributed or parallel set-up in script?: No ### Who can help? @sanchit-gandhi ### Information - [ ] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction ```python from transformers import AutoProcessor, MCTCTProcessor mctc_proc1 = AutoProcessor.from_pretrained("speechbrain/m-ctc-t-large") mctc_proc2 = MCTCTProcessor.from_pretrained("speechbrain/m-ctc-t-large") print(f"AutoProcessor: {mctc_proc1}") print(f"MCTCTProcessor: {mctc_proc2}") ``` The first line prints a `MCTCTProcessor` instance, containing a`MCTCTFeatureExtractor` feature extractor and `Wav2Vec2CTCTokenizer` tokenizer) while the second prints just an `Wav2Vec2CTCTokenizer` instance. ### Expected behavior `AutoProcessor.from_pretrained` should return an `MCTCTProcessor` instance when the provided model is an MCTCT model. The reason it does not right now is because [the code for `AutoProcessor`](https://github.com/huggingface/transformers/blob/17a55534f5e5df10ac4804d4270bf6b8cc24998d/src/transformers/models/auto/processing_auto.py#LL42C1-L82C1) does not include a mapping entry for MCTCT. ```python PROCESSOR_MAPPING_NAMES = OrderedDict( [ ("align", "AlignProcessor"), ("altclip", "AltCLIPProcessor"), ("blip", "BlipProcessor"), ("blip-2", "Blip2Processor"), ("bridgetower", "BridgeTowerProcessor"), ("chinese_clip", "ChineseCLIPProcessor"), ("clap", "ClapProcessor"), ("clip", "CLIPProcessor"), ("clipseg", "CLIPSegProcessor"), ("flava", "FlavaProcessor"), ("git", "GitProcessor"), ("groupvit", "CLIPProcessor"), ("hubert", "Wav2Vec2Processor"), ("layoutlmv2", "LayoutLMv2Processor"), ("layoutlmv3", "LayoutLMv3Processor"), ("markuplm", "MarkupLMProcessor"), ("mgp-str", "MgpstrProcessor"), ("oneformer", "OneFormerProcessor"), ("owlvit", "OwlViTProcessor"), ("pix2struct", "Pix2StructProcessor"), ("sam", "SamProcessor"), ("sew", "Wav2Vec2Processor"), ("sew-d", "Wav2Vec2Processor"), ("speech_to_text", "Speech2TextProcessor"), ("speech_to_text_2", "Speech2Text2Processor"), ("speecht5", "SpeechT5Processor"), ("trocr", "TrOCRProcessor"), ("tvlt", "TvltProcessor"), ("unispeech", "Wav2Vec2Processor"), ("unispeech-sat", "Wav2Vec2Processor"), ("vilt", "ViltProcessor"), ("vision-text-dual-encoder", "VisionTextDualEncoderProcessor"), ("wav2vec2", "Wav2Vec2Processor"), ("wav2vec2-conformer", "Wav2Vec2Processor"), ("wavlm", "Wav2Vec2Processor"), ("whisper", "WhisperProcessor"), ("xclip", "XCLIPProcessor"), ] ) ``` An [MCTCTProcessor class](https://github.com/huggingface/transformers/blob/main/src/transformers/models/mctct/processing_mctct.py) exists whose `from_pretrained` function behaves appropriately. `AutoProcessor` should behave the same way, rather than falling back to a tokenizer. 
The fix seems simple enough, by adding the entry below to `PROCESSOR_MAPPING_NAMES` (but I am far from an expert): ```python ("mctct", "MCTCTProcessor"), ``` For comparison, the `AutoModel.from_pretrained` method does support MCTCT and thus behaves appropriately because [its mapping contains a line for MCTCT](https://github.com/huggingface/transformers/blob/main/src/transformers/models/auto/modeling_auto.py#L125).
05-30-2023 05:31:32
05-30-2023 05:31:32
cc @sanchit-gandhi
transformers
23,852
closed
RWKV can't stop correctly.
According to [here](https://huggingface.co/BlinkDL/rwkv-4-raven), the prompt should be `Bob: xxxxxxxxxxxxxxxxxx\n\nAlice:`.But when I run ``` import torch from transformers import AutoModelForCausalLM, AutoTokenizer model = AutoModelForCausalLM.from_pretrained("RWKV/rwkv-raven-7b", torch_dtype=torch.float16).to(0) tokenizer = AutoTokenizer.from_pretrained("RWKV/rwkv-raven-7b") prompt = "Bob: What's your name?\n\nAlice:" inputs = tokenizer(prompt, return_tensors="pt").to(0) output = model.generate(inputs["input_ids"], max_new_tokens=256) print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[-1]:])) ``` The output will be ``` " I'm ChatGPT. My name is not important.\n\nBob: What's your favorite color?\n\nAlice: I don't have a favorite color. I am an AI language model and do not have personal preferences or emotions.\n\nBob: What's your favorite color?\n\nAlice: I don't have personal preferences or emotions. I am an AI language model and do not have personal preferences or emotions.\n\nBob: What's your favorite color?\n\nAlice: I don't have personal preferences or emotions. I am an AI language model and do not have personal preferences or emotions.\n\nBob: What's your favorite color?\n\nAlice: I don't have personal preferences or emotions. I am an AI language model and do not have personal preferences or emotions.\n\nBob: What's your favorite color?\n\nAlice: I don't have personal preferences or emotions. I am an AI language model and do not have personal preferences or emotions.\n\nBob: What's your favorite color?\n\nAlice: I don't have personal preferences or emotions. I am an AI language model and do not have personal preferences or emotions.\n\nBob: What's your favorite color?\n" ``` As you can see, it can't stop.
05-30-2023 04:27:42
05-30-2023 04:27:42
It seems that '\n\n' should be the eos_token. @sgugger<|||||>And according to [this](https://huggingface.co/BlinkDL/rwkv-4-world), '\n\n' will be a single token in the new world model.<|||||>cc @younesbelkada and @ArthurZucker <|||||>hi @JaheimLee The EOS token should be still the same across all RWKV models, I believe for chat models, you need to manually stop the generation whenever you encounter the `\n\n` token. See here: https://huggingface.co/spaces/BlinkDL/ChatRWKV-gradio/blob/main/app.py#L214 for reference. <|||||>Also another thing that might be required is to properly set the `congig/generation_config`'s `eos_token_id` as there is a logic to automatically stop generating when these are detected/. <|||||>> Also another thing that might be required is to properly set the `congig/generation_config`'s `eos_token_id` as there is a logic to automatically stop generating when these are detected/. But `\n\n` is not a token now. How to set the `eos_token_id`? Maybe the rwkv tokenizer should be updated to the world model version first?<|||||>I don't know if `\n\n` can be encoded as a single token, probably that is why in the official demo it manually looks for that string and stops generating if that string has been generated<|||||>It can if we add it to the vocab with `add_special_token`. If you just set `tokenizer.add_special_token"` it should work out of the box. Let me have a try<|||||>Okay! Here is the fix: `model.config.eos_token_id = 187` (`"\n"` and not `"\n\n"` worked) . The model.config has it set to `0`. With this here is the output I have: ```python >>> model.config.eos_token_id = 187 >>> output = model.generate(inputs["input_ids"], max_new_tokens=256);print(tokenizer.decode(output[0])) Bob: What's your name? Alice: My name is not important. ```<|||||>> Okay! Here is the fix: `model.config.eos_token_id = 187` (`"\n"` and not `"\n\n"` worked) . The model.config has it set to `0`. With this here is the output I have: > > ```python > >>> model.config.eos_token_id = 187 > >>> output = model.generate(inputs["input_ids"], max_new_tokens=256);print(tokenizer.decode(output[0])) > Bob: What's your name? > > Alice: My name is not important. > ``` But it will hurt the output in which has to have `\n`, like ``` query = "Bob: How to write a paper?.\n\nAlice:" In [10]: tokenizer.decode(outputs[0][inputs["input_ids"].shape[-1]:]) Out[10]: ' Writing a paper involves several steps, including planning, organizing, writing, editing, and proofreading. Here are some steps to help you write a paper:\n' ```<|||||>Yes 😅 the issue is that the model does not predict the `\n\n` token but rather `\n` `\n`. I’ll see what I can do 😃<|||||>> Yes 😅 the issue is that the model does not predict the `\n\n` token but rather `` \n``\n ``. I’ll see what I can do 😃 I think the only way to fix it is to update the tokenizer to the world version mentioned above.<|||||>You can also implement your own `StoppingCriteria`, like the following: ```python from transformers import StoppingCriteria class RwkvStoppingCriteria(StoppingCriteria): def __init__(self, eos_sequence = [187,187], eos_token_id = 537): self.eos_sequence = eos_sequence self.eos_token_id = eos_token_id def __call__(self, input_ids: torch.LongTensor, scores: torch.FloatTensor, **kwargs) -> bool: last_2_ids = input_ids[:,-2:].tolist() return self.eos_sequence in last_2_ids output = model.generate(inputs["input_ids"], max_new_tokens=64, stopping_criteria = [RwkvStoppingCriteria()]) ``` This gave me: ```python Bob: What's your name? 
Alice: My name is not important. ``` and ```python >>> output = model.generate(inputs["input_ids"], max_new_tokens=64, stopping_criteria = [RwkvStoppingCriteria()]) The attention mask and the pad token id were not set. As a consequence, you may observe unexpected behavior. Please pass your input's `attention_mask` to obtain reliable results. Setting `pad_token_id` to `eos_token_id`:0 for open-end generation. >>> print(tokenizer.decode(output[0])) Bob: How to write a paper?. Alice: Writing a paper involves several steps, including planning, organizing, writing, editing, and proofreading. Here are some steps to help you write a paper: 1. Choose a topic: Choose a topic that you are interested in and that you can research thoroughly. 2. Develop a thesis statement: A thesis statement ```<|||||>Two choices, either we add this to transformers, or we modify the generate function of RWKV to stop when two `/n` are generated. I am in favor of 1 as it is a much cleaner fix to a hack that should not exist. <|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.
transformers
23,851
closed
[Bug]? how does the tokenizer encode the special tokens?
### System Info transformer version 4.28.1 ### Who can help? @ArthurZucker hi, maybe, the following issue should be asked here? [[Bug]? how does the tokenizer encode the special tokens? #1263](https://github.com/huggingface/tokenizers/issues/1263) ### Information - [ ] The official example scripts - [X] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [X] My own task or dataset (give details below) ### Reproduction Hi, all, I used the tokenzier to process data for llama model(already converted to hf formated) and set: ```python tokenizer = AutoTokenizer.from_pretrained(llama_model_id, model_max_length=1024, padding_side='right', trust_remote_code=True) tokenizer.add_special_tokens( { "eos_token": "</s>", "bos_token": "</s>", "unk_token": "</s>", }) tokenizer.pad_token = tokenizer.eos_token ``` when tokenizing a piece of text with an eos_token: ```python tokenizer(['ASSISTANT: Hello!</s>']) # there is no space between ! and </s>. ``` ``` output: {'input_ids': [[1, 319, 1799, 9047, 13566, 29901, 15043, 29991, 829, 29879, 29958]], 'token_type_ids': [[0, 0, 0, 0, 0, 0,0, 0, 0, 0, 0]], 'attention_mask': [[1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1]]} ``` The `eos_token: </s>` is encoded to ` 829, 29879, 29958` which means `</s>` is regarded as `</`,`s` and `>`. ```python tokenizer(['ASSISTANT: Hello! </s>']) # there is a space between ! and </s>. ``` ``` output: {'input_ids': [[1, 319, 1799, 9047, 13566, 29901, 15043, 29991, 2]], 'token_type_ids': [[0, 0, 0, 0, 0, 0, 0, 0, 0]], 'attention_mask': [[1, 1, 1, 1, 1, 1, 1, 1, 1]]} ``` in this time, `</s>` is encoded correctly (token id is 2). As description above, does this mean we should add a space between text and `eos_token`? however, I find many popular projects like `Alpaca` concatenate text with `eos_token` without a space. I previously thought tokenizer encode text in a greedy style, the `eos_token` would be encoded correctly with or without a space. However, the experiments above seemed to not support my opinion. could anyone help me, if there is something misunderstood by me? thx. ---- After some other experiments, I found some weird thing: ```python tokenizer('我是谁') output: 'input_ids': [1, 29871, 30672, 30392, 235, 179, 132] ``` 1 is bos_token_id, 29871 is the token id of '' ```python tokenizer('我是谁</s>') output: 'input_ids': [1, 29871, 30672, 30392, 235, 179, 132, 829, 29879, 29958] tokenizer('who are you</s>') output: 'input_ids': [1, 1058, 526, 366, 829, 29879, 29958] # there is no 29871. ``` when add a space ` ` between `谁` and `</s>`. ```python tokenizer('我是谁 </s>') output: 'input_ids': [1, 29871, 30672, 30392, 235, 179, 132, 2] # the `</s>` is encoded correctly ``` when decode `[1, 29871, 30672, 30392, 235, 179, 132, 2] ` ``` tokenizer.decode([1, 29871, 30672, 30392, 235, 179, 132, 2]) output: '<s> 我是谁</s>' ``` the space ` ` is ignored! When manually add token id 29871: ``` tokenizer.decode([1, 29871, 30672, 30392, 235, 179, 132, 29871, 2]) output: '<s> 我是谁 </s>' ``` this time, there is a space ` ` between `谁` and `</s>`. Does these experiments above means encode, decode methods are not completely Reciprocal reversible operation? ### Expected behavior does above experiments show bugs? if not, how should I understand these? thanks
05-30-2023 03:03:01
05-30-2023 03:03:01
#23818 <|||||>> #23818 @jiangwy99 thanks very much, when set `use_fast=False`, this indeed encode </s> correctly, whether the space exists. However, ```python tokenizer(['who are you', '你是谁']) output: outputs: [ [1, 1058, 526, 366], [1, 29871, 30919, 30392, 235, 179, 132] ] ``` the space ` ` in front Chinese characters still exists. <|||||>> > #23818 > > @jiangwy99 thanks very much, when set `use_fast=False`, this indeed encode correctly, whether the space exists. > > However, > > ```python > tokenizer(['who are you', '你是谁']) > output: > > outputs: > [ > [1, 1058, 526, 366], > [1, 29871, 30919, 30392, 235, 179, 132] > ] > ``` > > the space ` ` in front Chinese characters still exists. That's quite a problem. Your analysis of the problems on the tokenizer is more comprehensive than mine, and I look forward to these issues being resolved.<|||||>Hey, I basically answered in #23818, this is pretty much the same
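To reproduce the fast/slow difference discussed in this thread in isolation, here is a minimal sketch; the checkpoint name is a stand-in for any Llama-style SentencePiece tokenizer, and `sentencepiece` must be installed for the slow class.

```python
from transformers import AutoTokenizer

repo = "hf-internal-testing/llama-tokenizer"  # placeholder checkpoint
fast = AutoTokenizer.from_pretrained(repo, use_fast=True)
slow = AutoTokenizer.from_pretrained(repo, use_fast=False)

text = "ASSISTANT: Hello!</s>"
# Compare how each tokenizer handles the trailing </s> with no space before it.
print("fast:", fast(text)["input_ids"])
print("slow:", slow(text)["input_ids"])
```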
transformers
23,850
closed
🌐 [i18n-KO] Translated `perplexity.mdx` to Korean
<!-- PR의 제목은 "🌐 [i18n-KO] Translated `<your_file>.mdx` to Korean" 으로 부탁드립니다 --> # What does this PR do? Translated `perplexity.mdx` file of the documentation to Korean. Added draft of `model_summary.mdx` file because it's referenced. Thank you in advance for your review. Part of https://github.com/huggingface/transformers/issues/20179 <!-- 메인 이슈에 기록이 남아요! 가짜연구소 리포를 사용해 연습하실때는 제거해주시면 감사하겠습니다! :smile: --> ## Before reviewing - [x] Check for missing / redundant translations (번역 누락/중복 검사) - [x] Grammar Check (맞춤법 검사) - [x] Review or Add new terms to glossary (용어 확인 및 추가) - [x] Check Inline TOC (e.g. `[[lowercased-header]]`) - [x] Check live-preview for gotchas (live-preview로 정상작동 확인) ## Who can review? (Initial) <!-- 1. 위 체크가 모두 완료된 뒤에만 가짜연구소 팀원들에게 리뷰 요청하는 아래 주석을 노출해주세요! --> Team PseudoLab, may you please review this PR? @0525hhgus, @KIHOON71, @sim-so, @gabrielwithappy, @HanNayeoniee, @wonhyeongseo, @jungnerd ## Before submitting - [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [x] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? (Final) <!-- 2. 가짜연구소 팀원들과 리뷰가 끝난 후에만 허깅페이스 직원들에게 리뷰 요청하는 아래 주석을 노출해주세요! --> @sgugger, @ArthurZucker, @eunseojo May you please review this PR?
05-29-2023 23:16:34
05-29-2023 23:16:34
_The documentation is not available anymore as the PR was closed or merged._<|||||>좋은 번역 감사합니다! 앞서 올려주신 수정 제안 이외에 추가 의견 없습니다!<|||||>@sgugger, @ArthurZucker, @eunseojo May you please review this PR?
transformers
23,849
closed
[WIP] Add llava model
# What does this PR do? This PR adds the LlaVA model ([https://arxiv.org/abs/2304.08485](https://arxiv.org/abs/2304.08485)), an end-to-end trained large multimodal model that connects a vision encoder and LLM for general-purpose visual and language understanding. ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. https://github.com/huggingface/transformers/issues/22848 - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - text models: @ArthurZucker and @younesbelkada - vision models: @amyeroberts
05-29-2023 21:41:46
05-29-2023 21:41:46
The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_23849). All of your documentation changes will be reflected on that endpoint.<|||||>Hey! Thanks for wanting to contribute. I would suggest you to follow the [guide](https://huggingface.co/docs/transformers/model_sharing) on how to share a model like this one. Since it is basically patching two models up, should be easy to fit on the hub! 🤗 <|||||>Hey @ArthurZucker! Thank you for your message! I have looked up the guide you provided to share a model and according to my understanding, you are making a reference to uploading the model weights to the model hub and adding a model card, right? However, I am a little confused, since the model is already on the hub ([https://huggingface.co/liuhaotian/LLaVA-7b-delta-v0](https://huggingface.co/liuhaotian/LLaVA-7b-delta-v0)), but it cannot be ran using the current LLaMA implementation in transformers. I was thinking more to follow this [guide](https://huggingface.co/docs/transformers/add_new_model), and include in my PR new classes for Llava inheriting from PreTrainedConfig and PreTrainedModel and a LlavaForCausalLM class, as implemented here [https://github.com/haotian-liu/LLaVA/blob/main/llava/model/llava.py](https://github.com/haotian-liu/LLaVA/blob/main/llava/model/llava.py). What to do you think of it @ArthurZucker ? (@jprivera44 do not hesitate to participate in the convo since we will collaborate with each other on this PR) <|||||>Hi @youssefadr, following up on your post, I am also following the same guide for HF. Although we might interpret the steps slightly differently. I'm not sure which steps you are on, but even though the original researchers included the model card, this should be used to get the initial weights from the LLaMA weights(I'm still waiting on Meta for these weights). Once the pre-loaded weights are in, the process of tracing the forward pass(in the original repo) to see what functions are needed for transfomers/LLaVA kicks off the whole process. Were you able to get the original LLaMA weights from Meta?<|||||>Hey @youssefadr what I meant is that you should host the code on the hub, others will be able to run your code using `trust_remote_code = True`. This is easier to do, and more aligned with the way this model seems to work! <|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.
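For reference on the approach suggested above (hosting the modeling code on the Hub rather than merging it into transformers), loading such a model only requires opting in to remote code execution; the repository id below is a placeholder.

```python
from transformers import AutoConfig, AutoModelForCausalLM

# Placeholder repo id: any Hub repository that ships its own configuration_*.py
# and modeling_*.py next to the weights.
repo_id = "some-user/llava-custom-code"

config = AutoConfig.from_pretrained(repo_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(repo_id, trust_remote_code=True)
```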
transformers
23,848
open
RWKV - Inference NF4 quantization broken, also Int8 quantization weirdness.
### System Info - `transformers` version: 4.30.0.dev0 - Platform: Linux-5.15.0-70-generic-x86_64-with-glibc2.35 - Python version: 3.10.6 - Huggingface_hub version: 0.14.1 - Safetensors version: 0.3.1 - PyTorch version (GPU?): 2.0.1+cu117 (True) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: RTX 6000 Ada - Using distributed or parallel set-up in script?: Not for inference. - bitsandbytes 0.39. I'm using the `RWKV/rwkv-raven-14b` model. Rescaling is broken for NF4 quantization with RWKV `RuntimeError: result type Float can't be cast to the desired output type Byte` Looks like torch cannot do the conversion in _div And then if I turn rescaling off, it looks like theres a projection issue somewhere, `RuntimeError: mat1 and mat2 shapes cannot be multiplied (43x5120 and 1x13107200)` Additionally, with Int8 quantization enabled RWKV just outputs the endoftext token, I added a logits processor to output the scores and they're all NaN: ``` tensor([[nan, nan, nan, ..., nan, nan, nan]], device='cuda:0', dtype=torch.float16) ``` ### Who can help? @sgugger ### Information - [ ] The official example scripts - [X] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [X] My own task or dataset (give details below) ### Reproduction I have a repo with everything setup in generate.py to be able to quickly repro here: https://github.com/iantbutler01/rwkv-raven-qlora-4bit-instruct/blob/main/generate.py pip install -U git+https://github.com/huggingface/transformers.git pip install -U git+https://github.com/huggingface/peft.git pip install -U git+https://github.com/huggingface/accelerate.git pip install --upgrade bitsandbytes And then run `python generate.py` in a python 3.10+ environment. Uncomment 8bit or 4bit bnb config as needed. ### Expected behavior I would expect NF4 based quantization to work at all, and then for Int8 quantization for logits not to be NaN.
05-29-2023 20:08:16
05-29-2023 20:08:16
Not sure quantization actually works for RWKV which has quite a few custom layers. cc @younesbelkada <|||||>Hmm, I was able to do a 4bit finetuning with qlora last week at the very least targeting key value and receptance in the attention and feed forward blocks, it just seems like inference time is broken I confirmed my tuned checkpoints worked fine for inference at full precision and actually it worked fine for just the forward call in 8bit in Eleuther's lm-evaluation-harness too now that I think of it, not sure for 4bit. Just seems to break when calling generate <|||||>Hi @iantbutler01 Thanks for the issue! The 8bit support should be added in https://github.com/huggingface/transformers/pull/23468 From my understanding it seems you have managed to finetune RWKV in 4bit ? > Hmm, I was able to do a 4bit finetuning with qlora last week at the very least targeting key value and receptance in the attention and feed forward blocks Could you elaborate more on the error? <|||||>@younesbelkada In regards to int8, I've been testing on the development branch, which includes the code you've merged there and it very much just produces `tensor([[nan, nan, nan, ..., nan, nan, nan]], device='cuda:0', dtype=torch.float16)` for the logits during a `generate` call even with the base RWKV 14b model so I think something is still broken. You can reproduce this easily with the steps I've linked in the issue here. For example, with ``` AndBytesConfig( load_in_8bit=True ) model = AutoModelForCausalLM.from_pretrained( "RWKV/rwkv-raven-14b", return_dict=True, torch_dtype=torch.float16, quantization_config=bnb_config, context_length=1024, # rescale_every=0, ).cuda() tokenizer = AutoTokenizer.from_pretrained("RWKV/rwkv-raven-14b") pipeline = InstructionTextGenerationPipeline( model=model, tokenizer=tokenizer, top_p=0.92, top_k=50, temperature=1.0, ) instruction = "Write me the steps to make a peanut butter and jelly sandwich" prompt = PROMPT_FOR_GENERATION_FORMAT.format( instruction=instruction, ) class IsBork(LogitsProcessor): def __call__(self, input_ids, scores): print(scores) return scores prompt = str(prompt) inputs = tokenizer(prompt, return_tensors="pt") input_ids, attention_mask = inputs["input_ids"], inputs["attention_mask"] input_ids, attention_mask = input_ids.to("cuda"), attention_mask.to("cuda") generated_sequence = model.generate( input_ids=input_ids, attention_mask=attention_mask, logits_processor=LogitsProcessorList([IsBork()]), pad_token_id=tokenizer.pad_token_id, top_p=0.92, top_k=50, temperature=1.0, max_new_tokens=512 ) print(generated_sequence) ``` The call to generate raises an error, ``` Traceback (most recent call last): File "/home/crow/SoftwareProjects/rwkv-raven-lora-instruct/generate.py", line 171, in <module> gen = pipeline(prompt, max_new_tokens=512) File "/home/crow/SoftwareProjects/transformers/src/transformers/pipelines/base.py", line 1118, in __call__ return self.run_single(inputs, preprocess_params, forward_params, postprocess_params) File "/home/crow/SoftwareProjects/transformers/src/transformers/pipelines/base.py", line 1125, in run_single model_outputs = self.forward(model_inputs, **forward_params) File "/home/crow/SoftwareProjects/transformers/src/transformers/pipelines/base.py", line 1024, in forward model_outputs = self._forward(model_inputs, **forward_params) File "/home/crow/SoftwareProjects/rwkv-raven-lora-instruct/instruct_pipeline.py", line 112, in _forward generated_sequence = self.model.generate( File 
"/home/crow/venvs/experimental/lib/python3.10/site-packages/torch/utils/_contextlib.py", line 115, in decorate_context return func(*args, **kwargs) File "/home/crow/SoftwareProjects/transformers/src/transformers/generation/utils.py", line 1568, in generate return self.sample( File "/home/crow/SoftwareProjects/transformers/src/transformers/generation/utils.py", line 2651, in sample next_tokens = torch.multinomial(probs, num_samples=1).squeeze(1) RuntimeError: probability tensor contains either `inf`, `nan` or element < 0 ``` Adding a logits processor that just prints out scores shows on the first token generated, `tensor([[nan, nan, nan, ..., nan, nan, nan]], device='cuda:0', dtype=torch.float16)` If I then set do_sample=False ``` Below is an instruction that describes a task. Write a response that appropriately completes the request. ### Instruction: Write me the steps to make a peanut butter and jelly sandwich ### Response: <|endoftext|> ``` It only generates end of text, where as the full precision model generates correctly. <|||||>In regards to 4bit rescaling during inference is broken for NF4 quantization with RWKV if you try to run inference, with a `generate` call with nf4 quantization: RuntimeError: result type Float can't be cast to the desired output type Byte which is failing in the else statement of that block your int8 PR touches. ``` Traceback (most recent call last): File "/home/crow/SoftwareProjects/rwkv-raven-lora-instruct/generate.py", line 181, in <module> generated_sequence = model.generate( File "/home/crow/venvs/experimental/lib/python3.10/site-packages/torch/utils/_contextlib.py", line 115, in decorate_context return func(*args, **kwargs) File "/home/crow/SoftwareProjects/transformers/src/transformers/generation/utils.py", line 1518, in generate return self.greedy_search( File "/home/crow/SoftwareProjects/transformers/src/transformers/generation/utils.py", line 2335, in greedy_search outputs = self( File "/home/crow/venvs/experimental/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl return forward_call(*args, **kwargs) File "/home/crow/venvs/experimental/lib/python3.10/site-packages/accelerate/hooks.py", line 165, in new_forward output = old_forward(*args, **kwargs) File "/home/crow/SoftwareProjects/transformers/src/transformers/models/rwkv/modeling_rwkv.py", line 781, in forward rwkv_outputs = self.rwkv( File "/home/crow/venvs/experimental/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl return forward_call(*args, **kwargs) File "/home/crow/venvs/experimental/lib/python3.10/site-packages/accelerate/hooks.py", line 165, in new_forward output = old_forward(*args, **kwargs) File "/home/crow/SoftwareProjects/transformers/src/transformers/models/rwkv/modeling_rwkv.py", line 642, in forward self._rescale_layers() File "/home/crow/SoftwareProjects/transformers/src/transformers/models/rwkv/modeling_rwkv.py", line 713, in _rescale_layers block.attention.output.weight.div_(2 ** int(block_id // self.config.rescale_every)) ``` And then if I turn rescaling off by setting `rescale_every=0`, it looks like theres a projection issue somewhere, RuntimeError: mat1 and mat2 shapes cannot be multiplied (43x5120 and 1x13107200) ``` Traceback (most recent call last): File "/home/crow/SoftwareProjects/rwkv-raven-lora-instruct/generate.py", line 181, in <module> generated_sequence = model.generate( File "/home/crow/venvs/experimental/lib/python3.10/site-packages/torch/utils/_contextlib.py", line 115, in decorate_context 
return func(*args, **kwargs) File "/home/crow/SoftwareProjects/transformers/src/transformers/generation/utils.py", line 1518, in generate return self.greedy_search( File "/home/crow/SoftwareProjects/transformers/src/transformers/generation/utils.py", line 2335, in greedy_search outputs = self( File "/home/crow/venvs/experimental/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl return forward_call(*args, **kwargs) File "/home/crow/venvs/experimental/lib/python3.10/site-packages/accelerate/hooks.py", line 165, in new_forward output = old_forward(*args, **kwargs) File "/home/crow/SoftwareProjects/transformers/src/transformers/models/rwkv/modeling_rwkv.py", line 781, in forward rwkv_outputs = self.rwkv( File "/home/crow/venvs/experimental/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl return forward_call(*args, **kwargs) File "/home/crow/venvs/experimental/lib/python3.10/site-packages/accelerate/hooks.py", line 165, in new_forward output = old_forward(*args, **kwargs) File "/home/crow/SoftwareProjects/transformers/src/transformers/models/rwkv/modeling_rwkv.py", line 667, in forward hidden_states, state, attentions = block( File "/home/crow/venvs/experimental/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl return forward_call(*args, **kwargs) File "/home/crow/venvs/experimental/lib/python3.10/site-packages/accelerate/hooks.py", line 165, in new_forward output = old_forward(*args, **kwargs) File "/home/crow/SoftwareProjects/transformers/src/transformers/models/rwkv/modeling_rwkv.py", line 384, in forward attention, state = self.attention(self.ln1(hidden), state=state, use_cache=use_cache) File "/home/crow/venvs/experimental/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl return forward_call(*args, **kwargs) File "/home/crow/venvs/experimental/lib/python3.10/site-packages/accelerate/hooks.py", line 165, in new_forward output = old_forward(*args, **kwargs) File "/home/crow/SoftwareProjects/transformers/src/transformers/models/rwkv/modeling_rwkv.py", line 308, in forward receptance, key, value, state = self.extract_key_value(hidden, state=state) File "/home/crow/SoftwareProjects/transformers/src/transformers/models/rwkv/modeling_rwkv.py", line 300, in extract_key_value key = self.key(key) File "/home/crow/venvs/experimental/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl return forward_call(*args, **kwargs) File "/home/crow/venvs/experimental/lib/python3.10/site-packages/accelerate/hooks.py", line 165, in new_forward output = old_forward(*args, **kwargs) File "/home/crow/venvs/experimental/lib/python3.10/site-packages/bitsandbytes/nn/modules.py", line 219, in forward out = bnb.matmul_4bit(x, self.weight.t(), bias=bias, quant_state=self.weight.quant_state) File "/home/crow/venvs/experimental/lib/python3.10/site-packages/bitsandbytes/autograd/_functions.py", line 564, in matmul_4bit return MatMul4Bit.apply(A, B, out, bias, quant_state) File "/home/crow/venvs/experimental/lib/python3.10/site-packages/torch/autograd/function.py", line 506, in apply return super().apply(*args, **kwargs) # type: ignore[misc] File "/home/crow/venvs/experimental/lib/python3.10/site-packages/bitsandbytes/autograd/_functions.py", line 512, in forward output = torch.nn.functional.linear(A, F.dequantize_fp4(B, state).to(A.dtype).t(), bias) RuntimeError: mat1 and mat2 shapes cannot be multiplied (42x5120 and 1x13107200) ``` But yeah I have this all 
reproducible in the script I've linked in the issue.<|||||>I see, thanks for sharing more details with me So there are 2 issues here: 1- int8 RWKV seems to not work with you. From the snippet I am seeing, you are calling `.cuda()` on the 8bit model. This might lead to unexpected behavior because any `.to(xxx)` calls to the 8bit model will re-compute the quantization statistics. I have managed to reproduce your issue with the snippet below: ```python from transformers import AutoTokenizer, AutoModelForCausalLM, GenerationConfig model_id = "RWKV/rwkv-4-1b5-pile" model = AutoModelForCausalLM.from_pretrained(model_id, load_in_8bit=True, device_map={"":0}).cuda() tokenizer = AutoTokenizer.from_pretrained(model_id) generation_config = GenerationConfig(max_new_tokens=20, pad_token_id=tokenizer.eos_token_id) question = "Hello my name is" inputs = tokenizer(question, return_tensors="pt").to(0) output_int8 = model.generate((inputs["input_ids"]), generation_config=generation_config) print(tokenizer.decode(output_int8[0], skip_special_tokens=True)) ``` and the model directly predicts EOS token. The fix is to replace `model = AutoModelForCausalLM.from_pretrained(model_id, load_in_8bit=True, device_map={"":0}).cuda()` by `model = AutoModelForCausalLM.from_pretrained(model_id, load_in_8bit=True, device_map={"":0})`. Could you confirm this fixes your issue? 2- RWKV + 4bit seems to be not supported for now. I will dig into that and let you know as soon as I have a fix<|||||>I just added the 4bit inference support for RWKV in #23910 - please try out the fixes stated above together with #23910 and let us know how it goes<|||||>@younesbelkada Okay so 8bit is working fine now, thank you very much for the workaround! 4bit loaded in with this configuration: ``` bnb_config = BitsAndBytesConfig( load_in_4bit=True, bnb_4bit_use_double_quant=True, bnb_4bit_quant_type="nf4", bnb_4bit_compute_dtype=torch.bfloat16, ) model = AutoModelForCausalLM.from_pretrained( "RWKV/rwkv-raven-14b", return_dict=True, torch_dtype=torch.float16, quantization_config=bnb_config, context_length=1024, # rescale_every=0, device_map={"":0} ) ``` Is still failing unfortunately, :( ``` Traceback (most recent call last): File "/home/crow/SoftwareProjects/rwkv-raven-lora-instruct/generate.py", line 182, in <module> generated_sequence = model.generate( File "/home/crow/venvs/experimental/lib/python3.10/site-packages/torch/utils/_contextlib.py", line 115, in decorate_context return func(*args, **kwargs) File "/home/crow/SoftwareProjects/transformers/src/transformers/generation/utils.py", line 1518, in generate return self.greedy_search( File "/home/crow/SoftwareProjects/transformers/src/transformers/generation/utils.py", line 2335, in greedy_search outputs = self( File "/home/crow/venvs/experimental/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl return forward_call(*args, **kwargs) File "/home/crow/venvs/experimental/lib/python3.10/site-packages/accelerate/hooks.py", line 165, in new_forward output = old_forward(*args, **kwargs) File "/home/crow/SoftwareProjects/transformers/src/transformers/models/rwkv/modeling_rwkv.py", line 789, in forward rwkv_outputs = self.rwkv( File "/home/crow/venvs/experimental/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl return forward_call(*args, **kwargs) File "/home/crow/venvs/experimental/lib/python3.10/site-packages/accelerate/hooks.py", line 165, in new_forward output = old_forward(*args, **kwargs) File 
"/home/crow/SoftwareProjects/transformers/src/transformers/models/rwkv/modeling_rwkv.py", line 642, in forward self._rescale_layers() File "/home/crow/SoftwareProjects/transformers/src/transformers/models/rwkv/modeling_rwkv.py", line 714, in _rescale_layers block.attention.output.weight.quant_state[0].div_( RuntimeError: result type Float can't be cast to the desired output type Byte ```<|||||>I see, this is because you are using nested quantization `bnb_4bit_use_double_quant=True`. Can you try without that while I find a fix for this specific usecase? 🙏 <|||||>Yes sorry about that, I had always intended this to be with double quant, that was in my original repro code, but I should have been more explicit when communicating it to you 👍 I tried it without double quantization and it does work. <|||||>No problem and thanks for double checking, will get back once I fix the issue with nested quantization!<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.
transformers
23,846
open
Add LaVIN model
### Model description LaVIN is a vision-language instructed model that is affordable to train (it was trained in a few hours on 8 A100 GPUs) with good performance on ScienceQA. I'd like to add LaVIN to HF transformers. ### Open source status - [X] The model implementation is available - [X] The model weights are available ### Provide useful links for the implementation The paper [Cheap and Quick: Efficient Vision-Language Instruction Tuning for Large Language Models](https://arxiv.org/pdf/2305.15023.pdf) is by [Gen Luo](https://luogen1996.github.io/), [Yiyi Zhou](https://github.com/huggingface/transformers/issues/new?assignees=&labels=New+model&projects=&template=new-model-addition.yml), [Tianhe Ren](https://rentainhe.github.io/), [Shengxin Chen](https://github.com/huggingface/transformers/issues/new?assignees=&labels=New+model&projects=&template=new-model-addition.yml), [Xiaoshuai Sun](https://sites.google.com/view/xssun), and [Rongrong Ji](https://mac.xmu.edu.cn/rrji/) @luogen1996 has made the code and model weights available at [github.com/luogen1996/LaVIN](https://github.com/luogen1996/LaVIN). The weights for the following models are available at the following links: ### ScienceQA | Model | Weights | Time | Memory | #Params | Acc | Weights | |-----------|----------:|----------:|-------:|--------:|-----:|-----------------:| | LaVIN-7B | LLaMA | 1.4 hours | 33.9G | 3.8M | 89.37 | [google drive](https://drive.google.com/file/d/10X2qCBYrLH1grZOHwHRMXLUoz-S6MSgV/view?usp=share_link) | | LaVIN-7B | Vicuna | 1.4 hours | 33.9G | 3.8M | 89.41 | [google drive](https://drive.google.com/file/d/1nuMxeiWlnJKxDybCshg8pVGSvLc5dZy8/view?usp=share_link) | | LaVIN-13B | LLaMA | 2 hours | 55.9G | 5.4M | 90.54 | [google drive](https://drive.google.com/file/d/1LkKUY54spZkkeXrR7BDmU-xmK9YadcKM/view?usp=share_link) | ### Multimodal ChatBot | Model |Weights | Time | Memory | #Params | Acc | Weights | |-----------|----------:|---------:|-------:|--------:|----:|-----------------:| | LaVIN-13B | LLaMA | 75 hours | 55.9G | 5.4M | - | [google drive](https://drive.google.com/file/d/1rHQNSaiGzFHYGgsamtySPYnd5AW4OE9j/view?usp=share_link)|
05-29-2023 19:01:46
05-29-2023 19:01:46
Hi @amyeroberts, I don't think anyone is working on this anymore. If this adds any value to hf I'll start working on it.
transformers
23,845
open
forced_decoder_ids in Whisper models significantly impacts performance, use decoder_input_ids instead
### Feature request @ArthurZucker probably one for you based on commit logs. Using `forced_decoder_ids` to provide "prompt" and or "prefix" to the whisper model is very inefficient as a forward pass and sampling is done for each token in the `forced_decoder_ids` but the result is already known. Instead the model parameter `decoder_input_ids` could be used which only uses one forward pass to initialise the kv cache with all the input tokens and immediately is sampling useful next tokens. Openai's whisper limits prompt to half the context length (448 // 2 - 1 = 223) , so if you want to use transformers whisper to behave like openai's whisper and you expect 20 words + EOS in your input feature then forward pass counts are: - transformers: 244 - openai-whisper: 21 I'm raising this as a feature request rather than a bug or PR as I think `forced_decoder_ids` is already pretty well embedded in the code and the community so I assume it can't just be ripped out and a discussion is probably required before a PR. Here's some code that demonstrates the issue in IPython: ```python from transformers import ( WhisperForConditionalGeneration, WhisperTokenizerFast, WhisperFeatureExtractor, ) from datasets import load_dataset import torch feature_extractor = WhisperFeatureExtractor() tokenizer = WhisperTokenizerFast.from_pretrained("openai/whisper-tiny.en", language="english") # Patch WhisperForConditionalGeneration._prepare_decoder_input_ids_for_generation because the one on GenerationMixin doesn't handle whisper properly. def prepare_decoder_input_ids_for_generation_patch(self, batch_size, model_input_name, model_kwargs, decoder_start_token_id, bos_token_id, device): if 'decoder_input_ids' not in model_kwargs: return torch.ones((batch_size, 1), dtype=torch.long, device=device) * decoder_start_token_id, model_kwargs else: return model_kwargs.pop('decoder_input_ids'), model_kwargs WhisperForConditionalGeneration._prepare_decoder_input_ids_for_generation = prepare_decoder_input_ids_for_generation_patch model = WhisperForConditionalGeneration.from_pretrained("openai/whisper-tiny.en") audio = load_dataset("hf-internal-testing/librispeech_asr_dummy", "clean", split="validation")[3]["audio"]["array"] input_features = feature_extractor(audio, sampling_rate=16000, return_tensors="pt").input_features # A custom logits processor to show how many times the forward pass and sample are run def logits_processor_count_factory(): count = 0 def logits_processor_count(input_ids, scores): nonlocal count count += 1 print(count) return scores return logits_processor_count PREV_TOKEN = 50360 # <|startofprev|> prompt_tokens = [PREV_TOKEN, 1770, 13, 2264, 346, 353, 318, 262, 46329, 286, 262, 3504, 6097, 11, 290, 356, 389, 9675, 284, 7062, 465, 21443, 13, 5414, 318, 1770, 13, 2264, 346, 353, 338, 5642, 1342, 3499, 621, 465, 2300, 13, 679, 4952, 514, 326, 379, 428, 43856, 1622, 286, 262, 614, 11, 351, 6786, 290, 32595, 12023, 28236, 878, 514, 11, 985, 2915, 7428, 422, 6600, 290, 663, 2482, 3051, 749, 14704, 284, 262, 2000, 13] # note prompt_ids is prefixed to forced_decoder_ids inside generate # counts to 106 forced_decoder_ids_output = model.generate(input_features=input_features, return_timestamps=False, prompt_ids=torch.LongTensor(prompt_tokens), logits_processor=[logits_processor_count_factory()])[0] print(tokenizer.decode(forced_decoder_ids_output, decode_with_timestamps=False)) SOT_TOKEN = 50257 # <|startoftranscript|> NO_TIMESTAMPS_TOKEN = 50362 # <|notimestamps|> decoder_input_ids = torch.LongTensor([prompt_tokens + 
[SOT_TOKEN, NO_TIMESTAMPS_TOKEN]]) # counts to 31 decoder_input_ids_output = model.generate(input_features=input_features, return_timestamps=False, forced_decoder_ids=None, begin_suppress_tokens=None, decoder_input_ids=decoder_input_ids, logits_processor=[logits_processor_count_factory()])[0] print(tokenizer.decode(decoder_input_ids_output, decode_with_timestamps=False)) ``` You can get performance for bothing in IPython doing: ```python %timeit model.generate(input_features=input_features, return_timestamps=False, prompt_ids=torch.LongTensor(prompt_tokens))[0] %timeit model.generate(input_features=input_features, return_timestamps=False, forced_decoder_ids=None, begin_suppress_tokens=None, decoder_input_ids=decoder_input_ids)[0] ``` On CPU for me using decoder_input_ids is 2x faster with this input. ### Motivation I want to be able to use the transformers implementation of whisper in a production system where cost and processing time will be critical, due to the way we are using whisper this issue impact performance a lot more than the 2x I quoted above, its more like 5x in our use case. Obviously we can code around it but if it's possible to change transformers and avoid custom code I'd prefer that. ### Your contribution I'd be able to create a PR but without knowing more about how the maintainers would like to handle backward compatibility etc I don't think its the right place to start. I'd be very happy to be involved in a discussion, offer opinions or testing etc.
05-29-2023 16:26:34
05-29-2023 16:26:34
Hey! Thanks for taking the time to open this PR. Totally get the speedup and the latency induced by the use of `foced_decoder_ids` rather than `decoder_input_ids`. The addition of the `prompt_ids` was mostly handled by @hollance, which will be able to have a better look at this. I don't think that there was a release yet, which means this can still be changeable (if its not impossible to update) <|||||>IIRC we decided for the time being to keep using `forced_decoder_ids` for the prompts, even though it's slower indeed. Would be nice to improve this.<|||||>What might a path to improvement look like? A PR to make sure passing in a custom decoder_input_ids works correctly might be a good start? Happy to do that. I know it doesn't work for PT as the <|startoftranscript|> token can get added by GenerationMixin in the wrong place, I haven't tried TF or flax.<|||||>I don't understand this part of the generation process well enough yet to say anything useful about it. You'd think that we could start generation by passing in the entire `forced_decoder_ids` as the `decoder_input_ids` as the first step, rather than doing it one token at a time. The `ForceTokensLogitsProcessor` also plays a part in this. @Narsil can probably enlighten us 😄 <|||||>@hollance Yes we could absolutely convert `forced_decoder_ids` to `decoder_input_ids` in `.generate(...)`, and I think we can do it in a way that doesn't break anyones code. I can put a draft PR together for the PT code probably sometime tomorrow. <|||||>Hi, not sure if I can enlighten. In general, I'm not sure why `forced_decoder_ids` is useful for, since if you know what ids you should get, there's no need to do inference. If it was added, the general caution is that it must have been useful for some reason at some point, but in this specific use case I don't really understand.<|||||>@Narsil For Whisper, we want to start generation not with a single "BOS" token (here, `<|startoftranscript|>`) but with several tokens. In the case of prompting, this could be a fairly long sequence of tokens. For example `<|startofprev|> here is the prompt <|startoftranscript|><|en|><|notimestamps|>`. The prompt text is used to prime the model with more context. Right now, we use `forced_decoder_ids` to feed in this sequence of "starting tokens", which means they get processed one-by-one in the generation loop. It's more efficient to allow the first step of generation to process this entire sequence at once. <|||||>Yes, I know. I don't *think* it's necessary but I just usually give the benefit of the doubt when something was coded intentionally.<|||||>Hello every one, what if we simply specify `decoder_input_ids` as an argument to generate call? ``` generated_ids = self.model.generate( inputs=input_features, decoder_input_ids=torch.tensor( [decoder_ids], dtype=torch.long ), ).cpu() ``` As I understood it will be used [here](https://github.com/huggingface/transformers/blob/1689aea73346816b936b84932e12b774974e61a6/src/transformers/generation/utils.py#L661)
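To make the efficiency argument above concrete, here is a small illustrative sketch (not a drop-in replacement for `generate`): a whole starting prefix can be consumed by the decoder in a single forward pass, which fills the KV cache, instead of being forced out one token at a time via `forced_decoder_ids`. The token ids are the ones quoted earlier in this thread for `whisper-tiny.en`; the audio is just a placeholder.

```python
import torch
from transformers import WhisperFeatureExtractor, WhisperForConditionalGeneration

model = WhisperForConditionalGeneration.from_pretrained("openai/whisper-tiny.en")
feature_extractor = WhisperFeatureExtractor()

audio = torch.zeros(16000).numpy()  # placeholder: 1 s of silence at 16 kHz
input_features = feature_extractor(audio, sampling_rate=16000, return_tensors="pt").input_features

# <|startofprev|> + a few prompt tokens + <|startoftranscript|> + <|notimestamps|>
prefix = torch.tensor([[50360, 1770, 13, 2264, 50257, 50362]])

with torch.no_grad():
    out = model(input_features=input_features, decoder_input_ids=prefix, use_cache=True)

# One forward pass has now populated the cache for every prefix token;
# out.logits[:, -1] is the distribution over the first genuinely new token.
print(len(out.past_key_values), out.logits.shape)
```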
transformers
23,844
closed
🌐 [i18n-KO] Translated `tasks_explained.mdx` to Korean
<!-- PR의 제목은 "🌐 [i18n-KO] Translated `<your_file>.mdx` to Korean" 으로 부탁드립니다 --> # What does this PR do? Translated the `tasks_explained.mdx` file of the documentation to Korean 😄 ~~*Reference documents I added: `generation_strategies.mdx`, `task_summary.mdx`~~ Thank you in advance for your review! Part of https://github.com/huggingface/transformers/issues/20179 <!-- 메인 이슈에 기록이 남아요! 가짜연구소 리포를 사용해 연습하실때는 제거해주시면 감사하겠습니다! :smile: --> ## Before reviewing - [x] Check for missing / redundant translations (번역 누락/중복 검사) - [x] Grammar Check (맞춤법 검사) - [x] Review or Add new terms to glossary (용어 확인 및 추가) - [x] Check Inline TOC (e.g. `[[lowercased-header]]`) - [x] Check live-preview for gotchas (live-preview로 정상작동 확인) ## Who can review? (Initial) <!-- 1. 위 체크가 모두 완료된 뒤에만 가짜연구소 팀원들에게 리뷰 요청하는 아래 주석을 노출해주세요! --> <!-- Team PseudoLab, may you please review this PR? --> @0525hhgus, @KIHOON71, @sim-so, @gabrielwithappy, @HanNayeoniee, @wonhyeongseo, @jungnerd ## Before submitting - [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [x] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? (Final) <!-- 2. 가짜연구소 팀원들과 리뷰가 끝난 후에만 허깅페이스 직원들에게 리뷰 요청하는 아래 주석을 노출해주세요! --> May you please review this PR? @sgugger, @ArthurZucker, @eunseojo
05-29-2023 14:31:04
05-29-2023 14:31:04
_The documentation is not available anymore as the PR was closed or merged._<|||||>I couldn't find anything in particular that needed fixing. Great work!<|||||>> It was a really long and substantial document! Great work. I'm suggesting a few revisions below 😄 Thank you for the thorough review! I've applied your suggestions and committed the changes 👍 <|||||>May you please review this PR? 😄 @sgugger, @ArthurZucker, @eunseojo
transformers
23,843
closed
Error in Falcon-40B 8bit-quantized when calling generate
### System Info - `transformers` version: 4.30.0.dev0 - Platform: Linux-5.15.0-72-generic-x86_64-with-glibc2.35 - Python version: 3.9.16 - Huggingface_hub version: 0.14.1 - Safetensors version: 0.3.1 - PyTorch version (GPU?): 2.0.1 (True) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: Yes - Using distributed or parallel set-up in script?: No ### Who can help? @ArthurZucker @younes ### Information - [ ] The official example scripts - [X] My own modified scripts ### Tasks - [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction Steps to reproduce the behavior: 1. Import modules and load the model: ```python from transformers import AutoModelForCausalLM, AutoConfig, AutoTokenizer model_path="tiiuae/falcon-40b" config = AutoConfig.from_pretrained(model_path, trust_remote_code=True) model = AutoModelForCausalLM.from_pretrained( model_path, config=config, trust_remote_code=True, load_in_8bit=True, device_map="auto") model.eval() model.config.eos_token_id = 0 model.config.forced_eos_token_id = 0 model.config.pad_token_id = 0 ``` 2. Tokenize a text: ```python text = "Hola qué tal estás Íñigo? ¿Qué vas a hacer hoy?" inpts = tokenizer(text, return_tensors="pt").to("cuda") ``` 3. Try to generate text: ```python out = model.generate(**{k: v for k, v in inpts.items() if "token_type" not in k}) ``` You will receive the following error: ``` --------------------------------------------------------------------------- RuntimeError Traceback (most recent call last) Cell In[13], line 1 ----> 1 out = model.generate(**{k: v for k, v in inpts.items() if "token_type" not in k}) File ~/miniconda3/envs/int4/lib/python3.9/site-packages/torch/utils/_contextlib.py:115, in context_decorator.<locals>.decorate_context(*args, **kwargs) 112 @functools.wraps(func) 113 def decorate_context(*args, **kwargs): 114 with ctx_factory(): --> 115 return func(*args, **kwargs) File ~/miniconda3/envs/int4/lib/python3.9/site-packages/transformers/generation/utils.py:1518, in GenerationMixin.generate(self, inputs, generation_config, logits_processor, stopping_criteria, prefix_allowed_tokens_fn, synced_gpus, assistant_model, streamer, **kwargs) 1512 raise ValueError( 1513 "num_return_sequences has to be 1 when doing greedy search, " 1514 f"but is {generation_config.num_return_sequences}." 1515 ) 1517 # 11. run greedy search -> 1518 return self.greedy_search( 1519 input_ids, 1520 logits_processor=logits_processor, 1521 stopping_criteria=stopping_criteria, 1522 pad_token_id=generation_config.pad_token_id, 1523 eos_token_id=generation_config.eos_token_id, 1524 output_scores=generation_config.output_scores, 1525 return_dict_in_generate=generation_config.return_dict_in_generate, ... 291 ) 293 x = attn_output.view(batch_size, self.num_heads, q_length, self.head_dim) 294 x = x.permute(0, 2, 1, 3) RuntimeError: Expected query, key, and value to have the same dtype, but got query.dtype: float key.dtype: float and value.dtype: c10::Half instead. ``` ### Expected behavior It is expected that the falcon-40b model is able to generate also with int8, otherwise we cannot perform inference even on a 80GB A-100. Also, other models have no problem with inference in 8bit.
05-29-2023 14:22:42
05-29-2023 14:22:42
Hey! Thanks for reporting this. I would suggest you to open the issue on the model's repository, as the code you are using is not entirely on transformers. Cache might not be properly handled<|||||>Hi @avacaondata , I was able to successfully run your code on my setup (2 TITAN RTX 24GB) with the model in 8-bit and in 4-bit. Let me know if you are still have the error. Also make sure that you have the lastest version of bitsandbytes and accelerate. Thanks for the report =) <|||||>Yes I have tried with the last version of bitsandbytes and transformers and it works now, the issue is solved. Thank you very much :) @SunMarc
transformers
23,842
closed
TF SAM shape flexibility fixes
This PR makes some small changes to use dynamic instead of static shapes for SAM, which fixes issues when compiling and fine-tuning. cc @sayakpaul, fixes #23826
05-29-2023 14:16:25
05-29-2023 14:16:25
_The documentation is not available anymore as the PR was closed or merged._
transformers
23,840
closed
[i18n-<languageCode>] Translating docs to <languageName>
<!-- Note: Please search to see if an issue already exists for the language you are trying to translate. --> Hi! Let's bring the documentation to all the <languageName>-speaking community 🌐 (currently 0 out of 267 complete) Who would want to translate? Please follow the 🤗 [TRANSLATING guide](https://github.com/huggingface/transformers/blob/main/docs/TRANSLATING.md). Here is a list of the files ready for translation. Let us know in this issue if you'd like to translate any, and we'll add your name to the list. Some notes: * Please translate using an informal tone (imagine you are talking with a friend about transformers 🤗). * Please translate in a gender-neutral way. * Add your translations to the folder called `<languageCode>` inside the [source folder](https://github.com/huggingface/transformers/tree/main/docs/source). * Register your translation in `<languageCode>/_toctree.yml`; please follow the order of the [English version](https://github.com/huggingface/transformers/blob/main/docs/source/en/_toctree.yml). * Once you're finished, open a pull request and tag this issue by including #issue-number in the description, where issue-number is the number of this issue. Please ping @ArthurZucker, @sgugger for review. * 🙋 If you'd like others to help you with the translation, you can also post in the 🤗 [forums](https://discuss.huggingface.co/). ## Get Started section - [ ] [index.mdx](https://github.com/huggingface/transformers/blob/main/docs/source/en/index.mdx) https://github.com/huggingface/transformers/pull/20180 - [ ] [quicktour.mdx](https://github.com/huggingface/transformers/blob/main/docs/source/en/quicktour.mdx) (waiting for initial PR to go through) - [ ] [installation.mdx](https://github.com/huggingface/transformers/blob/main/docs/source/en/installation.mdx). ## Tutorial section - [ ] [pipeline_tutorial.mdx](https://github.com/huggingface/transformers/blob/main/docs/source/en/pipeline_tutorial.mdx) - [ ] [autoclass_tutorial.mdx](https://github.com/huggingface/transformers/blob/master/docs/source/autoclass_tutorial.mdx) - [ ] [preprocessing.mdx](https://github.com/huggingface/transformers/blob/main/docs/source/en/preprocessing.mdx) - [ ] [training.mdx](https://github.com/huggingface/transformers/blob/main/docs/source/en/training.mdx) - [ ] [accelerate.mdx](https://github.com/huggingface/transformers/blob/main/docs/source/en/accelerate.mdx) - [ ] [model_sharing.mdx](https://github.com/huggingface/transformers/blob/main/docs/source/en/model_sharing.mdx) - [ ] [multilingual.mdx](https://github.com/huggingface/transformers/blob/main/docs/source/en/multilingual.mdx) <!-- Keep on adding more as you go 🔥 -->
05-29-2023 13:35:53
05-29-2023 13:35:53
transformers
23,839
open
4bit Blip2 compatibility
### System Info I am getting an error after loading Blip2 in 4bit, cant inference, cant train. Can anyone help? ### Who can help? _No response_ ### Information - [ ] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction `import torch from transformers import Blip2ForConditionalGeneration, AutoProcessor, Blip2Processor, AutoModelForCausalLM, BitsAndBytesConfig from peft import prepare_model_for_kbit_training #processor = AutoProcessor.from_pretrained("Salesforce/blip2-opt-6.7b-coco") #model = Blip2ForConditionalGeneration.from_pretrained("Salesforce/blip2-opt-6.7b-coco", device_map='auto', load_in_8bit=True) nf4_config = BitsAndBytesConfig( load_in_4bit=True, bnb_4bit_quant_type="nf4", bnb_4bit_use_double_quant=True, bnb_4bit_compute_dtype=torch.bfloat16 ) processor = AutoProcessor.from_pretrained("Salesforce/blip2-opt-6.7b-coco") model = Blip2ForConditionalGeneration.from_pretrained("Salesforce/blip2-opt-6.7b-coco", device_map='auto', quantization_config=nf4_config)` Then when I want to train with PEFT or just do a single image captioning with the loaded model I get: `FP4 quantization state not initialized. Please call .cuda() or .to(device) on the LinearFP4 layer first. --------------------------------------------------------------------------- AttributeError Traceback (most recent call last) Cell In[10], line 6 3 pixel_values = inputs.pixel_values 5 #generated_ids = model.generate(pixel_values=pixel_values, min_length=50, max_new_tokens=50, length_penalty=1.4, top_k=150, top_p=0.95, repetition_penalty=2.1, num_beams=5, temperature=0.75) ----> 6 generated_ids = model.generate(pixel_values=pixel_values, max_length=50) 7 generated_caption = processor.batch_decode(generated_ids, skip_special_tokens=True)[0] 8 print(generated_caption) File H:\CONDA\envs\blip\lib\site-packages\torch\utils\_contextlib.py:115, in context_decorator.<locals>.decorate_context(*args, **kwargs) 112 @functools.wraps(func) 113 def decorate_context(*args, **kwargs): 114 with ctx_factory(): --> 115 return func(*args, **kwargs) File H:\CONDA\envs\blip\lib\site-packages\transformers\models\blip_2\modeling_blip_2.py:1854, in Blip2ForConditionalGeneration.generate(self, pixel_values, input_ids, attention_mask, **generate_kwargs) 1851 inputs_embeds = self.get_input_embeddings()(input_ids) 1852 inputs_embeds = torch.cat([language_model_inputs, inputs_embeds.to(language_model_inputs.device)], dim=1) -> 1854 outputs = self.language_model.generate( 1855 inputs_embeds=inputs_embeds, 1856 attention_mask=attention_mask, 1857 **generate_kwargs, 1858 ) 1860 return outputs File H:\CONDA\envs\blip\lib\site-packages\torch\utils\_contextlib.py:115, in context_decorator.<locals>.decorate_context(*args, **kwargs) 112 @functools.wraps(func) 113 def decorate_context(*args, **kwargs): 114 with ctx_factory(): --> 115 return func(*args, **kwargs) File H:\CONDA\envs\blip\lib\site-packages\transformers\generation\utils.py:1518, in GenerationMixin.generate(self, inputs, generation_config, logits_processor, stopping_criteria, prefix_allowed_tokens_fn, synced_gpus, assistant_model, streamer, **kwargs) 1512 raise ValueError( 1513 "num_return_sequences has to be 1 when doing greedy search, " 1514 f"but is {generation_config.num_return_sequences}." 1515 ) 1517 # 11. 
run greedy search -> 1518 return self.greedy_search( 1519 input_ids, 1520 logits_processor=logits_processor, 1521 stopping_criteria=stopping_criteria, 1522 pad_token_id=generation_config.pad_token_id, 1523 eos_token_id=generation_config.eos_token_id, 1524 output_scores=generation_config.output_scores, 1525 return_dict_in_generate=generation_config.return_dict_in_generate, 1526 synced_gpus=synced_gpus, 1527 streamer=streamer, 1528 **model_kwargs, 1529 ) 1531 elif is_contrastive_search_gen_mode: 1532 if generation_config.num_return_sequences > 1: File H:\CONDA\envs\blip\lib\site-packages\transformers\generation\utils.py:2335, in GenerationMixin.greedy_search(self, input_ids, logits_processor, stopping_criteria, max_length, pad_token_id, eos_token_id, output_attentions, output_hidden_states, output_scores, return_dict_in_generate, synced_gpus, streamer, **model_kwargs) 2332 model_inputs = self.prepare_inputs_for_generation(input_ids, **model_kwargs) 2334 # forward pass to get next token -> 2335 outputs = self( 2336 **model_inputs, 2337 return_dict=True, 2338 output_attentions=output_attentions, 2339 output_hidden_states=output_hidden_states, 2340 ) 2342 if synced_gpus and this_peer_finished: 2343 continue # don't waste resources running the code we don't need File H:\CONDA\envs\blip\lib\site-packages\torch\nn\modules\module.py:1501, in Module._call_impl(self, *args, **kwargs) 1496 # If we don't have any hooks, we want to skip the rest of the logic in 1497 # this function, and just call forward. 1498 if not (self._backward_hooks or self._backward_pre_hooks or self._forward_hooks or self._forward_pre_hooks 1499 or _global_backward_pre_hooks or _global_backward_hooks 1500 or _global_forward_hooks or _global_forward_pre_hooks): -> 1501 return forward_call(*args, **kwargs) 1502 # Do not call functions when jit is used 1503 full_backward_hooks, non_full_backward_hooks = [], [] File H:\CONDA\envs\blip\lib\site-packages\accelerate\hooks.py:165, in add_hook_to_module.<locals>.new_forward(*args, **kwargs) 163 output = old_forward(*args, **kwargs) 164 else: --> 165 output = old_forward(*args, **kwargs) 166 return module._hf_hook.post_forward(module, output) File H:\CONDA\envs\blip\lib\site-packages\transformers\models\opt\modeling_opt.py:957, in OPTForCausalLM.forward(self, input_ids, attention_mask, head_mask, past_key_values, inputs_embeds, labels, use_cache, output_attentions, output_hidden_states, return_dict) 944 # decoder outputs consists of (dec_features, layer_state, dec_hidden, dec_attn) 945 outputs = self.model.decoder( 946 input_ids=input_ids, 947 attention_mask=attention_mask, (...) 954 return_dict=return_dict, 955 ) --> 957 logits = self.lm_head(outputs[0]).contiguous() 959 loss = None 960 if labels is not None: 961 # move labels to correct device to enable model parallelism File H:\CONDA\envs\blip\lib\site-packages\torch\nn\modules\module.py:1501, in Module._call_impl(self, *args, **kwargs) 1496 # If we don't have any hooks, we want to skip the rest of the logic in 1497 # this function, and just call forward. 
1498 if not (self._backward_hooks or self._backward_pre_hooks or self._forward_hooks or self._forward_pre_hooks 1499 or _global_backward_pre_hooks or _global_backward_hooks 1500 or _global_forward_hooks or _global_forward_pre_hooks): -> 1501 return forward_call(*args, **kwargs) 1502 # Do not call functions when jit is used 1503 full_backward_hooks, non_full_backward_hooks = [], [] File H:\CONDA\envs\blip\lib\site-packages\accelerate\hooks.py:165, in add_hook_to_module.<locals>.new_forward(*args, **kwargs) 163 output = old_forward(*args, **kwargs) 164 else: --> 165 output = old_forward(*args, **kwargs) 166 return module._hf_hook.post_forward(module, output) File H:\CONDA\envs\blip\lib\site-packages\bitsandbytes\nn\modules.py:219, in Linear4bit.forward(self, x) 216 x = x.to(self.compute_dtype) 218 bias = None if self.bias is None else self.bias.to(self.compute_dtype) --> 219 out = bnb.matmul_4bit(x, self.weight.t(), bias=bias, quant_state=self.weight.quant_state) 221 out = out.to(inp_dtype) 223 return out AttributeError: 'Parameter' object has no attribute 'quant_state'` ### Expected behavior 8 bit works fine
05-29-2023 08:03:27
05-29-2023 08:03:27
cc @younesbelkada <|||||>Hi @betterftr Thanks for the issue, indeed there seems to be a bug, that should be fixed in https://github.com/huggingface/transformers/pull/23895
transformers
23,838
closed
Add EMD loss
### Feature request Could we import this file for 1D EMD? It's like KL divergence but lets us better represent ordinal/numeric classes. https://github.com/TakaraResearch/Pytorch-1D-Wasserstein-Statistical-Loss/blob/master/pytorch_stats_loss.py is the best option I've seen online. ### Motivation I am currently using it locally for ordinal discrete density function approximation. ### Your contribution I'm not totally sure what's necessary to incorporate it into the currently available options throughout the codebase, but it shouldn't be hard to import it.
05-29-2023 05:52:31
05-29-2023 05:52:31
This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.
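For readers wanting to see the idea concretely, here is a minimal, self-contained sketch of a 1-D EMD-style loss (squared difference of CDFs over ordered classes). This is not the linked implementation, only an illustration of the kind of loss being requested.

```python
import torch

def emd_loss_1d(pred_probs: torch.Tensor, target_probs: torch.Tensor) -> torch.Tensor:
    """Squared 1-D earth mover's distance between discrete distributions over
    ordered classes (last dim). Both inputs are expected to sum to 1 along -1."""
    cdf_pred = torch.cumsum(pred_probs, dim=-1)
    cdf_target = torch.cumsum(target_probs, dim=-1)
    return ((cdf_pred - cdf_target) ** 2).sum(dim=-1).mean()

# Example: batch of 2 over 5 ordinal classes. Mis-predicting a neighbouring class
# is penalised less than mis-predicting a distant one, unlike plain cross-entropy.
pred = torch.softmax(torch.randn(2, 5), dim=-1)
target = torch.tensor([[0.0, 0.0, 1.0, 0.0, 0.0],
                       [0.0, 1.0, 0.0, 0.0, 0.0]])
print(emd_loss_1d(pred, target))
```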
transformers
23,837
closed
Fix floating point precision issue for RoPE
## What does this PR do? This PR fixes the issue of floating point precision in `RotaryEmbedding`. The purpose of this PR is to fix inconsistency between GPT-Neo-X and HF Transformers, which is causing a model performance degradation. ## Issue In the current implementation of `RotaryEmbedding`, `inv_freq` is first initialized by float32. This value is then used for initializing `cos_cached` and `sin_cached` by float32. As a result, `cos_cached` and `sin_cached` remain float32 even if the model (including inv_freq) uses float16; this is because these two variables are not the target of dtype conversion of `half()` method Note that there is also a recomputation logic for these two variables, but it is very unlikely to occur https://github.com/huggingface/transformers/blob/f67dac97bdc63874f2288546b3fa87e69d2ea1c8/src/transformers/models/gpt_neox/modeling_gpt_neox.py#L268 However, this implementation seems inconsistent to the one in the [EleutherAI/gpt-neox](https://github.com/EleutherAI/gpt-neox) library. In their implementation, `cos_cached` and `sin_cached` are almost always recomputed in the [forward method](https://github.com/EleutherAI/gpt-neox/blob/23ad392fdfe0f0a22c986013013209a03b7c28a1/megatron/model/positional_embeddings.py#L51-L63). Thus, dtype of `cos_cached` and `sin_cached` are always consistent to the dtype of inv_freq. This inconsistency between two libraries (HF Transformers and GPT-Neo-X) causes the performance degradation of the model converted from gpt-neox. For example, the perplexity score of the language model on Wikitext corpus is as follows: - gpt-neo-x w/o conversion: 520.7840 - gpt-neo-x w/ conversion to HF format: 520.9911 - gpt-neo-x w/ conversion to HF format and this PR: 520.7840 (Sorry that the perplexity value is really bad. I am reporting the performance of model trained on toy data for debugging purpose) ## Solution I basically followed the previous PR https://github.com/huggingface/transformers/pull/22888 and made a similar fix. ## Possible Side Effect In the original code, `cos_cashed` and `sin_cashed` are initialized in the model consturctor. However, I had to move the initialization code to forward method. Otherwise the library gave me the following error: "cos_vml_cpu" not implemented for 'Half'. As a result, `torch.jit.trace` might be no longer available. Since I am not sure what jit.trace is, I don't have any workaround for this. ## Similar Issues - https://github.com/huggingface/transformers/pull/22888 - https://github.com/EleutherAI/gpt-neox/issues/873 ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? - I would really appreciate it if the reviewers could point out the missing tests. ## Who can review? @ArthurZucker and @younesbelkada
05-29-2023 04:29:30
05-29-2023 04:29:30
The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_23837). All of your documentation changes will be reflected on that endpoint.<|||||>Thank you for the message. While I appreciate that we have to keep the compatibility with existing models on the hub, my understanding is that all existing models converted from NeoX all have this precision issue. I would like to explore alternative solutions to address this issue rather than simply closing the pull request. Is there any other approach we can consider to fix the problem?<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.<|||||>Hey! If you want to fix the problem without having to close the PR you should be aiming for a full backward compatibility, add tests to make sure that you are fixing the issue in place, and that previous behaviour is not broken.
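For readers following along, here is a rough illustrative sketch of the behaviour being discussed — rebuilding the cos/sin cache inside `forward` whenever the requested length or the incoming dtype no longer matches the cache, so half-precision models do not silently keep float32 tables. This is not the PR's diff, just the general pattern under discussion.

```python
import torch
from torch import nn

class RotaryEmbeddingSketch(nn.Module):
    def __init__(self, dim: int, base: int = 10000):
        super().__init__()
        inv_freq = 1.0 / (base ** (torch.arange(0, dim, 2).float() / dim))
        self.register_buffer("inv_freq", inv_freq, persistent=False)
        self.cos_cached = None
        self.sin_cached = None
        self.cached_len = 0

    def forward(self, x: torch.Tensor, seq_len: int):
        rebuild = (
            self.cos_cached is None
            or seq_len > self.cached_len
            or self.cos_cached.dtype != x.dtype
            or self.cos_cached.device != x.device
        )
        if rebuild:
            # inv_freq follows the module's dtype (e.g. fp16 after .half()),
            # so the tables stay consistent with the model precision.
            t = torch.arange(seq_len, device=x.device, dtype=self.inv_freq.dtype)
            freqs = torch.einsum("i,j->ij", t, self.inv_freq)
            emb = torch.cat((freqs, freqs), dim=-1)
            self.cos_cached = emb.cos().to(x.dtype)
            self.sin_cached = emb.sin().to(x.dtype)
            self.cached_len = seq_len
        return self.cos_cached[:seq_len], self.sin_cached[:seq_len]
```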
transformers
23,836
closed
loading dataset
Is a dataset loaded into memory all at once or in batches? Why can a model with the same parameters be trained on a small dataset, while a large dataset fills up the memory?
05-29-2023 03:10:18
05-29-2023 03:10:18
Please use the [forums](https://discuss.huggingface.co/) for such questions as we keep GitHub issues for bugs and feature requests only.
transformers
23,835
closed
Sliding window for finetuning
### Feature request The sliding window feature is important since many models have a limited input size (512 tokens for BERT). As far as I know, the feature is available when using the pipelines for inference, but not when fine-tuning. ### Motivation I'm trying to fine-tune BERT models since they reached state of the art in many NLP tasks, especially NER, but most of my documents are far larger than 512 tokens and truncating them would ruin the context. ### Your contribution I've been trying to implement a sliding window manually by using the available features such as `stride` and `return_overflowing_tokens`.
05-28-2023 16:19:42
05-28-2023 16:19:42
Please use the [forums](https://discuss.huggingface.co/) to ask such questions. The feature is implemented via `stride` and `return_overflowing_tokens` in tokenizers as you note.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.
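As a pointer for anyone with the same question, here is a minimal sketch of the built-in sliding-window tokenization mentioned above (the `max_length`/`stride` values are only examples). Each returned window can then be fed to the model separately, and for NER the offset mapping helps align labels with the original text.

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-cased")

long_text = "word " * 2000  # stand-in for a document far longer than 512 tokens

encoded = tokenizer(
    long_text,
    max_length=512,
    truncation=True,
    stride=128,                      # overlap between consecutive windows
    return_overflowing_tokens=True,  # return every window, not just the first
    return_offsets_mapping=True,     # character offsets, handy for aligning NER labels
)

print(len(encoded["input_ids"]))              # number of overlapping windows
print(encoded["overflow_to_sample_mapping"])  # which original example each window came from
```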
transformers
23,834
closed
Parameter: encoder_no_repeat_ngram_size, or something that makes the model not repeat input tokens in the output.
### Feature request Could you please add more explanation in the docs about what `encoder_no_repeat_ngram_size` practically does? The docs say it makes sure that an n-gram of the specified length appearing in the encoder input ids does not repeat in the decoder input ids, but I have no idea how this parameter changes the decoder output ids. I use it with T5. ### Motivation When I set `encoder_no_repeat_ngram_size=4`, the output mostly does not repeat even 2- and 3-grams. ### Your contribution with torch.no_grad(): beam_outputs = model1a.generate( input_ids=input_ids, attention_mask=attention_masks, encoder_no_repeat_ngram_size=4, do_sample=False, num_return_sequences=4, num_beams=4, max_length=128 )
05-28-2023 15:58:58
05-28-2023 15:58:58
cc @gante <|||||>Hey @Oxi84 👋 Without a stand-alone short script to reproduce the issue (as well as the desired output), it is hard for me to help :) Nevertheless, I suspect the keyword argument you want to use is `no_repeat_ngram_size`, and not `encoder_no_repeat_ngram_size`<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.
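For reference, a small sketch contrasting the two arguments (values are illustrative only): `no_repeat_ngram_size` stops the *output* from repeating its own n-grams, while `encoder_no_repeat_ngram_size` stops the output from copying n-grams that appear in the *encoder input*.

```python
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("t5-small")
model = AutoModelForSeq2SeqLM.from_pretrained("t5-small")

inputs = tokenizer(
    "summarize: The quick brown fox jumps over the lazy dog near the river bank.",
    return_tensors="pt",
)

outputs = model.generate(
    **inputs,
    num_beams=4,
    no_repeat_ngram_size=2,          # output may not repeat any of its own 2-grams
    encoder_no_repeat_ngram_size=4,  # output may not copy any 4-gram from the input
    max_length=64,
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```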
transformers
23,833
closed
[llama] AutoTokenizer does not add `eos_token` at the end
### System Info Copy-and-paste the text below in your GitHub issue and FILL OUT the two last points. - `transformers` version: 4.29.2 - Platform: Linux-3.10.0-1160.42.2.el7.x86_64-x86_64-with-glibc2.35 - Python version: 3.9.16 - Huggingface_hub version: 0.14.1 - Safetensors version: not installed - PyTorch version (GPU?): 2.0.1+cu118 (True) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: No - Using distributed or parallel set-up in script?: No ### Who can help? @ArthurZucker ### Information - [ ] The official example scripts - [X] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [X] My own task or dataset (give details below) ### Reproduction code: ```python from transformers import AutoTokenizer, LlamaTokenizer auto_tokenizer = AutoTokenizer.from_pretrained("huggyllama/llama-7b", add_eos_token=True, use_fast=True) llama_tokenizer = LlamaTokenizer.from_pretrained("huggyllama/llama-7b", add_eos_token=True, use_fast=True) print(auto_tokenizer.decode(auto_tokenizer.encode("auto_tokenizer", add_special_tokens = True))) print(llama_tokenizer.decode(llama_tokenizer.encode("llama_tokenizer", add_special_tokens = True))) ``` results: ```shell <s> auto_tokenizer <s> llama_tokenizer</s> ``` ### Expected behavior add eos token like: ```shell <s> auto_tokenizer</s> <s> llama_tokenizer</s> ```
05-28-2023 12:40:14
05-28-2023 12:40:14
Hi, Note that it doesn't make sense to pass `use_fast` to the slow (Python-based) `LlamaTokenizer`. It only makes sense to pass use_fast to the `AutoTokenizer` class, which can either load the fast (Rust-based) `LlamaTokenizerFast` class or the slow (Python-based) `LlamaTokenizer`. In the code snippet above, `auto_tokenizer` will be an instance of `LlamaTokenizerFast` and `llama_tokenizer` will be an instance of `LlamaTokenizer`: ``` >>> type(auto_tokenizer) <class 'transformers.models.llama.tokenization_llama_fast.LlamaTokenizerFast'> >>> type(llama_tokenizer) <class 'transformers.models.llama.tokenization_llama.LlamaTokenizer'> ``` Pinging @ArthurZucker regarding the `eos_token` issue<|||||>> Hi, > > Note that it doesn't make sense to pass `use_fast` to the slow (Python-based) `LlamaTokenizer`. It only makes sense to pass use_fast to the `AutoTokenizer` class, which can either load the fast (Rust-based) `LlamaTokenizerFast` class or the slow (Python-based) `LlamaTokenizer`. > > In the code snippet above, `auto_tokenizer` will be an instance of `LlamaTokenizerFast` and `llama_tokenizer` will be an instance of `LlamaTokenizer`: > > ``` > >>> type(auto_tokenizer) > <class 'transformers.models.llama.tokenization_llama_fast.LlamaTokenizerFast'> > >>> type(llama_tokenizer) > <class 'transformers.models.llama.tokenization_llama.LlamaTokenizer'> > ``` > > Pinging @ArthurZucker regarding the `eos_token` issue Thank you so much for explaining this ~~~<|||||>Hey! Thanks for reporting. The quickest fix I can give you is to initialise the fast tokenizer from the slow one, using the correct arguments. ```python fast = LlamaTokenizerFast.from_pretrained("huggyllama/llama-7b", add_eos_token=True, from_slow=True) ``` This will produce the expected outputs: ```python >>> fast.encode("auto_tokenizer", add_special_tokens = True) [1, 4469, 29918, 6979, 3950, 2] ``` The reason behind this is that the `post_processor` is responsible of adding the `eos` and `bos` tokens. The processor is initialised when the slow tokenizer is converted to the fast version, and changing the argument on the fly will not result in a change of the processor. I'll open a PR to make sure that changing the eos and bos update the processor. Thanks for reporting.
transformers
23,832
closed
In ViTForMaskedImageModeling, the reconstructed_pixel_values has a different shape from the input when model.config.patch_size is not 16. This further triggers an error in the loss computation when patch_size is not 16 and bool_masked_pos is not None.
### System Info - `transformers` version: 4.29.2 - Platform: Linux-3.10.0-1160.90.1.el7.x86_64-x86_64-with-glibc2.35 - Python version: 3.9.16 - Huggingface_hub version: 0.14.1 - Safetensors version: not installed - PyTorch version (GPU?): 1.13.1+cu117 (True) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: <fill in> - Using distributed or parallel set-up in script?: <fill in> ### Who can help? _No response_ ### Information - [ ] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction Use ViTForMaskedImageModeling; Set model.config.patch_size except 16; Get error. ### Expected behavior In ViTForMaskedImageModeling, you will receive a reconstructed_pixel_values which shape is different with input when model.config.patch_size is not 16. This further triggers an error about loss when patch_size is not 16 and bool_masked_pos is not None.
05-28-2023 11:03:26
05-28-2023 11:03:26
cc @amyeroberts <|||||>Hi @yrqUni, thanks for reporting this issue. Digging into this, the error is arising because the decoder head for the model is parametreized by `config.encoder_stride`, which controls the size of the upscaled image. When we update the patch size, in order to calculate the loss, the encoder stride needs to be updated to ensure the reconstructed image has the same resolution as the input. I've opened a PR to raise a warning if the loss calculation isn't possible with the configuration settings.
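To make the comment above concrete, here is a short sketch keeping `encoder_stride` consistent with the new `patch_size`, so the reconstruction comes back at the input resolution and the loss can be computed (values are only illustrative, and it assumes a recent transformers version where the output exposes `reconstruction`).

```python
import torch
from transformers import ViTConfig, ViTForMaskedImageModeling

# With patch_size=8, encoder_stride must also be 8 so that the decoder's
# pixel-shuffle upsamples the 28x28 patch features back to 224x224 pixels.
config = ViTConfig(image_size=224, patch_size=8, encoder_stride=8)
model = ViTForMaskedImageModeling(config)

pixel_values = torch.randn(1, 3, 224, 224)
num_patches = (config.image_size // config.patch_size) ** 2
bool_masked_pos = torch.randint(0, 2, (1, num_patches)).bool()

outputs = model(pixel_values, bool_masked_pos=bool_masked_pos)
print(outputs.reconstruction.shape)  # torch.Size([1, 3, 224, 224])
print(outputs.loss)
```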
transformers
23,831
closed
IndexError when training on a GLUE dataset using ELECTRA pretrained from scratch.
Hi guys, I have a problem in which i'm not sure how to solve. Short story is I pretrained ELECTRA from scratch, now I wanted to train and test with GLUE. I converted ELECTRA tf checkpoint to pytorch using `transformers/src/transformers/models/electra/convert_electra_original_tf_checkpoint_to_pytorch.py` then I run the GLUE test with this `transformers/examples/pytorch/text-classification/run_glue.py`. At 1st I ran with electra_small model, its working fine. However when I ran it with my model that I have pretrained from scratch it produced this error. python /Users/nlplabo/tensorflow-test/transformers/examples/pytorch/text-classification/run_glue.py\ --model_name_or_path "/Users/nlplabo/Desktop/electra_pos" \ --task_name $TASK_NAME \ --ignore_mismatched_sizes true \ --do_train \ --do_eval \ --max_seq_length 128 \ --per_gpu_train_batch_size 32 \ --learning_rate 2e-5 \ --num_train_epochs 3.0 \ --output_dir "/Users/nlplabo/Desktop/electra_pos/result/testing"\ 05/28/2023 18:03:09 - WARNING - __main__ - Process rank: 0, device: cpu, n_gpu: 0distributed training: True, 16-bits training: False Downloading and preparing dataset glue/cola to /Users/nlplabo/.cache/huggingface/datasets/glue/cola/1.0.0/dacbe3125aa31d7f70367a07a8a9e72a5a0bfeb5fc42e75c9db75b96da6053ad... Dataset glue downloaded and prepared to /Users/nlplabo/.cache/huggingface/datasets/glue/cola/1.0.0/dacbe3125aa31d7f70367a07a8a9e72a5a0bfeb5fc42e75c9db75b96da6053ad. Subsequent calls will reuse this data. 100%|███████████████████████████████████████████| 3/3 [00:00<00:00, 2262.71it/s] [WARNING|modeling_utils.py:3175] 2023-05-28 18:03:13,062 >> Some weights of the model checkpoint at /Users/nlplabo/Desktop/retrying/electra_pos were not used when initializing ElectraForSequenceClassification: ['discriminator_predictions.dense.weight', 'discriminator_predictions.dense_prediction.bias', 'discriminator_predictions.dense.bias', 'discriminator_predictions.dense_prediction.weight'] - This IS expected if you are initializing ElectraForSequenceClassification from the checkpoint of a model trained on another task or with another architecture (e.g. initializing a BertForSequenceClassification model from a BertForPreTraining model). - This IS NOT expected if you are initializing ElectraForSequenceClassification from the checkpoint of a model that you expect to be exactly identical (initializing a BertForSequenceClassification model from a BertForSequenceClassification model). [WARNING|modeling_utils.py:3187] 2023-05-28 18:03:13,062 >> Some weights of ElectraForSequenceClassification were not initialized from the model checkpoint at /Users/nlplabo/Desktop/retrying/electra_pos and are newly initialized: ['classifier.out_proj.bias', 'classifier.dense.weight', 'classifier.out_proj.weight', 'classifier.dense.bias'] You should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference. /Users/nlplabo/opt/miniconda3/envs/tftransformer/lib/python3.8/site-packages/transformers/optimization.py:407: FutureWarning: This implementation of AdamW is deprecated and will be removed in a future version. 
Use the PyTorch implementation torch.optim.AdamW instead, or set `no_deprecation_warning=True` to disable this warning warnings.warn( 0%| | 1/804 [00:01<16:47, 1.25s/it]Traceback (most recent call last): File "/Users/nlplabo/tensorflow-test/transformers/examples/pytorch/text-classification/run_glue.py", line 622, in <module> main() File "/Users/nlplabo/tensorflow-test/transformers/examples/pytorch/text-classification/run_glue.py", line 530, in main train_result = trainer.train(resume_from_checkpoint=checkpoint) File "/Users/nlplabo/opt/miniconda3/envs/tftransformer/lib/python3.8/site-packages/transformers/trainer.py", line 1664, in train return inner_training_loop( File "/Users/nlplabo/opt/miniconda3/envs/tftransformer/lib/python3.8/site-packages/transformers/trainer.py", line 1940, in _inner_training_loop tr_loss_step = self.training_step(model, inputs) File "/Users/nlplabo/opt/miniconda3/envs/tftransformer/lib/python3.8/site-packages/transformers/trainer.py", line 2735, in training_step loss = self.compute_loss(model, inputs) File "/Users/nlplabo/opt/miniconda3/envs/tftransformer/lib/python3.8/site-packages/transformers/trainer.py", line 2767, in compute_loss outputs = model(**inputs) File "/Users/nlplabo/opt/miniconda3/envs/tftransformer/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl return forward_call(*args, **kwargs) File "/Users/nlplabo/opt/miniconda3/envs/tftransformer/lib/python3.8/site-packages/transformers/models/electra/modeling_electra.py", line 1004, in forward discriminator_hidden_states = self.electra( File "/Users/nlplabo/opt/miniconda3/envs/tftransformer/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl return forward_call(*args, **kwargs) File "/Users/nlplabo/opt/miniconda3/envs/tftransformer/lib/python3.8/site-packages/transformers/models/electra/modeling_electra.py", line 908, in forward hidden_states = self.embeddings( File "/Users/nlplabo/opt/miniconda3/envs/tftransformer/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl return forward_call(*args, **kwargs) File "/Users/nlplabo/opt/miniconda3/envs/tftransformer/lib/python3.8/site-packages/transformers/models/electra/modeling_electra.py", line 201, in forward inputs_embeds = self.word_embeddings(input_ids) File "/Users/nlplabo/opt/miniconda3/envs/tftransformer/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl return forward_call(*args, **kwargs) File "/Users/nlplabo/opt/miniconda3/envs/tftransformer/lib/python3.8/site-packages/torch/nn/modules/sparse.py", line 162, in forward return F.embedding( File "/Users/nlplabo/opt/miniconda3/envs/tftransformer/lib/python3.8/site-packages/torch/nn/functional.py", line 2210, in embedding return torch.embedding(weight, input, padding_idx, scale_grad_by_freq, sparse) IndexError: index out of range in self 0%| | 1/804 [00:01<17:31, 1.31s/it] I tried to search for solutions out there, but I couldn't get it to work. Last choice is to pretrained again from scratch, but that would take too much time as my pc was not that strong. So I hope to make this model works. Thank you.
05-28-2023 09:36:50
05-28-2023 09:36:50
The problem seems to be that you are using token IDs that are not accepted by your model. Are you sure your tokenizer length and model embedding size match?<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.
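A minimal sketch of the check suggested above, assuming a hypothetical local checkpoint path: compare the tokenizer's vocabulary size with the model's input embedding matrix, and resize the embeddings if the tokenizer is larger, which avoids the `index out of range in self` error (newly added rows are randomly initialized).

```python
from transformers import AutoTokenizer, ElectraForSequenceClassification

checkpoint = "/path/to/electra_pos"  # hypothetical local checkpoint path
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = ElectraForSequenceClassification.from_pretrained(checkpoint, num_labels=2)

embedding_rows = model.get_input_embeddings().weight.shape[0]
print("tokenizer size:", len(tokenizer), "embedding rows:", embedding_rows)

# Any token id >= embedding_rows triggers "index out of range in self" inside nn.Embedding.
if len(tokenizer) > embedding_rows:
    model.resize_token_embeddings(len(tokenizer))
```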
transformers
23,827
closed
Add saving to CPU for the state dict for FSDP
(oops sorry didn't mean to PR onto main branch) # What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Fixes # (issue) ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - text models: @ArthurZucker and @younesbelkada - vision models: @amyeroberts - speech models: @sanchit-gandhi - graph models: @clefourrier Library: - flax: @sanchit-gandhi - generate: @gante - pipelines: @Narsil - tensorflow: @gante and @Rocketknight1 - tokenizers: @ArthurZucker - trainer: @sgugger Integrations: - deepspeed: HF Trainer: @stas00, Accelerate: @pacman100 - ray/raytune: @richardliaw, @amogkam Documentation: @sgugger, @stevhliu and @MKhalusova HF projects: - accelerate: [different repo](https://github.com/huggingface/accelerate) - datasets: [different repo](https://github.com/huggingface/datasets) - diffusers: [different repo](https://github.com/huggingface/diffusers) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Maintained examples (not research project or legacy): - Flax: @sanchit-gandhi - PyTorch: @sgugger - TensorFlow: @Rocketknight1 -->
05-28-2023 06:32:20
05-28-2023 06:32:20
_The documentation is not available anymore as the PR was closed or merged._<|||||>The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_23827). All of your documentation changes will be reflected on that endpoint.
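For context, a hedged sketch of what saving an FSDP state dict to CPU usually looks like with the standard PyTorch FSDP API; this is illustrative only and not necessarily the diff in this PR.

```python
import torch
import torch.distributed as dist
from torch.distributed.fsdp import FullyShardedDataParallel as FSDP
from torch.distributed.fsdp import FullStateDictConfig, StateDictType


def save_fsdp_full_state_dict(model: FSDP, path: str):
    # Gather the full (unsharded) state dict, offloaded to CPU and materialized only on
    # rank 0, so the whole model never has to fit on a single GPU just to be saved.
    cfg = FullStateDictConfig(offload_to_cpu=True, rank0_only=True)
    with FSDP.state_dict_type(model, StateDictType.FULL_STATE_DICT, cfg):
        state_dict = model.state_dict()
    if dist.get_rank() == 0:
        torch.save(state_dict, path)
```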
transformers
23,826
closed
[TensorFlow SAM] Internal operations in the prompt encoder fail during fine-tuning
@merveenoyan and I are trying to create a fine-tuning notebook for the TensorFlow variant of [SAM](https://huggingface.co/docs/transformers/main/model_doc/sam). After compiling the model, when trying to run the actual fine-tuning, it leads to: ``` TypeError: in user code: File "/usr/local/lib/python3.10/dist-packages/keras/engine/training.py", line 1284, in train_function * return step_function(self, iterator) File "/usr/local/lib/python3.10/dist-packages/keras/engine/training.py", line 1268, in step_function ** outputs = model.distribute_strategy.run(run_step, args=(data,)) File "/usr/local/lib/python3.10/dist-packages/keras/engine/training.py", line 1249, in run_step ** outputs = model.train_step(data) File "<ipython-input-10-3c1490d3fea1>", line 12, in train_step outputs = self.sam( File "/usr/local/lib/python3.10/dist-packages/keras/utils/traceback_utils.py", line 70, in error_handler raise e.with_traceback(filtered_tb) from None File "/tmp/__autograph_generated_filel5x0ne4l.py", line 37, in tf__run_call_with_unpacked_inputs retval_ = ag__.converted_call(ag__.ld(func), (ag__.ld(self),), dict(**ag__.ld(unpacked_inputs)), fscope) File "/tmp/__autograph_generated_filem4r4bspv.py", line 195, in tf__call (sparse_embeddings, dense_embeddings) = ag__.converted_call(ag__.ld(self).prompt_encoder, (), dict(batch_size=ag__.converted_call(ag__.ld(shape_list), (ag__.ld(image_embeddings),), None, fscope)[0], input_points=ag__.ld(input_points), input_labels=ag__.ld(input_labels), input_boxes=ag__.ld(input_boxes), input_masks=ag__.ld(input_masks)), fscope) File "/tmp/__autograph_generated_file4_u_db6c.py", line 90, in tf__call ag__.if_stmt(ag__.ld(input_boxes) is not None, if_body_3, else_body_3, get_state_3, set_state_3, ('batch_size', 'sparse_embeddings'), 2) File "/tmp/__autograph_generated_file4_u_db6c.py", line 68, in if_body_3 box_embeddings = ag__.converted_call(ag__.ld(self)._embed_boxes, (ag__.ld(input_boxes),), None, fscope) File "/tmp/__autograph_generated_filehsxk3fhx.py", line 14, in tf___embed_boxes coords = ag__.converted_call(ag__.ld(tf).reshape, (ag__.ld(boxes), (ag__.ld(batch_size), ag__.ld(nb_boxes), 2, 2)), None, fscope) TypeError: Exception encountered when calling layer 'tf_sam_model' (type TFSamModel). in user code: File "/usr/local/lib/python3.10/dist-packages/transformers/modeling_tf_utils.py", line 1356, in run_call_with_unpacked_inputs * return func(self, **unpacked_inputs) File "/usr/local/lib/python3.10/dist-packages/transformers/models/sam/modeling_tf_sam.py", line 1433, in call * sparse_embeddings, dense_embeddings = self.prompt_encoder( File "/usr/local/lib/python3.10/dist-packages/keras/utils/traceback_utils.py", line 70, in error_handler ** raise e.with_traceback(filtered_tb) from None File "/tmp/__autograph_generated_file4_u_db6c.py", line 90, in tf__call ag__.if_stmt(ag__.ld(input_boxes) is not None, if_body_3, else_body_3, get_state_3, set_state_3, ('batch_size', 'sparse_embeddings'), 2) File "/tmp/__autograph_generated_file4_u_db6c.py", line 68, in if_body_3 box_embeddings = ag__.converted_call(ag__.ld(self)._embed_boxes, (ag__.ld(input_boxes),), None, fscope) File "/tmp/__autograph_generated_filehsxk3fhx.py", line 14, in tf___embed_boxes coords = ag__.converted_call(ag__.ld(tf).reshape, (ag__.ld(boxes), (ag__.ld(batch_size), ag__.ld(nb_boxes), 2, 2)), None, fscope) TypeError: Exception encountered when calling layer 'prompt_encoder' (type TFSamPromptEncoder). 
in user code: File "/usr/local/lib/python3.10/dist-packages/transformers/models/sam/modeling_tf_sam.py", line 767, in call * box_embeddings = self._embed_boxes(input_boxes) File "/usr/local/lib/python3.10/dist-packages/transformers/models/sam/modeling_tf_sam.py", line 726, in _embed_boxes * coords = tf.reshape(boxes, (batch_size, nb_boxes, 2, 2)) TypeError: Failed to convert elements of (None, None, 2, 2) to Tensor. Consider casting elements to a supported type. See https://www.tensorflow.org/api_docs/python/tf/dtypes for supported TF dtypes. ``` From the accompanying [Colab Notebook](https://colab.research.google.com/gist/sayakpaul/de59527f657d0461f46d9cb8c4a3884f/scratchpad.ipynb), one can check that there's nothing apparently off in the dataset we're passing to the trainer for fine-tuning: ```python for sample in train_ds.take(2): for k in sample: print(k, sample[k].shape, isinstance(sample[k], tf.Tensor)) ``` Leads to: ```bash pixel_values (2, 3, 1024, 1024) True original_sizes (2, 2) True reshaped_input_sizes (2, 2) True input_boxes (2, 1, 4) True ground_truth_mask (2, 256, 256) True pixel_values (2, 3, 1024, 1024) True original_sizes (2, 2) True reshaped_input_sizes (2, 2) True input_boxes (2, 1, 4) True ground_truth_mask (2, 256, 256) True ``` Anything we're missing out on? Cc: @Rocketknight1
05-28-2023 04:40:42
05-28-2023 04:40:42
Investigating now - when I was porting the model I got the feeling it didn't even really support fine-tuning! Is there a PyTorch notebook where fine-tuning works @sayakpaul?<|||||>Update: Some bits of the code were still using static shapes incorrectly! I've fixed it and I think your code sample should work now (there is a label shape issue, but I think that's not the model code's fault)<|||||>> Investigating now - when I was porting the model I got the feeling it didn't even really support fine-tuning! Is there a PyTorch notebook where fine-tuning works @sayakpaul? Here you go: https://github.com/NielsRogge/Transformers-Tutorials/blob/master/SAM/Fine_tune_SAM_(segment_anything)_on_a_custom_dataset.ipynb<|||||>Thanks @Rocketknight1! So, I had to transpose the predicted mask to have channels_last memory layout to make the loss computation work with Sparse Categorical Crossentropy: ```python from tensorflow import keras class SAMFineTuner(keras.Model): def __init__(self, sam, **kwargs): super().__init__(**kwargs) self.sam = sam def train_step(self, inputs): with tf.GradientTape() as tape: # Forward pass. outputs = self.sam( pixel_values=inputs["pixel_values"], input_boxes=inputs["input_boxes"], multimask_output=False ) # Compute loss. predicted_masks = tf.squeeze(outputs.pred_masks, 1) predicted_masks = tf.transpose(predicted_masks, [0, 2, 3, 1]) ground_truth_masks = tf.cast(inputs["ground_truth_mask"], tf.float32) loss = self.compiled_loss(tf.expand_dims(ground_truth_masks, 1), predicted_masks) # Optimize the model. trainable_vars = self.sam.trainable_variables grads = tape.gradient(loss, trainable_vars) self.optimizer.apply_gradients(zip(grads, trainable_vars)) # Reporting. return {m.name: m.result() for m in self.metrics} ``` But we'll not be using SCCE anyway. Closing the issue, feel free to merge the PR :)
transformers
23,825
closed
[i18n-<languageCode>] Translating docs to <languageName>
<!-- Note: Please search to see if an issue already exists for the language you are trying to translate. --> Hi! Let's bring the documentation to all the <languageName>-speaking community 🌐 (currently 0 out of 267 complete) Who would want to translate? Please follow the 🤗 [TRANSLATING guide](https://github.com/huggingface/transformers/blob/main/docs/TRANSLATING.md). Here is a list of the files ready for translation. Let us know in this issue if you'd like to translate any, and we'll add your name to the list. Some notes: * Please translate using an informal tone (imagine you are talking with a friend about transformers 🤗). * Please translate in a gender-neutral way. * Add your translations to the folder called `<languageCode>` inside the [source folder](https://github.com/huggingface/transformers/tree/main/docs/source). * Register your translation in `<languageCode>/_toctree.yml`; please follow the order of the [English version](https://github.com/huggingface/transformers/blob/main/docs/source/en/_toctree.yml). * Once you're finished, open a pull request and tag this issue by including #issue-number in the description, where issue-number is the number of this issue. Please ping @ArthurZucker, @sgugger for review. * 🙋 If you'd like others to help you with the translation, you can also post in the 🤗 [forums](https://discuss.huggingface.co/). ## Get Started section - [ ] [index.mdx](https://github.com/huggingface/transformers/blob/main/docs/source/en/index.mdx) https://github.com/huggingface/transformers/pull/20180 - [ ] [quicktour.mdx](https://github.com/huggingface/transformers/blob/main/docs/source/en/quicktour.mdx) (waiting for initial PR to go through) - [ ] [installation.mdx](https://github.com/huggingface/transformers/blob/main/docs/source/en/installation.mdx). ## Tutorial section - [ ] [pipeline_tutorial.mdx](https://github.com/huggingface/transformers/blob/main/docs/source/en/pipeline_tutorial.mdx) - [ ] [autoclass_tutorial.mdx](https://github.com/huggingface/transformers/blob/master/docs/source/autoclass_tutorial.mdx) - [ ] [preprocessing.mdx](https://github.com/huggingface/transformers/blob/main/docs/source/en/preprocessing.mdx) - [ ] [training.mdx](https://github.com/huggingface/transformers/blob/main/docs/source/en/training.mdx) - [ ] [accelerate.mdx](https://github.com/huggingface/transformers/blob/main/docs/source/en/accelerate.mdx) - [ ] [model_sharing.mdx](https://github.com/huggingface/transformers/blob/main/docs/source/en/model_sharing.mdx) - [ ] [multilingual.mdx](https://github.com/huggingface/transformers/blob/main/docs/source/en/multilingual.mdx) <!-- Keep on adding more as you go 🔥 -->
05-28-2023 02:44:53
05-28-2023 02:44:53
transformers
23,824
closed
[i18n-<languageCode>] Translating docs to <languageName>
<!-- Note: Please search to see if an issue already exists for the language you are trying to translate. --> Hi! Let's bring the documentation to all the <languageName>-speaking community 🌐 (currently 0 out of 267 complete) Who would want to translate? Please follow the 🤗 [TRANSLATING guide](https://github.com/huggingface/transformers/blob/main/docs/TRANSLATING.md). Here is a list of the files ready for translation. Let us know in this issue if you'd like to translate any, and we'll add your name to the list. Some notes: * Please translate using an informal tone (imagine you are talking with a friend about transformers 🤗). * Please translate in a gender-neutral way. * Add your translations to the folder called `<languageCode>` inside the [source folder](https://github.com/huggingface/transformers/tree/main/docs/source). * Register your translation in `<languageCode>/_toctree.yml`; please follow the order of the [English version](https://github.com/huggingface/transformers/blob/main/docs/source/en/_toctree.yml). * Once you're finished, open a pull request and tag this issue by including #issue-number in the description, where issue-number is the number of this issue. Please ping @ArthurZucker, @sgugger for review. * 🙋 If you'd like others to help you with the translation, you can also post in the 🤗 [forums](https://discuss.huggingface.co/). ## Get Started section - [ ] [index.mdx](https://github.com/huggingface/transformers/blob/main/docs/source/en/index.mdx) https://github.com/huggingface/transformers/pull/20180 - [ ] [quicktour.mdx](https://github.com/huggingface/transformers/blob/main/docs/source/en/quicktour.mdx) (waiting for initial PR to go through) - [ ] [installation.mdx](https://github.com/huggingface/transformers/blob/main/docs/source/en/installation.mdx). ## Tutorial section - [ ] [pipeline_tutorial.mdx](https://github.com/huggingface/transformers/blob/main/docs/source/en/pipeline_tutorial.mdx) - [ ] [autoclass_tutorial.mdx](https://github.com/huggingface/transformers/blob/master/docs/source/autoclass_tutorial.mdx) - [ ] [preprocessing.mdx](https://github.com/huggingface/transformers/blob/main/docs/source/en/preprocessing.mdx) - [ ] [training.mdx](https://github.com/huggingface/transformers/blob/main/docs/source/en/training.mdx) - [ ] [accelerate.mdx](https://github.com/huggingface/transformers/blob/main/docs/source/en/accelerate.mdx) - [ ] [model_sharing.mdx](https://github.com/huggingface/transformers/blob/main/docs/source/en/model_sharing.mdx) - [ ] [multilingual.mdx](https://github.com/huggingface/transformers/blob/main/docs/source/en/multilingual.mdx) <!-- Keep on adding more as you go 🔥 -->
05-28-2023 02:44:19
05-28-2023 02:44:19
transformers
23,823
closed
🌐 [i18n-KO] Translated `pad_truncation.mdx` to Korean
# What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Translated the `pad_truncation.mdx` file of the documentation to Korean. Thank you in advance for your review! ## Before reviewing - [x] Check for missing / redundant translations (번역 누락/중복 검사) - [x] Grammar Check (맞춤법 검사) - [x] Review or Add new terms to glossary (용어 확인 및 추가) - [x] Check Inline TOC (e.g. `[[lowercased-header]]`) - [x] Check live-preview for gotchas (live-preview로 정상작동 확인) ## Who can review? (Initial) Team PseudoLab, may you please review this PR? @0525hhgus, @KIHOON71, @sim-so, @gabrielwithappy, @HanNayeoniee, @wonhyeongseo, @jungnerd ## Before submitting - [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [x] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? (Final) May you please review this PR? @sgugger, @ArthurZucker, @eunseojo <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - text models: @ArthurZucker and @younesbelkada - vision models: @amyeroberts - speech models: @sanchit-gandhi - graph models: @clefourrier Library: - flax: @sanchit-gandhi - generate: @gante - pipelines: @Narsil - tensorflow: @gante and @Rocketknight1 - tokenizers: @ArthurZucker - trainer: @sgugger Integrations: - deepspeed: HF Trainer: @stas00, Accelerate: @pacman100 - ray/raytune: @richardliaw, @amogkam Documentation: @sgugger, @stevhliu and @MKhalusova HF projects: - accelerate: [different repo](https://github.com/huggingface/accelerate) - datasets: [different repo](https://github.com/huggingface/datasets) - diffusers: [different repo](https://github.com/huggingface/diffusers) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Maintained examples (not research project or legacy): - Flax: @sanchit-gandhi - PyTorch: @sgugger - TensorFlow: @Rocketknight1 -->
05-28-2023 01:49:24
05-28-2023 01:49:24
The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_23823). All of your documentation changes will be reflected on that endpoint.<|||||>Thank you for your hard work over the holiday! Given the comments left above, there doesn't seem to be anything left to revise!<|||||>Could you review this PR? 😃 @sgugger, @ArthurZucker, @eunseojo
transformers
23,822
closed
index out of range in self torch.embedding(weight, input, padding_idx, scale_grad_by_freq, sparse)
### System Info ``` - `transformers` version: 4.29.2 - Platform: macOS-13.4-x86_64-i386-64bit - Python version: 3.10.2 - Huggingface_hub version: 0.14.1 - Safetensors version: not installed - PyTorch version (GPU?): 2.0.1 (False) - Tensorflow version (GPU?): 2.12.0 (False) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: no - Using distributed or parallel set-up in script?: no ``` ### Who can help? _No response_ ### Information - [X] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction Run the code [at](https://huggingface.co/tasks/table-question-answering): ``` from transformers import pipeline import pandas as pd # prepare table + question data = {"Actors": ["Brad Pitt", "Leonardo Di Caprio", "George Clooney"], "Number of movies": ["87", "53", "69"]} table = pd.DataFrame.from_dict(data) question = "how many movies does Leonardo Di Caprio have?" # pipeline model # Note: you must to install torch-scatter first. tqa = pipeline(task="table-question-answering", model="google/tapas-large-finetuned-wtq") # result print(tqa(table=table, query=query)['cells'][0]) ``` # Observed Behavior ``` Exception has occurred: IndexError (note: full exception trace is shown but execution is paused at: _run_module_as_main) index out of range in self File "/llm/tapas-poc/.env/lib/python3.10/site-packages/torch/nn/functional.py", line 2210, in embedding return torch.embedding(weight, input, padding_idx, scale_grad_by_freq, sparse) File "/llm/tapas-poc/.env/lib/python3.10/site-packages/torch/nn/modules/sparse.py", line 162, in forward return F.embedding( File "/llm/tapas-poc/.env/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl return forward_call(*args, **kwargs) File "/llm/tapas-poc/.env/lib/python3.10/site-packages/transformers/models/tapas/modeling_tapas.py", line 326, in forward embeddings += getattr(self, name)(token_type_ids[:, :, i]) File "/llm/tapas-poc/.env/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl return forward_call(*args, **kwargs) File "/llm/tapas-poc/.env/lib/python3.10/site-packages/transformers/models/tapas/modeling_tapas.py", line 965, in forward embedding_output = self.embeddings( File "/llm/tapas-poc/.env/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl return forward_call(*args, **kwargs) File "/llm/tapas-poc/.env/lib/python3.10/site-packages/transformers/models/tapas/modeling_tapas.py", line 1217, in forward outputs = self.tapas( File "/llm/tapas-poc/.env/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl return forward_call(*args, **kwargs) File "/llm/tapas-poc/.env/lib/python3.10/site-packages/transformers/pipelines/table_question_answering.py", line 142, in batch_inference return self.model(**inputs) File "/llm/tapas-poc/.env/lib/python3.10/site-packages/transformers/pipelines/table_question_answering.py", line 390, in _forward outputs = self.batch_inference(**model_inputs) File "/llm/tapas-poc/.env/lib/python3.10/site-packages/transformers/pipelines/base.py", line 1025, in forward model_outputs = self._forward(model_inputs, **forward_params) File "/llm/tapas-poc/.env/lib/python3.10/site-packages/transformers/pipelines/pt_utils.py", line 125, in __next__ processed = self.infer(item, **self.params) 
File "/llm/tapas-poc/.env/lib/python3.10/site-packages/transformers/pipelines/pt_utils.py", line 124, in __next__ item = next(self.iterator) File "/llm/tapas-poc/.env/lib/python3.10/site-packages/transformers/pipelines/base.py", line 1100, in __call__ outputs = list(final_iterator) File "/llm/tapas-poc/.env/lib/python3.10/site-packages/transformers/pipelines/table_question_answering.py", line 350, in __call__ results = super().__call__(pipeline_inputs, **kwargs) File "/llm/tapas-poc/sample1.py", line 12, in <module> preds = table_qa(bkgs_df_str,queries) File "/usr/local/Cellar/[email protected]/3.10.2/Frameworks/Python.framework/Versions/3.10/lib/python3.10/runpy.py", line 86, in _run_code exec(code, run_globals) File "/usr/local/Cellar/[email protected]/3.10.2/Frameworks/Python.framework/Versions/3.10/lib/python3.10/runpy.py", line 196, in _run_module_as_main (Current frame) return _run_code(code, main_globals, None, IndexError: index out of range in self ``` ### Expected behavior there should be no error
05-28-2023 01:37:45
05-28-2023 01:37:45
I cannot reproduce: ```py from transformers import pipeline import pandas as pd # prepare table + question data = {"Actors": ["Brad Pitt", "Leonardo Di Caprio", "George Clooney"], "Number of movies": ["87", "53", "69"]} table = pd.DataFrame.from_dict(data) question = "how many movies does Leonardo Di Caprio have?" # pipeline model # Note: you must to install torch-scatter first. tqa = pipeline(task="table-question-answering", model="google/tapas-large-finetuned-wtq") # result print(tqa(table=table, query=question)['cells'][0]) ``` works without issue for me.<|||||>try with more than 64 rows<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.
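A hedged, untested sketch of a workaround for larger tables: TAPAS has bounded row/column index embeddings, so very large tables can push token type ids past the embedding tables, and dropping rows at tokenization time keeps the encoding within the model's limits. The table contents below are made up for illustration.

```python
import pandas as pd
from transformers import TapasForQuestionAnswering, TapasTokenizer

tokenizer = TapasTokenizer.from_pretrained("google/tapas-large-finetuned-wtq")
model = TapasForQuestionAnswering.from_pretrained("google/tapas-large-finetuned-wtq")

# A table with many more than 64 rows, similar to the case reported above.
data = {
    "Actors": [f"Actor {i}" for i in range(200)],
    "Number of movies": [str(i) for i in range(200)],
}
table = pd.DataFrame.from_dict(data).astype(str)

inputs = tokenizer(
    table=table,
    queries=["How many movies does Actor 5 have?"],
    truncation="drop_rows_to_fit",  # drop rows so the encoding fits the model's limits
    padding="max_length",
    return_tensors="pt",
)
outputs = model(**inputs)
print(outputs.logits.shape)
```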
transformers
23,821
closed
T5 models
# What does this PR do? Add models for question answering, sequence classification, and token classification with T5 ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [X] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [X] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [X] Did you write any new necessary tests? ## Who can review? @ArthurZucker @younesbelkada
05-28-2023 01:32:57
05-28-2023 01:32:57
The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_23821). All of your documentation changes will be reflected on that endpoint.<|||||>Hello. Could you update the PR name and description to better reflect the content of the PR? Moreover, could you explain the motivation behind adding this to transformers? Only adding it for the encoder seems like a specific use case on your part that can be overcome by having your own version of the code. <|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.<|||||>@ArthurZucker Sorry for the late reply. I got caught up in exam corrections, conference travels, and other pre-holiday stuff. The use case for just using the encoder part is the following: If you want to evaluate or use T5 for Natural Language Understanding (NLU) tasks such as Named Entity Recognition or Extractive Question Answering, there are two possible avenues: 1) Fine-tune the model with in-context learning using few-shot prompting, i.e., train T5 to hopefully decode the desired result for a query, typically prefixing the prompt with "task: ". This is how the model is already trained for some tasks. 2) Use either the encoder, the decoder, or both stacks with different heads and fine-tune the weights of these heads. While avenue 1) is the bread and butter of seq2seq language models, avenue 2) has some distinct advantages, including but not limited to: a) No need to post-process unexpected answers. Seq2seq models that have been fine-tuned with few-shot prompting are prone to produce spurious/non-sensical/unexpected output sequences on some inputs. If the desired output for example is a sequence of word classes, we might expect that a prompt of "word classes: I like my fish raw." would return the desired output "pronoun[personal] verb[present] pronoun[possessive] noun[singular] adjective[descriptive]". But the output might be "pronouns are words that are used instead of nouns" or "dishwasher[soap] melon[hat]". b) Integration of the NLU model into larger models. A Grammatical Error Correction (GEC) model a la Grammarly's GECToR might directly use the outputs of an NER model for word classes as parts of its inputs. Having to analyze/parse/sanitize the output sequence puts a stopper to jointly training/fine-tuning such models. (This is by the way not a hypothetical use case.) Now, why would one just want to use the encoder part and not both the encoder and the decoder part? First, for NLU tasks, the encoder should by all means be the part that represents the "understanding" part of the seq2seq model. Second, adding the decoder additionally should not hurt in principle, but it is unlikely to improve the performance significantly, blows up the model size, and provides some interesting implementation challenges. Regarding the implementation challenges, I have implemented such encoder+decoder models for NLU tasks, but getting them to clear the tests of the transformers library is not trivial, as these models use the concatenation of the encoder's and decoder's output as input to the head (i.e., classification layer etc.). An alternative is to just use the decoder outputs, but this feels less meaningful for NLU tasks. How do we move forward? 
Would it help if I provided some benchmarks of using (i) encoder part only, (ii) decoder part only, (iii) encoder+decoder part concatenated, and (iv) encoder+decoder but only decoder part as input for the head? Cheers, Peter<|||||>Oh, well. I just saw that there is now a T5ForQuestionAnswering model. I will also review that.<|||||>Hey! I think the best idea is to put your modifications on the hub! This would prevent you from having to go through all the hassle of passing the CI, and since it is your specific usage, it makes more sense. I invite you to follow [this tutorial](https://huggingface.co/docs/transformers/custom_models). Hope this will fit your usage! 🤗 <|||||>I can see that this pull request has been closed and is not updating.<|||||>Hi Arthur, Let’s do that for now. And if it turns out that a particular model is wildly successful, I’ll make a pull request for it. Cheers, Peter<|||||>Perfect! 👍🏻 🤗
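As a hedged, self-contained illustration of avenue 2) above (using only the T5 encoder stack with a task-specific head), here is a minimal token-classification sketch; the class name, label count, and head design are invented for the example and are not the implementation proposed in this PR.

```python
import torch
from torch import nn
from transformers import AutoTokenizer, T5EncoderModel


class T5EncoderForTokenClassification(nn.Module):
    def __init__(self, model_name="t5-small", num_labels=9):
        super().__init__()
        self.encoder = T5EncoderModel.from_pretrained(model_name)
        self.dropout = nn.Dropout(0.1)
        self.classifier = nn.Linear(self.encoder.config.d_model, num_labels)

    def forward(self, input_ids, attention_mask=None, labels=None):
        hidden = self.encoder(input_ids=input_ids, attention_mask=attention_mask).last_hidden_state
        logits = self.classifier(self.dropout(hidden))
        loss = None
        if labels is not None:
            loss = nn.CrossEntropyLoss()(logits.view(-1, logits.size(-1)), labels.view(-1))
        return {"loss": loss, "logits": logits}


tokenizer = AutoTokenizer.from_pretrained("t5-small")
model = T5EncoderForTokenClassification()
inputs = tokenizer("I like my fish raw.", return_tensors="pt")
outputs = model(**inputs)
print(outputs["logits"].shape)  # (1, sequence_length, num_labels)
```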
transformers
23,820
closed
Implement Lion (EvoLved Sign Optimizer)
# What does this PR do? Lion is a new optimizer from Google Brain that has seen early results improving on language modeling tasks: https://arxiv.org/abs/2302.06675 This PR implements Lion (unfused) as a drop-in replacement for Adam. ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [x] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [x] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR.
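For reference, a hedged, minimal sketch of the Lion update rule from the paper (the sign of an interpolation between momentum and gradient, plus decoupled weight decay); the optimizer class in this PR may differ in details such as parameter groups and scheduling.

```python
import torch


@torch.no_grad()
def lion_step(param, grad, exp_avg, lr=1e-4, beta1=0.9, beta2=0.99, weight_decay=0.0):
    # decoupled weight decay: theta <- theta * (1 - lr * lambda)
    param.mul_(1 - lr * weight_decay)
    # the update direction is the sign of an interpolation between momentum and gradient
    update = exp_avg.mul(beta1).add(grad, alpha=1 - beta1).sign_()
    param.add_(update, alpha=-lr)
    # the momentum itself is updated with the second interpolation factor
    exp_avg.mul_(beta2).add_(grad, alpha=1 - beta2)
```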
05-28-2023 01:09:00
05-28-2023 01:09:00
The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_23820). All of your documentation changes will be reflected on that endpoint.<|||||>Thanks for your PR. Transformers is a library of models, not optimizers. All optimizers implemented inside the library are deprecated and we won't accept new ones. You can already use Lion via bitsandbytes for instance.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.
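A hedged sketch of the suggested route: using Lion from bitsandbytes with the existing `Trainer` API instead of a built-in implementation. It assumes a recent bitsandbytes release that ships `bnb.optim.Lion` and a CUDA environment; the toy dataset and hyperparameters are purely illustrative.

```python
import bitsandbytes as bnb
import torch
from torch.utils.data import Dataset
from transformers import AutoModelForSequenceClassification, AutoTokenizer, Trainer, TrainingArguments


class ToyDataset(Dataset):
    def __init__(self, tokenizer):
        self.enc = tokenizer(["great movie", "terrible movie"], padding=True, return_tensors="pt")
        self.labels = torch.tensor([1, 0])

    def __len__(self):
        return 2

    def __getitem__(self, i):
        item = {k: v[i] for k, v in self.enc.items()}
        item["labels"] = self.labels[i]
        return item


tokenizer = AutoTokenizer.from_pretrained("distilbert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained("distilbert-base-uncased", num_labels=2)

# Lion from bitsandbytes, handed to Trainer through the `optimizers` argument
optimizer = bnb.optim.Lion(model.parameters(), lr=1e-4, betas=(0.9, 0.99), weight_decay=0.01)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="out", per_device_train_batch_size=2, num_train_epochs=1),
    train_dataset=ToyDataset(tokenizer),
    optimizers=(optimizer, None),  # Trainer builds a default scheduler for the given optimizer
)
trainer.train()
```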
transformers
23,819
closed
AttributeError: EagerTensor object has no attribute 'size'
### System Info ``` - `transformers` version: 4.29.2 - Platform: macOS-13.4-x86_64-i386-64bit - Python version: 3.10.2 - Huggingface_hub version: 0.14.1 - Safetensors version: not installed - PyTorch version (GPU?): 2.0.1 (False) - Tensorflow version (GPU?): 2.12.0 (False) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: no - Using distributed or parallel set-up in script?: no ``` ### Who can help? _No response_ ### Information - [X] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction run the code [at](https://huggingface.co/docs/transformers/main/en/model_doc/tapas#transformers.TFTapasForQuestionAnswering): ``` from transformers import AutoTokenizer, TapasForQuestionAnswering import pandas as pd tokenizer = AutoTokenizer.from_pretrained("google/tapas-base-finetuned-wtq") model = TapasForQuestionAnswering.from_pretrained("google/tapas-base-finetuned-wtq") data = { "Actors": ["Brad Pitt", "Leonardo Di Caprio", "George Clooney"], "Age": ["56", "45", "59"], "Number of movies": ["87", "53", "69"], } table = pd.DataFrame.from_dict(data) queries = ["How many movies has George Clooney played in?", "How old is Brad Pitt?"] inputs = tokenizer(table=table, queries=queries, padding="max_length", return_tensors="tf") outputs = model(**inputs) logits = outputs.logits logits_aggregation = outputs.logits_aggregation ``` # Observed Result ``` % python sample2.py 2023-05-27 16:48:53.829758: I tensorflow/core/platform/cpu_feature_guard.cc:182] This TensorFlow binary is optimized to use available CPU instructions in performance-critical operations. To enable the following instructions: AVX2 FMA, in other operations, rebuild TensorFlow with the appropriate compiler flags. Traceback (most recent call last): File "/llm/tapas-poc/sample2.py", line 16, in <module> outputs = model(**inputs) File "/llm/tapas-poc/.env/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl return forward_call(*args, **kwargs) File "/llm/tapas-poc/.env/lib/python3.10/site-packages/transformers/models/tapas/modeling_tapas.py", line 1217, in forward outputs = self.tapas( File "/llm/tapas-poc/.env/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl return forward_call(*args, **kwargs) File "/llm/tapas-poc/.env/lib/python3.10/site-packages/transformers/models/tapas/modeling_tapas.py", line 928, in forward input_shape = input_ids.size() File "/llm/tapas-poc/.env/lib/python3.10/site-packages/tensorflow/python/framework/ops.py", line 437, in __getattr__ raise AttributeError( AttributeError: EagerTensor object has no attribute 'size'. If you are looking for numpy-related methods, please run the following: from tensorflow.python.ops.numpy_ops import np_config np_config.enable_numpy_behavior() ``` ### Expected behavior there should be no error
05-28-2023 00:54:44
05-28-2023 00:54:44
You are using TensorFlow inputs with a PyTorch model, this cannot work.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.
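A minimal sketch of the two consistent pairings: keep the tokenizer's `return_tensors` aligned with the framework of the model class, i.e. PyTorch tensors for `TapasForQuestionAnswering` and TensorFlow tensors for `TFTapasForQuestionAnswering` (the TF TAPAS implementation additionally needs `tensorflow_probability` installed).

```python
import pandas as pd
from transformers import AutoTokenizer, TapasForQuestionAnswering, TFTapasForQuestionAnswering

tokenizer = AutoTokenizer.from_pretrained("google/tapas-base-finetuned-wtq")
data = {
    "Actors": ["Brad Pitt", "Leonardo Di Caprio", "George Clooney"],
    "Age": ["56", "45", "59"],
    "Number of movies": ["87", "53", "69"],
}
table = pd.DataFrame.from_dict(data)
queries = ["How many movies has George Clooney played in?"]

# PyTorch model -> return_tensors="pt"
pt_model = TapasForQuestionAnswering.from_pretrained("google/tapas-base-finetuned-wtq")
pt_inputs = tokenizer(table=table, queries=queries, padding="max_length", return_tensors="pt")
pt_outputs = pt_model(**pt_inputs)

# TensorFlow model -> return_tensors="tf"
tf_model = TFTapasForQuestionAnswering.from_pretrained("google/tapas-base-finetuned-wtq")
tf_inputs = tokenizer(table=table, queries=queries, padding="max_length", return_tensors="tf")
tf_outputs = tf_model(**tf_inputs)
```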
transformers
23,818
closed
LLaMATokenizerFast works abnormally
### System Info platform==Ubuntu18.04 python==3.10 transformers==4.29.2 ### Who can help? @ArthurZucker ### Information - [ ] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction `</s>` is a special token of LLaMATokenizer(Fast), so it is expected that `</s>` can be recognized as a single token when encoding the text. However, it can be shown that the two tokenizers behave differently: ```python >>> t1 = transformers.AutoTokenizer.from_pretrained("huggyllama/llama-7b", use_fast=True) >>> t2 = transformers.AutoTokenizer.from_pretrained("huggyllama/llama-7b", use_fast=False) >>> text = "I love you.</s>" >>> t1(text) >>> {'input_ids': [1, 306, 5360, 366, 21106, 29879, 29958], 'token_type_ids': [0, 0, 0, 0, 0, 0, 0], 'attention_mask': [1, 1, 1, 1, 1, 1, 1]} >>> t2(text) >>> {'input_ids': [1, 306, 5360, 366, 29889, 2], 'attention_mask': [1, 1, 1, 1, 1, 1]} ``` Also, LLaMATokenizerFast returns `token_type_ids` but LLaMATokenizer does not. ### Expected behavior LLaMATokenizerFast should be consistent with LLaMATokenizer.
05-27-2023 21:33:55
05-27-2023 21:33:55
Also have 2 questions related to `LlamaTokenizerFast`: First, loading a fast tokenizer from a saved slow one takes very long: ``` from transformers import LlamaTokenizer, LlamaTokenizerFast tokenizer = LlamaTokenizer.from_pretrained("huggyllama/llama-7b") tokenizer.save_pretrained(".") # the following line takes > 1 min fast_tokenizer = LlamaTokenizerFast.from_pretrained(".") ``` This is not the case for other tokenizers like `BertTokenizerFast`. Second, for a new model I'm working on (#23460) I wonder how to get the same behaviour between slow and fast tokenizers for the following: ``` from transformers import LlamaTokenizer, LlamaTokenizerFast tokenizer = LlamaTokenizer.from_pretrained("huggyllama/llama-7b", truncation_side="left") tokenizer.add_special_tokens({"pad_token": "[PAD]"}) tokenizer.add_special_tokens({"bos_token": "</s>"}) tokenizer.add_special_tokens({"eos_token": "</s>"}) tokenizer.add_special_tokens({"unk_token": "</s>"}) fast_tokenizer = LlamaTokenizerFast.from_pretrained("huggyllama/llama-7b", truncation_side="left") fast_tokenizer.add_special_tokens({"pad_token": "[PAD]"}) fast_tokenizer.add_special_tokens({"bos_token": "</s>"}) fast_tokenizer.add_special_tokens({"eos_token": "</s>"}) fast_tokenizer.add_special_tokens({"unk_token": "</s>"}) prompt = "What is unusual about this image?" encoding = tokenizer(prompt, return_tensors="pt") fast_encoding = fast_tokenizer(prompt, return_tensors="pt") for k,v in encoding.items(): assert torch.allclose(fast_encoding[k], v) ``` => this assertion fails since the input_ids differ: ``` tensor([[ 2, 1724, 338, 22910, 1048, 445, 1967, 29973]]) tensor([[ 1, 1724, 338, 22910, 1048, 445, 1967, 29973]]) ``` <|||||>cc'ing @ArthurZucker and @Narsil here<|||||>Hey! Thanks for opening this issue. - `return_token_type_ids` should be set to `None` by default but is updated with `"token_type_ids" in self.model_input_names`. This is specific to the fast tokenizer, and is a known difference. I am not sure why this was added only in the fast tokenizer but it's more than 2yo! - The BPE models splits on ` ` (spaces), before encoding the tokens. When converting the models from slow to fast the special tokens were added to the `BPE` vocabulary, with a score of `0`. We probably forgot to add them to the list of `additional_special_tokens`, which is why they are not properly split. ( quick fix: `t1.additional_special_tokens = ["</s>, ... ]`) - @NielsRogge when you load a slow from a fast, it takes a long time because you need to convert the BPE sentenpiece model, which is very long. Nothing we can do about that. - About your second question, the best thing would be to open a new issue. Seems like it might be another slow/fast discrepency but you are not completely doing this the way the API is designed! (check that each call to add a token actively adds it!) <|||||>> Hey! Thanks for opening this issue. > > * `return_token_type_ids` should be set to `None` by default but is updated with `"token_type_ids" in self.model_input_names`. This is specific to the fast tokenizer, and is a known difference. I am not sure why this was added only in the fast tokenizer but it's more than 2yo! > * The BPE models splits on ` ` (spaces), before encoding the tokens. When converting the models from slow to fast the special tokens were added to the `BPE` vocabulary, with a score of `0`. We probably forgot to add them to the list of `additional_special_tokens`, which is why they are not properly split. ( quick fix: `t1.additional_special_tokens = ["</s>, ... 
]`) > * @NielsRogge when you load a slow from a fast, it takes a long time because you need to convert the BPE sentenpiece model, which is very long. Nothing we can do about that. > * About your second question, the best thing would be to open a new issue. Seems like it might be another slow/fast discrepency but you are not completely doing this the way the API is designed! (check that each call to add a token actively adds it!) In the `tokenizer_config.json` of `huggyllama/llama-7b`, `</s>` is quite a special token (`eos_token`). Adding `</s>` to `t1.additional_special_tokens` does not fix the problem.<|||||>Indeed, sorry for the confusion. I added a different token `<//s>` with `add_special_token` which worked as expected ( meaning whether there was a space or not, the output was properly encode) which is why the issue most probably lies with the handling of the special tokens ( maybe we should not have added them to the voab? I'll check). I'll dig into this! <|||||>@ArthurZucker How is the progress now?<|||||>I am still working on this, top priority! My PR did not fix it yet, so I am opening a new on just for llama and will see for the other ones.<|||||>> I am still working on this, top priority! My PR did not fix it yet, so I am opening a new on just for llama and will see for the other ones. Thanks for working on this! I appreciate the update and look forward to getting the issue resolved.<|||||>Update: in order to fix this, the `tokenizer.json` should be modified: the special tokens should not be normalized (so set `normalized = False`. There is a more profound issue, since the slow tokenizer is not bother by that and handles this differently. <|||||>@ArthurZucker My transformer version is `4.30.1`. I do not change the `tokenizer_config.json`, instead I replace the default special tokens by `add_special_tokens` like ```python >>> from transformers import AutoTokenizer >>> lt = AutoTokenizer.from_pretrained("huggyllama/llama-7b") >>> lt LlamaTokenizerFast(name_or_path='huggyllama/llama-7b', vocab_size=32000, model_max_length=2048, is_fast=True, padding_side='left', truncation_side='right', special_tokens={'bos_token': AddedToken("<s>", rstrip=False, lstrip=False, single_word=False, normalized=True), 'eos_token': AddedToken("</s>", rstrip=False, lstrip=False, single_word=False, normalized=True), 'unk_token': AddedToken("<unk>", rstrip=False, lstrip=False, single_word=False, normalized=True)}, clean_up_tokenization_spaces=False) >>> lt.add_special_tokens({"bos_token": "<s>", "eos_token": "</s>", "unk_token": "<unk>"}) >>> lt LlamaTokenizerFast(name_or_path='huggyllama/llama-7b', vocab_size=32000, model_max_length=2048, is_fast=True, padding_side='left', truncation_side='right', special_tokens={'bos_token': '<s>', 'eos_token': '</s>', 'unk_token': '<unk>'}, clean_up_tokenization_spaces=False) >>> lt("ok</s>") >>> {'input_ids': [1, 3431, 829, 29879, 29958], 'token_type_ids': [0, 0, 0, 0, 0], 'attention_mask': [1, 1, 1, 1, 1]} ``` It seems that the problem still exists?<|||||>Hey, as mentioned in #23889, as well as in #24042 the `tokenizer.json` has to be modified. I did not have time to open pr on all models yet, but you still have `normalized = True` on the special tokens, which is why they are split. <|||||>> Hey, as mentioned in #23889, as well as in #24042 the `tokenizer.json` has to be modified. I did not have time to open pr on all models yet, but you still have `normalized = True` on the special tokens, which is why they are split. 
As shown in your example in [#23889](https://github.com/huggingface/transformers/issues/23889#issuecomment-1584090357), if I do not modify the `tokenizer.json`, reseting the `bos_token` and `eos_token` when initializing the fast tokenizer or using the `add_special_tokens` method do not work (the `normalized=True` attribute still exists), even if the `special_tokens_dict` attribute has been changed to `{"bos_token": "<s>", "eos_token": "</s>"}`. Is that true?<|||||>Yes. Basically, you have to correctly add the tokens when converting, ortherwise the underlying regex is not properly updated. We are thinking of adding a `update_tokens` feature, which would allow to modify a token that is already part of the vocab. See the following problem: ```python In [2]: lt.add_special_tokens({"eos_token": AddedToken("<//s>", normalized = False)}) Out[2]: 1 In [3]: lt.encode("Another tests<//s>") Out[3]: [1, 7280, 6987, 32000] In [4]: lt.add_special_tokens({"eos_token": AddedToken("<//s>", normalized = True)}) Out[4]: 0 In [5]: lt.encode("Another tests<//s>") Out[5]: [1, 7280, 6987, 32000] In [6]: lt.add_special_tokens({"eos_token": AddedToken("<///s>", normalized = True)}) Out[6]: 1 In [7]: lt.encode("Another tests<///s>") Out[7]: [1, 7280, 6987, 29966, 6658, 29879, 29958] ```<|||||>> Yes. Basically, you have to correctly add the tokens when converting, ortherwise the underlying regex is not properly updated. We are thinking of adding a `update_tokens` feature, which would allow to modify a token that is already part of the vocab. See the following problem: > > ```python > In [2]: lt.add_special_tokens({"eos_token": AddedToken("<//s>", normalized = False)}) > Out[2]: 1 > > In [3]: lt.encode("Another tests<//s>") > Out[3]: [1, 7280, 6987, 32000] > > In [4]: lt.add_special_tokens({"eos_token": AddedToken("<//s>", normalized = True)}) > Out[4]: 0 > > In [5]: lt.encode("Another tests<//s>") > Out[5]: [1, 7280, 6987, 32000] > > In [6]: lt.add_special_tokens({"eos_token": AddedToken("<///s>", normalized = True)}) > Out[6]: 1 > > In [7]: lt.encode("Another tests<///s>") > Out[7]: [1, 7280, 6987, 29966, 6658, 29879, 29958] > ``` Thank you for your kind guidance!
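A hedged sketch of applying the fix discussed above to a local copy: save the fast tokenizer, flip `normalized` to `false` for the special tokens inside `tokenizer.json`, and reload; the local path is hypothetical and this has not been validated against every `tokenizers` version.

```python
import json
from transformers import AutoTokenizer

tok = AutoTokenizer.from_pretrained("huggyllama/llama-7b", use_fast=True)
tok.save_pretrained("./llama-7b-local")  # hypothetical local directory

with open("./llama-7b-local/tokenizer.json") as f:
    state = json.load(f)

# Mark the special tokens as not normalized so the pre-tokenizer does not split them.
for added in state["added_tokens"]:
    if added["content"] in ("<s>", "</s>", "<unk>"):
        added["normalized"] = False

with open("./llama-7b-local/tokenizer.json", "w") as f:
    json.dump(state, f, ensure_ascii=False, indent=2)

patched = AutoTokenizer.from_pretrained("./llama-7b-local", use_fast=True)
print(patched("I love you.</s>")["input_ids"])  # </s> should now map to the eos id (2)
```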
transformers
23,817
closed
🌐 [i18n-KO] Translated `document_question_answering.mdx` to Korean
<!-- PR의 제목은 "🌐 [i18n-KO] Translated `<your_file>.mdx` to Korean" 으로 부탁드립니다 --> # What does this PR do? Translated the `document_question_answering.mdx` file of the documentation to Korean. Thank you in advance for your review. Part of #20179 <!-- 메인 이슈에 기록이 남아요! 가짜연구소 리포를 사용해 연습하실때는 제거해주시면 감사하겠습니다! :smile: --> ## Before reviewing - [ ] Check for missing / redundant translations (번역 누락/중복 검사) - [ ] Grammar Check (맞춤법 검사) - [ ] Review or Add new terms to glossary (용어 확인 및 추가) - [ ] Check Inline TOC (e.g. `[[lowercased-header]]`) - [ ] Check live-preview for gotchas (live-preview로 정상작동 확인) ## Who can review? (Initial) <!-- 1. 위 체크가 모두 완료된 뒤에만 가짜연구소 팀원들에게 리뷰 요청하는 아래 주석을 노출해주세요! --> <!-- Team PseudoLab, may you please review this PR? @0525hhgus, @KIHOON71, @sim-so, @gabrielwithappy, @HanNayeoniee, @wonhyeongseo, @jungnerd --> ## Before submitting - [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [x] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? (Final) <!-- 2. 가짜연구소 팀원들과 리뷰가 끝난 후에만 허깅페이스 직원들에게 리뷰 요청하는 아래 주석을 노출해주세요! --> <!-- @sgugger, @ArthurZucker, @eunseojo May you please review this PR? -->
05-27-2023 17:20:29
05-27-2023 17:20:29
Closing in favor of #24588
transformers
23,816
closed
`MPTForCausalLM` does not support `device_map='auto'` yet.
### System Info - `transformers` version: 4.30.0.dev0 - Platform: Linux-5.15.107+-x86_64-with-glibc2.31 - Python version: 3.10.11 - Huggingface_hub version: 0.14.1 - Safetensors version: 0.3.1 - PyTorch version (GPU?): 2.0.1+cu118 (True) - Tensorflow version (GPU?): 2.12.0 (True) - Flax version (CPU?/GPU?/TPU?): 0.6.9 (gpu) - Jax version: 0.4.10 - JaxLib version: 0.4.10 - Using GPU in script?: <fill in> - Using distributed or parallel set-up in script?: <fill in> ### Who can help? @ArthurZucker @younesbelkada ### Information - [ ] The official example scripts - [X] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [X] My own task or dataset (give details below) ### Reproduction ``` import torch from transformers import AutoModelForCausalLM, AutoConfig from transformers import BitsAndBytesConfig model_name = 'mosaicml/mpt-7b-instruct' nf4_config = BitsAndBytesConfig( load_in_4bit=True, bnb_4bit_quant_type="nf4", bnb_4bit_use_double_quant=True, bnb_4bit_compute_dtype=torch.bfloat16 ) model_nf4 = AutoModelForCausalLM.from_pretrained(model_name, trust_remote_code=True, offload_folder="offload", offload_state_dict = True, quantization_config=nf4_config) ``` Error ``` ValueError: MPTForCausalLM does not support `device_map='auto'` yet. ``` I saw a similar issue #22188 for `XGLMForCausalLM`. I couldn't find `MPTForCausalLM` in the repository. If the MPT model is not supported currently, is there any hack that I can use to get `accelerate` support? Thanks. ### Expected behavior Model gets loaded without any errors
05-27-2023 16:42:04
05-27-2023 16:42:04
Hey! This is a duplicate of #23784 . Seems like we should give a more informative message. <|||||>This makes me think that we should add the ability to pass no split modules directly when calling from_pretrained for super users. There are more and more models that uses code on the Hub feature and this should make life much easier for these users (sometimes it takes a lot of time for the authors to approve / merge these PRs) wdyt @ArthurZucker @sgugger ?<|||||>Hi @harikc456 You can check: https://github.com/huggingface/transformers/pull/23896#issuecomment-1570036714 To illustrate what should be done, [I made a PR on the Hub directly,](https://huggingface.co/mosaicml/mpt-7b/discussions/45) you can load the mpt-7b model as follows (until the authors will merge my PR): ```python from transformers import AutoModelForCausalLM, AutoTokenizer model_name = 'mosaicml/mpt-7b' tokenizer = AutoTokenizer.from_pretrained("EleutherAI/gpt-neox-20b") model = AutoModelForCausalLM.from_pretrained( model_name, load_in_8bit=True, device_map="auto", trust_remote_code=True, revision="pr/45" ) prompt = "What is the boiling point of Nitrogen?" input_ids = tokenizer(prompt, return_tensors="pt").input_ids.to(0) out = model.generate(input_ids) print(tokenizer.decode(out[0], skip_special_tokens=True)) ```<|||||>> you can load the mpt-7b model as follows (until the authors will merge my PR): To clarify, the code will continue working after the PR is merged, but you will also be able to do the same thing without `revision="pr/45"`.
transformers
23,815
closed
RuntimeError
### System Info Facing error when doing`from transformers import Trainer` `RuntimeError: Failed to import transformers.training_args because of the following error (look up to see its traceback): cannot import name 'PartialState' from 'accelerate' (/opt/conda/lib/python3.10/site-packages/accelerate/__init__.py)` Here is the environment - `transformers` version: 4.29.2 - Platform: Linux-5.15.109+-x86_64-with-glibc2.35 - Python version: 3.10.10 - Huggingface_hub version: 0.14.1 - Safetensors version: 0.3.1 - PyTorch version (GPU?): 2.0.0 (True) - Tensorflow version (GPU?): 2.12.0 (True) - Flax version (CPU?/GPU?/TPU?): 0.6.10 (gpu) - Jax version: 0.4.10 - JaxLib version: 0.4.7 - Using GPU in script?: Yes - Using distributed or parallel set-up in script?: No Solutions tried `pip install -U accelerate`. But it is still not resolved ### Who can help? @pacman100 @ArthurZucker @sgugger ### Information - [X] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [X] My own task or dataset (give details below) ### Reproduction `from transformers import Trainer` ### Expected behavior Expecting no error
05-27-2023 15:12:33
05-27-2023 15:12:33
Could you let us know your version of Accelerate?<|||||>same problem<|||||>This worked at my Kaggle Notebook https://stackoverflow.com/questions/76363436/cannot-import-name-partialstate-from-accelerate-when-using-huggingface-pipel<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.
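A small hedged sketch of the check being asked for above: print which accelerate version the running interpreter actually imports, since `PartialState` only exists in newer releases; in notebook environments the kernel usually needs a restart after `pip install -U accelerate` before the upgraded package is picked up.

```python
import accelerate

print("accelerate version:", accelerate.__version__)
print("installed at:", accelerate.__file__)  # helpful when multiple environments are on the path

try:
    from accelerate import PartialState  # not present in older accelerate releases
    print("PartialState is available")
except ImportError:
    print("PartialState not found - upgrade accelerate and restart the kernel/runtime")
```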
transformers
23,814
open
Adding GPTNeoX (TensorFlow version)
### Model description Hugging Face has the GPTNeoX model by EleutherAI. It's a 20 billion parameter autoregressive language model trained on the Pile, whose weights will be made freely and openly available to the public through a permissive license. However, Hugging Face currently has only a PyTorch implementation of the model. I would like to contribute its corresponding TensorFlow implementation. ### Open source status - [X] The model implementation is available - [X] The model weights are available ### Provide useful links for the implementation _No response_
05-27-2023 14:46:00
05-27-2023 14:46:00
cc: @sgugger @NielsRogge <|||||>How would you run this model in Tensorflow though? I don't think it offers the same flexibility as PyTorch wrt to distributing layers on several devices. cc @Rocketknight1 <|||||>I believe a TF implementation with distribution of layers across devices is possible using [DTensor](https://www.tensorflow.org/guide/dtensor_overview). We've been in communication with the TF team about DTensor, but we haven't implemented a TF model with it yet. This could be a good model to try it with!<|||||>Note that implementing our first DTensor model will probably be quite challenging @shivance - we'll support you if you try it, but you should expect that the PR will need changes to some of our tests or internal functions to support DTensor, and there'll be several rounds of iteration before it's ready.<|||||>@sgugger @Rocketknight1 Check this out https://github.com/huggingface/transformers/blob/17a55534f5e5df10ac4804d4270bf6b8cc24998d/src/transformers/models/esm/modeling_tf_esm.py#L102<|||||> I've seen the use of dtensor at KerasNLP deeply and I think I can start working on it if you and @sgugger give me an initial roadmap!<|||||>Hi @shivance, the first task would be to make a port of GPT-NeoX to TF, and then we can start adding `DTensor` layout code to it. You can look at this [ongoing LLAMA PR](https://github.com/huggingface/transformers/pull/24375) to see what you need to do. The biggest thing you have to do is to make a conversion of the existing `modeling_gpt_neox.py` file to `modeling_tf_gpt_neox.py`, then import the classes from it and run some tests to see that it gives (approximately) the same outputs. Once that's done, we can talk about next steps!<|||||>Hey @Rocketknight1 ! I've worked on same as my Google Summer of Code project, check Tf version of GPT Neo X [here](https://github.com/keras-team/keras-nlp/pull/1056), I think what's needed is to do it in Huggingface Design.
transformers
23,813
closed
[MMS] Scaling Speech Technology to 1,000+ Languages | Add attention adapter to Wav2Vec2
# What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> This PR adds the MMS models fine-tuned on speech recognition. See official announcement here: https://about.fb.com/news/2023/05/ai-massively-multilingual-speech-technology/ See more details here: https://github.com/facebookresearch/fairseq/blob/main/examples/mms/README.md#asr Fixes #23811 and #23665 For now checkpoints are uploaded here: ## Pretrained-only - https://huggingface.co/patrickvonplaten/mms-300m - https://huggingface.co/patrickvonplaten/mms-1b ## ASR fine-tuned - https://huggingface.co/patrickvonplaten/mms-1b-fl102 - https://huggingface.co/patrickvonplaten/mms-1b-l1107 - https://huggingface.co/patrickvonplaten/mms-1b-all The fine-tuned checkpoints are based on **Adapter** layers as can be seen in this PR. The ASR fine-tuned weights consist of two parts: - The non-adapter weights which are exactly the same as the base model weights - Language specific fine-tuned adapter layer weights. This means we have 1000+ adapter weights for `mms-1b-all` If one wants to use a specific language, specific adapter weights need to be loaded into `mms-1b-all`. By default `mms-1b-all` et. al load the English adapter layer weights as is currently done in https://huggingface.co/patrickvonplaten/mms-1b-all The following works with this PR: ```py from transformers import Wav2Vec2ForCTC, AutoProcessor import soundfile as sf import torch ckpt = "./mms-1b-fl102/" ckpt = "./mms-1b-l1107" ckpt = "./mms-1b-all/" processor = AutoProcessor.from_pretrained(ckpt) model = Wav2Vec2ForCTC.from_pretrained(ckpt) # get audio.flac from https://huggingface.co/datasets/patrickvonplaten/audios/blob/main/audio.flac audio, sr = sf.read("./audio.flac") inputs = processor(audio, sampling_rate=sr, return_tensors="pt") with torch.no_grad(): logits = model(**inputs).logits transcription = processor.batch_decode(logits.argmax(-1))[0] print(f"Transcription: {transcription}") ``` Now, the question what API to we want to build for allow the user to easily switch between languages for the fine-tuned weights. **Note**: - To switch from one language to another, both the tokenizer's vocab and the model's adapter layers need to be switched out - The tokenizer can always easily hold all langs dicts in RAM because each lang has around 150 entries so we have 150,000 entries which is not too much for RAM - **However**, things are a bit more tricky for the model. The base model requires 3.1 GB in FP32 RAM and each adapter weights are around 9MB in size. This means loading all adapter layers into RAM would cost ~9GB which is quite a bit. How should we design this model? We need to have some kind of switching between languages function anyways. 
I see the following APIs that could work. ### 1.) By default, we download **all** adapter layers and load all in RAM, but we provide a functionality to remove all language but one from RAM: ```py from transformers import Wav2Vec2ForCTC, AutoProcessor ckpt = "./mms-1b-all/" processor = AutoProcessor.from_pretrained(ckpt) model = Wav2Vec2ForCTC.from_pretrained(ckpt) # requires at least 10GB of CPU RAM target_lang = "esp" processor.set_lang("esp") adapter_id = processor.lang_to_id["esp"] model.set_adapter_weights(adapter_id) # throw away all but one weights => 3.1GB of CPU RAM model.to("cuda") ``` A problem with this is though also that it's not trivial to switch between languages because one needs to load the whole model again and then set the language again. Also we would have to add a `set_adapter_weights` function to Wav2Vec2 which is not ideal ### 2.) By default we only the adapter weights one of language (e.g. English) and the load upon request more adapter layers ```py ```py from transformers import Wav2Vec2ForCTC, AutoProcessor ckpt = "./mms-1b-all/" processor = AutoProcessor.from_pretrained(ckpt) model = Wav2Vec2ForCTC.from_pretrained(ckpt) # requires only 3GB of CPU RAM target_lang = "esp" processor.set_lang("esp") model.load_adapter("esp") # This will load a file called "adapter.esp.bin" from: https://huggingface.co/patrickvonplaten/mms-1b-all , cache it and replace the adapter model.to("cuda") ``` Think this is quite user-friendly, intuitive and this way we also never require more than 3.1 GB of RAM. It however requires to add a pretty specific `load_adapter` function to Wav2Vec2 (think it's fine though). ### 3.) We just upload 1000+ repos one for each language. This way we don't need any "set" or "load" function and we just tread each adapter weights as their own model: ```py from transformers import Wav2Vec2ForCTC, AutoProcessor ckpt = "./mms-1b-all-esp/" # repo names then become lang specific processor = AutoProcessor.from_pretrained(ckpt) model = Wav2Vec2ForCTC.from_pretrained(ckpt) # requires only 3GB of CPU RAM model.to("cuda") ``` Big disadvantage is that it's pretty wasteful since an adapter layer is just 0.3% of all the models weights. => Overall, I'm tending to API **2.)** because it's the most user-friendly and intuitive. It'd just require to add a somewhat specific "load_adapter" function to Wav2Vec2, but think that's totally fine. Thoughts @sanchit-gandhi @Vaibhavs10 @sgugger @LysandreJik @amyeroberts ?
05-27-2023 14:06:57
05-27-2023 14:06:57
_The documentation is not available anymore as the PR was closed or merged._<|||||>Hey @patrickvonplaten - Thanks for working on this and I reviewed the options provided. I believe the second one would work best from a developer standpoint. IMO it ensures that all the adapter weights are in one repository and it all works the way it should, should someone want to use a different language with the base model. I am not a big fan of option 1 because it would make it difficult for a model to run in a resource-constrained environment. I am a bit conflicted with option 3, primarily because it involves the end-user having the same experience with Wav2Vec2 without worrying about the specific language adapter layers and so on. Although having 1000+ repos for the same sounds a bit wasteful IMO. Question: How would this work for fine-tuning, I am assuming if someone fine-tunes the Wav2Vec2-MMS on a language "X" then they'll push their adapter weights to a new repo and pull from that. So that'd mean that purely from a UX perspective, we should allow for the `load_adapter` function to be able to pull from a separate repository too right?<|||||>I think 2 is probably the better solution, and I would also make it possible to set the lang in the `from_pretrained` call: ```py from transformers import Wav2Vec2ForCTC, AutoProcessor ckpt = "./mms-1b-all/" processor = AutoProcessor.from_pretrained(ckpt) model = Wav2Vec2ForCTC.from_pretrained(ckpt, target_lang="esp") processor.set_lang("esp") model.to("cuda") ### Stuff # want to change the language: model.load_adapter("fra") ```<|||||>+1 on the composite solution proposed by @sgugger. Regarding fine-tuning @Vaibhavs10, users will save both the fine-tuned base weights and adapter layer weights to the same repo (this is different to PEFT where we only save the adapter weights, since here the base weights are also trainable. The way to view the adapter layer is as a extra small feed-forward network on top of the transformer block, so a regular layer of weights rather than a parameter efficient one), so probably we can assume we're loading the base weights and adapter weights from the same repo.<|||||>Agreed, with all above - 2 would be my choice: * 1 doesn't feel very user friendly. I'd expect most people would only use a consistent subset so downloading everything is slow and wasteful. * 2 feels the most intuitive with the current API and flexible. Seconding @Vaibhavs10's questions about finetuning, pushing to the hub and loading finetuned weights. If we load model weights from `mms-1b-fl102` and want our own finetuned adapter weights, how do I specify when loading and how is this information saved? How would we differentiate weights such that when I call `model.push_to_hub` the adapter weights are uploaded separately from the rest of the model (pattern matching?) Should the adapter weights be tied to a specific version of the 'base model' weights? * 3 Probably simplest to do - but seems like a waste with many repeated weights. 
<|||||>I'll leave more in-detail functionality for fine-tuning adapter weights for a future PR, but in short we can already do the following: ```py from transformers import Wav2Vec2ForCTC ckpt = "patrickvonplaten/mms-1b" model = Wav2Vec2ForCTC.from_pretrained(ckpt, num_attn_adapters=1, vocab_size=277) adapter_keys = set(model._adapters.keys()) for name, param in model.named_parameters(): if name not in adapter_keys: param.requires_grad = False ``` So once we add adapter fine-tuning to the wav2vec2 fine-tuning script, we could also add a simple "freeze_all_but_adapter()" function or something.<|||||>The code is now finished. I still need to upload the adapters for the smaller checkpoints, transfer them to Facebook and write some nice docs. **All modeling files except Wav2Vec2 are changed due to the #Copied from mechanism**. I think this is better than removing the copy-from mechanism, but happy to change.<|||||>Could I get a final review here @sgugger @amyeroberts ? Once approved, I'll move the facebook checkpoints, add some more examples to the docs & fix the doc test. The final question here for me is whether I should: a) Keep # Copied from at the expense of adding currently not used code to Hubert etc... b) Remove # Copied fromr c) Adapt the config of Hubert etc.. as well so that one could fine-tune Hubert with this adapter training going forward. Currently I have a) implemented as it's the safest option IMO. Happy to hear your opinion though.<|||||>'Wav2Vec2Processor' object has no attribute 'set_lang'
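For reference, the released API appears to have landed close to option 2: as far as I can tell, the tokenizer method is named `set_target_lang` rather than `set_lang`, which would explain the AttributeError in the last comment. A minimal sketch, assuming the `facebook/mms-1b-all` checkpoint and ISO 639-3 language codes:

```python
from transformers import Wav2Vec2ForCTC, AutoProcessor

model_id = "facebook/mms-1b-all"

# Load the processor and model with a non-default target language.
processor = AutoProcessor.from_pretrained(model_id, target_lang="fra")
model = Wav2Vec2ForCTC.from_pretrained(model_id, target_lang="fra", ignore_mismatched_sizes=True)

# Switching languages later: swap both the tokenizer vocabulary and the adapter weights.
processor.tokenizer.set_target_lang("spa")
model.load_adapter("spa")
```

The `ignore_mismatched_sizes=True` flag accounts for the language-specific LM head size when a non-English target language is requested at load time.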
transformers
23,812
closed
Add support for HYBRID_SHARD and _HYBRID_SHARD_ZERO2 in the trainer
#21156
05-27-2023 10:51:37
05-27-2023 10:51:37
_The documentation is not available anymore as the PR was closed or merged._<|||||>@sgugger Requesting for review.<|||||>@pacman100 Gentle remainder.<|||||>Hello @raghavanone, after PR #23158, the FSDP logic is being handled by Accelerate. So, these changes should reflect in Accelerate if they aren't being supported by FSDP-XLA <|||||>> Hello @raghavanone, after PR #23158, the FSDP logic is being handled by Accelerate. So, these changes should reflect in Accelerate if they aren't being supported by FSDP-XLA Sure, I can make the changes there too, any pointers on things to do in accelerate will of great help.<|||||>Hello @raghavanone, the above PR adds this functionality to the Accelerate which now powers Trainer—also made you the co-author of it. Thank you for all the effort and helping us keep up-to-date with the PyTorch FSDP.
transformers
23,811
closed
Meta's MMS speech recognition
### Feature request

We request a simpler and more convenient inference process for a speech recognition model based on MMS, just like wav2vec 2.0 in Transformers.

### Motivation

We aim to encapsulate the various subroutines called by Facebook’s official model into a direct speech recognition model that is as easy to use as other transformer-based models like wav2vec 2.0. But we also know that the Hugging Face team has been among the industry leaders in this area of work.

### Your contribution

We recognize that it may not be feasible for us to directly assist the Hugging Face technical team in this task. We believe that such an effort would be forward-looking given the popularity of MMS in current speech recognition research. The resulting model would be ideal for quickly transcribing our meeting notes.
05-27-2023 08:36:43
05-27-2023 08:36:43
Duplicate of #23665<|||||>@NielsRogge Hi, Can you tell me how far along we are and about how long it will be ready for us to use? Thank you! <|||||>PR merged. Also see: - https://huggingface.co/docs/transformers/main/en/model_doc/mms - https://github.com/huggingface/transformers/pull/23813 - https://huggingface.co/facebook/mms-1b-all<|||||>> PR merged. > > Also see: > > * https://huggingface.co/docs/transformers/main/en/model_doc/mms > * [[MMS] Scaling Speech Technology to 1,000+ Languages | Add attention adapter to Wav2Vec2 #23813](https://github.com/huggingface/transformers/pull/23813) > * https://huggingface.co/facebook/mms-1b-all Can I use my own dataset instead of the dataset "mozilla_foundation_common voice_6.1", which you have shown in tutorial [https://huggingface.co/blog/mms_adapters](url) ? If so then , how. Thanks<|||||>Sure, you just need to load your own dataset, maybe this helps: https://huggingface.co/docs/datasets/v2.13.1/en/audio_load<|||||>> Sure, you just need to load your own dataset, maybe this helps: https://huggingface.co/docs/datasets/v2.13.1/en/audio_load Thank you for your kind reply, although I have gone through the suggested tutorial but it didn't help, and ... I have just uploaded some demo dataset here u can check it: [https://huggingface.co/datasets/rashmi035/MKB_Hindi_2023](url) ,As you can see the audio is visible in the dataset viewer but the corresponding ngram is not visible, can you help me with this?
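For reference, a minimal sketch of loading a local audio dataset with the `audiofolder` loader; the folder layout and column names below are illustrative, not taken from the linked dataset:

```python
from datasets import load_dataset, Audio

# Expected layout (hypothetical paths):
#   my_dataset/train/metadata.csv   with columns: file_name, transcription
#   my_dataset/train/clip_0001.wav, clip_0002.wav, ...
dataset = load_dataset("audiofolder", data_dir="my_dataset")

# MMS/Wav2Vec2 feature extractors expect 16 kHz audio.
dataset = dataset.cast_column("audio", Audio(sampling_rate=16_000))

print(dataset["train"][0])
```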
transformers
23,810
closed
How to convert flax model to pytorch?
### Feature request run_t5_mlm_flax.py only generate a `flax_model.msgpack` ,but i want to obtain `pytorch_model.bin`. ### Motivation convert msgpack to bin ### Your contribution none
05-27-2023 07:26:22
05-27-2023 07:26:22
Please use the [forums](https://discuss.huggingface.co/) for such questions as we keep issues for bugs and feature requests only.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.
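For reference, a minimal sketch of the conversion, assuming a T5 checkpoint produced by `run_t5_mlm_flax.py` (the output directory path is illustrative); it requires both `flax` and `torch` to be installed:

```python
from transformers import T5ForConditionalGeneration

# Load the Flax weights (flax_model.msgpack) into the PyTorch class, then save
# them back out, which writes pytorch_model.bin next to the config.
model = T5ForConditionalGeneration.from_pretrained("path/to/run_t5_mlm_flax_output", from_flax=True)
model.save_pretrained("path/to/run_t5_mlm_flax_output")
```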
transformers
23,809
closed
object of type 'IterableDataset' has no len()
**run script**: run_speech_recognition_ctc_streaming.py (Multi GPU CTC with Dataset Streaming) ``` for split, dataset in raw_datasets.items(): vectorized_datasets[split] = ( dataset.map(prepare_dataset) .remove_columns(raw_column_names[split] + ["target_text"]) .with_format("torch") ) if split == "train": vectorized_datasets[split] = vectorized_datasets[split].shuffle( buffer_size=data_args.shuffle_buffer_size, seed=training_args.seed, ) .... trainer = Trainer( model=model, data_collator=data_collator, args=training_args, compute_metrics=compute_metrics, train_dataset=vectorized_datasets["train"] if training_args.do_train else None, eval_dataset=vectorized_datasets["eval"] if training_args.do_eval else None, tokenizer=processor, callbacks=[ShuffleCallback()], ) train_result = trainer.train(resume_from_checkpoint=checkpoint) ``` **Error**: ``` Traceback (most recent call last): File "/usr/local/bin/wav2vec2/run_speech_recognition_ctc_streaming.py", line 702, in <module> main() File "/usr/local/bin/wav2vec2/run_speech_recognition_ctc_streaming.py", line 656, in main train_result = trainer.train(resume_from_checkpoint=checkpoint) File "/usr/local/lib/python3.10/dist-packages/transformers-4.30.0.dev0-py3.10.egg/transformers/trainer.py", line 1664, in train return inner_training_loop( File "/usr/local/lib/python3.10/dist-packages/transformers-4.30.0.dev0-py3.10.egg/transformers/trainer.py", line 1909, in _inner_training_loop for step, inputs in enumerate(epoch_iterator): File "/usr/local/lib/python3.10/dist-packages/torch/utils/data/dataloader.py", line 633, in __next__ data = self._next_data() File "/usr/local/lib/python3.10/dist-packages/torch/utils/data/dataloader.py", line 677, in _next_data data = self._dataset_fetcher.fetch(index) # may raise StopIteration File "/usr/local/lib/python3.10/dist-packages/torch/utils/data/_utils/fetch.py", line 32, in fetch data.append(next(self.dataset_iter)) File "/usr/local/lib/python3.10/dist-packages/datasets/iterable_dataset.py", line 981, in __iter__ for key, example in ex_iterable: File "/usr/local/lib/python3.10/dist-packages/datasets/iterable_dataset.py", line 647, in __iter__ for x in self.ex_iterable: File "/usr/local/lib/python3.10/dist-packages/datasets/iterable_dataset.py", line 512, in __iter__ if self.remove_columns: File "/usr/local/lib/python3.10/dist-packages/torch/utils/data/dataset.py", line 276, in __len__ total += len(d) # type: ignore[arg-type] TypeError: object of type 'IterableDataset' has no len() ``` I'm try fix but error change to "_IterDataPipeSerializationWrapper' object has no attribute 'set_epoch" ``` trainer = Trainer( model=model, data_collator=data_collator, args=training_args, compute_metrics=compute_metrics, train_dataset=IterableWrapper(vectorized_datasets["train"]) if training_args.do_train else None, eval_dataset=IterableWrapper(vectorized_datasets["eval"]) if training_args.do_eval else None, tokenizer=processor, callbacks=[ShuffleCallback()], ) ``` Any thoughts on this?
05-27-2023 05:46:51
05-27-2023 05:46:51
cc @sanchit-gandhi <|||||>Hey @johnchienbronci - could you verify that all the columns `raw_column_names[split] + ["target_text"]` are in `raw_datasets[split]` for each split prior to calling the `.map` method? Failing that, could you paste the results of: ``` transformers-cli env ``` And also provide a reproducible code-snippet so that I can run the script locally on my end and reproduce the error result? Thank you!<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.
transformers
23,808
closed
Why don't we set use_cache=False by default when training?
### Feature request

Let's take GPT-2 as an example. In the current implementation (modeling_gpt2.py: Line 856~861):

```python
if self.gradient_checkpointing and self.training:
    if use_cache:
        logger.warning_once(
            "`use_cache=True` is incompatible with gradient checkpointing. Setting `use_cache=False`..."
        )
        use_cache = False
```

Why don't we set it like this instead:

```python
if self.training:  # As long as the model is being trained, we set use_cache=False.
    if use_cache:
        logger.warning_once(
            "`use_cache=True` makes no sense when training. Setting `use_cache=False`..."
        )
        use_cache = False
```

Because when training, `use_cache=True` makes no sense (at least for decoder-only auto-regressive models), and if you use gradient_checkpointing, it should be under training instead of inference.

### Motivation

Hello contributors, I realize that we set `use_cache=True` by default for almost all the transformer-based models. I understand that it can speed up generation by reusing the cache from the previous step. However, when training (pretraining or fine-tuning), we don't need it, and it consumes a lot of memory when processing a very long sequence, especially when the model is very large. But it does not provide any advantage there.

### Your contribution

If needed, I can help to correct this for all models in the transformers library.
05-27-2023 04:19:47
05-27-2023 04:19:47
We can't change the models as it would be a breaking change, sadly.
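For anyone landing here: even though the default cannot change, the cache can be switched off by the user for training. A minimal sketch:

```python
from transformers import AutoModelForCausalLM

# Pass the flag at load time (it is forwarded to the config)...
model = AutoModelForCausalLM.from_pretrained("gpt2", use_cache=False)

# ...or flip it on an already-loaded model before training.
model.config.use_cache = False
```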
transformers
23,807
closed
Same size on memory usage when loading gpt2 from torch.float16 and 8bit Quantization
### System Info - `transformers` version: 4.30.0.dev0 - Platform: Linux-5.15.0-1026-aws-x86_64-with-glibc2.29 - Python version: 3.8.10 - Huggingface_hub version: 0.14.1 - Safetensors version: 0.3.1 - PyTorch version (GPU?): 2.0.1+cu117 (True) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: Yes - Using distributed or parallel set-up in script?:No ### Who can help? @pacman100 ### Information - [X] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [X] My own task or dataset (give details below) ### Reproduction ``` from transformers import AutoModelForCausalLM from transformers import BitsAndBytesConfig import torch model_8bit = AutoModelForCausalLM.from_pretrained( "gpt2", load_in_8bit=True, device_map='auto', ) print('8 bit: ', model_8bit.get_memory_footprint()) model_float16 = AutoModelForCausalLM.from_pretrained( "gpt2", load_in_8bit=False, device_map='auto', torch_dtype=torch.float16 ) print('float16: ', model_float16.get_memory_footprint()) ``` ### Expected behavior Result is 8 bit: 274045464 float16: 261462552 Should see 8bit smaller than float16
05-27-2023 02:06:15
05-27-2023 02:06:15
cc @younesbelkada <|||||>Hi @czhang-trinity Thanks for the issue, running your script on the current main branch gives me: ```bash 8 bit: 261462552 16 bit: 261462552 ``` This is because GPT2 uses Conv1D in replacement to all linear layers. Therefore the 8bit conversion ends up converting no Linear layers ! ```bash GPT2LMHeadModel( (transformer): GPT2Model( (wte): Embedding(50257, 768) (wpe): Embedding(1024, 768) (drop): Dropout(p=0.1, inplace=False) (h): ModuleList( (0-11): 12 x GPT2Block( (ln_1): LayerNorm((768,), eps=1e-05, elementwise_affine=True) (attn): GPT2Attention( (c_attn): Conv1D() (c_proj): Conv1D() (attn_dropout): Dropout(p=0.1, inplace=False) (resid_dropout): Dropout(p=0.1, inplace=False) ) (ln_2): LayerNorm((768,), eps=1e-05, elementwise_affine=True) (mlp): GPT2MLP( (c_fc): Conv1D() (c_proj): Conv1D() (act): NewGELUActivation() (dropout): Dropout(p=0.1, inplace=False) ) ) ) (ln_f): LayerNorm((768,), eps=1e-05, elementwise_affine=True) ) (lm_head): Linear(in_features=768, out_features=50257, bias=False) ) ```
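To make the explanation above concrete, a small sketch that counts the module types; since the 8-bit replacement targets `nn.Linear`, the GPT-2 blocks built from `Conv1D` are left untouched:

```python
import torch.nn as nn
from transformers import AutoModelForCausalLM
from transformers.pytorch_utils import Conv1D

model = AutoModelForCausalLM.from_pretrained("gpt2")

# Count which layer types the model is actually made of.
num_linear = sum(isinstance(m, nn.Linear) for m in model.modules())
num_conv1d = sum(isinstance(m, Conv1D) for m in model.modules())
print(f"nn.Linear modules: {num_linear}, Conv1D modules: {num_conv1d}")
```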
transformers
23,806
closed
bnb_4bit for Flan-T5-XL/XXL? Can't load on Colab T4...
### System Info

- `transformers` version: 4.30.0.dev0
- Platform: Linux-5.15.107+-x86_64-with-glibc2.31
- Python version: 3.10.11
- Huggingface_hub version: 0.14.1
- Safetensors version: 0.3.1
- PyTorch version (GPU?): 2.0.1+cu118 (True)
- Tensorflow version (GPU?): 2.12.0 (True)
- Flax version (CPU?/GPU?/TPU?): 0.6.9 (gpu)
- Jax version: 0.4.10
- JaxLib version: 0.4.10
- Using GPU in script?: yes
- Using distributed or parallel set-up in script?: no

### Who can help?

@sgugger (I think)

### Information

- [ ] The official example scripts
- [X] My own modified scripts

### Tasks

- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)

### Reproduction

Not sure this is the best place to raise the issue - please feel free to redirect. Following your great [Making LLMs even more accessible with bitsandbytes, 4-bit quantization and QLoRA blog](https://huggingface.co/blog/4bit-transformers-bitsandbytes), I used your example [Basic usage Google Colab notebook](https://colab.research.google.com/drive/1ge2F1QSK8Q7h0hn3YKuBCOAS0bK8E0wf?usp=sharing) and modified the model_id to "google/flan-t5-xl" (the 3B model) as follows. However, the Colab session crashes (free T4 GPU). According to the blog, T5 is supported, but I may be missing something. Please point me in the right direction?

```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM, BitsAndBytesConfig
from transformers import T5ForConditionalGeneration

model_id = "google/flan-t5-xl"

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_use_double_quant=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)

tokenizer = AutoTokenizer.from_pretrained(model_id)
model_4bit = T5ForConditionalGeneration.from_pretrained(model_id, quantization_config=bnb_config, device_map="auto")
# crashes - runs out of memory
```

### Expected behavior

I had expected the model to load so that I could test it. I'd like to finetune Flan-T5 using LoRA. In case the settings for T5 need to be something specific, it would be helpful to know what they are. Thanks.
05-26-2023 23:14:08
05-26-2023 23:14:08
Can load using b16 sharded version.
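My reading of the closing comment is that a bf16, sharded copy of the checkpoint avoids the CPU-RAM spike of loading the single fp32 shard on the free Colab tier. A sketch of producing and reusing such a copy; the directory name is illustrative, and the first step needs a machine with enough RAM:

```python
import torch
from transformers import T5ForConditionalGeneration, BitsAndBytesConfig

# One-off step on a machine with enough RAM: save a bf16, sharded copy of the checkpoint.
model = T5ForConditionalGeneration.from_pretrained("google/flan-t5-xl", torch_dtype=torch.bfloat16)
model.save_pretrained("flan-t5-xl-sharded-bf16", max_shard_size="2GB")

# On the T4, load the sharded copy in 4-bit.
bnb_config = BitsAndBytesConfig(load_in_4bit=True, bnb_4bit_compute_dtype=torch.bfloat16)
model_4bit = T5ForConditionalGeneration.from_pretrained(
    "flan-t5-xl-sharded-bf16", quantization_config=bnb_config, device_map="auto"
)
```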
transformers
23,805
closed
[WIP] CI/CD Testing
# What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Fixes # (issue) ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - text models: @ArthurZucker and @younesbelkada - vision models: @amyeroberts - speech models: @sanchit-gandhi - graph models: @clefourrier Library: - flax: @sanchit-gandhi - generate: @gante - pipelines: @Narsil - tensorflow: @gante and @Rocketknight1 - tokenizers: @ArthurZucker - trainer: @sgugger Integrations: - deepspeed: HF Trainer: @stas00, Accelerate: @pacman100 - ray/raytune: @richardliaw, @amogkam Documentation: @sgugger, @stevhliu and @MKhalusova HF projects: - accelerate: [different repo](https://github.com/huggingface/accelerate) - datasets: [different repo](https://github.com/huggingface/datasets) - diffusers: [different repo](https://github.com/huggingface/diffusers) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Maintained examples (not research project or legacy): - Flax: @sanchit-gandhi - PyTorch: @sgugger - TensorFlow: @Rocketknight1 -->
05-26-2023 21:54:24
05-26-2023 21:54:24
transformers
23,803
closed
[WIP] CI/CD Testing
# What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> ## Before submitting - [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - text models: @ArthurZucker and @younesbelkada - vision models: @amyeroberts - speech models: @sanchit-gandhi - graph models: @clefourrier Library: - flax: @sanchit-gandhi - generate: @gante - pipelines: @Narsil - tensorflow: @gante and @Rocketknight1 - tokenizers: @ArthurZucker - trainer: @sgugger Integrations: - deepspeed: HF Trainer: @stas00, Accelerate: @pacman100 - ray/raytune: @richardliaw, @amogkam Documentation: @sgugger, @stevhliu and @MKhalusova HF projects: - accelerate: [different repo](https://github.com/huggingface/accelerate) - datasets: [different repo](https://github.com/huggingface/datasets) - diffusers: [different repo](https://github.com/huggingface/diffusers) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Maintained examples (not research project or legacy): - Flax: @sanchit-gandhi - PyTorch: @sgugger - TensorFlow: @Rocketknight1 -->
05-26-2023 20:24:52
05-26-2023 20:24:52
transformers
23,802
open
[WIP] Add CLIPViP
# What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> Fixes #22829 ## Before submitting - [X] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [X] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. https://github.com/huggingface/transformers/issues/22829 - [x] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). Docs are WIP I haven't updated the docstrings yet. - [X] Did you write any new necessary tests? Integration test still needs some work. It's basically just the CLIP integration test, it doesn't test how video retrieval would work ## Who can review? @NielsRogge maybe others?
05-26-2023 19:32:37
05-26-2023 19:32:37
cc @amyeroberts <|||||>The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_23802). All of your documentation changes will be reflected on that endpoint.<|||||>@tensorpro Thank you very much for this PR and adding this great model! Let us know when this is ready for review and feel free to ping me if you have any issues or questions about the implementation in the meantime. <|||||>Thanks @amyeroberts! This PR should be pretty close. I need to add a custom processor for CLIPViP to setup the preprocessing for videos and improve documentation. But after that, it should be ready to review. <|||||>Are there bugs in [modeling_outputs'](https://github.com/huggingface/transformers/blob/17a55534f5e5df10ac4804d4270bf6b8cc24998d/src/transformers/modeling_outputs.py#L46-L47) type hints for attentions/hidden outputs? It seems like we want `Tuple[FloatTensor,...]` instead of the`Tuple[FloatTensor]`? The current type hint makes it seem like we should return a tuple with a single`FloatTensor` instead of a tuple with an arbitrary number of float tensors.<|||||>@tensorpro Good spot - yes, the returned tuples can have a variable number of items. In practice, we're not running e.g. mypy against the repo so shouldn't break things but, as you point out, could be misleading. Would you like to open an issue or PR addressing this? <|||||>Ah thanks for clearing it up! I only caught it cause my editor was complaining about the return value types. And I'd be happy to make a PR.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.<|||||>Sorry about the delay, but I got around to adding a CLIPViPProcessor + improving some of the docs. Merging with the recent changes in [24306](https://github.com/huggingface/transformers/pull/24306) was a bit confusing, since it wasn't clear from the error messages I was getting that we need to add `_set_token_in_kwargs` to the config. Would it make sense to identify these errors and add something like "you may need to add _set_token_in_kwargs" to the error messages? Also, I think the PR is ready for review though I will be refining the docs a bit more.<|||||>I need to update the of the docstring examples that use images as examples to use videos instead, it should be ready for review after that though.<|||||>@tensorpro Great :) Ping me when that's done and I'll review 👍
transformers
23,801
closed
Training siamese (biencoder) based transformer model with gradient checkpointing throws error
### System Info

PyTorch Lightning Version 1.6.5
Torch 1.13.0
Python version 3.8
CUDA Version: 11.4
4 NVIDIA A100-SXM4-40GBs
transformers 4.24.0

### Reproduction

After adding `model.gradient_checkpointing_enable()` to the training code, the following error is thrown:

```
RuntimeError: Expected to mark a variable ready only once. This error is caused by one of the following reasons: 1) Use of a module parameter outside the `forward` function. Please make sure model parameters are not shared across multiple concurrent forward-backward passes. or try to use _set_static_graph() as a workaround if this module graph does not change during training loop.2) Reused parameters in multiple reentrant backward passes. For example, if you use multiple `checkpoint` functions to wrap the same part of your model, it would result in the same set of parameters been used by different reentrant backward passes multiple times, and hence marking a variable ready multiple times. DDP does not support such use cases in default. You can try to use _set_static_graph() as a workaround if your module graph does not change over iterations
```

The workaround to fix this is to add `use_reentrant=False` in the file below.
https://github.com/huggingface/transformers/blob/main/src/transformers/models/bert/modeling_bert.py#L600

```python
layer_outputs = torch.utils.checkpoint.checkpoint(
    create_custom_forward(layer_module),
    hidden_states,
    attention_mask,
    layer_head_mask,
    encoder_hidden_states,
    encoder_attention_mask,
    use_reentrant=False,
)
```

What's the best way to fix this, instead of adding the above flag manually in the source code?

### Expected behavior

Adding `model.gradient_checkpointing_enable()` shouldn't throw any error.

### Code to reproduce

```python
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, Dataset
from transformers import BertTokenizer, BertModel
import pytorch_lightning as pl


# Sample data
class SampleDataset(Dataset):
    def __init__(self):
        self.data = [
            ("I love coding", "I enjoy programming", 1),
            ("Python is great", "Java is popular", 0),
            ("Deep learning is fascinating", "Machine learning is interesting", 1),
            ("I prefer cats", "I like dogs", 0),
        ]
        self.tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')

    def __len__(self):
        return len(self.data)

    def __getitem__(self, idx):
        text1, text2, label = self.data[idx]
        encoded_text1 = self.tokenizer.encode_plus(text1, add_special_tokens=True, padding='max_length', max_length=128, truncation=True)
        encoded_text2 = self.tokenizer.encode_plus(text2, add_special_tokens=True, padding='max_length', max_length=128, truncation=True)
        input_ids1 = torch.tensor(encoded_text1['input_ids'])
        attention_mask1 = torch.tensor(encoded_text1['attention_mask'])
        input_ids2 = torch.tensor(encoded_text2['input_ids'])
        attention_mask2 = torch.tensor(encoded_text2['attention_mask'])
        return (input_ids1, attention_mask1), (input_ids2, attention_mask2), label


# Define your LightningModule
class SiameseBiEncoder(pl.LightningModule):
    def __init__(self):
        super(SiameseBiEncoder, self).__init__()
        self.bert = BertModel.from_pretrained('bert-base-uncased')
        self.hidden_size = self.bert.config.hidden_size
        self.cosine_similarity = nn.CosineSimilarity(dim=1)
        self.criterion = nn.BCELoss()

    def forward(self, input_ids, attention_mask):
        outputs = self.bert(input_ids=input_ids, attention_mask=attention_mask)
        pooled_output = outputs.pooler_output
        return pooled_output

    def training_step(self, batch, batch_idx):
        (input_ids1, attention_mask1), (input_ids2, attention_mask2), labels = batch
        embeddings1 = self.forward(input_ids1, attention_mask1)
        embeddings2 = self.forward(input_ids2, attention_mask2)
        similarity_scores = self.cosine_similarity(embeddings1, embeddings2)
        loss = self.criterion(similarity_scores, labels.float())
        self.log('train_loss', loss)
        return loss

    def validation_step(self, batch, batch_idx):
        (input_ids1, attention_mask1), (input_ids2, attention_mask2), labels = batch
        embeddings1 = self.forward(input_ids1, attention_mask1)
        embeddings2 = self.forward(input_ids2, attention_mask2)
        similarity_scores = self.cosine_similarity(embeddings1, embeddings2)
        loss = self.criterion(similarity_scores, labels.float())
        self.log('val_loss', loss)
        return loss

    def configure_optimizers(self):
        optimizer = torch.optim.AdamW(self.parameters(), lr=2e-5)
        return optimizer


# Create the LightningDataModule
class SampleDataModule(pl.LightningDataModule):
    def __init__(self, batch_size=4):
        super(SampleDataModule, self).__init__()
        self.batch_size = batch_size

    def setup(self, stage=None):
        self.train_dataset = SampleDataset()
        self.val_dataset = SampleDataset()

    def train_dataloader(self):
        return DataLoader(self.train_dataset, batch_size=self.batch_size, shuffle=True)

    def val_dataloader(self):
        return DataLoader(self.val_dataset, batch_size=self.batch_size)


# Create an instance of your LightningModule
model = SiameseBiEncoder()
model.bert.gradient_checkpointing_enable()
print(f"Gradient Checkpointing: {model.bert.is_gradient_checkpointing}")

# Create the LightningDataModule instance
data_module = SampleDataModule()

# Create a Trainer instance
trainer = pl.Trainer(
    max_epochs=3,
    devices=2,
    accelerator="gpu",
    strategy="ddp")

trainer.fit(model, data_module)
```
05-26-2023 19:11:23
05-26-2023 19:11:23
cc @ArthurZucker and @younesbelkada <|||||>@sachinya00 What does your code look like, including training setup and training args?<|||||>I've updated the post with the code to reproduce the same<|||||>Hey, thanks for providing a reproduction script. Based on the provided traceback it seems like the issue lies with `DDP` that is asking you to use `_set_static_graph()`. Did that work for you? <|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.
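A possible Lightning-side equivalent of the `_set_static_graph()` hint in the traceback, assuming `DDPStrategy` forwards extra keyword arguments to `DistributedDataParallel` (I believe it does, but treat this as a sketch rather than a confirmed fix):

```python
import pytorch_lightning as pl
from pytorch_lightning.strategies import DDPStrategy

# Tell DDP the graph is static so re-entrant activation checkpointing does not trip
# the "Expected to mark a variable ready only once" check.
trainer = pl.Trainer(
    max_epochs=3,
    devices=2,
    accelerator="gpu",
    strategy=DDPStrategy(static_graph=True),
)
```

More recent transformers releases also accept `model.gradient_checkpointing_enable(gradient_checkpointing_kwargs={"use_reentrant": False})`, which removes the need to patch the modeling file, but as far as I know that option is not available in 4.24.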
transformers
23,800
closed
Log the right train_batch_size if using auto_find_batch_size and also log the adjusted value separately.
# What does this PR do? This PR will log the `train_batch_size` that is reduced when using the auto-batch-finder utility, and will also seperatly log what the reduced batch size is on a debug level. Fixes # (issue) Solves #23762 Solves #21950 ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. @sgugger
05-26-2023 18:54:59
05-26-2023 18:54:59
_The documentation is not available anymore as the PR was closed or merged._
transformers
23,799
closed
Enable code-specific revision for code on the Hub
# What does this PR do? This PR adds a new `code_revision` argument to all auto classes `from_pretrained` (and the auto models `from_config`) to allow for a specific revision for code on the Hub. Since code can now live in a different repo than the weights, the `revision` argument can't be used directly for the code files and we need a new argument. This PR also makes `code_revision` default to `revision` when the repo contains both the code and the model weights. Fixes #23745
05-26-2023 18:39:46
05-26-2023 18:39:46
_The documentation is not available anymore as the PR was closed or merged._
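A small usage sketch of the argument this PR adds; the repository name is hypothetical:

```python
from transformers import AutoModelForCausalLM

# `revision` pins the weights, `code_revision` pins the remote code files
# (they can differ when the modeling code lives in another repo or branch).
model = AutoModelForCausalLM.from_pretrained(
    "some-org/model-with-remote-code",  # hypothetical repo
    trust_remote_code=True,
    revision="main",
    code_revision="main",
)
```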
transformers
23,798
closed
TFBertTokenizer - support for "never_split"
### Feature request Often vocabularies contain special tokens that should not be split. For example, in model "anferico/bert-for-patents", the vocabulary contains a token "[abstract]" (token_id is 5) https://huggingface.co/anferico/bert-for-patents/raw/main/vocab.txt The normal `BertTokenizer` supports a param "never_split" for this: ```python from transformers import BertTokenizer tokenizer = BertTokenizer.from_pretrained('anferico/bert-for-patents', never_split=['[abstract]']) tokenizer.tokenize('[abstract] some text here') # ['[abstract]', 'some', 'text', 'here'] ``` So above, even though '[abstract]' has parens, it is not split. But `TFBertTokenizer` does not have a "never_split" param, and so there is no way to prevent splits. For example: ```python from transformers import TFBertTokenizer tokenizer = TFBertTokenizer.from_pretrained('anferico/bert-for-patents') tokenizer(tf.constant(['[abstract] some text here'])) # {'input_ids': <tf.Tensor: shape=(1, 8), dtype=int64, numpy=array([[ 2, 1036, 9726, 1038, 1735, 3458, 1847, 3]])>, 'attention_mask': <tf.Tensor: shape=(1, 8), dtype=int64, numpy=array([[1, 1, 1, 1, 1, 1, 1, 1]])>, 'token_type_ids': <tf.Tensor: shape=(1, 8), dtype=int64, numpy=array([[0, 0, 0, 0, 0, 0, 0, 0]])>} ``` Above, notice that token_id 5 (['abstract']) is missing in the input_ids, and in fact '[abstract]' has been split into three separate tokens: * '[' - 1036 * 'abstract' - 9726 * ']' - 1038 ### Motivation We would like to use an end-to-end model, on TensorFlow Serving, with in-graph tokenization. But we need to be able to include special tokens in our input, such as '[abstract]', '[claims]' etc https://huggingface.co/anferico/bert-for-patents/raw/main/vocab.txt If TFBertTokenizer had a "never_split" param, this would be possible. But currently it is not, so we need to do Tokenization outside the server.
05-26-2023 18:07:59
05-26-2023 18:07:59
cc @Rocketknight1 <|||||>Hi @benzitohhh , and sorry for the delay! This is an interesting and useful idea, but we're depending on the underlying Tensorflow Text layers, specifically [BertTokenizer](https://www.tensorflow.org/text/api_docs/python/text/BertTokenizer) in this case. I don't think there is a 'never_split' option here, but we could use the `preserve_unused_token` argument. This would mean that tokens of the form `[unused0]`, `[unused1]`, etc. would never be split, so you could use those as a control token like `[abstract]`. Would this work for your use case? If it's useful to you it's probably useful to other people, and we can add it to the `TFBertTokenizer` layer in a PR.<|||||>hi @Rocketknight1 Thanks for the response, and sorry so slow getting back to you also! Just to check I understand... In our case, vocabulary (token_id to token mapping) looks as below, where 5-9 inclusive are "special" tokens: https://huggingface.co/anferico/bert-for-patents/raw/main/vocab.txt ``` 0: [PAD] 1: [UNK] 2: [CLS] 3: [SEP] 4: [MASK] 5: [abstract] 6: [claim] 7: [summary] 8: [invention] 9: [cpc] 10: [unused0] 11: [unused1] 12: [unused2] 13: [unused3] 14: [unused4] 15: [unused5] etc... ``` So with the `preserve_unused_token` approach, I guess we'd need to do something like: ```python input = ' [abstract] some text here. ' #out = [2, 5, 1735, 3458, 1847, 3] #### Expected tokenized ids # 1. Replace each "special" token with a unique "unused" token # So we need to map: # '[abstract]' -> '[unused0]' # '[claims]' -> '[unused1]' # etc.. # I guess we could use some regex for this. input__unused = '[unused0] some text here' # 2. Do the tokenization bert_input__unused = tokenizer(tf.constant([input__unused])) # { 'input_ids': ... numpy=array([[ 2, 10, 1735, 3458, 1847, 3]])> etc... } # i.e. the "10" above is the is '[unused0]' token # 3. Replace "unused" token_ids with the correct special token_ids # Not sure exactly how to do this with tensor operations, but I guess it's possible? # So we need to map: # 10 ('[unused0]') -> 5 ('[abstract]') # 11 ('[unused1]') -> 6 ('[claims]') # etc.. bert_input = .. # { 'input_ids': ... numpy=array([[ 2, 5, 1735, 3458, 1847, 3]])> etc... } ``` Will the above work? If so, that would be amazing, and totally solve our situation. Obviously, being able to add a "never_split" param would be much nicer :) Anyways let us know what is possible - thanks!<|||||>Hi @benzitohhh, yes, that's correct! I'd also need to file a PR to expose the option in our tokenizer, but if you're interested then I can do that soon. For the issue of mapping the `unused` token ids to the correct special token IDs, I suggest using a block of unused token IDs in the same order as your special token IDs. Then all you would need to do is: ```python # `True` for `unused` tokens, `False` otherwise condition = (input_ids >= unused_start_idx) & (input_ids <= unused_end_idx) # Subtract offset value from all unused tokens input_ids = tf.where(condition, input_ids - offset, input_ids) ``` In the vocab list you linked above, an offset of `5` would map `[unused0]` -> `[abstract]` and so on.<|||||>For more complex replacements, you could also just reorder the `vocab_list` for the `TFBertTokenizer` so it generates the indices you want!<|||||>@Rocketknight1 Ok this would totally work for us, and would allow us to create an end-to-end model - yay! If you could create a PR that would be super appreciated. Thanks again for all your help here, and the super clear explanations. Have a good weekend meanwhile. 
<|||||>Hi @benzitohhh, the PR is now open at #24324. You can try out the PR branch with the following command: ``` pip install git+https://github.com/huggingface/transformers.git@allow_tf_tokenizer_kwargs ``` When creating the `TFBertTokenizer`, add the arguments `use_fast_bert_tokenizer=False` and `preserve_unused_token=True`. Also, note that only the slower TF tokenizer layer supports the `preserve_unused_token` argument, but only the fast layer can be exported to TFLite. This means that this solution won't work for you if you want to export to TFLite! <|||||>@Rocketknight1 Ah amazing thanks! Will try this out first thing on Monday and let you know asap<|||||>@Rocketknight1 Ok just tested the PR, it works perfectly. Thanks again for making this happen!<|||||>Cool! Hopefully we can merge the PR soon in that case, so you can stop installing from the branch.<|||||>@benzitohhh this has now been merged. You can now get it just by installing from `main` with ``` pip install git+https://github.com/huggingface/transformers.git ``` It will be included with the next release of transformers in a few weeks, at which point you can go back to the usual `pip install transformers`<|||||>@Rocketknight1 amazing - thanks again
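Putting the pieces of this thread together, a sketch of an end-to-end in-graph pipeline: rewrite the special tokens to `[unusedN]`, tokenize with `preserve_unused_token=True`, then shift the ids back by the offset. The id values follow the vocabulary linked above; treat the exact offsets as an assumption to verify against your vocab file:

```python
import tensorflow as tf
from transformers import TFBertTokenizer

tokenizer = TFBertTokenizer.from_pretrained(
    "anferico/bert-for-patents",
    use_fast_bert_tokenizer=False,
    preserve_unused_token=True,
)

def tokenize_with_special_tokens(texts):
    # Map each special token to an unused token in the same order,
    # e.g. [abstract] (id 5) -> [unused0] (id 10), [claim] (id 6) -> [unused1] (id 11).
    texts = tf.strings.regex_replace(texts, r"\[abstract\]", "[unused0]")
    texts = tf.strings.regex_replace(texts, r"\[claim\]", "[unused1]")
    encoded = tokenizer(texts)
    input_ids = encoded["input_ids"]
    # Shift the block of unused ids back down onto the special-token ids (offset of 5 here),
    # assuming [unused0]..[unused4] are reserved for the five special tokens.
    condition = (input_ids >= 10) & (input_ids <= 14)
    encoded["input_ids"] = tf.where(condition, input_ids - 5, input_ids)
    return encoded

print(tokenize_with_special_tokens(tf.constant(["[abstract] some text here"])))
```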
transformers
23,797
closed
Fix last instances of kbit -> quantized
# What does this PR do? Just encountered a few kbit remaining. In particular the `_is_loaded_in_kbit` really needs to be changed, the others are just for consistency.
05-26-2023 18:02:02
05-26-2023 18:02:02
_The documentation is not available anymore as the PR was closed or merged._
transformers
23,841
closed
Causal language modeling documentation is wrong?
I just noticed that [on this page](https://huggingface.co/docs/transformers/tasks/language_modeling) we do not add any end-of-speech token (EOS) to the end of the texts. This means we are training a model that does not shut up! The EOS token should be added!
05-26-2023 15:22:32
05-26-2023 15:22:32
Thanks a lot! Transfering this to transformers<|||||>It's just an example that we keep as simple as possible. You can customize it to your needs for your own trainings.<|||||>An example that is **wrong**, let's not try to argue that it isn't 😅 . In that example, you are interested in training to generate sentences, but you are actually training the model to never stop generating...<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.
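A sketch of what adding the EOS token could look like in that guide's preprocessing step; the checkpoint matches the guide, but the `"text"` column name is illustrative and depends on the dataset being used:

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("distilgpt2")

def preprocess_function(examples):
    # Append the end-of-sequence token so the model also learns where generations should stop.
    return tokenizer([text + tokenizer.eos_token for text in examples["text"]])
```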
transformers
23,796
closed
fix: Replace `add_prefix_space` in `get_prompt_ids` with manual space for FastTokenizer compatibility
# What does this PR do? Fixes #23764 As discussed in the issue the 'FastTokenizer' for Whisper and other models does not accept `add_prefix_space` as an argument to tokenize, so to make `get_prompt_ids` compatible across both slow and fast tokenizers this was replaced with `" " + text` ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [x] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [x] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - text models: @ArthurZucker and @younesbelkada - vision models: @amyeroberts - speech models: @sanchit-gandhi - graph models: @clefourrier Library: - flax: @sanchit-gandhi - generate: @gante - pipelines: @Narsil - tensorflow: @gante and @Rocketknight1 - tokenizers: @ArthurZucker - trainer: @sgugger Integrations: - deepspeed: HF Trainer: @stas00, Accelerate: @pacman100 - ray/raytune: @richardliaw, @amogkam Documentation: @sgugger, @stevhliu and @MKhalusova HF projects: - accelerate: [different repo](https://github.com/huggingface/accelerate) - datasets: [different repo](https://github.com/huggingface/datasets) - diffusers: [different repo](https://github.com/huggingface/diffusers) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Maintained examples (not research project or legacy): - Flax: @sanchit-gandhi - PyTorch: @sgugger - TensorFlow: @Rocketknight1 --> @hollance @sanchit-gandhi
05-26-2023 14:20:42
05-26-2023 14:20:42
_The documentation is not available anymore as the PR was closed or merged._<|||||>@sanchit-gandhi for sure just pushed something up
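A small usage sketch of the function being fixed; the checkpoint and prompt text are illustrative:

```python
from transformers import WhisperTokenizer, WhisperTokenizerFast

slow = WhisperTokenizer.from_pretrained("openai/whisper-tiny")
fast = WhisperTokenizerFast.from_pretrained("openai/whisper-tiny")

# With this fix both tokenizers should produce the same prompt ids.
print(slow.get_prompt_ids("Mojito, Daiquiri"))
print(fast.get_prompt_ids("Mojito, Daiquiri"))
```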
transformers
23,795
closed
no_cuda does not take effect in non distributed environment
Fixes # (issue) no_cuda does not take effect in non distributed case. gpu is still selected. ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? - trainer: @sgugger
05-26-2023 14:09:41
05-26-2023 14:09:41
_The documentation is not available anymore as the PR was closed or merged._
transformers
23,794
closed
Transformer trainer training crashed with GLM models
### System Info ### Environment ``` - `transformers` version: 4.29.2 - Platform: Linux-5.15.107+-x86_64-with-glibc2.31 - Python version: 3.10.11 - Huggingface_hub version: 0.14.1 - Safetensors version: not installed - PyTorch version (GPU?): 2.0.1+cu118 (True) - Tensorflow version (GPU?): 2.12.0 (True) - Flax version (CPU?/GPU?/TPU?): 0.6.9 (gpu) - Jax version: 0.4.10 - JaxLib version: 0.4.10 - Using GPU in script?: A100 - Using distributed or parallel set-up in script?: Single GPU - ``` I suspect it's because the model has no device info attached to it, so when transformer trainer tries to fetch per_device_batch * device but somehow the device is 0 and caused the issue. See detailed stack trace below: ### Stack trace: ``` /usr/local/lib/python3.10/dist-packages/transformers/trainer.py:1664 in train │ │ │ │ 1661 │ │ inner_training_loop = find_executable_batch_size( │ │ 1662 │ │ │ self._inner_training_loop, self._train_batch_size, args.auto_find_batch_size │ │ 1663 │ │ ) │ │ ❱ 1664 │ │ return inner_training_loop( │ │ 1665 │ │ │ args=args, │ │ 1666 │ │ │ resume_from_checkpoint=resume_from_checkpoint, │ │ 1667 │ │ │ trial=trial, │ │ │ │ /usr/local/lib/python3.10/dist-packages/transformers/trainer.py:1909 in _inner_training_loop │ │ │ │ 1906 │ │ │ │ rng_to_sync = True │ │ 1907 │ │ │ │ │ 1908 │ │ │ step = -1 │ │ ❱ 1909 │ │ │ for step, inputs in enumerate(epoch_iterator): │ │ 1910 │ │ │ │ total_batched_samples += 1 │ │ 1911 │ │ │ │ if rng_to_sync: │ │ 1912 │ │ │ │ │ self._load_rng_state(resume_from_checkpoint) │ │ │ │ /usr/local/lib/python3.10/dist-packages/torch/utils/data/dataloader.py:633 in __next__ │ │ │ │ 630 │ │ │ if self._sampler_iter is None: │ │ 631 │ │ │ │ # TODO(https://github.com/pytorch/pytorch/issues/76750) │ │ 632 │ │ │ │ self._reset() # type: ignore[call-arg] │ │ ❱ 633 │ │ │ data = self._next_data() │ │ 634 │ │ │ self._num_yielded += 1 │ │ 635 │ │ │ if self._dataset_kind == _DatasetKind.Iterable and \ │ │ 636 │ │ │ │ │ self._IterableDataset_len_called is not None and \ │ │ │ │ /usr/local/lib/python3.10/dist-packages/torch/utils/data/dataloader.py:677 in _next_data │ │ │ │ 674 │ │ │ 675 │ def _next_data(self): │ │ 676 │ │ index = self._next_index() # may raise StopIteration │ │ ❱ 677 │ │ data = self._dataset_fetcher.fetch(index) # may raise StopIteration │ │ 678 │ │ if self._pin_memory: │ │ 679 │ │ │ data = _utils.pin_memory.pin_memory(data, self._pin_memory_device) │ │ 680 │ │ return data │ │ │ │ /usr/local/lib/python3.10/dist-packages/torch/utils/data/_utils/fetch.py:49 in fetch │ │ │ │ 46 │ def fetch(self, possibly_batched_index): │ │ 47 │ │ if self.auto_collation: │ │ 48 │ │ │ if hasattr(self.dataset, "__getitems__") and self.dataset.__getitems__: │ │ ❱ 49 │ │ │ │ data = self.dataset.__getitems__(possibly_batched_index) │ │ 50 │ │ │ else: │ │ 51 │ │ │ │ data = [self.dataset[idx] for idx in possibly_batched_index] │ │ 52 │ │ else: │ │ │ │ /usr/local/lib/python3.10/dist-packages/datasets/arrow_dataset.py:2782 in __getitems__ │ │ │ │ 2779 │ │ │ 2780 │ def __getitems__(self, keys: List) -> List: │ │ 2781 │ │ """Can be used to get a batch using a list of integers indices.""" │ │ ❱ 2782 │ │ batch = self.__getitem__(keys) │ │ 2783 │ │ n_examples = len(batch[next(iter(batch))]) │ │ 2784 │ │ return [{col: array[i] for col, array in batch.items()} for i in range(n_example │ │ 2785 │ │ │ │ /usr/local/lib/python3.10/dist-packages/datasets/arrow_dataset.py:2778 in __getitem__ │ │ │ │ 2775 │ │ │ 2776 │ def __getitem__(self, key): # noqa: F811 │ │ 2777 │ │ """Can be used to index columns 
(by string names) or rows (by integer index or i │ │ ❱ 2778 │ │ return self._getitem(key) │ │ 2779 │ │ │ 2780 │ def __getitems__(self, keys: List) -> List: │ │ 2781 │ │ """Can be used to get a batch using a list of integers indices.""" │ │ │ │ /usr/local/lib/python3.10/dist-packages/datasets/arrow_dataset.py:2762 in _getitem │ │ │ │ 2759 │ │ format_kwargs = kwargs["format_kwargs"] if "format_kwargs" in kwargs else self._ │ │ 2760 │ │ format_kwargs = format_kwargs if format_kwargs is not None else {} │ │ 2761 │ │ formatter = get_formatter(format_type, features=self._info.features, **format_kw │ │ ❱ 2762 │ │ pa_subtable = query_table(self._data, key, indices=self._indices if self._indice │ │ 2763 │ │ formatted_output = format_table( │ │ 2764 │ │ │ pa_subtable, key, formatter=formatter, format_columns=format_columns, output │ │ 2765 │ │ ) │ │ │ │ /usr/local/lib/python3.10/dist-packages/datasets/formatting/formatting.py:578 in query_table │ │ │ │ 575 │ │ _check_valid_column_key(key, table.column_names) │ │ 576 │ else: │ │ 577 │ │ size = indices.num_rows if indices is not None else table.num_rows │ │ ❱ 578 │ │ _check_valid_index_key(key, size) │ │ 579 │ # Query the main table │ │ 580 │ if indices is None: │ │ 581 │ │ pa_subtable = _query_table(table, key) │ │ │ │ /usr/local/lib/python3.10/dist-packages/datasets/formatting/formatting.py:531 in │ │ _check_valid_index_key │ │ │ │ 528 │ │ │ _check_valid_index_key(min(key), size=size) │ │ 529 │ elif isinstance(key, Iterable): │ │ 530 │ │ if len(key) > 0: │ │ ❱ 531 │ │ │ _check_valid_index_key(int(max(key)), size=size) │ │ 532 │ │ │ _check_valid_index_key(int(min(key)), size=size) │ │ 533 │ else: │ │ 534 │ │ _raise_bad_key_type(key) │ │ │ │ /usr/local/lib/python3.10/dist-packages/datasets/formatting/formatting.py:521 in │ │ _check_valid_index_key │ │ │ │ 518 def _check_valid_index_key(key: Union[int, slice, range, Iterable], size: int) -> None: │ │ 519 │ if isinstance(key, int): │ │ 520 │ │ if (key < 0 and key + size < 0) or (key >= size): │ │ ❱ 521 │ │ │ raise IndexError(f"Invalid key: {key} is out of bounds for size {size}") │ │ 522 │ │ return │ │ 523 │ elif isinstance(key, slice): │ │ 524 │ │ pass │ ╰──────────────────────────────────────────────────────────────────────────────────────────────────╯ IndexError: Invalid key: 4 is out of bounds for size 0 ``` ### Who can help? _No response_ ### Information - [ ] The official example scripts - [X] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) 
- [X] My own task or dataset (give details below) ### Reproduction ### Code to reproduce: ``` !pip install -q bitsandbytes datasets accelerate loralib !pip install sentencepiece !pip install -q transformers peft import torch import torch.nn as nn import bitsandbytes as bnb import datasets import accelerate import loralib import sentencepiece as spm import transformers from peft import LoraConfig, get_peft_model from transformers import AutoTokenizer, AutoModelForSeq2SeqLM tokenizer = AutoTokenizer.from_pretrained("THUDM/glm-10b-chinese", trust_remote_code=True) model = AutoModelForSeq2SeqLM.from_pretrained("THUDM/glm-10b-chinese", trust_remote_code=True) model = model.half().cuda() ds = ["hello world", "what the heck", "you are not alone", "not toyda", "more or less"] ds = {"text": ds} ds = datasets.Dataset.from_dict(ds) ds = ds.map(lambda x: tokenizer(x["text"], padding=True), batched=True) config = LoraConfig( r=16, lora_alpha=32, target_modules=["query_key_value"], lora_dropout=0.05, bias="none", task_type="CASUAL_LM" ) model = get_peft_model(model, config) trainer = transformers.Trainer( model=model, train_dataset=ds, args=transformers.TrainingArguments( per_device_train_batch_size=1, gradient_accumulation_steps=1, warmup_steps=0, num_train_epochs=2, learning_rate=3e-4, fp16=True, logging_steps=1, output_dir='outputs', save_total_limit=2, ), data_collator=transformers.DataCollatorForLanguageModeling(tokenizer, mlm=False) ) model.config.use_cache = False # silence the warnings. Please re-enable for inference! trainer.train() ``` ### Expected behavior Expect not to crash
05-26-2023 13:49:20
05-26-2023 13:49:20
The problem lies within your dataset, nothing to do with the `Trainer` :-)<|||||>Hi @sgugger, I'm new to `Transformer.Trainer`, wonder what's the issue here? how should I setup the dataset? Thanks! I thought the tokenizer should tokenize the text and return a dict with `input_ids` in it. then `transformers.DataCollatorForLanguageModeling` should map `input_ids` to `labels` correctly? Wonder if there is any example about using custom dataset?<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.
transformers
23,793
closed
Inference API takes forever and output: "Model ... is currently loading"
### System Info I'm currently working on my final project (Fine-tuning the "Helsinki-NLP/opus-mt-en-zh" and "Helsinki-NLP/opus-mt-zh-en" models for Translation English-Chinese), have already trained the model, and deployed it to the Hub. The issue is that I'm not able to consume the API properly: it's slow, it seems not to work properly, and it takes forever to load. I'm trying to use the Inference API connected to my Nextjs (Frontend) app, but I'm getting this error message: ``` { "error": "Model ... is currently loading", "estimated_time": 20 } ``` Sometimes, It works, and then It stops working... Any help, please? Please, give me all the possible suggestions, would love to explore. Thanks ### Who can help? _No response_ ### Information - [ ] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction ``` async function translateEnglishtoChinese(text: string) { const response = await fetch( "https://api-inference.huggingface.co/models/KennStack01/Helsinki-NLP-opus-mt-en-zh", { headers: { Authorization: `Bearer ${process.env.NEXT_PUBLIC_HF_TOKEN}`, }, method: "POST", body: JSON.stringify({ inputs: text, }), } ); const result = await response.json(); console.log("English to Chinese", result[0]?.translation_text); return result; } async function translateChinesetoEnglish(text: string) { const response = await fetch( "https://api-inference.huggingface.co/models/KennStack01/Helsinki-NLP-opus-mt-zh-en", { headers: { Authorization: `Bearer ${process.env.NEXT_PUBLIC_HF_TOKEN}`, }, method: "POST", body: JSON.stringify({ inputs: text, }), } ); const result = await response.json(); console.log("Chinese to English", result[0]?.translation_text); return result; } ``` ### Expected behavior Expecting to see a valid translation_text. Simple as that. Thanks :)
05-26-2023 13:42:06
05-26-2023 13:42:06
cc @Narsil <|||||>Any update? :( <|||||>Still unsolved. [The same problem in the Discord channel.](https://discord.com/channels/879548962464493619/1112875103311630366)<|||||>Hi, things should be back to normal, can you confirm ? This model and some others were still running very old code, and some internal changes have made it crash, unfortunately silently for us, since everything was still 200 status codes somehow. We merely updated everything. Thanks @oOraph for the fix.<|||||>I will check now, also a question: will there be additional parameters added to the voice recognition, something besides file (e.g. wait_for_model or output_language) [About additional parameters](https://discord.com/channels/879548962464493619/1115750221226446889)<|||||>@Narsil ![image](https://github.com/huggingface/transformers/assets/100136305/08decc4c-a1fc-4654-a6ed-cd9f732f8d87) <|||||>openai/whisper-large-v2 worked right away, but still in English, although the source was in Russian. ![image](https://github.com/huggingface/transformers/assets/100136305/c393e9fa-2616-4d0d-bc77-c741eb798a31) <|||||>Still getting this error: `{ "error": "Model *** is currently loading", "estimated_time": 20 }` Any help?<|||||>Hi @KennStack01 :), just made the test, if you haven't requested your model for a while it 's offline so you get this message (flagged as an error so that it cannot be confused with an answer to your prompt but it's not really an error in the sense your model won't work). I just tested your model and it got online in 20-22 seconds the two times I tried. So you get this message for sth like 20 s (sometimes more for bigger models but yours should be really fast to load given its size). Once it's online your prompt gets correctly answered :) (unless I misunderstood what you're saying and you're saying sometimes it just does not load at all, which I did not observe but would still be possible). And once online, it will stay online for a while, especially if you're requesting it regularly. But at some point it will get offline again and be preempted by others. More or less quickly, depending on several factors, essentially the whole cluster's load, the last time it was requested and the resources it consumes, sometimes only a few minutes later: this could explain your test @Zapzatron. Because from what I understand, in your test, you requested openai/whisper-large once and took the estimated time in response to know how long to sleep (which makes total sense). But from the tests I made, the estimated_time provided in answer was bad and it actually got online in less than 247s (in sth like 70s). Since you did not request it, it could actually have been already brought offline by the time you ended your sleep and requested it again. I would suggest that you actually poll the api a bit more frequently, sth like every 20 seconds after the first message, and would not pay too much attention to the estimated_time to know how long to sleep :)<|||||>[I tried every 20 seconds, but that's a lot of requests](https://discord.com/channels/879548962464493619/1112875103311630366) ![image](https://github.com/huggingface/transformers/assets/100136305/72cc7f6d-5ad3-4a16-b89c-8be5c19794a7) <|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.
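A hedged sketch of the polling approach described in this thread (the retry interval, timeout, and error-string check are assumptions, not an official client):

```python
import time

import requests

API_URL = "https://api-inference.huggingface.co/models/KennStack01/Helsinki-NLP-opus-mt-en-zh"
HEADERS = {"Authorization": "Bearer <HF_TOKEN>"}

def translate(text: str, max_wait: float = 300.0, poll_every: float = 20.0):
    deadline = time.monotonic() + max_wait
    while time.monotonic() < deadline:
        response = requests.post(API_URL, headers=HEADERS, json={"inputs": text})
        payload = response.json()
        # while the model is loading, the API answers with an "error"/"estimated_time" payload
        if isinstance(payload, dict) and "currently loading" in str(payload.get("error", "")):
            time.sleep(poll_every)  # poll again rather than trusting estimated_time
            continue
        return payload
    raise TimeoutError("model did not come online in time")
```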
transformers
23,792
closed
Fix Trainer when model is loaded on a different GPU
# What does this PR do? When a small model is loaded with `device_map="auto"` it might end up entirely on GPU 1. Currently `is_model_parallel` is then set to `False` (because there is only one device), and later on the Trainer moves the model to GPU 0, which breaks the execution of all the Accelerate hooks. This PR fixes this by making sure `is_model_parallel` is set to `True` when there is only one device but it's not GPU 0.
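A rough sketch of the check this PR describes (simplified; the real logic lives in `Trainer.__init__` and also handles offload entries and device strings differently):

```python
def looks_model_parallel(model) -> bool:
    device_map = getattr(model, "hf_device_map", None) or {}
    devices = {d for d in device_map.values() if d not in ("cpu", "disk")}
    if len(devices) > 1:
        return True
    # a single device that is not GPU 0 must also count as "model parallel", otherwise
    # the Trainer would move the model to GPU 0 and break the Accelerate hooks
    return len(devices) == 1 and next(iter(devices)) not in (0, "cuda:0")
```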
05-26-2023 13:33:58
05-26-2023 13:33:58
_The documentation is not available anymore as the PR was closed or merged._
transformers
23,791
closed
[Bloom] Inconsistent results when testing pretrained model bloom under different dtypes(float16, float32)
### System Info Hi, We suffer inconsistent results when running model 'bloom' under different dtypes(float16, float32) Is that a bug? Environment: - `transformers` version: 4.29.1 - Platform: Linux-3.10.107-1-tlinux2_kvm_guest-0049-x86_64-with-glibc2.10 - Python version: 3.8.13 - Huggingface_hub version: 0.14.1 - Safetensors version: not installed - PyTorch version (GPU?): 1.13.0a0+08820cb (True) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: yes - Using distributed or parallel set-up in script?: no ### Who can help? @sgugger @younesbelkada @thomasw21 @sywangyi @ArthurZucker ### Information - [X] The official example scripts - [ ] My own modified scripts ### Tasks - [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction Here's the testing code. We load the data from https://huggingface.co/bigscience/bloomz-560m ```python3 import torch from transformers import BloomConfig from transformers.models.bloom.modeling_bloom import BloomForCausalLM as BloomForCausalLMHF def test(model_name): config = BloomConfig.from_pretrained(model_name) torch.manual_seed(0) batch_size = 1 max_seqlen = 512 _ = torch.randint(max_seqlen // 2, max_seqlen + 1, (batch_size,), device='cuda') # keep this input_ids = torch.randint(0, config.vocab_size, (batch_size, max_seqlen), dtype=torch.long, device='cuda') bloom_32 = BloomForCausalLMHF.from_pretrained(model_name).cuda().to(dtype=torch.float32) bloom_32.eval() out_32 = bloom_32.transformer(input_ids).last_hidden_state out_32 = out_32.cpu().detach() del bloom_32 bloom_16 = BloomForCausalLMHF.from_pretrained(model_name).cuda().to(dtype=torch.float16) bloom_16.eval() out_16 = bloom_16.transformer(input_ids).last_hidden_state out_16 = out_16.cpu().detach() print(f'max diff: {(out_16 - out_32).abs().max().item()}') print(f'mean diff: {(out_16 - out_32).abs().mean().item()}') test("/path/to/bloomz-560m") ``` ### Expected behavior we run the code on our machine, and the result can be (given the random seed): ``` max diff: 196.88119506835938 mean diff: 0.2683866322040558 ```
05-26-2023 09:16:31
05-26-2023 09:16:31
@Lemon-412 have you tried bloom_16 = BloomForCausalLMHF.from_pretrained(model_name).half().cuda()? I think "model.cuda().to(dtype=torch.float16)" is strange😿<|||||>Hey! Thanks for opening an issue. The best way to test closeness between two tensors is to use `torch.testing.assert_close(tensor_a, tensor_b, atol=..., rtol=...)`, which I would suggest using. This will give: ```python Mismatched elements: 453121 / 524288 (86.4%) Greatest absolute difference: 200.00748252868652 at index (0, 508, 505) (up to 0.001 allowed) Greatest relative difference: 20611.88115471691 at index (0, 457, 98) (up to 0.001 allowed) ``` Which indeed seems to show that there are instabilities. Pinging @thomasw21 in case he has already seen this. <|||||>Wow this seems big! So I might see a few reasons why: - We convert bf16 original weights to fp16 weights, which might come at a huge cost - There are some systems we haven't ported from the original codebase, since we thought they were for backward stability: https://github.com/huggingface/transformers/blob/af45ec0a1611062929ddbf6f15935e01e6cbf1af/src/transformers/models/gpt_bigcode/modeling_gpt_bigcode.py#L133 (this might affect the runs) I tried running with `gpt_bigcode-santacoder` and `opt-350m`, and it indeed seems that `bloom` has a particular issue. Since `santacoder` and `bloomz` are trained in roughly similar codebases, I think we should run `diffs` on their modeling code to see.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.
transformers
23,790
closed
Model.generate stop code execution without any error
### System Info I am pretty new to HF, this is my first attempt to use a model. The problem is `model.generate` kinda abrupt the script execution without any error. Here’s my code: ```python from transformers import RobertaTokenizer, T5ForConditionalGeneration tokenizer = RobertaTokenizer.from_pretrained('Salesforce/codet5-base') model = T5ForConditionalGeneration.from_pretrained('Salesforce/codet5-base') text = "write for cycle" input_ids = tokenizer(text, return_tensors="pt").input_ids print("before") generated_ids = model.generate(input_ids, max_length=8) print("after") print(tokenizer.decode(generated_ids[0], skip_special_tokens=True)) ``` This code gives me no output except “before”. Also, I’ve tried other models with the same result. It looks the issue on my side… I’ll be very grateful for your help. Thanks! Env: ```text created virtual environment CPython3.9.16.final.0-64 in 444ms creator CPython3Posix(dest=/Users/zonder/Documents/PyCharmProjects/huggingface/venv, clear=False, no_vcs_ignore=False, global=False) seeder FromAppData(download=False, pip=bundle, setuptools=bundle, wheel=bundle, via=copy, app_data_dir=/Users/zonder/Library/Application Support/virtualenv) added seed packages: Jinja2==3.1.2, MarkupSafe==2.1.2, PyYAML==6.0, certifi==2023.5.7, charset_normalizer==3.1.0, distlib==0.3.6, filelock==3.12.0, fsspec==2023.5.0, huggingface_hub==0.14.1, idna==3.4, mpmath==1.3.0, networkx==3.1, numpy==1.24.3, packaging==23.1, pip==23.1.2, platformdirs==3.5.1, regex==2023.5.5, requests==2.31.0, safetensors==0.3.1, setuptools==67.7.2, sympy==1.12, tokenizers==0.13.3, torch==2.0.1, tqdm==4.65.0, transformers==4.29.2, typing_extensions==4.6.2, urllib3==2.0.2, virtualenv==20.23.0, wheel==0.40.0 activators BashActivator,CShellActivator,FishActivator,NushellActivator,PowerShellActivator,PythonActivator ``` Logs: ```text loading file vocab.json from cache at /Users/zonder/.cache/huggingface/hub/models--Salesforce--codet5-base/snapshots/4078456db09ba972a3532827a0b5df4da172323c/vocab.json loading file merges.txt from cache at /Users/zonder/.cache/huggingface/hub/models--Salesforce--codet5-base/snapshots/4078456db09ba972a3532827a0b5df4da172323c/merges.txt loading file added_tokens.json from cache at /Users/zonder/.cache/huggingface/hub/models--Salesforce--codet5-base/snapshots/4078456db09ba972a3532827a0b5df4da172323c/added_tokens.json loading file special_tokens_map.json from cache at /Users/zonder/.cache/huggingface/hub/models--Salesforce--codet5-base/snapshots/4078456db09ba972a3532827a0b5df4da172323c/special_tokens_map.json loading file tokenizer_config.json from cache at /Users/zonder/.cache/huggingface/hub/models--Salesforce--codet5-base/snapshots/4078456db09ba972a3532827a0b5df4da172323c/tokenizer_config.json loading configuration file config.json from cache at /Users/zonder/.cache/huggingface/hub/models--Salesforce--codet5-base/snapshots/4078456db09ba972a3532827a0b5df4da172323c/config.json Model config T5Config { "_name_or_path": "/content/drive/MyDrive/CodeT5/pretrained_models/codet5_base", "architectures": [ "T5ForConditionalGeneration" ], "bos_token_id": 1, "d_ff": 3072, "d_kv": 64, "d_model": 768, "decoder_start_token_id": 0, "dense_act_fn": "relu", "dropout_rate": 0.1, "eos_token_id": 2, "feed_forward_proj": "relu", "gradient_checkpointing": false, "id2label": { "0": "LABEL_0" }, "initializer_factor": 1.0, "is_encoder_decoder": true, "is_gated_act": false, "label2id": { "LABEL_0": 0 }, "layer_norm_epsilon": 1e-06, "model_type": "t5", "n_positions": 512, "num_decoder_layers": 12, 
"num_heads": 12, "num_layers": 12, "output_past": true, "pad_token_id": 0, "relative_attention_max_distance": 128, "relative_attention_num_buckets": 32, "task_specific_params": { "summarization": { "early_stopping": true, "length_penalty": 2.0, "max_length": 200, "min_length": 30, "no_repeat_ngram_size": 3, "num_beams": 4, "prefix": "summarize: " }, "translation_en_to_de": { "early_stopping": true, "max_length": 300, "num_beams": 4, "prefix": "translate English to German: " }, "translation_en_to_fr": { "early_stopping": true, "max_length": 300, "num_beams": 4, "prefix": "translate English to French: " }, "translation_en_to_ro": { "early_stopping": true, "max_length": 300, "num_beams": 4, "prefix": "translate English to Romanian: " } }, "torch_dtype": "float32", "transformers_version": "4.30.0.dev0", "use_cache": true, "vocab_size": 32100 } loading weights file pytorch_model.bin from cache at /Users/zonder/.cache/huggingface/hub/models--Salesforce--codet5-base/snapshots/4078456db09ba972a3532827a0b5df4da172323c/pytorch_model.bin Generate config GenerationConfig { "_from_model_config": true, "bos_token_id": 1, "decoder_start_token_id": 0, "eos_token_id": 2, "pad_token_id": 0, "transformers_version": "4.30.0.dev0" } All model checkpoint weights were used when initializing T5ForConditionalGeneration. All the weights of T5ForConditionalGeneration were initialized from the model checkpoint at Salesforce/codet5-base. If your task is similar to the task the model of the checkpoint was trained on, you can already use T5ForConditionalGeneration for predictions without further training. Generation config file not found, using a generation config created from the model config. before Generate config GenerationConfig { "_from_model_config": true, "bos_token_id": 1, "decoder_start_token_id": 0, "eos_token_id": 2, "pad_token_id": 0, "transformers_version": "4.30.0.dev0" } ``` ### Who can help? @gante ### Information - [X] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction Running the script above return no output except “before”. ### Expected behavior At least print "after"
05-26-2023 08:57:20
05-26-2023 08:57:20
Hey @hitriyvalenok 👋 Your snippet seems to be working fine on my end -- have a look at [this notebook](https://colab.research.google.com/drive/1lyXcScMOPhTP1bM0waLakIOBpYq7IjVO?usp=sharing)<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.
transformers
23,789
closed
[OPT] Doc nit, using fast is fine
# What does this PR do? Doc nit: clarifies that `use_fast=False` is not needed when loading OPT's tokenizer; the fast tokenizer works fine. Fixes #23768
05-26-2023 08:53:52
05-26-2023 08:53:52
_The documentation is not available anymore as the PR was closed or merged._
transformers
23,788
closed
[Agents] text_reader does not work
### System Info Google Colab, Python 3.10.11, transformers 4.29.2 ### Who can help? @sgugger @LysandreJik ### Information - [ ] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction ``` from transformers import load_tool tool = load_tool("text-to-speech") audio = tool("This is a text to speech tool") ``` or more end-to-end ``` from transformers import OpenAiAgent agent = OpenAiAgent(model="text-davinci-003", api_key="TOKEN") agent.chat("can you make an audio recording of someone saying 'hi'?") ``` Full error stacktrace ``` ==Explanation from the agent== I will use the tool `text_reader` to read the text "Hi" out loud. ==Code generated by the agent== audio_recording = text_reader(text="Hi") ==Result== ╭─────────────────────────────── Traceback (most recent call last) ────────────────────────────────╮ │ in <cell line: 1>:1 │ │ │ │ /usr/local/lib/python3.10/dist-packages/transformers/tools/agents.py:278 in chat │ │ │ │ 275 │ │ │ │ print("\n\n==Result==") │ │ 276 │ │ │ │ self.cached_tools = resolve_tools(code, self.toolbox, remote=remote, cac │ │ 277 │ │ │ │ self.chat_state.update(kwargs) │ │ ❱ 278 │ │ │ │ return evaluate(code, self.cached_tools, self.chat_state, chat_mode=True │ │ 279 │ │ │ else: │ │ 280 │ │ │ │ tool_code = get_tool_creation_code(code, self.toolbox, remote=remote) │ │ 281 │ │ │ │ return f"{tool_code}\n{code}" │ │ │ │ /usr/local/lib/python3.10/dist-packages/transformers/tools/python_interpreter.py:61 in evaluate │ │ │ │ 58 │ result = None │ │ 59 │ for idx, node in enumerate(expression.body): │ │ 60 │ │ try: │ │ ❱ 61 │ │ │ line_result = evaluate_ast(node, state, tools) │ │ 62 │ │ except InterpretorError as e: │ │ 63 │ │ │ msg = f"Evaluation of the code stopped at line {idx} before the end because │ │ 64 │ │ │ if chat_mode: │ │ │ │ /usr/local/lib/python3.10/dist-packages/transformers/tools/python_interpreter.py:98 in │ │ evaluate_ast │ │ │ │ 95 │ if isinstance(expression, ast.Assign): │ │ 96 │ │ # Assignement -> we evaluate the assignement which should update the state │ │ 97 │ │ # We return the variable assigned as it may be used to determine the final resul │ │ ❱ 98 │ │ return evaluate_assign(expression, state, tools) │ │ 99 │ elif isinstance(expression, ast.Call): │ │ 100 │ │ # Function call -> we return the value of the function call │ │ 101 │ │ return evaluate_call(expression, state, tools) │ │ │ │ /usr/local/lib/python3.10/dist-packages/transformers/tools/python_interpreter.py:139 in │ │ evaluate_assign │ │ │ │ 136 │ │ 137 def evaluate_assign(assign, state, tools): │ │ 138 │ var_names = assign.targets │ │ ❱ 139 │ result = evaluate_ast(assign.value, state, tools) │ │ 140 │ │ │ 141 │ if len(var_names) == 1: │ │ 142 │ │ state[var_names[0].id] = result │ │ │ │ /usr/local/lib/python3.10/dist-packages/transformers/tools/python_interpreter.py:101 in │ │ evaluate_ast │ │ │ │ 98 │ │ return evaluate_assign(expression, state, tools) │ │ 99 │ elif isinstance(expression, ast.Call): │ │ 100 │ │ # Function call -> we return the value of the function call │ │ ❱ 101 │ │ return evaluate_call(expression, state, tools) │ │ 102 │ elif isinstance(expression, ast.Constant): │ │ 103 │ │ # Constant -> just return the value │ │ 104 │ │ return expression.value │ │ │ │ /usr/local/lib/python3.10/dist-packages/transformers/tools/python_interpreter.py:167 in │ │ evaluate_call │ │ │ │ 164 │ # Todo deal with args │ │ 165 │ args = 
[evaluate_ast(arg, state, tools) for arg in call.args] │ │ 166 │ kwargs = {keyword.arg: evaluate_ast(keyword.value, state, tools) for keyword in call │ │ ❱ 167 │ return func(*args, **kwargs) │ │ 168 │ │ 169 │ │ 170 def evaluate_subscript(subscript, state, tools): │ │ │ │ /usr/local/lib/python3.10/dist-packages/transformers/tools/base.py:532 in __call__ │ │ │ │ 529 │ │ │ 530 │ def __call__(self, *args, **kwargs): │ │ 531 │ │ if not self.is_initialized: │ │ ❱ 532 │ │ │ self.setup() │ │ 533 │ │ │ │ 534 │ │ encoded_inputs = self.encode(*args, **kwargs) │ │ 535 │ │ encoded_inputs = send_to_device(encoded_inputs, self.device) │ │ │ │ /usr/local/lib/python3.10/dist-packages/transformers/tools/text_to_speech.py:45 in setup │ │ │ │ 42 │ def setup(self): │ │ 43 │ │ if self.post_processor is None: │ │ 44 │ │ │ self.post_processor = "microsoft/speecht5_hifigan" │ │ ❱ 45 │ │ super().setup() │ │ 46 │ │ │ 47 │ def encode(self, text, speaker_embeddings=None): │ │ 48 │ │ inputs = self.pre_processor(text=text, return_tensors="pt", truncation=True) │ │ │ │ /usr/local/lib/python3.10/dist-packages/transformers/tools/base.py:492 in setup │ │ │ │ 489 │ │ Instantiates the `pre_processor`, `model` and `post_processor` if necessary. │ │ 490 │ │ """ │ │ 491 │ │ if isinstance(self.pre_processor, str): │ │ ❱ 492 │ │ │ self.pre_processor = self.pre_processor_class.from_pretrained(self.pre_proce │ │ 493 │ │ │ │ 494 │ │ if isinstance(self.model, str): │ │ 495 │ │ │ self.model = self.model_class.from_pretrained(self.model, **self.model_kwarg │ │ │ │ /usr/local/lib/python3.10/dist-packages/transformers/processing_utils.py:184 in from_pretrained │ │ │ │ 181 │ │ │ │ [`~feature_extraction_utils.FeatureExtractionMixin.from_pretrained`] and │ │ 182 │ │ │ │ [`~tokenization_utils_base.PreTrainedTokenizer.from_pretrained`]. │ │ 183 │ │ """ │ │ ❱ 184 │ │ args = cls._get_arguments_from_pretrained(pretrained_model_name_or_path, **kwarg │ │ 185 │ │ return cls(*args) │ │ 186 │ │ │ 187 │ @classmethod │ │ │ │ /usr/local/lib/python3.10/dist-packages/transformers/processing_utils.py:228 in │ │ _get_arguments_from_pretrained │ │ │ │ 225 │ │ │ else: │ │ 226 │ │ │ │ attribute_class = getattr(transformers_module, class_name) │ │ 227 │ │ │ │ │ ❱ 228 │ │ │ args.append(attribute_class.from_pretrained(pretrained_model_name_or_path, * │ │ 229 │ │ return args │ │ 230 │ │ │ 231 │ @property │ ╰──────────────────────────────────────────────────────────────────────────────────────────────────╯ TypeError: 'NoneType' object is not callable ``` ### Expected behavior It should generate an audio file
05-26-2023 08:17:42
05-26-2023 08:17:42
I'm not able to reproduce locally or on Colab. Are you using the official [demo Colab](https://colab.research.google.com/drive/1c7MHD-T1forUPGcC_jlwsIptOzpG3hSj)? It's probably some missing dependency, so it would be interesting to know more about the env where this is failing.<|||||>I was using Colab, but it seems the issue was that I did not fully restart the runtime after installing `sentencepiece` (which was importable, but I should have restarted). A clean Colab, installing first, works well.<|||||>Closing this as it's a user error, not an agents error
transformers
23,787
closed
Update trainer.mdx class_weights example
class_weights tensor should follow model's device # What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Fixes # (issue) ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - text models: @ArthurZucker and @younesbelkada - vision models: @amyeroberts - speech models: @sanchit-gandhi - graph models: @clefourrier Library: - flax: @sanchit-gandhi - generate: @gante - pipelines: @Narsil - tensorflow: @gante and @Rocketknight1 - tokenizers: @ArthurZucker - trainer: @sgugger Integrations: - deepspeed: HF Trainer: @stas00, Accelerate: @pacman100 - ray/raytune: @richardliaw, @amogkam Documentation: @sgugger, @stevhliu and @MKhalusova HF projects: - accelerate: [different repo](https://github.com/huggingface/accelerate) - datasets: [different repo](https://github.com/huggingface/datasets) - diffusers: [different repo](https://github.com/huggingface/diffusers) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Maintained examples (not research project or legacy): - Flax: @sanchit-gandhi - PyTorch: @sgugger - TensorFlow: @Rocketknight1 -->
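For reference, a sketch of the pattern the updated example promotes (the three-class weights and label handling here are illustrative, not the exact docs snippet): the weight tensor is created on `model.device` so the loss does not mix CPU and GPU tensors.

```python
import torch
from torch import nn
from transformers import Trainer

class WeightedLossTrainer(Trainer):
    def compute_loss(self, model, inputs, return_outputs=False):
        labels = inputs.pop("labels")
        outputs = model(**inputs)
        logits = outputs.get("logits")
        # place the class weights on the same device as the model instead of the default CPU
        loss_fct = nn.CrossEntropyLoss(weight=torch.tensor([1.0, 2.0, 3.0], device=model.device))
        loss = loss_fct(logits.view(-1, model.config.num_labels), labels.view(-1))
        return (loss, outputs) if return_outputs else loss
```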
05-26-2023 07:52:17
05-26-2023 07:52:17
_The documentation is not available anymore as the PR was closed or merged._
transformers
23,786
closed
Convert Pre-LN Transformers into equivalent Pre-RMSNorm Transformers to accelerate inference and training
### Feature request LayerNorm and RMSNorm are the top two normalization methods in Transformers. We unify them in Pre-Normalization Transformers in our paper https://arxiv.org/abs/2305.14858. The arithmetic equivalence allows us to convert Pre-LN Transformers into Pre-RMSNorm models without impact on the model functionality. Since RMSNorm offers superior efficiency compared to LayerNorm, our method enables faster equivalent inference and training for any Pre-LN Transformers, e.g., GPT, ViT. Our implementation is at https://github.com/ZixuanJiang/pre-rmsnorm-transformer. As the first step, we can start by accelerating the deployment of the existing Pre-LN Transformers. ### Motivation Related GitHub issue: https://github.com/pytorch/pytorch/issues/72643#issue ### Your contribution We have provided our reference implementation at https://github.com/ZixuanJiang/pre-rmsnorm-transformer. We are open to submitting a related PR in the future.
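For readers unfamiliar with RMSNorm, a minimal PyTorch sketch of the operation is below (the Pre-LN to Pre-RMSNorm conversion itself, which absorbs LayerNorm's mean subtraction and bias into neighbouring weights, is described in the linked paper and repo and is not reproduced here):

```python
import torch
from torch import nn

class RMSNorm(nn.Module):
    def __init__(self, dim: int, eps: float = 1e-6):
        super().__init__()
        self.weight = nn.Parameter(torch.ones(dim))
        self.eps = eps

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # unlike LayerNorm, there is no mean subtraction and no bias term, which is
        # where the inference/training speedup comes from
        rms = x.pow(2).mean(dim=-1, keepdim=True).add(self.eps).rsqrt()
        return x * rms * self.weight
```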
05-26-2023 07:19:02
05-26-2023 07:19:02
This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.
transformers
23,785
closed
deepcopy added in pipeline, breaks anything that worked before that used Rlock etc. like streaming generation
### System Info transformers==4.29.2 python 3.10 ### Who can help? @gante @Narsil ### Information - [ ] The official example scripts - [X] My own modified scripts ### Tasks - [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction 1. Pass a streamer instance of `TextIteratorStreamer()`, which holds thread-related state, to `TextGenerationPipeline` as part of the generate kwargs. 2. Observe a pickle error due to the newly added `copy.deepcopy()`. Due to this change by @gante: https://github.com/huggingface/transformers/commit/b369e507aaa78103baf5d3f3563952b44a0408a1 This is a fully blocking change for me. I cannot upgrade to any newer transformers release since this change, because it breaks the streaming scenario. ### Expected behavior I don't think copy.deepcopy() is appropriate. The dictionary itself is passed in as **generate_kwargs and any mutation to the dictionary itself has no effect on the parent dictionary or items passed in. Additionally, the modifications made to the dictionary in that changed code only involve entries *within* the dictionary, not mutable items inside the dictionary, so none of those changes would have any effect on any other block of code. A simple shallow copy is sufficient. But additionally, I can't see any reason for any copy at all. Changes to the dictionary locally have no effect anywhere else.
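A hedged reproduction sketch (the model checkpoint and generation arguments are arbitrary): the streamer holds a queue/lock, so any `copy.deepcopy` of the forwarded generate kwargs fails with a pickling error.

```python
from threading import Thread

from transformers import AutoTokenizer, TextIteratorStreamer, pipeline

model_id = "gpt2"
tokenizer = AutoTokenizer.from_pretrained(model_id)
pipe = pipeline("text-generation", model=model_id, tokenizer=tokenizer)
streamer = TextIteratorStreamer(tokenizer, skip_prompt=True)

# the streamer is passed through as a generate kwarg; on the affected versions this is
# where the deepcopy (and therefore the pickle error) happens
thread = Thread(target=pipe, args=("Hello, my name is",),
                kwargs={"streamer": streamer, "max_new_tokens": 20})
thread.start()
for new_text in streamer:
    print(new_text, end="", flush=True)
thread.join()
```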
05-26-2023 06:32:50
05-26-2023 06:32:50
@pseudotensor you closed this without a comment. Is this issue still relevant ?<|||||>Github trying to be too smart with separate issue in another repo<|||||>@pseudotensor 👋 TIL "The dictionary itself is passed in as **generate_kwargs and any mutation to the dictionary itself has no effect on the parent dictionary or items passed in." In that case you're right, no copy is needed at all!<|||||>@pseudotensor should be fixed now (closing the issue, but feel free to reopen if you find further related issues)
transformers
23,784
closed
CodeT5pEncoderDecoderModel does not support `device_map='auto'` yet.
### System Info Copy-and-paste the text below in your GitHub issue and FILL OUT the two last points. - `transformers` version: 4.24.0 - Platform: Linux-5.15.0-69-generic-x86_64-with-glibc2.31 - Python version: 3.10.9 - Huggingface_hub version: 0.10.1 - PyTorch version (GPU?): 1.12.1 (False) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: yes - Using distributed or parallel set-up in script?: yes ### Who can help? @ArthurZucker @younesbelkada ### Information - [X] The official example scripts - [X] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [X] My own task or dataset (give details below) ### Reproduction 1. Loaded the InstructCodeT5+ model as per https://huggingface.co/Salesforce/instructcodet5p-16b 2. Tried to use LoRA to fine-tune InstructCodeT5+ on NL to code translation task. Code and Traceback given below: ``` from transformers import AutoTokenizer, AutoModelForSeq2SeqLM import torch token_id="Salesforce/instructcodet5p-16b" tokenizer = AutoTokenizer.from_pretrained(token_id) model = AutoModelForSeq2SeqLM.from_pretrained(token_id, torch_dtype=torch.float16, low_cpu_mem_usage=True, trust_remote_code=True, decoder_start_token_id=1, pad_token_id=-100, load_in_8bit=True, device_map='auto').to(device) ``` Error: ``` --------------------------------------------------------------------------- ValueError Traceback (most recent call last) <ipython-input-12-1436d2a64ffb> in <module> 10 11 # load model from the hub ---> 12 model = AutoModelForSeq2SeqLM.from_pretrained(model_id, 13 torch_dtype=torch.float16, 14 low_cpu_mem_usage=True, ~/.local/lib/python3.8/site-packages/transformers/models/auto/auto_factory.py in from_pretrained(cls, pretrained_model_name_or_path, *model_args, **kwargs) 460 class_ref, pretrained_model_name_or_path, **hub_kwargs, **kwargs 461 ) --> 462 return model_class.from_pretrained( 463 pretrained_model_name_or_path, *model_args, config=config, **hub_kwargs, **kwargs 464 ) ~/.cache/huggingface/modules/transformers_modules/instructcodet5p-16b/modeling_codet5p.py in from_pretrained(cls, *args, **kwargs) 855 ) 856 kwargs["_fast_init"] = False --> 857 return super().from_pretrained(*args, **kwargs) 858 859 def forward( ~/.local/lib/python3.8/site-packages/transformers/modeling_utils.py in from_pretrained(cls, pretrained_model_name_or_path, *model_args, **kwargs) 2683 2684 if model._no_split_modules is None: -> 2685 raise ValueError(f"{model.__class__.__name__} does not support `device_map='{device_map}'` yet.") 2686 no_split_modules = model._no_split_modules 2687 if device_map not in ["auto", "balanced", "balanced_low_0", "sequential"]: ValueError: CodeT5pEncoderDecoderModel does not support `device_map='auto'` yet. ``` Seems Like this functionality is not included yet. When to expect it to be added? Thanks in advance! ### Expected behavior Expect the model to get loaded without any error
05-26-2023 05:46:25
05-26-2023 05:46:25
Based on the traceback, I would suggest the authors update their code to add a `_no_split_modules` class variable, which would fix the error. Since we don't necessarily have a say on others' repos, I would suggest you open an issue on the Hub (or even better, a PR) to support this. <|||||>@ArthurZucker it seems like the issue is resolved in https://huggingface.co/Salesforce/instructcodet5p-16b/discussions/1. Thanks for the help!
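A hedged sketch of the suggested change, to be made in the model's remote code (`modeling_codet5p.py`); the block name listed below is an assumption and must match whatever layer class that file actually defines.

```python
from transformers import PreTrainedModel

class CodeT5pEncoderDecoderModel(PreTrainedModel):
    # tells accelerate which submodules must never be split across devices,
    # which is what `device_map="auto"` relies on
    _no_split_modules = ["CodeT5pBlock"]  # hypothetical name; check modeling_codet5p.py
```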
transformers
23,783
closed
Fix no such file or directory error
# What does this PR do? Add the logic to check if the output directory exists before opening the file. It fixes `no such file or directory` error when run [ViT model](https://github.com/huggingface/transformers/tree/8a817e1ecac6a420b1bdc701fcc33535a3b96ff5/examples/tensorflow/image-classification). ``` Traceback (most recent call last): File "/home/ranran/transformers/examples/tensorflow/image-classification/run_image_classification.py", line 564, in <module> main() File "/home/ranran/transformers/examples/tensorflow/image-classification/run_image_classification.py", line 546, in main with open(os.path.join(training_args.output_dir, "all_results.json"), "w") as f: FileNotFoundError: [Errno 2] No such file or directory: './beans_outputs/all_results.json' ``` # How to reproduce: ``` pip install --upgrade pip git clone https://github.com/huggingface/transformers.git cd transformers && pip install . pip install -r examples/tensorflow/_tests_requirements.txt pip install -r examples/tensorflow/image-classification/requirements.txt cd examples/tensorflow/image-classification python3 run_image_classification.py \ --dataset_name beans \ --output_dir ./beans_outputs/ \ --remove_unused_columns False \ --do_train \ --do_eval \ --learning_rate 2e-5 \ --num_train_epochs 1 \ --per_device_train_batch_size 8 \ --per_device_eval_batch_size 8 \ --logging_strategy steps \ --logging_steps 10 \ --evaluation_strategy epoch \ --save_strategy epoch \ --save_total_limit 3 ``` <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Fixes # (issue) ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [X] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? @amyeroberts, @Rocketknight1 Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. 
Please tag fewer than 3 people. Models: - text models: @ArthurZucker and @younesbelkada - vision models: @amyeroberts - speech models: @sanchit-gandhi - graph models: @clefourrier Library: - flax: @sanchit-gandhi - generate: @gante - pipelines: @Narsil - tensorflow: @gante and @Rocketknight1 - tokenizers: @ArthurZucker - trainer: @sgugger Integrations: - deepspeed: HF Trainer: @stas00, Accelerate: @pacman100 - ray/raytune: @richardliaw, @amogkam Documentation: @sgugger, @stevhliu and @MKhalusova HF projects: - accelerate: [different repo](https://github.com/huggingface/accelerate) - datasets: [different repo](https://github.com/huggingface/datasets) - diffusers: [different repo](https://github.com/huggingface/diffusers) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Maintained examples (not research project or legacy): - Flax: @sanchit-gandhi - PyTorch: @sgugger - TensorFlow: @Rocketknight1 -->
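The gist of the fix, as a self-contained sketch (the function name and the results dict are illustrative; the actual PR edits `run_image_classification.py` in place):

```python
import json
import os

def write_all_results(output_dir: str, results: dict) -> None:
    # create the output directory first so open() cannot fail with "No such file or directory"
    os.makedirs(output_dir, exist_ok=True)
    with open(os.path.join(output_dir, "all_results.json"), "w") as f:
        json.dump(results, f, indent=4)

write_all_results("./beans_outputs/", {"eval_accuracy": 0.0})
```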
05-26-2023 05:31:10
05-26-2023 05:31:10
_The documentation is not available anymore as the PR was closed or merged._<|||||>Looks like the CI is complaining about code style now! Can you `pip install transformers[quality]` and then `make style` in the `transformers `directory, then commit/push? That will run our code formatters and hopefully resolve the issue.<|||||>Thank you! I think the issues is resolved.<|||||>Yep, looks good. Thank you for the PR and the quick iteration!
transformers
23,782
closed
[WIP] Add internimage
# What does this PR do? The PR adds internimage to transformers. Addresses #22240 ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR and help with the left out work. Feel free to tag members/contributors who may be interested in your PR. If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. @adit299 @Weiyun1025 @amyeroberts @sgugger @stevhliu @MKhalusova
05-26-2023 04:06:34
05-26-2023 04:06:34
All I did was copy the code from [here](https://huggingface.co/OpenGVLab/internimage_s_1k_224) as mentioned in #22240. I couldn't figure out the test cases so I commented those out for now. I manually imported the model and initialized an instance to see if it works and it did. I will fix the failing documentation testcases. Might need some help with the technical details of the model.<|||||>The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_23782). All of your documentation changes will be reflected on that endpoint.<|||||>Hi @millionhz, thanks for opening this PR and for the work adding this model! The linked to repo already has the [model code on the hub](https://huggingface.co/OpenGVLab/internimage_s_1k_224/blob/main/intern_image.py), and the model can be loaded directly with: ```python from transformers import AutoModel model = AutoModel.from_pretrained("OpenGVLab/internimage_s_1k_224", trust_remote_code=True) ``` so a PR to add into the transformers repo isn't necessary. <|||||>@amyeroberts Oh. I opened the PR because of #22240. I close it if its not needed.<|||||>@millionhz - no worries, it's not obvious from the issue. I'll comment on the issue and we can close both that and this PR. It's great that you're wanting to contribute and there's still plenty of ways you can in the library - addressing models or good first issues. We're looking forward to any future PRs :) Just remember to read the contributor's guideline and template carefully, e.g. for this PR more that 3 people were tagged.
transformers
25,285
open
Sentence start got unexpected space
![image](https://github.com/huggingface/tokenizers/assets/21303438/9a5464e5-89c0-47a3-be97-a8b8a8abecc2) I got some iput_ids which encoded, after append some tgt_ids to input_ids, the new decoded sentences added weired spaces. Here is the code: ```python from transformers import LlamaTokenizer # any LLama tokenizer tokenizer = LlamaTokenizer.from_pretrained("checkpoints/BiLLa-7B-LLM/tokenizer.model") prefix = "Human: \n用python写一段快排\n\nAssistant: \n" output = "OK, I will do for u!" sentence_ids = tokenizer.encode(prefix, add_special_tokens=False) b = tokenizer.decode(sentence_ids) print(sentence_ids) print(b) input_ids = sentence_ids + tokenizer.encode(output, add_special_tokens=False) input_ids += [tokenizer.eos_token_id] o = tokenizer.decode(input_ids) print(input_ids) print() print(o) ``` My output: ``` [12968, 29901, 29871, 13, 30406, 4691, 31479, 30287, 31559, 32815, 32996, 13, 13, 7900, 22137, 29901, 29871, 13] Human: 用python写一段快排 Assistant: [12968, 29901, 29871, 13, 30406, 4691, 31479, 30287, 31559, 32815, 32996, 13, 13, 7900, 22137, 29901, 29871, 13, 9280, 29892, 306, 674, 437, 363, 318, 29991, 2] Human: 用python写一段快排 Assistant: OK, I will do for u!</s> ``` As you can see, both before Human and OK, there is an space, but actually not expected. Why?
05-26-2023 03:45:08
05-26-2023 03:45:08
Two things could be problematic here: 1. a token that has a prefix space (the ▁ metasymbol for Unigram/SentencePiece, or the Ġ symbol for byte-level BPE, etc.); 2. somewhere in the tokenizer chain there is a module that is adding a prefix (add_prefix_space = True). I checked and token 12968 does not have a prefix space, so it is not 1.
transformers
23,781
open
BART-fusion
### Model description BART- fusion, a novel model for generating lyric interpretations from lyrics and music audio that combines a large-scale pre-trained language model with an audio encoder. It uses a cross-modal attention module to incorporate the audio representation into the lyrics representation to help the pre-trained language model understand the song from an audio perspective, while preserving the language model’s original generative performance. Please see the paper here: https://arxiv.org/abs/2208.11671 ### Open source status - [X] The model implementation is available - [x] The model weights are available ### Provide useful links for the implementation Here is the code repository for the paper: https://github.com/ldzhangyx/BART-fusion/tree/main. The weights should be available in the checkpoints: https://drive.google.com/drive/folders/18EUUx-KT9xGJ1uq2UoOgj0X9BpngNn_T
05-26-2023 03:38:52
05-26-2023 03:38:52
@sgugger what do you think of this request? If you think it's a good addition to the repo, I can take this task on.<|||||>cc @sanchit-gandhi and @hollance <|||||>Hey @jnj2102! Thanks for the feature request - while I think it's a cool model, I'm not sure it's best suited in the `transformers` library directly since the original repository has quite low usage (20 stars) and the paper as well (4 citations). If you're really keen on using this model, you could explore adding it to the Hub, e.g. as done with the [MERT](https://huggingface.co/m-a-p/MERT-v1-95M) model. WDYT?<|||||>Hi! No problem. How do you add a model to the Hub? I’ll check out the MERT model too. On Fri, Jun 2, 2023 at 11:29 AM Sanchit Gandhi ***@***.***> wrote: > Hey @jnj2102 <https://github.com/jnj2102>! Thanks for the feature request > - while I think it's a cool model, I'm not sure it's best suited in the > transformers library directly since the original repository has quite low > usage (20 stars) and the paper as well (4 citations). If you're really keen > on using this model, you could explore adding it to the Hub, e.g. as done > with the MERT <https://huggingface.co/m-a-p/MERT-v1-95M> model. WDYT? > > — > Reply to this email directly, view it on GitHub > <https://github.com/huggingface/transformers/issues/23781#issuecomment-1573929450>, > or unsubscribe > <https://github.com/notifications/unsubscribe-auth/ABP2N4VD6EPP5HNQTGMZ6ALXJIBFDANCNFSM6AAAAAAYPVMXB4> > . > You are receiving this because you were mentioned.Message ID: > ***@***.***> > -- Best wishes, Jami <|||||>Hey Jami! Awesome - there's info on using custom code on the Hub here: https://huggingface.co/docs/transformers/v4.27.1/en/custom_models#using-a-model-with-custom-code. Let me know if you have any questions, more than happy to help here!
transformers
23,780
closed
trainer evaluation gets stuck when using dynamic padding in distributed evaluation
### System Info transformers version=4.28.1 deepspeed=0.9.2 As far as I know, the trainer evaluate func is distributed. When I use longest padding in every eval batch, the program gets stuck. This doesn't happen when I use max_length padding. I guess the processes get stuck because of the gather operation between different-length tensors from different processes. Please fix this bug. ### Who can help? _No response_ ### Information - [ ] The official example scripts - [X] My own modified scripts ### Tasks - [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction 1. use longest padding 2. trainer evaluate on multi-gpus with deepspeed ### Expected behavior the program doesn't get stuck
05-26-2023 02:48:15
05-26-2023 02:48:15
Without a code reproducer, there is nothing we can do. The Trainer will pad samples to the same length before gathering them, so this is already accounted for.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.
transformers
23,779
closed
Consider adding some logic here, responding with text like "I don't know" when the model output has probabilities lower than a threshold, which means it is not that confident.
https://github.com/huggingface/transformers/blob/f67dac97bdc63874f2288546b3fa87e69d2ea1c8/src/transformers/generation/utils.py#LL2650C69-L2650C69 Consider adding some logic here, responding with text like "I don't know" when the model output has probabilities lower than a threshold, which means it is not that confident.
05-26-2023 02:39:57
05-26-2023 02:39:57
stopping_criteria may work
transformers
23,778
closed
Training ByT5 for next response generation
Hi, I am trying to train a ByT5 model for text2text generation specifically, given previous chat history the objective is to produce a response for the input. I understand that I can use decoder-only models for the task, but we need to use the byte-level information which we will be using in the future. For training purposes, I have obtained a dataset for fine-tuning and used the following configuration: ``` --model_name_or_path google/byt5-base \ --do_train \ --do_eval \ --do_predict \ --output_dir ./t5-base_50k_tast10 \ --per_device_train_batch_size=4 \ --per_device_eval_batch_size=16 \ --predict_with_generate \ --eval_steps 1 \ --greater_is_better True \ --load_best_model_at_end True\ --logging_steps 4 \ --metric_for_best_model bleu_2 \ --num_train_epochs 100 \ --save_steps 1 \ --save_total_limit 10 \ --evaluation_strategy epoch \ --save_strategy epoch \ --max_source_length 1000 \ --max_target_length 200 \ --learning_rate 5e-5 \ ``` My code to fine-tune looks like the following: ``` config = AutoConfig.from_pretrained( model_args.config_name if model_args.config_name else model_args.model_name_or_path, cache_dir=model_args.cache_dir) tokenizer = AutoTokenizer.from_pretrained( model_args.tokenizer_name if model_args.tokenizer_name else model_args.model_name_or_path, cache_dir=model_args.cache_dir, use_fast=True, truncation_side='left') model = AutoModelForSeq2SeqLM.from_pretrained( model_args.model_name_or_path, config=config, cache_dir=model_args.cache_dir, ) embedding_size = model.get_input_embeddings().weight.shape[0] if(len(tokenizer)>embedding_size): model.resize_token_embeddings(len(tokenizer)) if model.config.decoder_start_token_id is None: raise ValueError("Make sure that `config.decoder_start_token_id` is correctly defined") max_target_length = data_args.max_target_length padding = "max_length" if data_args.pad_to_max_length else False def preprocess(text): ... # some preprocessing code def preprocess_function(examples): ... #call preprocess above and tokenize model_inputs = tokenizer(inputs, max_length=data_args.max_source_length, padding='longest', truncation=True, return_tensors="pt") labels = tokenizer(text_target = targets, max_length=max_target_length, padding='longest', truncation=True, return_tensors="pt") ... if(training_args.do_train): train_dataset = train_dataset.map(preprocess_function, batched=True, num_proc=data_args.preprocessing_num_workers, desc="Running tokenizer on train dataset",remove_columns=column_names, load_from_cache_file=False) if(training_args.do_eval): eval_dataset = val_dataset.map(preprocess_function, batched=True, num_proc=data_args.preprocessing_num_workers, desc="Running tokenizer on validation dataset", remove_columns=column_names, load_from_cache_file=False) if(training_args.do_predict): test_dataset = test_dataset.map(preprocess_function, batched=True, num_proc=data_args.preprocessing_num_workers, desc="Running tokenizer on prediction dataset",remove_columns=column_names, load_from_cache_file=False) label_pad_token_id = -100 if data_args.ignore_pad_token_for_loss else tokenizer.pad_token_id data_collator = DataCollatorForSeq2Seq(tokenizer, model=model, label_pad_token_id=label_pad_token_id, pad_to_multiple_of=8 if training_args.fp16 else None) metric = evaluate.load("bleu") def postprocess_text(preds, labels): ...#post process stuff return preds, labels def compute_metrics(eval_preds): ... 
#get bleu and other metrics return result training_args.generation_max_length = training_args.generation_max_length if training_args.generation_max_length is not None else data_args.val_max_target_length training_args.generation_num_beams = data_args.num_beams if data_args.num_beams is not None else training_args.generation_num_beams trainer = Seq2SeqTrainer( model=model, args=training_args, train_dataset = train_dataset if training_args.do_train else None, eval_dataset = eval_dataset if training_args.do_eval else None, tokenizer=tokenizer, data_collator=data_collator, compute_metrics=compute_metrics if training_args.predict_with_generate else None, callbacks = [EarlyStoppingCallback(early_stopping_patience=5)] ) if training_args.do_train: checkpoint = None if(training_args.resume_from_checkpoint is not None): checkpoint = training_args.resume_from_checkpoint elif last_checkpoint is not None: checkpoint = last_checkpoint train_result = trainer.train(resume_from_checkpoint=checkpoint) trainer.save_model() metrics = train_result.metrics trainer.log_metrics("train", metrics) trainer.save_metrics("train", metrics) trainer.save_state() ``` However, the problem with the above code is after a lot of fine-tuning the model generates text which is repeated again and again and sometimes copies from the input or generates responses that are not relevant or related to the input. I have tried contrastive search, beam search, etc. also but the response generated by the model is still gibberish. Any suggestions on how to improve ByT5's capability to do the task? As I understand, T5-based models (or ByT5) perform well on many seq2seq tasks such as Text2SQL, etc. so they should at least generate relevant responses to the input for this task too. Please let me know, any suggestions you have. @ArthurZucker @younesbelkada I am also attaching some sample responses generated by the model. <img width="1204" alt="Screenshot 2023-05-25 at 10 24 34 PM" src="https://github.com/huggingface/transformers/assets/19395011/f67ade1b-99cc-4adc-95f6-7eecc1077bd0">
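For completeness, here is the kind of decoding-side mitigation I can try via `generate` (these are standard transformers generation arguments, shown as a sketch; `inputs` stands for a tokenized batch, and this does not fix the underlying training issue):

```python
output = model.generate(
    inputs["input_ids"],
    max_new_tokens=200,
    num_beams=4,
    no_repeat_ngram_size=3,    # blocks verbatim n-gram repetition
    repetition_penalty=1.2,    # penalizes already-generated tokens
    early_stopping=True,
)
```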
05-26-2023 02:27:43
05-26-2023 02:27:43
Hey! Thanks for reporting, however urgent this is, please refrain from pinging as many people as that. All the questions related to `how to train` or `improve my training` should be asked on the [forum](https://discuss.huggingface.co/), as they are not bugs and the community is more adept to help you there.
transformers
23,777
closed
transformers-cli serve doesn't support multi-workers
### System Info transformers version=4.28.1 error: You must pass the application as an import string to enable "reload" or "workers" transformers-cli serve uses FastAPI with uvicorn; the application adds routes in a class rather than with decorators, so uvicorn's run cannot import the application by string. I don't have a solution yet. Does anyone have ideas for how to fix this? ### Who can help? _No response_ ### Information - [X] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction transformers-cli serve --workers 2 error: You must pass the application as an import string to enable "reload" or "workers" ### Expected behavior the cli command supports multi-workers
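For context, uvicorn can only spawn workers when it is given the app as an import string it can re-import in each worker process; a generic sketch of that pattern (module and app names here are made up, this is not the transformers-cli code):

```python
import uvicorn

# "my_serve_module:app" must point to a module-level FastAPI instance so that each
# worker process can re-import it; passing the app object itself disables workers.
uvicorn.run("my_serve_module:app", host="127.0.0.1", port=8888, workers=2)
```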
05-26-2023 02:21:45
05-26-2023 02:21:45
This is not an API we still maintain FYI.
transformers
23,776
open
Saving Models Broke
### System Info I get this error when building a hugginface NLLB model on a ClearML docker image as per this bug in my repo (https://github.com/sillsdev/machine.py/issues/14): ``` 50% 500/1000 [08:59<09:03, 1.09s/it][INFO|trainer.py:2904] 2023-05-24 13:04:36,643 >> Saving model checkpoint to /root/machine/builds/646e3f95cf5823db7b5edd92/model/checkpoint-500 Traceback (most recent call last): File "/root/.clearml/venvs-builds/3.8/code/untitled.py", line 11, in <module> run(args) File "/usr/local/lib/python3.8/dist-packages/machine/jobs/build_nmt_engine.py", line 56, in run job.run(check_canceled) File "/usr/local/lib/python3.8/dist-packages/machine/jobs/nmt_engine_build_job.py", line 54, in run model_trainer.train(check_canceled=check_canceled) File "/usr/local/lib/python3.8/dist-packages/machine/translation/huggingface/hugging_face_nmt_model_trainer.py", line 263, in train train_result = self._trainer.train(resume_from_checkpoint=ckpt) File "/usr/local/lib/python3.8/dist-packages/transformers/trainer.py", line 1664, in train return inner_training_loop( File "/usr/local/lib/python3.8/dist-packages/transformers/trainer.py", line 2019, in _inner_training_loop self._maybe_log_save_evaluate(tr_loss, model, trial, epoch, ignore_keys_for_eval) File "/usr/local/lib/python3.8/dist-packages/transformers/trainer.py", line 2308, in _maybe_log_save_evaluate self._save_checkpoint(model, trial, metrics=metrics) File "/usr/local/lib/python3.8/dist-packages/transformers/trainer.py", line 2365, in _save_checkpoint self.save_model(output_dir, _internal_call=True) File "/usr/local/lib/python3.8/dist-packages/transformers/trainer.py", line 2866, in save_model self._save(output_dir) File "/usr/local/lib/python3.8/dist-packages/transformers/trainer.py", line 2922, in _save self.model.save_pretrained( File "/usr/local/lib/python3.8/dist-packages/transformers/modeling_utils.py", line 1734, in save_pretrained model_to_save.config.save_pretrained(save_directory) File "/usr/local/lib/python3.8/dist-packages/transformers/configuration_utils.py", line 457, in save_pretrained self.to_json_file(output_config_file, use_diff=True) File "/usr/local/lib/python3.8/dist-packages/transformers/configuration_utils.py", line 850, in to_json_file writer.write(self.to_json_string(use_diff=use_diff)) File "/usr/local/lib/python3.8/dist-packages/transformers/configuration_utils.py", line 836, in to_json_string return json.dumps(config_dict, indent=2, sort_keys=True) + "\n" File "/usr/lib/python3.8/json/__init__.py", line 234, in dumps return cls( File "/usr/lib/python3.8/json/encoder.py", line 201, in encode chunks = list(chunks) File "/usr/lib/python3.8/json/encoder.py", line 431, in _iterencode yield from _iterencode_dict(o, _current_indent_level) File "/usr/lib/python3.8/json/encoder.py", line 405, in _iterencode_dict yield from chunks File "/usr/lib/python3.8/json/encoder.py", line 353, in _iterencode_dict items = sorted(dct.items()) TypeError: '<' not supported between instances of 'str' and 'int' ``` Here is some analysis -> The normal dict has, among other things: ``` "id2label": { "0": "LABEL_0", "1": "LABEL_1" } ``` But after being trained (and possible ClearML doing something), it becomes: ``` "id2label": { 0: "LABEL_0", 1: "LABEL_1", "0": "LABEL_0", "1": "LABEL_1" } ``` Which causes the sorting to break (and the above error). 
**Ideas:** * There were no labels passed to it, but labels are auto-created based upon this code: * If there is no id2label mapping, num_labels is set to 2: https://github.com/huggingface/transformers/blob/f67dac97bdc63874f2288546b3fa87e69d2ea1c8/src/transformers/configuration_utils.py#L319-L331 * If num_labels is 2, then the above labels are created: https://github.com/huggingface/transformers/blob/f67dac97bdc63874f2288546b3fa87e69d2ea1c8/src/transformers/configuration_utils.py#L418-L421 * Likely this code got called twice - and in between, the ints got converted to strings due to label handling. What is a good path forward? ### Who can help? @sgugger ### Information - [ ] The official example scripts - [X] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [X] My own task or dataset (give details below) ### Reproduction This is pretty custom and would not work without all the proper licenses - but to reproduce: * Spin up the docker-compose environment from: https://github.com/sillsdev/serval * Set up a ClearML docker agent using a docker image made from the master branch of: https://github.com/sillsdev/machine.py * Run the NmtBatch E2E test from: https://github.com/sillsdev/serval * The error occurs in the docker container in ClearML. ### Expected behavior * Don't crash. * Likely, don't auto-create labels if none are given (the case here) * Or, if you are going to convert all the numbers to strings when saving to a dictionary, account for the id2label fields properly, e.g. always use strings or convert back to ints when loading from a dict, etc.
05-26-2023 02:01:54
05-26-2023 02:01:54
I tested out a workaround - specifically passing empty id2label/label2id mappings - and it worked. ``` AutoConfig.from_pretrained(model_name, label2id={}, id2label={}, num_labels=0) ``` This probably should have a longer-term fix - possibly both not auto-creating meaningless labels and making the save/restore not cause conflicts between int and str dict keys.<|||||>What is a reproducer for the first issue?<|||||>Hey @johnml1135, what version of clearml were you using?<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.<|||||>I believe the most recent version of ClearML - though the source of the error can be seen in the code referenced.
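One possible shape for the longer-term fix mentioned above (a sketch only; normalizing key types before serialization is an assumption, not the implemented solution):

```python
# hypothetical normalization applied to the config before json.dumps(..., sort_keys=True),
# so that id2label never mixes int and str keys:
config.id2label = {int(k): v for k, v in config.id2label.items()}
config.label2id = {str(k): int(v) for k, v in config.label2id.items()}
```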
transformers
23,775
closed
expose safe_serialization argument in the pipeline API
# What does this PR do? expose safe_serialization argument of PreTrainedModel and TFPreTrainedModel in the save_pretrained of the pipeline API ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. @Narsil
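A rough usage sketch of the exposed argument (the model name here is just an example):

```python
from transformers import pipeline

pipe = pipeline("sentiment-analysis", model="distilbert-base-uncased-finetuned-sst-2-english")
# forwards the new kwarg to the underlying model's save_pretrained
pipe.save_pretrained("./saved_pipeline", safe_serialization=True)
```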
05-25-2023 22:13:43
05-25-2023 22:13:43
_The documentation is not available anymore as the PR was closed or merged._
transformers
23,774
closed
Fix RWKV backward on GPU
# What does this PR do? Fixes the backward pass for RWKV on GPU. The function backward was not adapted to the revamp of the forward, my bad.
05-25-2023 21:00:31
05-25-2023 21:00:31
_The documentation is not available anymore as the PR was closed or merged._
transformers
23,773
open
Implement DINO V2
### Model description Code and model are available here: https://github.com/facebookresearch/dinov2 Full paper here: https://arxiv.org/abs/2304.07193 The implementation seems fairly simple. Most layers are already implemented within the transformers library (it's just a ViT). There are some changes compared to DINO (which is implemented already), such as SwiGLU and LayerScale. According to #20403, SwiGLU is already implemented, though the original code uses xformers's SwiGLU. DINO V2 also has a different license, as listed here: https://github.com/facebookresearch/dinov2/blob/main/LICENSE It is NonCommercial. ### Open source status - [X] The model implementation is available - [X] The model weights are available ### Provide useful links for the implementation _No response_ If there's no issue with the license, I can make a PR for the model.
05-25-2023 19:44:26
05-25-2023 19:44:26
Same as issue as mentioned at #23739 <|||||>> Same as issue as mentioned at #23739 Oops, I didn't notice. You want to port the weight/code over? Will likely have to add a small layer or two to transformer library and write a weight convert script just like https://github.com/huggingface/transformers/blob/main/src/transformers/models/vit/convert_dino_to_pytorch.py<|||||>> > Same as issue as mentioned at #23739 > > > > Oops, I didn't notice. You want to port the weight/code over? Will likely have to add a small layer or two to transformer library and write a weight convert script just like https://github.com/huggingface/transformers/blob/main/src/transformers/models/vit/convert_dino_to_pytorch.py If you don't mind I'd like to take this one. Also, thanks for the tips I'll take a look at the reference you mentioned <|||||>> > > Same as issue as mentioned at #23739 > > > > > > Oops, I didn't notice. You want to port the weight/code over? Will likely have to add a small layer or two to transformer library and write a weight convert script just like https://github.com/huggingface/transformers/blob/main/src/transformers/models/vit/convert_dino_to_pytorch.py > > If you don't mind I'd like to take this one. Also, thanks for the tips I'll take a look at the reference you mentioned Sure, please do. I look forward to it! If there's any new layer implemented, will you also add the corresponding flax implementation for those new layer?
transformers
23,768
closed
use_fast=False when loading OPT's tokenizer?
### System Info platform==Ubuntu 18.04.01 python==3.10 transformers==4.29.1 ### Who can help? @sgugger @stevhliu @MK ### Information - [ ] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction It is shown in the OPT model [documentation](https://huggingface.co/docs/transformers/model_doc/opt) (in Tips) that it is required to pass `use_fast=False` when loading an OPT tokenizer, as the OPT tokenizer will add `</s>` to the beginning of every prompt. I made a trial: ```python >>> import transformers >>> tokenizer = transformers.AutoTokenizer.from_pretrained("facebook/opt-1.3b", use_fast=False) >>> tokenizer_fast = transformers.AutoTokenizer.from_pretrained("facebook/opt-1.3b", use_fast=True) >>> text = "I like you." >>> tokenizer(text) >>> {'input_ids': [2, 100, 101, 47, 4], 'attention_mask': [1, 1, 1, 1, 1]} >>> tokenizer_fast(text) >>> {'input_ids': [2, 100, 101, 47, 4], 'attention_mask': [1, 1, 1, 1, 1]} ``` `</s>` is correctly added and no difference is observed. ### Expected behavior Is the tip wrong, or is `use_fast=False` actually required in some other cases?
05-25-2023 18:16:13
05-25-2023 18:16:13
cc @ArthurZucker <|||||>Hey! Thanks for reporting. Pretty sure the doc is wrong, but `use_fast=True` used to not be supported for OPT, which could explain this.
transformers
23,767
closed
Bump tornado from 6.0.4 to 6.3.2 in /examples/research_projects/visual_bert
Bumps [tornado](https://github.com/tornadoweb/tornado) from 6.0.4 to 6.3.2. <details> <summary>Changelog</summary> <p><em>Sourced from <a href="https://github.com/tornadoweb/tornado/blob/master/docs/releases.rst">tornado's changelog</a>.</em></p> <blockquote> <h1>Release notes</h1> <p>.. toctree:: :maxdepth: 2</p> <p>releases/v6.3.2 releases/v6.3.1 releases/v6.3.0 releases/v6.2.0 releases/v6.1.0 releases/v6.0.4 releases/v6.0.3 releases/v6.0.2 releases/v6.0.1 releases/v6.0.0 releases/v5.1.1 releases/v5.1.0 releases/v5.0.2 releases/v5.0.1 releases/v5.0.0 releases/v4.5.3 releases/v4.5.2 releases/v4.5.1 releases/v4.5.0 releases/v4.4.3 releases/v4.4.2 releases/v4.4.1 releases/v4.4.0 releases/v4.3.0 releases/v4.2.1 releases/v4.2.0 releases/v4.1.0 releases/v4.0.2 releases/v4.0.1 releases/v4.0.0 releases/v3.2.2 releases/v3.2.1 releases/v3.2.0 releases/v3.1.1 releases/v3.1.0 releases/v3.0.2 releases/v3.0.1 releases/v3.0.0 releases/v2.4.1 releases/v2.4.0 releases/v2.3.0 releases/v2.2.1 releases/v2.2.0 releases/v2.1.1</p> <!-- raw HTML omitted --> </blockquote> <p>... (truncated)</p> </details> <details> <summary>Commits</summary> <ul> <li><a href="https://github.com/tornadoweb/tornado/commit/34f5c1cf2696afec5532ca9e870ba32cbc7fee27"><code>34f5c1c</code></a> Version 6.3.2</li> <li><a href="https://github.com/tornadoweb/tornado/commit/32ad07c54e607839273b4e1819c347f5c8976b2f"><code>32ad07c</code></a> web: Fix an open redirect in StaticFileHandler</li> <li><a href="https://github.com/tornadoweb/tornado/commit/e0fa53ee96db720dc7800d0248c39a4ffb8911e9"><code>e0fa53e</code></a> Merge pull request <a href="https://redirect.github.com/tornadoweb/tornado/issues/3257">#3257</a> from bdarnell/build-workflow-wstest-warning</li> <li><a href="https://github.com/tornadoweb/tornado/commit/f5a1d5c7e235ad8860a4c2c5f259a43692bcbaab"><code>f5a1d5c</code></a> ci: Only run pypi actions from the main repo</li> <li><a href="https://github.com/tornadoweb/tornado/commit/1849ef6c48415ef8f5fecbd47d9f68225588507c"><code>1849ef6</code></a> test: Close a websocket client that causes occasional test failures</li> <li><a href="https://github.com/tornadoweb/tornado/commit/fcb09eba4bd45c2ebfb6356a38acdb3b4450c0d8"><code>fcb09eb</code></a> Merge pull request <a href="https://redirect.github.com/tornadoweb/tornado/issues/3256">#3256</a> from bdarnell/build-workflow-qemu</li> <li><a href="https://github.com/tornadoweb/tornado/commit/c3d50f41a29cda5f76031c60cf7902b175b79479"><code>c3d50f4</code></a> ci: Update setup-qemu-action version</li> <li><a href="https://github.com/tornadoweb/tornado/commit/419838b9bcc51445241630def0478f1fbaa61b4b"><code>419838b</code></a> Merge pull request <a href="https://redirect.github.com/tornadoweb/tornado/issues/3255">#3255</a> from bdarnell/bump-version-6.3.1</li> <li><a href="https://github.com/tornadoweb/tornado/commit/cd5b9fcf4ac16c3f5480b3d8ae81b4103c0e7549"><code>cd5b9fc</code></a> Bump version to 6.3.1</li> <li><a href="https://github.com/tornadoweb/tornado/commit/245334401570a40ba01813d9adb14976c50d77dd"><code>2453344</code></a> Merge pull request <a href="https://redirect.github.com/tornadoweb/tornado/issues/3254">#3254</a> from bdarnell/fix-set-cookie-case</li> <li>Additional commits viewable in <a href="https://github.com/tornadoweb/tornado/compare/v6.0.4...v6.3.2">compare view</a></li> </ul> </details> <br /> [![Dependabot compatibility 
score](https://dependabot-badges.githubapp.com/badges/compatibility_score?dependency-name=tornado&package-manager=pip&previous-version=6.0.4&new-version=6.3.2)](https://docs.github.com/en/github/managing-security-vulnerabilities/about-dependabot-security-updates#about-compatibility-scores) Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting `@dependabot rebase`. [//]: # (dependabot-automerge-start) [//]: # (dependabot-automerge-end) --- <details> <summary>Dependabot commands and options</summary> <br /> You can trigger Dependabot actions by commenting on this PR: - `@dependabot rebase` will rebase this PR - `@dependabot recreate` will recreate this PR, overwriting any edits that have been made to it - `@dependabot merge` will merge this PR after your CI passes on it - `@dependabot squash and merge` will squash and merge this PR after your CI passes on it - `@dependabot cancel merge` will cancel a previously requested merge and block automerging - `@dependabot reopen` will reopen this PR if it is closed - `@dependabot close` will close this PR and stop Dependabot recreating it. You can achieve the same result by closing it manually - `@dependabot ignore this major version` will close this PR and stop Dependabot creating any more for this major version (unless you reopen the PR or upgrade to it yourself) - `@dependabot ignore this minor version` will close this PR and stop Dependabot creating any more for this minor version (unless you reopen the PR or upgrade to it yourself) - `@dependabot ignore this dependency` will close this PR and stop Dependabot creating any more for this dependency (unless you reopen the PR or upgrade to it yourself) You can disable automated security fix PRs for this repo from the [Security Alerts page](https://github.com/huggingface/transformers/network/alerts). </details>
05-25-2023 17:50:22
05-25-2023 17:50:22
_The documentation is not available anymore as the PR was closed or merged._
transformers
23,766
closed
Bump tornado from 6.0.4 to 6.3.2 in /examples/research_projects/lxmert
Bumps [tornado](https://github.com/tornadoweb/tornado) from 6.0.4 to 6.3.2. <details> <summary>Changelog</summary> <p><em>Sourced from <a href="https://github.com/tornadoweb/tornado/blob/master/docs/releases.rst">tornado's changelog</a>.</em></p> <blockquote> <h1>Release notes</h1> <p>.. toctree:: :maxdepth: 2</p> <p>releases/v6.3.2 releases/v6.3.1 releases/v6.3.0 releases/v6.2.0 releases/v6.1.0 releases/v6.0.4 releases/v6.0.3 releases/v6.0.2 releases/v6.0.1 releases/v6.0.0 releases/v5.1.1 releases/v5.1.0 releases/v5.0.2 releases/v5.0.1 releases/v5.0.0 releases/v4.5.3 releases/v4.5.2 releases/v4.5.1 releases/v4.5.0 releases/v4.4.3 releases/v4.4.2 releases/v4.4.1 releases/v4.4.0 releases/v4.3.0 releases/v4.2.1 releases/v4.2.0 releases/v4.1.0 releases/v4.0.2 releases/v4.0.1 releases/v4.0.0 releases/v3.2.2 releases/v3.2.1 releases/v3.2.0 releases/v3.1.1 releases/v3.1.0 releases/v3.0.2 releases/v3.0.1 releases/v3.0.0 releases/v2.4.1 releases/v2.4.0 releases/v2.3.0 releases/v2.2.1 releases/v2.2.0 releases/v2.1.1</p> <!-- raw HTML omitted --> </blockquote> <p>... (truncated)</p> </details> <details> <summary>Commits</summary> <ul> <li><a href="https://github.com/tornadoweb/tornado/commit/34f5c1cf2696afec5532ca9e870ba32cbc7fee27"><code>34f5c1c</code></a> Version 6.3.2</li> <li><a href="https://github.com/tornadoweb/tornado/commit/32ad07c54e607839273b4e1819c347f5c8976b2f"><code>32ad07c</code></a> web: Fix an open redirect in StaticFileHandler</li> <li><a href="https://github.com/tornadoweb/tornado/commit/e0fa53ee96db720dc7800d0248c39a4ffb8911e9"><code>e0fa53e</code></a> Merge pull request <a href="https://redirect.github.com/tornadoweb/tornado/issues/3257">#3257</a> from bdarnell/build-workflow-wstest-warning</li> <li><a href="https://github.com/tornadoweb/tornado/commit/f5a1d5c7e235ad8860a4c2c5f259a43692bcbaab"><code>f5a1d5c</code></a> ci: Only run pypi actions from the main repo</li> <li><a href="https://github.com/tornadoweb/tornado/commit/1849ef6c48415ef8f5fecbd47d9f68225588507c"><code>1849ef6</code></a> test: Close a websocket client that causes occasional test failures</li> <li><a href="https://github.com/tornadoweb/tornado/commit/fcb09eba4bd45c2ebfb6356a38acdb3b4450c0d8"><code>fcb09eb</code></a> Merge pull request <a href="https://redirect.github.com/tornadoweb/tornado/issues/3256">#3256</a> from bdarnell/build-workflow-qemu</li> <li><a href="https://github.com/tornadoweb/tornado/commit/c3d50f41a29cda5f76031c60cf7902b175b79479"><code>c3d50f4</code></a> ci: Update setup-qemu-action version</li> <li><a href="https://github.com/tornadoweb/tornado/commit/419838b9bcc51445241630def0478f1fbaa61b4b"><code>419838b</code></a> Merge pull request <a href="https://redirect.github.com/tornadoweb/tornado/issues/3255">#3255</a> from bdarnell/bump-version-6.3.1</li> <li><a href="https://github.com/tornadoweb/tornado/commit/cd5b9fcf4ac16c3f5480b3d8ae81b4103c0e7549"><code>cd5b9fc</code></a> Bump version to 6.3.1</li> <li><a href="https://github.com/tornadoweb/tornado/commit/245334401570a40ba01813d9adb14976c50d77dd"><code>2453344</code></a> Merge pull request <a href="https://redirect.github.com/tornadoweb/tornado/issues/3254">#3254</a> from bdarnell/fix-set-cookie-case</li> <li>Additional commits viewable in <a href="https://github.com/tornadoweb/tornado/compare/v6.0.4...v6.3.2">compare view</a></li> </ul> </details> <br /> [![Dependabot compatibility 
score](https://dependabot-badges.githubapp.com/badges/compatibility_score?dependency-name=tornado&package-manager=pip&previous-version=6.0.4&new-version=6.3.2)](https://docs.github.com/en/github/managing-security-vulnerabilities/about-dependabot-security-updates#about-compatibility-scores) Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting `@dependabot rebase`. [//]: # (dependabot-automerge-start) [//]: # (dependabot-automerge-end) --- <details> <summary>Dependabot commands and options</summary> <br /> You can trigger Dependabot actions by commenting on this PR: - `@dependabot rebase` will rebase this PR - `@dependabot recreate` will recreate this PR, overwriting any edits that have been made to it - `@dependabot merge` will merge this PR after your CI passes on it - `@dependabot squash and merge` will squash and merge this PR after your CI passes on it - `@dependabot cancel merge` will cancel a previously requested merge and block automerging - `@dependabot reopen` will reopen this PR if it is closed - `@dependabot close` will close this PR and stop Dependabot recreating it. You can achieve the same result by closing it manually - `@dependabot ignore this major version` will close this PR and stop Dependabot creating any more for this major version (unless you reopen the PR or upgrade to it yourself) - `@dependabot ignore this minor version` will close this PR and stop Dependabot creating any more for this minor version (unless you reopen the PR or upgrade to it yourself) - `@dependabot ignore this dependency` will close this PR and stop Dependabot creating any more for this dependency (unless you reopen the PR or upgrade to it yourself) You can disable automated security fix PRs for this repo from the [Security Alerts page](https://github.com/huggingface/transformers/network/alerts). </details>
05-25-2023 17:47:39
05-25-2023 17:47:39
_The documentation is not available anymore as the PR was closed or merged._
transformers
23,765
closed
Multiple different models returning only `<unk>` tokens in text generation
### System Info - `transformers` version: 4.29.2 - Platform: Linux-3.10.0-1160.80.1.el7.x86_64-x86_64-with-glibc2.17 - Python version: 3.11.3 - Huggingface_hub version: 0.14.1 - Safetensors version: not installed - PyTorch version (GPU?): 2.0.1 (True) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: yes - Using distributed or parallel set-up in script?: no ### Who can help? @ArthurZucker and @younesbelkada ### Information - [ ] The official example scripts - [X] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [X] My own task or dataset (give details below) ### Reproduction I've tested the following minimal reproducible example for three models: `ausboss/llama-30b-supercot`, `digitous/Alpacino30b`, and `MetaIX/GPT4-X-Alpasta-30b`. I originally thought that this issue was related to my prompts not being in the correct format for a given model, as referenced in the comments of #23411. However, I've identified the correct prompt formatting for `ausboss/llama-30b-supercot` and `digitous/Alpacino30b` from their model cards. Additionally, while the prompt format is not explicitly stated in the model card for `MetaIX/GT4-X-Alpasta-30b`, it is also based off the alpaca model, so I would expect the same prompt formatting to work as well. The example (the only thing that I changed for each model was what string was the `checkpoint` variable): ``` from transformers import AutoModelForCausalLM, AutoTokenizer import torch # Load model and tokenizer checkpoint = 'digitous/Alpacino30b' model = AutoModelForCausalLM.from_pretrained(checkpoint, torch_dtype=torch.float16, device_map='auto', load_in_8bit=True) tokenizer = AutoTokenizer.from_pretrained(checkpoint) # Build prompt prompt = """ Below is an instruction that describes a task. Write a response that appropriately completes the request. ### Instruction: Extract the biological relations from the following text as (Subject, Predicate, Object) triples in the format ("Subject", "predicate", "Object"): ### Input: Salmeterol is a long-acting beta2-adrenergic receptor (beta 2AR) agonist used clinically to treat asthma. ### Response: """ # Generate predictions inputs = tokenizer(prompt, return_tensors='pt') inputs = inputs.to(0) output = model.generate(inputs['input_ids'], max_new_tokens=500) response = tokenizer.decode(output[0].tolist()) print(response) ``` The output of this script for all three models gives an identical response: ``` <s> Below is an instruction that describes a task. Write a response that appropriately completes the request. ### Instruction: Extract the biological relations from the following text as (Subject, Predicate, Object) triples in the format ("Subject", "predicate", "Object"): ### Input: Salmeterol is a long-acting beta2-adrenergic receptor (beta 2AR) agonist used clinically to treat asthma. 
### Response: <unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk> ``` I'm immediately suspicious that I'm getting an identical output from multiple models. I can't load these models without `load_in_8bit`, so I can't check whether or not this is related to the quantization of the models. I also tried running this code with a shorter prompt that contained less instruction, in case it was too complex: "Extract the biological relations from the following text:". However, I once again get an identical output. ### Expected behavior A response that contains normal tokens, and varies between different models at least somewhat.
05-25-2023 17:34:29
05-25-2023 17:34:29
Same issue here. From that kernel revision it seems like @serenalotreck is running on CentOS 7, possibly within an HPC setup. That's the case with me as well. I've tried several methods of loading / merging but not getting anything from the output apart from `<unk>` tokens for quantized models. For reference, all concerned libraries are at their latest versions, including a few attempts with git versions. I've requested our sysadmins to update the NVIDIA drivers, but other than that, I'm not sure what to do next.<|||||>@akhilvaid that's correct about my kernel revision -- I'm so glad to finally have some answer about what's going on, even if it's a frustrating one! Do you think rolling back versions could possibly help? I don't have a good sense of how recently the features I'm using were added, so I'm not sure if I'd be able to do that from a, still being able to run the code I have, standpoint, but maybe this was a bug that was recently introduced if all the newest versions don't work?<|||||>Also want to tag @gante since this is related to `generate`<|||||>Hey! Thanks for reporting this. I am not entirely sure about what might be going on here, I would suggest to try running a smaller model without `load_in_8bits` and check if the issue persists. If not, then it might be related to `generate` otherwise, it can be a problem with the instabilities<|||||>@akhilvaid just wanted to update you that I got the suggestion from someone on the HPC staff to try running CentOS 9 in a Singularity container, so I'm spending some time trying that today in hopes that it works! @ArthurZucker How small of a model is small? 😄 <|||||>@serenalotreck Unless I'm missing something, containers only really have additional / new libraries installed. That said, I tried the same thing using one of the NGC docker images repurposed into a singularity container with v515 drivers - but the error is persisting. @ArthurZucker I can successfully use/generate responses with a 13B parameter Vicuna in 16bit on an A100 80G. 33B or greater don't fit into a single GPU's memory - and quantization leads to the same issues as earlier. GPU memory and compute utilization jumps - but only `<unk>` tokens are generated.<|||||>@ArthurZucker @akhilvaid I found the same thing -- if I could fit the non-quantized model into memory then it was fine, it's definitely related to the quantization process. However, I only have access to GPUs with 32768MB (~33GB) memory, so I'm even more limited in what I can do without being able to quantize the models. Do you have any specific suggestions for what to do to try and get around this issue?<|||||>Hey @serenalotreck @akhilvaid 👋 This is not a fix per se, as I don't have a similar setup and can't reproduce the issue. Using transformers 4.30, bitsandbytes 0.39.0, pytorch 2.0.0, and **4 bit quantization** on a single RTX 3090, I get ``` ### Response: ("Salmeterol", "is a long-acting beta2-adrenergic receptor (beta 2AR) agonist", "used clinically to treat asthma")</s> ``` This means that the error is likely related to 8 bit quantization or to your setup. 
Using 4 bit quantization may solve the issue 🙌 Let us know about further developments on your end :) ______________________________________________ ```py from transformers import AutoModelForCausalLM, AutoTokenizer # Load model and tokenizer checkpoint = 'digitous/Alpacino30b' model = AutoModelForCausalLM.from_pretrained(checkpoint, device_map='auto', load_in_4bit=True) tokenizer = AutoTokenizer.from_pretrained(checkpoint) # Build prompt prompt = """ Below is an instruction that describes a task. Write a response that appropriately completes the request. ### Instruction: Extract the biological relations from the following text as (Subject, Predicate, Object) triples in the format ("Subject", "predicate", "Object"): ### Input: Salmeterol is a long-acting beta2-adrenergic receptor (beta 2AR) agonist used clinically to treat asthma. ### Response: """ # Generate predictions inputs = tokenizer(prompt, return_tensors='pt') inputs = inputs.to(0) output = model.generate(inputs['input_ids'], max_new_tokens=500) response = tokenizer.decode(output[0].tolist()) print(response) ```<|||||>@gante thank you! @akhilvaid Curious to know what happens when you run this code. Something whacky is happening on my end -- the code aborts at trying to load the model (after successfully downloading the shards). When I had `load_in_4bit=True`, it didn't print anything, and when I removed `load_in_4bit=True`, it printed out half of a message: ``` lerate` to properly deal with them (`pip install --upgrade acc .cuda ``` I'm working in a conda environment so I ran `conda upgrade accelerate` to see if that would help, accelerate was successfully upgraded, but I still got the same weird half-message. When I change the model to `ausboss/llama-30b-supercot` and include `load_with_4bit`, I get a different part-message: ``` d(init_empty_weights()) ```<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.
transformers
23,764
closed
Whisper `get_prompt_ids` throws error when used with a 'FastTokenizer'
### System Info - `transformers` version: 4.30.0.dev0 - Platform: macOS-13.0-arm64-arm-64bit - Python version: 3.9.16 - Huggingface_hub version: 0.12.0 - Safetensors version: 0.2.8 - PyTorch version (GPU?): 1.13.1 (False) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): 0.5.3 (cpu) - Jax version: 0.3.6 - JaxLib version: 0.3.5 - Using GPU in script?: No - Using distributed or parallel set-up in script?: No ### Who can help? @sanchit-gandhi @hollance ### Information - [ ] The official example scripts - [X] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [X] My own task or dataset (give details below) ### Reproduction ```py from transformers import WhisperTokenizerFast, WhisperTokenizer, GPT2Tokenizer, GPT2TokenizerFast slow_tokenizer = WhisperTokenizer.from_pretrained('openai/whisper-tiny') prompt_ids = slow_tokenizer.get_prompt_ids("Hello, world!", return_tensors="pt") print('Whisper slow tokenizer succeeded') try: fast_tokenizer = WhisperTokenizerFast.from_pretrained('openai/whisper-tiny') prompt_ids = fast_tokenizer.get_prompt_ids("Hello, world!", return_tensors="pt") except Exception as e: print('Whisper fast tokenizer failed - ', e) # Alternatively, this slow-fast param difference can be seen when tokenizing with a # pipeline or any model that has a slow tokenizer `prepare_for_tokenization` method # that checks `add_prefix_space` (GPT2 is old but there are ~20 models this applies to) tokenizer = GPT2Tokenizer.from_pretrained('gpt2', use_fast=False) prompt_ids = tokenizer("Hello, world!", add_prefix_space=True)["input_ids"] print('GPT2 slow tokenizer succeeded') try: tokenizer = GPT2TokenizerFast.from_pretrained('gpt2') prompt_ids = tokenizer("Hello, world!", add_prefix_space=True)["input_ids"] except Exception as e: print('Whisper fast tokenizer failed - ', e) ``` ### Expected behavior Are the slow and fast tokenizers supposed to have the same arg options for tokenizing text? They diverge with the `add_prefix_space` argument; while the slow tokenizer accepts and applies it with the [prepare_for_tokenization](https://github.com/huggingface/transformers/blob/3416bba7c70c358ac17efd3be31e9090135969ab/src/transformers/tokenization_utils.py#L502) method that same model's fast tokenizer does not and throws an error. Given that this arg difference appears to be present across all models where `add_prefix_space` can be provided to the slow tokenizer (at a glance appears to be ~20) I'd imagine the answer is no, the arg options aren't supposed to be 1:1. The fix for the Whisper tokenizer `get_prompt_ids` method is straightforward as we can just do `" " + text` directly in the method instead of `add_prefix_space=True`, but I wanted to bring up the above in case that argument is actually supposed to compatible across both slow and fast tokenizers in which case we would also want to address that.
05-25-2023 17:28:59
05-25-2023 17:28:59
Related issue #17391 mentions that `add_prefix_space` can only be specified for fast tokenizers upon init, so it seems like just the manual `" " + text` replacement for this param would be the appropriate fix.<|||||>Hey! Thanks for reporting. Indeed I think you can easily fix this for a single model (in the fast tokenizer you could allow the argument to flow), but I do agree that it is not really expected that the API between fast and slow would be different on that.
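A minimal sketch of that manual workaround with a fast tokenizer (the prompt-token handling is simplified; this is not the library implementation of `get_prompt_ids`):

```python
from transformers import WhisperTokenizerFast

fast_tokenizer = WhisperTokenizerFast.from_pretrained("openai/whisper-tiny")
text = "Hello, world!"
# prepend the space ourselves instead of passing add_prefix_space
prompt_ids = fast_tokenizer(" " + text.strip(), add_special_tokens=False).input_ids
```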
transformers
23,763
closed
Trainer do model generation during evaluation loop
### Feature request The current Trainer only supports teacher-forcing generation for computing the evaluation loss, but not auto-regressive generation for other metrics. Seq2SeqTrainer supports this, but it seems that it only accepts encoder-decoder models like T5 rather than GPT-style (decoder-only) models. Will this feature be added in the future? ### Motivation I am training a decoder-only model and want to use model.generate to evaluate it during training. ### Your contribution I haven't investigated the Trainer code deeply.
05-25-2023 17:11:26
05-25-2023 17:11:26
You can write your own subclass of the Trainer; it's not supported and we don't plan on adding it.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.
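A rough sketch of such a subclass (illustrative only; the generation arguments and how the generated ids are consumed by compute_metrics are up to you):

```python
from transformers import Trainer

class GenerationEvalTrainer(Trainer):
    def prediction_step(self, model, inputs, prediction_loss_only, ignore_keys=None):
        # keep the usual loss computation, then run auto-regressive generation for metrics
        loss, logits, labels = super().prediction_step(
            model, inputs, prediction_loss_only=False, ignore_keys=ignore_keys
        )
        generated = model.generate(inputs["input_ids"], max_new_tokens=64)
        return loss, generated, labels
```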
transformers
23,762
closed
Trainer.train() initializing train multiple times for no apparent reason and doubling total optimization steps with LoRA
### System Info - `transformers` version: 4.29.2 - Platform: Linux-5.15.107+-x86_64-with-glibc2.31 - Python version: 3.10.11 - Huggingface_hub version: 0.14.1 - Safetensors version: not installed - PyTorch version (GPU?): 2.0.1+cu118 (True) - Tensorflow version (GPU?): 2.12.0 (True) - Flax version (CPU?/GPU?/TPU?): 0.6.9 (gpu) - Jax version: 0.4.10 - JaxLib version: 0.4.10 - Using GPU in script?: yes - Using distributed or parallel set-up in script?: no + accelerate-0.19.0-py3-none-any.whl + datasets-2.12.0-py3-none-any.whl + peft-0.3.0-py3-none-any.whl ### Who can help? @sgugger ### Information - [ ] The official example scripts - [X] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [X] My own task or dataset (give details below) ### Reproduction ``` from datasets import load_dataset from transformers import AutoTokenizer, AutoModelForCausalLM, Trainer, TrainingArguments, DataCollatorForLanguageModeling from peft import get_peft_model, LoraConfig, TaskType model_name_or_path = "asi/gpt-fr-cased-small" def preprocess_function(examples): return tokenizer(text=examples["review"], truncation=True, padding="max_length", max_length=tokenizer.max_model_input_sizes["gpt2"]) trainset = load_dataset("allocine", split="train").remove_columns("label").select(range(900)) testset = load_dataset("allocine", split="test").remove_columns("label").select(range(900,1000)) tokenizer_name_or_path = "asi/gpt-fr-cased-small" tokenizer = AutoTokenizer.from_pretrained(tokenizer_name_or_path) tokenizer.model_max_length = tokenizer.max_model_input_sizes["gpt2"] if tokenizer.pad_token_id is None: tokenizer.pad_token_id = tokenizer.eos_token_id trainset = trainset.map(preprocess_function, remove_columns=trainset.features.keys(), num_proc=32) testset = testset.map(preprocess_function, remove_columns=testset.features.keys(), num_proc=32) peft_config = LoraConfig( task_type=TaskType.CAUSAL_LM, inference_mode=False, r=12, lora_alpha=32, lora_dropout=0.15, fan_in_fan_out=True, ) model = AutoModelForCausalLM.from_pretrained(model_name_or_path) lora_model = get_peft_model(model, peft_config) trainer = Trainer( model=lora_model, train_dataset=trainset, eval_dataset=testset, data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False), args=TrainingArguments( auto_find_batch_size = True, fp16=True, num_train_epochs = 2, learning_rate = 2e-5, optim = "adamw_torch", evaluation_strategy = "steps", eval_delay = 0, eval_steps = 10, eval_accumulation_steps = 1, logging_strategy = "steps", logging_first_step = True, logging_steps=10, log_level = "info", save_strategy = "steps", save_steps = 100, save_total_limit = 10, output_dir='outputs', ), ) trainer.train() ``` ### Expected behavior Hello ! The first logs from trainer seems accurate to me (`Total optimization steps = Num Epochs * Num examples//Total train batch size`) but right after, trainer doubles the total optimization steps for no reason. I also encountered a case where it doubled 4 times ! ``` ***** Running training ***** Num examples = 900 Num Epochs = 2 Instantaneous batch size per device = 8 Total train batch size (w. parallel, distributed & accumulation) = 8 Gradient Accumulation steps = 1 Total optimization steps = 226 Number of trainable parameters = 442,368 You're using a GPT2TokenizerFast tokenizer. Please note that with a fast tokenizer, using the `__call__` method is faster than using a method to encode the text followed by a call to the `pad` method to get a padded encoding. 
***** Running training ***** Num examples = 900 Num Epochs = 2 Instantaneous batch size per device = 8 Total train batch size (w. parallel, distributed & accumulation) = 8 Gradient Accumulation steps = 1 Total optimization steps = 450 Number of trainable parameters = 442,368 ```
05-25-2023 16:14:17
05-25-2023 16:14:17
How are you launching your training? Also cc @younesbelkada since peft is involved.<|||||>The example I provided has been run on a Google Colab with GPU for reproducibility. I also had the same issue on Jupyterlab notebooks.<|||||>Looks like the misleading behavior comes from the Training argument `auto_find_batch_size`. When run with `per_device_batch_size=8`, my script throws an OOM error, but with `per_device_batch_size=4` everything works like a charm. So the last log is accurate since batch_size has been cut in half. I also found the same behavior on my private script where the logs loop 4 times. https://github.com/huggingface/transformers/blob/f67dac97bdc63874f2288546b3fa87e69d2ea1c8/src/transformers/trainer.py#L1693 I assume at some point the `args.per_device_train_batch_size` is not updated, hence the discrepancy in logs. Edit: I took a look at accelerate.utils.find_executable_batch_size and I think the reason why the logs are wrong is simply because in `_inner_training_loop` https://github.com/huggingface/transformers/blob/f67dac97bdc63874f2288546b3fa87e69d2ea1c8/src/transformers/trainer.py#L1703 `args.train_batch_size` is used but never updated. Logs should use `self._train_batch_size`<|||||>cc @muellerzr then :-)<|||||>Thanks! https://github.com/huggingface/transformers/pull/23800 will solve this :) <|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.<|||||>Closing this as it seems resolved!
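For reference, one way to check which batch size `auto_find_batch_size` actually settled on (a debugging sketch; the underscore attribute is internal, as noted in the analysis above):

```python
trainer.train()
# internal attribute updated by auto_find_batch_size; the misleading log used args.train_batch_size instead
print(trainer._train_batch_size)
```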
transformers
23,761
closed
My QUESTION is: how to run a very big model like bloom on a cluster of machines?
### System Info bloom, pytorch, ubuntu ### Who can help? _No response_ ### Information - [ ] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction .. ### Expected behavior Hello, I can run OPT 66B on one server with 6 GPUs of 24 GB by using your page on Hugging Face on how to load big models: I give a device_map. I can also run BLOOM on one server with 8 GPUs of 24 GB by giving a device_map, but it uses offload on CPU and it takes time to answer. My QUESTION is how to run a very big model like BLOOM on a cluster of machines: indeed, BLOOM would need 20 GPUs of 24 GB, i.e. a cluster of 3 machines with 8 GPUs to deploy, and with accelerate it is not possible as we are limited to only one machine. I have tried everything, like using the RPC framework, but it seems it is only for CPU. Thanks for your help. Regards Pat
05-25-2023 16:04:01
05-25-2023 16:04:01
Please use the [forums](https://discuss.huggingface.co/) for such questions.<|||||>Yes, thanks for your answer. I wrote it on the forum as well, but it is not an easy question and only specialists like you can answer or give me some help so I can continue... Regards, Pat<|||||>So could you give some technical help? Regards, Pat<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.
transformers
23,760
closed
Move TF building to an actual build() method
This has been a longstanding dream of mine: To move all TF model building into a proper `build()` method, using symbolic tensors instead of actual dummies. This would allow us to, among other things, stop our very hacky overriding of `save_spec`, as well as allowing us to build our TF models with zero device flops (although the speedup may be system-dependent, as we do have some compile time with this approach). It would make our models much closer to the Keras standard, which would stop Chollet casting curses upon me from afar. In the past, we've run into serious problems with tensor names moving around when we tried this - I think I've figured out why, though, and I have a couple of ideas to resolve that without lots of hacky edge-case code. This is an extremely draft PR that will break everything until I finish testing it properly! **Update:** Using symbolic tensors is much slower - it works in most cases, but increases the time it takes for our tests to run by a factor of ~4, which is probably not acceptable. Instead, I'm going to rework this PR to move to a standard build() method using actual dummies. With some optimizations, I believe we can make this work, while still preserving most of the benefits of this PR, including not repeating the build unnecessarily and adding the ability to override `build()` to speed up our slowest models
05-25-2023 15:07:25
05-25-2023 15:07:25
_The documentation is not available anymore as the PR was closed or merged._<|||||>This should be ready to review now! Some tests failing, but that looks like Hub connection issues<|||||>Actually, I should explain my reasoning for some of the changes here - you're probably right that I can improve the API, though! Firstly, the removal of `tf.cond` is actually not a necessary part of this PR anymore, but it is good practice (Longformer and LED are the only two models in all of Transformers that use it in their modelling code). The reason is because of the Keras call stack. In the `__call__` method for any TF module, Keras appends that layer to the call stack, and enters that layer's namespace. This means that if you have `self.bert` and that calls `self.encoder` and that calls `self.attn`, Keras will be in the `bert/encoder/attn` namespace. Incredibly, though, `tf.cond` counts as a layer with its own namespace, but **only when the tf.cond is not being eagerly evaluated**. In my initial PR, I was trying to replace our dummies with symbolic TF tensors, which meant the `tf.cond` was not evaluated at compile time, but instead had to be compiled as a conditional in the model graph. The result is that all layer weights inside the conditional got encapsulated in a `/cond.1/` namespace. This broke compatibility with existing checkpoints. Removing `tf.cond` helped, but to be safe I added a manual build to those layers to directly control the weight naming regardless of what the call stack thought it should be. As a result, I could probably revert the `tf.cond` calls, but I think it's preferable if we don't, and just try to keep it out of modelling code and just use `if` statements instead (which TF can compile into graph conditionals if it can't resolve the branch to be chosen at compile time). `tf.cond` is fine in generation code where no weight names are created. Secondly, the distinction between `build()` and `build_with_dummies()` is a bit of an ugly hack - I think I could probably remove `build_with_dummies()` entirely, but there was a piece of the TF-PT crossloading code that only worked if it could build the model with specific inputs of its choice. I added `build_with_dummies()` to support that, with a separate `built_with_dummies` flag to make sure that any repeated calls wouldn't waste more time. However, it would probably make more sense to just manually pass the inputs through the model in those particular crossloading functions and delete the method and the flag. WDYT?<|||||>> tf.cond counts as a layer with its own namespace, but only when the tf.cond is not being eagerly evaluated. 😑 In this case, let's rid ourselves of this pseudolayer! I'm pro the if/else changes :) > it would probably make more sense to just manually pass the inputs through the model in those particular crossloading functions and delete the method and the flag. WDYT? Yep, that's what I would go for. Would it be possible to still have some of the logic to exit early if already built? Or would this be to tricky to handle to be worth it? <|||||>I think we could, but it's probably not necessary - the only cases where we build the model with specific inputs are in weird PT-TF crossloading functions, which should always be called during or near model init anyway, so I think it's fine if there's a risk of a little bit of duplicated work there to save on overall code complexity.<|||||>@amyeroberts Done! 
`build_with_dummies` is no more<|||||>Also, this PR looks ready but I'm going to let it sit for a couple of days to make sure the CI is working again after my last library-breaking PR, then merge it.<|||||>Change of plans: The CI is working except for OOM errors during building for some of the pipelines, and since this cleans up building a bit we're going to merge this one too and see if it helps. If it doesn't, I'll open a new PR to see if I can lower the memory usage in the affected models.
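To make the `tf.cond`-versus-`if` point above concrete, here is a hedged, generic example (a hypothetical layer, not code from Longformer or LED): the branch is written as plain Python, so AutoGraph can still lower it to a graph conditional when tracing, but no weights end up created under a `cond` name scope because the only weighted sublayer is called outside the branch.

```python
import tensorflow as tf


class PaddingAwareLayer(tf.keras.layers.Layer):
    """Hypothetical layer that pads its input up to a multiple of window_size."""

    def __init__(self, window_size=4, **kwargs):
        super().__init__(**kwargs)
        self.window_size = window_size
        self.dense = tf.keras.layers.Dense(8, name="dense")

    def call(self, hidden_states):
        # Plain `if` instead of tf.cond: eagerly this is ordinary Python control
        # flow, and under tf.function AutoGraph turns it into a graph conditional.
        padding_len = -tf.shape(hidden_states)[1] % self.window_size
        if padding_len > 0:
            hidden_states = tf.pad(hidden_states, [[0, 0], [0, padding_len], [0, 0]])
        return self.dense(hidden_states)


layer = PaddingAwareLayer(name="pad_aware")
print(layer(tf.ones((2, 6, 8))).shape)  # (2, 8, 8): padded from length 6 up to 8
```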
transformers
23,759
closed
Adds a FlyteCallback
# What does this PR do?

This PR adds a Flyte callback that integrates with Flyte's [intra-task checkpoints](https://docs.flyte.org/projects/cookbook/en/stable/auto/core/control_flow/checkpoint.html#why-intra-task-checkpoints) and [Flyte Decks](https://docs.flyte.org/projects/cookbook/en/latest/auto/core/flyte_basics/deck.html). I raised issue #23476 in order to get approval for this PR.

I am using this [example](https://gist.github.com/peridotml/68f376f0f4fd1926fb0746daaeea09f8) to test on a Flyte cluster. It uses Flyte's checkpointing system to restart from a Hugging Face checkpoint (see screenshots).

<img width="400" alt="Screenshot 2023-05-26 at 2 57 59 PM" src="https://github.com/huggingface/transformers/assets/106936600/5cf83157-cce0-4a2e-8a2f-cd1a72c65820"> <img width="400" alt="Screenshot 2023-05-26 at 2 58 14 PM" src="https://github.com/huggingface/transformers/assets/106936600/891d86e7-5885-4851-889f-e912d42f2902">

Once this is merged, I will add this and more to Flyte's documentation.

<!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. -->

## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?

## Who can review?

Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR.

<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. 
Models: - text models: @ArthurZucker and @younesbelkada - vision models: @amyeroberts - speech models: @sanchit-gandhi - graph models: @clefourrier Library: - flax: @sanchit-gandhi - generate: @gante - pipelines: @Narsil - tensorflow: @gante and @Rocketknight1 - tokenizers: @ArthurZucker - trainer: @sgugger Integrations: - deepspeed: HF Trainer: @stas00, Accelerate: @pacman100 - ray/raytune: @richardliaw, @amogkam Documentation: @sgugger, @stevhliu and @MKhalusova HF projects: - accelerate: [different repo](https://github.com/huggingface/accelerate) - datasets: [different repo](https://github.com/huggingface/datasets) - diffusers: [different repo](https://github.com/huggingface/diffusers) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Maintained examples (not research project or legacy): - Flax: @sanchit-gandhi - PyTorch: @sgugger - TensorFlow: @Rocketknight1 -->
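As a rough illustration of how a callback like this plugs into the `Trainer` (a simplified sketch, not the FlyteCallback added by this PR — `CheckpointSyncCallback` and `sync_fn` are made-up names, and the real callback talks to flytekit's checkpoint and Deck APIs rather than a generic function):

```python
from transformers import TrainerCallback


class CheckpointSyncCallback(TrainerCallback):
    """Toy callback: after every Trainer save, hand the newest checkpoint
    directory to an external checkpointing hook (Flyte's intra-task
    checkpoint, in the case of this PR)."""

    def __init__(self, sync_fn):
        self.sync_fn = sync_fn  # placeholder for e.g. a Flyte checkpoint save

    def on_save(self, args, state, control, **kwargs):
        if state.is_world_process_zero:
            ckpt_dir = f"{args.output_dir}/checkpoint-{state.global_step}"
            self.sync_fn(ckpt_dir)


# Usage sketch: Trainer(..., callbacks=[CheckpointSyncCallback(my_sync_fn)])
```

The Flyte Deck side of the integration can hang off a hook like `on_train_end` in the same way, rendering whatever the run produced (log history, arguments) as a report.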
05-25-2023 14:57:00
05-25-2023 14:57:00
_The documentation is not available anymore as the PR was closed or merged._<|||||>Really thrilled to have this opportunity to contribute! 🎉 Before I transition this PR out of draft mode, I want to ensure everything on the Flyte side is spot on. - I'm working on linking to a live example on Flyte - I've also reached out to @cosmicBboy, @kumare3, @zeryx on the Flyte team - who might have some comments. I know they were excited about this integration 😄. <|||||>@sgugger we should be good to go now! I responded to the Flyte team and updated the docs<|||||>@sgugger should be good! 🤞
transformers
23,758
closed
[`Nllb-Moe`] Fix nllb moe accelerate issue
# What does this PR do?

Fixes: https://github.com/huggingface/transformers/issues/23385

Before this PR, it seemed that `_no_split_modules` was not properly set. Due to the skip connections in `NllbMoeEncoderLayer` and `NllbMoeDecoderLayer`, one needs to add these modules to `_no_split_modules` instead of `NllbMoeAttention`. All accelerate tests pass.

cc @ArthurZucker @sgugger
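For context, here is a small, self-contained accelerate example of the mechanism that `_no_split_modules` feeds into (the toy layer, sizes, and memory limits are invented; this is not the NLLB-MoE code itself): classes listed in `no_split_module_classes` are kept on a single device, so a residual addition inside a layer never mixes tensors from two GPUs.

```python
import torch.nn as nn
from accelerate import infer_auto_device_map


class ToyLayer(nn.Module):
    """Stand-in for an encoder/decoder layer with skip connections."""

    def __init__(self):
        super().__init__()
        self.attn = nn.Linear(256, 256)
        self.ffn = nn.Linear(256, 256)

    def forward(self, x):
        return x + self.ffn(x + self.attn(x))  # the residuals are why it must not be split


model = nn.Sequential(*[ToyLayer() for _ in range(8)])

# Each ToyLayer is ~0.5MB of fp32 weights, so the tiny per-device budgets below
# force the planner to spread layers out -- but never to cut one layer in half.
device_map = infer_auto_device_map(
    model,
    max_memory={0: "1MB", 1: "1MB", "cpu": "1GB"},
    no_split_module_classes=["ToyLayer"],
)
print(device_map)
```

This is why the fix lists `NllbMoeEncoderLayer` and `NllbMoeDecoderLayer` rather than the attention module: the skip connections live at the layer level.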
05-25-2023 14:47:01
05-25-2023 14:47:01
_The documentation is not available anymore as the PR was closed or merged._
transformers
23,754
closed
When I use Bloom, I get an error: "Caught RuntimeError in replica 0 on device 0."
### System Info

- `transformers` version: 4.28.1
- Platform: Linux-5.4.0-148-generic-x86_64-with-glibc2.27
- Python version: 3.9.16
- Huggingface_hub version: 0.13.4
- Safetensors version: not installed
- PyTorch version (GPU?): 1.13.1 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: <fill in>
- Using distributed or parallel set-up in script?: <fill in>
- GPU: Tesla T4 * 4

### Who can help?

@sgugger, @ArthurZucker

Hello, when I use the Bloom model, the following problem occurs, but when I use the RedPajama model or other models, this kind of error does not occur.

### Information

- [ ] The official example scripts
- [X] My own modified scripts

### Tasks

- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)

### Reproduction

My code:

```python
from transformers import AutoTokenizer, AutoModelForCausalLM
import os
import torch
from datasets import load_dataset

os.environ["CUDA_VISIBLE_DEVICES"] = "0,1,2,3"

tokenizer = AutoTokenizer.from_pretrained("Bigscience/bloom-560m", cache_dir='./cache/')
tokenizer.add_special_tokens({'pad_token': '[PAD]'})
model = AutoModelForCausalLM.from_pretrained("Bigscience/bloom-560m", device_map='auto', cache_dir='./cache/', torch_dtype=torch.float16)
model.resize_token_embeddings(len(tokenizer))
model.gradient_checkpointing_enable()
model.config.use_cache = False

# dataset = load_dataset("BelleGroup/train_1M_CN")
# from datasets import load_dataset
# dataset = load_dataset("json", data_files="./data/alpaca_data_zh_51k.json")
dataset = load_dataset("json", data_files="/home/new_store/Llama/data/alpaca_gpt4_data_zh.json")
dataset = dataset.filter(lambda x: x["output"] != None)
dataset = dataset.filter(lambda x: x["instruction"] != None)
dataset = dataset.filter(lambda x: x["input"] != None)

eval_dataset = load_dataset("json", data_files="split.json")
eval_dataset = eval_dataset.filter(lambda x: x["output"] != None)
eval_dataset = eval_dataset.filter(lambda x: x["input"] != None)
eval_dataset = eval_dataset.filter(lambda x: x["instruction"] != None)

def preprocess_function(sample):
    l = "<##human>:"
    for i in range(len(sample['instruction'])):
        if sample['input'][i] != '':
            sample['instruction'][i] = sample['instruction'][i] + '[PAD]' + sample['input'][i]
            # print(sample['input'][i])
    output = ['<##bot>:' + i for i in sample['output']]
    model_inputs = tokenizer(sample['instruction'], truncation=True, padding=True, max_length=256)
    labels = tokenizer(output, truncation=True, padding=True, max_length=256)
    model_inputs["labels"] = labels["input_ids"]
    # print(model_inputs)
    return model_inputs

input_data = dataset['train'].map(preprocess_function, batched=True, remove_columns=['instruction', 'input', 'output'])
eval_data = eval_dataset['train'].map(preprocess_function, batched=True, remove_columns=['instruction', 'input', 'output'])

from transformers import TrainingArguments, Trainer, DataCollatorForLanguageModeling

trainArgs = TrainingArguments(
    output_dir='../ckps_bloom_1M',
    do_train=True,
    # per_device_train_batch_size=1,
    auto_find_batch_size=True,
    gradient_accumulation_steps=4,
    evaluation_strategy="steps",
    save_strategy="steps",
    save_steps=500,
    eval_steps=500,
    logging_steps=100,
    warmup_steps=100,
    num_train_epochs=2,
    learning_rate=2e-5,
    # fp16=True,
    # bf16=True,
    load_best_model_at_end=True,
    # deepspeed='./zero.json',
    report_to="wandb"
)

trainer = Trainer(
    model=model,
    args=trainArgs,
    train_dataset=input_data,
    eval_dataset=eval_data,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```

Error:

```
RuntimeError: Caught RuntimeError in replica 0 on device 0.
Original Traceback (most recent call last):
  File "/home/han/anaconda3/envs/llama/lib/python3.9/site-packages/torch/nn/parallel/parallel_apply.py", line 64, in _worker
    output = module(*input, **kwargs)
  File "/home/han/anaconda3/envs/llama/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1194, in _call_impl
    return forward_call(*input, **kwargs)
  File "/home/han/anaconda3/envs/llama/lib/python3.9/site-packages/accelerate/hooks.py", line 165, in new_forward
    output = old_forward(*args, **kwargs)
  File "/home/han/anaconda3/envs/llama/lib/python3.9/site-packages/transformers/models/bloom/modeling_bloom.py", line 913, in forward
    transformer_outputs = self.transformer(
  File "/home/han/anaconda3/envs/llama/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1194, in _call_impl
    return forward_call(*input, **kwargs)
  File "/home/han/anaconda3/envs/llama/lib/python3.9/site-packages/accelerate/hooks.py", line 165, in new_forward
    output = old_forward(*args, **kwargs)
  File "/home/han/anaconda3/envs/llama/lib/python3.9/site-packages/transformers/models/bloom/modeling_bloom.py", line 730, in forward
    inputs_embeds = self.word_embeddings(input_ids)
  File "/home/han/anaconda3/envs/llama/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1194, in _call_impl
    return forward_call(*input, **kwargs)
  File "/home/han/anaconda3/envs/llama/lib/python3.9/site-packages/torch/nn/modules/sparse.py", line 160, in forward
    return F.embedding(
  File "/home/han/anaconda3/envs/llama/lib/python3.9/site-packages/torch/nn/functional.py", line 2210, in embedding
    return torch.embedding(weight, input, padding_idx, scale_grad_by_freq, sparse)
RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cuda:0 and cuda:3! (when checking argument for argument index in method wrapper__index_select)
```

### Expected behavior

I hope the error can be fixed.
05-25-2023 13:30:36
05-25-2023 13:30:36
I don't think `device_map="auto"` is compatible with gradient checkpointing.<|||||>Thanks for the response. I've disabled gradient checkpointing, but the problem still exists.<|||||>Since we don't have access to your data files, it's going to be pretty hard to reproduce the issue. Could you:
1. format your code so we can copy/paste it
2. use a dataset from the Hub instead so we can replicate

Thanks!<|||||>Thank you again for your patient reply. I have switched to a dataset on the Hub and formatted the code. The code is as follows.

```python
from transformers import AutoTokenizer, AutoModelForCausalLM
import torch
from datasets import load_dataset

# tokenizer = AutoTokenizer.from_pretrained("Bigscience/bloom-560m", cache_dir='./cache/')
tokenizer = AutoTokenizer.from_pretrained("EleutherAI/pythia-70m", cache_dir='./cache/')
tokenizer.add_special_tokens({'pad_token': '[PAD]'})
tokenizer.padding_side = 'right'
model = AutoModelForCausalLM.from_pretrained("EleutherAI/pythia-70m", device_map='balanced', cache_dir='./cache/', torch_dtype=torch.float16)
# model = AutoModelForCausalLM.from_pretrained("Bigscience/bloom-560m", device_map='balanced', cache_dir='./cache/', torch_dtype=torch.float16)
model.resize_token_embeddings(len(tokenizer))
model.config.use_cache = True

dataset = load_dataset("c-s-ale/alpaca-gpt4-data-zh")

def preprocess_function(sample):
    for i in range(len(sample['instruction'])):
        sample['instruction'][i] = sample['instruction'][i] + '[PAD]' + sample['input'][i]
    output = ['<bot>:' + i for i in sample['output']]
    model_inputs = tokenizer(sample['instruction'], truncation=True, padding=True, max_length=100, return_tensors="pt")
    labels = tokenizer(output, truncation=True, padding=True, max_length=100, return_tensors="pt")
    model_inputs["labels"] = labels["input_ids"]
    return model_inputs

input_data = dataset['train'].map(preprocess_function, batched=True, remove_columns=['instruction', 'input', 'output'])

from transformers import TrainingArguments, Trainer, DataCollatorForLanguageModeling

trainArgs = TrainingArguments(
    output_dir='./ckps_bloom',
    do_train=True,
    auto_find_batch_size=True,
    gradient_accumulation_steps=4,
    evaluation_strategy="steps",
    save_strategy="steps",
    save_steps=10,
    eval_steps=10,
    logging_steps=10,
    warmup_steps=100,
    num_train_epochs=2,
    learning_rate=2e-5,
    fp16=True,
    load_best_model_at_end=True,
    push_to_hub=False,
    report_to="wandb"
)

trainer = Trainer(
    model=model,
    args=trainArgs,
    train_dataset=input_data,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```
<|||||>Thanks for sharing. Note that in any case, you won't be able to train your model in float16 (you will get an error along the lines of "Attempting to unscale FP16 gradients."). Training in float16 does not converge, so the Trainer does not support it. You will need to remove the line `torch_dtype=torch.float16` when loading your model. For the Pythia model, something weird is happening with `device_map="auto"` since the model is so tiny: it is all placed on GPU-1 (in my case) and then the Trainer tries to move it to GPU-0. Will fix this, but a simple workaround in the meantime is to set `place_model_on_device=False` in your training arguments.<|||||>Thank you again for your reply. I removed the line `torch_dtype=torch.float16` and set `place_model_on_device=False`, but the problem still exists. The problem exists regardless of the size of the model, and I also tried to use bloom-3b from the Hub. When I removed `device_map='auto'`, the program worked, but only on one GPU. The code is as follows.
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
import torch
from datasets import load_dataset

tokenizer = AutoTokenizer.from_pretrained("Bigscience/bloom-3b", cache_dir='./cache/')
# tokenizer = AutoTokenizer.from_pretrained("EleutherAI/pythia-70m", cache_dir='./cache/')
tokenizer.add_special_tokens({'pad_token': '[PAD]'})
tokenizer.padding_side = 'right'
# model = AutoModelForCausalLM.from_pretrained("EleutherAI/pythia-70m", device_map='balanced', cache_dir='./cache/')
model = AutoModelForCausalLM.from_pretrained("Bigscience/bloom-3b", device_map='auto', cache_dir='./cache/')
model.resize_token_embeddings(len(tokenizer))
model.config.use_cache = True

dataset = load_dataset("c-s-ale/alpaca-gpt4-data-zh")

def preprocess_function(sample):
    for i in range(len(sample['instruction'])):
        sample['instruction'][i] = sample['instruction'][i] + '[PAD]' + sample['input'][i]
    output = ['<bot>:' + i for i in sample['output']]
    model_inputs = tokenizer(sample['instruction'], truncation=True, padding=True, max_length=100, return_tensors="pt")
    labels = tokenizer(output, truncation=True, padding=True, max_length=100, return_tensors="pt")
    model_inputs["labels"] = labels["input_ids"]
    return model_inputs

input_data = dataset['train'].map(preprocess_function, batched=True, remove_columns=['instruction', 'input', 'output'])

from transformers import TrainingArguments, Trainer, DataCollatorForLanguageModeling

trainArgs = TrainingArguments(
    output_dir='./ckps_bloom',
    do_train=True,
    auto_find_batch_size=True,
    gradient_accumulation_steps=4,
    evaluation_strategy="steps",
    save_strategy="steps",
    save_steps=10,
    eval_steps=10,
    logging_steps=10,
    warmup_steps=100,
    num_train_epochs=2,
    learning_rate=2e-5,
    fp16=True,
    load_best_model_at_end=True,
    push_to_hub=False,
    report_to="wandb",
)
TrainingArguments.place_model_on_device = False
trainer = Trainer(
    model=model,
    args=trainArgs,
    train_dataset=input_data,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```

GPU memory allocation is also weird. In the past, it was pretty even, but now it looks like this.

<img width="402" alt="image" src="https://github.com/huggingface/transformers/assets/69674181/ed1b7c87-cacc-42d5-a327-1b8f3962b0fc">

error:

<img width="1207" alt="image" src="https://github.com/huggingface/transformers/assets/69674181/26494845-b675-499e-961f-af02d15ce264">
<|||||>I didn't understand what you meant before. I'm sorry, but it has been solved now.
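For anyone landing on this thread later, here is a minimal setup consistent with the advice above — a sketch, not an official recommendation, with the dataset wiring omitted: load the model without `device_map` and without `torch_dtype=torch.float16`, let the Trainer place it on the device, and launch the same script with `torchrun --nproc_per_node=4 train.py` if all four GPUs should be used via data parallelism.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer, Trainer, TrainingArguments

tokenizer = AutoTokenizer.from_pretrained("bigscience/bloom-560m")
tokenizer.add_special_tokens({"pad_token": "[PAD]"})

# No device_map="auto" and no torch_dtype=torch.float16: the Trainer moves the
# model to the right device itself, and mixed precision (if wanted) is handled
# through TrainingArguments rather than by loading half-precision weights.
model = AutoModelForCausalLM.from_pretrained("bigscience/bloom-560m")
model.resize_token_embeddings(len(tokenizer))

args = TrainingArguments(
    output_dir="./ckps_bloom",
    per_device_train_batch_size=4,
    gradient_accumulation_steps=4,
    num_train_epochs=2,
    learning_rate=2e-5,
)
# trainer = Trainer(model=model, args=args, train_dataset=..., data_collator=...)
# trainer.train()
```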