The max_length parameter seems to be overridden with some unexpected value, different from the one passed in, when used for text generation with pipeline.
The current version of gpt2 does not work properly with the max_length parameter.
The model seems to override max_length with some other, unexpected value.
I tried the following code.
from transformers import pipeline
generator = pipeline("text-generation", model="openai-community/gpt2", max_length=10, num_return_sequences=2)
generator("Hello, I'm a language model,")
The output was as follows.
[{'generated_text': "Hello, I'm a language model, and I don't want to be a programmer. I want to be a real world system.\n\nBut I don't want to just be a programmer. I want to be a real world system, and"},
{'generated_text': "Hello, I'm a language model, a language model. I'm not going to be speaking Russian. I'm not going to be talking to anyone else. I'm not going to be writing a language. I'm not going to be talking to"}]
As you can see, each returned sequence is much longer than the max_length I passed, which was 10.
And it looks like max_length is internally set to some other value, because neither of the returned sequences ends with a period; both are cut off mid-sentence at some fixed length.
I'm sure this is a critical issue for anyone working with this model.
I used CUDA 12.5.1, cuDNN 9.3.0, PyTorch 2.6.0, and the latest versions of all other packages when running the code above.
Well, I forgot that gpt2 is not a recent GPT model. I guess we can't expect any fix for it...
OK, I found that it should be max_new_tokens; it was just my mistake because I'm a noob... sorry.
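For anyone hitting the same thing, here is a minimal sketch of the max_new_tokens workaround (same model and prompt as above; I pass the parameter at call time, and it caps only the newly generated tokens rather than the total length):

from transformers import pipeline

generator = pipeline("text-generation", model="openai-community/gpt2", num_return_sequences=2)
# max_new_tokens limits only the continuation, not prompt + continuation.
generator("Hello, I'm a language model,", max_new_tokens=10)

With this, each returned sequence is the prompt plus at most 10 new tokens, as expected.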
OK, it was not just my mistake after all.
https://huggingface.co/docs/transformers/v4.53.3/en/main_classes/text_generation#transformers.GenerationMixin.generate
The documentation for GenerationConfig says that max_length is the total length of the output (prompt tokens plus generated tokens), and that it is only overridden by max_new_tokens when that is set.
However, only max_new_tokens works fine, while max_length is ignored.
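One way to isolate where this happens would be to call generate() directly instead of going through the pipeline (a sanity-check sketch; the prompt is roughly 8 tokens, so max_length=10 should leave room for only a couple of new tokens):

from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("openai-community/gpt2")
model = AutoModelForCausalLM.from_pretrained("openai-community/gpt2")

inputs = tokenizer("Hello, I'm a language model,", return_tensors="pt")
# max_length counts prompt tokens plus generated tokens here.
output = model.generate(**inputs, max_length=10)
print(tokenizer.decode(output[0]))

If max_length is respected at this level, the override would have to be happening somewhere in the pipeline layer.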
Also, the output was much longer when max_length was set to a smaller value.
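My guess (not a confirmed diagnosis) is that the value comes from defaults attached to the model itself: gpt2's config on the Hub ships task_specific_params for text-generation with its own max_length of 50, and the model also carries a generation_config with defaults. This sketch just prints both so you can see what the pipeline might be picking up:

from transformers import pipeline

generator = pipeline("text-generation", model="openai-community/gpt2", max_length=10)
# If either of these defines its own max_length, it could be what wins
# over the value passed to the pipeline constructor.
print(generator.model.config.task_specific_params)
print(generator.model.generation_config)

A limit of 50 tokens would also roughly match the length of the outputs I pasted above.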