# Batch functions

Gradio supports passing _batch_ functions. A batch function is simply a
function that takes in a list of inputs and returns a list of predictions.

For example, here is a batched function that takes in two lists of inputs (a list of
words and a list of ints) and returns a list of trimmed words as output:

```py
import time

def trim_words(words, lens):
    trimmed_words = []
    time.sleep(5)
    for w, l in zip(words, lens):
        trimmed_words.append(w[:int(l)])
    # a batch function returns a list of output lists, one per output component
    return [trimmed_words]
```
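To make the list-in, list-out contract concrete, here is a sketch of calling `trim_words` directly with a batch of three requests (the sample words and lengths are illustrative); the single-element outer list corresponds to the function's single output component:

```python
import time

def trim_words(words, lens):
    trimmed_words = []
    time.sleep(5)  # simulates one slow model call shared by the whole batch
    for w, l in zip(words, lens):
        trimmed_words.append(w[:int(l)])
    return [trimmed_words]

# Three "requests" batched into a single call: each word is trimmed
# to its corresponding length.
result = trim_words(["hello", "world", "gradio"], [2, 3, 4])
print(result)  # [['he', 'wor', 'grad']]
```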

The advantage of using batched functions is that if you enable queuing, the Gradio server can automatically _batch_ incoming requests and process them in parallel,
potentially speeding up your demo. Here's what the Gradio code looks like (note the `batch=True` and `max_batch_size=16` parameters).

With the `gr.Interface` class:

```python
import gradio as gr

demo = gr.Interface(
    fn=trim_words,
    inputs=["textbox", "number"],
    outputs=["textbox"],
    batch=True,
    max_batch_size=16
)

demo.launch()
```

With the `gr.Blocks` class:

```py
import gradio as gr

with gr.Blocks() as demo:
    with gr.Row():
        word = gr.Textbox(label="word")
        leng = gr.Number(label="leng")
        output = gr.Textbox(label="Output")
    with gr.Row():
        run = gr.Button()

    event = run.click(trim_words, [word, leng], output, batch=True, max_batch_size=16)

demo.launch()
```

In the example above, up to 16 requests can be processed in parallel (for a total inference time of 5 seconds), instead of each request being processed separately (for a total
inference time of 80 seconds for 16 requests). Many Hugging Face `transformers` and `diffusers` models work very naturally with Gradio's batch mode: here's [an example demo using diffusers to
generate images in batches](https://github.com/gradio-app/gradio/blob/main/demo/diffusers_with_batching/run.py).