spisupat committed
Commit 38cb3c2 · verified · 1 Parent(s): 3a48bd4

Update index.html

Files changed (1)
  1. index.html +35 -319
index.html CHANGED
@@ -20,7 +20,6 @@
20
  <script src="./static/js/bulma-carousel.min.js"></script>
21
  <script src="./static/js/bulma-slider.min.js"></script>
22
  <script src="./static/js/index.js"></script>
23
-
24
  </head>
25
  <body>
26
 
@@ -32,33 +31,33 @@
32
  <h1 class="title is-1 publication-title">Atla Selene Mini:<br>A General Purpose Evaluation Model</h1>
33
  <div class="is-size-5 publication-authors">
34
  <span class="author-block">
35
- <b>Andrei Alexandru</b><sup>1</sup>,</span>
36
  <span class="author-block">
37
- <b>Antonia Calvi</b><sup>1</sup>,</span>
38
  <span class="author-block">
39
- <b>Henry Broomfield</b><sup>1</sup>,</span>
40
  <span class="author-block">
41
- <b>Jackson Golden</b><sup>1</sup>,</span>
42
  <span class="author-block">
43
- <b>Kyle Dai</b><sup>1</sup>,</span>
44
  </div>
45
  <div class="is-size-5 publication-authors">
46
  <span class="author-block">
47
- <b>Mathias Leys</b><sup>1</sup>,</span>
48
  <span class="author-block">
49
- <b>Maurice Burger</b><sup>1</sup>,</span>
50
  <span class="author-block">
51
- <b>Max Bartolo</b><sup>2,3</sup>,</span>
52
  <span class="author-block">
53
- <b>Roman Engeler</b><sup>1</sup>,</span>
54
  </div>
55
  <div class="is-size-5 publication-authors">
56
  <span class="author-block">
57
- <b>Sashank Pisupati</b><sup>1</sup>,</span>
58
  <span class="author-block">
59
- <b>Toby Drane</b><sup>1</sup>,</span>
60
  <span class="author-block">
61
- <b>Young Sun Park</b><sup>1</sup></span>
62
  </div>
63
 
64
  <div class="is-size-5 publication-authors">
@@ -84,11 +83,21 @@
84
  <a href="https://hf.co/AtlaAI/Selene-1-Mini-Llama-3.1-8B" target="_blank"
85
  class="external-link button is-normal is-rounded is-dark">
86
  <span class="icon">
87
- <i class="fab fa-github"></i>
88
  </span>
89
  <span>HuggingFace</span>
90
  </a>
91
  </span>
92
  <!-- Ollama Link -->
93
  <span class="link-block">
94
  <a href="https://ollama.com/atla/selene-mini" target="_blank"
@@ -107,15 +116,19 @@
107
  </div>
108
  </section>
109
 
110
- <section class="section">
111
  <div class="container is-max-desktop">
112
- <!-- Logo -->
113
- <div class="columns is-centered has-text-centered">
114
- <div class="column is-2">
115
- <img src="figs/atla-logo.png" alt="Atla Logo" style="width: 50%">
116
- </div>
117
  </div>
118
119
  <!-- Abstract -->
120
  <div class="columns is-centered has-text-centered">
121
  <div class="column is-four-fifths">
@@ -134,35 +147,17 @@
134
  </div>
135
  </div>
136
 
137
- <!-- Introduction -->
138
  <div class="columns is-centered">
139
  <div class="column is-four-fifths">
140
- <h2 class="title is-3">Introduction</h2>
141
  <div class="content has-text-justified">
142
- <p>
143
- Automated evaluation of large language models (LLMs) is an increasingly pertinent task as LLMs demonstrate their value across a growing array of real-world use cases. Reliable evaluation is critical to ensure that LLMs are aligned with human objectives, i.e. that these models do what they are intended to do.
144
- </p>
145
- <p>
146
- Human evaluation is time-consuming and expensive, and scales poorly with volume and complexity – hence the need for scalable, automated techniques. As generative models have become more capable, the field has addressed this need by using LLMs themselves to evaluate other LLMs' responses, producing judgments and natural language critiques without humans in the loop – an approach also known as "LLM-as-a-judge" (LLMJ).
147
- </p>
148
  <figure class="image">
149
  <img src="figs/Fig1.png" alt="Performance comparison">
150
  <figcaption>
151
  <b>Figure 1:</b> Atla Selene Mini outperforms current state-of-the-art SLMJs: a) Overall task-average performance, comparing Atla Selene Mini (black) with the best and most widely used SLMJs. b) Breakdown of performance by task type and benchmark.
152
  </figcaption>
153
  </figure>
154
- </div>
155
- </div>
156
- </div>
157
-
158
- <!-- Methods Section -->
159
- <div class="columns is-centered">
160
- <div class="column is-four-fifths">
161
- <h2 class="title is-3">Methods</h2>
162
- <div class="content has-text-justified">
163
- <p>
164
- Selene Mini is optimized for fast inference, high performance, and promptability. It is a general-purpose evaluator, and is trained to respond with both critiques and judgments in order to deliver actionable insights. To achieve this, we fine-tuned a Llama 3.1 8B Instruct model on a curated mixture of 16 publicly available datasets, totaling 577k data points.
165
- </p>
166
 
167
  <figure class="image">
168
  <img src="figs/Fig2.png" alt="Data curation strategy">
@@ -171,297 +166,18 @@
171
  </figcaption>
172
  </figure>
173
 
174
- <h3 class="title is-4">Datasets</h3>
175
- <p>
176
- We took inspiration from the datasets used to train Foundational Large Autorater Models (FLAMe), which spanned a mix of pairwise, absolute scoring, and classification tasks. Each data point in these three task types was structured slightly differently.
177
- </p>
178
-
179
- <h3 class="title is-4">Synthetic augmentation</h3>
180
- <p>
181
- To construct pairs of contrasting evaluations, we generated rejected judgments that differed from the chosen ground-truth judgments in the data. For each judgment, we synthetically generated chosen and rejected chain-of-thought critiques by prompting a generation model to argue for the respective judgments.
182
- </p>
183
-
184
- <h3 class="title is-4">Filtering for quality</h3>
185
- <p>
186
- We used filtering strategies on both raw and synthetic data to ensure high quality. For raw data, we used ArmoRM, an off-the-shelf reward model, to score and filter four of our largest datasets that we hypothesized to contain high variance in data quality.
187
- </p>
188
-
189
- <h3 class="title is-4">Training</h3>
190
- <p>
191
- We fine-tuned a Llama 3.1 8B Instruct model using the variant of DPO introduced in [citation], and refer readers to that paper for the full derivation. The distinction between this loss and the "vanilla" DPO loss is that it incorporates a negative log-likelihood term:
192
- </p>
193
- <div class="content has-text-centered">
194
- <p>
195
- \[\mathcal{L}_{\mathrm{DPO}+\mathrm{NLL}}=\mathcal{L}_{\mathrm{DPO}}\left((q_i^c, j_i^c), (q_i^r, j_i^r) \mid x'_i\right)+\alpha \mathcal{L}_{\mathrm{NLL}}\left(q_i^c, j_i^c \mid x'_i\right)\]
196
- </p>
197
- </div>
198
- <p>
199
- Here, \(q_i\) and \(j_i\) correspond to the chain-of-thought critique and judgment for data point \(i\), while \(x'_i\) is the prompt to the judge. The superscript refers to the chosen (\(c\)) or rejected (\(r\)) responses. Note how NLL is only applied on the chosen responses, as we did not want to increase the likelihood of poor-quality responses. \(\alpha\) is a hyperparameter that traded off the pairwise DPO loss against the ground-truth NLL loss.
200
- </p>
201
- <p>
202
- We performed hyperparameter tuning on the following parameters: learning rate \(\eta \in\) {5.5 × 10\(^{-8}\), 1 × 10\(^{-7}\), 7 × 10\(^{-7}\) }, RPO \(\alpha \in\) {0.5, 1} and weight decay \(\in\) {0.01, 0.1}. The final values were a learning rate of 1 × 10\(^{-7}\), \(\alpha = 1\), and weight decay of 0.1. Training was conducted with a batch size of 32 for one epoch on 8 NVIDIA H100 80GB GPUs, taking 16 hours.
203
- </p>
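The DPO + NLL objective above can be illustrated numerically. Below is a minimal pure-Python sketch, assuming the summed sequence log-probabilities of the chosen and rejected (critique, judgment) pairs have already been computed under the trained policy and a frozen reference model; the function name, argument names, and the beta value are illustrative assumptions, not taken from the paper:

```python
import math

def dpo_nll_loss(pi_chosen, pi_rejected, ref_chosen, ref_rejected,
                 alpha=1.0, beta=0.1):
    """Sketch of the DPO + NLL loss for one (chosen, rejected) pair.

    pi_* / ref_* are summed log-probabilities of the full
    (critique, judgment) sequences under the policy and the frozen
    reference model; alpha weights the NLL term, which is applied
    to the chosen response only.
    """
    # Pairwise DPO term: negative log-sigmoid of the scaled
    # policy-vs-reference log-ratio margin
    margin = beta * ((pi_chosen - ref_chosen) - (pi_rejected - ref_rejected))
    dpo = -math.log(1.0 / (1.0 + math.exp(-margin)))
    # NLL term on the chosen response only, so the likelihood of
    # poor-quality (rejected) responses is never increased
    nll = -pi_chosen
    return dpo + alpha * nll
```

Widening the preference margin shrinks the DPO term, while the NLL term keeps probability mass on the chosen ground-truth response, matching the trade-off that \(\alpha\) controls in the equation above.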
204
- </div>
205
- </div>
206
- </div>
207
-
208
- <!-- Results Section -->
209
- <div class="columns is-centered">
210
- <div class="column is-four-fifths">
211
- <h2 class="title is-3">Results</h2>
212
- <div class="content has-text-justified">
213
- <h3 class="title is-4">Benchmark Performance</h3>
214
- <p>
215
- We assess the performance of Selene Mini on 11 out-of-distribution benchmarks, spanning three different types of evaluation tasks: absolute scoring, classification, and pairwise preference.
216
- </p>
217
-
218
- <div class="table-container">
219
- <table class="table is-bordered is-striped is-narrow is-hoverable is-fullwidth">
220
- <caption>Table 1: Detailed performance breakdown across model sizes</caption>
221
- <thead>
222
- <tr>
223
- <th>Model</th>
224
- <th colspan="2">Overall (average)</th>
225
- <th colspan="3">Absolute scoring tasks</th>
226
- <th colspan="6">Pairwise preference tasks</th>
227
- <th colspan="2">Classification tasks</th>
228
- </tr>
229
- <tr>
230
- <th></th>
231
- <th>Tasks</th>
232
- <th>Benchmarks</th>
233
- <th>MT-Bench</th>
234
- <th>FLASK</th>
235
- <th>BiGGen</th>
236
- <th>RewardB</th>
237
- <th>LFQA</th>
238
- <th>HHH</th>
239
- <th>EvalBias</th>
240
- <th>InstruSum</th>
241
- <th>Auto-J</th>
242
- <th>InfoBench</th>
243
- <th>AggreFact</th>
244
- </tr>
245
- </thead>
246
- <tbody>
247
- <tr>
248
- <td>Atla-Selene-Mini</td>
249
- <td><b>0.756</b></td>
250
- <td><b>0.753</b></td>
251
- <td><b>0.746</b></td>
252
- <td>0.613</td>
253
- <td>0.584</td>
254
- <td><b>0.891</b></td>
255
- <td>0.688</td>
256
- <td>0.900</td>
257
- <td><b>0.863</b></td>
258
- <td>0.732</td>
259
- <td>0.576</td>
260
- <td>0.915</td>
261
- <td>0.778</td>
262
- </tr>
263
- <tr>
264
- <td>SFR-LLaMA-3.1-8B-Judge</td>
265
- <td>0.749</td>
266
- <td>0.750</td>
267
- <td>0.710</td>
268
- <td>0.520</td>
269
- <td>0.590</td>
270
- <td>0.887</td>
271
- <td>0.689</td>
272
- <td><b>0.941</b></td>
273
- <td>0.850</td>
274
- <td><b>0.749</b></td>
275
- <td>0.603</td>
276
- <td><b>0.928</b></td>
277
- <td>0.780</td>
278
- </tr>
279
- <tr>
280
- <td>GPT-4o-mini</td>
281
- <td>0.743</td>
282
- <td>0.735</td>
283
- <td>0.700</td>
284
- <td><b>0.615</b></td>
285
- <td><b>0.605</b></td>
286
- <td>0.801</td>
287
- <td><b>0.731</b></td>
288
- <td>0.896</td>
289
- <td>0.725</td>
290
- <td>0.701</td>
291
- <td><b>0.625</b></td>
292
- <td>0.906</td>
293
- <td><b>0.781</b></td>
294
- </tr>
295
- <tr>
296
- <td>Llama-3.1-8B-Instruct</td>
297
- <td>0.660</td>
298
- <td>0.653</td>
299
- <td>0.505</td>
300
- <td>0.448</td>
301
- <td>0.452</td>
302
- <td>0.750</td>
303
- <td>0.730</td>
304
- <td>0.882</td>
305
- <td>0.650</td>
306
- <td>0.608</td>
307
- <td>0.506</td>
308
- <td>0.894</td>
309
- <td>0.756</td>
310
- </tr>
311
- <tr>
312
- <td>Prometheus-2-7B</td>
313
- <td>0.520</td>
314
- <td>0.562</td>
315
- <td>0.460</td>
316
- <td>0.470</td>
317
- <td>0.500</td>
318
- <td>0.720</td>
319
- <td>0.723</td>
320
- <td>0.796</td>
321
- <td>0.400</td>
322
- <td>0.676</td>
323
- <td>0.560</td>
324
- <td>0.486</td>
325
- <td>0.386</td>
326
- </tr>
327
- <tr>
328
- <td>Patronus-GLIDER-3.8B</td>
329
- <td>-</td>
330
- <td>-</td>
331
- <td>-</td>
332
- <td><b>0.615</b></td>
333
- <td>0.604</td>
334
- <td>0.784</td>
335
- <td>-</td>
336
- <td>0.851</td>
337
- <td>-</td>
338
- <td>-</td>
339
- <td>-</td>
340
- <td>-</td>
341
- <td>-</td>
342
- </tr>
343
- <tr>
344
- <td>FlowAI-Judge-3.8B</td>
345
- <td>-</td>
346
- <td>-</td>
347
- <td>-</td>
348
- <td>0.400</td>
349
- <td>0.460</td>
350
- <td>0.728</td>
351
- <td>-</td>
352
- <td>0.803</td>
353
- <td>-</td>
354
- <td>-</td>
355
- <td>-</td>
356
- <td>-</td>
357
- <td>-</td>
358
- </tr>
359
- </tbody>
360
- </table>
361
- </div>
362
-
363
- <h3 class="title is-4">Real-world evaluation</h3>
364
- <p>
365
- While the performance of our SLMJ across a wide range of benchmarks offers an indication of its strong general-purpose evaluation capabilities, such benchmarks are often not entirely representative of realistic evaluation use cases.
366
- </p>
367
-
368
  <figure class="image">
369
  <img src="figs/Fig3.png" alt="Real-world evaluation">
370
  <figcaption>
371
  <b>Figure 3:</b> Real-world evaluation: a) Performance on domain-specific industry benchmarks b) Performance on RewardBench with different prompt formats c) Performance measured by ELO scores in Judge Arena.
372
  </figcaption>
373
  </figure>
374
-
375
- <div class="table-container">
376
- <table class="table is-bordered is-striped is-narrow is-hoverable is-fullwidth">
377
- <caption>Table 2: Industry benchmarks</caption>
378
- <thead>
379
- <tr>
380
- <th>Model</th>
381
- <th colspan="4">CRAFT-MD</th>
382
- <th>Finance</th>
383
- </tr>
384
- <tr>
385
- <th></th>
386
- <th>Medical terminology</th>
387
- <th>Most likely diagnosis</th>
388
- <th>Relevant med. hist.</th>
389
- <th>Overall</th>
390
- <th>Bench</th>
391
- </tr>
392
- </thead>
393
- <tbody>
394
- <tr>
395
- <td>Atla-Selene-Mini</td>
396
- <td>0.92</td>
397
- <td>0.62</td>
398
- <td>0.68</td>
399
- <td>0.74</td>
400
- <td>0.717</td>
401
- </tr>
402
- <tr>
403
- <td>Llama-3.1-8B-Instruct</td>
404
- <td>0.79</td>
405
- <td>0.51</td>
406
- <td>0.62</td>
407
- <td>0.64</td>
408
- <td>0.664</td>
409
- </tr>
410
- </tbody>
411
- </table>
412
- </div>
413
- </div>
414
- </div>
415
- </div>
416
-
417
- <!-- Discussion Section -->
418
- <div class="columns is-centered">
419
- <div class="column is-four-fifths">
420
- <h2 class="title is-3">Discussion</h2>
421
- <div class="content has-text-justified">
422
- <p>
423
- In this work, we introduce Atla Selene Mini, demonstrating that effective general-purpose evaluation can be achieved in smaller model architectures through principled data curation and a hybrid training objective (DPO + SFT). The model's strong performance across benchmarks, particularly on absolute scoring tasks – which represent the most common and useful form of evaluation in practice – suggests that careful attention to training data quality can be as impactful as increased model size for evaluation capabilities.
424
- </p>
425
- <p>
426
- Looking ahead, we anticipate two emerging frontiers that will shape the future of AI evaluation. First is the rise of agent-based systems that combine language models with external tools and APIs, creating more powerful and versatile AI systems. Second is the increasing use of inference-time compute – systems that perform additional reasoning steps during inference to generate higher-quality outputs. These developments will require new evaluation frameworks and capabilities. Future research could explore how evaluator models can assess not just language outputs, but entire chains of reasoning, tool usage, and multi-step processes.
427
- </p>
428
- <p>
429
- In conclusion, Atla Selene Mini represents a significant step forward in making reliable, general-purpose LLM evaluation more accessible to the broader community. Its combination of strong performance, domain generalization, and practical usability in an open-weights model provides a valuable tool for researchers and practitioners working to improve language model capabilities and safety.
430
- </p>
431
- </div>
432
- </div>
433
- </div>
434
-
435
- <!-- Acknowledgments -->
436
- <div class="columns is-centered">
437
- <div class="column is-four-fifths">
438
- <h2 class="title is-3">Acknowledgments</h2>
439
- <div class="content has-text-justified">
440
- <p>
441
- We thank Clémentine Fourrier and the HuggingFace team for their help in setting up Judge Arena. We are grateful to Juan Felipe Cerón Uribe, Seungone Kim, Shreya Shankar, Eugene Yan, Yifan Mai, Austin Xu, Peifeng Wang and the team at SalesForce for helpful discussions around evaluations. We thank Zongheng Yang, Romil Bhardwaj and the Skypilot team for their assistance with our training infrastructure.
442
- </p>
443
  </div>
444
  </div>
445
  </div>
446
  </div>
447
  </section>
448
 
449
- <footer class="footer">
450
- <div class="container">
451
- <div class="columns is-centered">
452
- <div class="column is-8">
453
- <div class="content">
454
- <p class="has-text-centered">
455
- © 2025 Atla AI
456
- </p>
457
- </div>
458
- </div>
459
- </div>
460
- </div>
461
- </footer>
462
- </div>
463
- </section>
464
-
465
  <footer class="footer">
466
  <div class="container">
467
  <div class="content has-text-centered">
 
20
  <script src="./static/js/bulma-carousel.min.js"></script>
21
  <script src="./static/js/bulma-slider.min.js"></script>
22
  <script src="./static/js/index.js"></script>
 
23
  </head>
24
  <body>
25
 
 
31
  <h1 class="title is-1 publication-title">Atla Selene Mini:<br>A General Purpose Evaluation Model</h1>
32
  <div class="is-size-5 publication-authors">
33
  <span class="author-block">
34
+ <a href="https://huggingface.co/inwaves" target="_blank">Andrei Alexandru</a><sup>1</sup>,</span>
35
  <span class="author-block">
36
+ <a href="https://huggingface.co/NinaCalvi" target="_blank">Antonia Calvi</a><sup>1</sup>,</span>
37
  <span class="author-block">
38
+ <a href="https://huggingface.co/HennersBro98" target="_blank">Henry Broomfield</a><sup>1</sup>,</span>
39
  <span class="author-block">
40
+ <a href="https://huggingface.co/jacksongolden" target="_blank">Jackson Golden</a><sup>1</sup>,</span>
41
  <span class="author-block">
42
+ <a href="https://huggingface.co/kaikaidai" target="_blank">Kyle Dai</a><sup>1</sup>,</span>
43
  </div>
44
  <div class="is-size-5 publication-authors">
45
  <span class="author-block">
46
+ <a href="https://huggingface.co/mathias-atla" target="_blank">Mathias Leys</a><sup>1</sup>,</span>
47
  <span class="author-block">
48
+ <a href="https://huggingface.co/MauriceBurg" target="_blank">Maurice Burger</a><sup>1</sup>,</span>
49
  <span class="author-block">
50
+ <a href="https://huggingface.co/mbartolo" target="_blank">Max Bartolo</a><sup>2,3</sup>,</span>
51
  <span class="author-block">
52
+ <a href="https://huggingface.co/RomanEngeler1805" target="_blank">Roman Engeler</a><sup>1</sup>,</span>
53
  </div>
54
  <div class="is-size-5 publication-authors">
55
  <span class="author-block">
56
+ <a href="https://huggingface.co/spisupat" target="_blank">Sashank Pisupati</a><sup>1</sup>,</span>
57
  <span class="author-block">
58
+ <a href="https://huggingface.co/tobydrane" target="_blank">Toby Drane</a><sup>1</sup>,</span>
59
  <span class="author-block">
60
+ <a href="https://huggingface.co/youngsunpark" target="_blank">Young Sun Park</a><sup>1</sup></span>
61
  </div>
62
 
63
  <div class="is-size-5 publication-authors">
 
83
  <a href="https://hf.co/AtlaAI/Selene-1-Mini-Llama-3.1-8B" target="_blank"
84
  class="external-link button is-normal is-rounded is-dark">
85
  <span class="icon">
86
+ <i class="fab fa-huggingface"></i>
87
  </span>
88
  <span>HuggingFace</span>
89
  </a>
90
  </span>
91
+ <!-- Github Link -->
92
+ <span class="link-block">
93
+ <a href="https://github.com/atla-ai/selene-mini" target="_blank"
94
+ class="external-link button is-normal is-rounded is-dark">
95
+ <span class="icon">
96
+ <i class="fab fa-github"></i>
97
+ </span>
98
+ <span>Code</span>
99
+ </a>
100
+ </span>
101
  <!-- Ollama Link -->
102
  <span class="link-block">
103
  <a href="https://ollama.com/atla/selene-mini" target="_blank"
 
116
  </div>
117
  </section>
118
 
119
+ <section class="hero teaser">
120
  <div class="container is-max-desktop">
121
+ <div class="hero-body">
122
+ <img src="/api/placeholder/800/400" alt="Placeholder for GIF"/>
123
+ <h2 class="subtitle has-text-centered">
124
+ Atla Selene Mini outperforms current state-of-the-art small language models across multiple benchmarks
125
+ </h2>
126
  </div>
127
+ </div>
128
+ </section>
129
 
130
+ <section class="section">
131
+ <div class="container is-max-desktop">
132
  <!-- Abstract -->
133
  <div class="columns is-centered has-text-centered">
134
  <div class="column is-four-fifths">
 
147
  </div>
148
  </div>
149
 
150
+ <!-- Key Results -->
151
  <div class="columns is-centered">
152
  <div class="column is-four-fifths">
153
+ <h2 class="title is-3">Key Results</h2>
154
  <div class="content has-text-justified">
155
  <figure class="image">
156
  <img src="figs/Fig1.png" alt="Performance comparison">
157
  <figcaption>
158
  <b>Figure 1:</b> Atla Selene Mini outperforms current state-of-the-art SLMJs: a) Overall task-average performance, comparing Atla Selene Mini (black) with the best and most widely used SLMJs. b) Breakdown of performance by task type and benchmark.
159
  </figcaption>
160
  </figure>
161
 
162
  <figure class="image">
163
  <img src="figs/Fig2.png" alt="Data curation strategy">
 
166
  </figcaption>
167
  </figure>
168
 
169
  <figure class="image">
170
  <img src="figs/Fig3.png" alt="Real-world evaluation">
171
  <figcaption>
172
  <b>Figure 3:</b> Real-world evaluation: a) Performance on domain-specific industry benchmarks b) Performance on RewardBench with different prompt formats c) Performance measured by ELO scores in Judge Arena.
173
  </figcaption>
174
  </figure>
175
  </div>
176
  </div>
177
  </div>
178
  </div>
179
  </section>
180
 
181
  <footer class="footer">
182
  <div class="container">
183
  <div class="content has-text-centered">