Formats:
csv
Languages:
English
Size:
< 1K
Tags:
benchmark
llm-evaluation
large-language-models
large-language-model
large-multimodal-models
llm-training
License:
mit
Update README.md
README.md CHANGED
@@ -10,6 +10,16 @@ configs:
   data_files: "KeyQuestions.csv"
 ---
 
+---
+MSEval Dataset:
+---
+
+A benchmark for evaluating foundation models, and for modifying their behavior through existing techniques, in the context of material selection for conceptual design.
+
+The data was collected through a survey of experts in material selection, who answered the same questions listed in KeyQuestions.csv.
+
+It can be used to evaluate a language model's performance and the spread of its answers against the human expert responses.
+
 ---
 license: mit
 ---
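Since the card describes asking a model the questions from KeyQuestions.csv and comparing the spread of its answers with the expert survey, a minimal sketch of that loop is below. It loads the CSV named in the card's `data_files` config; the column name `question`, the numeric answer scale, and the `ask_model` helper are illustrative assumptions, not part of the dataset.

```python
import random
import statistics

from datasets import load_dataset

# Load the CSV referenced by the card's `data_files` config.
questions = load_dataset("csv", data_files="KeyQuestions.csv", split="train")

def ask_model(question: str) -> float:
    """Hypothetical helper: query a foundation model once and return a
    numeric answer to one survey question. Swap in a real API client."""
    return random.uniform(1.0, 5.0)  # dummy placeholder value for the sketch

for row in questions:
    q = row["question"]  # assumed column name; check KeyQuestions.csv
    # Sample the model several times so the spread of its answers can be
    # compared with the spread of the expert survey responses.
    answers = [ask_model(q) for _ in range(10)]
    print(q, round(statistics.mean(answers), 2), round(statistics.stdev(answers), 2))
```

Sampling the model repeatedly mirrors how the survey captures variation across expert respondents, which is what the card's "spread" comparison refers to.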