---
license: apache-2.0
datasets:
- MidhunKanadan/logical-fallacy-classification
language:
- en
metrics:
- accuracy
- precision
- recall
- f1
base_model:
- roberta-large
pipeline_tag: text-classification
library_name: transformers
tags:
- fallacy-detection
- logical-fallacies
- text-classification
- transformers
- roberta
- fallacy-classification
---
# roberta-large-fallacy-classification
This model is a fine-tuned version of `roberta-large` trained on the [Logical Fallacy Classification Dataset](https://huggingface.co/datasets/MidhunKanadan/logical-fallacy-classification). It classifies text into 13 types of logical fallacies.
## Model Details
- **Base Model**: `roberta-large`
- **Dataset**: Logical Fallacy Classification Dataset
- **Number of Classes**: 13
- **Training Parameters**:
- **Learning Rate**: 2e-6
- **Batch Size**: 8 (gradient accumulation for an effective batch size of 16)
- **Weight Decay**: 0.01
- **Training Epochs**: 15
- **Mixed Precision (FP16)**: Enabled
- **Features**:
- Class weights to handle dataset imbalance
- Tokenization with truncation and padding (maximum length: 128)
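Class weighting of this kind is commonly implemented by passing inverse-frequency weights to the cross-entropy loss. A minimal sketch of the idea, where the label counts and the normalization formula are illustrative assumptions rather than the exact values used in training:

```python
import torch
import torch.nn as nn

# Hypothetical label counts for a 13-class, imbalanced dataset (illustrative only).
label_counts = torch.tensor(
    [120, 340, 90, 210, 60, 150, 180, 75, 110, 200, 95, 160, 130],
    dtype=torch.float,
)

# Inverse-frequency weights, normalized so a perfectly balanced dataset
# would yield all-ones weights; rarer classes get larger weights.
num_classes = label_counts.numel()
class_weights = label_counts.sum() / (num_classes * label_counts)

# The weighted loss can then replace the default loss in a custom
# training loop or a subclassed Trainer.compute_loss.
loss_fn = nn.CrossEntropyLoss(weight=class_weights)

logits = torch.randn(4, num_classes)  # dummy batch of model outputs
labels = torch.tensor([1, 3, 0, 9])   # dummy gold labels
loss = loss_fn(logits, labels)
print(f"weighted loss: {loss.item():.4f}")
```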
## Supported Fallacies
The model can classify the following types of logical fallacies:
1. **Equivocation**
2. **Faulty Generalization**
3. **Fallacy of Logic**
4. **Ad Populum**
5. **Circular Reasoning**
6. **False Dilemma**
7. **False Causality**
8. **Fallacy of Extension**
9. **Fallacy of Credibility**
10. **Fallacy of Relevance**
11. **Intentional**
12. **Appeal to Emotion**
13. **Ad Hominem**
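The exact label strings and their index order do not need to be hard-coded; they can be read from the model's configuration, for example:

```python
from transformers import AutoConfig

# Downloads only the model config (a small JSON file), not the weights.
config = AutoConfig.from_pretrained("MidhunKanadan/roberta-large-fallacy-classification")

# id2label maps class indices to the label strings the model predicts.
for idx in sorted(config.id2label):
    print(idx, config.id2label[idx])
```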
## Text Classification Pipeline
To use the model for quick classification with a text pipeline:
```python
from transformers import pipeline

pipe = pipeline(
    "text-classification",
    model="MidhunKanadan/roberta-large-fallacy-classification",
    device=0,  # GPU; use device=-1 to run on CPU
)

text = "The rooster always crows before the sun rises, therefore the crowing rooster causes the sun to rise."
result = pipe(text)[0]
print(f"Predicted Label: {result['label']}, Score: {result['score']:.4f}")
```
Expected Output:
```
Predicted Label: false causality, Score: 0.9632
```
## Advanced Usage: Predict Scores for All Labels
```python
import torch
import torch.nn.functional as F
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_path = "MidhunKanadan/roberta-large-fallacy-classification"
text = "The rooster always crows before the sun rises, therefore the crowing rooster causes the sun to rise."

# Fall back to CPU when no GPU is available
device = "cuda" if torch.cuda.is_available() else "cpu"
tokenizer = AutoTokenizer.from_pretrained(model_path)
model = AutoModelForSequenceClassification.from_pretrained(model_path).to(device)

inputs = tokenizer(text, return_tensors="pt", truncation=True, padding=True, max_length=128).to(device)
with torch.no_grad():
    probs = F.softmax(model(**inputs).logits, dim=-1)
results = {model.config.id2label[i]: score.item() for i, score in enumerate(probs[0])}

# Print scores for all labels, highest first
for label, score in sorted(results.items(), key=lambda x: x[1], reverse=True):
    print(f"{label}: {score:.4f}")
```
Expected Output:
```
false causality: 0.9632
fallacy of logic: 0.0139
faulty generalization: 0.0054
intentional: 0.0029
fallacy of credibility: 0.0023
equivocation: 0.0022
fallacy of extension: 0.0020
ad hominem: 0.0019
circular reasoning: 0.0016
false dilemma: 0.0015
fallacy of relevance: 0.0013
ad populum: 0.0009
appeal to emotion: 0.0009
```
## Dataset
- **Dataset Name**: Logical Fallacy Classification Dataset
- **Source**: [Logical Fallacy Classification Dataset](https://huggingface.co/datasets/MidhunKanadan/logical-fallacy-classification)
- **Number of Classes**: 13 fallacy types, including ad hominem, appeal to emotion, and faulty generalization
## Applications
- **Education**: Teach logical reasoning and critical thinking by identifying common fallacies.
- **Argumentation Analysis**: Evaluate the validity of arguments in debates, essays, and articles.
- **AI Assistants**: Enhance conversational AI systems with critical reasoning capabilities.
- **Content Moderation**: Identify logical flaws in online debates or social media discussions.
## License
The model is licensed under the Apache 2.0 License.