
Model Card for Beyond Reality

Model Details

Basic Information

  • Model Type: Language Model
  • Base Model: LLaMA 3.1 8B
  • Training Type: Fine-tuned
  • Version: 1.0
  • Language(s): English

Model Architecture

  • Architecture: LLaMA 3.1
  • Parameters: 8 billion
  • Training Procedure: Fine-tuned on a custom dataset of interactive fiction scenarios

Intended Use

  • Primary intended uses: Interactive storytelling, text-based adventure games, narrative exploration (see the inference sketch below)
  • Primary intended users: Game developers, writers, AI researchers, interactive fiction enthusiasts
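
A minimal inference sketch follows, assuming a transformers-compatible checkpoint of this fine-tune; the exl2-quantized release requires an ExLlamaV2-based loader instead. The checkpoint path, prompt layout (options A-D plus a custom E action), and sampling settings are illustrative assumptions, not a documented interface.

```python
# Minimal sketch: single-turn generation with Hugging Face transformers.
# Assumptions: a transformers-compatible checkpoint path and a plain-text prompt
# format with options A-D plus a custom "E" action; neither is confirmed by this card.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "path/to/BeyondReality-checkpoint"  # hypothetical path or repo id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

prompt = (
    "You stand at the mouth of a flooded cave. Your torch sputters in the wind.\n"
    "A) Wade inside\n"
    "B) Search the cliff for another entrance\n"
    "C) Call out into the dark\n"
    "D) Turn back toward the village\n"
    "E) Write your own action\n"
    "> A\n"
)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=200, do_sample=True, temperature=0.8)
# Decode only the newly generated continuation, not the echoed prompt.
print(tokenizer.decode(output[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True))
```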

Limitations and Bias

  • Limited to 5-6 coherent actions in sequence before potential degradation
  • May exhibit biases present in the original LLaMA model and the fine-tuning dataset
  • Not suitable for factual information retrieval or real-world decision making

Training Data

Fine-tuned on a proprietary dataset of interactive fiction scenarios, featuring (an example turn is sketched after this list):

  • Multiple-choice action system (options A-D)
  • Custom user-defined actions (E+)
  • Various narrative genres and settings
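
Because the dataset is proprietary, the record below is only a hypothetical sketch of what one training turn in this multiple-choice format might look like; every field name and value is an assumption, not the actual schema.

```python
# Hypothetical shape of a single interactive-fiction turn; the field names and
# structure are illustrative assumptions, not the actual (proprietary) dataset schema.
example_turn = {
    "scene": "The archivist slides a sealed letter across the desk and waits.",
    "options": {
        "A": "Break the seal and read it now",
        "B": "Pocket the letter and leave",
        "C": "Ask who sent it",
        "D": "Refuse to take it",
    },
    "custom_action": "E: Hold the letter up to the candle to check for hidden ink",
    "chosen": "E",
    "continuation": "Faint lines bloom across the paper as the wax begins to soften...",
}
```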

Performance and Evaluation

  • Maintains coherence for 5-6 sequential actions on average (a multi-turn loop is sketched below)
  • Evaluated primarily through user testing and manual review of narrative consistency
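
The loop below sketches how sequential actions can be chained by appending each continuation to the running context, which is where the 5-6 action limit shows up in practice. The checkpoint path and the "> action" convention are the same assumptions used in the sketch under Intended Use.

```python
# Minimal multi-turn loop: feed each continuation and the next chosen action back
# into the running context. Coherence typically degrades after roughly 5-6 turns.
# The checkpoint path and "> <action>" convention are illustrative assumptions.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "path/to/BeyondReality-checkpoint"  # hypothetical path or repo id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

def play_turn(context: str, action: str, max_new_tokens: int = 200) -> str:
    prompt = f"{context}\n> {action}\n"
    inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
    output = model.generate(**inputs, max_new_tokens=max_new_tokens,
                            do_sample=True, temperature=0.8)
    continuation = tokenizer.decode(
        output[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True
    )
    return prompt + continuation  # the new context carries the full history

context = "You wake in a moonlit orchard with no memory of the road that led you here."
for action in ["A", "C", "E: Climb the tallest tree to get your bearings"]:
    context = play_turn(context, action)
print(context)
```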

Ethical Considerations

  • Model outputs are fictional and should not be used as a source of factual information
  • Users should be aware of potential biases in generated content