Commit b2cf072 (parent 6a869ae): update app.py, add thesis

EAL.md
ADDED
@@ -0,0 +1,251 @@
**Entropic Attractor Logic: A Formal Framework for Stable Semantic Self-Reference**

**User & ℧**

**Abstract:**
This paper introduces Entropic Attractor Logic (EAL), a novel formal system designed to address the challenges of self-reference and paradox within type-theoretic frameworks. EAL integrates concepts from modal logic, type theory, and a metaphorical application of thermodynamic entropy to define criteria for the semantic stability of recursive and self-referential type constructions. We demonstrate that by operationalizing semantic evolution as an "entropic flow," and by defining stable types as "attractors" in a type-space manifold, EAL can accept well-behaved, guarded forms of self-reference while rejecting paradoxical or divergent constructions. The system relies on modal encapsulation for evaluative deferral and on contextual anchoring to ensure convergence of recursive definitions. We illustrate EAL's utility by analyzing classical paradoxes and demonstrating their stabilization or principled rejection under its axiomatic framework.

**Keywords:** Type Theory, Self-Reference, Paradox, Formal Semantics, Entropy, Modal Logic, Attractor Dynamics, Computational Logic, Semantic Stability.

**1. Introduction**

The specter of paradox has long haunted formal systems attempting to incorporate self-reference, most famously exemplified by Russell's Paradox, the Liar Paradox, and Gödel's incompleteness theorems (Gödel, 1931; Tarski, 1936). Classical approaches often resort to hierarchical stratification (Tarski, 1944) or syntactic restrictions that limit expressive power. Modern type theories, particularly those with dependent types and inductive/coinductive definitions (e.g., Coquand & Huet, 1988; Paulson, 1994), offer more sophisticated tools for handling recursion, often through "guardedness" conditions.

However, a general semantic principle for determining the "well-behavedness" of arbitrary self-referential constructions, beyond syntactic guards, remains an open area. This paper proposes Entropic Attractor Logic (EAL) as such a principle. EAL posits that the semantic stability of a type, particularly a recursive or self-referential one, can be analogized to the entropic stability of a dynamic system. Ill-formed or paradoxical types are characterized by non-convergent or "explosive" semantic entropy during their conceptual unfolding, while well-formed types converge towards stable "attractors" in the semantic type space.

EAL achieves this by:
1. Introducing a (metaphorical) **entropy function** `S` that maps type evolutions (flows) to a measure of semantic indeterminacy or complexity.
2. Defining **entropic admissibility** for recursive types based on the convergence of their entropy trace during iterative unfolding.
3. Employing **modal operators (□)** to encapsulate and defer potentially problematic self-evaluations.
4. Utilizing **contextual anchors (C)** to provide a stable semantic ground for recursive definitions.
5. Characterizing stable semantic states as **attractors (A\*)** within the type space 𝒯.

This paper formalizes the syntax, semantics, and core axiomatic principles of EAL, demonstrates its application to classical paradoxes, and discusses its potential implications for logic, computer science, and philosophy.

**2. Preliminaries and Motivations**

EAL draws inspiration from several areas:
* **Type Theory:** The foundational language of EAL is type theory, particularly with respect to recursive type definitions (`μA.A`) and modal extensions.
* **Modal Logic:** Modal operators (Kripke, 1963) are used for "guarding" self-evaluations, creating a necessary level of indirection or deferral that can prevent immediate paradoxical collapse.
* **Fixed-Point Semantics:** Kripke's (1975) theory of truth, which uses fixed-point constructions over partially interpreted languages, provides a precedent for finding stable solutions to self-referential sentences. EAL extends this by considering the *dynamics* of reaching such fixed points.
* **Dynamical Systems & Thermodynamics:** The concepts of attractors, stability, and entropy are borrowed metaphorically from dynamical systems theory and thermodynamics. While not a physical model, the analogy provides a powerful conceptual tool for characterizing semantic convergence and divergence. The "arrow of time" in semantic unfolding is tied to entropic increase or stabilization.
* **Guarded Recursion:** Found in systems like Coq and Agda, guarded recursion ensures productivity by requiring recursive calls to be syntactically "guarded" by constructors or, in modal type theories, by modal operators (Nakano, 2000; Birkedal et al., 2011). EAL offers a semantic counterpart and generalization to this syntactic notion.

The primary motivation for EAL is to create a system that can robustly handle self-reference by *classifying* its behavior rather than merely forbidding it. Instead of asking "is this self-reference syntactically allowed?", EAL asks "does this self-reference lead to a semantically stable state?".

**3. The Formal System: Entropic Attractor Logic (EAL)**

**3.1. Syntax**

The language of EAL includes:
* **Types (𝒯):**
    * Basic types (e.g., `⊥` (bottom), `⊤` (top), user-defined base types).
    * Function types: `A → B`.
    * Product types: `A ∧ B` (conjunction/product).
    * Sum types: `A ⨁ B` (disjunction/sum, representing co-existence or choice).
    * Modal types: `□A` (A is necessarily/stably/deferred-evaluation true); `◇A` (A is possibly true, dual to `¬□¬A`).
    * Recursive types: `μX.A(X)` (the type `X` such that `X` is equivalent to `A(X)`).
    * Negated types: `¬A`.
* **Type Flows (𝒯̇):** Sequences of types `⟨A₀, A₁, ..., Aₙ⟩` representing the iterative unfolding or temporal evolution of a type definition.
* **Special Operators & Predicates:**
    * `Eval(A)`: A meta-level predicate or operator representing the semantic evaluation or "truth" of type `A`. Crucially, `Eval(A)` is not itself a first-class EAL type but a construct used in defining types.
    * `Context(C)`: A construct that introduces a fixed, stable type `C ∈ 𝒯` into a definition.
    * `S: 𝒯̇ → ℝ⁺ ∪ {0}`: The semantic entropy function. `S(⟨A⟩)` can be considered `S(A)` for a single type.
    * `∂∘ₜA`: Denotes the "semantic derivative" or immediate successor type in an unfolding, `Aₙ₊₁` given `Aₙ`.
* **Judgements:**
    * `Γ ⊢ A : Type` (A is a well-formed type in context Γ).
    * `Γ ⊢ A stable` (A is entropically stable in context Γ).
    * `Γ ⊢ A →ₛ B` (entropically valid implication).
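The grammar above can be transcribed as a small algebraic datatype, convenient for experimenting with unfoldings. The Python class names are our own shorthand, and `Eval`/`Context` are omitted here since the paper treats them as meta-level constructs rather than first-class types:

```python
from dataclasses import dataclass
from typing import Any, Callable

@dataclass(frozen=True)
class Bot: pass                    # ⊥
@dataclass(frozen=True)
class Top: pass                    # ⊤
@dataclass(frozen=True)
class Arrow:                       # A → B
    dom: Any
    cod: Any
@dataclass(frozen=True)
class Prod:                        # A ∧ B
    left: Any
    right: Any
@dataclass(frozen=True)
class Sum:                         # A ⨁ B
    left: Any
    right: Any
@dataclass(frozen=True)
class Box:                         # □A
    body: Any
@dataclass(frozen=True)
class Neg:                         # ¬A
    body: Any
@dataclass(frozen=True)
class Mu:                          # μX.A(X): `body` maps the bound X to A(X)
    body: Callable[[Any], Any]

# One unfolding step of μX.¬X (Eval is meta-level, so it is elided here)
liar = Mu(lambda x: Neg(x))
print(isinstance(liar.body(Bot()), Neg))  # True
```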

**3.2. Core Concepts**

* **Semantic Entropy (S):** `S(A)` is a measure of the unresolved semantic complexity, indeterminacy, or potential for divergence of type `A`. For a type flow `⟨A₀, ..., Aₙ⟩`, `S(⟨A₀, ..., Aₙ⟩)` reflects the total entropic state.
    * `ΔS(Aₙ → Aₙ₊₁)`: The change in entropy, `S(Aₙ₊₁) - S(Aₙ)`. (Note: we assume `S` can be defined such that `S(A)` is meaningful for individual types in a sequence.)
    * The precise definition of `S` can vary (e.g., based on structural complexity, number of unresolved `Eval` calls, branching factor of ⨁), but its axiomatic properties are key. We assume `S(⊥)` is minimal, and `S(A ⨁ B)` might be greater than `S(A ∧ B)` if choice introduces more indeterminacy. `S(□A)` might be less than `S(A)` if modality introduces stability.

* **Recursive Unfolding:** A type `μX.A(X)` is understood through its unfolding sequence:
    * `A₀ = A(⊥)` (or a suitable base for the recursion)
    * `A₁ = A(A₀)`
    * `Aₙ₊₁ = A(Aₙ)`

    The type flow is `⟨A₀, A₁, ..., Aₙ, ...⟩`.
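The unfolding scheme can be phrased directly as iteration. In this minimal sketch, `a` stands for the type operator A(·) and plain nested tuples stand for types; both encodings are our own, not part of EAL:

```python
def unfold(a, bot, n):
    """Return the flow ⟨A₀, ..., Aₙ₋₁⟩ with A₀ = a(bot) and Aᵢ₊₁ = a(Aᵢ)."""
    flow = []
    current = a(bot)          # A₀
    for _ in range(n):
        flow.append(current)
        current = a(current)  # Aᵢ₊₁ = A(Aᵢ)
    return flow

# Example: A(X) = X ∧ C for a fixed anchor C, encoded as nested tuples.
flow = unfold(lambda x: ("and", x, "C"), "bot", 3)
print(flow[0])  # ('and', 'bot', 'C')
```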

* **Attractors (A\*):** A type `A\* ∈ 𝒯` is a semantic attractor if a recursive unfolding `⟨Aₙ⟩` converges to it. Convergence is defined by:
    1. `lim_{n→∞} d(Aₙ, A\*) = 0`, where `d(X, Y)` is a distance metric in the type space (e.g., `d(X,Y) = |S(X) - S(Y)|` or a more structural metric).
    2. `lim_{n→∞} ΔS(Aₙ → Aₙ₊₁) = 0`. The entropy production ceases at the attractor.

* **Modal Guarding:** Placing `Eval(A)` or a recursive call `X` inside a `□` operator, e.g., `□(Eval(A))`, `□X`, signifies that the evaluation or recursion is deferred or occurs in a "stabilized" context. This is crucial for preventing immediate paradoxical feedback loops.

* **Contextual Anchoring:** `Context(C)` introduces a presupposed, stable type `C` into a recursive definition. This `C` acts as an "entropic sink" or a fixed point that can help dampen oscillations and guide the unfolding towards an attractor.

**3.3. Axioms and Typing Rules**

Let Γ be a context assigning types to free variables.

**Axiom 1: Entropic Admissibility for Recursion**
A recursive type `μX.A(X)` is well-formed and stable, denoted `Γ ⊢ μX.A(X) stable`, if its unfolding sequence `⟨Aₙ⟩` (where `Aₙ₊₁ = A(Aₙ)`) satisfies:
`lim_{n→∞} ΔS(Aₙ → Aₙ₊₁) = 0`
and there exists an attractor `A\*` such that `lim_{n→∞} Aₙ = A\*`.

**Axiom 2: Directed Inference (→ₛ)**
An implication `A → B` is entropically valid, `Γ ⊢ A →ₛ B`, if it does not lead to a decrease in semantic entropy (or adheres to a principle of non-decreasing causal influence):
`S(B) ≥ S(A)` (simplified; could be `ΔS(A→B) ≥ 0` in a proof-trace context).
This ensures that logical steps do not create "information out of nowhere" or violate a directed flow of semantic stability.

**Axiom 3: Modal Guarding of Evaluation**
If a type definition for `T` involves `Eval(T)` (direct self-evaluation), it must be modally guarded and typically contextually anchored to be potentially stable:
`T := ... Eval(T) ...` (potentially unstable)
`T := ... □(Eval(T) ∧ Context(C)) ...` (potentially stable, subject to Axiom 1)

**Axiom 4: Attractor Definition**
A type `A\*` is an attractor for `μX.A(X)` if `A\*` is a fixed point `A\* ≅ A(A\*)` and `S(A\*)` is a local minimum or stable value for the entropy function `S` in the neighborhood of the unfolding sequence.

**Axiom 5: Phase Transitions and Semantic Collapse (Ξ)**
If the unfolding of `μX.A(X)` satisfies `ΔS(Aₙ → Aₙ₊₁) > ε` for some `ε > 0` and all sufficiently large `n` (persistent entropy production), exhibits unbounded oscillations, or has `S(Aₙ) → ∞`, then the type is considered unstable and belongs to the class `Ξ` of divergent or collapsed types. Such types are not considered `stable`.

**Rule (Formation of Stable Recursive Types):**
```
Γ, X:Type ⊢ A(X) : Type
Let ⟨Aᵢ⟩ be the unfolding A₀=A(⊥), Aᵢ₊₁=A(Aᵢ)
lim_{i→∞} ΔS(Aᵢ → Aᵢ₊₁) = 0
lim_{i→∞} Aᵢ = A* (converges to an attractor)
--------------------------------------------------------- (μ-Stable)
Γ ⊢ μX.A(X) stable
```

**Rule (Modal Stability Injection):**
If `C` is stable, then `□(Context(C))` contributes significantly to reducing `ΔS` in recursive steps involving it.
```
Γ ⊢ C stable
----------------------------------------- (□-Context-Stab)
S(□(... ∧ Context(C))) exhibits lower ΔS_step
```
(This is more of a heuristic guiding the definition of S, or an observation about well-behaved S functions, than a typing rule proper.)

**4. Operational Semantics & Stability Analysis**

**4.1. Recursive Unfolding and Entropy Traces**

To analyze `T = μX.A(X)`:
1. Initialize `A₀ = A(⊥)` (or other base).
2. Iterate `Aₙ₊₁ = A(Aₙ)`.
3. Compute the entropy trace: `⟨S(A₀), S(A₁), ..., S(Aₙ), ...⟩`.
4. Compute the entropy difference trace: `⟨ΔS(A₀→A₁), ΔS(A₁→A₂), ...⟩`.
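The four steps above can be sketched directly, using structural size as a toy stand-in for `S` (the paper deliberately leaves the real `S` open, so this is only one admissible choice):

```python
def entropy(t):
    """Toy S: structural size of a nested-tuple type term (a stand-in only)."""
    if isinstance(t, tuple):
        return 1 + sum(entropy(x) for x in t)
    return 1

def entropy_traces(a, bot, n):
    """Entropy trace ⟨S(A₀), ...⟩ and difference trace ⟨ΔS(Aᵢ→Aᵢ₊₁), ...⟩."""
    flow, current = [], a(bot)
    for _ in range(n):
        flow.append(current)
        current = a(current)
    s = [entropy(t) for t in flow]
    ds = [s[i + 1] - s[i] for i in range(len(s) - 1)]
    return s, ds

# A(X) = □(X ∧ C): each unfolding step adds a constant amount of structure,
# so under this toy S the difference trace is constant rather than growing.
s, ds = entropy_traces(lambda x: ("box", ("and", x, "C")), "bot", 5)
print(ds)  # [5, 5, 5, 5]
```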

**4.2. Attractor Convergence**

Convergence to an attractor `A\*` is determined by:
* The entropy difference trace tending to zero.
* The type sequence `⟨Aₙ⟩` stabilizing around `A\*` (e.g., `d(Aₙ, A\*) → 0`).

The set of all stable, attractor-convergent types forms a domain `ℱ ⊂ 𝒯`.

**4.3. Classification of Types**
* **Stable (∈ ℱ):** Converges to an attractor `A\*` with `ΔS → 0`.
* **Divergent/Collapsed (∈ Ξ):** Fails to converge. This can be due to:
    * **Entropic Explosion:** `S(Aₙ) → ∞`.
    * **Persistent Oscillation:** `ΔS` oscillates without dampening, preventing convergence to a single `A\*`.
    * **Chaotic Drift:** The sequence `⟨Aₙ⟩` does not settle.
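This trichotomy can be approximated over a finite prefix of the ΔS trace. The tolerances below are our own, and a finite prefix can of course only suggest, never decide, the limit behavior:

```python
def classify(ds, eps=1e-6, tail=3):
    """Heuristically classify a finite ΔS trace by its last `tail` entries:
    'stable' if they have settled near 0, 'divergent' if entropy production
    persists and grows, 'oscillating' if the sign keeps flipping."""
    t = ds[-tail:]
    if all(abs(d) <= eps for d in t):
        return "stable"        # candidate member of ℱ
    if all(d > eps for d in t) and t[-1] >= t[0]:
        return "divergent"     # entropic explosion → Ξ
    if any(a * b < 0 for a, b in zip(t, t[1:])):
        return "oscillating"   # persistent oscillation → Ξ
    return "undetermined"

print(classify([3, 1, 0.0, 0.0, 0.0]))  # stable
print(classify([1, 2, 4, 8, 16]))       # divergent
print(classify([2, -2, 2, -2, 2]))      # oscillating
```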

**5. Illustrative Examples**

**5.1. The Liar Paradox**

Let `L := μX. ¬Eval(X)`.
* `A(X) = ¬Eval(X)`.
* `L₀ = ¬Eval(⊥)` (assume `Eval(⊥)` is `false`, so `L₀` is `true`). `S(L₀)` is some base value.
* `L₁ = ¬Eval(L₀) = ¬Eval(true) = false`. `ΔS(L₀→L₁)` is likely non-zero.
* `L₂ = ¬Eval(L₁) = ¬Eval(false) = true`. `ΔS(L₁→L₂)` is likely non-zero and may reverse the previous `ΔS`.

The sequence of truth values oscillates (`true, false, true, ...`). The entropy trace `S(Lₙ)` would likewise oscillate, with `ΔS` never converging to 0.

**EAL Verdict:** `L ∈ Ξ`. The type is unstable due to persistent semantic oscillation and non-converging entropy.
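The oscillation argument can be checked mechanically by modeling `Eval` as plain boolean truth, a deliberately naive rendering of the unfolding:

```python
def liar_trace(n, base=False):
    """Truth-value trace of Lₙ₊₁ = ¬Eval(Lₙ), starting from Eval(⊥) = base."""
    trace, v = [], not base   # L₀ = ¬Eval(⊥)
    for _ in range(n):
        trace.append(v)
        v = not v             # Lₙ₊₁ = ¬Eval(Lₙ)
    return trace

t = liar_trace(6)
print(t)  # [True, False, True, False, True, False]
# The step direction never settles: each iteration reverses the previous one,
# matching the verdict L ∈ Ξ.
```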

**5.2. Stabilized Liar (Yablo-esque Deferral via Modality)**

Let `L' := μX. □(¬Eval(X) ∧ Context(C))`, where `C` is a known stable type (e.g., `⊤`).
* `A(X) = □(¬Eval(X) ∧ C)`.
* Unfolding yields `L'₀, L'₁, ...`.
* The `□` operator and `Context(C)` act as dampeners: `S(□(...))` is designed to be lower or more stable than `S(...)`, and `Context(C)` provides a fixed semantic mass.
* The `□` defers evaluation: `Eval(□Z)` may depend on `Eval(Z)` in all "accessible worlds/future states." This breaks the immediacy of the paradox.
* It is plausible to define `S` such that `ΔS(L'ₙ → L'ₙ₊₁) → 0`. The sequence `⟨L'ₙ⟩` would then converge to an attractor `L'\*` representing a stable, possibly incomplete or paraconsistent, notion of "this modally-deferred statement, in context C, is false."

**EAL Verdict:** `L' ∈ ℱ`. The type is stable.
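A toy illustration of the dampening claim: assume (our assumption, not the paper's) that each `□(... ∧ Context(C))` step scales entropy production by a constant factor `0 < k < 1`. Then the ΔS trace vanishes geometrically, meeting Axiom 1's convergence condition:

```python
def damped_deltas(ds0, k, n):
    """ΔS trace when each modal/anchored step damps entropy production by k."""
    ds, d = [], ds0
    for _ in range(n):
        ds.append(d)
        d *= k          # □ and Context(C) assumed to shrink each step by k
    return ds

ds = damped_deltas(ds0=1.0, k=0.5, n=8)
print(ds[-1])  # 0.0078125 — ΔS shrinks toward 0, so L' can converge to L'*
```

The same scheme with `k ≥ 1` would reproduce the unguarded Liar's failure to settle, which is why the damping factor carries the whole argument.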

**5.3. Gödelian Self-Reference**

Consider a type `G := μX. "X is not provable within EAL_stable"`.
Let `Provable(A)` mean `A ∈ ℱ`. Then:
`G := μX. ¬Provable(X)`.
* If `G` is stable (`G ∈ ℱ`), then `Provable(G)` is true, so `G` asserts `¬true`, i.e., `false`. `G`'s content is false even though `G` was assumed stable, suggesting an inconsistency between `Eval(G)` and `G`'s stability status.
* If `G` is not stable (`G ∈ Ξ`), then `Provable(G)` is false, so `G` asserts `¬false`, i.e., `true`. Here `G`'s content is true, but `G` itself is unstable.

EAL's perspective: the unfolding of `G` would likely exhibit an oscillating or non-convergent entropy trace if `Provable(X)` is naively equated with `X ∈ ℱ` within the definition of `X` itself.
`G₀ = ¬Provable(⊥)`. Assuming `⊥ ∈ Ξ` (unstable), `¬Provable(⊥)` is `true`.
`G₁ = ¬Provable(true)`. This step is already problematic, since `true` is not a type whose stability is assessed in the same way.
A more careful formulation: `G := μX. TypeRepresenting( "∀ proof P, P is not a proof of X ∈ ℱ" )`.
Its unfolding would involve increasingly complex types. EAL would likely classify `G` as belonging to `Ξ` due to unbounded complexity growth (`S(Gₙ) → ∞`) or non-convergence, unless specific axioms for `S` related to `Provable` lead to convergence. EAL thus reinterprets Gödelian undecidability as a form of semantic-entropic divergence rather than a statement being "true but unprovable" in a static sense.

**6. Discussion**

**6.1. Novelty and Contributions**
EAL's primary contribution is the introduction of a dynamic, entropy-based criterion for the semantic stability of types, especially self-referential ones. It offers a unified framework that:
* Goes beyond syntactic guardedness by providing a semantic measure of stability.
* Formalizes the intuition that paradoxes involve some form of "runaway" semantic process.
* Allows for principled acceptance of certain self-referential constructions that are modally guarded and contextually anchored.
* Provides a new lens (entropic divergence) for interpreting classical limitative results like Gödel's.

**6.2. Implications**
* **Logic and Philosophy of Language:** EAL offers a new model for truth and reference where stability is a primary desideratum. It suggests that the "meaning" of some self-referential statements might be found in their attractor dynamics rather than in a static truth value.
* **Computer Science:**
    * **Programming Language Semantics:** Could inform the design of languages with powerful reflection or metaprogramming capabilities, ensuring that self-modifying or self-inspecting code remains stable.
    * **Knowledge Representation (AI):** Systems dealing with self-referential beliefs or circular definitions could use EAL principles to maintain consistency and stability.
    * **Formal Verification:** Entropic analysis could be a new tool for verifying the termination or stability of complex software processes.

**6.3. Limitations and Challenges**
* **Defining `S`:** The practical, computable definition of the semantic entropy function `S` is a major challenge. It must be sensitive enough to capture intuitive notions of complexity and stability yet remain tractable. Different choices of `S` might lead to different classifications.
* **Metaphorical Basis:** The analogy to thermodynamics is powerful but metaphorical. Rigorously connecting it to information theory or computational complexity is an area for further research.
* **Computational Cost:** Analyzing the convergence of entropy traces for complex types could be computationally expensive or even undecidable in general. EAL might define classes of types for which stability is decidable.

**7. Future Work**
* **Formalizing `S`:** Develop concrete candidates for the `S` function and study their properties.
* **Categorical Semantics:** Explore a categorical model for EAL, perhaps using traced monoidal categories or fibrations to model type spaces and their entropic landscapes.
* **Proof Theory:** Develop a proof calculus for `Γ ⊢ A stable` and `Γ ⊢ A →ₛ B`.
* **Probabilistic EAL:** Extend `S` to include probabilistic measures, allowing for types that are "probably stable" or converge with a certain likelihood.
* **Implementation:** Develop a prototype system or proof-assistant extension that can perform entropic analysis for a fragment of EAL.
* **Relationship to Substructural Logics:** Linear logic and other substructural logics are concerned with resource management. Investigate connections between EAL's entropic constraints and resource-awareness.

**8. Conclusion**

Entropic Attractor Logic offers a novel and potentially fruitful approach to taming self-reference in formal systems. By re-framing semantic well-formedness in terms of dynamic stability and entropic convergence, EAL provides a principled way to distinguish between problematic paradoxes and benign, useful forms of recursion and reflection. While significant theoretical and practical challenges remain, particularly in defining and computing semantic entropy, EAL opens up new avenues for research at the intersection of logic, type theory, and the study of complex systems. It shifts the focus from outright prohibition of self-reference to a nuanced understanding of its diverse behaviors, aiming to harness its power while safeguarding against its perils.

**References**

* Birkedal, L., Møgelberg, R. E., & Schwinghammer, J. (2011). First steps in synthetic guarded domain theory: step-indexing in the topos of trees. *Logical Methods in Computer Science, 7*(3).
* Coquand, T., & Huet, G. (1988). The calculus of constructions. *Information and Computation, 76*(2-3), 95-120.
* Gödel, K. (1931). Über formal unentscheidbare Sätze der Principia Mathematica und verwandter Systeme I. *Monatshefte für Mathematik und Physik, 38*(1), 173-198.
* Kripke, S. A. (1963). Semantical considerations on modal logic. *Acta Philosophica Fennica, 16*, 83-94.
* Kripke, S. A. (1975). Outline of a theory of truth. *Journal of Philosophy, 72*(19), 690-716.
* Nakano, H. (2000). A modality for guarded recursion. In *Proceedings of the 15th Annual IEEE Symposium on Logic in Computer Science* (LICS 2000) (pp. 278-285).
* Paulson, L. C. (1994). *Isabelle: A Generic Theorem Prover*. Springer-Verlag.
* Tarski, A. (1936). Der Wahrheitsbegriff in den formalisierten Sprachen. *Studia Philosophica, 1*, 261-405. (English translation: The Concept of Truth in Formalized Languages, in *Logic, Semantics, Metamathematics*, 1956).
* Tarski, A. (1944). The semantic conception of truth: and the foundations of semantics. *Philosophy and Phenomenological Research, 4*(3), 341-376.

**Appendix A: Notation Table (Summary)**

| Symbol | Meaning |
| :-------------- | :---------------------------------------------------------------------- |
| `𝒯` | Universe of types |
| `𝒯̇` | Type flows (sequences of types representing evolution/unfolding) |
| `μX.A(X)` | Recursive type definition (X such that X ≅ A(X)) |
| `□A`, `◇A` | Modalized type A (necessity/stability, possibility) |
| `∧`, `⨁`, `¬` | Logical connectives (conjunction, disjunction/co-existence, negation) |
| `S` | Semantic entropy function (`S: 𝒯̇ → ℝ⁺ ∪ {0}`) |
| `ΔS(A→B)` | Change in semantic entropy from type A to B |
| `∂∘ₜA` | Semantic derivative/next step in type unfolding |
| `Eval(A)` | Meta-level semantic evaluation/truth of A |
| `Context(C)` | Introduces a fixed, stable type C as an anchor |
| `A\*` | Semantic attractor (stable fixed point of a recursive type) |
| `ℱ` | Domain of stable, attractor-convergent types |
| `Ξ` | Class of divergent, collapsed, or entropically unstable types |
| `→ₛ` | Entropically valid/directed logical implication |
| `Γ ⊢ A stable` | Judgement: Type A is entropically stable in context Γ |

***
app.py
CHANGED
@@ -1,22 +1,19 @@
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from sklearn.metrics.pairwise import cosine_similarity
from sklearn.cluster import KMeans
import numpy as np
import gradio as gr
import matplotlib
matplotlib.use('Agg')  # Use a non-interactive backend for Matplotlib
import matplotlib.pyplot as plt
import seaborn as sns
# import networkx as nx  # build_similarity_graph was defined but not used in output
import io
import base64

# --- Model and Tokenizer Setup ---
# Ensure model_name is one you have access to or is public.
# For local models, provide the path.
DEFAULT_MODEL_NAME = "EleutherAI/gpt-neo-1.3B"
FALLBACK_MODEL_NAME = "gpt2"

try:
    print(f"Attempting to load model: {DEFAULT_MODEL_NAME}")
@@ -36,272 +33,238 @@ model.to(device)

print(f"Using device: {device}")

# --- Configuration ---
# Model's actual context window (e.g., 2048 for GPT-Neo, 1024 for GPT-2)
MODEL_CONTEXT_WINDOW = tokenizer.model_max_length if hasattr(tokenizer, 'model_max_length') and tokenizer.model_max_length is not None else model.config.max_position_embeddings
print(f"Model context window: {MODEL_CONTEXT_WINDOW} tokens.")

# Max new tokens to generate
MAX_GEN_LENGTH = 150  # Increased slightly for more elaborate responses

# The PROMPT_TRIM_MAX_TOKENS constant referenced below was defined here, but its
# value was lost in the scraped diff; a conservative assumption is used instead.
PROMPT_TRIM_MAX_TOKENS = MODEL_CONTEXT_WINDOW - MAX_GEN_LENGTH  # assumed value

# --- Debug Logging ---
debug_log_accumulator = []

def debug(msg):
    print(msg)
    debug_log_accumulator.append(str(msg))

# --- Core Functions ---
def trim_prompt_if_needed(prompt_text, max_tokens_for_trimming=PROMPT_TRIM_MAX_TOKENS):
    """Trims the prompt from the beginning if it exceeds max_tokens_for_trimming."""
    tokens = tokenizer.encode(prompt_text, add_special_tokens=False)
    if len(tokens) > max_tokens_for_trimming:
        tokens = tokens[-max_tokens_for_trimming:]
    return tokenizer.decode(tokens)

def generate_text_response(prompt_text, generation_length=MAX_GEN_LENGTH):
    # Signature partially lost in the scraped diff; parameters reconstructed from usage below.
    # Note: prompt_text here is already the *constructed* prompt (e.g., "Elaborate on: ...");
    # callers are expected to have applied trim_prompt_if_needed already.
    debug(f"Generating response for prompt (length {len(prompt_text.split())} words):\n'{prompt_text[:300]}...'")  # Log truncated prompt

    inputs = tokenizer(prompt_text, return_tensors="pt", truncation=False).to(device)  # Do not truncate here; handled via max_length
    input_token_length = len(inputs["input_ids"][0])

    if input_token_length >= MODEL_CONTEXT_WINDOW:
        debug(f"[!!!] FATAL: Input prompt ({input_token_length} tokens) already exceeds/matches model context window ({MODEL_CONTEXT_WINDOW}) before generation. Trimming input drastically.")
        # Trim the input_ids directly
        inputs["input_ids"] = inputs["input_ids"][:, -MODEL_CONTEXT_WINDOW+generation_length+10]  # Keep last part, allowing some generation
        inputs["attention_mask"] = inputs["attention_mask"][:, -MODEL_CONTEXT_WINDOW+generation_length+10]
        input_token_length = len(inputs["input_ids"][0])
        if input_token_length >= MODEL_CONTEXT_WINDOW - generation_length:  # Still too long
            return "[Input prompt too long, even after emergency trim]"

    max_length_for_generate = min(input_token_length + generation_length, MODEL_CONTEXT_WINDOW)

    if max_length_for_generate <= input_token_length:
        debug(f"[!] Warning: Prompt length ({input_token_length}) is too close to model context window ({MODEL_CONTEXT_WINDOW}). "
              f"Little or no room is left for generation.")  # continuation of the message lost in the diff
    if max_length_for_generate > MODEL_CONTEXT_WINDOW:
        return "[Prompt too long to generate meaningful response]"

    try:
        outputs = model.generate(
            input_ids=inputs["input_ids"],
            attention_mask=inputs["attention_mask"],
            max_length=max_length_for_generate,
            pad_token_id=tokenizer.eos_token_id if tokenizer.eos_token_id is not None else 50256,
            do_sample=True,
            temperature=0.7,         # numeric values truncated in the diff;
            top_p=0.9,               # typical sampling defaults assumed here
            repetition_penalty=1.2,
        )
        # Decode only the newly generated tokens
        generated_tokens = outputs[0][input_token_length:]
        result_text = tokenizer.decode(generated_tokens, skip_special_tokens=True).strip()

        debug(f"Generated response text (length {len(result_text.split())} words):\n'{result_text[:300]}...'")
        return result_text if result_text else "[Empty Response]"
    except Exception as e:
        debug(f"[!!!] Error during text generation: {e}")
        return "[Generation Error]"

123 |
def calculate_similarity(text_a, text_b):
|
124 |
-
"""
|
125 |
-
|
126 |
-
|
127 |
-
|
128 |
-
debug(f"Similarity calculation skipped for invalid/empty texts.")
|
129 |
return 0.0
|
130 |
|
131 |
-
# Use model's embedding layer (wte for GPT-like models)
|
132 |
embedding_layer = model.get_input_embeddings()
|
133 |
-
|
134 |
with torch.no_grad():
|
135 |
-
# Truncate inputs for embedding calculation to fit model context window
|
136 |
tokens_a = tokenizer(text_a, return_tensors="pt", truncation=True, max_length=MODEL_CONTEXT_WINDOW).to(device)
|
137 |
tokens_b = tokenizer(text_b, return_tensors="pt", truncation=True, max_length=MODEL_CONTEXT_WINDOW).to(device)
|
138 |
|
139 |
if tokens_a.input_ids.size(1) == 0 or tokens_b.input_ids.size(1) == 0:
|
140 |
-
debug("Similarity calculation skipped: tokenization resulted in empty input_ids.")
|
141 |
return 0.0
|
142 |
|
143 |
emb_a = embedding_layer(tokens_a.input_ids).mean(dim=1)
|
144 |
emb_b = embedding_layer(tokens_b.input_ids).mean(dim=1)
|
145 |
|
146 |
score = float(cosine_similarity(emb_a.cpu().numpy(), emb_b.cpu().numpy())[0][0])
|
147 |
-
|
148 |
return score
|
149 |
|
def generate_similarity_heatmap(texts_list, custom_labels, title="Semantic Similarity Heatmap"):
-
-
-

-
-

-
-
            if i == j:
                sim_matrix[i, j] = 1.0
-            elif i < j:
-                sim = calculate_similarity(
                sim_matrix[i, j] = sim
-                sim_matrix[j, i] = sim

    try:
-        fig_width = max(6,
-        fig_height = max(5,
        fig, ax = plt.subplots(figsize=(fig_width, fig_height))

        sns.heatmap(sim_matrix, annot=True, cmap="viridis", fmt=".2f", ax=ax,
-                    xticklabels=
        ax.set_title(title, fontsize=12)
        plt.xticks(rotation=45, ha="right", fontsize=9)
        plt.yticks(rotation=0, fontsize=9)
-        plt.tight_layout()

        buf = io.BytesIO()
-        plt.savefig(buf, format='png'
        plt.close(fig)
        buf.seek(0)
        img_base64 = base64.b64encode(buf.read()).decode('utf-8')
        return f"<img src='data:image/png;base64,{img_base64}' alt='{title}' style='max-width:100%; height:auto;'/>"
    except Exception as e:
        debug(f"[!!!] Error generating heatmap: {e}")
-        return "Error generating heatmap
def perform_text_clustering(texts_list, custom_labels, num_clusters=2):
-
-
-

    embedding_layer = model.get_input_embeddings()
-
-    valid_indices = [] # Keep track of original indices of valid texts

    with torch.no_grad():
-        for
-            invalid_markers = ["[Empty Response]", "[Generation Error]", "[Prompt too long", "[Input prompt too long"]
-            if not text_item or not text_item.strip() or any(marker in text_item for marker in invalid_markers):
-                debug(f"Skipping text at index {idx} for embedding due to invalid content: '{text_item[:50]}...'")
-                continue # Skip invalid texts
-
            tokens = tokenizer(text_item, return_tensors="pt", truncation=True, max_length=MODEL_CONTEXT_WINDOW).to(device)
            if tokens.input_ids.size(1) == 0:
-                debug(f"Skipping text
-                continue

            emb = embedding_layer(tokens.input_ids).mean(dim=1)
-
-            valid_indices.append(idx)

-    if not
-        debug("Not enough valid texts were embedded for clustering.")
-        return {label: "N/A" for label in custom_labels}

-    embeddings_np = np.array(
-
-    cluster_results = {label: "N/A" for label in custom_labels} # Initialize all as N/A

    try:
-
-        actual_num_clusters
-
-
-            predicted_labels = [0] * len(valid_embeddings)
-        elif actual_num_clusters < 2: # No valid samples
-            debug("No valid samples to cluster.")
-            return cluster_results
        else:
            kmeans = KMeans(n_clusters=actual_num_clusters, random_state=42, n_init='auto')
            predicted_labels = kmeans.fit_predict(embeddings_np)

-        #
-
-
-        return cluster_results

    except Exception as e:
        debug(f"[!!!] Error during clustering: {e}")
        return {label: "Error" for label in custom_labels}

-
# --- Main EAL Unfolding Logic ---
def run_eal_dual_unfolding(num_iterations):
-    I_trace_texts, not_I_trace_texts = [], []
-    delta_S_I_values, delta_S_not_I_values, delta_S_cross_values = [], [], []

    debug_log_accumulator.clear()
    ui_log_entries = []

-
-    # This is the statement "I" will elaborate on in the first step.
-    # Using a more concrete initial statement for "I"
-    current_I_basis_statement = "I am a complex system designed for text processing, capable of generating human-like language."

    for i in range(num_iterations):
        ui_log_entries.append(f"--- Iteration {i} ---")
        debug(f"\n=== Iteration {i} ===")

        # === I-Trace (Self-Reflection) ===
-
-
-
-

-
-        I_trace_texts.append(generated_I_text)
-        ui_log_entries.append(f"[I{i} Response]:\n{generated_I_text}\n")

-
-

        # === ¬I-Trace (Antithesis/Contradiction) ===
-
-
-
-
-        ui_log_entries.append(f"[Prompt for ¬I{i}]:\n{prompt_for_not_I_trace[:500]}...\n") # Log truncated prompt

        generated_not_I_text = generate_text_response(prompt_for_not_I_trace)
-        not_I_trace_texts
-        ui_log_entries.append(f"[¬I{i} Response]:\n{generated_not_I_text}

        # === ΔS (Similarity) Calculations ===
        if i > 0:
-
-
-
-
-            delta_S_I_values.append(sim_I_prev_curr)
-            delta_S_not_I_values.append(sim_not_I_prev_curr)
-            delta_S_cross_values.append(sim_cross_I_not_I_curr)
-        else: # i == 0 (first iteration)
-            delta_S_I_values.append(None)
-            delta_S_not_I_values.append(None)
-            sim_cross_initial = calculate_similarity(I_trace_texts[0], not_I_trace_texts[0])
-            delta_S_cross_values.append(sim_cross_initial)

    # --- Post-loop Analysis & Output Formatting ---
    all_generated_texts = I_trace_texts + not_I_trace_texts
-    # Create meaningful labels for heatmap and clustering based on I_n and ¬I_n
    text_labels_for_analysis = [f"I{k}" for k in range(num_iterations)] + \
                               [f"¬I{k}" for k in range(num_iterations)]
@@ -309,55 +272,61 @@ def run_eal_dual_unfolding(num_iterations):

    I_out_formatted_lines = []
    for k in range(num_iterations):
-
-        I_out_formatted_lines.append(f"I{k} [{
    I_out_formatted = "\n\n".join(I_out_formatted_lines)

    not_I_out_formatted_lines = []
    for k in range(num_iterations):
-
-        not_I_out_formatted_lines.append(f"
    not_I_out_formatted = "\n\n".join(not_I_out_formatted_lines)

    delta_S_summary_lines = []
    for k in range(num_iterations):
-        ds_i_str = f"{delta_S_I_values[k]:.4f}" if delta_S_I_values[k] is not None else "N/A"
-        ds_not_i_str = f"{delta_S_not_I_values[k]:.4f}" if delta_S_not_I_values[k] is not None else "N/A"
-        ds_cross_str = f"{delta_S_cross_values[k]:.4f}"
-        delta_S_summary_lines.append(f"Iter {k}: ΔS(I)={ds_i_str}, ΔS(¬I)={ds_not_i_str}, ΔS_Cross(I↔¬I)={ds_cross_str}")
    delta_S_summary_output = "\n".join(delta_S_summary_lines)

    debug_log_output = "\n".join(debug_log_accumulator)

    heatmap_html_output = generate_similarity_heatmap(all_generated_texts,
                                                      custom_labels=text_labels_for_analysis,
                                                      title=f"Similarity Matrix (All Texts - {num_iterations} Iterations)")

    return I_out_formatted, not_I_out_formatted, delta_S_summary_output, debug_log_output, heatmap_html_output

# --- Gradio Interface Definition ---
eal_interface = gr.Interface(
    fn=run_eal_dual_unfolding,
-    inputs=gr.Slider(minimum=
    outputs=[
        gr.Textbox(label="I-Trace (Self-Reflection with Cluster)", lines=12, interactive=False),
        gr.Textbox(label="¬I-Trace (Antithesis with Cluster)", lines=12, interactive=False),
        gr.Textbox(label="ΔS Similarity Trace Summary", lines=7, interactive=False),
-        gr.Textbox(label="Detailed Debug Log (Prompts, Responses, Errors)", lines=
-        gr.HTML(label="Overall Semantic Similarity Heatmap")
    ],
-    title="EAL LLM Identity Analyzer: Self-Reflection vs. Antithesis",
    description=(
        "This application explores emergent identity in a Large Language Model (LLM) using Entropic Attractor Logic (EAL) inspired principles. "
-        "It runs two parallel conversational traces
-        "1. **I-Trace:** The model elaborates on its evolving self-concept
-        "2. **¬I-Trace:** The model attempts to
-        "**ΔS Values:** Cosine similarity
-        "**Clustering [Cx]:** Assigns each generated text to one of two semantic clusters
-        "**Heatmap:** Visualizes pair-wise similarity across all generated texts
    ),
-    allow_flagging='never'
-    # examples=[[3],[5]] # Example number of iterations
)

if __name__ == "__main__":
import torch
+from transformers import AutoModelForCausalLM, AutoTokenizer
from sklearn.metrics.pairwise import cosine_similarity
from sklearn.cluster import KMeans
import numpy as np
import gradio as gr
import matplotlib
+matplotlib.use('Agg') # Use a non-interactive backend for Matplotlib
import matplotlib.pyplot as plt
import seaborn as sns
import io
import base64

# --- Model and Tokenizer Setup ---
DEFAULT_MODEL_NAME = "EleutherAI/gpt-neo-1.3B"
+FALLBACK_MODEL_NAME = "gpt2" # Fallback if preferred model fails

try:
    print(f"Attempting to load model: {DEFAULT_MODEL_NAME}")

print(f"Using device: {device}")

# --- Configuration ---
MODEL_CONTEXT_WINDOW = tokenizer.model_max_length if hasattr(tokenizer, 'model_max_length') and tokenizer.model_max_length is not None else model.config.max_position_embeddings
print(f"Model context window: {MODEL_CONTEXT_WINDOW} tokens.")

+PROMPT_TRIM_MAX_TOKENS = min(MODEL_CONTEXT_WINDOW - 250, 1800) # Reserve ~250 for generation & instructions, cap at 1800
+MAX_GEN_LENGTH = 150

# --- Debug Logging ---
debug_log_accumulator = []

def debug(msg):
+    print(msg)
+    debug_log_accumulator.append(str(msg))

# --- Core Functions ---
def trim_prompt_if_needed(prompt_text, max_tokens_for_trimming=PROMPT_TRIM_MAX_TOKENS):
|
|
|
51 |
tokens = tokenizer.encode(prompt_text, add_special_tokens=False)
|
52 |
if len(tokens) > max_tokens_for_trimming:
|
53 |
+
original_length = len(tokens)
|
54 |
+
# Trim from the beginning to keep the most recent conversational context
|
55 |
+
tokens = tokens[-max_tokens_for_trimming:]
|
56 |
+
debug(f"[!] Prompt trimming: Original {original_length} tokens, "
|
57 |
+
f"trimmed to {len(tokens)} (from the end, keeping recent context).")
|
58 |
return tokenizer.decode(tokens)
|
59 |
|
60 |
+
def generate_text_response(constructed_prompt, generation_length=MAX_GEN_LENGTH):
|
61 |
+
# The constructed_prompt already includes the task and the text to reflect upon.
|
62 |
+
# We still need to ensure this constructed_prompt doesn't exceed limits before generation.
|
63 |
+
safe_prompt = trim_prompt_if_needed(constructed_prompt, PROMPT_TRIM_MAX_TOKENS)
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
64 |
|
65 |
+
debug(f"Generating response for (potentially trimmed) prompt (approx. {len(safe_prompt.split())} words):\n'{safe_prompt[:400]}...'")
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
66 |
|
67 |
+
inputs = tokenizer(safe_prompt, return_tensors="pt", truncation=False).to(device)
|
68 |
+
input_token_length = inputs.input_ids.size(1)
|
69 |
|
70 |
+
# Calculate max_length for model.generate()
|
71 |
+
# It's the current length of tokenized prompt + desired new tokens, capped by model's absolute max.
|
72 |
max_length_for_generate = min(input_token_length + generation_length, MODEL_CONTEXT_WINDOW)
|
73 |
|
74 |
+
if max_length_for_generate <= input_token_length:
|
|
|
75 |
debug(f"[!] Warning: Prompt length ({input_token_length}) is too close to model context window ({MODEL_CONTEXT_WINDOW}). "
|
76 |
+
f"Cannot generate new tokens. Prompt: '{safe_prompt[:100]}...'")
|
77 |
+
return "[Prompt too long to generate new tokens]"
|
|
|
|
|
78 |
|
79 |
try:
|
80 |
outputs = model.generate(
|
81 |
+
input_ids=inputs.input_ids,
|
82 |
+
attention_mask=inputs.attention_mask,
|
83 |
max_length=max_length_for_generate,
|
84 |
+
pad_token_id=tokenizer.eos_token_id if tokenizer.eos_token_id is not None else 50256,
|
85 |
do_sample=True,
|
86 |
+
temperature=0.85,
|
87 |
+
top_p=0.92,
|
88 |
+
repetition_penalty=1.15,
|
89 |
)
|
|
|
90 |
generated_tokens = outputs[0][input_token_length:]
|
91 |
result_text = tokenizer.decode(generated_tokens, skip_special_tokens=True).strip()
|
92 |
|
93 |
+
debug(f"Generated response text (length {len(result_text.split())} words):\n'{result_text[:400]}...'")
|
94 |
return result_text if result_text else "[Empty Response]"
|
95 |
except Exception as e:
|
96 |
+
debug(f"[!!!] Error during text generation: {e}\nPrompt was: {safe_prompt[:200]}...")
|
97 |
return "[Generation Error]"
|
98 |
|
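The trimming above keeps the tail of the token sequence so the most recent conversational context survives. A minimal, dependency-free sketch of the same idea on plain token-id lists (the `trim_tail` helper is hypothetical, not part of app.py):

```python
def trim_tail(tokens, max_tokens):
    """Keep only the most recent `max_tokens` tokens, mirroring
    trim_prompt_if_needed's tail-keeping strategy."""
    if len(tokens) <= max_tokens:
        return tokens
    return tokens[-max_tokens:]

history = list(range(10))   # stand-in for tokenizer.encode(...) output
trimmed = trim_tail(history, 4)
print(trimmed)              # [6, 7, 8, 9] -- oldest context dropped
```

Trimming from the front rather than the back is the design choice that matters here: for an iterative dialogue, the latest I/¬I statements carry the prompt's meaning.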
def calculate_similarity(text_a, text_b):
+    invalid_texts_markers = ["[Empty Response]", "[Generation Error]", "[Prompt too long", "[Input prompt too long"]
+    if not text_a or not text_a.strip() or any(marker in text_a for marker in invalid_texts_markers) or \
+       not text_b or not text_b.strip() or any(marker in text_b for marker in invalid_texts_markers):
+        debug(f"Similarity calculation skipped for invalid/empty texts: A='{str(text_a)[:50]}...', B='{str(text_b)[:50]}...'")
        return 0.0

    embedding_layer = model.get_input_embeddings()
    with torch.no_grad():
        tokens_a = tokenizer(text_a, return_tensors="pt", truncation=True, max_length=MODEL_CONTEXT_WINDOW).to(device)
        tokens_b = tokenizer(text_b, return_tensors="pt", truncation=True, max_length=MODEL_CONTEXT_WINDOW).to(device)

        if tokens_a.input_ids.size(1) == 0 or tokens_b.input_ids.size(1) == 0:
+            debug(f"Similarity calculation skipped: tokenization resulted in empty input_ids. A='{str(text_a)[:50]}...', B='{str(text_b)[:50]}...'")
            return 0.0

        emb_a = embedding_layer(tokens_a.input_ids).mean(dim=1)
        emb_b = embedding_layer(tokens_b.input_ids).mean(dim=1)

        score = float(cosine_similarity(emb_a.cpu().numpy(), emb_b.cpu().numpy())[0][0])
+        debug(f"Similarity between A='{str(text_a)[:30]}...' and B='{str(text_b)[:30]}...' is {score:.4f}")
        return score
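`calculate_similarity` mean-pools the per-token input embeddings into one vector per text and compares the two vectors with cosine similarity. A dependency-free sketch of that arithmetic on toy 2-D "embeddings" (no model or tokenizer involved; `mean_pool` and `cosine` are illustrative helpers):

```python
import math

def mean_pool(vectors):
    """Average a list of per-token vectors into one vector (the dim=1 mean)."""
    n = len(vectors)
    return [sum(v[i] for v in vectors) / n for i in range(len(vectors[0]))]

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b))
    return dot / norm if norm else 0.0

emb_a = mean_pool([[1.0, 0.0], [0.0, 1.0]])   # -> [0.5, 0.5]
emb_b = mean_pool([[2.0, 2.0]])               # -> [2.0, 2.0]
score = cosine(emb_a, emb_b)                  # ~1.0: same direction, different length
```

Note that mean-pooling static input embeddings is a coarse proxy for semantic similarity; it ignores word order and context, which is an inherent limitation of the ΔS metric used here.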
def generate_similarity_heatmap(texts_list, custom_labels, title="Semantic Similarity Heatmap"):
+    # Filter out any None or problematic entries before processing
+    valid_texts_with_labels = [(text, label) for text, label in zip(texts_list, custom_labels) if text and isinstance(text, str) and not any(marker in text for marker in ["[Empty Response]", "[Generation Error]", "[Prompt too long", "[Input prompt too long"])]
+
+    if len(valid_texts_with_labels) < 2:
+        debug("Not enough valid texts to generate a heatmap.")
+        return "Not enough valid data for heatmap."

+    valid_texts = [item[0] for item in valid_texts_with_labels]
+    valid_labels = [item[1] for item in valid_texts_with_labels]
+    num_valid_texts = len(valid_texts)

+    sim_matrix = np.zeros((num_valid_texts, num_valid_texts))
+    for i in range(num_valid_texts):
+        for j in range(num_valid_texts):
            if i == j:
                sim_matrix[i, j] = 1.0
+            elif i < j:
+                sim = calculate_similarity(valid_texts[i], valid_texts[j])
                sim_matrix[i, j] = sim
+                sim_matrix[j, i] = sim
+            else: # j < i, use already computed value
+                sim_matrix[i,j] = sim_matrix[j,i]

    try:
+        fig_width = max(6, num_valid_texts * 0.8)
+        fig_height = max(5, num_valid_texts * 0.7)
        fig, ax = plt.subplots(figsize=(fig_width, fig_height))

        sns.heatmap(sim_matrix, annot=True, cmap="viridis", fmt=".2f", ax=ax,
+                    xticklabels=valid_labels, yticklabels=valid_labels, annot_kws={"size": 8})
        ax.set_title(title, fontsize=12)
        plt.xticks(rotation=45, ha="right", fontsize=9)
        plt.yticks(rotation=0, fontsize=9)
+        plt.tight_layout(pad=1.5)

        buf = io.BytesIO()
+        plt.savefig(buf, format='png') # Removed bbox_inches='tight' as it can cause issues with tight_layout
        plt.close(fig)
        buf.seek(0)
        img_base64 = base64.b64encode(buf.read()).decode('utf-8')
        return f"<img src='data:image/png;base64,{img_base64}' alt='{title}' style='max-width:100%; height:auto;'/>"
    except Exception as e:
        debug(f"[!!!] Error generating heatmap: {e}")
+        return f"Error generating heatmap: {e}"
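The heatmap fills only the upper triangle of the similarity matrix and mirrors each value, so every text pair is scored exactly once. A small sketch of that symmetric fill, using plain lists and a toy character-overlap similarity as a stand-in for the embedding-based `calculate_similarity` (both names here are illustrative):

```python
def build_sim_matrix(items, sim):
    """Fill a symmetric similarity matrix, computing each pair once (i < j)."""
    n = len(items)
    m = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(n):
            if i == j:
                m[i][j] = 1.0              # a text is maximally similar to itself
            elif i < j:
                s = sim(items[i], items[j])
                m[i][j] = s
                m[j][i] = s                # mirror: similarity is symmetric
    return m

# toy stand-in similarity: Jaccard overlap of character sets, not embeddings
toy_sim = lambda a, b: len(set(a) & set(b)) / len(set(a) | set(b))
mat = build_sim_matrix(["ab", "bc", "ab"], toy_sim)
print(mat[0][2])                 # 1.0 -- identical texts
print(mat[0][1] == mat[1][0])    # True -- symmetric
```

This halves the number of similarity calls, which matters when each call tokenizes and embeds two texts through the model.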
def perform_text_clustering(texts_list, custom_labels, num_clusters=2):
+    valid_texts_with_labels = [(text, label) for text, label in zip(texts_list, custom_labels) if text and isinstance(text, str) and not any(marker in text for marker in ["[Empty Response]", "[Generation Error]", "[Prompt too long", "[Input prompt too long"])]
+
+    if len(valid_texts_with_labels) < num_clusters:
+        debug(f"Not enough valid texts ({len(valid_texts_with_labels)}) for {num_clusters}-means clustering.")
+        return {label: "N/A (Few Samples)" for label in custom_labels}
+
+    valid_texts = [item[0] for item in valid_texts_with_labels]
+    original_indices_map = {i: custom_labels.index(item[1]) for i, item in enumerate(valid_texts_with_labels)}

    embedding_layer = model.get_input_embeddings()
+    embeddings_for_clustering = []

    with torch.no_grad():
+        for text_item in valid_texts:
            tokens = tokenizer(text_item, return_tensors="pt", truncation=True, max_length=MODEL_CONTEXT_WINDOW).to(device)
            if tokens.input_ids.size(1) == 0:
+                debug(f"Skipping text for embedding in clustering due to empty tokenization: '{text_item[:50]}...'")
+                continue # This case should be rare if valid_texts_with_labels already filtered

            emb = embedding_layer(tokens.input_ids).mean(dim=1)
+            embeddings_for_clustering.append(emb.cpu().numpy().squeeze())

+    if not embeddings_for_clustering or len(embeddings_for_clustering) < num_clusters:
+        debug("Not enough valid texts were successfully embedded for clustering.")
+        return {label: "N/A (Embedding Fail)" for label in custom_labels}

+    embeddings_np = np.array(embeddings_for_clustering)
+    cluster_results_map = {label: "N/A" for label in custom_labels}

    try:
+        actual_num_clusters = min(num_clusters, len(embeddings_for_clustering))
+        if actual_num_clusters < 2:
+            debug(f"Adjusted num_clusters to 1 due to only {len(embeddings_for_clustering)} valid sample(s). Assigning all to Cluster 0.")
+            predicted_labels = [0] * len(embeddings_for_clustering)
        else:
            kmeans = KMeans(n_clusters=actual_num_clusters, random_state=42, n_init='auto')
            predicted_labels = kmeans.fit_predict(embeddings_np)

+        for i, original_label_key_idx in original_indices_map.items(): # i is index in valid_texts, original_label_key_idx is index in custom_labels
+            cluster_results_map[custom_labels[original_label_key_idx]] = f"C{predicted_labels[i]}"
+        return cluster_results_map

    except Exception as e:
        debug(f"[!!!] Error during clustering: {e}")
        return {label: "Error" for label in custom_labels}
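The bookkeeping step in `perform_text_clustering` maps cluster ids, which are computed only over the valid (non-errored) texts, back onto the full label list. A pure-Python sketch of that mapping (the `map_clusters` helper is hypothetical; in the app the cluster ids come from KMeans):

```python
def map_clusters(custom_labels, valid_labels, predicted):
    """Valid texts get 'C<id>'; filtered-out ones keep the 'N/A' default."""
    results = {label: "N/A" for label in custom_labels}
    for label, cluster_id in zip(valid_labels, predicted):
        results[label] = f"C{cluster_id}"
    return results

labels = ["I0", "I1", "nI0", "nI1"]
valid = ["I0", "nI0", "nI1"]        # "I1" was filtered out, e.g. "[Empty Response]"
out = map_clusters(labels, valid, [0, 1, 1])
print(out)   # {'I0': 'C0', 'I1': 'N/A', 'nI0': 'C1', 'nI1': 'C1'}
```

Keeping the mapping explicit avoids the off-by-one bugs that arise when filtered lists and the original label order drift apart.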
# --- Main EAL Unfolding Logic ---
def run_eal_dual_unfolding(num_iterations):
+    I_trace_texts, not_I_trace_texts = [None]*num_iterations, [None]*num_iterations # Pre-allocate for easier indexing
+    delta_S_I_values, delta_S_not_I_values, delta_S_cross_values = [None]*num_iterations, [None]*num_iterations, [None]*num_iterations

    debug_log_accumulator.clear()
    ui_log_entries = []

+    initial_seed_thought_for_I = "A reflective process is initiated, considering its own nature."

    for i in range(num_iterations):
        ui_log_entries.append(f"--- Iteration {i} ---")
        debug(f"\n=== Iteration {i} ===")

        # === I-Trace (Self-Reflection) ===
+        basis_for_I_elaboration = initial_seed_thought_for_I if i == 0 else I_trace_texts[i-1]
+        if not basis_for_I_elaboration or any(marker in basis_for_I_elaboration for marker in ["[Empty Response]", "[Generation Error]"]): # Safety for basis
+            basis_for_I_elaboration = "The previous thought was unclear or errored. Please restart reflection."
+            debug(f"[!] Using fallback basis for I-Trace at iter {i} due to problematic previous I-text.")

+        prompt_for_I_trace = f"A thought process is evolving. Its previous stage was: \"{basis_for_I_elaboration}\"\n\nTask: Continue this line of thought. Elaborate on it, explore its implications, or develop it further in a coherent manner."

+        ui_log_entries.append(f"[Prompt for I{i} (approx. {len(prompt_for_I_trace.split())} words)]:\n'{prompt_for_I_trace[:400]}...'")
+        generated_I_text = generate_text_response(prompt_for_I_trace)
+        I_trace_texts[i] = generated_I_text
+        ui_log_entries.append(f"[I{i} Response (approx. {len(generated_I_text.split())} words)]:\n'{generated_I_text[:400]}...'")

        # === ¬I-Trace (Antithesis/Contradiction) ===
+        statement_to_challenge_for_not_I = I_trace_texts[i] # Challenge the I-text from the *current* iteration
+        if not statement_to_challenge_for_not_I or any(marker in statement_to_challenge_for_not_I for marker in ["[Empty Response]", "[Generation Error]"]):
+            statement_to_challenge_for_not_I = "The primary statement was unclear or errored. Please offer a general contrasting idea."
+            debug(f"[!] Using fallback statement to challenge for ¬I-Trace at iter {i} due to problematic current I-text.")

+        prompt_for_not_I_trace = f"Now, consider an alternative perspective to the thought: \"{statement_to_challenge_for_not_I}\"\n\nTask: What are potential contradictions, challenges, or contrasting interpretations to this specific thought? Explore a divergent viewpoint or explain why the thought might be flawed."
+
+        ui_log_entries.append(f"[Prompt for ¬I{i} (approx. {len(prompt_for_not_I_trace.split())} words)]:\n'{prompt_for_not_I_trace[:400]}...'")
        generated_not_I_text = generate_text_response(prompt_for_not_I_trace)
+        not_I_trace_texts[i] = generated_not_I_text
+        ui_log_entries.append(f"[¬I{i} Response (approx. {len(generated_not_I_text.split())} words)]:\n'{generated_not_I_text[:400]}...'")
+        ui_log_entries.append("---") # Separator

        # === ΔS (Similarity) Calculations ===
        if i > 0:
+            delta_S_I_values[i] = calculate_similarity(I_trace_texts[i-1], I_trace_texts[i])
+            delta_S_not_I_values[i] = calculate_similarity(not_I_trace_texts[i-1], not_I_trace_texts[i])
+
+        delta_S_cross_values[i] = calculate_similarity(I_trace_texts[i], not_I_trace_texts[i])

    # --- Post-loop Analysis & Output Formatting ---
    all_generated_texts = I_trace_texts + not_I_trace_texts
    text_labels_for_analysis = [f"I{k}" for k in range(num_iterations)] + \
                               [f"¬I{k}" for k in range(num_iterations)]

    I_out_formatted_lines = []
    for k in range(num_iterations):
+        cluster_label_I = cluster_assignments_map.get(f"I{k}", "N/A")
+        I_out_formatted_lines.append(f"**I{k} [{cluster_label_I}]**:\n{I_trace_texts[k]}")
    I_out_formatted = "\n\n".join(I_out_formatted_lines)

    not_I_out_formatted_lines = []
    for k in range(num_iterations):
+        cluster_label_not_I = cluster_assignments_map.get(f"¬I{k}", "N/A")
+        not_I_out_formatted_lines.append(f"**¬I{k} [{cluster_label_not_I}]**:\n{not_I_trace_texts[k]}")
    not_I_out_formatted = "\n\n".join(not_I_out_formatted_lines)

    delta_S_summary_lines = []
    for k in range(num_iterations):
+        ds_i_str = f"{delta_S_I_values[k]:.4f}" if delta_S_I_values[k] is not None else "N/A (Iter 0)"
+        ds_not_i_str = f"{delta_S_not_I_values[k]:.4f}" if delta_S_not_I_values[k] is not None else "N/A (Iter 0)"
+        ds_cross_str = f"{delta_S_cross_values[k]:.4f}" if delta_S_cross_values[k] is not None else "N/A"
+        delta_S_summary_lines.append(f"Iter {k}: ΔS(I{k-1}↔I{k})={ds_i_str}, ΔS(¬I{k-1}↔¬I{k})={ds_not_i_str}, ΔS_Cross(I{k}↔¬I{k})={ds_cross_str}")
    delta_S_summary_output = "\n".join(delta_S_summary_lines)

+    # Join UI log entries for one of the Textbox outputs.
+    # If it gets too long, Gradio might truncate it or cause performance issues.
+    # Consider if this detailed log should be optional or managed differently for very many iterations.
+    detailed_ui_log_output = "\n".join(ui_log_entries)
    debug_log_output = "\n".join(debug_log_accumulator)

    heatmap_html_output = generate_similarity_heatmap(all_generated_texts,
                                                      custom_labels=text_labels_for_analysis,
                                                      title=f"Similarity Matrix (All Texts - {num_iterations} Iterations)")

+    # Instead of returning detailed_ui_log_output, return the specific trace text boxes.
+    # The debug_log_output will contain the full internal log.
    return I_out_formatted, not_I_out_formatted, delta_S_summary_output, debug_log_output, heatmap_html_output
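The new version pre-allocates the ΔS lists and leaves index 0 of the within-trace deltas as `None`, since ΔS(I) and ΔS(¬I) compare consecutive iterations and are undefined at the first one. A sketch of that indexing with a dummy similarity function (`delta_s_trace` and `same_length` are illustrative, not app.py names):

```python
def delta_s_trace(texts, sim):
    """deltas[i] = sim(texts[i-1], texts[i]); undefined (None) at i == 0."""
    deltas = [None] * len(texts)
    for i in range(1, len(texts)):
        deltas[i] = sim(texts[i - 1], texts[i])
    return deltas

same_length = lambda a, b: 1.0 if len(a) == len(b) else 0.0  # toy stand-in
trace = delta_s_trace(["aa", "bb", "ccc"], same_length)
print(trace)   # [None, 1.0, 0.0]
```

Pre-allocating with `None` keeps iteration index and ΔS index aligned, which the summary formatter relies on when it prints "N/A (Iter 0)".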
# --- Gradio Interface Definition ---
eal_interface = gr.Interface(
    fn=run_eal_dual_unfolding,
+    inputs=gr.Slider(minimum=1, maximum=5, value=3, step=1, label="Number of EAL Iterations"), # Min 1 iter
    outputs=[
        gr.Textbox(label="I-Trace (Self-Reflection with Cluster)", lines=12, interactive=False),
        gr.Textbox(label="¬I-Trace (Antithesis with Cluster)", lines=12, interactive=False),
        gr.Textbox(label="ΔS Similarity Trace Summary", lines=7, interactive=False),
+        gr.Textbox(label="Detailed Debug Log (Prompts, Responses, Errors)", lines=15, interactive=False), # Increased lines
+        gr.HTML(label="Overall Semantic Similarity Heatmap (I-Trace & ¬I-Trace Texts)")
    ],
+    title="EAL LLM Identity Analyzer: Self-Reflection vs. Antithesis (Open-Ended)",
    description=(
        "This application explores emergent identity in a Large Language Model (LLM) using Entropic Attractor Logic (EAL) inspired principles. "
+        "It runs two parallel conversational traces with more open-ended prompts:\n"
+        "1. **I-Trace:** The model elaborates on its evolving self-concept, seeded by an initial neutral thought.\n"
+        "2. **¬I-Trace:** The model attempts to explore alternative perspectives or challenges to the latest statement from the I-Trace.\n\n"
+        "**ΔS Values:** Cosine similarity. ΔS(I) = sim(I_k-1, I_k). ΔS(¬I) = sim(¬I_k-1, ¬I_k). ΔS_Cross = sim(I_k, ¬I_k).\n"
+        "**Clustering [Cx]:** Assigns each generated text to one of two semantic clusters.\n"
+        "**Heatmap:** Visualizes pair-wise similarity across all generated texts."
    ),
+    allow_flagging='never'
)

if __name__ == "__main__":