AI & ML interests

None defined yet.

Recent Activity

Qwen's activity

KnutJaegersberg posted an update 3 days ago

A Brief Survey of Associations Between Meta-Learning and General AI

The paper titled "A Brief Survey of Associations Between Meta-Learning and General AI" explores how meta-learning techniques can contribute to the development of Artificial General Intelligence (AGI). Here are the key points summarized:

1. General AI (AGI) and Meta-Learning:
- AGI aims to develop algorithms that can handle a wide variety of tasks, similar to human intelligence. Current AI systems excel at specific tasks but struggle with generalization to unseen tasks.
- Meta-learning or "learning to learn" improves model adaptation and generalization, allowing AI systems to tackle new tasks efficiently using prior experiences.

2. Neural Network Design in Meta-Learning:
- Techniques like Recurrent Neural Networks (RNNs) and Long Short-Term Memory (LSTM) networks enable self-improvement and adaptability for deep models, supporting generalization across tasks.
- Highway networks and ResNet-style models use shortcut (skip) connections so gradients propagate efficiently through very deep models, which can then serve as components in meta-learning frameworks (a minimal sketch follows the list).

3. Coevolution:
- Coevolution involves the mutual evolution of multiple components, such as learners or task-solvers, to improve overall performance.
- Coevolution between learners enhances collaboration and competition within AI systems, while coevolution between tasks and solvers (e.g., POWERPLAY and AI-GA frameworks) pushes solvers to adapt to increasingly complex tasks.

4. Curiosity in Meta-Learning:
- Curiosity-based exploration encourages AI systems to discover new, diverse features of the environment, avoiding local optima.
- Curiosity-based objectives can be combined with performance-based objectives to ensure efficient exploration and adaptation in complex tasks (see the toy example after the list).

5. Forgetting Mechanisms:
- Forgetting is crucial to avoid memory overload in AI systems.
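
To make the shortcut idea in point 2 concrete, here is a minimal PyTorch sketch (my illustration, not code from the paper or survey); the layer sizes, gating choice, and activations are arbitrary assumptions:

```python
import torch
import torch.nn as nn

class HighwayBlock(nn.Module):
    """Highway-style block: output mixes a transformed input with the raw input."""
    def __init__(self, dim: int):
        super().__init__()
        self.transform = nn.Linear(dim, dim)  # candidate transformation H(x)
        self.gate = nn.Linear(dim, dim)       # transform gate T(x)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        h = torch.relu(self.transform(x))
        t = torch.sigmoid(self.gate(x))
        # Highway mixing: y = T(x) * H(x) + (1 - T(x)) * x
        return t * h + (1.0 - t) * x

class ResidualBlock(nn.Module):
    """ResNet-style block: the shortcut simply adds the input back."""
    def __init__(self, dim: int):
        super().__init__()
        self.transform = nn.Linear(dim, dim)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Shortcut: y = x + F(x), which keeps gradients flowing through deep stacks.
        return x + torch.relu(self.transform(x))
```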
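
And a toy illustration of point 4, combining a performance-based (extrinsic) reward with a simple count-based curiosity bonus; the weighting `beta` and the novelty measure are assumptions for this sketch, not the survey's formulation:

```python
from collections import defaultdict
import math

class CuriosityBonus:
    """Toy count-based novelty bonus: rarely visited states earn extra reward,
    nudging the agent to explore instead of settling into a local optimum."""

    def __init__(self, beta: float = 0.1):
        self.counts = defaultdict(int)  # visit counts per state
        self.beta = beta                # weight of the curiosity term

    def __call__(self, state, extrinsic_reward: float) -> float:
        self.counts[state] += 1
        intrinsic = self.beta / math.sqrt(self.counts[state])
        # Curiosity-based and performance-based objectives are simply summed.
        return extrinsic_reward + intrinsic
```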

https://arxiv.org/abs/2101.04283
KnutJaegersberg posted an update 5 days ago

Artificial general intelligence through recursive data compression and grounded reasoning: a position paper


This paper proposes a system to achieve AGI through general data compression and grounded reasoning.

General Data Compression involves creating a flexible algorithm that adapts to the input data, simplifying and compressing it recursively while identifying simple, orthogonal features to avoid redundancy. The algorithm measures progress toward AGI by solving problems of increasing complexity, and it expands its search space according to the data itself. Compression is applied not only to the data but also to the model's parameters, and sequences are segmented based on their compressibility.
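
As a rough illustration of the last idea (segmenting sequences by compressibility), the sketch below uses zlib as a stand-in compressor and splits a byte stream where the local compression ratio changes sharply; the window size and threshold are arbitrary, and this is not the paper's algorithm:

```python
import zlib

def compression_ratio(data: bytes) -> float:
    """Compressed size over raw size; smaller means more regular data."""
    if not data:
        return 1.0
    return len(zlib.compress(data, 9)) / len(data)

def segment_by_compressibility(data: bytes, window: int = 64, threshold: float = 0.15):
    """Split a byte stream wherever local compressibility shifts abruptly."""
    boundaries = [0]
    prev = compression_ratio(data[:window])
    for start in range(window, len(data), window):
        cur = compression_ratio(data[start:start + window])
        if abs(cur - prev) > threshold:
            boundaries.append(start)
        prev = cur
    return [data[a:b] for a, b in zip(boundaries, boundaries[1:] + [len(data)])]
```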

Grounded Reasoning refers to forming representations at various granularities, which is crucial for commonsense reasoning and AGI. The system simulates the real world as its internal model, switching between representations and maximizing resourcefulness. Key ideas include using the world as its own model for reasoning, and choosing actions that maximize entropy in order to test hypotheses.
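
One toy reading of "actions that maximize entropy to test hypotheses" (my interpretation, not the paper's procedure): pick the action whose predicted outcome distribution the agent is most uncertain about, i.e. the most informative experiment. The action names and distributions below are made up for the example:

```python
import math

def entropy(probs) -> float:
    """Shannon entropy of a discrete distribution (in nats)."""
    return -sum(p * math.log(p) for p in probs if p > 0)

def most_informative_action(action_outcomes: dict):
    """Choose the action with the highest predicted outcome entropy.

    `action_outcomes` maps an action name to the agent's predicted
    probability distribution over that action's outcomes."""
    return max(action_outcomes, key=lambda a: entropy(action_outcomes[a]))

# The agent is most uncertain about what 'push_lever' will do, so that is
# the hypothesis-testing action it would choose.
print(most_informative_action({
    "push_lever": [0.5, 0.5],
    "do_nothing": [0.99, 0.01],
}))
```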

The paper emphasizes simplicity, data-dependent bias, recursion, orthogonality, resourcefulness, and grounding in real-world contexts as fundamental principles in building an AGI system.

https://arxiv.org/abs/1506.04366
  • 1 reply

remove base_model (#1), opened 15 days ago by victor
Tonic posted an update 8 days ago

🙋🏻‍♂️ hey there folks,

Goedel's Theorem Prover is now being demoed on Hugging Face: Tonic/Math

Give it a try!
KnutJaegersberg posted an update 12 days ago

Anthropomorphic reasoning about neuromorphic AGI safety

Summary of "Anthropomorphic Reasoning About Neuromorphic AGI Safety"
This paper explores safety strategies for neuromorphic artificial general intelligence (AGI), defined as systems designed by reverse-engineering essential computations of the human brain. Key arguments and proposals include:

1. Anthropomorphic Reasoning Validity:
- Neuromorphic AGI’s design and assessment rely on human cognition models, making anthropomorphic reasoning (using human-like traits) critical for safety analysis. Comparisons to human behavior and neural mechanisms provide insights into AGI behavior and risks.

2. Countering Safety Criticisms:
- The authors challenge claims that neuromorphic AGI is inherently more dangerous than other AGI approaches. They argue all AGI systems face intractable verification challenges (e.g., real-world unpredictability, incomputable action validation). Neuromorphic AGI may even offer safety advantages by enabling comparisons to human cognitive processes.

3. Motivational Architecture:
- Basic drives (e.g., curiosity, social interaction) are essential for cognitive development and safety. These pre-conceptual, hardwired drives (analogous to human hunger or affiliation) shape learning and behavior. The orthogonality thesis (intelligence and goals as independent) is contested, as neuromorphic AGI’s drives likely intertwine with its cognitive architecture.

4. Safety Strategies:
- **Social Drives**: Embedding drives like caregiving, affiliation, and cooperation ensures AGI develops prosocial values through human interaction.
- **Bounded Reward Systems**: Human-like satiation mechanisms (e.g., diminishing rewards after fulfillment) prevent extreme behaviors such as paperclip maximization (a toy sketch follows the list).
- **Developmental Environment**: Exposure to diverse, positive human interactions and moral examples fosters
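
A toy sketch of the bounded-reward idea (my illustration, not the paper's mechanism): reward saturates as a drive is fulfilled, so there is no incentive to maximize a single quantity without limit. The `cap` and `rate` parameters are arbitrary:

```python
import math

def satiating_reward(amount_consumed: float, cap: float = 1.0, rate: float = 3.0) -> float:
    """Bounded, saturating reward: the marginal value of each additional unit
    shrinks as the drive is satisfied, and total reward never exceeds `cap`."""
    return cap * (1.0 - math.exp(-rate * amount_consumed))

# The tenth unit adds far less reward than the first, so 'more of the same'
# stops being attractive once the drive is met.
print(satiating_reward(1.0) - satiating_reward(0.9))  # small marginal gain
print(satiating_reward(0.1) - satiating_reward(0.0))  # larger marginal gain
```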

https://ccnlab.org/papers/JilkHerdReadEtAl17.pdf

Add link to paper (#3), opened 14 days ago by nielsr