Survey of reasoning with foundation models
This repository is a curated list of foundation models and techniques for reasoning, aimed at AI researchers and engineers. It organizes state-of-the-art models and methodologies across language, vision, and multimodal domains, detailing their application to reasoning tasks such as commonsense, mathematical, logical, and agent reasoning. The project aims to provide a comprehensive overview of, and facilitate contributions to, the rapidly evolving field of reasoning with foundation models.
How It Works
The repository categorizes foundation models into Language Foundation Models (LFMs), Vision Foundation Models (VFMs), and Multimodal Foundation Models (MFMs). It further breaks down reasoning tasks by type (e.g., commonsense, mathematical, logical) and lists relevant techniques such as pre-training, fine-tuning, alignment training, Mixture of Experts (MoE), in-context learning, and autonomous agents. Each entry typically includes links to papers, code repositories, and project pages, offering a structured entry point into specific research areas.
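The taxonomy described above (model class, task type, techniques, plus paper/code/project links per entry) can be pictured as a small data structure. The following is a minimal illustrative sketch in Python; the class and field names are hypothetical and are not defined by the repository itself.

```python
from dataclasses import dataclass, field

@dataclass
class SurveyEntry:
    """Hypothetical model of one curated entry; names are illustrative only."""
    title: str
    model_class: str                 # "LFM", "VFM", or "MFM"
    task: str                        # e.g. "commonsense", "mathematical", "logical"
    techniques: list[str] = field(default_factory=list)  # e.g. ["fine-tuning", "MoE"]
    paper_url: str = ""
    code_url: str = ""
    project_url: str = ""

# A toy catalog with a single placeholder entry.
entries = [
    SurveyEntry(
        title="Example math-reasoning paper",
        model_class="LFM",
        task="mathematical",
        techniques=["in-context learning"],
        paper_url="https://example.org/paper",
    ),
]

# The list's structure supports exactly this kind of faceted lookup:
# filter by model class and reasoning task.
math_lfms = [e for e in entries
             if e.model_class == "LFM" and e.task == "mathematical"]
```

Filtering by `model_class` and `task` mirrors how a reader navigates the list: first to a model category, then to a reasoning task, then to the techniques and links within it.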
Quick Start & Requirements
This is a curated list of research papers and code repositories, not a runnable software package. No installation or execution is required.
Maintenance & Community
This is a community-driven project. Contributions are welcomed via pull requests. The repository is based on the paper "A Survey of Reasoning with Foundation Models: Concepts, Methodologies, and Outlook."
Licensing & Compatibility
The repository itself is a curated list rather than software and does not declare a license of its own. Each linked resource carries its own license, which should be checked individually.
Limitations & Caveats
As a survey, this repository provides no executable code or benchmarks of its own. Its value lies in the breadth and organization of existing research; users must consult the linked resources for practical implementation details.