🧪 MeTHanol: Modularized Thinking Language Models with Intermediate Layer Thinking, Decoding and Bootstrapping Reasoning

Anonymous

Abstract

Current research efforts focus on enhancing the thinking and reasoning capabilities of large language models (LLMs) through prompting, data-driven emergence, and inference-time computation. In this study, we stimulate a language model's thinking and cognitive abilities from a modular perspective that mimics the human brain's architecture. We select a specific intermediate attention layer and equip it with newly implemented language heads. We conduct dual-layer fine-tuning on annotated (query, thought, response) samples and show that the intermediate layer can also learn to decode fluent and reasonable language tokens. A two-pass inference mechanism is designed to generate thoughts first and then formal responses. The entire framework, called the modularized thinking language model (MeTHanol), enhances an LLM's cognitive behaviors, as indicated by Theory of Mind (ToM) and Vignette-based experiments. Case studies also show that MeTHanol can plan, self-reflect, and generate human-like thoughts and answers, even on unseen and open-domain tasks. MeTHanol can also adapt to a personalized prompt and behave as the specified character. Our study holds promise for significant cognitive gains from a modular perspective.
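The two-pass mechanism described above can be illustrated with a toy sketch: a stack of layers with an extra language head attached at an intermediate layer, where pass 1 decodes "thought" tokens from the intermediate hidden state and pass 2 decodes the formal response from the final layer, conditioned on the query plus the generated thought. This is a minimal illustration with made-up shapes and update rules (`forward`, `decode`, `two_pass_generate`, and all dimensions are hypothetical), not the paper's actual implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

VOCAB, DIM, N_LAYERS, THINK_LAYER = 16, 8, 4, 2

# Toy stand-ins for transformer blocks: one weight matrix per layer.
layers = [rng.standard_normal((DIM, DIM)) * 0.1 for _ in range(N_LAYERS)]
# Two language heads: one on the intermediate "thinking" layer, one on top.
head_think = rng.standard_normal((DIM, VOCAB)) * 0.1
head_final = rng.standard_normal((DIM, VOCAB)) * 0.1
embed = rng.standard_normal((VOCAB, DIM)) * 0.1

def forward(token_ids, up_to):
    """Run the toy stack on mean-pooled embeddings up to layer `up_to`."""
    h = embed[token_ids].mean(axis=0)
    for W in layers[:up_to]:
        h = np.tanh(h @ W)
    return h

def decode(hidden, head, n_tokens):
    """Greedy decode: repeatedly project the hidden state through a head."""
    out, h = [], hidden
    for _ in range(n_tokens):
        tok = int(np.argmax(h @ head))
        out.append(tok)
        h = np.tanh(h @ layers[-1]) + embed[tok]  # crude state update
    return out

def two_pass_generate(query_ids, n_thought=4, n_resp=4):
    # Pass 1: decode thought tokens from the intermediate layer's head.
    h_mid = forward(query_ids, THINK_LAYER)
    thought = decode(h_mid, head_think, n_thought)
    # Pass 2: condition on query + thought; decode the formal response
    # from the final layer's standard head.
    h_top = forward(query_ids + thought, N_LAYERS)
    response = decode(h_top, head_final, n_resp)
    return thought, response

thought, response = two_pass_generate([1, 2, 3])
```

In the paper's setting both heads are trained jointly (dual-layer fine-tuning), so the thought stream becomes fluent language rather than arbitrary tokens; the sketch only shows the data flow of the two passes.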

Overview

Image 1

An overview of MeTHanol, with modular correspondence to the human brain's architecture.

Framework

Image 2

Comparison of the MeTHanol framework to standard LLM fine-tuning.

Training Result

Image 4

Training loss curves and performance on selected cases at different training steps.

Benchmark

Image 6

Fine-tuned results on Sally-Anne false-belief experiments. All values are percentages.

Image 5

Zero-shot results on Vignette-based experiments. All values are percentages.