IDEAS

Why General Artificial Intelligence Is Impossible in Principle
This paper argues that General Artificial Intelligence (GAI), understood as a system capable of universally reliable generalization across arbitrary domains, tasks, and normative regimes, is unattainable in principle under the prevailing paradigm of predictive learning. We first explain in detail how contemporary generative models, particularly Transformer-based architectures, acquire what is operationally referred to as ``meaning''---including inter-concept relations, structural dependencies such as procedural order and constraints, and functional roles with respect to goals---as geometric structure in high-dimensional vector spaces through next-token prediction and self-attention mechanisms. We then show that, because learning in these models is fundamentally predictive and optimized with respect to finite data distributions, generalization is necessarily distribution-dependent. As a consequence, both under-generalization and over-generalization arise not as incidental failures but as structurally inevitable outcomes of the learning objective itself. From this analysis, we conclude that scaling predictive generative models alone cannot yield a form of intelligence with universal, distribution-independent generalization guarantees, and that the aspiration toward GAI must therefore confront limitations of principle rather than merely engineering challenges.
( continued )
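A minimal sketch of the mechanism the abstract points to, not the paper's analysis: single-head self-attention implemented in NumPy, showing how each token embedding is re-expressed as a weighted mixture of all token embeddings, so that inter-token relations become geometry in the output space. The array shapes, random weights, and names are illustrative assumptions.

\begin{verbatim}
import numpy as np

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(X, Wq, Wk, Wv):
    # Each token embedding is re-expressed as a weighted mixture of all
    # token embeddings, so inter-token relations become geometry in the
    # output space.
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    scores = Q @ K.T / np.sqrt(K.shape[-1])   # pairwise relational weights
    A = softmax(scores, axis=-1)              # attention distribution per token
    return A @ V                              # contextualized embeddings

rng = np.random.default_rng(0)
n, d = 5, 8                                   # toy sequence length and width
X = rng.normal(size=(n, d))                   # stand-in token embeddings
Wq, Wk, Wv = (rng.normal(size=(d, d)) for _ in range(3))
print(self_attention(X, Wq, Wk, Wv).shape)    # (5, 8)
\end{verbatim}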
Operational Meaning and Affordance-Centered Semantics
for Autonomous Learning Robots
Recent advances in artificial intelligence have demonstrated that large-scale, outcome-driven learning systems can exhibit sophisticated behavior without explicit semantic grounding. However, when such approaches are transferred from symbolic domains to embodied robotic systems, fundamental conceptual and technical limitations emerge. This report argues that autonomous learning robots require a notion of \emph{operational meaning} grounded in action, interaction, and viability constraints. We review current trends in autonomous robotics, clarify the essential differences between generative AI and robotic AI, and show that affordance learning provides a uniquely concrete bridge between theory and practice. Building on this foundation, we formalize operational meaning using operational semantics and refine it into an affordance-centered semantic framework. A detailed comparison between general operational semantics and affordance-based semantics reveals why the latter is indispensable for safe, adaptive, and meaningful autonomous robot learning. ( continued )
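One way to make the affordance-centered reading of operational meaning concrete is sketched below, under assumptions not taken from the report: an affordance pairs a physical precondition with a viability constraint, and the ``operational meaning'' of an object for a given robot is simply the set of affordances that currently hold. The names Affordance, gripper_width, and human_in_workspace are hypothetical.

\begin{verbatim}
from dataclasses import dataclass

@dataclass
class Affordance:
    name: str
    precondition: callable   # (robot, obj, context) -> bool, physical feasibility
    viability: callable      # context -> bool, viability/safety constraint

def available_actions(affordances, robot, obj, context):
    # Operational meaning of an object for this robot: the set of
    # affordances whose preconditions and viability constraints both hold.
    return [a.name for a in affordances
            if a.precondition(robot, obj, context) and a.viability(context)]

grasp = Affordance(
    "grasp",
    precondition=lambda r, o, c: o["width"] <= r["gripper_width"],
    viability=lambda c: not c["human_in_workspace"],
)
print(available_actions(
    [grasp],
    robot={"gripper_width": 0.08},
    obj={"width": 0.05},
    context={"human_in_workspace": False},
))  # -> ['grasp']
\end{verbatim}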

Consciousness as a Computational Constraint Space
Consciousness is frequently discussed in computational terms, yet there is little agreement on what computation is assumed to explain. Debates often oscillate between attempts to compute consciousness itself and claims that consciousness resists any computational account.
This paper argues that this impasse arises from a misplaced framing. Rather than asking whether consciousness can be computed, we propose reexamining what computation operates on, from which perspective, and under which conditions it becomes relevant to conscious experience.

( continued )

Logical Embedding and Deduction as Stability
This paper proposes a reframing of deduction as a stabilized regime of cognitive dynamics, rather than as a rule-based inference operation. We argue that what appears as deductive behavior emerges when inductive learning, shaped by memory and constraint, converges to a consolidated state in which judgments become reproducible and invariant under further experience. We refer to this perspective as \emph{deduction as stability}. To make this view computationally explicit, we introduce \emph{logical embedding} as a lightweight representational substrate in which concepts, situations, and actions are represented as vectors in a shared semantic space. Approximate logical relations are realized through simple geometric operations such as weighted composition, similarity, and thresholding, without assuming explicit symbolic rules or full Bayesian inference. Within this framework, induction corresponds to the continuous reshaping of semantic geometry, while deduction emerges when such reshaping is gated by stability and constraint. We present a layered architecture comprising induction, memory, constraint, policy, and stability layers, and formalize their interactions. This architecture clarifies how rule-like behavior can arise from coordinated stabilization across layers, rather than from explicit rule execution. Finally, we examine the intrinsic limitations of logical embedding, including the absence of formal learning guarantees and genuine novelty generation, and argue that these limitations can be mitigated through modular extensions that preserve the semantic core.
( continued )
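To make the abstract's geometric operations concrete, the following sketch uses only weighted composition, similarity, and thresholding, with a threshold crossing standing in for a stabilized, rule-like judgment. The random vectors, the fixed threshold, and the toy concepts bird, can_fly, and penguin are illustrative assumptions, not taken from the paper.

\begin{verbatim}
import numpy as np

def normalize(v):
    return v / np.linalg.norm(v)

def compose(vectors, weights):
    # Weighted composition: build a composite vector from concept vectors.
    return normalize(sum(w * v for w, v in zip(weights, vectors)))

def similarity(a, b):
    return float(np.dot(normalize(a), normalize(b)))

def holds(premise, conclusion, threshold=0.5):
    # Approximate, rule-like judgment: it "holds" once similarity clears a
    # fixed threshold, i.e. the judgment is reproducible under further noise.
    return similarity(premise, conclusion) >= threshold

rng = np.random.default_rng(1)
d = 256
bird, can_fly, penguin = (normalize(rng.normal(size=d)) for _ in range(3))

# Induction reshapes the geometry: repeated experience pulls 'bird' toward
# 'can_fly'; no symbolic rule is stored anywhere.
bird = compose([bird, can_fly], [0.5, 0.5])

print(holds(bird, can_fly))     # True: consolidated, reproducible judgment
print(holds(penguin, can_fly))  # False: no such consolidation occurred
\end{verbatim}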
Meaning as Syntax: Logic from Memory Dynamics
Human cognition seamlessly integrates memory recall, reasoning, and decision-making, yet existing computational formalisms---symbolic logic, probabilistic models, and Bayesian inference---struggle to explain this integration without introducing rigid representational structures or external semantic interpreters. This paper proposes an alternative perspective grounded in a single principle: \emph{meaning is embedded in syntax}. We assume that long-term memory is flat and non-structured, storing neither explicit relations nor semantic content. Structure and meaning arise only through reconstructive processes at recall time. Using Vector Symbolic Architectures as a subsymbolic substrate, we show how binding, aggregation, and similarity operations constitute approximate symbolic syntax whose dynamics generate meaning directly. Within this framework, induction corresponds to geometric aggregation, deduction emerges as a transition to representational stability, and decision-making arises as a structurally constrained continuation of the same syntactic processes. Logical behavior is not imposed on memory representations but crystallizes from stabilized reconstructive dynamics. This view reframes logical embedding as an emergent phenomenon rather than a representational technique. Memory, inference, and action are unified as phases of constrained reconstruction, suggesting that logic is not a primitive of cognition but a dynamic regularity arising from memory itself.
( continued )
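The claim that binding, aggregation, and similarity already constitute an approximate syntax can be illustrated with a standard bipolar VSA sketch; the role and filler names AGENT, ACTION, dog, and bark are invented for illustration. The memory trace is a flat vector with no stored relational structure, and the role-filler relation is reconstructed only at recall time by unbinding.

\begin{verbatim}
import numpy as np

rng = np.random.default_rng(2)
D = 4096                       # dimensionality of the bipolar vectors

def vec():
    return rng.choice([-1.0, 1.0], size=D)

def bind(a, b):                # role-filler binding (elementwise, self-inverse)
    return a * b

def bundle(*vs):               # aggregation: superpose items into one trace
    return np.sign(np.sum(vs, axis=0))

def sim(a, b):                 # similarity used by reconstructive recall
    return float(a @ b) / D

# Flat, non-structured memory: a single trace, no relations stored explicitly.
AGENT, ACTION, dog, bark = vec(), vec(), vec(), vec()
trace = bundle(bind(AGENT, dog), bind(ACTION, bark))

# Meaning arises only at recall time: unbinding the ACTION role approximately
# reconstructs its filler.
recalled = bind(trace, ACTION)
print(sim(recalled, bark) > 0.3, sim(recalled, dog) > 0.3)  # True False
\end{verbatim}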
Memory as a Computational Resource: Reconstructing Thought in the Age of Generative AI
The rise of generative AI has renewed fundamental questions about the nature of thinking. As systems increasingly produce fluent and contextually appropriate outputs, intelligence is often assessed in terms of performance alone. This paper argues that such an output-centered perspective obscures the conditions under which thought, judgment, and responsibility become possible. The central claim is that memory should be understood not as a passive store of information but as a computational resource. Human thought is a temporally extended process in which commitments are retained, revised, and made accountable over time. Memory functions as a computational workspace that enables non-monotonic reasoning, justificatory continuity, and ethical responsibility. Thinking, on this view, is not defined by the production of outputs but by the processes through which those outputs are formed and transformed. Against this account, the paper analyzes generative AI as a form of computation oriented toward generation rather than deliberation. While such systems perform powerful statistical operations and can produce outputs resembling the products of thought, they lack memory as an active, revisable computational resource. This structural difference explains both their effectiveness and their limits. The paper further diagnoses the contemporary tendency to conflate generation with thinking as an epistemological failure rooted in output-centered evaluation and presentism. In response, it proposes a reconstruction of thought as a process grounded in memory practices that preserve temporal depth, revision, and accountability. Reconstructing memory as a computational resource is therefore not a technical proposal but a philosophical necessity. Only by moving beyond output as the primary criterion of intelligence can we preserve a conception of thought adequate to reasoning, judgment, and responsibility in the age of generative systems.

( continued )

Cognitive Retrieval-Augmented Generation as a Prerequisite for Viable Action in AI Robots

Recent advances in AI have renewed interest in the role of memory, context, and interaction history in intelligent behavior. While Retrieval-Augmented Generation (RAG) has primarily been discussed as a method for improving factual accuracy and grounding language models in external knowledge, this paper argues that a deeper form of RAG---here termed \emph{Cognitive RAG}---is a necessary structural component for enabling \emph{viable action} in AI robots.

We propose that viable action should be understood not as optimal action under a fixed objective, but as action that remains sustainable, acceptable, and revisable within ongoing human--robot relationships. Such action cannot be derived solely from sensor data, static rules, or probabilistic prediction. Instead, it requires a memory architecture capable of recalling and abstracting interaction histories, including affective and evaluative dimensions.

This paper first develops a general account of Cognitive RAG as a memory-centered computational framework distinct from both symbolic reasoning and generative probabilistic models. We then argue that Cognitive RAG plays an indispensable role in maintaining viable action by shaping the constraint space within which robotic behavior is selected. Finally, we show how safety, responsibility, and explainability emerge naturally from this framework, not as externally imposed requirements but as consequences of memory-mediated action selection.
( continued )
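A minimal sketch of memory-mediated action selection in the spirit of the argument, not the paper's architecture: interaction episodes (context vector, action, evaluative outcome) are recalled by similarity to the current context, and the recalled evaluations shape the constraint space by filtering candidate actions before any further selection. The field names, the retrieval rule, and the acceptability score are assumptions.

\begin{verbatim}
import numpy as np

def normalize(v):
    return v / np.linalg.norm(v)

def retrieve(memory, query, k=2):
    # Recall the k interaction episodes most similar to the current context.
    return sorted(memory,
                  key=lambda m: -float(normalize(m["ctx"]) @ normalize(query)))[:k]

def viable_actions(candidates, memory, query, threshold=0.0):
    # Memory-mediated action selection: recalled evaluations shape the
    # constraint space before any optimization over candidate actions.
    episodes = retrieve(memory, query)
    def acceptability(action):
        relevant = [e["evaluation"] for e in episodes if e["action"] == action]
        return float(np.mean(relevant)) if relevant else 0.0
    return [a for a in candidates if acceptability(a) >= threshold]

rng = np.random.default_rng(3)
ctx = rng.normal(size=16)
memory = [
    {"ctx": ctx + 0.1 * rng.normal(size=16), "action": "approach", "evaluation": -0.8},
    {"ctx": ctx + 0.1 * rng.normal(size=16), "action": "wait",     "evaluation": +0.6},
    {"ctx": rng.normal(size=16),             "action": "approach", "evaluation": +0.9},
]
print(viable_actions(["approach", "wait"], memory, ctx))  # -> ['wait']
\end{verbatim}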

Viable Action without State Transitions: A Social-Affordance-Centered Formal Model with Memory-Based Semantics
Classical models of robotic action rely on explicit internal states and their transitions. However, such formulations face fundamental limitations in human-interactive environments, where action viability depends not only on physical feasibility but also on social acceptability, history, and contextual memory. This paper proposes a formal model of viable robotic action centered on \emph{physical affordance} and \emph{social affordance}, while explicitly rejecting the notion of state as an entity that undergoes transitions. Instead, we redefine state as a reconstructed bundle of conditions, memories, and constraints that enable viable action. To operationalize this view, we introduce a memory-based semantic operator, referred to as CognitiveRAG, which retrieves context from interaction history and biases affordance evaluation via affection-related parameters. The resulting framework provides a mathematically grounded yet implementation-oriented alternative to state-transition-based robotics, suitable for human-robot interaction under social and emotional constraints.
( continued )
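A small illustration of how a memory operator like CognitiveRAG could bias affordance evaluation; the candidate actions, thresholds, and affection scale are hypothetical, and the memory operator is stubbed as a simple average of past affective ratings. Nothing here is a state transition: viability is recomputed each time from the current affordance scores and the recalled history, matching the ``reconstructed bundle'' reading of state.

\begin{verbatim}
from dataclasses import dataclass

@dataclass
class Candidate:
    action: str
    physical: float            # physical affordance score in [0, 1]
    social: float              # baseline social acceptability in [0, 1]

def cognitive_rag(history, action):
    # Stub memory operator: mean of past affective ratings for this action,
    # interpreted as an affection-related bias in [-1, 1].
    ratings = [r for a, r in history if a == action]
    return sum(ratings) / len(ratings) if ratings else 0.0

def viable(c, history, tau_p=0.5, tau_s=0.5, gain=0.3):
    # No state transition: viability is recomputed each time from the current
    # affordance scores and the recalled interaction history.
    social = min(1.0, max(0.0, c.social + gain * cognitive_rag(history, c.action)))
    return c.physical >= tau_p and social >= tau_s

history = [("hand_over", +1.0), ("hand_over", +0.5), ("approach_fast", -1.0)]
for c in [Candidate("hand_over", 0.9, 0.4), Candidate("approach_fast", 0.9, 0.6)]:
    print(c.action, viable(c, history))   # hand_over True / approach_fast False
\end{verbatim}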
A Minimal Model of ``the Computation of Mind''