2026-02-25
Recent advances in artificial intelligence have demonstrated that large-scale, outcome-driven learning systems can exhibit sophisticated behavior without explicit semantic grounding. However, when such approaches are transferred from symbolic domains to embodied robotic systems, fundamental conceptual and technical limitations emerge. This report argues that autonomous learning robots require a notion of operational meaning grounded in action, interaction, and viability constraints. We review current trends in autonomous robotics, clarify the essential differences between generative AI and robotic AI, and show that affordance learning provides a uniquely concrete bridge between theory and practice. Building on this foundation, we formalize operational meaning using operational semantics and refine it into an affordance-centered semantic framework. A detailed comparison between general operational semantics and affordance-based semantics reveals why the latter is indispensable for safe, adaptive, and meaningful autonomous robot learning.
Autonomous robots are undergoing a qualitative transition from pre-programmed machines toward systems capable of open-ended adaptation. Early robots operated under tightly specified conditions with fixed control logic. In contrast, contemporary autonomous robots are expected to function in partially unknown, dynamically changing environments that include humans, other agents, and evolving norms.
Several technical trends characterize this shift. First, learning is increasingly self-supervised or interaction-driven, reducing reliance on manually curated labels. Second, robots integrate multiple modalities—vision, touch, proprioception, force, and language—into unified perception–action loops. Third, autonomy is no longer defined by isolated task execution but by sustained operation over long time horizons without external resets.
These demands have exposed the limits of classical control and planning. As a result, learning-based approaches such as reinforcement learning, imitation learning, developmental robotics, active inference, and affordance learning have gained prominence. Among these, affordance learning is distinctive in that it explicitly treats the environment not as a state space to be optimized over, but as a structured field of action possibilities relative to the agent’s body and capabilities.
The success of generative AI systems—most notably large language models—has reshaped expectations about what learning systems can achieve. These models demonstrate fluent language use, problem solving, and apparent reasoning without explicit semantic representations or grounded understanding. Their success invites the question: why not apply the same principles directly to robotics?
Generative AI operates in domains characterized by several forgiving properties. Interaction occurs entirely within symbolic or virtual spaces, where errors are cheap, reversible, and non-destructive. Evaluation is retrospective: outputs are judged after they are produced, and failures do not alter the system’s ability to continue operating. Learning is driven by dense statistical regularities, such as next-token prediction, that provide abundant and stable feedback signals.
Crucially, generative AI systems do not need to anticipate consequences beyond the symbolic domain. They do not break objects, injure humans, or destabilize environments. As a result, behaviorist, outcome-driven learning suffices: if the output matches human expectations, the system is considered successful.
Robots, by contrast, operate in the physical and social world. Actions have irreversible consequences: objects can be damaged, humans can be harmed, trust can be lost. Errors accumulate rather than vanish. This introduces a fundamental asymmetry between symbolic intelligence and embodied intelligence.
In robotics, learning cannot be purely retrospective. Actions must be evaluated prospectively, before execution, because some failures are unacceptable even once. Moreover, the environment is not stationary: humans adapt to the robot, tasks evolve, and social norms shift. This co-adaptation breaks the assumptions underlying many statistical learning approaches.
As a result, robotic AI requires an internal structure that constrains behavior prior to action. This structure is not optional; it is a prerequisite for autonomy. The question, then, is not whether robots need “meaning” in a human semantic sense, but what kind of meaning is necessary for responsible action in the world.
This contrast leads to a key conclusion: while generative AI demonstrates that intelligence-as-output can emerge without semantic grounding, autonomous robots require intelligence-as-responsibility. They must preserve physical integrity, task continuity, and social acceptability over time. These requirements introduce normativity into learning and action selection, which cannot be reduced to outcome statistics alone.
Affordances, originally introduced by Gibson, describe the action possibilities that the environment offers to a particular agent. In robotics, affordances provide a conceptual and computational bridge between raw perception and action.
Unlike symbolic representations, affordances are inherently relational: they depend on both environmental properties and agent capabilities. A surface affords sitting only for agents of appropriate size and posture; an object affords grasping only if the agent’s gripper geometry and force limits are compatible.
This relational nature aligns naturally with robotics. Robots do not encounter abstract objects; they encounter situations that enable or disable specific interactions. Affordances encode exactly this structure.
Reward functions collapse diverse constraints—safety, feasibility, desirability—into a single scalar. This collapse obscures why certain actions are unacceptable. Affordances, by contrast, explicitly separate what can be done from what is preferred. They provide a structured filter on action space before optimization occurs.
This separation is essential for autonomous learning. A robot that first learns affordances can explore safely within the bounds of viability, rather than discovering constraints only after violations occur.
In this report, we treat affordances as the primary carriers of meaning for robots. An affordance is meaningful not because it corresponds to a concept, but because it enables viable interaction. Meaning, in this sense, is operational and normative: it specifies what distinctions in the environment matter for action.
Affordances thus provide a concrete instantiation of operational meaning. They are learned from interaction, grounded in embodiment, and directly actionable. Among existing frameworks, affordance learning uniquely satisfies the requirement of being both theoretically principled and practically implementable.
We now formalize operational meaning in a general, action-centric form.
Let \(S\) denote the space of embodied interaction states, \(A\) the set of actions, and \(T(\cdot \mid s,a)\) a (possibly stochastic) transition kernel. We introduce a viability predicate \[V : S \rightarrow \{0,1\},\] which encodes physical safety, task continuity, and social acceptability.
An operational transition judgment \[\langle s,a \rangle \Downarrow s'\] denotes that executing action \(a\) in state \(s\) can yield successor state \(s'\).
The operational meaning of an action \(a\) is defined as: \[\mathcal{M}(a) = \{(s,s') \mid \langle s,a \rangle \Downarrow s' \wedge V(s') = 1\}.\]
This formulation captures meaning as the set of executable, viability-preserving transitions induced by an action. It introduces normativity without symbols, but remains action-centric.
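As a minimal sketch, the definitions above can be made concrete over an assumed finite, deterministic transition system (all states, actions, and the viability test below are illustrative, not part of the formalism itself):

```python
# Toy sketch of operational meaning M(a) over a small, deterministic
# transition system. States, actions, and the viability predicate are
# illustrative assumptions.

# Transition relation T: (state, action) -> successor state.
T = {
    ("at_table", "grasp"): "holding",
    ("holding", "place"): "at_table",
    ("holding", "drop"): "object_broken",
}

def viable(s: str) -> bool:
    """Viability predicate V: here, simply 'nothing is broken'."""
    return s != "object_broken"

def operational_meaning(action: str) -> set:
    """M(a) = {(s, s') | <s, a> ⇓ s' and V(s') = 1}."""
    return {(s, s2) for (s, a), s2 in T.items()
            if a == action and viable(s2)}

# "drop" is executable but never viability-preserving, so its
# operational meaning is empty.
assert operational_meaning("grasp") == {("at_table", "holding")}
assert operational_meaning("drop") == set()
```

Note that "drop" is perfectly executable yet carries no operational meaning: normativity enters through \(V\), not through the transition relation.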
To align semantics with affordance learning, we must shift from actions to relations.
An affordance \(\alpha\) is defined as a partial mapping: \[\alpha : S \rightharpoonup \mathcal{P}(A),\] where \(\alpha(s)\) denotes the set of actions afforded in state \(s\). This reframes semantics ecologically: the environment–agent relation determines what actions are available.
The operational realization of an affordance is: \[\mathsf{Exec}(\alpha) = \{(s,a,s') \mid a \in \alpha(s),\ \langle s,a \rangle \Downarrow s'\}.\]
The affordance-centered operational meaning is: \[\mathcal{M}(\alpha) = \{(s,a,s') \mid a \in \alpha(s),\ \langle s,a \rangle \Downarrow s',\ V(s') = 1\}.\]
This definition states that an affordance is meaningful if and only if it enables at least one viable realization. Meaning is no longer tied to individual actions but to structured sets of possible interactions.
Affordance learning becomes the problem of estimating which relations preserve viability across contexts and over time. This naturally supports continual learning: affordances can be refined, weakened, or strengthened as environments and norms change, without collapsing meaning into scalar rewards.
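As a minimal sketch over an assumed finite transition system, the relational definitions above can be written directly (all states, actions, and the affordance map below are illustrative assumptions):

```python
# Toy sketch of affordance-centered meaning M(alpha). The transition
# system, affordance map, and viability test are assumptions.

T = {
    ("at_table", "grasp"): "holding",
    ("holding", "place"): "at_table",
    ("holding", "drop"): "object_broken",
}

def viable(s):
    return s != "object_broken"

# Affordance alpha as a partial map S -> P(A): states where the map is
# undefined afford nothing.
alpha = {
    "at_table": {"grasp"},
    "holding": {"place", "drop"},  # afforded, but not every realization is viable
}

def meaning(alpha):
    """M(alpha) = {(s, a, s') | a in alpha(s), <s, a> ⇓ s', V(s') = 1}."""
    return {(s, a, s2) for (s, a), s2 in T.items()
            if a in alpha.get(s, set()) and viable(s2)}

# The non-viable "drop" realization is excluded from the meaning set.
assert meaning(alpha) == {("at_table", "grasp", "holding"),
                          ("holding", "place", "at_table")}
```

Because meaning is a set of triples rather than a scalar, refining the affordance (e.g., removing "drop" from \(\alpha(\text{holding})\)) never requires collapsing feasibility and preference into a reward.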
To make the affordance-centered operational semantics concrete, we consider a worked case study of human–robot handover. This scenario is representative because it combines physical interaction, temporal coordination, and social normativity, all of which are essential for autonomous robots operating in human environments.
In a handover task, a robot transfers an object to a human (or vice versa). Success is not defined solely by physical contact, but by a coordinated sequence of actions that preserves safety, comfort, and mutual predictability. Importantly, the acceptability of actions depends on subtle contextual cues such as human posture, hand motion, gaze, and timing.
This makes handover an ideal test case for operational meaning: the robot must determine what the situation affords before selecting how to act.
Let the interaction state \(s \in S\) include:
- robot end-effector pose and velocity,
- object pose and grasp stability,
- human hand pose, velocity, and distance,
- inferred human engagement signals (e.g., approach, hesitation),
- interaction timing variables.
Let the action set \(A\) include parameterized primitives such as:
- extend_arm,
- hold_still,
- release_object,
- retract_arm.
These primitives are intentionally low-level; semantic structure will arise from affordances, not from action labels.
We define a handover-ready affordance: \[\alpha_{\text{handover}} : S \rightharpoonup \mathcal{P}(A),\] where \(\alpha_{\text{handover}}(s)\) contains those actions that are appropriate when the human is ready to receive the object.
Operationally, \(\alpha_{\text{handover}}(s)\) is non-empty only if relational conditions hold, such as:
- the human hand is within reachable distance,
- relative motion indicates convergence rather than withdrawal,
- the object is stably grasped by the robot,
- no collision or sudden force is predicted.
These conditions are not symbolic rules but learned relational constraints over sensorimotor variables.
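As a toy sketch, the relational gate above can be written as explicit threshold checks. All field names and numeric thresholds here are illustrative assumptions; in a learned system each check would be a classifier over sensorimotor variables:

```python
# Hedged sketch of the relational gate on alpha_handover(s).
# State fields and thresholds are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class HandoverState:
    hand_distance: float       # metres between gripper and human hand
    closing_speed: float       # > 0 means hand and gripper converge
    grasp_stable: bool         # object stably held by the robot
    predicted_collision: bool  # any predicted collision / sudden force

def alpha_handover(s: HandoverState) -> set:
    """Afforded actions when the human is ready to receive the object."""
    ready = (
        s.hand_distance < 0.6      # within reachable distance (assumed)
        and s.closing_speed > 0.0  # convergence, not withdrawal
        and s.grasp_stable
        and not s.predicted_collision
    )
    if not ready:
        # Conservative default outside the affordance.
        return {"hold_still", "retract_arm"}
    return {"extend_arm", "hold_still", "release_object"}
```

The key structural point survives the simplification: release_object is admissible only inside the gate, so the constraint is checked prospectively, before any optimization over preferences.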
We define a viability predicate \(V(s)\) encoding physical and social acceptability:
- no excessive force is applied,
- the object is not dropped unexpectedly,
- the human does not exhibit avoidance or startle responses,
- the interaction remains temporally smooth.
Crucially, some states are unacceptable even if the object is successfully transferred. Thus, viability cannot be reduced to task completion alone.
The affordance-centered operational meaning of \(\alpha_{\text{handover}}\) is defined as: \[\mathcal{M}(\alpha_{\text{handover}}) = \{(s,a,s') \mid a \in \alpha_{\text{handover}}(s),\ \langle s,a \rangle \Downarrow s',\ V(s') = 1\}.\]
This set characterizes the meaning of being “handover-ready”: it is the collection of viable transitions enabled by the affordance.
Importantly, the affordance does not specify a single correct action. Multiple realizations (e.g., slight adjustments in timing or position) may be meaningful as long as they preserve viability.
From a learning standpoint, the robot estimates: \[m(\alpha_{\text{handover}} \mid s) = \sup_{a \in \alpha_{\text{handover}}(s)} \Pr_{s' \sim T(\cdot \mid s,a)}[V(s') = 1].\]
Affordance learning thus becomes the problem of identifying interaction states \(s\) in which the handover affordance exists and is robust. Learning proceeds through repeated interaction, human feedback, and observation of viability violations, without requiring explicit symbolic instruction.
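Under a stochastic transition model, the robustness quantity \(m(\alpha_{\text{handover}} \mid s)\) above can be approximated by Monte Carlo sampling; the transition model and viability test below are illustrative stand-ins:

```python
# Monte Carlo sketch of the robustness estimate
#   m(alpha | s) = sup_{a in alpha(s)} Pr_{s' ~ T(.|s,a)}[V(s') = 1].
# The sampler and viability test are toy assumptions.
import random

def estimate_affordance_robustness(s, afforded_actions, sample_next,
                                   viable, n_samples=1000, rng=None):
    rng = rng or random.Random(0)
    best = 0.0
    for a in afforded_actions:
        ok = sum(viable(sample_next(s, a, rng)) for _ in range(n_samples))
        best = max(best, ok / n_samples)
    return best  # sup over afforded actions of estimated viability prob.

# Toy model: "release" preserves viability ~90% of the time,
# "wait" is always viable.
def sample_next(s, a, rng):
    if a == "release":
        return "done" if rng.random() < 0.9 else "dropped"
    return s

m = estimate_affordance_robustness(
    "ready", {"release", "wait"}, sample_next,
    lambda s2: s2 != "dropped")
assert m == 1.0  # "wait" is always viable, so the supremum is 1.0
```

In practice the supremum over a continuous action set would itself be approximated (e.g., by sampling parameterized primitives), but the structure is the same: viability is estimated per realization, then maximized over the affordance.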
In this case study, operational meaning is neither a label nor an internal belief. It is the structured space of viable interaction transitions that the robot has learned to preserve.
The semantic content of “handover-ready” is therefore a relation between agent, human, object, and timing that constrains which actions are permissible.
This illustrates the central claim of this report: for autonomous robots, meaning is not something to be represented, but something to be maintained through action.
The handover example demonstrates how affordance-centered operational semantics:
- supports prospective constraint checking,
- separates feasibility from preference,
- integrates physical and social norms,
- enables safe autonomous learning.
It also highlights why purely outcome-driven or action-centric semantics are insufficient. Without affordance-centered meaning, the robot would have no principled way to decide when not to act. The affordance-based operational semantics provides this missing structure.
| Framework | Meaning Defined As |
|---|---|
| Symbolic semantics | Truth conditions |
| Behaviorism | Observed outcomes |
| Reinforcement learning | Expected reward |
| Active inference | Prior-weighted predictions |
| Operational semantics | Executable transitions |
| Affordance-centered semantics | Viable affordance realizations |
The critical distinction between general operational semantics and affordance-centered semantics lies in ontology. The former treats actions as primitive; the latter treats relations as primitive. This shift aligns semantics with the structure of embodied interaction and enables autonomous learning that is both safe and adaptive.
In this subsection, we compare major semantic and learning frameworks by explicitly asking how each treats affordances, understood as agent–environment relations that specify viable action possibilities. This comparison clarifies why affordance-centered operational semantics is not merely compatible with existing approaches, but resolves structural limitations inherent in them.
In symbolic and truth-conditional semantic frameworks, meaning is defined in terms of abstract symbols and their correspondence to states of affairs. Objects, actions, and relations are represented symbolically, and correctness is evaluated via logical consistency or truth values.
From an affordance perspective, symbolic semantics faces a
fundamental mismatch. Affordances are relational, contextual, and
agent-dependent, whereas symbolic representations tend to be
agent-independent and static. While it is possible to encode affordances
symbolically (e.g., predicates such as graspable(object)),
such encodings presuppose that the affordance has already been
identified and discretized.
Crucially, symbolic semantics does not explain how affordances are discovered, validated, or invalidated through interaction. Affordances appear only as derived annotations, not as primary semantic units grounded in action. As a result, symbolic approaches struggle with adaptation, embodiment, and continuous learning in open environments.
Behaviorist approaches define meaning implicitly through observed stimulus–response regularities and externally evaluated outcomes. In modern machine learning, this view is reflected in purely outcome-driven systems where internal representations are unconstrained as long as performance metrics improve.
From the standpoint of affordances, behaviorism collapses multiple distinctions into undifferentiated outcomes. It does not distinguish between:
- actions that are infeasible but unrewarded,
- actions that are feasible but unsafe,
- actions that are safe but suboptimal.
Affordances, however, require precisely these distinctions. An affordance specifies what can be done without violating viability, independent of whether it is desirable or rewarded. Behaviorist systems can learn affordances only implicitly and retrospectively, often by violating constraints before learning them. This makes pure behaviorism ill-suited for safety-critical and socially embedded robotics.
Reinforcement learning (RL) formalizes learning as the optimization of expected cumulative reward. In practice, affordance-like knowledge may be encoded implicitly in the learned value function or policy.
However, from an affordance-centric view, RL suffers from a semantic entanglement problem. Feasibility, safety, and desirability are all compressed into a single scalar reward signal. As a consequence:
- affordances are not explicitly represented,
- constraints are discovered only through negative reward,
- the reason why an action is unacceptable is not preserved.
Affordance-centered semantics instead separates viability from utility. Actions outside the affordance set are excluded before optimization. RL can then operate within the affordance-constrained action space, but cannot by itself define or justify those constraints. Thus, RL is best viewed as a secondary optimization layer, not a semantic foundation.
Active inference defines meaning in terms of prediction error minimization relative to prior beliefs and preferences. In this framework, actions are selected to minimize expected free energy, balancing goal fulfillment and epistemic uncertainty reduction.
From an affordance perspective, active inference comes closer than RL to acknowledging constraints on action. However, affordances remain implicit, encoded indirectly through priors and generative models. The framework does not naturally isolate affordances as distinct semantic entities; instead, they are embedded in the structure of the model.
Moreover, active inference emphasizes belief updating over explicit interactional structure. While powerful, this makes affordances less transparent and harder to manipulate directly in robotic systems. Affordance-centered semantics can be seen as complementary: it externalizes and operationalizes the action constraints that active inference internalizes probabilistically.
General operational semantics defines meaning by executable state transitions. This is a significant step toward embodiment: meaning is no longer abstract truth, but concrete effect.
However, in its action-centric form, general operational semantics still treats actions as primitive. Affordances are derived secondarily as sets of state–action pairs with valid transitions. This ordering obscures the ecological structure of interaction, where agents first perceive what the environment affords and only then select specific actions.
Thus, while general operational semantics provides the correct mode of meaning (execution-based), it does not yet provide the correct unit of meaning for robotics.
Affordance-centered operational semantics resolves the above limitations by making affordances the primary semantic objects. Meaning is defined not at the level of individual actions, but at the level of structured relations between agent and environment.
In this framework:
- affordances specify admissible actions per state,
- operational meaning is the set of viability-preserving realizations of an affordance,
- learning consists of discovering and stabilizing these relations.
This approach aligns semantics with embodiment, supports prospective constraint checking, and enables safe autonomous learning. Unlike symbolic semantics, it does not presuppose meaning. Unlike behaviorism and RL, it does not wait for failure to define constraints. Unlike active inference, it externalizes meaning into actionable structure.
In summary, affordance-centered operational semantics uniquely satisfies the requirements of autonomous robotics: it is relational, operational, normative, learnable, and directly implementable. For embodied agents acting in the real world, affordances are not merely features of perception but the fundamental carriers of meaning.
Autonomous learning robots face challenges fundamentally different from those addressed by generative AI. While outcome-driven learning suffices in symbolic domains, robots require operational meaning to act responsibly in the physical and social world. Affordance learning provides a uniquely concrete foundation for such meaning. By formalizing operational meaning within an affordance-centered operational semantics, we obtain a framework that is executable, normative, learnable, and scalable. This framework clarifies what autonomous robots must learn—not merely how to act, but what the world affords—and why meaning, understood operationally, is indispensable for autonomy.
A frequent position in robotics is that internal emotion (as a human-like phenomenal state) is not required for competence. For many tasks, removing internal affect may even be desirable: it reduces complexity, improves predictability, and avoids anthropomorphic confusion. However, as soon as robots enter human-facing contexts, emotion becomes operationally unavoidable because humans inevitably interpret behavior through affective lenses. This creates a pragmatic tension:
Robotic systems may not need emotions to function, but they must handle the emotional dynamics of interaction to remain acceptable, safe, and effective.
In other words, the relevant question is not “Does the robot feel?” but:
1. What affective states does the human infer from the robot’s behavior?
2. How does the robot infer and regulate the human’s affective state?
3. How does the dyadic loop stabilize (or destabilize) over time?
This aligns directly with the report’s core thesis: robots require operational meaning because action is irreversible and socially consequential. Emotional dynamics are one of the strongest sources of such consequences in human–robot interaction (HRI), particularly in long-horizon settings (education, caregiving, companionship).
We use “emotion exchange” to mean a bidirectional, time-extended coupling between (i) human affect and (ii) robot behavior (including expressive signals), where each party’s state influences the other. Importantly, this is not merely recognition (one-way perception) nor mere expression (one-way display), but a closed interaction loop.
Let:
- \(h_t\) be the human affective state (latent, partially observable),
- \(r_t\) be the robot internal interaction state (including memory and intention),
- \(x_t\) be observed multimodal signals (voice prosody, facial expression, posture, timing, etc.),
- \(a_t\) be robot actions (task actions plus expressive actions).
A minimal dynamical picture is: \[h_{t+1} \sim P(h_{t+1}\mid h_t, a_t, x_t), \qquad r_{t+1} \sim P(r_{t+1}\mid r_t, a_t, x_t),\] with observations: \[x_t \sim P(x_t \mid h_t, r_t, \text{context}).\]
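The closed-loop character of this picture can be illustrated with a toy simulation; all dynamics, the regulation policy, and the numeric parameters below are assumptions for illustration only:

```python
# Toy simulation of the dyadic loop: human affect h_t responds to the
# robot action a_t, and the robot observes only a noisy signal x_t.
# Dynamics and parameters are illustrative assumptions.
import random

def step_human(h, a, rng):
    """P(h_{t+1} | h_t, a_t): soothing raises valence, abruptness lowers it."""
    drift = {"soothe": +0.2, "abrupt": -0.3, "neutral": 0.0}[a]
    return max(-1.0, min(1.0, h + drift + rng.gauss(0, 0.05)))

def observe(h, rng):
    """P(x_t | h_t): noisy prosody/expression reading of latent affect."""
    return h + rng.gauss(0, 0.1)

rng = random.Random(0)
h = -0.5                                   # human starts mildly stressed
for t in range(10):
    x = observe(h, rng)
    a = "soothe" if x < 0 else "neutral"   # simple regulation policy
    h = step_human(h, a, rng)

assert h > -0.5  # in this toy model, repeated soothing raises valence
```

Even in this caricature, the loop structure is visible: the robot acts on an inference from \(x_t\), the human state shifts in response, and the next observation reflects that shift.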
Recent work in social robotics emphasizes that long-term interaction with LLM-powered robots changes user perception and willingness to adopt, illustrating that conversational behavior and perceived affect co-evolve across sessions. Related discussions in education settings show that generative-AI-powered social robots raise affective concerns and require careful alignment between dialogue, expression, and role. The broader literature on cognitive and affective theory of mind (ToM) likewise treats affect inference as central to HRI modeling.
The report defines operational meaning through viability-preserving transitions. In HRI, affect enters primarily through normative constraints: even when a task action is physically feasible, it may be socially unacceptable if it produces fear, discomfort, or a perceived violation of boundaries.
Thus, we can extend the viability predicate \(V(s)\) to incorporate affective acceptability: \[V(s)=1 \iff \big(\text{physical safety}\big)\wedge \big(\text{task continuity}\big)\wedge \big(\text{social/affective acceptability}\big).\]
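As a minimal sketch, the extended predicate is just an explicit conjunction of component checks; each check below is a hypothetical stand-in for a learned or engineered test:

```python
# Extended viability predicate as an explicit conjunction.
# Field names and the force limit are illustrative assumptions.

def physically_safe(s):
    return s.get("max_force", 0.0) < 20.0   # assumed force limit (N)

def task_continues(s):
    return not s.get("object_dropped", False)

def socially_acceptable(s):
    return not s.get("human_startled", False)

def viable(s) -> bool:
    """V(s) = 1 iff all three components hold."""
    return physically_safe(s) and task_continues(s) and socially_acceptable(s)

assert viable({"max_force": 5.0})
assert not viable({"human_startled": True})
```

Keeping the components separate preserves the reason a state is non-viable, which a scalar penalty would erase.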
This perspective is consistent with empirical work showing that a robot’s behavioral affordances and the alignment of its cues (face, voice, behavior) affect how people perceive it across use cases. The affordance lens is crucial because it makes explicit what actions are permissible in a given interaction state, rather than treating emotion as a decorative add-on.
Affordances in HRI are not limited to grasping or navigation. Many are social affordances:
- approachable: the robot may initiate proximity or speech,
- interruptible: the robot may ask a question now,
- handover-ready: the robot may release an object,
- comforting: the robot may attempt a soothing action,
- non-escalatory: the robot must avoid behaviors likely to increase stress.
The notion of emotional affordances has been explicitly proposed as a way to improve HRI models by treating affect as part of the interaction structure rather than an afterthought. Related discussions position emotion as an embodied interaction phenomenon, suggesting that affective tagging can guide how robots apprehend objects and situations in socially meaningful ways.
Within the report’s semantics, an emotional affordance \(\alpha_{\text{emo}}\) can be treated as: \[\alpha_{\text{emo}}: S \rightharpoonup \mathcal{P}(A),\] where \(\alpha_{\text{emo}}(s)\) is the set of expressive and interaction-management actions that are afforded (i.e., appropriate and viable) in state \(s\).
The operational meaning of \(\alpha_{\text{emo}}\) is: \[\mathcal{M}(\alpha_{\text{emo}})= \{(s,a,s') \mid a\in\alpha_{\text{emo}}(s),\ \langle s,a\rangle\Downarrow s',\ V(s')=1\}.\]
The phrase “affordable emotion control” can be interpreted in at least four operational senses, all relevant to real deployments:
First, computational affordability: emotion control must run under on-device constraints (CPU/NPU budgets), with limited latency and without relying on expensive sensors. Many practical systems therefore combine:
- lightweight affect inference (e.g., from audio prosody, limited vision cues),
- low-dimensional affect state representations,
- simple policy rules or small learned controllers.
Second, data affordability: real-world affect labels are expensive and noisy. Affordable approaches rely on:
- self-supervised signals (interaction outcomes, engagement proxies),
- weak supervision (ratings, short feedback),
- continual adaptation across sessions.
Recent work on emotion-augmented continual learning for empathic robot behavior explicitly targets long-term, human-centered environments where affect-aware adaptation must be sustained over time.
Third, social-risk affordability: robotic “emotion” is risky when it misfires (e.g., inappropriate comfort or escalation). Affordability therefore includes minimizing social risk by:
- constraining expressive actions via viability predicates,
- limiting degrees of freedom (DoF) in affect display,
- adopting conservative defaults in ambiguous states.
Fourth, engineering affordability: systems must remain maintainable, with interpretable policies, testable constraints, and predictable failure modes. LLM-integrated social robots often require extra safeguards for alignment and role-consistent expression.
We propose a minimal architecture consistent with the report’s affordance-centered semantics. It separates three layers:
Layer 1 (affect inference). Estimate a compact latent \(\hat{h}_t\) from multimodal observations \(x_t\): \[\hat{h}_t = f_{\theta}(x_{t-k:t}, \text{context}).\] This aligns with broader efforts in affective computing and emotionally aware interaction systems.
Layer 2 (emotional affordance computation). Compute an emotional affordance set: \[\alpha_{\text{emo}}(s_t) = \{a \in A_{\text{emo}} : \text{Afford}(a \mid s_t, \hat{h}_t)=1\}.\] Here \(A_{\text{emo}}\) denotes expressive / interaction-management actions (tone, gesture, proxemics, timing).
Layer 3 (viability-constrained selection). Select an action by optimizing a utility (task + rapport) subject to viability: \[a_t \in \arg\max_{a \in \alpha_{\text{emo}}(s_t)} U(a \mid s_t) \quad \text{s.t.}\quad \Pr[V(s_{t+1})=1 \mid s_t,a] \ge \tau.\] This ensures that affective behaviors are never merely “stylistic”: they are viability-preserving moves in a constrained semantic space.
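The three layers can be sketched as a single selection loop; every function body, action name, and threshold below is an illustrative assumption, not a real implementation:

```python
# Minimal sketch of the three-layer loop: affect inference, emotional
# affordance computation, and viability-constrained selection.
# All bodies, names, and thresholds are illustrative assumptions.

def infer_affect(x_window):
    """Layer 1: compact latent h_hat from recent observations (stub for f_theta)."""
    return sum(x_window) / len(x_window)

def emotional_affordances(s, h_hat):
    """Layer 2: alpha_emo(s) over expressive/interaction-management actions."""
    if h_hat < -0.3:  # human appears stressed (assumed threshold)
        return {"soften_voice", "pause"}
    return {"soften_voice", "pause", "ask_question"}

def select_action(s, h_hat, utility, p_viable, tau=0.9):
    """Layer 3: argmax of utility over alpha_emo(s), s.t. viability >= tau."""
    candidates = [a for a in emotional_affordances(s, h_hat)
                  if p_viable(s, a) >= tau]
    return max(candidates, key=lambda a: utility(s, a), default="pause")

a = select_action(
    s="tutoring", h_hat=-0.5,
    utility=lambda s, a: {"soften_voice": 1.0, "pause": 0.5}.get(a, 0.0),
    p_viable=lambda s, a: 0.95)
assert a == "soften_voice"
```

The conservative default ("pause" when no candidate clears the viability threshold) reflects the report's point that in ambiguous states the affordance layer, not the utility, decides.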
LLM-based robots can produce rich dialog and can be paired with gesture generation and emotion-specific guidelines. LLMs are also explored for generating culturally adaptive affective/tactile behaviors, indicating potential for culturally conditioned “emotion exchange”. In educational and companion contexts, LLM-based systems are studied for multi-session engagement and adoption.
However, LLMs are not inherently grounded or normatively safe. In our report’s terms, they can propose actions outside the viability boundary. Therefore, LLM-driven affect should be mediated by affordance-centered semantics:
1. The LLM may propose candidate expressive behaviors.
2. The affordance/viability layer filters them.
3. Only viability-preserving behaviors are executed.
This design makes emotion exchange operationally safe: expression is regulated as action selection under affordance constraints, not as free-form generation.
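The propose-filter-execute pattern is simple to state in code; the proposer below is a stub standing in for an LLM call, and the affordance set and viability probabilities are assumptions:

```python
# Sketch of the propose-filter-execute pattern: a stubbed "LLM"
# proposes expressive behaviors; the affordance/viability layer admits
# only those inside alpha_emo(s) with sufficient viability.
# Proposer, action names, and probabilities are assumptions.

def llm_propose(context):
    """Stub for an LLM proposing candidate expressive behaviors."""
    return ["hug_user", "soften_voice", "crack_joke"]

def filter_proposals(proposals, afforded, p_viable, tau=0.9):
    """Admit only afforded, viability-preserving behaviors, in order."""
    return [a for a in proposals if a in afforded and p_viable(a) >= tau]

afforded = {"soften_voice", "pause"}  # alpha_emo(s) for this state (assumed)
viability = {"soften_voice": 0.97, "pause": 0.99}

safe = filter_proposals(llm_propose("user seems upset"), afforded,
                        lambda a: viability.get(a, 0.0))
assert safe == ["soften_voice"]  # "hug_user" and "crack_joke" are filtered out
```

The LLM never gains execution authority: anything outside the affordance set is dropped before it can become an action, regardless of how fluent the proposal is.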
Emotion exchange is precisely where the report’s earlier distinction between generative AI and robotic AI becomes critical. In text-only systems, affective missteps are often cheap and reversible. In embodied robots, the same missteps can be:
- physically invasive (proximity, touch),
- socially violating (timing, tone),
- trust-destroying (unexpected actions).
Thus, affect becomes part of prospective constraint satisfaction, not merely retrospective evaluation.
In affordance-centered operational semantics, the semantics of affect is not a “mental state” but a constraint-structured field of permissible interactions. Emotion exchange is then understood as the long-horizon co-regulation of this field: both parties adapt their expectations and permissible moves over time.
This appendix supports a non-anthropomorphic stance:
- Robots do not require human-like inner feelings to function.
- They do require affect-aware interaction structure to coexist with humans.
“Emotion” is best treated as an operational layer:
- infer human affect,
- regulate robot expression,
- constrain actions via emotional affordances and viability.
In the language of this report: emotion exchange is a domain where operational meaning is most visibly normative. Affordance-centered semantics provides a principled mechanism to make emotion handling both implementable and safe, and “affordable emotion control” becomes achievable by constraining expressivity through viability-preserving affordances rather than attempting to replicate human affective interiors.
Exploring LLM-powered multi-session human-robot interactions with a social humanoid robot (EMAH), PMC, 2024/2025. https://pmc.ncbi.nlm.nih.gov/articles/PMC12170534/
Generative AI-powered social robots in education, Behaviour & Information Technology, 2025. https://www.tandfonline.com/doi/full/10.1080/0144929X.2025.2604060
Computational Models of Cognitive and Affective Theory of Mind, ACM, 2024/2025. https://dl.acm.org/doi/10.1145/3708319.3733667
Emotion-Augmented Continual Learning for Empathic Robot Behavior, Expert Systems with Applications, 2025. https://www.sciencedirect.com/science/article/pii/S0957417425046895
Design and Implementation of a Companion Robot with LLM-Based Hierarchical Motion Generation (emotional gestures), Applied Sciences, 2025. https://www.mdpi.com/2076-3417/15/23/12759
Exploring LLM-generated culture-specific affective human-robot tactile behaviours, arXiv, 2025. https://arxiv.org/pdf/2507.22905
Freedom comes at a cost?: An exploratory study on how a robot’s affordances affect people’s perception, Frontiers in Robotics and AI, 2024. https://www.frontiersin.org/journals/robotics-and-ai/articles/10.3389/frobt.2024.1288818/full
J. Vallverdú and G. Trovato, Emotional affordances for human–robot interaction, Adaptive Behavior, 2016. https://journals.sagepub.com/doi/10.1177/1059712316668238
Emotions in Robots: Embodied Interaction in Social and Non-Social Contexts, MDPI, 2019. https://www.mdpi.com/2414-4088/3/3/53
Advancing Emotionally Aware Child–Robot Interaction with Affective Computing and Biophysical Data, Sensors, 2025. https://www.mdpi.com/1424-8220/25/4/1161