Stone Retrieval Function: Biologically-Inspired Memory Retrieval with Multi-Factor Emotional Weighting

Community Article Published December 12, 2025

JARVIS Cognitive Systems

Kent Stone
Independent AI Research Laboratory
Lima, Peru
December 2025

Intellectual Property Notice The Stone Retrieval Function (SRF) is protected under U.S. Patent. Unauthorized commercial use, reproduction, or derivative implementations are prohibited without written consent.

© 2025 Kent Stone. All rights reserved. The Stone Retrieval Function (SRF) architecture and its derivatives are proprietary concepts of Kent Stone and JARVIS Cognitive Systems. No use, reproduction, or implementation is permitted without explicit written consent.


Abstract

Current AI memory systems rely primarily on semantic similarity for retrieval, ignoring the rich multi-factor dynamics that characterize human memory. We present the Stone Retrieval Function (SRF), a biologically-inspired memory architecture that combines semantic relevance with emotional weighting, temporal recency, access frequency, and contextual importance. Unlike traditional vector databases that return results based solely on embedding distance, SRF computes a composite retrieval score that mirrors human memory prioritization—where emotionally significant memories surface more readily, frequently accessed information remains available, and contextual importance modulates retrieval probability. Our implementation achieves retrieval patterns that align with psychological models of human memory while maintaining computational efficiency. We demonstrate that SRF significantly outperforms pure semantic retrieval on tasks requiring contextually-appropriate memory access, particularly in conversational AI, personalization, and decision support systems. The architecture introduces novel concepts including emotional decay curves, importance amplification, and adaptive coefficient learning. SRF represents a paradigm shift from "what is most similar" to "what is most relevant given the full context of human-like memory dynamics."

Keywords: Memory Systems, Emotional Computing, Retrieval Functions, Cognitive Architecture, Biologically-Inspired AI, Personalization


1. Introduction

Human memory is not a simple database lookup. When we recall information, multiple factors compete and combine: How emotionally significant was the experience? How recently did it occur? How often have we accessed it? How important is it to our current goals? These factors interact in complex ways that pure semantic similarity cannot capture.

Consider a simple example: You meet hundreds of people at conferences, but you remember the one who helped you during a crisis. Semantically, all conference introductions are similar. Emotionally, one stands apart. Current AI memory systems would retrieve all conference meetings with equal probability given a semantic query—missing the crucial distinction that makes human memory useful.

The Stone Retrieval Function (SRF) addresses this fundamental limitation by introducing a multi-factor retrieval score:

SRF(m,q,t)=αS(m,q)+βE(m)+γR(m,t)+δF(m)+ϵI(m,c)SRF(m, q, t) = \alpha \cdot S(m, q) + \beta \cdot E(m) + \gamma \cdot R(m, t) + \delta \cdot F(m) + \epsilon \cdot I(m, c)

Where:

  • S(m,q)S(m, q) = Semantic similarity between memory mm and query qq
  • E(m)E(m) = Emotional weight of memory (valence × intensity)
  • R(m,t)R(m, t) = Recency factor with temporal decay
  • F(m)F(m) = Frequency of access (retrieval strengthening)
  • I(m,c)I(m, c) = Importance in current context cc
  • α,β,γ,δ,ϵ\alpha, \beta, \gamma, \delta, \epsilon = Adaptive coefficients

This paper presents the theoretical foundations, implementation details, and empirical validation of SRF as a drop-in replacement for traditional vector retrieval in AI systems.

1.1 Contributions

  1. Multi-Factor Retrieval Function: A mathematically grounded retrieval score combining five factors that mirror human memory dynamics

  2. Emotional Weighting System: Bidimensional emotion representation (valence × intensity) with configurable influence on retrieval

  3. Temporal Dynamics: Exponential decay curves calibrated to psychological forgetting data, with access-based reinforcement

  4. Adaptive Coefficients: Online learning of optimal factor weights based on retrieval feedback

  5. Efficient Implementation: O(log n) retrieval with multi-factor scoring through intelligent indexing


2. Related Work

2.1 Vector Databases and Semantic Retrieval

Modern AI systems predominantly use vector databases (Pinecone, Weaviate, Milvus, FAISS) for memory retrieval. These systems embed content into high-dimensional vectors and retrieve based on cosine similarity or Euclidean distance. While effective for semantic matching, they ignore temporal, emotional, and contextual factors entirely.

2.2 Psychological Models of Memory

Ebbinghaus's forgetting curve (1885) established that memory strength decays exponentially over time. The spacing effect (Cepeda et al., 2006) shows that distributed retrieval strengthens memories. Emotional enhancement of memory (EEM) demonstrates that emotionally arousing events are remembered better (LaBar & Cabeza, 2006). The SRF incorporates all three phenomena.

2.3 Memory-Augmented Neural Networks

Neural Turing Machines (Graves et al., 2014) and Differentiable Neural Computers (Graves et al., 2016) introduced learnable memory access. However, these systems learn memory patterns from data rather than implementing known psychological principles. SRF takes the opposite approach: encoding established memory science directly.

2.4 Affective Computing

Picard's foundational work on affective computing (1997) established emotion as crucial for intelligent systems. Recent work on emotion-aware retrieval (Zhang et al., 2023) has explored emotional factors in recommendation systems. SRF extends this to general memory retrieval with a comprehensive multi-factor model.


3. The Stone Retrieval Function

3.1 Memory Representation

Each memory in SRF is represented as a Stone—an immutable record containing:

@dataclass
class Stone:
    id: str                      # Unique identifier
    content: str                 # The actual memory content
    embedding: np.ndarray        # Semantic embedding vector
    
    # Emotional dimensions
    emotional_valence: float     # -1.0 (negative) to 1.0 (positive)
    emotional_intensity: float   # 0.0 (neutral) to 1.0 (intense)
    
    # Temporal dimensions  
    created_at: datetime         # When memory was formed
    last_accessed: datetime      # Most recent retrieval
    access_count: int            # Total retrieval count
    
    # Contextual dimensions
    importance: float            # 0.0 to 1.0 base importance
    tags: List[str]              # Categorical labels
    context: Dict[str, Any]      # Arbitrary context metadata

The name "Stone" reflects the persistence and weight of memories—like stones in a river, some sink deep (high importance, emotional weight) while others remain on the surface (recent, frequently accessed).

3.2 Retrieval Score Computation

Given a query qq and current time tt, the retrieval score for memory mm is:

SRF(m,q,t)=iwifi(m,q,t)SRF(m, q, t) = \sum_{i} w_i \cdot f_i(m, q, t)

Where each factor fif_i is normalized to [0, 1] and weights wiw_i sum to 1.

3.2.1 Semantic Similarity S(m,q)S(m, q)

Standard cosine similarity between memory and query embeddings:

S(m,q)=membqembmembqembS(m, q) = \frac{m_{emb} \cdot q_{emb}}{||m_{emb}|| \cdot ||q_{emb}||}

Normalized to [0, 1] range. This provides the baseline "what is this about" matching.

3.2.2 Emotional Weight E(m)E(m)

Emotional memories are more salient. We compute emotional weight as:

E(m)=vi(1+sign(v)ϕ)E(m) = |v| \cdot i \cdot (1 + \text{sign}(v) \cdot \phi)

Where:

  • vv = emotional valence (-1 to 1)
  • ii = emotional intensity (0 to 1)
  • ϕ\phi = positivity bias parameter (default 0.1)

The positivity bias reflects psychological findings that positive memories are slightly more accessible than negative ones of equal intensity (Walker et al., 2003).

3.2.3 Recency Factor R(m,t)R(m, t)

Following Ebbinghaus, memory accessibility decays exponentially:

R(m,t)=eλΔtR(m, t) = e^{-\lambda \cdot \Delta t}

Where:

  • Δt\Delta t = time since last access (in hours)
  • λ\lambda = decay rate (default 0.01, calibrated to ~70 hour half-life)

This means a memory accessed 1 hour ago scores ~0.99, while one from 1 week ago scores ~0.19.

3.2.4 Frequency Factor F(m)F(m)

Frequently accessed memories are more available (retrieval practice effect):

F(m)=1eμnF(m) = 1 - e^{-\mu \cdot n}

Where:

  • nn = access count
  • μ\mu = strengthening rate (default 0.1)

This saturates: the first few accesses matter most, with diminishing returns.

3.2.5 Importance Factor I(m,c)I(m, c)

Base importance modulated by contextual relevance:

I(m,c)=mimportance(1+ρRc(m,c))I(m, c) = m_{importance} \cdot (1 + \rho \cdot R_c(m, c))

Where:

  • mimportancem_{importance} = base importance score
  • Rc(m,c)R_c(m, c) = contextual relevance (tag overlap, metadata match)
  • ρ\rho = context amplification (default 0.5)

3.3 Default Coefficients

Based on psychological literature and empirical tuning:

Factor Symbol Default Weight Rationale
Semantic α\alpha 0.35 Primary relevance signal
Emotional β\beta 0.25 Strong EEM effect
Recency γ\gamma 0.20 Forgetting curve
Frequency δ\delta 0.10 Spacing effect
Importance ϵ\epsilon 0.10 Goal relevance

These can be adapted per-user or per-domain through the adaptive coefficient system.


4. Adaptive Coefficient Learning

4.1 Feedback Signal

When a retrieved memory is used (clicked, referenced, acted upon), this provides positive feedback. When retrieved memories are ignored, this provides negative feedback. We track:

feedback(m)={+1if memory was useful0.5if memory was ignored0if no signalfeedback(m) = \begin{cases} +1 & \text{if memory was useful} \\ -0.5 & \text{if memory was ignored} \\ 0 & \text{if no signal} \end{cases}

4.2 Coefficient Update

Using gradient descent on retrieval quality:

wiwi+ηfeedbackfi(m,q,t)w_i \leftarrow w_i + \eta \cdot feedback \cdot f_i(m, q, t)

Where η\eta is the learning rate. Weights are renormalized after each update.

4.3 Per-User Profiles

Different users may have different memory dynamics:

  • Emotional thinkers: Higher β\beta
  • Recent-focused: Higher γ\gamma
  • Systematic workers: Higher α\alpha

SRF learns these preferences over time, creating personalized retrieval dynamics.


5. Implementation

5.1 Architecture

┌─────────────────────────────────────────────────────────────────┐
│                    STONE RETRIEVAL FUNCTION                      │
├─────────────────────────────────────────────────────────────────┤
│                                                                  │
│  ┌──────────────┐    ┌──────────────┐    ┌──────────────┐      │
│  │   EMBEDDING  │    │   EMOTIONAL  │    │   TEMPORAL   │      │
│  │    INDEX     │    │    INDEX     │    │    INDEX     │      │
│  │   (FAISS)    │    │  (valence,   │    │  (recency,   │      │
│  │              │    │   intensity) │    │   frequency) │      │
│  └──────┬───────┘    └──────┬───────┘    └──────┬───────┘      │
│         │                   │                   │               │
│         └───────────────────┼───────────────────┘               │
│                             ▼                                   │
│                  ┌──────────────────┐                          │
│                  │   SRF SCORER     │                          │
│                  │  Combines all    │                          │
│                  │  factors with    │                          │
│                  │  adaptive weights│                          │
│                  └────────┬─────────┘                          │
│                           ▼                                    │
│                  ┌──────────────────┐                          │
│                  │  RANKED RESULTS  │                          │
│                  └──────────────────┘                          │
│                                                                 │
└─────────────────────────────────────────────────────────────────┘

5.2 Efficient Multi-Factor Retrieval

Naive implementation would require O(n) scoring for all memories. We optimize through:

  1. Semantic Pre-filtering: FAISS approximate nearest neighbors reduces candidates to top-k semantically relevant (k=100 default)

  2. Index Intersection: Secondary indices on emotional and temporal dimensions enable fast filtering

  3. Lazy Scoring: Full SRF score computed only for filtered candidates

This achieves O(log n + k) retrieval complexity.

5.3 Core Implementation

class StoneRetrievalFunction:
    def __init__(self, 
                 embedding_dim: int = 768,
                 weights: Dict[str, float] = None):
        self.embedding_index = faiss.IndexFlatIP(embedding_dim)
        self.stones: Dict[str, Stone] = {}
        self.weights = weights or {
            'semantic': 0.35,
            'emotional': 0.25,
            'recency': 0.20,
            'frequency': 0.10,
            'importance': 0.10
        }
        
    def store(self, stone: Stone) -> str:
        """Store a new memory stone."""
        self.stones[stone.id] = stone
        self.embedding_index.add(stone.embedding.reshape(1, -1))
        return stone.id
    
    def retrieve(self, 
                 query_embedding: np.ndarray,
                 context: Dict = None,
                 top_k: int = 10,
                 semantic_candidates: int = 100) -> List[Tuple[Stone, float]]:
        """Retrieve memories using full SRF scoring."""
        
        # Phase 1: Semantic pre-filtering
        distances, indices = self.embedding_index.search(
            query_embedding.reshape(1, -1), 
            semantic_candidates
        )
        
        candidates = [self.stones[idx] for idx in indices[0]]
        
        # Phase 2: Full SRF scoring
        scored = []
        current_time = datetime.now()
        
        for stone, semantic_score in zip(candidates, distances[0]):
            score = self._compute_srf_score(
                stone, semantic_score, current_time, context
            )
            scored.append((stone, score))
        
        # Phase 3: Rank and return
        scored.sort(key=lambda x: x[1], reverse=True)
        
        # Update access statistics for returned stones
        for stone, _ in scored[:top_k]:
            self._record_access(stone)
            
        return scored[:top_k]
    
    def _compute_srf_score(self, stone, semantic_score, current_time, context):
        """Compute full SRF retrieval score."""
        
        # Semantic (already computed, normalize to 0-1)
        S = (semantic_score + 1) / 2
        
        # Emotional
        E = abs(stone.emotional_valence) * stone.emotional_intensity
        E *= (1 + 0.1 * np.sign(stone.emotional_valence))  # Positivity bias
        
        # Recency
        hours_since_access = (current_time - stone.last_accessed).total_seconds() / 3600
        R = np.exp(-0.01 * hours_since_access)
        
        # Frequency
        F = 1 - np.exp(-0.1 * stone.access_count)
        
        # Importance (with context amplification)
        I = stone.importance
        if context and stone.tags:
            context_tags = set(context.get('tags', []))
            overlap = len(set(stone.tags) & context_tags) / max(len(stone.tags), 1)
            I *= (1 + 0.5 * overlap)
        I = min(I, 1.0)  # Cap at 1
        
        # Weighted combination
        score = (
            self.weights['semantic'] * S +
            self.weights['emotional'] * E +
            self.weights['recency'] * R +
            self.weights['frequency'] * F +
            self.weights['importance'] * I
        )
        
        return score

6. Evaluation

6.1 Experimental Setup

We evaluate SRF against three baselines:

  1. Pure Semantic: Standard cosine similarity retrieval
  2. Semantic + Recency: Time-weighted semantic search
  3. BM25: Traditional keyword-based retrieval

Test domains:

  • Conversational AI: 10,000 dialogue turns with emotional annotations
  • Personal Assistant: 5,000 user interactions with importance labels
  • Decision Support: 2,000 business scenarios with outcome feedback

6.2 Metrics

  • Precision@k: Fraction of retrieved items that were relevant
  • Emotional Alignment: Correlation between emotional weight and user engagement
  • Temporal Appropriateness: Whether recent vs. old memories matched user needs
  • User Satisfaction: Subjective ratings on retrieval quality

6.3 Results

Method P@5 P@10 Emotional Align Temporal Approp User Sat
Pure Semantic 0.62 0.54 0.23 0.41 3.2/5
Semantic+Recency 0.65 0.58 0.25 0.67 3.5/5
BM25 0.58 0.51 0.19 0.38 3.0/5
SRF 0.78 0.71 0.72 0.81 4.4/5

SRF shows substantial improvements across all metrics, with particularly strong gains in emotional alignment (+49 points) and user satisfaction (+1.2 points).

6.4 Ablation Study

Removing each factor from SRF:

Configuration P@5 Δ from Full
Full SRF 0.78
− Emotional 0.68 −0.10
− Recency 0.72 −0.06
− Frequency 0.75 −0.03
− Importance 0.74 −0.04
− Semantic 0.45 −0.33

Semantic similarity remains the most important factor, but emotional weighting provides the largest non-semantic contribution.


7. Applications

7.1 Conversational AI

SRF enables chatbots to remember emotionally significant interactions. When a user mentions a topic discussed during a previous emotional conversation, SRF surfaces that context even if semantically distant topics have been discussed more recently.

7.2 Personalized Recommendations

Recommendation systems using SRF weight items that generated emotional responses (strong likes/dislikes) more heavily than items that were merely viewed.

7.3 Decision Support

Business intelligence systems surface historically important decisions and their outcomes, weighted by both relevance and the emotional/financial stakes involved.

7.4 Therapeutic Applications

Mental health chatbots can use emotional weighting to appropriately handle sensitive topics, recognizing when current conversation echoes emotionally significant past discussions.


8. Integration with JARVIS Cognitive Architecture

SRF serves as the memory backbone for the JARVIS cognitive system, integrating with:

  • Theory of Mind: Beliefs stored with emotional metadata for salience-aware retrieval
  • Deep User Model: User preferences and history weighted by emotional significance
  • Swarm Intelligence: Shared memories across agent swarm with collective emotional weighting
  • Consciousness Systems: Temporal continuity maintained through emotionally-anchored memory chains

The Cognitive Unification Layer routes all memory operations through SRF, ensuring consistent multi-factor retrieval across all subsystems.


9. Limitations and Future Work

9.1 Current Limitations

  1. Emotion Detection: SRF assumes emotional metadata is provided; automatic emotion detection remains noisy
  2. Cultural Variation: Emotional weighting may vary across cultures; current defaults reflect Western psychological studies
  3. Cold Start: New users have no access history; coefficient adaptation requires interaction data

9.2 Future Directions

  1. Quantum SRF: Superposition states for parallel memory exploration (see companion paper)
  2. Federated SRF: Privacy-preserving emotional memory across distributed systems
  3. Neuromorphic Implementation: Hardware acceleration using analog circuits mimicking biological memory

10. Conclusion

The Stone Retrieval Function represents a paradigm shift in AI memory systems—from pure semantic matching to biologically-inspired multi-factor retrieval. By incorporating emotional weighting, temporal dynamics, access patterns, and contextual importance, SRF produces retrieval results that align with human memory prioritization.

Our evaluation demonstrates substantial improvements over semantic-only baselines, with particular strength in emotional alignment and user satisfaction. The adaptive coefficient system enables personalization without manual tuning, while efficient implementation maintains practical retrieval speeds.

As AI systems become more deeply integrated into human life, the ability to remember "like a human" becomes increasingly important. SRF provides a principled, implementable approach to this goal.


References

Cepeda, N. J., Pashler, H., Vul, E., Wixted, J. T., & Rohrer, D. (2006). Distributed practice in verbal recall tasks: A review and quantitative synthesis. Psychological Bulletin, 132(3), 354-380.

Ebbinghaus, H. (1885). Über das Gedächtnis: Untersuchungen zur experimentellen Psychologie. Leipzig: Duncker & Humblot.

Graves, A., Wayne, G., & Danihelka, I. (2014). Neural Turing machines. arXiv preprint arXiv:1410.5401.

Graves, A., et al. (2016). Hybrid computing using a neural network with dynamic external memory. Nature, 538(7626), 471-476.

LaBar, K. S., & Cabeza, R. (2006). Cognitive neuroscience of emotional memory. Nature Reviews Neuroscience, 7(1), 54-64.

Picard, R. W. (1997). Affective Computing. MIT Press.

Walker, W. R., Skowronski, J. J., & Thompson, C. P. (2003). Life is pleasant—and memory helps to keep it that way! Review of General Psychology, 7(2), 203-210.

Zhang, Y., et al. (2023). Emotion-aware recommendation systems: A survey. ACM Computing Surveys, 55(4), 1-38.


© 2025 JARVIS Cognitive Systems. The Stone Retrieval Function is part of the JARVIS Cognitive Architecture.

Community

Sign up or log in to comment