Skip to content

fix(compression): preserve saved memory in state_snapshot contract#21812

Closed
fsgeek wants to merge 15 commits intogoogle-gemini:mainfrom
fsgeek:fix/compression-saved-memory-contract
Closed

fix(compression): preserve saved memory in state_snapshot contract#21812
fsgeek wants to merge 15 commits intogoogle-gemini:mainfrom
fsgeek:fix/compression-saved-memory-contract

Conversation

@fsgeek
Copy link
Copy Markdown

@fsgeek fsgeek commented Mar 10, 2026

Problem:

  • The compression snapshot schema did not define a saved-memory field.
  • The compression prompt path did not receive loaded user memory from config.
  • This created a contract gap: compression could not be instructed to preserve durable memory explicitly.

Changes:

  • Add <saved_memory> to the required <state_snapshot> schema in modern and legacy compression prompts.
  • Extend compression prompt generation to accept saved-memory context.
  • Pass flattened config user memory into PromptProvider.getCompressionPrompt().
  • Add/upgrade tests to assert:
    • compression schema includes <saved_memory>
    • compression prompt includes user-memory context
    • chat compression service passes user memory into systemInstruction

Validation:

  • npm run test --workspace @google/gemini-cli-core -- \ src/prompts/promptProvider.test.ts \ src/services/chatCompressionService.test.ts
  • Result: 2 files passed, 38 tests passed.

Context:

  • Complements prior compression-loop fixes by addressing a separate persistence-contract seam rather than control-flow behavior.

Discovered as part of Project Arbiter,
a system-prompt analysis framework used to evaluate prompt contracts in tools
such as gemini-cli.

Summary

Details

Related Issues

How to Validate

Pre-Merge Checklist

  • Updated relevant documentation and README (if needed)
  • Added/updated tests (if needed)
  • Noted breaking changes (if any)
  • Validated on required platforms/methods:
    • MacOS
      • npm run
      • npx
      • Docker
      • Podman
      • Seatbelt
    • Windows
      • npm run
      • npx
      • Docker
    • Linux
      • npm run
      • npx
      • Docker

  Problem:
  - The compression snapshot schema did not define a saved-memory field.
  - The compression prompt path did not receive loaded user memory from config.
  - This created a contract gap: compression could not be instructed to preserve
    durable memory explicitly.

  Changes:
  - Add <saved_memory> to the required <state_snapshot> schema in modern and
    legacy compression prompts.
  - Extend compression prompt generation to accept saved-memory context.
  - Pass flattened config user memory into PromptProvider.getCompressionPrompt().
  - Add/upgrade tests to assert:
    - compression schema includes <saved_memory>
    - compression prompt includes user-memory context
    - chat compression service passes user memory into systemInstruction

  Validation:
  - npm run test --workspace @google/gemini-cli-core -- \
    src/prompts/promptProvider.test.ts \
    src/services/chatCompressionService.test.ts
  - Result: 2 files passed, 38 tests passed.

  Context:
  - Complements prior compression-loop fixes by addressing a separate
    persistence-contract seam rather than control-flow behavior.

  Discovered as part of [Project Arbiter](https://github.com/fsgeek/arbiter),
  a system-prompt analysis framework used to evaluate prompt contracts in tools
  such as gemini-cli.
@fsgeek fsgeek requested a review from a team as a code owner March 10, 2026 02:16
@gemini-code-assist
Copy link
Copy Markdown
Contributor

Summary of Changes

Hello, I'm Gemini Code Assist1! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request addresses a critical gap in the compression snapshot contract by ensuring that user-defined persistent memory is properly preserved. Previously, the compression mechanism lacked a dedicated field for saved memory and did not receive user memory from the configuration, leading to potential loss of important context during chat history compression. The changes introduce the necessary schema updates, extend prompt generation to include user memory, and integrate this memory into the compression process, thereby strengthening the system's ability to maintain durable user preferences and context.

Highlights

  • Schema Update: The <state_snapshot> schema in both modern and legacy compression prompts now explicitly includes a <saved_memory> field.
  • Prompt Context Extension: Compression prompt generation has been extended to accept and incorporate saved user memory context.
  • User Memory Integration: Flattened user memory from the configuration is now passed into the PromptProvider.getCompressionPrompt() method.
  • Enhanced Testing: New and updated tests validate that the compression schema includes <saved_memory>, the prompt incorporates user memory context, and the chat compression service correctly passes user memory into system instructions.
Changelog
  • packages/core/src/prompts/promptProvider.test.ts
    • Mocked getUserMemory in the mockConfig setup.
    • Added a test case to verify that the compression prompt's <state_snapshot> schema includes <saved_memory>.
    • Added a test case to confirm that the compression prompt incorporates user memory content.
  • packages/core/src/prompts/promptProvider.ts
    • Imported flattenMemory from ../config/memory.js.
    • Modified the getCompressionPrompt method to retrieve and flatten user memory from the config, then pass it as savedMemoryContext to the snippet's getCompressionPrompt function.
  • packages/core/src/prompts/snippets.legacy.ts
    • Updated the getCompressionPrompt function signature to accept an optional savedMemoryContext parameter.
    • Introduced a savedMemoryPreservation string literal to dynamically add a SAVED MEMORY CONTEXT section to the prompt if savedMemoryContext is provided.
    • Added the <saved_memory> XML tag with a descriptive comment to the <state_snapshot> schema.
  • packages/core/src/prompts/snippets.ts
    • Updated the getCompressionPrompt function signature to accept an optional savedMemoryContext parameter.
    • Introduced a savedMemoryPreservation string literal to dynamically add a SAVED MEMORY CONTEXT section to the prompt if savedMemoryContext is provided.
    • Added the <saved_memory> XML tag with a descriptive comment to the <state_snapshot> schema.
  • packages/core/src/services/chatCompressionService.test.ts
    • Mocked getUserMemory in the mockConfig setup.
    • Added a new test case to assert that loaded user memory is correctly passed into the compression system instruction.
Activity
  • No specific activity (comments, reviews, progress updates) was provided in the context.
Using Gemini Code Assist

The full guide for Gemini Code Assist can be found on our documentation page, here are some quick tips.

Invoking Gemini

You can request assistance from Gemini at any point by creating a comment using either /gemini <command> or @gemini-code-assist <command>. Below is a summary of the supported commands on the current page.

Feature Command Description
Code Review /gemini review Performs a code review for the current pull request in its current state.
Pull Request Summary /gemini summary Provides a summary of the current pull request in its current state.
Comment @gemini-code-assist Responds in comments when explicitly tagged, both in pull request comments and review comments.
Help /gemini help Displays a list of available commands.

Customization

To customize Gemini Code Assist for GitHub experience, repository maintainers can create a configuration file and/or provide a custom code review style guide (such as PEP-8 for Python) by creating and adding files to a .gemini/ folder in the base of the repository. Detailed instructions can be found here.

Limitations & Feedback

Gemini Code Assist may make mistakes. Please leave feedback on any instances where its feedback is incorrect or counter productive. You can react with 👍 and 👎 on @gemini-code-assist comments. If you're interested in giving your feedback about your experience with Gemini Code Assist for GitHub and other Google products, sign up here.

You can also get AI-powered code generation, chat, as well as code reviews directly in the IDE at no cost with the Gemini Code Assist IDE Extension.

Footnotes

  1. Review the Privacy Notices, Generative AI Prohibited Use Policy, Terms of Service, and learn how to configure Gemini Code Assist in GitHub here. Gemini can make mistakes, so double check it and use code with caution.

Copy link
Copy Markdown
Contributor

@gemini-code-assist gemini-code-assist bot left a comment

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

Code Review

This pull request aims to preserve user memory during chat history compression by adding a <saved_memory> field to the state snapshot contract. However, it introduces a prompt injection vulnerability due to direct interpolation of untrusted user memory content from GEMINI.md files into the system prompt, which could allow an attacker to manipulate the agent's internal state. Additionally, an inconsistent indentation in the legacy prompt snippet could negatively impact model behavior.

### SAVED MEMORY CONTEXT
The following persistent user memory was loaded from context files (for example, global GEMINI memory). You MUST preserve this in <saved_memory> unless newer user instructions in the chat history explicitly supersede it.
<saved_memory_context>
${savedMemoryContext.trim()}
Copy link
Copy Markdown
Contributor

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

security-high high

The savedMemoryContext variable on this line, which contains untrusted user-defined memory from GEMINI.md files, is directly interpolated into the system prompt for chat history compression without any sanitization or escaping. This creates a high-severity prompt injection vulnerability, as an attacker could inject malicious instructions that manipulate the resulting <state_snapshot>, potentially compromising the agent's behavior. It is recommended to sanitize or escape savedMemoryContext before interpolation, ensuring it cannot break out of the <saved_memory_context> XML tag. Additionally, the template literal for savedMemoryPreservation (which includes this line) has an unintentional leading indentation of 4 spaces, which will add extra whitespace to the prompt for legacy models and could negatively impact model behavior. This indentation should be removed for consistency.

…olation and added an explicit instruction to treat <saved_memory_context> as inert data (not directives). I also added a regression test with a tag-breakout payload to ensure it is escaped and cannot break prompt structure.
@gemini-cli gemini-cli bot added the priority/p1 Important and should be addressed in the near term. label Mar 10, 2026
@gemini-cli
Copy link
Copy Markdown
Contributor

gemini-cli bot commented Mar 24, 2026

Hi there! Thank you for your interest in contributing to Gemini CLI.

To ensure we maintain high code quality and focus on our prioritized roadmap, we have updated our contribution policy (see Discussion #17383).

We only guarantee review and consideration of pull requests for issues that are explicitly labeled as 'help wanted'. All other community pull requests are subject to closure after 14 days if they do not align with our current focus areas. For this reason, we strongly recommend that contributors only submit pull requests against issues explicitly labeled as 'help-wanted'.

This pull request is being closed as it has been open for 14 days without a 'help wanted' designation. We encourage you to find and contribute to existing 'help wanted' issues in our backlog! Thank you for your understanding and for being part of our community!

@gemini-cli gemini-cli bot closed this Mar 24, 2026
Sign up for free to join this conversation on GitHub. Already have an account? Sign in to comment

Labels

priority/p1 Important and should be addressed in the near term.

Projects

None yet

Development

Successfully merging this pull request may close these issues.

1 participant