Skip to content

Update upstream PR reference for slot_prompt_similarity setter#99

Merged
bernardladenthin merged 1 commit intomasterfrom
claude/cleanup-patch-file-KoDqR
Apr 28, 2026
Merged

Update upstream PR reference for slot_prompt_similarity setter#99
bernardladenthin merged 1 commit intomasterfrom
claude/cleanup-patch-file-KoDqR

Conversation

@bernardladenthin
Copy link
Copy Markdown
Owner

Summary

This change updates documentation and code comments to reference the upstream llama.cpp PR that adds runtime mutation support for the slot_prompt_similarity threshold, replacing the local patch documentation with a direct link to the merged upstream change.

Key Changes

  • Removed llama-cpp.patch.md: Deleted the tracked artefact document that proposed a patch for upstream llama.cpp. This patch has now been merged upstream as PR ggml-org/llama.cpp#22393.

  • Updated src/main/cpp/jllama.cpp: Modified the comment block in configureParallelInference to reference the actual upstream PR URL instead of the local patch file. The comment now points to https://github.com/ggml-org/llama.cpp/pull/22393 and clarifies that the upstream setter will be available once the llama.cpp pin is bumped past b8913.

  • Updated REFACTORING.md: Changed the "Forward references" section to link directly to the upstream PR instead of referencing the local llama-cpp.patch.md file. This clarifies that the feature is now tracked upstream rather than as a local proposal.

  • Updated 49be664_24918e4.md: Minor documentation update reflecting the removal of the patch file.

Implementation Details

The actual functionality (getter/setter for slot_prompt_similarity) remains unchanged and will be available once:

  1. The upstream PR is merged into a tagged llama.cpp release
  2. This repository's llama.cpp pin is bumped to that release
  3. The reserved code block in configureParallelInference is uncommented

No behavioral changes in this commit; this is purely documentation and reference cleanup to track the upstream work.

https://claude.ai/code/session_01C9wNVhq6d4eK97g6hsabcC

The patch proposal has been submitted upstream as
ggml-org/llama.cpp#22393 so the local tracking
file is no longer needed.

- Delete llama-cpp.patch.md
- REFACTORING.md forward-references section: replace file reference with
  the actual PR URL
- 49be664_24918e4.md documentation table: mark file deleted, add PR link
- jllama.cpp configureParallelInference comment: replace "see
  llama-cpp.patch.md" with the PR URL

https://claude.ai/code/session_01C9wNVhq6d4eK97g6hsabcC
@bernardladenthin bernardladenthin merged commit b99230c into master Apr 28, 2026
10 checks passed
@bernardladenthin bernardladenthin deleted the claude/cleanup-patch-file-KoDqR branch April 28, 2026 07:19
Sign up for free to join this conversation on GitHub. Already have an account? Sign in to comment

Labels

None yet

Projects

None yet

Development

Successfully merging this pull request may close these issues.

2 participants