Ollama and LM Studio don't support invoking RAG from their side. You have to flip the flow yourself: insert a middle layer (a proxy) that performs retrieval and augments the prompt before the request ever reaches the LLM.
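A minimal sketch of such a middle layer, assuming Ollama's OpenAI-compatible endpoint at `http://localhost:11434/v1/chat/completions` (LM Studio exposes a similar one, typically on port 1234). The retriever here is a toy keyword-overlap scorer standing in for a real vector store, and the model name `llama3` is just a placeholder:

```python
# RAG "middle layer" sketch: retrieve context, inject it into the messages,
# then forward the augmented request to the local LLM server.

import json
import urllib.request

# Toy document store; a real setup would use embeddings + a vector index.
DOCS = [
    "Ollama serves models over an OpenAI-compatible HTTP API on port 11434.",
    "RAG prepends retrieved documents to the prompt before generation.",
    "Docker compose can run the proxy and the model server side by side.",
]

def retrieve(query: str, docs: list[str], k: int = 1) -> list[str]:
    """Rank docs by naive word overlap with the query; return the top k."""
    q = set(query.lower().split())
    scored = sorted(docs, key=lambda d: -len(q & set(d.lower().split())))
    return scored[:k]

def augment(messages: list[dict], docs: list[str]) -> list[dict]:
    """Inject retrieved context as a system message ahead of the chat turns."""
    query = messages[-1]["content"]
    context = "\n".join(retrieve(query, docs))
    system = {"role": "system",
              "content": f"Answer using this context:\n{context}"}
    return [system] + messages

def proxy_chat(messages: list[dict],
               url: str = "http://localhost:11434/v1/chat/completions") -> dict:
    """Forward the augmented request to the local LLM server."""
    payload = json.dumps({"model": "llama3",
                          "messages": augment(messages, DOCS)}).encode()
    req = urllib.request.Request(
        url, data=payload, headers={"Content-Type": "application/json"})
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

if __name__ == "__main__":
    msgs = [{"role": "user", "content": "What does RAG do to the prompt?"}]
    for m in augment(msgs, DOCS):
        print(m["role"], "->", m["content"][:60])
```

Your chat client then points at this proxy instead of Ollama/LM Studio directly, so every prompt passes through retrieval first; the model server itself stays unmodified.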
