-
Hi,
It's obvious that the field will be dropped, as it's not included in the output of the chain.
-
@shaojun This looks like an LCEL question. There are a number of LCEL primitives that help manipulate dictionaries. Here's an example that does some dict manipulation: https://github.com/langchain-ai/langserve/blob/main/examples/passthrough_dict/server.py Look through the LangChain docs to learn how to use RunnablePassthrough, RunnablePassthrough.assign, and beta.runnables.context.Context to manipulate LCEL expressions.
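For intuition, the passthrough pattern those primitives enable can be sketched in plain Python with no LangChain dependency. Here `parallel` and `passthrough` are hypothetical stand-ins for RunnableParallel and RunnablePassthrough, not the real classes:

```python
def parallel(steps):
    """Mimics RunnableParallel: run every step on the same input dict."""
    def run(inp):
        return {key: step(inp) for key, step in steps.items()}
    return run

def passthrough(inp):
    """Mimics RunnablePassthrough: return the input unchanged."""
    return inp

chain = parallel({
    # Stand-in for the prompt | model branch, which only sees some keys.
    "response": lambda inp: inp["content"].upper(),
    # The full input dict survives under "original", extra fields included.
    "original": passthrough,
})

out = chain({"content": "hello", "extra": "kept"})
# out["original"] still carries the "extra" field
```

The point is that fan-out with a passthrough branch is how you keep fields that a downstream step would otherwise drop.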
-
OK, first here's a simple solution without `Context`:

```python
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.runnables import RunnablePassthrough, RunnableGenerator
from langchain_openai.chat_models import ChatOpenAI
from langserve import CustomUserType


class MyCustomRequest(CustomUserType):
    content: str
    prompt: str
    shared_pass_through_parameter: str = "No-natural language content"


model = ChatOpenAI()

prompt = ChatPromptTemplate.from_messages([
    ("system", "{prompt}"),
    ("human", "{content}"),
])


def project(request: MyCustomRequest) -> dict:
    # Project the request onto just the keys the prompt template needs.
    return {
        "prompt": request.prompt,
        "content": request.content,
    }


async def process_response(input):
    # Stream chunks through unchanged; transform them here if needed.
    async for chunk in input:
        yield chunk


# The dict fans the input out: "response" runs the LLM branch, while
# "original" passes the full request through untouched.
chain = {
    "response": project | prompt | model,
    "original": RunnablePassthrough(),
} | RunnableGenerator(process_response)
```
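The role RunnableGenerator plays here (wrapping an async generator so it can transform a stream chunk by chunk) can be sketched in plain Python without LangChain; `source` and the `.upper()` transform below are hypothetical illustrations:

```python
import asyncio

async def source():
    # Stand-in for an upstream streaming step emitting chunks.
    for chunk in ("Hel", "lo"):
        yield chunk

async def process_response(stream):
    # Same shape as the chain's process_response: consume an async
    # iterator and yield (optionally transformed) chunks.
    async for chunk in stream:
        yield chunk.upper()

async def main():
    return [c async for c in process_response(source())]

chunks = asyncio.run(main())
# chunks == ["HEL", "LO"]
```

Because the wrapper yields as it consumes, streaming is preserved end to end rather than buffering the full response.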