For APIs that support specifying `response_format` as a Pydantic object, such as OpenAI's, this argument can be used directly within APPL's `gen` function. Otherwise, you can use Instructor, which provides similar functionality via the `response_model` argument.
```python
import appl
from appl import gen, ppl
from pydantic import BaseModel

appl.init()


# Define your desired output structure
class UserInfo(BaseModel):
    name: str
    age: int


@ppl
def get_user_info() -> UserInfo:
    # Extract structured data from natural language
    "John Doe is 30 years old."
    return gen(response_format=UserInfo).results


@ppl
def get_user_info_instructor() -> UserInfo:
    # Extract structured data from natural language
    "John Doe is 30 years old."
    return gen(response_model=UserInfo).results


print("Using response_format:")
user_info = get_user_info()
print(user_info.name)  # > John Doe
print(user_info.age)  # > 30

print("Using Instructor's response_model:")
user_info = get_user_info_instructor()
print(user_info.name)  # > John Doe
print(user_info.age)  # > 30
```
```python
from typing import Literal

import appl
from appl import gen, ppl
from pydantic import BaseModel, Field

appl.init()


class Answer(BaseModel):
    # You can use the description annotation to guide the generation of the structured output.
    thoughts: str = Field(..., description="The thoughts for thinking step by step.")
    answer: Literal["Yes", "No"]


@ppl
def answer(question: str):
    "Answer the question below."
    question
    # Using response_obj to retrieve the generation results gives the correct type hint.
    return gen(response_format=Answer).response_obj


ans = answer(
    "The odd numbers in this group add up to an even number: 15, 32, 5, 13, 82, 7, 1."
)
print(f"Thoughts:\n{ans.thoughts}")
print(f"Answer: {ans.answer}")
```
An example output will be:
```
Thoughts:
1. **Identify the Odd Numbers:**
   - 15
   - 5
   - 13
   - 7
   - 1
2. **Add the Odd Numbers Together:**
   \[
   15 + 5 + 13 + 7 + 1 = 41
   \]
3. **Determine Whether the Sum is Even or Odd:**
   - 41 is an odd number because it cannot be evenly divided by 2.
4. **Conclusion:** The odd numbers in the given group add up to an odd number, not an even number.

Answer: No
```
With `response_format`, the stream is captured and displayed, and the returned object is a complete object. With `response_model`, you need to make the model a `Partial` or `Iterable` object so that it can be streamed, and the response object is a generator that yields partial objects.
Let's slightly modify the example from instructor:
```python
# https://jxnl.github.io/instructor/why/?h=iterable#partial-extraction
from typing import List

import appl
from appl import Generation, gen, ppl
from appl.core import make_panel
from instructor import Partial
from pydantic import BaseModel
from rich.live import Live

appl.init()


class User(BaseModel):
    name: str
    age: int


class Info(BaseModel):
    users: List[User]


@ppl
def generate_info() -> Info:
    f"randomly generate 3 users."
    return gen(response_format=Info, stream=True).response_obj


# The streaming is displayed, but the call returns a complete object rather than a generator.
print(f"Generated Info: {generate_info()}")


@ppl
def generate_info_instructor():
    f"randomly generate 3 users."
    return gen(response_model=Partial[Info], stream=True).response_obj


with Live(make_panel("Waiting for Response ..."), auto_refresh=0.5) as live:
    # The returned object is a generator that yields partial responses.
    for response in generate_info_instructor():
        obj = response.model_dump_json(indent=2)
        live.update(make_panel(str(obj), language="json"))
```
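The example above uses `Partial`, which streams growing snapshots of the whole `Info` object. As a minimal sketch of the `Iterable` variant mentioned earlier (assuming `gen` forwards `response_model` to Instructor as in the `Partial` example), `Iterable[User]` from `typing` instead yields complete `User` objects one at a time:

```python
from typing import Iterable

import appl
from appl import gen, ppl
from pydantic import BaseModel

appl.init()


class User(BaseModel):
    name: str
    age: int


@ppl
def generate_users_iterable():
    "Randomly generate 3 users."
    # Assumption: response_model is passed through to Instructor, which treats
    # Iterable[User] as a streamed sequence of complete User objects.
    return gen(response_model=Iterable[User], stream=True).response_obj


for user in generate_users_iterable():
    print(user)  # each yielded item is a fully-parsed User
```

This form is convenient when you want to process items as soon as they arrive instead of re-rendering a partially-filled object.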