Commit d4b53e5 (parent 40e1837)

change label in prompt to AgentInput

1 file changed, 2 additions & 2 deletions

src/uipath/eval/models/llm_judge_types.py
@@ -159,7 +159,7 @@ class LLMJudgePromptTemplates(str, Enum):
 
     LLM_JUDGE_SIMULATION_TRAJECTORY_DEFAULT_USER_PROMPT = """As an expert evaluator, determine how well the agent did on a scale of 0-100. Focus on if the simulation was successful and if the agent behaved according to the expected output accounting for alternative valid expressions, and reasonable variations in language while maintaining high standards for accuracy and completeness. Provide your score with a justification, explaining briefly and concisely why you gave that score.
 ----
-UserOrSyntheticInputGivenToAgent:
+AgentInput:
 {{UserOrSyntheticInput}}
 ----
 SimulationInstructions:
@@ -185,7 +185,7 @@ class LLMJudgePromptTemplates(str, Enum):
 
     LLM_JUDGE_TRAJECTORY_DEFAULT_USER_PROMPT = """As an expert evaluator, determine how well the agent performed on a scale of 0-100. Focus on whether the agent's actions and outputs matched the expected behavior, while allowing for alternative valid expressions and reasonable variations in language. Maintain high standards for accuracy and completeness. Provide your score with a brief and clear justification explaining your reasoning.
 ----
-UserOrSyntheticInputGivenToAgent:
+AgentInput:
 {{UserOrSyntheticInput}}
 ----
 ExpectedAgentBehavior:
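
For context, the templates being edited live on a str-mixin Enum, and the `{{UserOrSyntheticInput}}` token is a placeholder filled in at evaluation time. The sketch below is a hypothetical, minimal reconstruction (not the actual library code) showing how a template with the renamed `AgentInput:` label might be defined and rendered; the `render` helper and the plain string substitution are assumptions for illustration.

```python
from enum import Enum


class LLMJudgePromptTemplates(str, Enum):
    # Minimal stand-in for the real template; note the "AgentInput:"
    # label introduced by this commit.
    LLM_JUDGE_TRAJECTORY_DEFAULT_USER_PROMPT = (
        "----\n"
        "AgentInput:\n"
        "{{UserOrSyntheticInput}}\n"
        "----\n"
        "ExpectedAgentBehavior:\n"
    )


def render(template: LLMJudgePromptTemplates, user_input: str) -> str:
    # Hypothetical helper: the double-brace token is treated as a plain
    # string placeholder, not a Python str.format field.
    return template.value.replace("{{UserOrSyntheticInput}}", user_input)


prompt = render(
    LLMJudgePromptTemplates.LLM_JUDGE_TRAJECTORY_DEFAULT_USER_PROMPT,
    "Summarize the quarterly report.",
)
```

After rendering, the judge prompt carries the new `AgentInput:` label followed by the substituted input, with no placeholder left behind.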
