Dataset columns (each record below repeats these four fields, in order):
- repo_id — string, lengths 15–132
- file_path — string, lengths 34–176
- content — string, lengths 2 – 3.52M (long files are truncated with "...")
- __index_level_0__ — int64, all values 0
promptflow_repo/promptflow/examples/flows/chat
promptflow_repo/promptflow/examples/flows/chat/chat-with-wikipedia/flow.dag.yaml
$schema: https://azuremlschemas.azureedge.net/promptflow/latest/Flow.schema.json inputs: chat_history: type: list default: [] question: type: string default: What is ChatGPT? is_chat_input: true outputs: answer: type: string reference: ${augmented_chat.output} is_chat_output: true ...
0
promptflow_repo/promptflow/examples/flows/chat
promptflow_repo/promptflow/examples/flows/chat/chat-with-wikipedia/search_result_from_url.py
import random import time from concurrent.futures import ThreadPoolExecutor from functools import partial import bs4 import requests from promptflow import tool session = requests.Session() def decode_str(string): return string.encode().decode("unicode-escape").encode("latin1").decode("utf-8") def get_page_s...
0
promptflow_repo/promptflow/examples/flows/evaluation
promptflow_repo/promptflow/examples/flows/evaluation/eval-perceived-intelligence/parse_score.py
from promptflow import tool import re @tool def parse_score(gpt_score: str): return float(extract_float(gpt_score)) def extract_float(s): match = re.search(r"[-+]?\d*\.\d+|\d+", s) if match: return float(match.group()) else: return None
0
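The `parse_score.py` preview above is complete but flattened to one line; restored and stripped of the promptflow `@tool` decorator, it runs standalone:

```python
import re

def extract_float(s):
    # Find the first integer or decimal number anywhere in the string.
    match = re.search(r"[-+]?\d*\.\d+|\d+", s)
    if match:
        return float(match.group())
    return None

def parse_score(gpt_score: str) -> float:
    return float(extract_float(gpt_score))

print(parse_score("The score is 4.5 out of 5"))  # 4.5
print(parse_score("3"))                          # 3.0
```

Note that `parse_score` raises a `TypeError` when the input contains no number at all, since `float(None)` fails; `extract_float` itself returns `None` in that case.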
promptflow_repo/promptflow/examples/flows/evaluation
promptflow_repo/promptflow/examples/flows/evaluation/eval-perceived-intelligence/data.jsonl
{"question": "What is the name of the new language representation model introduced in the document?", "variant_id": "v1", "line_number":1, "answer":"The document mentions multiple language representation models, so it is unclear which one is being referred to as \"new\". Can you provide more specific information or con...
0
promptflow_repo/promptflow/examples/flows/evaluation
promptflow_repo/promptflow/examples/flows/evaluation/eval-perceived-intelligence/gpt_perceived_intelligence.md
user: # Instructions * There are many chatbots that can answer users' questions based on the context given from different sources like search results, or snippets from books/papers. They try to understand the user's question and then get context by either performing a search from search engines, databases or books/papers fo...
0
promptflow_repo/promptflow/examples/flows/evaluation
promptflow_repo/promptflow/examples/flows/evaluation/eval-perceived-intelligence/requirements.txt
promptflow promptflow-tools
0
promptflow_repo/promptflow/examples/flows/evaluation
promptflow_repo/promptflow/examples/flows/evaluation/eval-perceived-intelligence/aggregate.py
from typing import List from promptflow import tool @tool def aggregate(perceived_intelligence_score: List[float]): aggregated_results = {"perceived_intelligence_score": 0.0, "count": 0} # Calculate average perceived_intelligence_score for i in range(len(perceived_intelligence_score)): aggregated...
0
promptflow_repo/promptflow/examples/flows/evaluation
promptflow_repo/promptflow/examples/flows/evaluation/eval-perceived-intelligence/README.md
# Perceived Intelligence Evaluation This is a flow that leverages an LLM to evaluate perceived intelligence. Perceived intelligence is the degree to which a bot can impress the user with its responses, by showing originality, insight, creativity, knowledge, and adaptability. Tools used in this flow: - `python` tool - built-in `ll...
0
promptflow_repo/promptflow/examples/flows/evaluation
promptflow_repo/promptflow/examples/flows/evaluation/eval-perceived-intelligence/flow.dag.yaml
$schema: https://azuremlschemas.azureedge.net/promptflow/latest/Flow.schema.json environment: python_requirements_txt: requirements.txt inputs: question: type: string default: What is the name of the new language representation model introduced in the document? answer: type: string default: ...
0
promptflow_repo/promptflow/examples/flows/evaluation
promptflow_repo/promptflow/examples/flows/evaluation/eval-qna-non-rag/gpt_fluency_prompt.jinja2
system: You are an AI assistant. You will be given the definition of an evaluation metric for assessing the quality of an answer in a question-answering task. Your job is to compute an accurate evaluation score using the provided evaluation metric. user: Fluency measures the quality of individual sentences in the answe...
0
promptflow_repo/promptflow/examples/flows/evaluation
promptflow_repo/promptflow/examples/flows/evaluation/eval-qna-non-rag/data.jsonl
{"question":"Which tent is the most waterproof?","ground_truth":"The Alpine Explorer Tent has the highest rainfly waterproof rating at 3000m","answer":"The Alpine Explorer Tent is the most waterproof.","context":"From the our product list, the alpine explorer tent is the most waterproof. The Adventure Dining Table has ...
0
promptflow_repo/promptflow/examples/flows/evaluation
promptflow_repo/promptflow/examples/flows/evaluation/eval-qna-non-rag/gpt_similarity_prompt.jinja2
system: You are an AI assistant. You will be given the definition of an evaluation metric for assessing the quality of an answer in a question-answering task. Your job is to compute an accurate evaluation score using the provided evaluation metric. user: Equivalence, as a metric, measures the similarity between the pre...
0
promptflow_repo/promptflow/examples/flows/evaluation
promptflow_repo/promptflow/examples/flows/evaluation/eval-qna-non-rag/requirements.txt
promptflow promptflow-tools
0
promptflow_repo/promptflow/examples/flows/evaluation
promptflow_repo/promptflow/examples/flows/evaluation/eval-qna-non-rag/ada_cosine_similarity_score.py
from promptflow import tool import numpy as np from numpy.linalg import norm @tool def compute_ada_cosine_similarity(a, b) -> float: return np.dot(a, b)/(norm(a)*norm(b))
0
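The `ada_cosine_similarity_score.py` preview above is the complete file; reflowed, it computes plain cosine similarity between two embedding vectors:

```python
import numpy as np
from numpy.linalg import norm

def compute_ada_cosine_similarity(a, b) -> float:
    # Cosine similarity: dot(a, b) / (|a| * |b|)
    return np.dot(a, b) / (norm(a) * norm(b))

print(compute_ada_cosine_similarity([1.0, 2.0], [2.0, 4.0]))  # ≈ 1.0 (parallel)
print(compute_ada_cosine_similarity([1.0, 0.0], [0.0, 1.0]))  # 0.0 (orthogonal)
```

In the flow this is fed two ada embedding vectors (answer vs. ground truth); any two equal-length numeric sequences work.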
promptflow_repo/promptflow/examples/flows/evaluation
promptflow_repo/promptflow/examples/flows/evaluation/eval-qna-non-rag/validate_input.py
from promptflow import tool @tool def validate_input(question: str, answer: str, context: str, ground_truth: str, selected_metrics: dict) -> dict: input_data = {"question": question, "answer": answer, "context": context, "ground_truth": ground_truth} expected_input_cols = set(input_data.keys()) dict_metri...
0
promptflow_repo/promptflow/examples/flows/evaluation
promptflow_repo/promptflow/examples/flows/evaluation/eval-qna-non-rag/aggregate_variants_results.py
from typing import List from promptflow import tool, log_metric import numpy as np @tool def aggregate_variants_results(results: List[dict], metrics: List[str]): aggregate_results = {} for result in results: for name, value in result.items(): if name in metrics[0]: if name ...
0
promptflow_repo/promptflow/examples/flows/evaluation
promptflow_repo/promptflow/examples/flows/evaluation/eval-qna-non-rag/gpt_relevance_prompt.jinja2
system: You are an AI assistant. You will be given the definition of an evaluation metric for assessing the quality of an answer in a question-answering task. Your job is to compute an accurate evaluation score using the provided evaluation metric. user: Relevance measures how well the answer addresses the main aspects...
0
promptflow_repo/promptflow/examples/flows/evaluation
promptflow_repo/promptflow/examples/flows/evaluation/eval-qna-non-rag/README.md
# Q&A Evaluation This is a flow that evaluates Q&A systems by leveraging Large Language Models (LLMs) to measure the quality and safety of responses. Utilizing GPT and a GPT embedding model to assist with measurements aims to achieve high agreement with human evaluations compared to traditional mathematical measuremen...
0
promptflow_repo/promptflow/examples/flows/evaluation
promptflow_repo/promptflow/examples/flows/evaluation/eval-qna-non-rag/flow.dag.yaml
$schema: https://azuremlschemas.azureedge.net/promptflow/latest/Flow.schema.json inputs: question: type: string default: Which tent is the most waterproof? is_chat_input: false answer: type: string default: The Alpine Explorer Tent is the most waterproof. is_chat_input: false context: ...
0
promptflow_repo/promptflow/examples/flows/evaluation
promptflow_repo/promptflow/examples/flows/evaluation/eval-qna-non-rag/concat_scores.py
from promptflow import tool import numpy as np import re @tool def concat_results(gpt_coherence_score: str = None, gpt_similarity_score: str = None, gpt_fluency_score: str = None, gpt_relevance_score: str = None, gpt_groundedness_score: str =...
0
promptflow_repo/promptflow/examples/flows/evaluation
promptflow_repo/promptflow/examples/flows/evaluation/eval-qna-non-rag/select_metrics.py
from promptflow import tool @tool def select_metrics(metrics: str) -> str: supported_metrics = ('gpt_coherence', 'gpt_similarity', 'gpt_fluency', 'gpt_relevance', 'gpt_groundedness', 'f1_score', 'ada_similarity') user_selected_metrics = [metric.strip() for metric in metrics.split(',')...
0
promptflow_repo/promptflow/examples/flows/evaluation
promptflow_repo/promptflow/examples/flows/evaluation/eval-qna-non-rag/f1_score.py
from promptflow import tool from collections import Counter @tool def compute_f1_score(ground_truth: str, answer: str) -> str: import string import re class QASplitTokenizer: def __call__(self, line): """Tokenizes an input line using split() on whitespace :param line: a s...
0
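The `f1_score.py` preview is cut off mid-tokenizer, but the metric it names is the standard token-overlap F1 used for QA evaluation. A hypothetical sketch, assuming whitespace tokenization only (the real file also defines a `QASplitTokenizer` and may normalize case/punctuation):

```python
from collections import Counter

def compute_f1_score(ground_truth: str, answer: str) -> float:
    # Token-level F1: overlap counted with a multiset intersection.
    gt_tokens = ground_truth.split()
    ans_tokens = answer.split()
    common = Counter(gt_tokens) & Counter(ans_tokens)
    num_same = sum(common.values())
    if num_same == 0:
        return 0.0
    precision = num_same / len(ans_tokens)
    recall = num_same / len(gt_tokens)
    return 2 * precision * recall / (precision + recall)

print(compute_f1_score("the cat sat", "the cat ran"))  # ≈ 0.667
```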
promptflow_repo/promptflow/examples/flows/evaluation
promptflow_repo/promptflow/examples/flows/evaluation/eval-qna-non-rag/gpt_groundedness_prompt.jinja2
system: You are an AI assistant. You will be given the definition of an evaluation metric for assessing the quality of an answer in a question-answering task. Your job is to compute an accurate evaluation score using the provided evaluation metric. user: You will be presented with a CONTEXT and an ANSWER about that CON...
0
promptflow_repo/promptflow/examples/flows/evaluation
promptflow_repo/promptflow/examples/flows/evaluation/eval-qna-non-rag/gpt_coherence_prompt.jinja2
system: You are an AI assistant. You will be given the definition of an evaluation metric for assessing the quality of an answer in a question-answering task. Your job is to compute an accurate evaluation score using the provided evaluation metric. user: Coherence of an answer is measured by how well all the sentences...
0
promptflow_repo/promptflow/examples/flows/evaluation
promptflow_repo/promptflow/examples/flows/evaluation/eval-basic/data.jsonl
{"groundtruth": "Tomorrow's weather will be sunny.","prediction": "The weather will be sunny tomorrow."}
0
promptflow_repo/promptflow/examples/flows/evaluation
promptflow_repo/promptflow/examples/flows/evaluation/eval-basic/line_process.py
from promptflow import tool @tool def line_process(groundtruth: str, prediction: str): """ This tool processes the prediction of a single line and returns the processed result. :param groundtruth: the groundtruth of a single line. :param prediction: the prediction of a single line. """ # Add...
0
promptflow_repo/promptflow/examples/flows/evaluation
promptflow_repo/promptflow/examples/flows/evaluation/eval-basic/requirements.txt
promptflow promptflow-tools
0
promptflow_repo/promptflow/examples/flows/evaluation
promptflow_repo/promptflow/examples/flows/evaluation/eval-basic/aggregate.py
from typing import List from promptflow import tool @tool def aggregate(processed_results: List[str]): """ This tool aggregates the processed result of all lines to the variant level and log metric for each variant. :param processed_results: List of the output of line_process node. """ # Add yo...
0
promptflow_repo/promptflow/examples/flows/evaluation
promptflow_repo/promptflow/examples/flows/evaluation/eval-basic/README.md
# Basic Eval This example shows how to create a basic evaluation flow. Tools used in this flow: - `python` tool ## Prerequisites Install promptflow sdk and other dependencies in this folder: ```bash pip install -r requirements.txt ``` ## What you will learn In this flow, you will learn - how to compose a point ba...
0
promptflow_repo/promptflow/examples/flows/evaluation
promptflow_repo/promptflow/examples/flows/evaluation/eval-basic/flow.dag.yaml
$schema: https://azuremlschemas.azureedge.net/promptflow/latest/Flow.schema.json inputs: groundtruth: type: string default: groundtruth prediction: type: string default: prediction outputs: results: type: string reference: ${line_process.output} nodes: - name: line_process type: python ...
0
promptflow_repo/promptflow/examples/flows/evaluation
promptflow_repo/promptflow/examples/flows/evaluation/eval-qna-rag-metrics/parse_retrival_score.py
from promptflow import tool import re @tool def parse_retrieval_output(retrieval_output: str) -> str: score_response = [sent.strip() for sent in retrieval_output.strip("\"").split("# Result")[-1].strip().split('.') if sent.strip()] parsed_score_response = re.findall(r"\d+", score_respons...
0
promptflow_repo/promptflow/examples/flows/evaluation
promptflow_repo/promptflow/examples/flows/evaluation/eval-qna-rag-metrics/data.jsonl
{"question": "What is the purpose of the LLM Grounding Score, and what does a higher score mean in this context?", "answer": "The LLM Grounding Score is a metric used in the context of in-context learning with large-scale pretrained language models (LLMs) [doc1]. It measures the ability of the LLM to understand and con...
0
promptflow_repo/promptflow/examples/flows/evaluation
promptflow_repo/promptflow/examples/flows/evaluation/eval-qna-rag-metrics/rag_retrieval_prompt.jinja2
system: You are a helpful assistant. user: A chat history between user and bot is shown below A list of documents is shown below in json format, and each document has one unique id. These listed documents are used as context to answer the given question. The task is to score the relevance between the documents and the ...
0
promptflow_repo/promptflow/examples/flows/evaluation
promptflow_repo/promptflow/examples/flows/evaluation/eval-qna-rag-metrics/requirements.txt
promptflow promptflow-tools
0
promptflow_repo/promptflow/examples/flows/evaluation
promptflow_repo/promptflow/examples/flows/evaluation/eval-qna-rag-metrics/validate_input.py
from promptflow import tool def is_valid(input_item): return True if input_item and input_item.strip() else False @tool def validate_input(question: str, answer: str, documents: str, selected_metrics: dict) -> dict: input_data = {"question": is_valid(question), "answer": is_valid(answer), "documents": is_va...
0
promptflow_repo/promptflow/examples/flows/evaluation
promptflow_repo/promptflow/examples/flows/evaluation/eval-qna-rag-metrics/aggregate_variants_results.py
from typing import List from promptflow import tool, log_metric import numpy as np @tool def aggregate_variants_results(results: List[dict], metrics: List[str]): aggregate_results = {} for result in results: for name, value in result.items(): if name not in aggregate_results.keys(): ...
0
promptflow_repo/promptflow/examples/flows/evaluation
promptflow_repo/promptflow/examples/flows/evaluation/eval-qna-rag-metrics/rag_groundedness_prompt.jinja2
system: You are a helpful assistant. user: Your task is to check and rate if factual information in chatbot's reply is all grounded to retrieved documents. You will be given a question, chatbot's response to the question, a chat history between this chatbot and human, and a list of retrieved documents in json format. ...
0
promptflow_repo/promptflow/examples/flows/evaluation
promptflow_repo/promptflow/examples/flows/evaluation/eval-qna-rag-metrics/README.md
# Q&A Evaluation This is a flow that evaluates Q&A RAG (Retrieval Augmented Generation) systems by leveraging state-of-the-art Large Language Models (LLMs) to measure the quality and safety of responses. Utilizing a GPT model to assist with measurements aims to achieve high agreement with human evaluations compare...
0
promptflow_repo/promptflow/examples/flows/evaluation
promptflow_repo/promptflow/examples/flows/evaluation/eval-qna-rag-metrics/flow.dag.yaml
$schema: https://azuremlschemas.azureedge.net/promptflow/latest/Flow.schema.json inputs: metrics: type: string default: gpt_groundedness,gpt_relevance,gpt_retrieval_score is_chat_input: false answer: type: string default: Of the tents mentioned in the retrieved documents, the Alpine Explorer ...
0
promptflow_repo/promptflow/examples/flows/evaluation
promptflow_repo/promptflow/examples/flows/evaluation/eval-qna-rag-metrics/parse_generation_score.py
from promptflow import tool import re @tool def parse_generation_output(rag_generation_score: str) -> str: quality_score = float('nan') quality_reasoning = '' for sent in rag_generation_score.split('\n'): sent = sent.strip() if re.match(r"\s*(<)?Quality score:", sent): numbers_...
0
promptflow_repo/promptflow/examples/flows/evaluation
promptflow_repo/promptflow/examples/flows/evaluation/eval-qna-rag-metrics/parse_groundedness_score.py
from promptflow import tool import re @tool def parse_grounding_output(rag_grounding_score: str) -> str: try: numbers_found = re.findall(r"Quality score:\s*(\d+)\/\d", rag_grounding_score) score = float(numbers_found[0]) if len(numbers_found) > 0 else 0 except Exception: score = float(...
0
promptflow_repo/promptflow/examples/flows/evaluation
promptflow_repo/promptflow/examples/flows/evaluation/eval-qna-rag-metrics/concat_scores.py
from promptflow import tool import numpy as np @tool def concat_results(rag_retrieval_score: dict = None, rag_grounding_score: dict = None, rag_generation_score: dict = None): load_list = [{'name': 'gpt_groundedness', 'result': rag_grounding_score}, {'name': 'gpt_retrieval_sco...
0
promptflow_repo/promptflow/examples/flows/evaluation
promptflow_repo/promptflow/examples/flows/evaluation/eval-qna-rag-metrics/select_metrics.py
from promptflow import tool @tool def select_metrics(metrics: str) -> str: supported_metrics = ('gpt_relevance', 'gpt_groundedness', 'gpt_retrieval_score') user_selected_metrics = [metric.strip() for metric in metrics.split(',') if metric] metric_selection_dict = {} for metric in supported_metrics: ...
0
promptflow_repo/promptflow/examples/flows/evaluation
promptflow_repo/promptflow/examples/flows/evaluation/eval-qna-rag-metrics/rag_generation_prompt.jinja2
system: You will be provided a question, a conversation history, fetched documents related to the question and a response to the question in the domain. Your task is to evaluate the quality of the provided response by following the steps below: - Understand the context of the question based on the conversation history. ...
0
promptflow_repo/promptflow/examples/flows/evaluation
promptflow_repo/promptflow/examples/flows/evaluation/eval-entity-match-rate/data.jsonl
{"entities": ["software engineer","CEO"],"ground_truth": "\"CEO, Software Engineer, Finance Manager\""} {"entities": ["Software Engineer","CEO", "Finance Manager"],"ground_truth": "\"CEO, Software Engineer, Finance Manager\""}
0
promptflow_repo/promptflow/examples/flows/evaluation
promptflow_repo/promptflow/examples/flows/evaluation/eval-entity-match-rate/requirements.txt
promptflow promptflow-tools
0
promptflow_repo/promptflow/examples/flows/evaluation
promptflow_repo/promptflow/examples/flows/evaluation/eval-entity-match-rate/match.py
from promptflow import tool from typing import List @tool def match(answer: List[str], ground_truth: List[str]): exact_match = 0 partial_match = 0 if is_match(answer, ground_truth, ignore_case=True, ignore_order=True, allow_partial=False): exact_match = 1 if is_match(answer, ground_truth, ig...
0
promptflow_repo/promptflow/examples/flows/evaluation
promptflow_repo/promptflow/examples/flows/evaluation/eval-entity-match-rate/is_match_test.py
import unittest from match import is_match class IsMatchTest(unittest.TestCase): def test_normal(self): self.assertEqual(is_match(["a", "b"], ["B", "a"], True, True, False), True) self.assertEqual(is_match(["a", "b"], ["B", "a"], True, False, False), False) self.assertEqual(is_match(["a",...
0
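The unit tests in `is_match_test.py` above pin down the behavior of `is_match` from the (truncated) `match.py`. A hypothetical reconstruction that is consistent with the visible assertions; the real implementation may differ in details:

```python
from typing import List

def is_match(answer: List[str], ground_truth: List[str],
             ignore_case: bool, ignore_order: bool, allow_partial: bool) -> bool:
    # Assumed semantics: optionally lowercase both lists, optionally sort
    # them, then either require full equality or (allow_partial) require
    # every answer entity to appear in the ground truth.
    a = [x.lower() for x in answer] if ignore_case else list(answer)
    g = [x.lower() for x in ground_truth] if ignore_case else list(ground_truth)
    if ignore_order:
        a, g = sorted(a), sorted(g)
    if allow_partial:
        return all(item in g for item in a)
    return a == g

print(is_match(["a", "b"], ["B", "a"], True, True, False))   # True
print(is_match(["a", "b"], ["B", "a"], True, False, False))  # False
```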
promptflow_repo/promptflow/examples/flows/evaluation
promptflow_repo/promptflow/examples/flows/evaluation/eval-entity-match-rate/README.md
# Entity match rate evaluation This is a flow that evaluates entity match rate. Tools used in this flow: - `python` tool ## Prerequisites Install promptflow sdk and other dependencies: ```bash pip install -r requirements.txt ``` ### 1. Test flow/node ```bash # test with default input value in flow.dag.yaml pf flow te...
0
promptflow_repo/promptflow/examples/flows/evaluation
promptflow_repo/promptflow/examples/flows/evaluation/eval-entity-match-rate/flow.dag.yaml
$schema: https://azuremlschemas.azureedge.net/promptflow/latest/Flow.schema.json inputs: entities: type: list default: - software engineer - CEO ground_truth: type: string default: '"CEO, Software Engineer, Finance Manager"' outputs: match_cnt: type: object reference: ${match.outpu...
0
promptflow_repo/promptflow/examples/flows/evaluation
promptflow_repo/promptflow/examples/flows/evaluation/eval-entity-match-rate/log_metrics.py
from promptflow import tool from typing import List from promptflow import log_metric # The inputs section will change based on the arguments of the tool function, after you save the code # Adding type to arguments and return value will help the system show the types properly # Please update the function name/signatur...
0
promptflow_repo/promptflow/examples/flows/evaluation
promptflow_repo/promptflow/examples/flows/evaluation/eval-entity-match-rate/cleansing.py
from typing import List from promptflow import tool @tool def cleansing(entities_str: str) -> List[str]: # Split, remove leading and trailing spaces/tabs/dots parts = entities_str.split(",") cleaned_parts = [part.strip(" \t.\"") for part in parts] entities = [part for part in cleaned_parts if len(part...
0
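The `cleansing.py` preview is truncated in its final filter expression, but the intent is clear from what survives: split on commas, strip surrounding spaces/tabs/dots/quotes, and drop empty parts. A sketch assuming that non-empty filter:

```python
from typing import List

def cleansing(entities_str: str) -> List[str]:
    # Split on commas, then strip leading/trailing spaces, tabs, dots,
    # and double quotes; the final "drop empty parts" step is an assumption
    # since the preview cuts off mid-condition.
    parts = entities_str.split(",")
    cleaned = [part.strip(" \t.\"") for part in parts]
    return [part for part in cleaned if len(part) > 0]

print(cleansing('"CEO, Software Engineer, Finance Manager"'))
# ['CEO', 'Software Engineer', 'Finance Manager']
```

This matches the `ground_truth` format shown in the flow's `data.jsonl` two records up.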
promptflow_repo/promptflow/examples/flows/evaluation
promptflow_repo/promptflow/examples/flows/evaluation/eval-chat-math/data.jsonl
{"groundtruth": "10","prediction": "10"} {"groundtruth": "253","prediction": "506"} {"groundtruth": "1/3","prediction": "2/6"}
0
promptflow_repo/promptflow/examples/flows/evaluation
promptflow_repo/promptflow/examples/flows/evaluation/eval-chat-math/line_process.py
from promptflow import tool def string_to_number(raw_string: str) -> float: ''' Try to parse the prediction string and groundtruth string to float number. Support parse int, float, fraction and recognize non-numeric string with wrong format. Wrong format cases: 'the answer is \box{2/3}', '0, 5, or any num...
0
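The `string_to_number` docstring in the truncated `line_process.py` says it parses ints, floats, and fractions while rejecting malformed strings. A hypothetical sketch using `fractions.Fraction` (the real file may parse differently and may signal errors another way):

```python
from fractions import Fraction

def string_to_number(raw_string: str):
    # Fraction's string constructor accepts "10", "1.5", and "2/6";
    # anything else (e.g. "\box{2/3}") raises, which we map to None.
    try:
        return float(Fraction(raw_string.strip()))
    except (ValueError, ZeroDivisionError):
        return None

print(string_to_number("2/6"))        # 0.3333333333333333
print(string_to_number("10"))         # 10.0
print(string_to_number("\\box{2/3}"))  # None
```

Parsing both sides this way is what lets the flow's `data.jsonl` count groundtruth "1/3" and prediction "2/6" as a match.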
promptflow_repo/promptflow/examples/flows/evaluation
promptflow_repo/promptflow/examples/flows/evaluation/eval-chat-math/requirements.txt
promptflow promptflow-tools
0
promptflow_repo/promptflow/examples/flows/evaluation
promptflow_repo/promptflow/examples/flows/evaluation/eval-chat-math/aggregate.py
from typing import List from promptflow import tool from promptflow import log_metric @tool def accuracy_aggregate(processed_results: List[int]): num_exception = 0 num_correct = 0 for i in range(len(processed_results)): if processed_results[i] == -1: num_exception += 1 elif p...
0
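The `aggregate.py` preview above cuts off inside the counting loop. A hypothetical completion of the visible logic, assuming `-1` marks an exception, `1` a correct line, and that the real file additionally reports via `log_metric`:

```python
from typing import List

def accuracy_aggregate(processed_results: List[int]) -> dict:
    # Count exceptions (-1) and correct lines (1), then compute rates
    # over all lines; the returned dict shape is an assumption.
    num_exception = sum(1 for r in processed_results if r == -1)
    num_correct = sum(1 for r in processed_results if r == 1)
    total = len(processed_results)
    if total == 0:
        return {"accuracy": 0.0, "error_rate": 0.0}
    return {
        "accuracy": round(num_correct / total, 2),
        "error_rate": round(num_exception / total, 2),
    }

print(accuracy_aggregate([1, 0, 1, -1]))  # {'accuracy': 0.5, 'error_rate': 0.25}
```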
promptflow_repo/promptflow/examples/flows/evaluation
promptflow_repo/promptflow/examples/flows/evaluation/eval-chat-math/README.md
# Eval chat math This example shows how to evaluate the answer of math questions, which can compare the output results with the standard answers numerically. Learn more on corresponding [tutorials](../../../tutorials/flow-fine-tuning-evaluation/promptflow-quality-improvement.md) Tools used in this flow: - `python` t...
0
promptflow_repo/promptflow/examples/flows/evaluation
promptflow_repo/promptflow/examples/flows/evaluation/eval-chat-math/flow.dag.yaml
inputs: groundtruth: type: string default: "10" is_chat_input: false prediction: type: string default: "10" is_chat_input: false outputs: score: type: string reference: ${line_process.output} nodes: - name: line_process type: python source: type: code path: line_process...
0
promptflow_repo/promptflow/examples/flows/evaluation
promptflow_repo/promptflow/examples/flows/evaluation/eval-classification-accuracy/data.jsonl
{"groundtruth": "App","prediction": "App"} {"groundtruth": "Channel","prediction": "Channel"} {"groundtruth": "Academic","prediction": "Academic"}
0
promptflow_repo/promptflow/examples/flows/evaluation
promptflow_repo/promptflow/examples/flows/evaluation/eval-classification-accuracy/calculate_accuracy.py
from typing import List from promptflow import log_metric, tool @tool def calculate_accuracy(grades: List[str]): result = [] for index in range(len(grades)): grade = grades[index] result.append(grade) # calculate accuracy for each variant accuracy = round((result.count("Correct") / l...
0
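The `calculate_accuracy.py` preview shows accuracy computed as the fraction of "Correct" grades; the truncated tail presumably logs the metric per variant. The core computation, runnable standalone:

```python
from typing import List

def calculate_accuracy(grades: List[str]) -> float:
    # Accuracy = share of "Correct" grades, rounded to 2 decimals
    # as in the preview; metric logging is omitted here.
    result = list(grades)
    return round(result.count("Correct") / len(result), 2)

print(calculate_accuracy(["Correct", "Correct", "Incorrect"]))  # 0.67
```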
promptflow_repo/promptflow/examples/flows/evaluation
promptflow_repo/promptflow/examples/flows/evaluation/eval-classification-accuracy/requirements.txt
promptflow promptflow-tools
0
promptflow_repo/promptflow/examples/flows/evaluation
promptflow_repo/promptflow/examples/flows/evaluation/eval-classification-accuracy/grade.py
from promptflow import tool @tool def grade(groundtruth: str, prediction: str): return "Correct" if groundtruth.lower() == prediction.lower() else "Incorrect"
0
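The `grade.py` preview above is the whole file: a case-insensitive exact match. Reflowed, without the `@tool` decorator:

```python
def grade(groundtruth: str, prediction: str) -> str:
    # Case-insensitive exact string comparison.
    return "Correct" if groundtruth.lower() == prediction.lower() else "Incorrect"

print(grade("App", "app"))      # Correct
print(grade("App", "Channel"))  # Incorrect
```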
promptflow_repo/promptflow/examples/flows/evaluation
promptflow_repo/promptflow/examples/flows/evaluation/eval-classification-accuracy/README.md
# Classification Accuracy Evaluation This is a flow illustrating how to evaluate the performance of a classification system. It involves comparing each prediction to the groundtruth and assigns a "Correct" or "Incorrect" grade, and aggregating the results to produce metrics such as accuracy, which reflects how good th...
0
promptflow_repo/promptflow/examples/flows/evaluation
promptflow_repo/promptflow/examples/flows/evaluation/eval-classification-accuracy/flow.dag.yaml
$schema: https://azuremlschemas.azureedge.net/promptflow/latest/Flow.schema.json inputs: groundtruth: type: string description: Please specify the groundtruth column, which contains the true label to the outputs that your flow produces. default: APP prediction: type: string description: Pl...
0
promptflow_repo/promptflow/examples/flows/evaluation
promptflow_repo/promptflow/examples/flows/evaluation/eval-groundedness/data.jsonl
{"question": "What is the name of the new language representation model introduced in the document?", "variant_id": "v1", "line_number":1, "answer":"The document mentions multiple language representation models, so it is unclear which one is being referred to as \"new\". Can you provide more specific information or con...
0
promptflow_repo/promptflow/examples/flows/evaluation
promptflow_repo/promptflow/examples/flows/evaluation/eval-groundedness/requirements.txt
promptflow promptflow-tools
0
promptflow_repo/promptflow/examples/flows/evaluation
promptflow_repo/promptflow/examples/flows/evaluation/eval-groundedness/aggregate.py
from typing import List from promptflow import tool @tool def aggregate(groundedness_scores: List[float]): """ This tool aggregates the processed result of all lines to the variant level and log metric for each variant. :param processed_results: List of the output of line_process node. :param variant...
0
promptflow_repo/promptflow/examples/flows/evaluation
promptflow_repo/promptflow/examples/flows/evaluation/eval-groundedness/calc_groundedness.py
from promptflow import tool import re @tool def parse_score(gpt_score: str): return float(extract_float(gpt_score)) def extract_float(s): match = re.search(r"[-+]?\d*\.\d+|\d+", s) if match: return float(match.group()) else: return None
0
promptflow_repo/promptflow/examples/flows/evaluation
promptflow_repo/promptflow/examples/flows/evaluation/eval-groundedness/README.md
# Groundedness Evaluation This is a flow that leverages an LLM to evaluate groundedness: whether the answer states facts that are all present in the given context. Tools used in this flow: - `python` tool - built-in `llm` tool ### 0. Setup connection Prepare your Azure Open AI resource following this [instruction](https://learn.mi...
0
promptflow_repo/promptflow/examples/flows/evaluation
promptflow_repo/promptflow/examples/flows/evaluation/eval-groundedness/flow.dag.yaml
$schema: https://azuremlschemas.azureedge.net/promptflow/latest/Flow.schema.json environment: python_requirements_txt: requirements.txt inputs: question: type: string default: What is the name of the new language representation model introduced in the document? answer: type: string default: ...
0
promptflow_repo/promptflow/examples/flows/evaluation
promptflow_repo/promptflow/examples/flows/evaluation/eval-groundedness/gpt_groundedness.md
user: # Instructions * There are many chatbots that can answer users' questions based on the context given from different sources like search results, or snippets from books/papers. They try to understand the user's question and then get context by either performing a search from search engines, databases or books/papers fo...
0
promptflow_repo/promptflow/examples/flows/evaluation
promptflow_repo/promptflow/examples/flows/evaluation/eval-accuracy-maths-to-code/test_data.jsonl
{"question": "What is the sum of 5 and 3?", "groundtruth": "8", "answer": "8"} {"question": "Subtract 7 from 10.", "groundtruth": "3", "answer": "3"} {"question": "Multiply 6 by 4.", "groundtruth": "24", "answer": "24"} {"question": "Divide 20 by 5.", "groundtruth": "4", "answer": "4"} {"question": "What is the square ...
0
promptflow_repo/promptflow/examples/flows/evaluation
promptflow_repo/promptflow/examples/flows/evaluation/eval-accuracy-maths-to-code/line_process.py
from promptflow import tool @tool def line_process(groundtruth: str, prediction: str) -> int: processed_result = 0 if prediction == "JSONDecodeError" or prediction.startswith("Unknown Error:"): processed_result = -1 return processed_result try: groundtruth = float(groundtruth) ...
0
promptflow_repo/promptflow/examples/flows/evaluation
promptflow_repo/promptflow/examples/flows/evaluation/eval-accuracy-maths-to-code/aggregate.py
from typing import List from promptflow import tool from promptflow import log_metric @tool def accuracy_aggregate(processed_results: List[int]): num_exception = 0 num_correct = 0 for i in range(len(processed_results)): if processed_results[i] == -1: num_exception += 1 elif p...
0
promptflow_repo/promptflow/examples/flows/evaluation
promptflow_repo/promptflow/examples/flows/evaluation/eval-accuracy-maths-to-code/flow.dag.yaml
$schema: https://azuremlschemas.azureedge.net/promptflow/latest/Flow.schema.json inputs: groundtruth: type: string default: "1" prediction: type: string default: "2" outputs: score: type: string reference: ${line_process.output} nodes: - name: line_process type: python source: type...
0
promptflow_repo/promptflow/examples/flows/standard
promptflow_repo/promptflow/examples/flows/standard/describe-image/flip_image.py
import io from promptflow import tool from promptflow.contracts.multimedia import Image from PIL import Image as PIL_Image @tool def passthrough(input_image: Image) -> Image: image_stream = io.BytesIO(input_image) pil_image = PIL_Image.open(image_stream) flipped_image = pil_image.transpose(PIL_Image.FLIP_...
0
promptflow_repo/promptflow/examples/flows/standard
promptflow_repo/promptflow/examples/flows/standard/describe-image/data.jsonl
{"question": "How many colors are there in the image?", "input_image": {"data:image/png;url": "https://developer.microsoft.com/_devcom/images/logo-ms-social.png"}} {"question": "What's this image about?", "input_image": {"data:image/png;url": "https://developer.microsoft.com/_devcom/images/404.png"}}
0
promptflow_repo/promptflow/examples/flows/standard
promptflow_repo/promptflow/examples/flows/standard/describe-image/requirements.txt
promptflow promptflow-tools
0
promptflow_repo/promptflow/examples/flows/standard
promptflow_repo/promptflow/examples/flows/standard/describe-image/README.md
# Describe image flow A flow that takes an image input, flips it horizontally, and uses the OpenAI GPT-4V tool to describe it. Tools used in this flow: - `OpenAI GPT-4V` tool - custom `python` tool Connections used in this flow: - OpenAI Connection ## Prerequisites Install promptflow sdk and other dependencies, create connec...
0
promptflow_repo/promptflow/examples/flows/standard
promptflow_repo/promptflow/examples/flows/standard/describe-image/flow.dag.yaml
$schema: https://azuremlschemas.azureedge.net/promptflow/latest/Flow.schema.json
inputs:
  question:
    type: string
    default: Please describe this image.
  input_image:
    type: image
    default: https://developer.microsoft.com/_devcom/images/logo-ms-social.png
outputs:
  answer:
    type: string
    reference: ...
0
promptflow_repo/promptflow/examples/flows/standard
promptflow_repo/promptflow/examples/flows/standard/describe-image/question_on_image.jinja2
# system:
As an AI assistant, your task involves interpreting images and responding to questions about the image. Remember to provide accurate answers based on the information present in the image.

# user:
{{question}}
![image]({{test_image}})
0
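At run time promptflow renders this template with the node's inputs. The substitution can be previewed with a stdlib-only sketch — a stand-in for the real Jinja2 engine, for illustration only:

```python
import re

# The template above, reproduced as a Python string.
TEMPLATE = """# system:
As an AI assistant, your task involves interpreting images and responding to questions about the image.

# user:
{{question}}
![image]({{test_image}})"""


def render(template: str, **values: str) -> str:
    # Replace each {{name}} slot with the matching keyword argument.
    return re.sub(r"\{\{(\w+)\}\}", lambda m: values[m.group(1)], template)


prompt = render(
    TEMPLATE,
    question="How many colors are there in the image?",
    test_image="https://developer.microsoft.com/_devcom/images/logo-ms-social.png",
)
print(prompt)
```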
promptflow_repo/promptflow/examples/flows/standard
promptflow_repo/promptflow/examples/flows/standard/conditional-flow-for-if-else/generate_result.py
from promptflow import tool


@tool
def generate_result(llm_result="", default_result="") -> str:
    if llm_result:
        return llm_result
    else:
        return default_result
0
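Stripped of the `@tool` decorator, the merge rule above is plain Python and can be exercised directly (a sketch for illustration):

```python
def generate_result(llm_result: str = "", default_result: str = "") -> str:
    # Prefer the LLM branch's answer when that branch actually produced output.
    return llm_result if llm_result else default_result


print(generate_result(llm_result="Prompt flow is ...", default_result="fallback"))
print(generate_result(llm_result="", default_result="fallback"))
```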
promptflow_repo/promptflow/examples/flows/standard
promptflow_repo/promptflow/examples/flows/standard/conditional-flow-for-if-else/data.jsonl
{"question": "What is Prompt flow?"}
{"question": "What is ChatGPT?"}
0
promptflow_repo/promptflow/examples/flows/standard
promptflow_repo/promptflow/examples/flows/standard/conditional-flow-for-if-else/requirements.txt
promptflow
promptflow-tools
0
promptflow_repo/promptflow/examples/flows/standard
promptflow_repo/promptflow/examples/flows/standard/conditional-flow-for-if-else/default_result.py
from promptflow import tool


@tool
def default_result(question: str) -> str:
    return f"I'm not familiar with your query: {question}."
0
promptflow_repo/promptflow/examples/flows/standard
promptflow_repo/promptflow/examples/flows/standard/conditional-flow-for-if-else/README.md
# Conditional flow for if-else scenario
This example is a conditional flow for an if-else scenario. By following this example, you will learn how to create a conditional flow using the `activate config`.

## Flow description
This flow checks whether an input query passes a content safety check. If it's denied, we'll re...
0
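The branch-and-merge that the `activate config` expresses can be sketched in plain Python. Node names mirror the flow's nodes; this is an illustration of the control flow, not the promptflow API:

```python
import random


def content_safety_check(text: str) -> bool:
    # Stand-in for a real content safety node.
    return random.choice([True, False])


def llm_result(question: str) -> str:
    # Stand-in for the LLM branch.
    return "Prompt flow is a suite of development tools for LLM apps."


def default_result(question: str) -> str:
    # Fallback branch for queries that fail the safety check.
    return f"I'm not familiar with your query: {question}."


def generate_result(llm_answer: str = "", default_answer: str = "") -> str:
    # Merge node: exactly one branch produced output.
    return llm_answer if llm_answer else default_answer


def run(question: str) -> str:
    # Only one branch executes, mirroring activate.when / activate.is in the DAG.
    if content_safety_check(question):
        return generate_result(llm_answer=llm_result(question))
    return generate_result(default_answer=default_result(question))


print(run("What is Prompt flow?"))
```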
promptflow_repo/promptflow/examples/flows/standard
promptflow_repo/promptflow/examples/flows/standard/conditional-flow-for-if-else/flow.dag.yaml
$schema: https://azuremlschemas.azureedge.net/promptflow/latest/Flow.schema.json
inputs:
  question:
    type: string
    default: What is Prompt flow?
outputs:
  answer:
    type: string
    reference: ${generate_result.output}
nodes:
- name: content_safety_check
  type: python
  source:
    type: code
    path: conte...
0
promptflow_repo/promptflow/examples/flows/standard
promptflow_repo/promptflow/examples/flows/standard/conditional-flow-for-if-else/llm_result.py
from promptflow import tool


@tool
def llm_result(question: str) -> str:
    # You can use an LLM node to replace this tool.
    return (
        "Prompt flow is a suite of development tools designed to streamline "
        "the end-to-end development cycle of LLM-based AI applications."
    )
0
promptflow_repo/promptflow/examples/flows/standard
promptflow_repo/promptflow/examples/flows/standard/conditional-flow-for-if-else/content_safety_check.py
from promptflow import tool
import random


@tool
def content_safety_check(text: str) -> bool:
    # You can use a content safety node to replace this tool.
    return random.choice([True, False])
0
promptflow_repo/promptflow/examples/flows/standard
promptflow_repo/promptflow/examples/flows/standard/basic/hello.py
import os

from openai.version import VERSION as OPENAI_VERSION
from dotenv import load_dotenv
from promptflow import tool

# The inputs section will change based on the arguments of the tool function, after you save the code
# Adding type to arguments and return value will help the system show the types properly
# Ple...
0
promptflow_repo/promptflow/examples/flows/standard
promptflow_repo/promptflow/examples/flows/standard/basic/data.jsonl
{"text": "Python Hello World!"}
{"text": "C Hello World!"}
{"text": "C# Hello World!"}
0
promptflow_repo/promptflow/examples/flows/standard
promptflow_repo/promptflow/examples/flows/standard/basic/hello.jinja2
{# Please replace the template with your own prompt. #}
Write a simple {{text}} program that displays the greeting message when executed.
0
promptflow_repo/promptflow/examples/flows/standard
promptflow_repo/promptflow/examples/flows/standard/basic/requirements.txt
promptflow[azure]
promptflow-tools
python-dotenv
0
promptflow_repo/promptflow/examples/flows/standard
promptflow_repo/promptflow/examples/flows/standard/basic/.env.example
AZURE_OPENAI_API_KEY=<your_AOAI_key>
AZURE_OPENAI_API_BASE=<your_AOAI_endpoint>
AZURE_OPENAI_API_TYPE=azure
0
promptflow_repo/promptflow/examples/flows/standard
promptflow_repo/promptflow/examples/flows/standard/basic/run.yml
$schema: https://azuremlschemas.azureedge.net/promptflow/latest/Run.schema.json
flow: .
data: data.jsonl
environment_variables:
  # environment variables from connection
  AZURE_OPENAI_API_KEY: ${open_ai_connection.api_key}
  AZURE_OPENAI_API_BASE: ${open_ai_connection.api_base}
  AZURE_OPENAI_API_TYPE: azure
column_ma...
0
promptflow_repo/promptflow/examples/flows/standard
promptflow_repo/promptflow/examples/flows/standard/basic/README.md
# Basic standard flow
A basic standard flow using a custom python tool that calls Azure OpenAI with connection info stored in environment variables.

Tools used in this flow:
- `prompt` tool
- custom `python` Tool

Connections used in this flow:
- None

## Prerequisites
Install promptflow sdk and other dependencies:
``...
0
promptflow_repo/promptflow/examples/flows/standard
promptflow_repo/promptflow/examples/flows/standard/basic/flow.dag.yaml
$schema: https://azuremlschemas.azureedge.net/promptflow/latest/Flow.schema.json
environment:
  python_requirements_txt: requirements.txt
inputs:
  text:
    type: string
    default: Hello World!
outputs:
  output:
    type: string
    reference: ${llm.output}
nodes:
- name: hello_prompt
  type: prompt
  source:
    t...
0
promptflow_repo/promptflow/examples/flows/standard
promptflow_repo/promptflow/examples/flows/standard/web-classification/classify_with_llm.jinja2
system:
Your task is to classify a given url into one of the following categories: Movie, App, Academic, Channel, Profile, PDF or None based on the text content information. The classification will be based on the url, the webpage text content summary, or both.

user:
The selection range of the value of "category" must...
0
promptflow_repo/promptflow/examples/flows/standard
promptflow_repo/promptflow/examples/flows/standard/web-classification/fetch_text_content_from_url.py
import bs4
import requests

from promptflow import tool


@tool
def fetch_text_content_from_url(url: str):
    # Send a request to the URL
    try:
        headers = {
            "User-Agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) "
            "Chrome/113.0.0.0 Safari/537.3...
0
promptflow_repo/promptflow/examples/flows/standard
promptflow_repo/promptflow/examples/flows/standard/web-classification/convert_to_dict.py
import json

from promptflow import tool


@tool
def convert_to_dict(input_str: str):
    try:
        return json.loads(input_str)
    except Exception as e:
        print("The input is not valid, error: {}".format(e))
        return {"category": "None", "evidence": "None"}
0
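LLM output is not guaranteed to be valid JSON, which is why the tool above guards `json.loads` with a neutral fallback record. The behavior can be exercised standalone (decorator dropped for illustration):

```python
import json


def convert_to_dict(input_str: str) -> dict:
    # Parse the model's JSON reply; fall back to a neutral record on bad output.
    try:
        return json.loads(input_str)
    except Exception as e:
        print("The input is not valid, error: {}".format(e))
        return {"category": "None", "evidence": "None"}


print(convert_to_dict('{"category": "App", "evidence": "Both"}'))
print(convert_to_dict("not valid json"))
```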