repo_id | file_path | content | __index_level_0__ |
|---|---|---|---|
promptflow_repo/promptflow/examples/flows/standard | promptflow_repo/promptflow/examples/flows/standard/describe-image/flip_image.py | import io
from promptflow import tool
from promptflow.contracts.multimedia import Image
from PIL import Image as PIL_Image
@tool
def passthrough(input_image: Image) -> Image:
image_stream = io.BytesIO(input_image)
pil_image = PIL_Image.open(image_stream)
flipped_image = pil_image.transpose(PIL_Image.FLIP_... | 0 |
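The flip tool's body is truncated above. A minimal self-contained sketch of the same idea, operating on raw image bytes with Pillow instead of the promptflow `Image` wrapper (the function name `flip_image_bytes` is hypothetical, and the transpose-constant lookup hedges across Pillow versions):

```python
import io

from PIL import Image as PIL_Image  # Pillow

def flip_image_bytes(image_bytes: bytes) -> bytes:
    """Flip image bytes left/right and return the result as PNG bytes."""
    pil_image = PIL_Image.open(io.BytesIO(image_bytes))
    # Older Pillow exposes FLIP_LEFT_RIGHT on the module; newer versions
    # move it to the Image.Transpose enum.
    method = getattr(PIL_Image, "FLIP_LEFT_RIGHT", None)
    if method is None:
        method = PIL_Image.Transpose.FLIP_LEFT_RIGHT
    flipped = pil_image.transpose(method)
    out = io.BytesIO()
    flipped.save(out, format="PNG")
    return out.getvalue()
```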
promptflow_repo/promptflow/examples/flows/standard | promptflow_repo/promptflow/examples/flows/standard/describe-image/question_on_image.jinja2 | # system:
As an AI assistant, your task involves interpreting images and responding to questions about the image.
Remember to provide accurate answers based on the information present in the image.
# user:
{{question}}

| 0 |
promptflow_repo/promptflow/examples/flows/standard | promptflow_repo/promptflow/examples/flows/standard/describe-image/requirements.txt | promptflow
promptflow-tools | 0 |
promptflow_repo/promptflow/examples/flows/standard | promptflow_repo/promptflow/examples/flows/standard/describe-image/flow.dag.yaml | $schema: https://azuremlschemas.azureedge.net/promptflow/latest/Flow.schema.json
inputs:
question:
type: string
default: Please describe this image.
input_image:
type: image
default: https://developer.microsoft.com/_devcom/images/logo-ms-social.png
outputs:
answer:
type: string
reference: ... | 0 |
promptflow_repo/promptflow/examples/flows/standard | promptflow_repo/promptflow/examples/flows/standard/basic-with-connection/data.jsonl | {"text": "Python Hello World!"}
{"text": "C Hello World!"}
{"text": "C# Hello World!"}
| 0 |
promptflow_repo/promptflow/examples/flows/standard | promptflow_repo/promptflow/examples/flows/standard/basic-with-connection/custom.yml | $schema: https://azuremlschemas.azureedge.net/promptflow/latest/CustomConnection.schema.json
name: basic_custom_connection
type: custom
configs:
api_type: azure
api_version: 2023-03-15-preview
api_base: https://<to-be-replaced>.openai.azure.com/
secrets: # must-have
api_key: <to-be-replaced>
| 0 |
promptflow_repo/promptflow/examples/flows/standard | promptflow_repo/promptflow/examples/flows/standard/basic-with-connection/README.md | # Basic flow with custom connection
A basic standard flow that uses a custom Python tool to call Azure OpenAI, with connection info stored in a custom connection.
Tools used in this flow:
- `prompt` tool
- custom `python` Tool
Connections used in this flow:
- None
## Prerequisites
Install promptflow sdk and other depend... | 0 |
promptflow_repo/promptflow/examples/flows/standard | promptflow_repo/promptflow/examples/flows/standard/basic-with-connection/hello.py | from typing import Union
from openai.version import VERSION as OPENAI_VERSION
from promptflow import tool
from promptflow.connections import CustomConnection, AzureOpenAIConnection
# The inputs section will change based on the arguments of the tool function, after you save the code
# Adding type to arguments and retu... | 0 |
promptflow_repo/promptflow/examples/flows/standard | promptflow_repo/promptflow/examples/flows/standard/basic-with-connection/requirements.txt | promptflow[azure]
promptflow-tools
python-dotenv | 0 |
promptflow_repo/promptflow/examples/flows/standard | promptflow_repo/promptflow/examples/flows/standard/basic-with-connection/flow.dag.yaml | $schema: https://azuremlschemas.azureedge.net/promptflow/latest/Flow.schema.json
inputs:
text:
type: string
default: Hello World!
outputs:
output:
type: string
reference: ${llm.output}
nodes:
- name: hello_prompt
type: prompt
source:
type: code
path: hello.jinja2
inputs:
text: ${in... | 0 |
promptflow_repo/promptflow/examples/flows/standard | promptflow_repo/promptflow/examples/flows/standard/basic-with-connection/hello.jinja2 | {# Please replace the template with your own prompt. #}
Write a simple {{text}} program that displays the greeting message when executed. | 0 |
promptflow_repo/promptflow/examples/flows/standard | promptflow_repo/promptflow/examples/flows/standard/customer-intent-extraction/inputs.json | {
"customer_info": "## Customer_Info\\n\\nFirst Name: Sarah \\nLast Name: Lee \\nAge: 38 \\nEmail Address: sarahlee@example.com \\nPhone Number: 555-867-5309 \\nShipping Address: 321 Maple St, Bigtown USA, 90123 \\nMembership: Platinum \\n\\n## Recent_Purchases\\n\\norder_number: 2 \\ndate: 2023-02-10 \\nitem:\\n- de... | 0 |
promptflow_repo/promptflow/examples/flows/standard | promptflow_repo/promptflow/examples/flows/standard/customer-intent-extraction/user_intent_few_shot.jinja2 | You are given a list of orders with item_numbers from a customer and a statement from the customer. It is your job to identify
the intent that the customer has with their statement. Possible intents can be:
"product return", "product exchange", "general question", "product question", "other".
If the intent is product... | 0 |
promptflow_repo/promptflow/examples/flows/standard | promptflow_repo/promptflow/examples/flows/standard/customer-intent-extraction/README.md | # Customer Intent Extraction
This sample uses an OpenAI chat model (ChatGPT/GPT-4) to identify customer intent from a customer's question.
By going through this sample you will learn how to create a flow from existing working code (written in LangChain in this case).
This is the [existing code](./intent.py).
## Prereq... | 0 |
promptflow_repo/promptflow/examples/flows/standard | promptflow_repo/promptflow/examples/flows/standard/customer-intent-extraction/user_intent_zero_shot.jinja2 | You are given a list of orders with item_numbers from a customer and a statement from the customer. It is your job to identify the intent that the customer has with their statement. Possible intents can be: "product return", "product exchange", "general question", "product question", "other".
In triple backticks below... | 0 |
promptflow_repo/promptflow/examples/flows/standard | promptflow_repo/promptflow/examples/flows/standard/customer-intent-extraction/.env.example | CHAT_DEPLOYMENT_NAME=gpt-35-turbo
AZURE_OPENAI_API_KEY=<your_AOAI_key>
AZURE_OPENAI_API_BASE=<your_AOAI_endpoint>
| 0 |
promptflow_repo/promptflow/examples/flows/standard | promptflow_repo/promptflow/examples/flows/standard/customer-intent-extraction/intent.py | import os
import pip
from langchain.chat_models import AzureChatOpenAI
from langchain.prompts.chat import ChatPromptTemplate, HumanMessagePromptTemplate
from langchain.prompts.prompt import PromptTemplate
from langchain.schema import HumanMessage
def extract_intent(chat_prompt: str):
if "AZURE_OPENAI_API_KEY" not... | 0 |
promptflow_repo/promptflow/examples/flows/standard | promptflow_repo/promptflow/examples/flows/standard/customer-intent-extraction/requirements.txt | promptflow
promptflow-tools
python-dotenv
langchain
jinja2 | 0 |
promptflow_repo/promptflow/examples/flows/standard | promptflow_repo/promptflow/examples/flows/standard/customer-intent-extraction/.amlignore | *.ipynb
.venv/
.data/
.env
.vscode/
outputs/
connection.json | 0 |
promptflow_repo/promptflow/examples/flows/standard | promptflow_repo/promptflow/examples/flows/standard/customer-intent-extraction/extract_intent_tool.py | import os
from promptflow import tool
from promptflow.connections import CustomConnection
from intent import extract_intent
@tool
def extract_intent_tool(chat_prompt, connection: CustomConnection) -> str:
# set environment variables
for key, value in dict(connection).items():
os.environ[key] = valu... | 0 |
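The tool above is truncated just after the loop that copies connection entries into environment variables. A standalone sketch of that environment-population step (the helper name `set_connection_env` is hypothetical):

```python
import os

def set_connection_env(connection_items: dict) -> None:
    # Copy every key/value pair from the dict-converted custom connection
    # into process environment variables, so downstream LangChain/OpenAI
    # code can read them via os.environ.
    for key, value in connection_items.items():
        os.environ[key] = str(value)
```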
promptflow_repo/promptflow/examples/flows/standard | promptflow_repo/promptflow/examples/flows/standard/customer-intent-extraction/flow.dag.yaml | $schema: https://azuremlschemas.azureedge.net/promptflow/latest/Flow.schema.json
inputs:
history:
type: string
customer_info:
type: string
outputs:
output:
type: string
reference: ${extract_intent.output}
nodes:
- name: chat_prompt
type: prompt
source:
type: code
path: user_intent_zero... | 0 |
promptflow_repo/promptflow/examples/flows/standard/customer-intent-extraction | promptflow_repo/promptflow/examples/flows/standard/customer-intent-extraction/.promptflow/flow.tools.json | {
"package": {},
"code": {
"chat_prompt": {
"type": "prompt",
"inputs": {
"customer_info": {
"type": [
"string"
]
},
"chat_history": {
"type": [
"string"
]
}
},
"source": "user_intent_zero_sho... | 0 |
promptflow_repo/promptflow/examples/flows/standard/customer-intent-extraction | promptflow_repo/promptflow/examples/flows/standard/customer-intent-extraction/data/denormalized-flat.jsonl | {"customer_info": "## Customer_Info\n\nFirst Name: Sarah \nLast Name: Lee \nAge: 38 \nEmail Address: sarahlee@example.com \nPhone Number: 555-867-5309 \nShipping Address: 321 Maple St, Bigtown USA, 90123 \nMembership: Platinum \n\n## Recent_Purchases\n\norder_number: 2 \ndate: 2023-02-10 \nitem:\n- description: TrailM... | 0 |
promptflow_repo/promptflow/examples/flows/standard | promptflow_repo/promptflow/examples/flows/standard/named-entity-recognition/data.jsonl | {"text": "The software engineer is working on a new update for the application.", "entity_type": "job title", "results": "software engineer"}
{"text": "The project manager and the data analyst are collaborating to interpret the project data.", "entity_type": "job title", "results": "project manager, data analyst"}
{"te... | 0 |
promptflow_repo/promptflow/examples/flows/standard | promptflow_repo/promptflow/examples/flows/standard/named-entity-recognition/README.md | # Named Entity Recognition
A flow that performs a named entity recognition task.
[Named Entity Recognition (NER)](https://en.wikipedia.org/wiki/Named-entity_recognition) is a Natural Language Processing (NLP) task. It involves identifying and classifying named entities (such as people, organizations, locations, date exp... | 0 |
promptflow_repo/promptflow/examples/flows/standard | promptflow_repo/promptflow/examples/flows/standard/named-entity-recognition/NER-test.ipynb | # Setup execution path and pf client
import os
import promptflow
root = os.path.join(os.getcwd(), "../")
flow_path = os.path.join(root, "named-entity-recognition")
data_path = os.path.join(flow_path, "data.jsonl")
eval_match_rate_flow_path = os.path.join(root, "../evaluation/eval-entity-match-rate")
pf = promptflow.... | 0 |
promptflow_repo/promptflow/examples/flows/standard | promptflow_repo/promptflow/examples/flows/standard/named-entity-recognition/cleansing.py | from typing import List
from promptflow import tool
@tool
def cleansing(entities_str: str) -> List[str]:
# Split, remove leading and trailing spaces/tabs/dots
parts = entities_str.split(",")
cleaned_parts = [part.strip(" \t.\"") for part in parts]
entities = [part for part in cleaned_parts if len(part... | 0 |
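The `cleansing` tool is truncated at its final line. A completion consistent with the visible body and with the unit tests in `cleansing_test.py` (a sketch, not the verbatim file):

```python
from typing import List

def cleansing(entities_str: str) -> List[str]:
    # Split on commas, then strip spaces, tabs, dots, and double quotes
    # from each part; drop parts that end up empty.
    parts = entities_str.split(",")
    cleaned_parts = [part.strip(" \t.\"") for part in parts]
    return [part for part in cleaned_parts if len(part) > 0]
```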
promptflow_repo/promptflow/examples/flows/standard | promptflow_repo/promptflow/examples/flows/standard/named-entity-recognition/cleansing_test.py | import unittest
from cleansing import cleansing
class CleansingTest(unittest.TestCase):
def test_normal(self):
self.assertEqual(cleansing("a, b, c"), ["a", "b", "c"])
self.assertEqual(cleansing("a, b, (425)137-98-25, "), ["a", "b", "(425)137-98-25"])
self.assertEqual(cleansing("a, b, F. S... | 0 |
promptflow_repo/promptflow/examples/flows/standard | promptflow_repo/promptflow/examples/flows/standard/named-entity-recognition/requirements.txt | promptflow
promptflow-tools | 0 |
promptflow_repo/promptflow/examples/flows/standard | promptflow_repo/promptflow/examples/flows/standard/named-entity-recognition/NER_LLM.jinja2 | system:
Your task is to find entities of certain type from the given text content.
If there are multiple entities, please return them all separated by commas, e.g. "entity1, entity2, entity3".
You should only return the entity list, nothing else.
If there's no such entity, please return "None".
user:
Entity type: {{en... | 0 |
promptflow_repo/promptflow/examples/flows/standard | promptflow_repo/promptflow/examples/flows/standard/named-entity-recognition/flow.dag.yaml | $schema: https://azuremlschemas.azureedge.net/promptflow/latest/Flow.schema.json
inputs:
entity_type:
type: string
default: job title
text:
type: string
default: Maxime is a data scientist at Auto Dataset, and his wife is a finance
manager in the same company.
outputs:
entities:
type: st... | 0 |
promptflow_repo/promptflow/examples/flows/standard | promptflow_repo/promptflow/examples/flows/standard/named-entity-recognition/eval_test.py | import unittest
import traceback
import os
import promptflow.azure as azure
from azure.identity import DefaultAzureCredential, InteractiveBrowserCredential
import promptflow
class BaseTest(unittest.TestCase):
def setUp(self) -> None:
root = os.path.join(os.path.dirname(os.path.abspath(__file__)), "../")
... | 0 |
promptflow_repo/promptflow/examples/flows/standard | promptflow_repo/promptflow/examples/flows/standard/autonomous-agent/data.jsonl | {"name": "FilmTriviaGPT", "role": "an AI specialized in film trivia that provides accurate and up-to-date information about movies, directors, actors, and more.", "goals": ["Introduce 'Lord of the Rings' film trilogy including the film title, release year, director, current age of the director, production company and a... | 0 |
promptflow_repo/promptflow/examples/flows/standard | promptflow_repo/promptflow/examples/flows/standard/autonomous-agent/functions.py | from promptflow import tool
@tool
def functions_format() -> list:
functions = [
{
"name": "search",
"description": """The action will search this entity name on Wikipedia and returns the first {count}
sentences if it exists. If not, it will return some related entities ... | 0 |
promptflow_repo/promptflow/examples/flows/standard | promptflow_repo/promptflow/examples/flows/standard/autonomous-agent/README.md | # Autonomous Agent
This is a flow showcasing how to construct an AutoGPT agent with promptflow that autonomously figures out how to apply the given functions
to solve the goal. In this sample, the goal is film trivia: providing accurate and up-to-date information about movies, directors,
actors, and more.
It involves i... | 0 |
promptflow_repo/promptflow/examples/flows/standard | promptflow_repo/promptflow/examples/flows/standard/autonomous-agent/python_repl.py | import sys
from io import StringIO
import functools
import logging
import ast
from typing import Dict, Optional
logger = logging.getLogger(__name__)
@functools.lru_cache(maxsize=None)
def warn_once() -> None:
    # Warn once that the PythonREPL executes arbitrary code
logger.warning("Python REPL can execute arbitrary code. Use with caution.... | 0 |
promptflow_repo/promptflow/examples/flows/standard | promptflow_repo/promptflow/examples/flows/standard/autonomous-agent/triggering_prompt.jinja2 | Determine which next function to use, and respond using stringfield JSON object.
If you have completed all your tasks, make sure to use the 'finish' function to signal completion, and remember to show your results. | 0 |
promptflow_repo/promptflow/examples/flows/standard | promptflow_repo/promptflow/examples/flows/standard/autonomous-agent/autogpt_class.py | from promptflow.tools.aoai import chat as aoai_chat
from promptflow.tools.openai import chat as openai_chat
from promptflow.connections import AzureOpenAIConnection, OpenAIConnection
from util import count_message_tokens, count_string_tokens, create_chat_message, generate_context, get_logger, \
parse_reply, constru... | 0 |
promptflow_repo/promptflow/examples/flows/standard | promptflow_repo/promptflow/examples/flows/standard/autonomous-agent/requirements.txt | promptflow
promptflow-tools
tiktoken
bs4 | 0 |
promptflow_repo/promptflow/examples/flows/standard | promptflow_repo/promptflow/examples/flows/standard/autonomous-agent/util.py | import time
from typing import List
import re
import tiktoken
import logging
import sys
import json
FORMATTER = logging.Formatter(
fmt="[%(asctime)s] %(name)-8s %(levelname)-8s %(message)s",
datefmt="%Y-%m-%d %H:%M:%S %z",
)
def get_logger(name: str, level=logging.INFO) -> logging.Logger:
logger = loggin... | 0 |
promptflow_repo/promptflow/examples/flows/standard | promptflow_repo/promptflow/examples/flows/standard/autonomous-agent/wiki_search.py | from bs4 import BeautifulSoup
import re
import requests
def decode_str(string):
return string.encode().decode("unicode-escape").encode("latin1").decode("utf-8")
def get_page_sentence(page, count: int = 10):
# find all paragraphs
paragraphs = page.split("\n")
paragraphs = [p.strip() for p in paragrap... | 0 |
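`get_page_sentence` is truncated right after the paragraph-stripping step. One plausible completion that splits paragraphs into sentences and returns the first `count` of them (the naive `". "` sentence split is an assumption about the truncated body):

```python
def get_page_sentence(page: str, count: int = 10) -> str:
    # Split the page into non-empty paragraphs, then naively into
    # sentences on ". ", and return the first `count` sentences.
    paragraphs = [p.strip() for p in page.split("\n") if p.strip()]
    sentences = []
    for paragraph in paragraphs:
        sentences.extend(
            s.strip().rstrip(".") + "."
            for s in paragraph.split(". ")
            if s.strip()
        )
    return " ".join(sentences[:count])
```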
promptflow_repo/promptflow/examples/flows/standard | promptflow_repo/promptflow/examples/flows/standard/autonomous-agent/user_prompt.jinja2 | Goals:
{{goals}}
| 0 |
promptflow_repo/promptflow/examples/flows/standard | promptflow_repo/promptflow/examples/flows/standard/autonomous-agent/system_prompt.jinja2 | You are {{name}}, {{role}}
Play to your strengths as an LLM and pursue simple strategies with no legal complications to complete all goals.
Your decisions must always be made independently without seeking user assistance.
Performance Evaluation:
1. Continuously review and analyze your actions to ensure you are perfo... | 0 |
promptflow_repo/promptflow/examples/flows/standard | promptflow_repo/promptflow/examples/flows/standard/autonomous-agent/generate_goal.py | from promptflow import tool
@tool
def generate_goal(items: list = []) -> str:
"""
Generate a numbered list from given items based on the item_type.
Args:
items (list): A list of items to be numbered.
Returns:
str: The formatted numbered list.
"""
return "\n".join(f"{i + 1}. {... | 0 |
promptflow_repo/promptflow/examples/flows/standard | promptflow_repo/promptflow/examples/flows/standard/autonomous-agent/flow.dag.yaml | $schema: https://azuremlschemas.azureedge.net/promptflow/latest/Flow.schema.json
inputs:
name:
type: string
default: "FilmTriviaGPT"
goals:
type: list
default: ["Introduce 'Lord of the Rings' film trilogy including the film title, release year, director, current age of the director, production compa... | 0 |
promptflow_repo/promptflow/examples/flows/standard | promptflow_repo/promptflow/examples/flows/standard/autonomous-agent/autogpt_easy_start.py | from typing import Union
from promptflow import tool
from promptflow.connections import AzureOpenAIConnection, OpenAIConnection
@tool
def autogpt_easy_start(connection: Union[AzureOpenAIConnection, OpenAIConnection], system_prompt: str, user_prompt: str,
triggering_prompt: str, functions: list... | 0 |
promptflow_repo/promptflow/examples/flows/standard | promptflow_repo/promptflow/examples/flows/standard/basic/data.jsonl | {"text": "Python Hello World!"}
{"text": "C Hello World!"}
{"text": "C# Hello World!"}
| 0 |
promptflow_repo/promptflow/examples/flows/standard | promptflow_repo/promptflow/examples/flows/standard/basic/README.md | # Basic standard flow
A basic standard flow using a custom Python tool that calls Azure OpenAI, with connection info stored in environment variables.
Tools used in this flow:
- `prompt` tool
- custom `python` Tool
Connections used in this flow:
- None
## Prerequisites
Install promptflow sdk and other dependencies:
``... | 0 |
promptflow_repo/promptflow/examples/flows/standard | promptflow_repo/promptflow/examples/flows/standard/basic/run.yml | $schema: https://azuremlschemas.azureedge.net/promptflow/latest/Run.schema.json
flow: .
data: data.jsonl
environment_variables:
# environment variables from connection
AZURE_OPENAI_API_KEY: ${open_ai_connection.api_key}
AZURE_OPENAI_API_BASE: ${open_ai_connection.api_base}
AZURE_OPENAI_API_TYPE: azure
column_ma... | 0 |
promptflow_repo/promptflow/examples/flows/standard | promptflow_repo/promptflow/examples/flows/standard/basic/.env.example | AZURE_OPENAI_API_KEY=<your_AOAI_key>
AZURE_OPENAI_API_BASE=<your_AOAI_endpoint>
AZURE_OPENAI_API_TYPE=azure
| 0 |
promptflow_repo/promptflow/examples/flows/standard | promptflow_repo/promptflow/examples/flows/standard/basic/hello.py | import os
from openai.version import VERSION as OPENAI_VERSION
from dotenv import load_dotenv
from promptflow import tool
# The inputs section will change based on the arguments of the tool function, after you save the code
# Adding type to arguments and return value will help the system show the types properly
# Ple... | 0 |
promptflow_repo/promptflow/examples/flows/standard | promptflow_repo/promptflow/examples/flows/standard/basic/requirements.txt | promptflow[azure]
promptflow-tools
python-dotenv | 0 |
promptflow_repo/promptflow/examples/flows/standard | promptflow_repo/promptflow/examples/flows/standard/basic/flow.dag.yaml | $schema: https://azuremlschemas.azureedge.net/promptflow/latest/Flow.schema.json
environment:
python_requirements_txt: requirements.txt
inputs:
text:
type: string
default: Hello World!
outputs:
output:
type: string
reference: ${llm.output}
nodes:
- name: hello_prompt
type: prompt
source:
t... | 0 |
promptflow_repo/promptflow/examples/flows/standard | promptflow_repo/promptflow/examples/flows/standard/basic/hello.jinja2 | {# Please replace the template with your own prompt. #}
Write a simple {{text}} program that displays the greeting message when executed. | 0 |
promptflow_repo/promptflow/examples/flows/standard | promptflow_repo/promptflow/examples/flows/standard/gen-docstring/data.jsonl | {"source": "./divider.py"}
{"source": "./azure_open_ai.py"}
{"source": "./generate_docstring_tool.py"}
| 0 |
promptflow_repo/promptflow/examples/flows/standard | promptflow_repo/promptflow/examples/flows/standard/gen-docstring/load_code_tool.py | from promptflow import tool
from file import File
@tool
def load_code(source: str):
file = File(source)
return file.content
| 0 |
promptflow_repo/promptflow/examples/flows/standard | promptflow_repo/promptflow/examples/flows/standard/gen-docstring/README.md | # Generate Python docstring
This example automatically generates docstrings for Python code and returns the modified code.
Tools used in this flow:
- `load_code` tool, which can load code from a file path.
- Load content from a local file.
- Load content from a remote URL, currently loading HTML content, n... | 0 |
promptflow_repo/promptflow/examples/flows/standard | promptflow_repo/promptflow/examples/flows/standard/gen-docstring/combine_code.jinja2 | {{divided|join('')}} | 0 |
promptflow_repo/promptflow/examples/flows/standard | promptflow_repo/promptflow/examples/flows/standard/gen-docstring/generate_docstring_tool.py | import ast
import asyncio
import logging
import os
import sys
from typing import Union, List
from promptflow import tool
from azure_open_ai import ChatLLM
from divider import Divider
from prompt import docstring_prompt, PromptLimitException
from promptflow.connections import AzureOpenAIConnection, OpenAIConnection
de... | 0 |
promptflow_repo/promptflow/examples/flows/standard | promptflow_repo/promptflow/examples/flows/standard/gen-docstring/requirements.txt | promptflow[azure]
promptflow-tools
python-dotenv
jinja2
tiktoken | 0 |
promptflow_repo/promptflow/examples/flows/standard | promptflow_repo/promptflow/examples/flows/standard/gen-docstring/file.py | import logging
import os
from urllib.parse import urlparse
import requests
class File:
def __init__(self, source: str):
self._source = source
self._is_url = source.startswith("http://") or source.startswith("https://")
if self._is_url:
parsed_url = urlparse(source)
... | 0 |
promptflow_repo/promptflow/examples/flows/standard | promptflow_repo/promptflow/examples/flows/standard/gen-docstring/combine_code_tool.py | from promptflow import tool
from divider import Divider
from typing import List
@tool
def combine_code(divided: List[str]):
code = Divider.combine(divided)
return code
| 0 |
promptflow_repo/promptflow/examples/flows/standard | promptflow_repo/promptflow/examples/flows/standard/gen-docstring/divider.py | import logging
import re
from typing import List
class Settings:
divide_file = {
"py": r"(?<!.)(class|def)",
}
divide_func = {
"py": r"((\n {,6})|^)(class|def)\s+(\S+(?=\())\s*(\([^)]*\))?\s*(->[^:]*:|:) *"
}
class Divider:
language = 'py'
@classmethod
def divide_file(cl... | 0 |
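The `divide_file` pattern `(?<!.)(class|def)` matches `class`/`def` only at the start of a line: the negative lookbehind rejects any preceding character, and since `.` does not match a newline, line-initial keywords pass while indented ones fail. A small demo (not the file's actual `divide_file` implementation) of how `re.split` with this pattern cuts a module at top-level definitions:

```python
import re

pattern = r"(?<!.)(class|def)"
code = "import os\n\ndef top():\n    def inner():\n        pass\n\nclass C:\n    pass\n"

def divide_demo(text: str) -> list:
    # re.split keeps the captured delimiters, so each keyword can be
    # re-joined with the chunk of code that follows it.
    parts = re.split(pattern, text)
    return [parts[0]] + [parts[i] + parts[i + 1] for i in range(1, len(parts), 2)]
```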
promptflow_repo/promptflow/examples/flows/standard | promptflow_repo/promptflow/examples/flows/standard/gen-docstring/flow.dag.yaml | $schema: https://azuremlschemas.azureedge.net/promptflow/latest/Flow.schema.json
inputs:
source:
type: string
default: ./azure_open_ai.py
outputs:
code:
type: string
reference: ${combine_code.output}
nodes:
- name: load_code
type: python
source:
type: code
path: load_code_tool.py
input... | 0 |
promptflow_repo/promptflow/examples/flows/standard | promptflow_repo/promptflow/examples/flows/standard/gen-docstring/prompt.py | import sys
from promptflow.tools.common import render_jinja_template
from divider import Divider
class PromptLimitException(Exception):
def __init__(self, message="", **kwargs):
super().__init__(message, **kwargs)
self._message = str(message)
self._kwargs = kwargs
self._inner_excep... | 0 |
promptflow_repo/promptflow/examples/flows/standard | promptflow_repo/promptflow/examples/flows/standard/gen-docstring/main.py | import argparse
from file import File
from diff import show_diff
from load_code_tool import load_code
from promptflow import PFClient
from pathlib import Path
if __name__ == "__main__":
current_folder = Path(__file__).absolute().parent
parser = argparse.ArgumentParser(description="The code path of code that n... | 0 |
promptflow_repo/promptflow/examples/flows/standard | promptflow_repo/promptflow/examples/flows/standard/gen-docstring/divide_code_tool.py | from promptflow import tool
from divider import Divider
@tool
def divide_code(file_content: str):
# Divide the code into several parts according to the global import/class/function.
divided = Divider.divide_file(file_content)
return divided
| 0 |
promptflow_repo/promptflow/examples/flows/standard | promptflow_repo/promptflow/examples/flows/standard/gen-docstring/diff.py | import difflib
import webbrowser
def show_diff(left_content, right_content, name="file"):
d = difflib.HtmlDiff()
html = d.make_file(
left_content.splitlines(),
right_content.splitlines(),
"origin " + name,
"new " + name,
context=True,
numlines=20)
html = htm... | 0 |
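`show_diff` is truncated mid-body; the visible part builds a side-by-side HTML diff with `difflib.HtmlDiff`, and the `webbrowser` import suggests the rest writes the HTML to disk and opens it. A sketch of just the rendering step (`make_diff_html` is a hypothetical name; the real tool presumably does not return the string):

```python
import difflib

def make_diff_html(left_content: str, right_content: str, name: str = "file") -> str:
    # Render a contextual side-by-side HTML diff of the two contents.
    d = difflib.HtmlDiff()
    return d.make_file(
        left_content.splitlines(),
        right_content.splitlines(),
        "origin " + name,
        "new " + name,
        context=True,
        numlines=20,
    )
```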
promptflow_repo/promptflow/examples/flows/standard | promptflow_repo/promptflow/examples/flows/standard/gen-docstring/doc_format.jinja2 | This is the docstring style of sphinx:
"""Description of the function.
:param [ParamName]: [ParamDescription](, defaults to [DefaultParamVal].)
:type [ParamName]: [ParamType](, optional)
...
:raises [ErrorType]: [ErrorDescription]
...
:return: [ReturnDescription]
:rtype: [ReturnType]
"""
Note:
For custom class types,... | 0 |
promptflow_repo/promptflow/examples/flows/standard | promptflow_repo/promptflow/examples/flows/standard/gen-docstring/azure_open_ai.py | import asyncio
import logging
import time
import uuid
from typing import List
from openai.version import VERSION as OPENAI_VERSION
import os
from abc import ABC, abstractmethod
import tiktoken
from dotenv import load_dotenv
from prompt import PromptLimitException
class AOAI(ABC):
def __init__(self, **kwargs):
... | 0 |
promptflow_repo/promptflow/examples/flows/standard | promptflow_repo/promptflow/examples/flows/standard/conditional-flow-for-switch/data.jsonl | {"query": "When will my order be shipped?"}
{"query": "Can you help me find information about this T-shirt?"}
{"query": "Can you recommend me a useful prompt tool?"} | 0 |
promptflow_repo/promptflow/examples/flows/standard | promptflow_repo/promptflow/examples/flows/standard/conditional-flow-for-switch/product_info.py | from promptflow import tool
@tool
def product_info(query: str) -> str:
print(f"Your query is {query}.\nLooking for product information...")
return "This product is produced by Microsoft."
| 0 |
promptflow_repo/promptflow/examples/flows/standard | promptflow_repo/promptflow/examples/flows/standard/conditional-flow-for-switch/README.md | # Conditional flow for switch scenario
This example is a conditional flow for a switch scenario.
By following this example, you will learn how to create a conditional flow using the `activate config`.
## Flow description
In this flow, we set the background to the search function of a certain mall, use `activate confi... | 0 |
promptflow_repo/promptflow/examples/flows/standard | promptflow_repo/promptflow/examples/flows/standard/conditional-flow-for-switch/generate_response.py | from promptflow import tool
@tool
def generate_response(order_search="", product_info="", product_recommendation="") -> str:
default_response = "Sorry, no results matching your search were found."
responses = [order_search, product_info, product_recommendation]
return next((response for response in respon... | 0 |
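The truncated `generate_response` tool picks the first branch that actually produced output. A completion consistent with the visible body:

```python
def generate_response(order_search="", product_info="", product_recommendation="") -> str:
    # Return the first non-empty branch output, else the default message.
    default_response = "Sorry, no results matching your search were found."
    responses = [order_search, product_info, product_recommendation]
    return next((response for response in responses if response), default_response)
```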
promptflow_repo/promptflow/examples/flows/standard | promptflow_repo/promptflow/examples/flows/standard/conditional-flow-for-switch/classify_with_llm.jinja2 | system:
There is a search bar in the mall APP and users can enter any query in the search bar.
The user may want to search for orders, view product information, or seek recommended products.
Therefore, please classify user intentions into the following three types according to the query: product_recommendation, order... | 0 |
promptflow_repo/promptflow/examples/flows/standard | promptflow_repo/promptflow/examples/flows/standard/conditional-flow-for-switch/requirements.txt | promptflow
promptflow-tools | 0 |
promptflow_repo/promptflow/examples/flows/standard | promptflow_repo/promptflow/examples/flows/standard/conditional-flow-for-switch/product_recommendation.py | from promptflow import tool
@tool
def product_recommendation(query: str) -> str:
print(f"Your query is {query}.\nRecommending products...")
return "I recommend promptflow to you, which can solve your problem very well."
| 0 |
promptflow_repo/promptflow/examples/flows/standard | promptflow_repo/promptflow/examples/flows/standard/conditional-flow-for-switch/flow.dag.yaml | $schema: https://azuremlschemas.azureedge.net/promptflow/latest/Flow.schema.json
inputs:
query:
type: string
default: When will my order be shipped?
outputs:
response:
type: string
reference: ${generate_response.output}
nodes:
- name: classify_with_llm
type: llm
source:
type: code
path: ... | 0 |
promptflow_repo/promptflow/examples/flows/standard | promptflow_repo/promptflow/examples/flows/standard/conditional-flow-for-switch/order_search.py | from promptflow import tool
@tool
def order_search(query: str) -> str:
print(f"Your query is {query}.\nSearching for order...")
return "Your order is being mailed, please wait patiently."
| 0 |
promptflow_repo/promptflow/examples/flows/standard | promptflow_repo/promptflow/examples/flows/standard/conditional-flow-for-switch/class_check.py | from promptflow import tool
@tool
def class_check(llm_result: str) -> str:
intentions_list = ["order_search", "product_info", "product_recommendation"]
matches = [intention for intention in intentions_list if intention in llm_result.lower()]
return matches[0] if matches else "unknown"
| 0 |
promptflow_repo/promptflow/examples/flows/standard | promptflow_repo/promptflow/examples/flows/standard/flow-with-symlinks/create_symlinks.py | import os
from pathlib import Path
saved_path = os.getcwd()
os.chdir(Path(__file__).parent)
source_folder = Path("../web-classification")
for file_name in os.listdir(source_folder):
if not Path(file_name).exists():
os.symlink(
source_folder / file_name,
file_name
)
os.chdi... | 0 |
promptflow_repo/promptflow/examples/flows/standard | promptflow_repo/promptflow/examples/flows/standard/flow-with-symlinks/README.md | # Flow with symlinks
Users sometimes need to reference common files or folders; this sample demos how to solve that using symlinks.
However, symlinks have the following limitations, so it is recommended to use **additional include** instead.
Learn more: [flow-with-additional-includes](../flow-with-additional-includes/README.md)... | 0 |
promptflow_repo/promptflow/examples/flows/standard | promptflow_repo/promptflow/examples/flows/standard/flow-with-symlinks/run.yml | $schema: https://azuremlschemas.azureedge.net/promptflow/latest/Run.schema.json
flow: .
data: data.jsonl
variant: ${summarize_text_content.variant_1}
column_mapping:
url: ${data.url} | 0 |
promptflow_repo/promptflow/examples/flows/standard | promptflow_repo/promptflow/examples/flows/standard/flow-with-symlinks/requirements.txt | promptflow[azure]
promptflow-tools
bs4 | 0 |
promptflow_repo/promptflow/examples/flows/standard | promptflow_repo/promptflow/examples/flows/standard/flow-with-symlinks/flow.dag.yaml | $schema: https://azuremlschemas.azureedge.net/promptflow/latest/Flow.schema.json
inputs:
url:
type: string
default: https://www.microsoft.com/en-us/d/xbox-wireless-controller-stellar-shift-special-edition/94fbjc7h0h6h
outputs:
category:
type: string
reference: ${convert_to_dict.output.category}
ev... | 0 |
promptflow_repo/promptflow/examples/flows/standard/flow-with-symlinks | promptflow_repo/promptflow/examples/flows/standard/flow-with-symlinks/.promptflow/flow.tools.json | {
"package": {},
"code": {
"summarize_text_content.jinja2": {
"type": "llm",
"inputs": {
"text": {
"type": [
"string"
]
}
},
"description": "Summarize webpage content into a short paragraph."
},
"summarize_text_content__variant_1.ji... | 0 |
promptflow_repo/promptflow/examples/flows/standard | promptflow_repo/promptflow/examples/flows/standard/maths-to-code/prompt_gen.jinja2 | system:
I want you to act as a Math expert specializing in Algebra, Geometry, and Calculus. Given the question, develop Python code to model the user's question.
The Python code will print the result at the end.
Please generate executable Python code; your reply will be in JSON format, something like:
{
"code": "pr... | 0 |
promptflow_repo/promptflow/examples/flows/standard | promptflow_repo/promptflow/examples/flows/standard/maths-to-code/README.md | # Math to Code
Math to Code is a project that uses the ChatGPT model to generate code that models math questions, then executes the generated code to obtain the final numerical answer.
> [!NOTE]
>
> Building a system that generates executable code from user input with LLM is [a complex problem with... | 0 |
promptflow_repo/promptflow/examples/flows/standard | promptflow_repo/promptflow/examples/flows/standard/maths-to-code/code_refine.py | from promptflow import tool
import ast
import json
def infinite_loop_check(code_snippet):
tree = ast.parse(code_snippet)
for node in ast.walk(tree):
if isinstance(node, ast.While):
if not node.orelse:
return True
return False
def syntax_error_check(code_snippet):
... | 0 |
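The `infinite_loop_check` helper above treats any `while` loop lacking an `else` clause as potentially infinite; this is a cheap AST heuristic, not a true halting check. It can be exercised standalone:

```python
import ast

def infinite_loop_check(code_snippet: str) -> bool:
    # Heuristic: flag any `while` loop that has no `else` clause.
    tree = ast.parse(code_snippet)
    for node in ast.walk(tree):
        if isinstance(node, ast.While) and not node.orelse:
            return True
    return False

print(infinite_loop_check("while True:\n    pass"))  # True
print(infinite_loop_check("print(1 + 1)"))           # False
```

The heuristic errs on the side of rejecting code: a perfectly terminating `while` loop without an `else` clause is also flagged, which is acceptable here because the flow can simply ask the LLM to regenerate.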
promptflow_repo/promptflow/examples/flows/standard | promptflow_repo/promptflow/examples/flows/standard/maths-to-code/math_test.ipynb | # setup pf client and execution path
from promptflow import PFClient
import json
import os
pf = PFClient()
root = os.path.join(os.getcwd(), "../")
flow = os.path.join(root, "maths-to-code")
data = os.path.join(flow, "math_data.jsonl")
eval_flow = os.path.join(root, "../evaluation/eval-accuracy-maths-to-code")# start... | 0 |
promptflow_repo/promptflow/examples/flows/standard | promptflow_repo/promptflow/examples/flows/standard/maths-to-code/code_execution.py | from promptflow import tool
import sys
from io import StringIO
@tool
def func_exe(code_snippet: str):
if code_snippet == "JSONDecodeError" or code_snippet.startswith("Unknown Error:"):
return code_snippet
# Define the result variable before executing the code snippet
old_stdout = sys.stdout
... | 0 |
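`func_exe` is truncated right after it saves `sys.stdout`. A minimal sketch of the capture-and-restore pattern it appears to use (the name `run_and_capture` and the `try`/`finally` restore are assumptions for illustration, not taken verbatim from the source):

```python
import sys
from io import StringIO

def run_and_capture(code_snippet: str) -> str:
    # Redirect stdout so anything the snippet print()s can be collected.
    old_stdout = sys.stdout
    sys.stdout = captured = StringIO()
    try:
        exec(code_snippet)
    finally:
        sys.stdout = old_stdout  # always restore stdout, even on error
    return captured.getvalue().strip()

print(run_and_capture("print(2 + 3)"))  # 5
```

Executing LLM-generated code with `exec` is inherently risky; the static checks in `code_refine.py` reduce but do not eliminate that risk, which is why the flow's README flags this as a hard problem in production.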
promptflow_repo/promptflow/examples/flows/standard | promptflow_repo/promptflow/examples/flows/standard/maths-to-code/requirements.txt | langchain
sympy
promptflow[azure]
promptflow-tools | 0 |
promptflow_repo/promptflow/examples/flows/standard | promptflow_repo/promptflow/examples/flows/standard/maths-to-code/math_data.jsonl | {"question": "What is the sum of 5 and 3?", "answer": "8"}
{"question": "Subtract 7 from 10.", "answer": "3"}
{"question": "Multiply 6 by 4.", "answer": "24"}
{"question": "Divide 20 by 5.", "answer": "4"}
{"question": "What is the square of 7?", "answer": "49"}
{"question": "What is the square root of 81?", "answer": ... | 0 |
promptflow_repo/promptflow/examples/flows/standard | promptflow_repo/promptflow/examples/flows/standard/maths-to-code/math_example.py | from promptflow import tool
@tool
def prepare_example():
return [
{
"question": "What is 37593 * 67?",
"code": "{\n \"code\": \"print(37593 * 67)\"\n}",
"answer": "2512641",
},
{
"question": "What is the value of x in the equation 2x + 3 = 11?",
"code":... | 0 |
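Each few-shot example above stores the expected model reply as an escaped JSON string with a `"code"` field. Decoding one back into runnable code is a single `json.loads` call:

```python
import json

# The "code" value from the first few-shot example, exactly as escaped above.
raw = '{\n    "code": "print(37593 * 67)"\n}'
payload = json.loads(raw)
print(payload["code"])  # print(37593 * 67)
```

This is presumably why the flow asks the LLM for JSON output: the code string survives quoting and newlines intact, and a `json.JSONDecodeError` gives a clean signal that the reply was malformed.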
promptflow_repo/promptflow/examples/flows/standard | promptflow_repo/promptflow/examples/flows/standard/maths-to-code/ask_llm.jinja2 | system:
I want you to act as a Math expert specializing in Algebra, Geometry, and Calculus. Given the question, develop Python code to model the user's question.
The Python code will print the result at the end.
Please generate executable Python code; your reply will be in JSON format, something like:
{
"code": "pr... | 0 |
promptflow_repo/promptflow/examples/flows/standard | promptflow_repo/promptflow/examples/flows/standard/maths-to-code/flow.dag.yaml | $schema: https://azuremlschemas.azureedge.net/promptflow/latest/Flow.schema.json
environment:
python_requirements_txt: requirements.txt
inputs:
math_question:
type: string
default: If a rectangle has a length of 10 and width of 5, what is the area?
outputs:
code:
type: string
reference: ${code_ref... | 0 |
promptflow_repo/promptflow/examples/flows/standard | promptflow_repo/promptflow/examples/flows/standard/flow-with-additional-includes/data.jsonl | {"url": "https://www.youtube.com/watch?v=o5ZQyXaAv1g", "answer": "Channel", "evidence": "Url"}
{"url": "https://arxiv.org/abs/2307.04767", "answer": "Academic", "evidence": "Text content"}
{"url": "https://play.google.com/store/apps/details?id=com.twitter.android", "answer": "App", "evidence": "Both"}
| 0 |
promptflow_repo/promptflow/examples/flows/standard | promptflow_repo/promptflow/examples/flows/standard/flow-with-additional-includes/README.md | # Flow with additional_includes
Users sometimes need to reference common files or folders; this sample demonstrates how to solve that problem using additional_includes. The files or folders specified in additional includes will be
copied to the snapshot folder by promptflow when this flow is operated on.
## Tools used in this flow
- LLM ... | 0 |
promptflow_repo/promptflow/examples/flows/standard | promptflow_repo/promptflow/examples/flows/standard/flow-with-additional-includes/run.yml | $schema: https://azuremlschemas.azureedge.net/promptflow/latest/Run.schema.json
flow: .
data: data.jsonl
variant: ${summarize_text_content.variant_1}
column_mapping:
url: ${data.url} | 0 |
promptflow_repo/promptflow/examples/flows/standard | promptflow_repo/promptflow/examples/flows/standard/flow-with-additional-includes/run_evaluation.yml | $schema: https://azuremlschemas.azureedge.net/promptflow/latest/Run.schema.json
flow: ../../evaluation/eval-classification-accuracy
data: data.jsonl
run: web_classification_variant_1_20230724_173442_973403 # replace with your run name
column_mapping:
groundtruth: ${data.answer}
prediction: ${run.outputs.category} | 0 |