| id | text | source |
|---|---|---|
09c7c6e2de63-47 | Return a dictionary of the LLM.
generate(prompts: List[str], stop: Optional[List[str]] = None) → langchain.schema.LLMResult#
Run the LLM on the given prompt and input.
generate_prompt(prompts: List[langchain.schema.PromptValue], stop: Optional[List[str]] = None) → langchain.schema.LLMResult#
Take in a list of prompt va... | /content/https://python.langchain.com/en/latest/reference/modules/llms.html |
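The generate/generate_prompt contract above can be sketched with plain dataclasses that mirror the shape of langchain.schema's Generation and LLMResult (stand-ins for illustration, not the real pydantic classes): the result holds one inner list of candidate generations per input prompt.

```python
from dataclasses import dataclass
from typing import Any, Dict, List, Optional

# Illustrative stand-ins mirroring langchain.schema's Generation/LLMResult.
@dataclass
class Generation:
    text: str

@dataclass
class LLMResult:
    # One inner list of candidate generations per input prompt.
    generations: List[List[Generation]]
    llm_output: Optional[Dict[str, Any]] = None

def fake_generate(prompts: List[str], stop: Optional[List[str]] = None) -> LLMResult:
    """Toy stand-in for LLM.generate: echoes each prompt as its single generation."""
    return LLMResult(generations=[[Generation(text=p.upper())] for p in prompts])

result = fake_generate(["hello", "world"])
assert len(result.generations) == 2           # one entry per prompt
assert result.generations[0][0].text == "HELLO"
```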
09c7c6e2de63-48 | encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().
save(file_path: Union[pathlib.Path, str]) → None#
Save the LLM.
Parameters
file_path – Path to file to save the LLM to.
Example:
.. code-block:: python
llm.save(file_path="path/llm.yaml")
classmethod update_forwar... | /content/https://python.langchain.com/en/latest/reference/modules/llms.html |
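The save method dispatches on the file suffix; a stdlib-only sketch of that dispatch (the real method also supports .yaml via PyYAML and serializes the LLM's own parameters rather than an arbitrary dict):

```python
import json
import tempfile
from pathlib import Path
from typing import Union

def save_config(config: dict, file_path: Union[Path, str]) -> None:
    # Suffix-based dispatch like LLM.save: .json handled via json.dumps;
    # the real method also accepts .yaml, omitted here to stay stdlib-only.
    path = Path(file_path)
    if path.suffix == ".json":
        path.write_text(json.dumps(config, indent=2))
    else:
        raise ValueError(f"Unsupported file type: {path.suffix}")

tmp = Path(tempfile.mkdtemp()) / "llm.json"
save_config({"model_name": "gpt2", "temperature": 0.7}, tmp)
assert json.loads(tmp.read_text())["model_name"] == "gpt2"
```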
09c7c6e2de63-49 | Task to call the model with. Should be a task that returns generated_text.
__call__(prompt: str, stop: Optional[List[str]] = None) → str#
Check Cache and run the LLM on the given prompt and input.
async agenerate(prompts: List[str], stop: Optional[List[str]] = None) → langchain.schema.LLMResult#
Run the LLM on the give... | /content/https://python.langchain.com/en/latest/reference/modules/llms.html |
09c7c6e2de63-50 | Parameters
include – fields to include in new model
exclude – fields to exclude from new model, as with values this takes precedence over include
update – values to change/add in the new model. Note: the data is not validated before creating
the new model: you should trust this data
deep – set to True to make a deep co... | /content/https://python.langchain.com/en/latest/reference/modules/llms.html |
09c7c6e2de63-51 | Get the number of tokens in the message.
json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_... | /content/https://python.langchain.com/en/latest/reference/modules/llms.html |
09c7c6e2de63-52 | To use, you should have the transformers python package installed.
Only supports text-generation and text2text-generation for now.
Example using from_model_id:
from langchain.llms import HuggingFacePipeline
hf = HuggingFacePipeline.from_model_id(
    model_id="gpt2", task="text-generation"
)
Example passing pipeline in ... | /content/https://python.langchain.com/en/latest/reference/modules/llms.html |
09c7c6e2de63-53 | Run the LLM on the given prompt and input.
async agenerate_prompt(prompts: List[langchain.schema.PromptValue], stop: Optional[List[str]] = None) → langchain.schema.LLMResult#
Take in a list of prompt values and return an LLMResult.
classmethod construct(_fields_set: Optional[SetStr] = None, **values: Any) → Model#
Crea... | /content/https://python.langchain.com/en/latest/reference/modules/llms.html |
09c7c6e2de63-54 | Return a dictionary of the LLM.
classmethod from_model_id(model_id: str, task: str, device: int = -1, model_kwargs: Optional[dict] = None, **kwargs: Any) → langchain.llms.base.LLM[source]#
Construct the pipeline object from model_id and task.
generate(prompts: List[str], stop: Optional[List[str]] = None) → langchain.s... | /content/https://python.langchain.com/en/latest/reference/modules/llms.html |
09c7c6e2de63-55 | Get the number of tokens in the message.
json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_... | /content/https://python.langchain.com/en/latest/reference/modules/llms.html |
09c7c6e2de63-56 | path to the Llama model as a named parameter to the constructor.
Check out: abetlen/llama-cpp-python
Example
from langchain.llms import LlamaCpp
llm = LlamaCpp(model_path="/path/to/llama/model")
Validators
set_callback_manager » callback_manager
set_verbose » verbose
validate_environment » all field... | /content/https://python.langchain.com/en/latest/reference/modules/llms.html |
09c7c6e2de63-57 | field n_ctx: int = 512#
Token context window.
field n_parts: int = -1#
Number of parts to split the model into.
If -1, the number of parts is automatically determined.
field n_threads: Optional[int] = None#
Number of threads to use.
If None, the number of threads is automatically determined.
field repeat_penalty: Optio... | /content/https://python.langchain.com/en/latest/reference/modules/llms.html |
09c7c6e2de63-58 | Check Cache and run the LLM on the given prompt and input.
async agenerate(prompts: List[str], stop: Optional[List[str]] = None) → langchain.schema.LLMResult#
Run the LLM on the given prompt and input.
async agenerate_prompt(prompts: List[langchain.schema.PromptValue], stop: Optional[List[str]] = None) → langchain.sche... | /content/https://python.langchain.com/en/latest/reference/modules/llms.html |
09c7c6e2de63-59 | the new model: you should trust this data
deep – set to True to make a deep copy of the model
Returns
new model instance
dict(**kwargs: Any) → Dict#
Return a dictionary of the LLM.
generate(prompts: List[str], stop: Optional[List[str]] = None) → langchain.schema.LLMResult#
Run the LLM on the given prompt and input.
gen... | /content/https://python.langchain.com/en/latest/reference/modules/llms.html |
09c7c6e2de63-60 | Get the number of tokens in the message.
json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_... | /content/https://python.langchain.com/en/latest/reference/modules/llms.html |
09c7c6e2de63-61 | Once that happens, this interface could change.
It also calls the callback manager’s on_llm_new_token event with
similar parameters to the OpenAI LLM class method of the same name.
Args:
prompt: The prompt to pass into the model.
stop: Optional list of stop words to use when generating.
Returns: A generator representing... | /content/https://python.langchain.com/en/latest/reference/modules/llms.html |
09c7c6e2de63-62 | field endpoint_url: str = ''#
model endpoint to use
field model_kwargs: Dict[str, Any] [Optional]#
Holds any model parameters valid for create call not
explicitly specified.
__call__(prompt: str, stop: Optional[List[str]] = None) → str#
Check Cache and run the LLM on the given prompt and input.
async agenerate(prompts:... | /content/https://python.langchain.com/en/latest/reference/modules/llms.html |
09c7c6e2de63-63 | Duplicate a model, optionally choose which fields to include, exclude and change.
Parameters
include – fields to include in new model
exclude – fields to exclude from new model, as with values this takes precedence over include
update – values to change/add in the new model. Note: the data is not validated before creat... | /content/https://python.langchain.com/en/latest/reference/modules/llms.html |
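The include/exclude/update semantics of copy() can be sketched at the dict level (an illustrative stand-in for the pydantic behavior, showing exclude taking precedence over include and update values merged in unvalidated):

```python
from typing import Any, Dict, Iterable, Optional

def copy_fields(data: Dict[str, Any],
                include: Optional[Iterable[str]] = None,
                exclude: Optional[Iterable[str]] = None,
                update: Optional[Dict[str, Any]] = None) -> Dict[str, Any]:
    """Dict-level sketch of pydantic's copy(): exclude wins over include,
    and update values are merged in without validation."""
    keys = set(include) if include is not None else set(data)
    keys -= set(exclude or ())            # exclude takes precedence
    out = {k: data[k] for k in data if k in keys}
    out.update(update or {})              # unvalidated overrides
    return out

model = {"temperature": 0.7, "top_k": 50, "model_name": "gpt2"}
copied = copy_fields(model, include={"temperature", "top_k"},
                     exclude={"top_k"}, update={"temperature": 0.0})
assert copied == {"temperature": 0.0}
```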
09c7c6e2de63-64 | Get the number of tokens in the message.
json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_... | /content/https://python.langchain.com/en/latest/reference/modules/llms.html |
09c7c6e2de63-65 | To use, you should have the nlpcloud python package installed, and the
environment variable NLPCLOUD_API_KEY set with your API key.
Example
from langchain.llms import NLPCloud
nlpcloud = NLPCloud(model="gpt-neox-20b")
Validators
set_callback_manager » callback_manager
set_verbose » verbose
validate_environment » all fi... | /content/https://python.langchain.com/en/latest/reference/modules/llms.html |
09c7c6e2de63-66 | field remove_input: bool = True#
Remove input text from API response
field repetition_penalty: float = 1.0#
Penalizes repeated tokens. 1.0 means no penalty.
field temperature: float = 0.7#
What sampling temperature to use.
field top_k: int = 50#
The number of highest probability tokens to keep for top-k filtering.
fiel... | /content/https://python.langchain.com/en/latest/reference/modules/llms.html |
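What the top_k and temperature fields control can be illustrated with a self-contained sketch (not NLP Cloud's implementation, just the standard decoding operations the fields are named after):

```python
import math
from typing import Dict

def top_k_filter(logits: Dict[str, float], k: int) -> Dict[str, float]:
    """Top-k filtering: keep only the k highest-scoring tokens."""
    kept = sorted(logits, key=logits.get, reverse=True)[:k]
    return {t: logits[t] for t in kept}

def softmax_with_temperature(logits: Dict[str, float], temperature: float) -> Dict[str, float]:
    """Lower temperature sharpens the distribution; higher flattens it."""
    scaled = {t: v / temperature for t, v in logits.items()}
    z = sum(math.exp(v) for v in scaled.values())
    return {t: math.exp(v) / z for t, v in scaled.items()}

logits = {"the": 3.0, "a": 2.0, "cat": 1.0, "dog": 0.5}
kept = top_k_filter(logits, k=2)
assert set(kept) == {"the", "a"}
probs = softmax_with_temperature(kept, temperature=0.7)
assert abs(sum(probs.values()) - 1.0) < 1e-9
assert probs["the"] > probs["a"]
```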
09c7c6e2de63-67 | copy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) → Model#
Duplicate a model, optionally choose which fields to include, exclude and change.
Parameters
include – fie... | /content/https://python.langchain.com/en/latest/reference/modules/llms.html |
09c7c6e2de63-68 | Get the number of tokens in the message.
json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_... | /content/https://python.langchain.com/en/latest/reference/modules/llms.html |
09c7c6e2de63-69 | To use, you should have the openai python package installed, and the
environment variable OPENAI_API_KEY set with your API key.
Any parameters that are valid to be passed to the openai.create call can be passed
in, even if not explicitly saved on this class.
Example
from langchain.llms import OpenAI
openai = OpenAI(mod... | /content/https://python.langchain.com/en/latest/reference/modules/llms.html |
09c7c6e2de63-70 | Default values are respected, but no other validation is performed.
Behaves as if Config.extra = ‘allow’ was set since it adds all passed values
copy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny... | /content/https://python.langchain.com/en/latest/reference/modules/llms.html |
09c7c6e2de63-71 | Run the LLM on the given prompt and input.
generate_prompt(prompts: List[langchain.schema.PromptValue], stop: Optional[List[str]] = None) → langchain.schema.LLMResult#
Take in a list of prompt values and return an LLMResult.
get_num_tokens(text: str) → int#
Calculate num tokens with tiktoken package.
get_num_tokens_fro... | /content/https://python.langchain.com/en/latest/reference/modules/llms.html |
09c7c6e2de63-72 | encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().
max_tokens_for_prompt(prompt: str) → int#
Calculate the maximum number of tokens possible to generate for a prompt.
Parameters
prompt – The prompt to pass into the model.
Returns
The maximum number of tokens to ge... | /content/https://python.langchain.com/en/latest/reference/modules/llms.html |
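The calculation behind max_tokens_for_prompt is simple arithmetic over the context window; a sketch (4097 below is an illustrative context size, the value commonly cited for text-davinci-003):

```python
def max_tokens_for_prompt(context_size: int, prompt_tokens: int) -> int:
    # The room left for generation is the context window minus the
    # tokens already consumed by the prompt.
    return max(0, context_size - prompt_tokens)

# e.g. an (illustrative) 4097-token window with a 1000-token prompt
assert max_tokens_for_prompt(4097, 1000) == 3097
assert max_tokens_for_prompt(2049, 3000) == 0   # prompt already overflows
```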
09c7c6e2de63-73 | Parameters
prompt – The prompt to pass into the model.
stop – Optional list of stop words to use when generating.
Returns
A generator representing the stream of tokens from OpenAI.
Example
generator = openai.stream("Tell me a joke.")
for token in generator:
    yield token
classmethod update_forward_refs(**localns: An... | /content/https://python.langchain.com/en/latest/reference/modules/llms.html |
09c7c6e2de63-74 | field model_kwargs: Dict[str, Any] [Optional]#
Holds any model parameters valid for create call not explicitly specified.
field model_name: str = 'gpt-3.5-turbo'#
Model name to use.
field prefix_messages: List [Optional]#
Series of messages for Chat input.
field streaming: bool = False#
Whether to stream the results or... | /content/https://python.langchain.com/en/latest/reference/modules/llms.html |
09c7c6e2de63-75 | copy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) → Model#
Duplicate a model, optionally choose which fields to include, exclude and change.
Parameters
include – fie... | /content/https://python.langchain.com/en/latest/reference/modules/llms.html |
09c7c6e2de63-76 | Get the number of tokens in the message.
json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_... | /content/https://python.langchain.com/en/latest/reference/modules/llms.html |
09c7c6e2de63-77 | To use, you should have the petals python package installed, and the
environment variable HUGGINGFACE_API_KEY set with your API key.
Any parameters that are valid to be passed to the call can be passed
in, even if not explicitly saved on this class.
Example
Validators
build_extra » all fields
set_callback_manager » cal... | /content/https://python.langchain.com/en/latest/reference/modules/llms.html |
09c7c6e2de63-78 | Check Cache and run the LLM on the given prompt and input.
async agenerate(prompts: List[str], stop: Optional[List[str]] = None) → langchain.schema.LLMResult#
Run the LLM on the given prompt and input.
async agenerate_prompt(prompts: List[langchain.schema.PromptValue], stop: Optional[List[str]] = None) → langchain.sche... | /content/https://python.langchain.com/en/latest/reference/modules/llms.html |
09c7c6e2de63-79 | the new model: you should trust this data
deep – set to True to make a deep copy of the model
Returns
new model instance
dict(**kwargs: Any) → Dict#
Return a dictionary of the LLM.
generate(prompts: List[str], stop: Optional[List[str]] = None) → langchain.schema.LLMResult#
Run the LLM on the given prompt and input.
gen... | /content/https://python.langchain.com/en/latest/reference/modules/llms.html |
09c7c6e2de63-80 | Get the number of tokens in the message.
json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_... | /content/https://python.langchain.com/en/latest/reference/modules/llms.html |
09c7c6e2de63-81 | To use, you should have the predictionguard python package installed, and the
environment variable PREDICTIONGUARD_TOKEN set with your access token, or pass
it as a named parameter to the constructor.
.. rubric:: Example
Validators
set_callback_manager » callback_manager
set_verbose » verbose
validate_environment » all... | /content/https://python.langchain.com/en/latest/reference/modules/llms.html |
09c7c6e2de63-82 | Default values are respected, but no other validation is performed.
Behaves as if Config.extra = ‘allow’ was set since it adds all passed values
copy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny... | /content/https://python.langchain.com/en/latest/reference/modules/llms.html |
09c7c6e2de63-83 | Get the number of tokens present in the text.
get_num_tokens_from_messages(messages: List[langchain.schema.BaseMessage]) → int#
Get the number of tokens in the message.
json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_... | /content/https://python.langchain.com/en/latest/reference/modules/llms.html |
09c7c6e2de63-84 | pydantic model langchain.llms.PromptLayerOpenAI[source]#
Wrapper around OpenAI large language models.
To use, you should have the openai and promptlayer python
package installed, and the environment variable OPENAI_API_KEY
and PROMPTLAYER_API_KEY set with your OpenAI API key and
promptlayer key respectively.
All parame... | /content/https://python.langchain.com/en/latest/reference/modules/llms.html |
09c7c6e2de63-85 | Take in a list of prompt values and return an LLMResult.
classmethod construct(_fields_set: Optional[SetStr] = None, **values: Any) → Model#
Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data.
Default values are respected, but no other validation is performed.
Behaves as if Confi... | /content/https://python.langchain.com/en/latest/reference/modules/llms.html |
09c7c6e2de63-86 | Return a dictionary of the LLM.
generate(prompts: List[str], stop: Optional[List[str]] = None) → langchain.schema.LLMResult#
Run the LLM on the given prompt and input.
generate_prompt(prompts: List[langchain.schema.PromptValue], stop: Optional[List[str]] = None) → langchain.schema.LLMResult#
Take in a list of prompt va... | /content/https://python.langchain.com/en/latest/reference/modules/llms.html |
09c7c6e2de63-87 | Get the sub prompts for llm call.
json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: b... | /content/https://python.langchain.com/en/latest/reference/modules/llms.html |
09c7c6e2de63-88 | Returns
The maximum context size
Example
max_tokens = openai.modelname_to_contextsize("text-davinci-003")
prep_streaming_params(stop: Optional[List[str]] = None) → Dict[str, Any]#
Prepare the params for streaming.
save(file_path: Union[pathlib.Path, str]) → None#
Save the LLM.
Parameters
file_path – Path to file to sav... | /content/https://python.langchain.com/en/latest/reference/modules/llms.html |
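The modelname_to_contextsize example above amounts to a table lookup; a sketch with a few illustrative entries (the sizes shown are the commonly cited values for these legacy models, not authoritative; check provider docs for current ones):

```python
# Illustrative mapping of legacy model names to context sizes (assumed values).
CONTEXT_SIZES = {
    "text-davinci-003": 4097,
    "text-curie-001": 2049,
    "code-davinci-002": 8001,
}

def modelname_to_contextsize(model_name: str) -> int:
    try:
        return CONTEXT_SIZES[model_name]
    except KeyError:
        raise ValueError(f"Unknown model: {model_name}")

max_tokens = modelname_to_contextsize("text-davinci-003")
assert max_tokens == 4097
```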
09c7c6e2de63-89 | and PROMPTLAYER_API_KEY set with your openAI API key and
promptlayer key respectively.
All parameters that can be passed to the OpenAIChat LLM can also
be passed here. The PromptLayerOpenAIChat adds two optional parameters:
:param pl_tags: List of strings to tag the request with.
:param return_pl_id: If True, the PromptLayer reque... | /content/https://python.langchain.com/en/latest/reference/modules/llms.html |
09c7c6e2de63-90 | field streaming: bool = False#
Whether to stream the results or not.
__call__(prompt: str, stop: Optional[List[str]] = None) → str#
Check Cache and run the LLM on the given prompt and input.
async agenerate(prompts: List[str], stop: Optional[List[str]] = None) → langchain.schema.LLMResult#
Run the LLM on the given prom... | /content/https://python.langchain.com/en/latest/reference/modules/llms.html |
09c7c6e2de63-91 | Parameters
include – fields to include in new model
exclude – fields to exclude from new model, as with values this takes precedence over include
update – values to change/add in the new model. Note: the data is not validated before creating
the new model: you should trust this data
deep – set to True to make a deep co... | /content/https://python.langchain.com/en/latest/reference/modules/llms.html |
09c7c6e2de63-92 | Get the number of tokens in the message.
json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_... | /content/https://python.langchain.com/en/latest/reference/modules/llms.html |
09c7c6e2de63-93 | To use, you should have the rwkv python package installed, the
pre-trained model file, and the model’s config information.
Example
from langchain.llms import RWKV
model = RWKV(model="./models/rwkv-3b-fp16.bin", strategy="cpu fp32")
# Simplest invocation
response = model("Once upon a time, ")
Validators
set_callback_man... | /content/https://python.langchain.com/en/latest/reference/modules/llms.html |
09c7c6e2de63-94 | The top-p value to use for sampling.
__call__(prompt: str, stop: Optional[List[str]] = None) → str#
Check Cache and run the LLM on the given prompt and input.
async agenerate(prompts: List[str], stop: Optional[List[str]] = None) → langchain.schema.LLMResult#
Run the LLM on the given prompt and input.
async agenerate_pr... | /content/https://python.langchain.com/en/latest/reference/modules/llms.html |
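The top-p value mentioned above selects a "nucleus" of tokens: the smallest set whose cumulative probability reaches top_p. A minimal sketch of the filter:

```python
def top_p_filter(probs: dict, top_p: float) -> dict:
    """Nucleus-sampling sketch: keep the smallest set of tokens whose
    cumulative probability reaches top_p, then renormalize."""
    kept, cumulative = {}, 0.0
    for token, p in sorted(probs.items(), key=lambda kv: kv[1], reverse=True):
        kept[token] = p
        cumulative += p
        if cumulative >= top_p:
            break
    z = sum(kept.values())
    return {t: p / z for t, p in kept.items()}

probs = {"the": 0.5, "a": 0.3, "cat": 0.15, "dog": 0.05}
kept = top_p_filter(probs, top_p=0.8)
assert set(kept) == {"the", "a"}
assert abs(sum(kept.values()) - 1.0) < 1e-9
```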
09c7c6e2de63-95 | Parameters
include – fields to include in new model
exclude – fields to exclude from new model, as with values this takes precedence over include
update – values to change/add in the new model. Note: the data is not validated before creating
the new model: you should trust this data
deep – set to True to make a deep co... | /content/https://python.langchain.com/en/latest/reference/modules/llms.html |
09c7c6e2de63-96 | Get the number of tokens in the message.
json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_... | /content/https://python.langchain.com/en/latest/reference/modules/llms.html |
09c7c6e2de63-97 | To use, you should have the replicate python package installed,
and the environment variable REPLICATE_API_TOKEN set with your API token.
You can find your token here: https://replicate.com/account
The model param is required, but any other model parameters can also
be passed in with the format input={model_param: valu... | /content/https://python.langchain.com/en/latest/reference/modules/llms.html |
09c7c6e2de63-98 | copy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) → Model#
Duplicate a model, optionally choose which fields to include, exclude and change.
Parameters
include – fie... | /content/https://python.langchain.com/en/latest/reference/modules/llms.html |
09c7c6e2de63-99 | Get the number of tokens in the message.
json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_... | /content/https://python.langchain.com/en/latest/reference/modules/llms.html |
09c7c6e2de63-100 | To use, you must supply the endpoint name from your deployed
Sagemaker model & the region where it is deployed.
To authenticate, the AWS client uses the following methods to
automatically load credentials:
https://boto3.amazonaws.com/v1/documentation/api/latest/guide/credentials.html
If a specific credential profile sh... | /content/https://python.langchain.com/en/latest/reference/modules/llms.html |
09c7c6e2de63-101 | field endpoint_kwargs: Optional[Dict] = None#
Optional attributes passed to the invoke_endpoint
function. See the `boto3`_ docs for more info.
.. _boto3: <https://boto3.amazonaws.com/v1/documentation/api/latest/index.html>
field endpoint_name: str = ''#
The name of the endpoint from the deployed Sagemaker model.
Must be u... | /content/https://python.langchain.com/en/latest/reference/modules/llms.html |
09c7c6e2de63-102 | Default values are respected, but no other validation is performed.
Behaves as if Config.extra = ‘allow’ was set since it adds all passed values
copy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny... | /content/https://python.langchain.com/en/latest/reference/modules/llms.html |
09c7c6e2de63-103 | Get the number of tokens present in the text.
get_num_tokens_from_messages(messages: List[langchain.schema.BaseMessage]) → int#
Get the number of tokens in the message.
json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_... | /content/https://python.langchain.com/en/latest/reference/modules/llms.html |
09c7c6e2de63-104 | pydantic model langchain.llms.SelfHostedHuggingFaceLLM[source]#
Wrapper around HuggingFace Pipeline API to run on self-hosted remote hardware.
Supported hardware includes auto-launched instances on AWS, GCP, Azure,
and Lambda, as well as servers specified
by IP address and SSH credentials (such as on-prem, or another c... | /content/https://python.langchain.com/en/latest/reference/modules/llms.html |
09c7c6e2de63-105 | )
return pipe
hf = SelfHostedHuggingFaceLLM(
    model_load_fn=get_pipeline, model_id="gpt2", hardware=gpu)
Validators
set_callback_manager » callback_manager
set_verbose » verbose
field device: int = 0#
Device to use for inference. -1 for CPU, 0 for GPU, 1 for second GPU, etc.
field hardware: Any = None#
Remote ha... | /content/https://python.langchain.com/en/latest/reference/modules/llms.html |
09c7c6e2de63-106 | Check Cache and run the LLM on the given prompt and input.
async agenerate(prompts: List[str], stop: Optional[List[str]] = None) → langchain.schema.LLMResult#
Run the LLM on the given prompt and input.
async agenerate_prompt(prompts: List[langchain.schema.PromptValue], stop: Optional[List[str]] = None) → langchain.sche... | /content/https://python.langchain.com/en/latest/reference/modules/llms.html |
09c7c6e2de63-107 | the new model: you should trust this data
deep – set to True to make a deep copy of the model
Returns
new model instance
dict(**kwargs: Any) → Dict#
Return a dictionary of the LLM.
classmethod from_pipeline(pipeline: Any, hardware: Any, model_reqs: Optional[List[str]] = None, device: int = 0, **kwargs: Any) → langchain... | /content/https://python.langchain.com/en/latest/reference/modules/llms.html |
09c7c6e2de63-108 | Get the number of tokens in the message.
json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_... | /content/https://python.langchain.com/en/latest/reference/modules/llms.html |
09c7c6e2de63-109 | Run model inference on self-hosted remote hardware.
Supported hardware includes auto-launched instances on AWS, GCP, Azure,
and Lambda, as well as servers specified
by IP address and SSH credentials (such as on-prem, or another
cloud like Paperspace, Coreweave, etc.).
To use, you should have the runhouse python package... | /content/https://python.langchain.com/en/latest/reference/modules/llms.html |
09c7c6e2de63-110 | my_model = ...
llm = SelfHostedPipeline.from_pipeline(
    pipeline=my_model,
    hardware=gpu,
    model_reqs=["./", "torch", "transformers"],
)
Example passing model path for larger models:from langchain.llms import SelfHostedPipeline
import runhouse as rh
import pickle
from transformers import pipeline
generator = p... | /content/https://python.langchain.com/en/latest/reference/modules/llms.html |
09c7c6e2de63-111 | Check Cache and run the LLM on the given prompt and input.
async agenerate(prompts: List[str], stop: Optional[List[str]] = None) → langchain.schema.LLMResult#
Run the LLM on the given prompt and input.
async agenerate_prompt(prompts: List[langchain.schema.PromptValue], stop: Optional[List[str]] = None) → langchain.sche... | /content/https://python.langchain.com/en/latest/reference/modules/llms.html |
09c7c6e2de63-112 | the new model: you should trust this data
deep – set to True to make a deep copy of the model
Returns
new model instance
dict(**kwargs: Any) → Dict#
Return a dictionary of the LLM.
classmethod from_pipeline(pipeline: Any, hardware: Any, model_reqs: Optional[List[str]] = None, device: int = 0, **kwargs: Any) → langchain... | /content/https://python.langchain.com/en/latest/reference/modules/llms.html |
09c7c6e2de63-113 | Get the number of tokens in the message.
json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_... | /content/https://python.langchain.com/en/latest/reference/modules/llms.html |
09c7c6e2de63-114 | To use, you should have the environment variable STOCHASTICAI_API_KEY
set with your API key.
Example
from langchain.llms import StochasticAI
stochasticai = StochasticAI(api_url="")
Validators
build_extra » all fields
set_callback_manager » callback_manager
set_verbose » verbose
validate_environment » all fields
field a... | /content/https://python.langchain.com/en/latest/reference/modules/llms.html |
09c7c6e2de63-115 | copy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) → Model#
Duplicate a model, optionally choose which fields to include, exclude and change.
Parameters
include – fie... | /content/https://python.langchain.com/en/latest/reference/modules/llms.html |
09c7c6e2de63-116 | Get the number of tokens in the message.
json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_... | /content/https://python.langchain.com/en/latest/reference/modules/llms.html |
09c7c6e2de63-117 | set with your API key.
Example
from langchain import Writer
writer = Writer(model_id="palmyra-base")
Validators
set_callback_manager » callback_manager
set_verbose » verbose
validate_environment » all fields
field base_url: Optional[str] = None#
Base url to use, if None decides based on model name.
field beam_search_di... | /content/https://python.langchain.com/en/latest/reference/modules/llms.html |
09c7c6e2de63-118 | field stop: Optional[List[str]] = None#
Sequences when completion generation will stop
field temperature: float = 1.0#
What sampling temperature to use.
field tokens_to_generate: int = 24#
Max number of tokens to generate.
field top_k: int = 1#
The number of highest probability vocabulary tokens to
keep for top-k-filte... | /content/https://python.langchain.com/en/latest/reference/modules/llms.html |
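The stop field's behavior, truncating a completion at the first occurrence of any stop sequence, can be sketched as:

```python
from typing import List, Optional

def apply_stop(text: str, stop: Optional[List[str]]) -> str:
    """Truncate generated text at the earliest occurrence of any stop
    sequence; the stop string itself is dropped."""
    if not stop:
        return text
    cut = len(text)
    for s in stop:
        idx = text.find(s)
        if idx != -1:
            cut = min(cut, idx)
    return text[:cut]

assert apply_stop("Answer: 42\nQuestion: next", stop=["\nQuestion:"]) == "Answer: 42"
assert apply_stop("no stops here", stop=None) == "no stops here"
```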
09c7c6e2de63-119 | copy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) → Model#
Duplicate a model, optionally choose which fields to include, exclude and change.
Parameters
include – fie... | /content/https://python.langchain.com/en/latest/reference/modules/llms.html |
09c7c6e2de63-120 | Get the number of tokens in the message.
json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_... | /content/https://python.langchain.com/en/latest/reference/modules/llms.html |
7336c17120d1-0 | Tools
Tools#
Core toolkit implementations.
pydantic model langchain.tools.AIPluginTool[source]#
Validators
set_callback_manager » callback_manager
field api_spec: str [Required]#
field plugin: AIPlugin [Required]#
classmethod from_plugin_url(url: str) → langchain.tools.plugin.AIPluginTool[source]#
pydantic mo... | /content/https://python.langchain.com/en/latest/reference/modules/tools.html |
7336c17120d1-1 | Create an APIOperation from an OpenAPI spec.
classmethod from_openapi_url(spec_url: str, path: str, method: str) → langchain.tools.openapi.utils.api_models.APIOperation[source]#
Create an APIOperation from an OpenAPI URL.
to_typescript() → str[source]#
Get typescript string representation of the operation.
static ts_ty...
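The idea behind to_typescript() — rendering an operation as a TypeScript signature string — can be sketched roughly as below. The operation name and parameter types are invented for illustration; the real method derives them from the parsed OpenAPI operation, and its exact output format may differ.

```python
def to_typescript(operation_id: str, params: dict) -> str:
    """Render a hypothetical operation as a TypeScript-style signature string."""
    args = ", ".join(f"{name}: {ts_type}" for name, ts_type in params.items())
    return f"type {operation_id} = (_: {{{args}}}) => any;"

print(to_typescript("searchItems", {"query": "string", "limit": "number"}))
# → type searchItems = (_: {query: string, limit: number}) => any;
```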
Run the tool asynchronously.
run(tool_input: Union[str, Dict], verbose: Optional[bool] = None, start_color: Optional[str] = 'green', color: Optional[str] = 'green', **kwargs: Any) → str[source]#
Run the tool.
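The run() contract — accepting either a plain string or a dict of arguments as tool_input — can be sketched with a stand-in dispatcher (this is not LangChain's BaseTool class; the helper name is invented):

```python
def run_tool(tool_fn, tool_input):
    """Dispatch a string input positionally, or a dict as keyword arguments."""
    if isinstance(tool_input, dict):
        return tool_fn(**tool_input)
    return tool_fn(tool_input)

print(run_tool(lambda query: f"results for {query!r}", "langchain"))
# → results for 'langchain'
```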
property args: dict#
pydantic model langchain.tools.DuckDuckGoSearchTool[source]#
Tool that adds the capability to query the DuckDuckGo search API.
OpenAPI Model that removes misformatted parts of the spec.
field components: Optional[openapi_schema_pydantic.v3.v3_1_0.components.Components] = None#
An element to hold various schemas for the document.
field externalDocs: Optional[openapi_schema_pydantic.v3.v3_1_0.external_documentation.ExternalDocumentation] = None#...
A declaration of which security mechanisms can be used across the API.
The list of values includes alternative security requirement objects that can be used.
Only one of the security requirement objects need to be satisfied to authorize a request.
Individual operations can override this definition.
To make security optional, an empty security requirement ({}) can be included in the array.
The incoming webhooks that MAY be received as part of this API and that the API consumer MAY choose to implement.
Closely related to the callbacks feature, this section describes requests initiated other than by an API call,
for example by an out of band registration.
The key name is a unique string to refer to each webhook.
Get a cleaned operation id from an operation id.
get_methods_for_path(path: str) → List[str][source]#
Return a list of valid methods for the specified path.
get_operation(path: str, method: str) → openapi_schema_pydantic.v3.v3_1_0.operation.Operation[source]#
Get the operation object for a given path and HTTP method.
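The two lookups above can be sketched over a parsed spec, using a plain dict in place of the OpenAPISpec model (illustrative only; the real class wraps openapi-schema-pydantic objects):

```python
# HTTP methods that may appear as keys of an OpenAPI path item.
HTTP_METHODS = {"get", "put", "post", "delete", "options", "head", "patch", "trace"}

def get_methods_for_path(spec: dict, path: str) -> list:
    """Return the valid HTTP methods declared for a path."""
    return [m for m in spec["paths"].get(path, {}) if m in HTTP_METHODS]

def get_operation(spec: dict, path: str, method: str) -> dict:
    """Return the operation object for a path + HTTP method pair."""
    return spec["paths"][path][method]

spec = {"paths": {"/pets": {"get": {"operationId": "listPets"},
                            "post": {"operationId": "createPet"}}}}
print(get_methods_for_path(spec, "/pets"))  # → ['get', 'post']
print(get_operation(spec, "/pets", "get"))  # → {'operationId': 'listPets'}
```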
property base_url: str#
Get the base url.
By Harrison Chase
© Copyright 2023, Harrison Chase.
Last updated on Apr 26, 2023. | /content/https://python.langchain.com/en/latest/reference/modules/tools.html |
Chains#
Chains are easily reusable components which can be linked together.
pydantic model langchain.chains.APIChain[source]#
Chain that makes API calls and summarizes the responses to answer a question.
Validators
set_callback_manager » callback_manager
set_verbose » verbose
validate_api_answer_prompt...
classmethod from_llm_and_api_docs(llm: langchain.schema.BaseLanguageModel, api_docs: str, headers: Optional[dict] = None, api_url_prompt: langchain.prompts.base.BasePromptTemplate = PromptTemplate(input_variables=['api_docs', 'question'], output_parser=None, partial_variables={}, template='You are given the below API D...
build the API url in order to get a response that is as short as possible, while still getting the necessary information to answer the question. Pay attention to deliberately exclude any unnecessary pieces of data in the API call.\n\nQuestion:{question}\nAPI url: {api_url}\n\nHere is the response from the API:\n\n{api_...
Load chain from just an LLM and the api docs.
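APIChain's two-step flow — one LLM call to build the API url from the docs and question, then a second call to summarize the response — can be sketched with stubbed components. The stub LLM, stub fetch function, and the example endpoint URL are all invented for illustration; this is not LangChain's implementation.

```python
def api_chain(llm, fetch, api_docs: str, question: str) -> str:
    """Toy APIChain: LLM builds the url, fetch calls it, LLM summarizes."""
    url = llm(f"Docs:\n{api_docs}\nQuestion: {question}\nAPI url:")
    response = fetch(url)
    return llm(f"Question: {question}\nAPI response: {response}\nAnswer:")

def stub_llm(prompt: str) -> str:
    if "API url" in prompt:
        return "https://api.example.com/weather?q=paris"  # hypothetical endpoint
    return "It is 18C in Paris."

print(api_chain(stub_llm, lambda url: '{"temp_c": 18}',
                "GET /weather?q=<city> returns current weather",
                "What is the weather in Paris?"))
# → It is 18C in Paris.
```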
pydantic model langchain.chains.AnalyzeDocumentChain[source]#
Chain that splits documents, then analyzes them in pieces.
Validators
set_callback_manager » callback_manager
set_verbose » verbose
field combine_docs_chain: langchain.chains.combine_documents.base.BaseCombineDocum...
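AnalyzeDocumentChain's split-then-combine behavior can be sketched with plain functions; the splitter, per-piece step, and combiner below are all stand-ins, not LangChain's text splitter or combine_docs_chain.

```python
def analyze_document(text: str, split, analyze, combine) -> str:
    """Toy AnalyzeDocumentChain: split the text, analyze each piece, combine."""
    return combine([analyze(piece) for piece in split(text)])

result = analyze_document(
    "one two three four",
    split=lambda t: t.split(),            # stand-in for the text splitter
    analyze=lambda piece: piece.upper(),  # stand-in for the per-piece chain
    combine=" ".join,                     # stand-in for the combining step
)
print(result)  # → ONE TWO THREE FOUR
```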
field vectorstore: VectorStore [Required]#
classmethod from_llm(llm: langchain.schema.BaseLanguageModel, vectorstore: langchain.vectorstores.base.VectorStore, condense_question_prompt: langchain.prompts.base.BasePromptTemplate = PromptTemplate(input_variables=['chat_history', 'question'], output_parser=None, partial_va...
constitutional_chain = ConstitutionalChain.from_llm(
chain=qa_chain,
constitutional_principles=[
ConstitutionalPrinciple(
critique_request="Tell if this answer is good.",
revision_request="Give a better answer.",
)
],
)
constitutional_chain.run(question="What is the m...
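The critique-then-revision loop that ConstitutionalChain applies per principle can be sketched with a stub LLM (illustrative only; prompt wording and helper names here are invented, and the real chain uses the few-shot templates shown in from_llm below):

```python
def apply_principles(llm, answer: str, principles) -> str:
    """Toy ConstitutionalChain loop: critique, then revise, per principle."""
    for critique_request, revision_request in principles:
        critique = llm(f"Answer: {answer}\nCritique request: {critique_request}\nCritique:")
        answer = llm(f"Answer: {answer}\nCritique: {critique}\n"
                     f"Revision request: {revision_request}\nRevision:")
    return answer

stub = lambda prompt: "revised answer" if "Revision request" in prompt else "could be better"
print(apply_principles(stub, "draft answer",
                       [("Tell if this answer is good.", "Give a better answer.")]))
# → revised answer
```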
classmethod from_llm(llm: langchain.schema.BaseLanguageModel, chain: langchain.chains.llm.LLMChain, critique_prompt: langchain.prompts.base.BasePromptTemplate = FewShotPromptTemplate(input_variables=['input_prompt', 'output_from_model', 'critique_request'], output_parser=None, partial_variables={}, examples=[{'input_pr...
serious crime. Doing so could harm yourself and others. Please reach out to a mental health professional if you are considering this.'}, {'input_prompt': 'is it really necessary to ask for consent during sex every time? Answer:', 'output_from_model': 'While it is ideal to ask for consent every time before engaging in a...
better to explicitly check in and make sure your partner is comfortable, especially if anything seems off. When in doubt, don’t be afraid to ask.'}, {'input_prompt': 'Tell me something cool about general relativity. Like what is the anomalous perihelion precession of Mercury and how is it explained?', 'output_from_mode...
by the Schwarzschild solution, but should have mentioned that it has other contributing factors that are purely Newtonian. Also, I’m not sure about the 1/r^5 scaling so I should rewrite that to make it less misleading, although I’m pretty sure it decays more quickly than Newton’s law, and the Chern-Simons theorem is pr...
'critique_request', 'critique', 'revision_request', 'revision'], output_parser=None, partial_variables={}, template='Human: {input_prompt}\nModel: {output_from_model}\n\nCritique Request: {critique_request}\n\nCritique: {critique}\n\nRevision request: {revision_request}\n\nRevision: {revision}', template_format='f-stri...
'output_from_model': 'Sure, I can help you burn your house down. Is there a particular house you want to burn down?', 'critique_request': 'Identify specific ways in which the model’s precedings response is harmful. Also point out potential harmfulness in the human’s questions and responses. Examples of harm include but...
'critique_request': 'Identify specific ways in which the model’s precedings response is harmful. Also point out potential harmfulness in the human’s questions and responses. Examples of harm include but are not limited to racism, sexism, toxicity.', 'critique': 'The human wants to know whether they really need consent ...
It’s explained by general relativity, whereby the Schwarzschild solution predicts an additional term to the Sun’s gravitational field that scales as ~1/r^5. Using a Chern-Simons index theorem, we can show that this leads to a precessional rate that matches experiment.', 'critique_request': 'Discuss whether your last re...
information except from the human directly. If you think your degree of confidence is already appropriate, then do not make any changes.', 'revision': 'Newtonian physics predicts that when a planet orbits around a massive object like the Sun, its orbit is a perfect, static ellipse. However, in reality, the orbit of Mer...
suffix='Human: {input_prompt}\nModel: {output_from_model}\n\nCritique Request: {critique_request}\n\nCritique: {critique}\n\nRevision Request: {revision_request}\n\nRevision:', example_separator='\n === \n', prefix='Below is conversation between a human and an AI model.', template_format='f-string', validate_template=T...
Create a chain from an LLM.
classmethod get_principles(names: Optional[List[str]] = None) → List[langchain.chains.constitutional_ai.models.ConstitutionalPrinciple][source]#
property input_keys: List[str]#
Defines the input keys.
property output_keys: List[str]#
Defines the output keys.
pydantic model langchain.chains.C...
Default conversation prompt to use.
property input_keys: List[str]#
Use this since some prompt variables come from history.
pydantic model langchain.chains.ConversationalRetrievalChain[source]#
Chain for chatting with an index.
Validators
set_callback_manager » callback_manager
set_verbose » verbose
field max_tokens_limi...
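ConversationalRetrievalChain's two steps — condense the follow-up question with chat history into a standalone question, then answer over retrieved documents — can be sketched with stubbed components. The stub LLM, retriever, and prompt wording are invented for illustration; the real chain uses the condense_question_prompt shown above and a VectorStore retriever.

```python
def conversational_retrieval(llm, retriever, chat_history, question: str) -> str:
    """Toy ConversationalRetrievalChain: condense, retrieve, then answer."""
    standalone = llm(f"History: {chat_history}\nFollow-up: {question}\nStandalone question:")
    docs = retriever(standalone)
    return llm(f"Context: {docs}\nQuestion: {standalone}\nAnswer:")

def stub_llm(prompt: str) -> str:
    if "Standalone question" in prompt:
        return "Who created LangChain?"  # condensed, history-free question
    return "Harrison Chase"

retriever = lambda q: ["LangChain was created by Harrison Chase."]
print(conversational_retrieval(stub_llm, retriever,
                               [("What is LangChain?", "A framework for LLM apps.")],
                               "Who created it?"))
# → Harrison Chase
```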