| id | text | source |
|---|---|---|
10ba8875dd49-3 | Returns
The embedding for the input query text.
Return type
List[float]
classmethod from_credentials(model_id: str, *, es_cloud_id: Optional[str] = None, es_user: Optional[str] = None, es_password: Optional[str] = None, input_field: str = 'text_field') → langchain.embeddings.elasticsearch.ElasticsearchEmbeddings[source... | https://python.langchain.com/en/latest/reference/modules/embeddings.html |
10ba8875dd49-4 | Embed search docs.
embed_query(text: str) → List[float][source]#
Embed query text.
pydantic model langchain.embeddings.HuggingFaceEmbeddings[source]#
Wrapper around sentence_transformers embedding models.
To use, you should have the sentence_transformers python package installed.
Example
from langchain.embeddings impor... | https://python.langchain.com/en/latest/reference/modules/embeddings.html |
10ba8875dd49-5 | environment variable HUGGINGFACEHUB_API_TOKEN set with your API token, or pass
it as a named parameter to the constructor.
Example
from langchain.embeddings import HuggingFaceHubEmbeddings
repo_id = "sentence-transformers/all-mpnet-base-v2"
hf = HuggingFaceHubEmbeddings(
repo_id=repo_id,
task="feature-extractio... | https://python.langchain.com/en/latest/reference/modules/embeddings.html |
10ba8875dd49-6 | )
field cache_folder: Optional[str] = None#
Path to store models.
Can be also set by SENTENCE_TRANSFORMERS_HOME environment variable.
field embed_instruction: str = 'Represent the document for retrieval: '#
Instruction to use for embedding documents.
field model_kwargs: Dict[str, Any] [Optional]#
Keyword arguments to ... | https://python.langchain.com/en/latest/reference/modules/embeddings.html |
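Instruction-style embedding models prepend a fixed instruction such as the `embed_instruction` above to every document before encoding it. A minimal sketch of that prefixing step, with the encoder stubbed out (the names here are illustrative, not the langchain API):

```python
# Illustrative sketch: combine an instruction prefix with each document
# before handing it to the underlying embedding model.
# `fake_encode` stands in for a real sentence_transformers encoder.

EMBED_INSTRUCTION = "Represent the document for retrieval: "

def fake_encode(text: str) -> list:
    # Stand-in for model.encode(); returns a trivial one-dim "embedding".
    return [float(len(text))]

def embed_documents(texts: list) -> list:
    # Prepend the instruction to every document, then encode each one.
    return [fake_encode(EMBED_INSTRUCTION + t) for t in texts]

vectors = embed_documents(["foo", "barbaz"])
```

The query side works the same way but uses the separate query instruction, so documents and queries land in compatible regions of the embedding space.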
10ba8875dd49-7 | Number of tokens to process in parallel.
Should be a number between 1 and n_ctx.
field n_ctx: int = 512#
Token context window.
field n_gpu_layers: Optional[int] = None#
Number of layers to be loaded into gpu memory. Default None.
field n_parts: int = -1#
Number of parts to split the model into.
If -1, the number of par... | https://python.langchain.com/en/latest/reference/modules/embeddings.html |
10ba8875dd49-8 | query_result = embeddings.embed_query(query_text)
document_text = "This is a test document."
document_result = embeddings.embed_documents([document_text])
field embed_type_db: str = 'db'#
For embed_documents
field embed_type_query: str = 'query'#
For embed_query
field endpoint_url: str = 'https://api.minimax.chat/v1/em... | https://python.langchain.com/en/latest/reference/modules/embeddings.html |
10ba8875dd49-9 | Parameters
texts – The list of texts to embed.
Returns
List of embeddings, one for each text.
embed_query(text: str) → List[float][source]#
Compute query embeddings using a modelscope embedding model.
Parameters
text – The text to embed.
Returns
Embeddings for the text.
pydantic model langchain.embeddings.MosaicMLInstr... | https://python.langchain.com/en/latest/reference/modules/embeddings.html |
10ba8875dd49-10 | Embed a query using a MosaicML deployed instructor embedding model.
Parameters
text – The text to embed.
Returns
Embeddings for the text.
pydantic model langchain.embeddings.OpenAIEmbeddings[source]#
Wrapper around OpenAI embedding models.
To use, you should have the openai python package installed, and the
environment... | https://python.langchain.com/en/latest/reference/modules/embeddings.html |
10ba8875dd49-11 | query_result = embeddings.embed_query(text)
field chunk_size: int = 1000#
Maximum number of texts to embed in each batch
field max_retries: int = 6#
Maximum number of retries to make when generating.
field request_timeout: Optional[Union[float, Tuple[float, float]]] = None#
Timeout in seconds for the OpenAI request.
e... | https://python.langchain.com/en/latest/reference/modules/embeddings.html |
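The `chunk_size` field caps how many texts are sent to the API in a single request. The batching it implies can be sketched in pure Python (no API call; the function name is illustrative):

```python
# Illustrative sketch: split a list of texts into batches of at most
# `chunk_size` items, the way an embeddings client limits request size.

def batched(texts: list, chunk_size: int) -> list:
    return [texts[i:i + chunk_size] for i in range(0, len(texts), chunk_size)]

batches = batched(["a", "b", "c", "d", "e"], chunk_size=2)
# Each batch would then be sent as one embeddings request.
```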
10ba8875dd49-12 | field content_handler: langchain.embeddings.sagemaker_endpoint.EmbeddingsContentHandler [Required]#
The content handler class that provides input and
output transform functions to handle formats between the LLM
and the endpoint.
field credentials_profile_name: Optional[str] = None#
The name of the profile in the ~/.aws/... | https://python.langchain.com/en/latest/reference/modules/embeddings.html |
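A content handler translates between the library's text inputs and the endpoint's wire format. A minimal illustrative handler (JSON in, JSON out; the class and method names follow the pattern described above but are a sketch, not the exact langchain interface):

```python
import json

# Illustrative sketch of a content handler: serialize texts for the
# endpoint, then parse the endpoint's response back into vectors.

class SimpleContentHandler:
    content_type = "application/json"
    accepts = "application/json"

    def transform_input(self, texts: list) -> bytes:
        return json.dumps({"inputs": texts}).encode("utf-8")

    def transform_output(self, body: bytes) -> list:
        return json.loads(body.decode("utf-8"))["embeddings"]

handler = SimpleContentHandler()
payload = handler.transform_input(["hello"])
vectors = handler.transform_output(b'{"embeddings": [[0.1, 0.2]]}')
```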
10ba8875dd49-13 | Compute query embeddings using a SageMaker inference endpoint.
Parameters
text – The text to embed.
Returns
Embeddings for the text.
pydantic model langchain.embeddings.SelfHostedEmbeddings[source]#
Runs custom embedding models on self-hosted remote hardware.
Supported hardware includes auto-launched instances on AWS, ... | https://python.langchain.com/en/latest/reference/modules/embeddings.html |
10ba8875dd49-14 | embeddings = SelfHostedHFEmbeddings.from_pipeline(
pipeline="models/pipeline.pkl",
hardware=gpu,
model_reqs=["./", "torch", "transformers"],
)
Validators
raise_deprecation » all fields
set_verbose » verbose
field inference_fn: Callable = <function _embed_documents>#
Inference function to extract the embeddi... | https://python.langchain.com/en/latest/reference/modules/embeddings.html |
10ba8875dd49-15 | Validators
raise_deprecation » all fields
set_verbose » verbose
field hardware: Any = None#
Remote hardware to send the inference function to.
field inference_fn: Callable = <function _embed_documents>#
Inference function to extract the embeddings.
field load_fn_kwargs: Optional[dict] = None#
Keyword arguments to pass... | https://python.langchain.com/en/latest/reference/modules/embeddings.html |
10ba8875dd49-16 | field model_id: str = 'hkunlp/instructor-large'#
Model name to use.
field model_reqs: List[str] = ['./', 'InstructorEmbedding', 'torch']#
Requirements to install on hardware to inference the model.
field query_instruction: str = 'Represent the question for retrieving supporting documents: '#
Instruction to use for embe... | https://python.langchain.com/en/latest/reference/modules/embeddings.html |
10ba8875dd49-17 | Compute query embeddings using a TensorflowHub embedding model.
Parameters
text – The text to embed.
Returns
Embeddings for the text.
By Harrison Chase
© Copyright 2023, Harrison Chase.
Last updated on May 28, 2023. | https://python.langchain.com/en/latest/reference/modules/embeddings.html |
f27a06397b2e-0 | Quickstart Guide
Contents
Installation
Environment Setup
Building a Language Model Application: LLMs
LLMs: Get predictions from a language model
Prompt Templates: Manage prompts for LLMs
Chains: Combine LLMs and prompts in multi-step workflows
Agents: Dynamically Call Chains Based on User Input
Memory: Add S... | https://python.langchain.com/en/latest/getting_started/getting_started.html |
f27a06397b2e-1 | LangChain provides many modules that can be used to build language model applications. Modules can be combined to create more complex applications, or be used individually for simple applications.
LLMs: Get predictions from a language model#
The most basic building block of LangChain is calling an LLM on some input.
Le... | https://python.langchain.com/en/latest/getting_started/getting_started.html |
f27a06397b2e-2 | This is easy to do with LangChain!
First let's define the prompt template:
from langchain.prompts import PromptTemplate
prompt = PromptTemplate(
input_variables=["product"],
template="What is a good name for a company that makes {product}?",
)
Let’s now see how this works! We can call the .format method to forma... | https://python.langchain.com/en/latest/getting_started/getting_started.html |
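Under the hood, `.format` substitutes the input variables into the template string, just like Python's own `str.format`:

```python
# The same substitution PromptTemplate.format performs, shown with a
# plain Python format string.
template = "What is a good name for a company that makes {product}?"
prompt = template.format(product="colorful socks")
# -> "What is a good name for a company that makes colorful socks?"
```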
f27a06397b2e-3 | Now we can run that chain only specifying the product!
chain.run("colorful socks")
# -> '\n\nSocktastic!'
There we go! There’s the first chain - an LLM Chain.
This is one of the simpler types of chains, but understanding how it works will set you up well for working with more complex chains.
For more details, check out... | https://python.langchain.com/en/latest/getting_started/getting_started.html |
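Conceptually, an LLM chain just formats the prompt and hands the result to the model. A sketch with a stubbed model (the stub is hypothetical; a real chain would call OpenAI or another provider):

```python
# Illustrative sketch of what an LLM chain does: format the prompt,
# then pass the formatted string to the language model.

def fake_llm(prompt: str) -> str:
    # Stand-in for a real model call.
    return f"(model reply to: {prompt})"

def run_chain(template: str, **variables) -> str:
    prompt = template.format(**variables)
    return fake_llm(prompt)

answer = run_chain(
    "What is a good name for a company that makes {product}?",
    product="colorful socks",
)
```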
f27a06397b2e-4 | pip install google-search-results
And set the appropriate environment variables.
import os
os.environ["SERPAPI_API_KEY"] = "..."
Now we can get started!
from langchain.agents import load_tools
from langchain.agents import initialize_agent
from langchain.agents import AgentType
from langchain.llms import OpenAI
# First,... | https://python.langchain.com/en/latest/getting_started/getting_started.html |
f27a06397b2e-5 | Thought: I now know the final answer
Final Answer: The high temperature in SF yesterday in Fahrenheit raised to the .023 power is 1.0974509573251117.
> Finished chain.
Memory: Add State to Chains and Agents#
So far, all the chains and agents we’ve gone through have been stateless. But often, you may want a chain or age... | https://python.langchain.com/en/latest/getting_started/getting_started.html |
f27a06397b2e-6 | Current conversation:
Human: Hi there!
AI:
> Finished chain.
' Hello! How are you today?'
output = conversation.predict(input="I'm doing well! Just having a conversation with an AI.")
print(output)
> Entering new chain...
Prompt after formatting:
The following is a friendly conversation between a human and an AI. The A... | https://python.langchain.com/en/latest/getting_started/getting_started.html |
f27a06397b2e-7 | AIMessage,
HumanMessage,
SystemMessage
)
chat = ChatOpenAI(temperature=0)
You can get completions by passing in a single message.
chat([HumanMessage(content="Translate this sentence from English to French. I love programming.")])
# -> AIMessage(content="J'aime programmer.", additional_kwargs={})
You can also pa... | https://python.langchain.com/en/latest/getting_started/getting_started.html |
f27a06397b2e-8 | You can recover things like token usage from this LLMResult:
result.llm_output['token_usage']
# -> {'prompt_tokens': 57, 'completion_tokens': 20, 'total_tokens': 77}
Chat Prompt Templates#
Similar to LLMs, you can make use of templating by using a MessagePromptTemplate. You can build a ChatPromptTemplate from one or mo... | https://python.langchain.com/en/latest/getting_started/getting_started.html |
f27a06397b2e-9 | from langchain import LLMChain
from langchain.prompts.chat import (
ChatPromptTemplate,
SystemMessagePromptTemplate,
HumanMessagePromptTemplate,
)
chat = ChatOpenAI(temperature=0)
template = "You are a helpful assistant that translates {input_language} to {output_language}."
system_message_prompt = SystemMe... | https://python.langchain.com/en/latest/getting_started/getting_started.html |
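A chat prompt template is essentially a list of (role, template) pairs, each formatted with the same variables and sent as a separate message. An illustrative sketch of that formatting step (pure Python, not the langchain classes):

```python
# Illustrative sketch: format a chat prompt built from per-role message
# templates, as ChatPromptTemplate does conceptually.

messages = [
    ("system", "You are a helpful assistant that translates "
               "{input_language} to {output_language}."),
    ("human", "{text}"),
]

def format_chat_prompt(message_templates, **variables):
    return [(role, tmpl.format(**variables)) for role, tmpl in message_templates]

formatted = format_chat_prompt(
    messages,
    input_language="English",
    output_language="French",
    text="I love programming.",
)
```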
f27a06397b2e-10 | agent = initialize_agent(tools, chat, agent=AgentType.CHAT_ZERO_SHOT_REACT_DESCRIPTION, verbose=True)
# Now let's test it out!
agent.run("Who is Olivia Wilde's boyfriend? What is his current age raised to the 0.23 power?")
> Entering new AgentExecutor chain...
Thought: I need to use a search engine to find Olivia Wilde... | https://python.langchain.com/en/latest/getting_started/getting_started.html |
f27a06397b2e-11 | '2.169459462491557'
Memory: Add State to Chains and Agents#
You can use Memory with chains and agents initialized with chat models. The main difference between this and Memory for LLMs is that rather than trying to condense all previous messages into a string, we can keep them as their own unique memory object.
from la... | https://python.langchain.com/en/latest/getting_started/getting_started.html |
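The passage above contrasts condensing history into one string with keeping each message as its own object. A minimal sketch of such a message-list memory (class and method names are illustrative, not the langchain API):

```python
# Illustrative sketch: memory that stores each turn as its own message
# object instead of flattening the history into a single string.

class MessageHistory:
    def __init__(self):
        self.messages = []  # list of (role, content) pairs

    def add_user_message(self, content: str):
        self.messages.append(("human", content))

    def add_ai_message(self, content: str):
        self.messages.append(("ai", content))

history = MessageHistory()
history.add_user_message("Hi there!")
history.add_ai_message("Hello! How are you today?")
```

Keeping messages as discrete objects lets a chat model receive the history in its native role-tagged format rather than as one concatenated transcript.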
f27a06397b2e-12 | conversation.predict(input="Tell me about yourself.")
# -> "Sure! I am an AI language model created by OpenAI. I was trained on a large dataset of text from the internet, which allows me to understand and generate human-like language. I can answer questions, provide information, and even have conversations like this on... | https://python.langchain.com/en/latest/getting_started/getting_started.html |
663e094b34a0-0 | Concepts
Contents
Chain of Thought
Action Plan Generation
ReAct
Self-ask
Prompt Chaining
Memetic Proxy
Self Consistency
Inception
MemPrompt
Concepts#
These are concepts and terminology commonly used when developing LLM applications.
It contains references to external papers or sources where the concept was fi... | https://python.langchain.com/en/latest/getting_started/concepts.html |
663e094b34a0-1 | to respond in a certain way by framing the discussion in a context that the model knows of and that
will result in that type of response.
For example, as a conversation between a student and a teacher.
Paper
Self Consistency#
Self Consistency is a decoding strategy that samples a diverse set of reasoning paths and then se... | https://python.langchain.com/en/latest/getting_started/concepts.html |
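Self consistency samples several reasoning paths and keeps the final answer they agree on most often. The selection step can be sketched as a simple majority vote (the sampling itself is omitted):

```python
from collections import Counter

# Illustrative sketch: pick the most frequent final answer among
# several sampled reasoning paths.

def most_consistent(answers: list) -> str:
    return Counter(answers).most_common(1)[0][0]

winner = most_consistent(["42", "42", "41", "42", "40"])
# -> "42"
```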
9599bc411a7a-0 | Tutorials
Contents
Tutorials#
This is a collection of LangChain tutorials mostly on YouTube.
⛓ icon marks a new video [last update 2023-05-15]
#
LangChain AI Handbook By James Briggs and Francisco Ingham
#
LangChain Tutorials by Edrick:
⛓ LangChain, Chroma DB, OpenAI Beginner Guide | ChatGPT with your PDF
La... | https://python.langchain.com/en/latest/getting_started/tutorials.html |
9599bc411a7a-1 | Question A 300 Page Book (w/ OpenAI + Pinecone)
Workaround OpenAI's Token Limit With Chain Types
Build Your Own OpenAI + LangChain Web App in 23 Minutes
Working With The New ChatGPT API
OpenAI + LangChain Wrote Me 100 Custom Sales Emails
Structured Output From OpenAI (Clean Dirty Data)
Connect OpenAI To +5,000 Tools (L... | https://python.langchain.com/en/latest/getting_started/tutorials.html |
9599bc411a7a-2 | ⛓ Using LangChain with DuckDuckGO Wikipedia & PythonREPL Tools
⛓ Building Custom Tools and Agents with LangChain (gpt-3.5-turbo)
⛓ LangChain Retrieval QA Over Multiple Files with ChromaDB
⛓ LangChain Retrieval QA with Instructor Embeddings & ChromaDB for PDFs
⛓ LangChain + Retrieval Local LLMs for Retrieval QA - No Ope... | https://python.langchain.com/en/latest/getting_started/tutorials.html |
9599bc411a7a-3 | Analyze Custom CSV Data with GPT-4 using Langchain
⛓ Build ChatGPT Chatbots with LangChain Memory: Understanding and Implementing Memory in Conversations
L... | https://python.langchain.com/en/latest/getting_started/tutorials.html |