99b2553660fa-87
tfidf_array (langchain.retrievers.TFIDFRetriever attribute) time (langchain.utilities.DuckDuckGoSearchAPIWrapper attribute) to_typescript() (langchain.tools.APIOperation method) token (langchain.utilities.PowerBIDataset attribute) token_path (langchain.document_loaders.GoogleApiClient attribute) (langchain.document_loa...
https://python.langchain.com/en/latest/genindex.html
99b2553660fa-88
(langchain.retrievers.ChatGPTPluginRetriever attribute) (langchain.retrievers.DataberryRetriever attribute) (langchain.retrievers.PineconeHybridSearchRetriever attribute) top_k_docs_for_context (langchain.chains.ChatVectorDBChain attribute) top_k_results (langchain.utilities.ArxivAPIWrapper attribute) (langchain.utilit...
https://python.langchain.com/en/latest/genindex.html
99b2553660fa-89
transformers (langchain.retrievers.document_compressors.DocumentCompressorPipeline attribute) truncate (langchain.embeddings.CohereEmbeddings attribute) (langchain.llms.Cohere attribute) ts_type_from_python() (langchain.tools.APIOperation static method) ttl (langchain.memory.RedisEntityStore attribute) tuned_model_name...
https://python.langchain.com/en/latest/genindex.html
99b2553660fa-90
update_forward_refs() (langchain.llms.AI21 class method) (langchain.llms.AlephAlpha class method) (langchain.llms.Anthropic class method) (langchain.llms.Anyscale class method) (langchain.llms.AzureOpenAI class method) (langchain.llms.Banana class method) (langchain.llms.Beam class method) (langchain.llms.CerebriumAI c...
https://python.langchain.com/en/latest/genindex.html
99b2553660fa-91
(langchain.llms.PromptLayerOpenAIChat class method) (langchain.llms.Replicate class method) (langchain.llms.RWKV class method) (langchain.llms.SagemakerEndpoint class method) (langchain.llms.SelfHostedHuggingFaceLLM class method) (langchain.llms.SelfHostedPipeline class method) (langchain.llms.StochasticAI class method...
https://python.langchain.com/en/latest/genindex.html
99b2553660fa-92
(langchain.prompts.PromptTemplate attribute) Vectara (class in langchain.vectorstores) vectorizer (langchain.retrievers.TFIDFRetriever attribute) VectorStore (class in langchain.vectorstores) vectorstore (langchain.agents.agent_toolkits.VectorStoreInfo attribute) (langchain.chains.ChatVectorDBChain attribute) (langchai...
https://python.langchain.com/en/latest/genindex.html
99b2553660fa-93
(langchain.llms.HuggingFaceTextGenInference attribute) (langchain.llms.HumanInputLLM attribute) (langchain.llms.LlamaCpp attribute) (langchain.llms.Modal attribute) (langchain.llms.MosaicML attribute) (langchain.llms.NLPCloud attribute) (langchain.llms.OpenAI attribute) (langchain.llms.OpenAIChat attribute) (langchain....
https://python.langchain.com/en/latest/genindex.html
99b2553660fa-94
WeaviateHybridSearchRetriever.Config (class in langchain.retrievers) web_path (langchain.document_loaders.WebBaseLoader property) web_paths (langchain.document_loaders.WebBaseLoader attribute) WebBaseLoader (class in langchain.document_loaders) WhatsAppChatLoader (class in langchain.document_loaders) Wikipedia (class i...
https://python.langchain.com/en/latest/genindex.html
4ed3a21b83d7-0
By Harrison Chase © Copyright 2023, Harrison Chase. Last updated on May 28, 2023.
https://python.langchain.com/en/latest/search.html
1fd387a86ce7-0
.md .pdf Deployments Contents Streamlit Gradio (on Hugging Face) Chainlit Beam Vercel FastAPI + Vercel Kinsta Fly.io Digitalocean App Platform Google Cloud Run SteamShip Langchain-serve BentoML Databutton Deployments# So, you’ve created a really cool chain - now what? How do you deploy it and make it easily shareable...
https://python.langchain.com/en/latest/ecosystem/deployments.html
1fd387a86ce7-1
Chainlit doc on the integration with LangChain Beam# This repo serves as a template for how to deploy a LangChain app with Beam. It implements a Question Answering app and contains instructions for deploying the app as a serverless REST API. Vercel# A minimal example on how to run LangChain on Vercel using Flask. FastAPI + Ve...
https://python.langchain.com/en/latest/ecosystem/deployments.html
1fd387a86ce7-2
Databutton# These templates serve as examples of how to build, deploy, and share LangChain applications using Databutton. You can create user interfaces with Streamlit, automate tasks by scheduling Python code, and store files and data in the built-in store. Examples include a Chatbot interface with conversational memo...
https://python.langchain.com/en/latest/ecosystem/deployments.html
ef68ed4865a9-0
.md .pdf Locally Hosted Setup Contents Installation Environment Setup Locally Hosted Setup# This page contains instructions for installing and then setting up the environment to use the locally hosted version of tracing. Installation# Ensure you have Docker installed (see Get Docker) and that it’s running. Install th...
https://python.langchain.com/en/latest/tracing/local_installation.html
df5c460206ef-0
.md .pdf Cloud Hosted Setup Contents Installation Environment Setup Cloud Hosted Setup# We offer a hosted version of tracing at langchainplus.vercel.app. You can use this to view traces from your run without having to run the server locally. Note: we are currently only offering this to a limited number of users. The ...
https://python.langchain.com/en/latest/tracing/hosted_installation.html
df5c460206ef-1
os.environ["LANGCHAIN_API_KEY"] = "my_api_key" # Don't commit this to your repo! Better to set it in your terminal.
https://python.langchain.com/en/latest/tracing/hosted_installation.html
27b211f430d6-0
.ipynb .pdf Tracing Walkthrough Contents [Beta] Tracing V2 Tracing Walkthrough# There are two recommended ways to trace your LangChains: Setting the LANGCHAIN_TRACING environment variable to “true”. Using a context manager with tracing_enabled() to trace a particular block of code. Note if the environment variable is...
https://python.langchain.com/en/latest/tracing/agent_with_tracing.html
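The two approaches above (a process-wide environment variable vs. a scoped context manager) can be sketched generically. This is a minimal, stdlib-only illustration of the toggle mechanics, assuming a `tracing_enabled` name that mirrors LangChain's; it is not LangChain's actual implementation, which additionally records a tracing session.

```python
import os
from contextlib import contextmanager

def is_tracing() -> bool:
    # Approach 1: a process-wide environment variable gates tracing.
    return os.environ.get("LANGCHAIN_TRACING") == "true"

@contextmanager
def tracing_enabled():
    # Approach 2: enable tracing only for a particular block of code,
    # restoring the previous state on exit (sketch only).
    old = os.environ.get("LANGCHAIN_TRACING")
    os.environ["LANGCHAIN_TRACING"] = "true"
    try:
        yield
    finally:
        if old is None:
            os.environ.pop("LANGCHAIN_TRACING", None)
        else:
            os.environ["LANGCHAIN_TRACING"] = old
```

The context-manager form is the safer default in shared codebases, since it cannot leak tracing state past the block even on exceptions.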
27b211f430d6-1
> Entering new AgentExecutor chain... I need to use a calculator to solve this. Action: Calculator Action Input: 2^.123243 Observation: Answer: 1.0891804557407723 Thought: I now know the final answer. Final Answer: 1.0891804557407723 > Finished chain. '1.0891804557407723' # Agent run with tracing using a chat model ag...
https://python.langchain.com/en/latest/tracing/agent_with_tracing.html
27b211f430d6-2
I need to use a calculator to solve this. Action: Calculator Action Input: 5 ^ .123243 Observation: Answer: 1.2193914912400514 Thought:I now know the answer to the question. Final Answer: 1.2193914912400514 > Finished chain. # Now, we unset the environment variable and use a context manager. if "LANGCHAIN_TRACING" in ...
https://python.langchain.com/en/latest/tracing/agent_with_tracing.html
27b211f430d6-3
del os.environ["LANGCHAIN_TRACING"] questions = [f"What is {i} raised to .123 power?" for i in range(1,4)] # start a background task task = asyncio.create_task(agent.arun(questions[0])) # this should not be traced with tracing_enabled() as session: assert session tasks = [agent.arun(q) for q in questions[...
https://python.langchain.com/en/latest/tracing/agent_with_tracing.html
27b211f430d6-4
pip install --upgrade langchain langchain plus start Option 2 (Hosted): After making an account and grabbing a LangChainPlus API Key, set the LANGCHAIN_ENDPOINT and LANGCHAIN_API_KEY environment variables import os os.environ["LANGCHAIN_TRACING_V2"] = "true" # os.environ["LANGCHAIN_ENDPOINT"] = "https://langchainpro-api...
https://python.langchain.com/en/latest/tracing/agent_with_tracing.html
b099ae02276a-0
Source code for langchain.text_splitter """Functionality for splitting text.""" from __future__ import annotations import copy import logging from abc import ABC, abstractmethod from typing import ( AbstractSet, Any, Callable, Collection, Iterable, List, Literal, Optional, Sequence, ...
https://python.langchain.com/en/latest/_modules/langchain/text_splitter.html
b099ae02276a-1
documents = [] for i, text in enumerate(texts): for chunk in self.split_text(text): new_doc = Document( page_content=chunk, metadata=copy.deepcopy(_metadatas[i]) ) documents.append(new_doc) return documents [docs] def spl...
https://python.langchain.com/en/latest/_modules/langchain/text_splitter.html
b099ae02276a-2
doc = self._join_docs(current_doc, separator) if doc is not None: docs.append(doc) # Keep on popping if: # - we have a larger chunk than in the chunk overlap # - or if we still have any chunks and the length is long ...
https://python.langchain.com/en/latest/_modules/langchain/text_splitter.html
b099ae02276a-3
) return cls(length_function=_huggingface_tokenizer_length, **kwargs) [docs] @classmethod def from_tiktoken_encoder( cls: Type[TS], encoding_name: str = "gpt2", model_name: Optional[str] = None, allowed_special: Union[Literal["all"], AbstractSet[str]] = set(), disa...
https://python.langchain.com/en/latest/_modules/langchain/text_splitter.html
b099ae02276a-4
) -> Sequence[Document]: """Transform sequence of documents by splitting them.""" return self.split_documents(list(documents)) [docs] async def atransform_documents( self, documents: Sequence[Document], **kwargs: Any ) -> Sequence[Document]: """Asynchronously transform a sequence ...
https://python.langchain.com/en/latest/_modules/langchain/text_splitter.html
b099ae02276a-5
raise ImportError( "Could not import tiktoken python package. " "This is needed for TokenTextSplitter. " "Please install it with `pip install tiktoken`." ) if model_name is not None: enc = tiktoken.encoding_for_model(model_name)...
https://python.langchain.com/en/latest/_modules/langchain/text_splitter.html
b099ae02276a-6
[docs] def split_text(self, text: str) -> List[str]: """Split incoming text and return chunks.""" final_chunks = [] # Get appropriate separator to use separator = self._separators[-1] for _s in self._separators: if _s == "": separator = _s ...
https://python.langchain.com/en/latest/_modules/langchain/text_splitter.html
b099ae02276a-7
"NLTK is not installed, please install it with `pip install nltk`." ) self._separator = separator [docs] def split_text(self, text: str) -> List[str]: """Split incoming text and return chunks.""" # First we naively split the large input into a bunch of smaller ones. splits...
https://python.langchain.com/en/latest/_modules/langchain/text_splitter.html
b099ae02276a-8
"\n## ", "\n### ", "\n#### ", "\n##### ", "\n###### ", # Note the alternative syntax for headings (below) is not handled here # Heading level 2 # --------------- # End of code block "```\n\n", # Horiz...
https://python.langchain.com/en/latest/_modules/langchain/text_splitter.html
b099ae02276a-9
"\n\\begin{align}", "$$", "$", # Now split by the normal type of lines " ", "", ] super().__init__(separators=separators, **kwargs) [docs]class PythonCodeTextSplitter(RecursiveCharacterTextSplitter): """Attempts to split the text along Pyth...
https://python.langchain.com/en/latest/_modules/langchain/text_splitter.html
81b8eb375b52-0
Source code for langchain.requests """Lightweight wrapper around requests library, with async support.""" from contextlib import asynccontextmanager from typing import Any, AsyncGenerator, Dict, Optional import aiohttp import requests from pydantic import BaseModel, Extra class Requests(BaseModel): """Wrapper aroun...
https://python.langchain.com/en/latest/_modules/langchain/requests.html
81b8eb375b52-1
def delete(self, url: str, **kwargs: Any) -> requests.Response: """DELETE the URL and return the text.""" return requests.delete(url, headers=self.headers, **kwargs) @asynccontextmanager async def _arequest( self, method: str, url: str, **kwargs: Any ) -> AsyncGenerator[aiohttp.Clien...
https://python.langchain.com/en/latest/_modules/langchain/requests.html
81b8eb375b52-2
"""PATCH the URL and return the text asynchronously.""" async with self._arequest("PATCH", url, **kwargs) as response: yield response @asynccontextmanager async def aput( self, url: str, data: Dict[str, Any], **kwargs: Any ) -> AsyncGenerator[aiohttp.ClientResponse, None]: ...
https://python.langchain.com/en/latest/_modules/langchain/requests.html
81b8eb375b52-3
"""POST to the URL and return the text.""" return self.requests.post(url, data, **kwargs).text [docs] def patch(self, url: str, data: Dict[str, Any], **kwargs: Any) -> str: """PATCH the URL and return the text.""" return self.requests.patch(url, data, **kwargs).text [docs] def put(self, ur...
https://python.langchain.com/en/latest/_modules/langchain/requests.html
81b8eb375b52-4
"""PUT the URL and return the text asynchronously.""" async with self.requests.aput(url, **kwargs) as response: return await response.text() [docs] async def adelete(self, url: str, **kwargs: Any) -> str: """DELETE the URL and return the text asynchronously.""" async with self.req...
https://python.langchain.com/en/latest/_modules/langchain/requests.html
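The `Requests` class above wraps the `requests` and `aiohttp` libraries behind one interface that carries shared headers. As a hedged, stdlib-only sketch of the same idea (class and method names here are illustrative, not LangChain's API):

```python
import urllib.request
from typing import Dict, Optional

class SimpleRequestsWrapper:
    """Stdlib-only sketch of a requests-style wrapper with shared headers."""

    def __init__(self, headers: Optional[Dict[str, str]] = None) -> None:
        self.headers = headers or {}

    def _build(self, method: str, url: str,
               data: Optional[bytes] = None) -> urllib.request.Request:
        # Attach the shared headers to every outgoing request.
        return urllib.request.Request(url, data=data,
                                      headers=self.headers, method=method)

    def get(self, url: str) -> str:
        """GET the URL and return the text."""
        with urllib.request.urlopen(self._build("GET", url)) as resp:
            return resp.read().decode()

    def delete(self, url: str) -> str:
        """DELETE the URL and return the text."""
        with urllib.request.urlopen(self._build("DELETE", url)) as resp:
            return resp.read().decode()
```

Centralizing header handling in one `_build` helper is the design point: every verb method stays a one-liner, which is also how the original keeps its sync and async paths symmetric.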
b1e543a584e5-0
Source code for langchain.document_transformers """Transform documents""" from typing import Any, Callable, List, Sequence import numpy as np from pydantic import BaseModel, Field from langchain.embeddings.base import Embeddings from langchain.math_utils import cosine_similarity from langchain.schema import BaseDocumen...
https://python.langchain.com/en/latest/_modules/langchain/document_transformers.html
b1e543a584e5-1
for first_idx, second_idx in redundant_stacked[redundant_sorted]: if first_idx in included_idxs and second_idx in included_idxs: # Default to dropping the second document of any highly similar pair. included_idxs.remove(second_idx) return list(sorted(included_idxs)) def _get_embeddin...
https://python.langchain.com/en/latest/_modules/langchain/document_transformers.html
b1e543a584e5-2
"""Filter down documents.""" stateful_documents = get_stateful_documents(documents) embedded_documents = _get_embeddings_from_stateful_docs( self.embeddings, stateful_documents ) included_idxs = _filter_similar_embeddings( embedded_documents, self.similarity_fn, s...
https://python.langchain.com/en/latest/_modules/langchain/document_transformers.html
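The redundancy filter above drops the second document of any highly similar pair, judged by cosine similarity of embeddings. A self-contained sketch of that logic (pure Python; the threshold default is an assumption, and the real code vectorizes this with numpy):

```python
import math
from typing import List

def cosine_similarity(a: List[float], b: List[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def filter_redundant(embeddings: List[List[float]],
                     threshold: float = 0.95) -> List[int]:
    """Return indices to keep, dropping the second of any near-duplicate pair."""
    included = list(range(len(embeddings)))
    for i in range(len(embeddings)):
        for j in range(i + 1, len(embeddings)):
            if i in included and j in included:
                if cosine_similarity(embeddings[i], embeddings[j]) > threshold:
                    # Default to dropping the second document of the pair.
                    included.remove(j)
    return included
```

Keeping the first of each pair preserves whatever ordering (e.g. retrieval rank) the documents arrived in.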
5b2703e2b6f1-0
Source code for langchain.experimental.autonomous_agents.baby_agi.baby_agi """BabyAGI agent.""" from collections import deque from typing import Any, Dict, List, Optional from pydantic import BaseModel, Field from langchain.base_language import BaseLanguageModel from langchain.callbacks.manager import CallbackManagerFo...
https://python.langchain.com/en/latest/_modules/langchain/experimental/autonomous_agents/baby_agi/baby_agi.html
5b2703e2b6f1-1
print(str(t["task_id"]) + ": " + t["task_name"]) def print_next_task(self, task: Dict) -> None: print("\033[92m\033[1m" + "\n*****NEXT TASK*****\n" + "\033[0m\033[0m") print(str(task["task_id"]) + ": " + task["task_name"]) def print_task_result(self, result: str) -> None: print("\033[93m...
https://python.langchain.com/en/latest/_modules/langchain/experimental/autonomous_agents/baby_agi/baby_agi.html
5b2703e2b6f1-2
next_task_id = int(this_task_id) + 1 response = self.task_prioritization_chain.run( task_names=", ".join(task_names), next_task_id=str(next_task_id), objective=objective, ) new_tasks = response.split("\n") prioritized_task_list = [] for task_st...
https://python.langchain.com/en/latest/_modules/langchain/experimental/autonomous_agents/baby_agi/baby_agi.html
5b2703e2b6f1-3
"""Run the agent.""" objective = inputs["objective"] first_task = inputs.get("first_task", "Make a todo list") self.add_task({"task_id": 1, "task_name": first_task}) num_iters = 0 while True: if self.task_list: self.print_task_list() # ...
https://python.langchain.com/en/latest/_modules/langchain/experimental/autonomous_agents/baby_agi/baby_agi.html
5b2703e2b6f1-4
break return {} [docs] @classmethod def from_llm( cls, llm: BaseLanguageModel, vectorstore: VectorStore, verbose: bool = False, task_execution_chain: Optional[Chain] = None, **kwargs: Dict[str, Any], ) -> "BabyAGI": """Initialize the BabyAGI Con...
https://python.langchain.com/en/latest/_modules/langchain/experimental/autonomous_agents/baby_agi/baby_agi.html
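In the BabyAGI loop above, the task-prioritization chain returns a newline-separated numbered list that is split back into task dicts and pushed onto a deque. A sketch of that parsing step (simplified from the source's loop; the exact id-renumbering scheme here is an assumption):

```python
from collections import deque
from typing import Dict, List

def parse_prioritized_tasks(response: str, next_task_id: int) -> List[Dict]:
    """Parse an LLM response like '1. task a\n2. task b' into task dicts."""
    tasks = []
    task_id = next_task_id
    for line in response.split("\n"):
        parts = line.strip().split(".", 1)
        if len(parts) == 2 and parts[1].strip():
            tasks.append({"task_id": task_id, "task_name": parts[1].strip()})
            task_id += 1
    return tasks

# The agent keeps its work queue as a deque, seeded with a first task.
task_list: deque = deque(parse_prioritized_tasks("1. Make a todo list\n2. Review it", 1))
```

Tolerating malformed lines (the `len(parts) == 2` guard) matters because the numbered list comes from an LLM and is not guaranteed to be well-formed.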
64fa5b911a50-0
Source code for langchain.experimental.autonomous_agents.autogpt.agent from __future__ import annotations from typing import List, Optional from pydantic import ValidationError from langchain.chains.llm import LLMChain from langchain.chat_models.base import BaseChatModel from langchain.experimental.autonomous_agents.au...
https://python.langchain.com/en/latest/_modules/langchain/experimental/autonomous_agents/autogpt/agent.html
64fa5b911a50-1
ai_role: str, memory: VectorStoreRetriever, tools: List[BaseTool], llm: BaseChatModel, human_in_the_loop: bool = False, output_parser: Optional[BaseAutoGPTOutputParser] = None, ) -> AutoGPT: prompt = AutoGPTPrompt( ai_name=ai_name, ai_role=ai_r...
https://python.langchain.com/en/latest/_modules/langchain/experimental/autonomous_agents/autogpt/agent.html
64fa5b911a50-2
# Get command name and arguments action = self.output_parser.parse(assistant_reply) tools = {t.name: t for t in self.tools} if action.name == FINISH_NAME: return action.args["response"] if action.name in tools: tool = tools[action.name] ...
https://python.langchain.com/en/latest/_modules/langchain/experimental/autonomous_agents/autogpt/agent.html
ace0ca23347e-0
Source code for langchain.experimental.generative_agents.generative_agent import re from datetime import datetime from typing import Any, Dict, List, Optional, Tuple from pydantic import BaseModel, Field from langchain import LLMChain from langchain.base_language import BaseLanguageModel from langchain.experimental.gen...
https://python.langchain.com/en/latest/_modules/langchain/experimental/generative_agents/generative_agent.html
ace0ca23347e-1
arbitrary_types_allowed = True # LLM-related methods @staticmethod def _parse_list(text: str) -> List[str]: """Parse a newline-separated string into a list of strings.""" lines = re.split(r"\n", text.strip()) return [re.sub(r"^\s*\d+\.\s*", "", line).strip() for line in lines] de...
https://python.langchain.com/en/latest/_modules/langchain/experimental/generative_agents/generative_agent.html
ace0ca23347e-2
entity_action = self._get_entity_action(observation, entity_name) q1 = f"What is the relationship between {self.name} and {entity_name}" q2 = f"{entity_name} is {entity_action}" return self.chain(prompt=prompt).run(q1=q1, queries=[q1, q2]).strip() def _generate_reaction( self, observ...
https://python.langchain.com/en/latest/_modules/langchain/experimental/generative_agents/generative_agent.html
ace0ca23347e-3
) consumed_tokens = self.llm.get_num_tokens( prompt.format(most_recent_memories="", **kwargs) ) kwargs[self.memory.most_recent_memories_token_key] = consumed_tokens return self.chain(prompt=prompt).run(**kwargs).strip() def _clean_response(self, text: str) -> str: ...
https://python.langchain.com/en/latest/_modules/langchain/experimental/generative_agents/generative_agent.html
ace0ca23347e-4
if "SAY:" in result: said_value = self._clean_response(result.split("SAY:")[-1]) return True, f"{self.name} said {said_value}" else: return False, result [docs] def generate_dialogue_response( self, observation: str, now: Optional[datetime] = None ) -> Tuple[bo...
https://python.langchain.com/en/latest/_modules/langchain/experimental/generative_agents/generative_agent.html
ace0ca23347e-5
}, ) return True, f"{self.name} said {response_text}" else: return False, result ###################################################### # Agent stateful summary methods. # # Each dialog or response prompt includes a header # # summarizing ...
https://python.langchain.com/en/latest/_modules/langchain/experimental/generative_agents/generative_agent.html
ace0ca23347e-6
+ f"\nInnate traits: {self.traits}" + f"\n{self.summary}" ) [docs] def get_full_header( self, force_refresh: bool = False, now: Optional[datetime] = None ) -> str: """Return a full header of the agent's status, summary, and current time.""" now = datetime.now() if now ...
https://python.langchain.com/en/latest/_modules/langchain/experimental/generative_agents/generative_agent.html
4d820bc16959-0
Source code for langchain.experimental.generative_agents.memory import logging import re from datetime import datetime from typing import Any, Dict, List, Optional from langchain import LLMChain from langchain.base_language import BaseLanguageModel from langchain.prompts import PromptTemplate from langchain.retrievers ...
https://python.langchain.com/en/latest/_modules/langchain/experimental/generative_agents/memory.html
4d820bc16959-1
# output keys relevant_memories_key: str = "relevant_memories" relevant_memories_simple_key: str = "relevant_memories_simple" most_recent_memories_key: str = "most_recent_memories" now_key: str = "now" reflecting: bool = False def chain(self, prompt: PromptTemplate) -> LLMChain: return L...
https://python.langchain.com/en/latest/_modules/langchain/experimental/generative_agents/memory.html
4d820bc16959-2
) -> List[str]: """Generate 'insights' on a topic of reflection, based on pertinent memories.""" prompt = PromptTemplate.from_template( "Statements about {topic}\n" + "{related_statements}\n\n" + "What 5 high-level insights can you infer from the above statements?" ...
https://python.langchain.com/en/latest/_modules/langchain/experimental/generative_agents/memory.html
4d820bc16959-3
"On the scale of 1 to 10, where 1 is purely mundane" + " (e.g., brushing teeth, making bed) and 10 is" + " extremely poignant (e.g., a break up, college" + " acceptance), rate the likely poignancy of the" + " following piece of memory. Respond with a single integer." ...
https://python.langchain.com/en/latest/_modules/langchain/experimental/generative_agents/memory.html
4d820bc16959-4
and not self.reflecting ): self.reflecting = True self.pause_to_reflect(now=now) # Hack to clear the importance from reflection self.aggregate_importance = 0.0 self.reflecting = False return result [docs] def fetch_memories( self, ob...
https://python.langchain.com/en/latest/_modules/langchain/experimental/generative_agents/memory.html
4d820bc16959-5
break consumed_tokens += self.llm.get_num_tokens(doc.page_content) if consumed_tokens < self.max_tokens_limit: result.append(doc) return self.format_memories_simple(result) @property def memory_variables(self) -> List[str]: """Input keys this memory class ...
https://python.langchain.com/en/latest/_modules/langchain/experimental/generative_agents/memory.html
4d820bc16959-6
[docs] def clear(self) -> None: """Clear memory contents.""" # TODO
https://python.langchain.com/en/latest/_modules/langchain/experimental/generative_agents/memory.html
257ac82b613a-0
Source code for langchain.retrievers.time_weighted_retriever """Retriever that combines embedding similarity with recency in retrieving values.""" import datetime from copy import deepcopy from typing import Any, Dict, List, Optional, Tuple from pydantic import BaseModel, Field from langchain.schema import BaseRetrieve...
https://python.langchain.com/en/latest/_modules/langchain/retrievers/time_weighted_retriever.html
257ac82b613a-1
""" class Config: """Configuration for this pydantic object.""" arbitrary_types_allowed = True def _get_combined_score( self, document: Document, vector_relevance: Optional[float], current_time: datetime.datetime, ) -> float: """Return the combined sco...
https://python.langchain.com/en/latest/_modules/langchain/retrievers/time_weighted_retriever.html
257ac82b613a-2
for doc in self.memory_stream[-self.k :] } # If a doc is considered salient, update the salience score docs_and_scores.update(self.get_salient_docs(query)) rescored_docs = [ (doc, self._get_combined_score(doc, relevance, current_time)) for doc, relevance in docs_a...
https://python.langchain.com/en/latest/_modules/langchain/retrievers/time_weighted_retriever.html
257ac82b613a-3
self.memory_stream.extend(dup_docs) return self.vectorstore.add_documents(dup_docs, **kwargs) [docs] async def aadd_documents( self, documents: List[Document], **kwargs: Any ) -> List[str]: """Add documents to vectorstore.""" current_time = kwargs.get("current_time") if cu...
https://python.langchain.com/en/latest/_modules/langchain/retrievers/time_weighted_retriever.html
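The `_get_combined_score` method above blends embedding similarity with recency. The core of the scoring is exponential time decay plus the vector relevance; a standalone sketch (the default `decay_rate` value here is an assumption, and the real method also adds scores for any extra metadata keys):

```python
from typing import Optional

def get_combined_score(vector_relevance: Optional[float],
                       hours_passed: float,
                       decay_rate: float = 0.01) -> float:
    """Exponential recency decay plus semantic relevance (sketch)."""
    # A just-touched memory scores 1.0 on recency; the score decays
    # multiplicatively with every hour since last access.
    score = (1.0 - decay_rate) ** hours_passed
    if vector_relevance is not None:
        score += vector_relevance
    return score
```

Because the decay is on *last access* time, frequently retrieved memories keep resetting their recency term, which is what makes the retriever "time weighted" rather than merely "newest first".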
e8ee33bc64a0-0
Source code for langchain.retrievers.pinecone_hybrid_search """Taken from: https://docs.pinecone.io/docs/hybrid-search""" import hashlib from typing import Any, Dict, List, Optional from pydantic import BaseModel, Extra, root_validator from langchain.embeddings.base import Embeddings from langchain.schema import BaseRe...
https://python.langchain.com/en/latest/_modules/langchain/retrievers/pinecone_hybrid_search.html
e8ee33bc64a0-1
] # create dense vectors dense_embeds = embeddings.embed_documents(context_batch) # create sparse vectors sparse_embeds = sparse_encoder.encode_documents(context_batch) for s in sparse_embeds: s["values"] = [float(s1) for s1 in s["values"]] vectors = [] ...
https://python.langchain.com/en/latest/_modules/langchain/retrievers/pinecone_hybrid_search.html
e8ee33bc64a0-2
"""Validate that api key and python package exists in environment.""" try: from pinecone_text.hybrid import hybrid_convex_scale # noqa:F401 from pinecone_text.sparse.base_sparse_encoder import ( BaseSparseEncoder, # noqa:F401 ) except ImportError: ...
https://python.langchain.com/en/latest/_modules/langchain/retrievers/pinecone_hybrid_search.html
f62ea7d3967c-0
Source code for langchain.retrievers.vespa_retriever """Wrapper for retrieving documents from Vespa.""" from __future__ import annotations import json from typing import TYPE_CHECKING, Any, Dict, List, Literal, Optional, Sequence, Union from langchain.schema import BaseRetriever, Document if TYPE_CHECKING: from ves...
https://python.langchain.com/en/latest/_modules/langchain/retrievers/vespa_retriever.html
f62ea7d3967c-1
docs.append(Document(page_content=page_content, metadata=metadata)) return docs [docs] def get_relevant_documents(self, query: str) -> List[Document]: body = self._query_body.copy() body["query"] = query return self._query(body) [docs] async def aget_relevant_documents(self, query:...
https://python.langchain.com/en/latest/_modules/langchain/retrievers/vespa_retriever.html
f62ea7d3967c-2
document metadata. Defaults to empty tuple (). sources (Sequence[str] or "*" or None): Sources to retrieve from. Defaults to None. _filter (Optional[str]): Document filter condition expressed in YQL. Defaults to None. yql (Optional[str]): Full YQL quer...
https://python.langchain.com/en/latest/_modules/langchain/retrievers/vespa_retriever.html
1b8739524fbb-0
Source code for langchain.retrievers.tfidf """TF-IDF Retriever. Largely based on https://github.com/asvskartheek/Text-Retrieval/blob/master/TF-IDF%20Search%20Engine%20(SKLEARN).ipynb""" from __future__ import annotations from typing import Any, Dict, Iterable, List, Optional from pydantic import BaseModel from langchai...
https://python.langchain.com/en/latest/_modules/langchain/retrievers/tfidf.html
1b8739524fbb-1
return cls(vectorizer=vectorizer, docs=docs, tfidf_array=tfidf_array, **kwargs) [docs] @classmethod def from_documents( cls, documents: Iterable[Document], *, tfidf_params: Optional[Dict[str, Any]] = None, **kwargs: Any, ) -> TFIDFRetriever: texts, metadatas = ...
https://python.langchain.com/en/latest/_modules/langchain/retrievers/tfidf.html
1193e524cfea-0
Source code for langchain.retrievers.svm """SVM Retriever. Largely based on https://github.com/karpathy/randomfun/blob/master/knn_vs_svm.ipynb""" from __future__ import annotations import concurrent.futures from typing import Any, List, Optional import numpy as np from pydantic import BaseModel from langchain.embedding...
https://python.langchain.com/en/latest/_modules/langchain/retrievers/svm.html
1193e524cfea-1
y[0] = 1 clf = svm.LinearSVC( class_weight="balanced", verbose=False, max_iter=10000, tol=1e-6, C=0.1 ) clf.fit(x, y) similarities = clf.decision_function(x) sorted_ix = np.argsort(-similarities) # svm.LinearSVC in scikit-learn is non-deterministic. # ...
https://python.langchain.com/en/latest/_modules/langchain/retrievers/svm.html
1c494575cc58-0
Source code for langchain.retrievers.wikipedia from typing import List from langchain.schema import BaseRetriever, Document from langchain.utilities.wikipedia import WikipediaAPIWrapper [docs]class WikipediaRetriever(BaseRetriever, WikipediaAPIWrapper): """ It is effectively a wrapper for WikipediaAPIWrapper. ...
https://python.langchain.com/en/latest/_modules/langchain/retrievers/wikipedia.html
2252610a7a0a-0
Source code for langchain.retrievers.azure_cognitive_search """Retriever wrapper for Azure Cognitive Search.""" from __future__ import annotations import json from typing import Dict, List, Optional import aiohttp import requests from pydantic import BaseModel, Extra, root_validator from langchain.schema import BaseRet...
https://python.langchain.com/en/latest/_modules/langchain/retrievers/azure_cognitive_search.html
2252610a7a0a-1
) values["api_key"] = get_from_dict_or_env( values, "api_key", "AZURE_COGNITIVE_SEARCH_API_KEY" ) return values def _build_search_url(self, query: str) -> str: base_url = f"https://{self.service_name}.search.windows.net/" endpoint_path = f"indexes/{self.index_name...
https://python.langchain.com/en/latest/_modules/langchain/retrievers/azure_cognitive_search.html
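The `_build_search_url` method above composes the REST endpoint from the service and index names. A sketch of that composition (the `api-version` value and query-parameter names here are assumptions; check the Azure Cognitive Search REST reference for the version your index supports):

```python
from urllib.parse import quote

def build_search_url(service_name: str, index_name: str, query: str,
                     api_version: str = "2020-06-30") -> str:
    """Compose an Azure Cognitive Search query URL (illustrative sketch)."""
    base_url = f"https://{service_name}.search.windows.net/"
    endpoint_path = f"indexes/{index_name}/docs"
    # URL-encode the user query so spaces and reserved characters survive.
    return f"{base_url}{endpoint_path}?api-version={api_version}&search={quote(query)}"
```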
2252610a7a0a-2
search_results = self._search(query) return [ Document(page_content=result.pop(self.content_key), metadata=result) for result in search_results ] [docs] async def aget_relevant_documents(self, query: str) -> List[Document]: search_results = await self._asearch(query) ...
https://python.langchain.com/en/latest/_modules/langchain/retrievers/azure_cognitive_search.html
5eed9d96f4e3-0
Source code for langchain.retrievers.elastic_search_bm25 """Wrapper around Elasticsearch vector database.""" from __future__ import annotations import uuid from typing import Any, Iterable, List from langchain.docstore.document import Document from langchain.schema import BaseRetriever [docs]class ElasticSearchBM25Retr...
https://python.langchain.com/en/latest/_modules/langchain/retrievers/elastic_search_bm25.html
5eed9d96f4e3-1
self.index_name = index_name [docs] @classmethod def create( cls, elasticsearch_url: str, index_name: str, k1: float = 2.0, b: float = 0.75 ) -> ElasticSearchBM25Retriever: from elasticsearch import Elasticsearch # Create an Elasticsearch client instance es = Elasticsearch(ela...
https://python.langchain.com/en/latest/_modules/langchain/retrievers/elastic_search_bm25.html
5eed9d96f4e3-2
raise ValueError( "Could not import elasticsearch python package. " "Please install it with `pip install elasticsearch`." ) requests = [] ids = [] for i, text in enumerate(texts): _id = str(uuid.uuid4()) request = { ...
https://python.langchain.com/en/latest/_modules/langchain/retrievers/elastic_search_bm25.html
89b4c588bb2b-0
Source code for langchain.retrievers.knn """KNN Retriever. Largely based on https://github.com/karpathy/randomfun/blob/master/knn_vs_svm.ipynb""" from __future__ import annotations import concurrent.futures from typing import Any, List, Optional import numpy as np from pydantic import BaseModel from langchain.embedding...
https://python.langchain.com/en/latest/_modules/langchain/retrievers/knn.html
89b4c588bb2b-1
similarities = index_embeds.dot(query_embeds) sorted_ix = np.argsort(-similarities) denominator = np.max(similarities) - np.min(similarities) + 1e-6 normalized_similarities = (similarities - np.min(similarities)) / denominator top_k_results = [] for row in sorted_ix[0 : self.k]: ...
https://python.langchain.com/en/latest/_modules/langchain/retrievers/knn.html
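The ranking step above can be reproduced as a small standalone NumPy sketch: dot-product similarities against every indexed embedding, sorted descending, then min-max normalized (the `1e-6` term guards against division by zero when all similarities are equal):

```python
import numpy as np

def rank_by_similarity(
    index_embeds: np.ndarray,  # shape (n_docs, dim)
    query_embeds: np.ndarray,  # shape (dim,)
    k: int,
) -> list[tuple[int, float]]:
    similarities = index_embeds.dot(query_embeds)
    sorted_ix = np.argsort(-similarities)  # descending order
    denominator = np.max(similarities) - np.min(similarities) + 1e-6
    normalized = (similarities - np.min(similarities)) / denominator
    # Return (index, normalized score) pairs for the top k documents.
    return [(int(i), float(normalized[i])) for i in sorted_ix[:k]]
```

Normalizing to roughly [0, 1] makes the scores comparable across queries, which is useful if a downstream threshold filters weak matches.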
01cf0126d83c-0
Source code for langchain.retrievers.remote_retriever

from typing import List, Optional

import aiohttp
import requests
from pydantic import BaseModel

from langchain.schema import BaseRetriever, Document

[docs]class RemoteLangChainRetriever(BaseRetriever, BaseModel):
    url: str
    headers: Optional[dict] = None
    i...
https://python.langchain.com/en/latest/_modules/langchain/retrievers/remote_retriever.html
97ca0d125cbf-0
Source code for langchain.retrievers.zep

from __future__ import annotations

from typing import TYPE_CHECKING, List, Optional

from langchain.schema import BaseRetriever, Document

if TYPE_CHECKING:
    from zep_python import SearchResult

[docs]class ZepRetriever(BaseRetriever):
    """A Retriever implementation for the Z...
https://python.langchain.com/en/latest/_modules/langchain/retrievers/zep.html
97ca0d125cbf-1
)
            for r in results
            if r.message
        ]

[docs]    def get_relevant_documents(self, query: str) -> List[Document]:
        from zep_python import SearchPayload

        payload: SearchPayload = SearchPayload(text=query)
        results: List[SearchResult] = self.zep_client.search_memory(
            ...
https://python.langchain.com/en/latest/_modules/langchain/retrievers/zep.html
76cae3620686-0
Source code for langchain.retrievers.contextual_compression

"""Retriever that wraps a base retriever and filters the results."""
from typing import List

from pydantic import BaseModel, Extra

from langchain.retrievers.document_compressors.base import (
    BaseDocumentCompressor,
)
from langchain.schema import BaseRetri...
https://python.langchain.com/en/latest/_modules/langchain/retrievers/contextual_compression.html
76cae3620686-1
return list(compressed_docs)

By Harrison Chase
© Copyright 2023, Harrison Chase.
Last updated on May 28, 2023.
https://python.langchain.com/en/latest/_modules/langchain/retrievers/contextual_compression.html
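The contextual-compression pattern above (fetch from a base retriever, then drop documents a compressor judges irrelevant) can be illustrated with a toy standalone version. These classes and the keyword compressor are illustrative stand-ins, not LangChain's `BaseDocumentCompressor` API:

```python
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class Doc:
    page_content: str

def keyword_compressor(docs: List[Doc], query: str) -> List[Doc]:
    # Trivial "compressor": keep only docs mentioning the query term.
    # Real compressors use an LLM or embedding similarity instead.
    return [d for d in docs if query.lower() in d.page_content.lower()]

def compressed_retrieve(
    base_retrieve: Callable[[str], List[Doc]], query: str
) -> List[Doc]:
    # Wrap the base retriever, then filter its results, mirroring
    # the return list(compressed_docs) step above.
    return list(keyword_compressor(base_retrieve(query), query))
```

The value of the wrapper is that the base retriever can over-fetch cheaply, while the compressor trims what actually reaches the prompt.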
cf8c73c7312a-0
Source code for langchain.retrievers.weaviate_hybrid_search

"""Wrapper around weaviate vector database."""
from __future__ import annotations

from typing import Any, Dict, List, Optional
from uuid import uuid4

from pydantic import Extra

from langchain.docstore.document import Document
from langchain.schema import BaseR...
https://python.langchain.com/en/latest/_modules/langchain/retrievers/weaviate_hybrid_search.html
cf8c73c7312a-1
"properties": [{"name": self._text_key, "dataType": ["text"]}],
            "vectorizer": "text2vec-openai",
        }
        if not self._client.schema.exists(self._index_name):
            self._client.schema.create_class(class_obj)

[docs]    class Config:
        """Configuration for this pydantic object."""
        ...
https://python.langchain.com/en/latest/_modules/langchain/retrievers/weaviate_hybrid_search.html
cf8c73c7312a-2
if "errors" in result:
            raise ValueError(f"Error during query: {result['errors']}")
        docs = []
        for res in result["data"]["Get"][self._index_name]:
            text = res.pop(self._text_key)
            docs.append(Document(page_content=text, metadata=res))
        return docs

[docs]    async d...
https://python.langchain.com/en/latest/_modules/langchain/retrievers/weaviate_hybrid_search.html
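The result-parsing loop above can be sketched as a standalone function: each hit's text property becomes the page content and the remaining properties become metadata (plain dicts stand in for LangChain `Document` objects here):

```python
def parse_weaviate_result(result: dict, index_name: str, text_key: str) -> list:
    # Weaviate GraphQL responses report failures in an "errors" key
    # rather than raising, so check it explicitly.
    if "errors" in result:
        raise ValueError(f"Error during query: {result['errors']}")
    docs = []
    for res in result["data"]["Get"][index_name]:
        # pop() removes the text field so only the other
        # properties remain as metadata.
        text = res.pop(text_key)
        docs.append({"page_content": text, "metadata": res})
    return docs
```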
5627c3dda3ad-0
Source code for langchain.retrievers.databerry

from typing import List, Optional

import aiohttp
import requests

from langchain.schema import BaseRetriever, Document

[docs]class DataberryRetriever(BaseRetriever):
    datastore_url: str
    top_k: Optional[int]
    api_key: Optional[str]

    def __init__(
        self,
        ...
https://python.langchain.com/en/latest/_modules/langchain/retrievers/databerry.html
5627c3dda3ad-1
self.datastore_url,
            json={
                "query": query,
                **({"topK": self.top_k} if self.top_k is not None else {}),
            },
            headers={
                "Content-Type": "application/json",
                **(
                    {"Authorizat...
https://python.langchain.com/en/latest/_modules/langchain/retrievers/databerry.html
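The request construction above builds both parts conditionally: `topK` is included only when set, and an auth header only when an API key is present. A standalone sketch of that pattern follows; the `Bearer` token format is an assumption, since the original header expression is truncated:

```python
from typing import Optional

def build_databerry_request(
    query: str, top_k: Optional[int], api_key: Optional[str]
) -> tuple[dict, dict]:
    json_body = {
        "query": query,
        # Dict-unpacking of a conditional dict adds the key only when set.
        **({"topK": top_k} if top_k is not None else {}),
    }
    headers = {
        "Content-Type": "application/json",
        # Assumed header shape; the original source is cut off here.
        **({"Authorization": f"Bearer {api_key}"} if api_key is not None else {}),
    }
    return json_body, headers
```

Unpacking an empty dict for the "absent" case keeps the literal readable compared to mutating the dict after construction.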
ebaf8b5f22fb-0
Source code for langchain.retrievers.chatgpt_plugin_retriever

from __future__ import annotations

from typing import List, Optional

import aiohttp
import requests
from pydantic import BaseModel

from langchain.schema import BaseRetriever, Document

[docs]class ChatGPTPluginRetriever(BaseRetriever, BaseModel):
    url: str...
https://python.langchain.com/en/latest/_modules/langchain/retrievers/chatgpt_plugin_retriever.html
ebaf8b5f22fb-1
docs = []
        for d in results:
            content = d.pop("text")
            docs.append(Document(page_content=content, metadata=d))
        return docs

    def _create_request(self, query: str) -> tuple[str, dict, dict]:
        url = f"{self.url}/query"
        json = {
            "queries": [
                ...
https://python.langchain.com/en/latest/_modules/langchain/retrievers/chatgpt_plugin_retriever.html
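`_create_request` above targets the plugin's `/query` route with a body containing a `"queries"` list. A hedged standalone sketch follows; the shape of each query entry (`"query"` and `"top_k"` fields) is an assumption, since the original body is truncated:

```python
def create_plugin_request(base_url: str, query: str, top_k: int = 3) -> tuple:
    # The /query path mirrors the route shown in _create_request above.
    url = f"{base_url}/query"
    # Assumed entry shape: the original "queries" list contents are cut off.
    json_body = {"queries": [{"query": query, "top_k": top_k}]}
    return url, json_body
```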
ece4ee365a95-0
Source code for langchain.retrievers.arxiv

from typing import List

from langchain.schema import BaseRetriever, Document
from langchain.utilities.arxiv import ArxivAPIWrapper

[docs]class ArxivRetriever(BaseRetriever, ArxivAPIWrapper):
    """
    It is effectively a wrapper for ArxivAPIWrapper.

    It wraps load() to ge...
https://python.langchain.com/en/latest/_modules/langchain/retrievers/arxiv.html
1181cc50c539-0
Source code for langchain.retrievers.metal

from typing import Any, List, Optional

from langchain.schema import BaseRetriever, Document

[docs]class MetalRetriever(BaseRetriever):
    def __init__(self, client: Any, params: Optional[dict] = None):
        from metal_sdk.metal import Metal

        if not isinstance(client...
https://python.langchain.com/en/latest/_modules/langchain/retrievers/metal.html
43a0c102b9ba-0
Source code for langchain.retrievers.document_compressors.embeddings_filter

"""Document compressor that uses embeddings to drop documents unrelated to the query."""
from typing import Callable, Dict, Optional, Sequence

import numpy as np
from pydantic import root_validator

from langchain.document_transformers import (
    ...
https://python.langchain.com/en/latest/_modules/langchain/retrievers/document_compressors/embeddings_filter.html
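The idea behind the embeddings filter (drop documents whose embedding is not similar enough to the query embedding) can be sketched standalone with NumPy; this is an illustrative version, not the `EmbeddingsFilter` class itself, and the 0.5 default threshold is an arbitrary choice for the sketch:

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    # Cosine of the angle between two embedding vectors, in [-1, 1].
    return float(a.dot(b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def filter_by_embedding(
    doc_embeds: list, query_embed: np.ndarray, threshold: float = 0.5
) -> list:
    # Keep the indices of documents whose similarity to the query
    # clears the threshold; callers then select those documents.
    return [
        i
        for i, d in enumerate(doc_embeds)
        if cosine_similarity(np.asarray(d, dtype=float), query_embed) >= threshold
    ]
```

Because only embeddings are compared, this filter is much cheaper than an LLM-based compressor, at the cost of cruder relevance judgments.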