| id | text | source |
|---|---|---|
ed7680a2cece-0 | AWS S3 File
Amazon Simple Storage Service (Amazon S3) is an object storage service.
AWS S3 Buckets
This covers how to load document objects from an AWS S3 File object.
from langchain.document_loaders import S3FileLoader
loader = S3FileLoader("testing-hwc", "fake.docx")
[Document(page_content='Lorem ipsum dolor sit amet... | https://python.langchain.com/docs/integrations/document_loaders/aws_s3_file |
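All of these loaders share the same shape: construct with a location, call `load()`, get back `Document` objects with `page_content` and `metadata`. A minimal sketch of that interface (the `Document` fields mirror LangChain's, but `FakeFileLoader` and its in-memory `store` are made-up stand-ins, not the real S3 client):

```python
from dataclasses import dataclass, field

@dataclass
class Document:
    page_content: str
    metadata: dict = field(default_factory=dict)

class FakeFileLoader:
    """Stand-in for a loader like S3FileLoader: takes a location, returns Documents."""
    def __init__(self, bucket, key, store):
        self.bucket, self.key, self.store = bucket, key, store

    def load(self):
        # A real loader would download the object from S3 and parse it here.
        text = self.store[(self.bucket, self.key)]
        return [Document(page_content=text,
                         metadata={"source": f"s3://{self.bucket}/{self.key}"})]

store = {("testing-hwc", "fake.docx"): "Lorem ipsum dolor sit amet"}
docs = FakeFileLoader("testing-hwc", "fake.docx", store).load()
print(docs[0].page_content)
print(docs[0].metadata["source"])
```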
ad386b61abf7-0 | This covers how to load AZLyrics webpages into a document format that we can use downstream. | https://python.langchain.com/docs/integrations/document_loaders/azlyrics |
ad386b61abf7-1 | [Document(page_content="Miley Cyrus - Flowers Lyrics | AZLyrics.com\n\r\nWe were good, we were gold\nKinda dream that can't be sold\nWe were right till we weren't\nBuilt a home and watched it burn\n\nI didn't wanna leave you\nI didn't wanna lie\nStarted to cry but then remembered I\n\nI can buy myself flowers\nWrite my... | https://python.langchain.com/docs/integrations/document_loaders/azlyrics |
6691ea7ca732-0 | Azure Blob Storage Container
Azure Blob Storage is Microsoft's object storage solution for the cloud. Blob Storage is optimized for storing massive amounts of unstructured data. Unstructured data is data that doesn't adhere to a particular data model or definition, such as text or binary data.
Azure Blob Storage is des... | https://python.langchain.com/docs/integrations/document_loaders/azure_blob_storage_container |
0c8db1584906-0 | Azure Blob Storage File
Azure Files offers fully managed file shares in the cloud that are accessible via the industry standard Server Message Block (SMB) protocol, Network File System (NFS) protocol, and Azure Files REST API.
This covers how to load document objects from Azure Files.
#!pip install azure-storage-blob... | https://python.langchain.com/docs/integrations/document_loaders/azure_blob_storage_file |
5af53a63cf8c-0 | Azure Document Intelligence
Azure Document Intelligence (formerly known as Azure Form Recognizer) is a machine-learning-based service that extracts text (including handwriting), tables, and key-value pairs from scanned documents or images.
This current implementation of a loader using Document Intelligence is able to inco... | https://python.langchain.com/docs/integrations/document_loaders/azure_document_intelligence |
163defdee230-0 | LarkSuite (FeiShu)
LarkSuite is an enterprise collaboration platform developed by ByteDance.
This notebook covers how to load data from the LarkSuite REST API into a format that can be ingested into LangChain, along with example usage for text summarization.
The LarkSuite API requires an access token (tenant_access_tok... | https://python.langchain.com/docs/integrations/document_loaders/larksuite |
0ff4c62ec5b1-0 | Mastodon
Mastodon is a federated social media and social networking service.
This loader fetches the text from the "toots" of a list of Mastodon accounts, using the Mastodon.py Python package.
Public accounts can be queried by default without any authentication. If non-public accounts or instances are queried, you hav... | https://python.langchain.com/docs/integrations/document_loaders/mastodon |
ab2958dd6fb6-0 | MediaWiki XML dumps contain the content of a wiki (wiki pages with all their revisions), without the site-related data. An XML dump does not create a full backup of the wiki database: the dump does not contain user accounts, images, edit logs, etc.
This covers how to load a MediaWiki XML dump file into a document format... | https://python.langchain.com/docs/integrations/document_loaders/mediawikidump |
ab2958dd6fb6-1 | Document(page_content='Description\nThis template is used to insert descriptions on template pages.\n\nSyntax\nAdd <noinclude></noinclude> at the end of the template page.\n\nAdd <noinclude></noinclude> to transclude an alternative page from the /doc subpage.\n\nUsage\n\nOn the Template page\nThis is the normal format ... | https://python.langchain.com/docs/integrations/document_loaders/mediawikidump |
ab2958dd6fb6-2 | Document(page_content='\t\n\t\t \n\t\n\t\t Aliases\n\t Relatives\n\t Affiliation\n Occupation\n \n Biographical information\n Marital status\n \tDate of birth\n Place of birth\n Date of death\n Place of death\n \n Physical description\n Species\n Gender\n Height\n Weight\n Eye color\n\t\n Appearances\n Portrayed by\n A... | https://python.langchain.com/docs/integrations/document_loaders/mediawikidump |
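The core of loading such a dump is walking the XML and pulling each page's title and revision text. A minimal sketch with the standard library (the inline `dump` string is a made-up miniature; real MediaWiki dumps use a versioned XML namespace that would need to be handled too):

```python
import xml.etree.ElementTree as ET

# Tiny stand-in for a MediaWiki XML dump; real exports carry a namespace.
dump = """<mediawiki>
  <page>
    <title>Template:Description</title>
    <revision><text>This template is used to insert descriptions.</text></revision>
  </page>
</mediawiki>"""

root = ET.fromstring(dump)
pages = [
    (page.findtext("title"), page.findtext("revision/text"))
    for page in root.iter("page")
]
print(pages)
```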
f4a3f2b7ebab-0 | MergeDocLoader
Merge the documents returned from a set of specified data loaders.
from langchain.document_loaders import WebBaseLoader
loader_web = WebBaseLoader(
    "https://github.com/basecamp/handbook/blob/master/37signals-is-you.md"
)
from langchain.document_loaders import PyPDFLoader
loader_pdf = PyPDFLoader("../Ma... | https://python.langchain.com/docs/integrations/document_loaders/merge_doc_loader |
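The merge behavior itself is just concatenation of each loader's output, in order. A sketch of that mechanism (the `MergedLoader` and `ListLoader` classes here are illustrative stand-ins, not LangChain's implementation):

```python
class MergedLoader:
    """Sketch of MergeDocLoader's behavior: concatenate documents from several loaders."""
    def __init__(self, loaders):
        self.loaders = loaders

    def load(self):
        docs = []
        for loader in self.loaders:
            docs.extend(loader.load())
        return docs

class ListLoader:
    """Stand-in for any loader (WebBaseLoader, PyPDFLoader, ...)."""
    def __init__(self, docs):
        self._docs = docs
    def load(self):
        return list(self._docs)

merged = MergedLoader([ListLoader(["web-doc"]), ListLoader(["pdf-p1", "pdf-p2"])]).load()
print(merged)
```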
1e32b548efb5-0 | MHTML, sometimes referred to as MHT, stands for MIME HTML: a single file in which an entire webpage is archived. It is used both for emails and for archived webpages. When a webpage is saved in MHTML format, the file contains the page's HTML code along with images, audio files, Flash animations, etc.
page_content='La... | https://python.langchain.com/docs/integrations/document_loaders/mhtml |
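Because MHTML is plain MIME, the standard-library `email` module can pull the HTML part out of an archive. A sketch under that assumption (the inline `mhtml` string is a made-up miniature archive):

```python
from email import message_from_string

# Minimal made-up MHTML archive: a multipart/related message.
mhtml = """MIME-Version: 1.0
Content-Type: multipart/related; boundary="BOUND"

--BOUND
Content-Type: text/html

<html><body><h1>LangChain</h1></body></html>
--BOUND
Content-Type: image/png

(binary image bytes would go here)
--BOUND--
"""

msg = message_from_string(mhtml)
# Walk all MIME parts and keep only the HTML payloads.
html_parts = [
    part.get_payload()
    for part in msg.walk()
    if part.get_content_type() == "text/html"
]
print(html_parts[0].strip())
```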
42d7665ecaaf-0 | Microsoft OneDrive
Microsoft OneDrive (formerly SkyDrive) is a file hosting service operated by Microsoft.
This notebook covers how to load documents from OneDrive. Currently, only docx, doc, and pdf files are supported.
Prerequisites
Register an application by following the Microsoft identity platform instructions.
When regi... | https://python.langchain.com/docs/integrations/document_loaders/microsoft_onedrive |
42d7665ecaaf-1 | os.environ['O365_CLIENT_SECRET'] = "YOUR CLIENT SECRET"
This loader uses an authentication flow called on behalf of a user. It is a two-step authentication with user consent. When you instantiate the loader, it will print a URL that the user must visit to give the app consent on the required permissions. The user... | https://python.langchain.com/docs/integrations/document_loaders/microsoft_onedrive |
42d7665ecaaf-2 | loader = OneDriveLoader(drive_id="YOUR DRIVE ID")
Once authentication is done, the loader will store a token (o365_token.txt) in the ~/.credentials/ folder. This token can be used later to authenticate without the copy/paste steps explained earlier. To use this token for authentication, you need to change the a... | https://python.langchain.com/docs/integrations/document_loaders/microsoft_onedrive |
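The token-reuse step boils down to a check-cache-then-run-flow pattern. A sketch of that logic (the `get_token` helper and the lambda flows are hypothetical, not the O365 library's API):

```python
from pathlib import Path
import tempfile

def get_token(token_path, interactive_flow):
    """Reuse a cached token if present; otherwise run the consent flow and cache it."""
    if token_path.exists():
        return token_path.read_text()
    token = interactive_flow()  # would print a URL and wait for the pasted response
    token_path.write_text(token)
    return token

with tempfile.TemporaryDirectory() as d:
    path = Path(d) / "o365_token.txt"
    first = get_token(path, lambda: "tok-123")   # runs the flow, caches the token
    second = get_token(path, lambda: "tok-456")  # cache hit, flow is skipped
    print(first, second)
```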
823744774acd-0 | Airbyte Gong
Airbyte is a data integration platform for ELT pipelines from APIs, databases & files to warehouses & lakes. It has the largest catalog of ELT connectors to data warehouses and databases.
This loader exposes the Gong connector as a document loader, allowing you to load various Gong objects as documents.
In... | https://python.langchain.com/docs/integrations/document_loaders/airbyte_gong |
823744774acd-1 | loader = AirbyteGongLoader(config=config, record_handler=handle_record, stream_name="calls")
docs = loader.load()
Incremental loads
Some streams allow incremental loading: the source keeps track of synced records and won't load them again. This is useful for sources that have a high volume of data and are u... | https://python.langchain.com/docs/integrations/document_loaders/airbyte_gong |
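Incremental loading works by persisting a cursor (typically an updated-at value) between syncs and skipping records at or before it. A sketch of that mechanism (the `sync` function and record shape are illustrative, not the Airbyte connector's actual state format):

```python
def sync(records, state):
    """Sketch of incremental loading: skip records at or before the stored cursor."""
    cursor = state.get("cursor", 0)
    new = [r for r in records if r["updated_at"] > cursor]
    if new:
        state["cursor"] = max(r["updated_at"] for r in new)
    return new, state

records = [{"id": "a", "updated_at": 1}, {"id": "b", "updated_at": 2}]
first, state = sync(records, {})                                   # initial full sync
second, state = sync(records + [{"id": "c", "updated_at": 3}], state)  # only the new record
print([r["id"] for r in first], [r["id"] for r in second])
```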
37f4adaf18cc-0 | Microsoft PowerPoint
Microsoft PowerPoint is a presentation program by Microsoft.
This covers how to load Microsoft PowerPoint documents into a document format that we can use downstream.
from langchain.document_loaders import UnstructuredPowerPointLoader
loader = UnstructuredPowerPointLoader("example_data/fake-power-p... | https://python.langchain.com/docs/integrations/document_loaders/microsoft_powerpoint |
3c7c08aa6eea-0 | Microsoft SharePoint
Microsoft SharePoint is a website-based collaboration system, developed by Microsoft, that uses workflow applications, “list” databases, and other web parts and security features to empower business teams to work together.
This notebook covers how to load documents from the SharePoint Document Librar... | https://python.langchain.com/docs/integrations/document_loaders/microsoft_sharepoint |
3c7c08aa6eea-1 | Visit the Graph Explorer Playground to obtain your Document Library ID. The first step is to ensure you are logged in with the account associated with your SharePoint site. Then you need to make a request to https://graph.microsoft.com/v1.0/sites/<SharePoint site ID>/drive and the response will return a payload with a ... | https://python.langchain.com/docs/integrations/document_loaders/microsoft_sharepoint |
3c7c08aa6eea-2 | loader = SharePointLoader(document_library_id="YOUR DOCUMENT LIBRARY ID")
Once authentication is done, the loader will store a token (o365_token.txt) in the ~/.credentials/ folder. This token can be used later to authenticate without the copy/paste steps explained earlier. To use this token for authentication, ... | https://python.langchain.com/docs/integrations/document_loaders/microsoft_sharepoint |
38a4478e5aa6-0 | Microsoft Word
Microsoft Word is a word processor developed by Microsoft.
This covers how to load Word documents into a document format that we can use downstream.
Using Docx2txt
Load .docx using Docx2txt into a document.
from langchain.document_loaders import Docx2txtLoader
loader = Docx2txtLoader("example_data/fake.... | https://python.langchain.com/docs/integrations/document_loaders/microsoft_word |
4c8b958a06a5-0 | Modern Treasury
Modern Treasury simplifies complex payment operations. It is a unified platform to power products and processes that move money.
Connect to banks and payment systems
Track transactions and balances in real-time
Automate payment operations for scale
This notebook covers how to load data from the Modern T... | https://python.langchain.com/docs/integrations/document_loaders/modern_treasury |
9c7f7377e378-0 | This covers how to load HTML news articles from a list of URLs into a document format that we can use downstream.
First article: page_content='In testimony to the congressional committee examining the 6 January riot, Mrs Powell said she did not review all of the many claims of election fraud she made, telling them that... | https://python.langchain.com/docs/integrations/document_loaders/news |
9c7f7377e378-1 | Second article: page_content='Ms Williams added: "If there\'s anything that I can do in my power to ensure that dancers or singers or whoever decides to work with her don\'t have to go through that same experience, I\'m going to do that."' metadata={'title': "Lizzo dancers Arianna Davis and Crystal Williams: 'No one sp... | https://python.langchain.com/docs/integrations/document_loaders/news |
acac23157ddd-0 | Notion DB 1/2
Notion is a collaboration platform with modified Markdown support that integrates kanban boards, tasks, wikis and databases. It is an all-in-one workspace for notetaking, knowledge and data management, and project and task management.
This notebook covers how to load documents from a Notion database dump.... | https://python.langchain.com/docs/integrations/document_loaders/notion |
e2a1667f70d5-0 | Notion DB 2/2
Notion is a collaboration platform with modified Markdown support that integrates kanban boards, tasks, wikis and databases. It is an all-in-one workspace for notetaking, knowledge and data management, and project and task management.
NotionDBLoader is a Python class for loading content from a Notion data... | https://python.langchain.com/docs/integrations/document_loaders/notiondb |
e2a1667f70d5-1 | Click on the three-dot menu icon in the top right corner of the database view.
Select "Copy link" from the menu to copy the database URL to your clipboard.
The database ID is the long string of alphanumeric characters found in the URL. It typically looks like this: https://www.notion.so/username/8935f9d140a04f95a872520... | https://python.langchain.com/docs/integrations/document_loaders/notiondb |
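Extracting that ID from a copied link is a one-line pattern match on the 32-character hex run. A sketch (the URL below is hypothetical example data, not a real workspace):

```python
import re

def database_id_from_url(url):
    """Pull the 32-char hex database ID out of a copied Notion database link."""
    match = re.search(r"([0-9a-f]{32})", url)
    if not match:
        raise ValueError("no database ID found in URL")
    return match.group(1)

# Hypothetical URL; the ID is the 32 hex characters before the ?v= query string.
url = "https://www.notion.so/myworkspace/0123456789abcdef0123456789abcdef?v=00000000000000000000000000000000"
print(database_id_from_url(url))
```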
e2a1667f70d5-2 | NOTION_TOKEN = getpass()
DATABASE_ID = getpass()
from langchain.document_loaders import NotionDBLoader
loader = NotionDBLoader(
    integration_token=NOTION_TOKEN,
    database_id=DATABASE_ID,
    request_timeout_sec=30,  # optional, defaults to 10
) | https://python.langchain.com/docs/integrations/document_loaders/notiondb |
84e1a0d1b6da-0 | Nuclia automatically indexes your unstructured data from any internal and external source, providing optimized search results and generative answers. It can handle video and audio transcription, image content extraction, and document parsing.
The Nuclia Understanding API supports the processing of unstructured data, in... | https://python.langchain.com/docs/integrations/document_loaders/nuclia |
84e1a0d1b6da-1 | pending = True
while pending:
    time.sleep(15)
    docs = loader.load()
    if len(docs) > 0:
        print(docs[0].page_content)
        print(docs[0].metadata)
        pending = False
    else:
        print("waiting...")
Retrieved information
Nuclia returns the following information:
file metadata
extracted text
nested text (like text in an embedded image)
par... | https://python.langchain.com/docs/integrations/document_loaders/nuclia |
a40e03caab87-0 | This notebook covers how to load documents from an Obsidian database.
Since Obsidian is just stored on disk as a folder of Markdown files, the loader just takes a path to this directory.
Obsidian files also sometimes contain metadata, which is a YAML block at the top of the file. These values will be added to the docume... | https://python.langchain.com/docs/integrations/document_loaders/obsidian |
d18c78603de1-0 | That provides you with the dataset identifier.
Use the dataset identifier to grab specific tables for a given city_id (data.sfgov.org) -
{'pdid': '4133422003074',
'incidntnum': '041334220',
'incident_code': '03074',
'category': 'ROBBERY',
'descript': 'ROBBERY, BODILY FORCE',
'dayofweek': 'Monday',
'date': '2004-11-22T... | https://python.langchain.com/docs/integrations/document_loaders/open_city_data |
4670f2329926-0 | You can load data from Org-mode files with UnstructuredOrgModeLoader using the following workflow.
page_content='Example Docs' metadata={'source': 'example_data/README.org', 'filename': 'README.org', 'file_directory': 'example_data', 'filetype': 'text/org', 'page_number': 1, 'category': 'Title'} | https://python.langchain.com/docs/integrations/document_loaders/org_mode |
8844a5e6b8bd-0 | The Open Document Format for Office Applications (ODF), also known as OpenDocument, is an open file format for word processing documents, spreadsheets, presentations, and graphics, using ZIP-compressed XML files. It was developed with the aim of providing an open, XML-based file format specification for office applic... | https://python.langchain.com/docs/integrations/document_loaders/odt |
c87352a7bcaa-0 | [Document(page_content='Nationals', metadata={' "Payroll (millions)"': 81.34, ' "Wins"': 98}),
Document(page_content='Reds', metadata={' "Payroll (millions)"': 82.2, ' "Wins"': 97}),
Document(page_content='Yankees', metadata={' "Payroll (millions)"': 197.96, ' "Wins"': 95}),
Document(page_content='Giants', metadata={' ... | https://python.langchain.com/docs/integrations/document_loaders/pandas_dataframe |
c87352a7bcaa-1 | Document(page_content='Diamondbacks', metadata={' "Payroll (millions)"': 74.28, ' "Wins"': 81}),
Document(page_content='Pirates', metadata={' "Payroll (millions)"': 63.43, ' "Wins"': 79}),
Document(page_content='Padres', metadata={' "Payroll (millions)"': 55.24, ' "Wins"': 76}),
Document(page_content='Mariners', metada... | https://python.langchain.com/docs/integrations/document_loaders/pandas_dataframe |
c87352a7bcaa-2 | page_content='Yankees' metadata={' "Payroll (millions)"': 197.96, ' "Wins"': 95}
page_content='Giants' metadata={' "Payroll (millions)"': 117.62, ' "Wins"': 94}
page_content='Braves' metadata={' "Payroll (millions)"': 83.31, ' "Wins"': 94}
page_content='Athletics' metadata={' "Payroll (millions)"': 55.37, ' "Wins"': 94... | https://python.langchain.com/docs/integrations/document_loaders/pandas_dataframe |
c87352a7bcaa-3 | page_content='Padres' metadata={' "Payroll (millions)"': 55.24, ' "Wins"': 76}
page_content='Mariners' metadata={' "Payroll (millions)"': 81.97, ' "Wins"': 75}
page_content='Mets' metadata={' "Payroll (millions)"': 93.35, ' "Wins"': 74}
page_content='Blue Jays' metadata={' "Payroll (millions)"': 75.48, ' "Wins"': 73}
p... | https://python.langchain.com/docs/integrations/document_loaders/pandas_dataframe |
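The row-to-Document conversion shown above is simple to state: the chosen column becomes `page_content` and every other column becomes metadata. A sketch of that mapping with plain dicts (the `rows_to_documents` helper is illustrative, not DataFrameLoader itself, and the column names are taken from the example output):

```python
def rows_to_documents(rows, page_content_column):
    """One document per row: chosen column as text, remaining columns as metadata."""
    docs = []
    for row in rows:
        metadata = {k: v for k, v in row.items() if k != page_content_column}
        docs.append({"page_content": row[page_content_column], "metadata": metadata})
    return docs

rows = [
    {"Team": "Nationals", "Payroll (millions)": 81.34, "Wins": 98},
    {"Team": "Reds", "Payroll (millions)": 82.2, "Wins": 97},
]
docs = rows_to_documents(rows, page_content_column="Team")
print(docs[0])
```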
29c9a4621221-0 | [Document(page_content='Nationals', metadata={' "Payroll (millions)"': 81.34, ' "Wins"': 98}),
Document(page_content='Reds', metadata={' "Payroll (millions)"': 82.2, ' "Wins"': 97}),
Document(page_content='Yankees', metadata={' "Payroll (millions)"': 197.96, ' "Wins"': 95}),
Document(page_content='Giants', metadata={' ... | https://python.langchain.com/docs/integrations/document_loaders/polars_dataframe |
29c9a4621221-1 | Document(page_content='Diamondbacks', metadata={' "Payroll (millions)"': 74.28, ' "Wins"': 81}),
Document(page_content='Pirates', metadata={' "Payroll (millions)"': 63.43, ' "Wins"': 79}),
Document(page_content='Padres', metadata={' "Payroll (millions)"': 55.24, ' "Wins"': 76}),
Document(page_content='Mariners', metada... | https://python.langchain.com/docs/integrations/document_loaders/polars_dataframe |
29c9a4621221-2 | page_content='Yankees' metadata={' "Payroll (millions)"': 197.96, ' "Wins"': 95}
page_content='Giants' metadata={' "Payroll (millions)"': 117.62, ' "Wins"': 94}
page_content='Braves' metadata={' "Payroll (millions)"': 83.31, ' "Wins"': 94}
page_content='Athletics' metadata={' "Payroll (millions)"': 55.37, ' "Wins"': 94... | https://python.langchain.com/docs/integrations/document_loaders/polars_dataframe |
29c9a4621221-3 | page_content='Padres' metadata={' "Payroll (millions)"': 55.24, ' "Wins"': 76}
page_content='Mariners' metadata={' "Payroll (millions)"': 81.97, ' "Wins"': 75}
page_content='Mets' metadata={' "Payroll (millions)"': 93.35, ' "Wins"': 74}
page_content='Blue Jays' metadata={' "Payroll (millions)"': 75.48, ' "Wins"': 73}
p... | https://python.langchain.com/docs/integrations/document_loaders/polars_dataframe |
b939c9f05eec-0 | Amazon Textract is a machine learning (ML) service that automatically extracts text, handwriting, and data from scanned documents. It goes beyond simple optical character recognition (OCR) to identify, understand, and extract data from forms and tables. Today, many companies manually extract data from scanned documents... | https://python.langchain.com/docs/integrations/document_loaders/pdf-amazonTextractPDFLoader |
b939c9f05eec-1 | ' The authors are Zejiang Shen, Ruochen Zhang, Melissa Dell, Benjamin Charles Germain Lee, Jacob Carlson, Weining Li, Gardner, M., Grus, J., Neumann, M., Tafjord, O., Dasigi, P., Liu, N., Peters, M., Schmitz, M., Zettlemoyer, L., Lukasz Garncarek, Powalski, R., Stanislawek, T., Topolski, B., Halama, P., Gralinski, F., ... | https://python.langchain.com/docs/integrations/document_loaders/pdf-amazonTextractPDFLoader |
a78b4d558ff4-0 | Psychic
This notebook covers how to load documents from Psychic. See here for more details.
Prerequisites
Follow the Quick Start section in this document
Log into the Psychic dashboard and get your secret key
Install the frontend react library into your web app and have a user authenticate a connection. The connection... | https://python.langchain.com/docs/integrations/document_loaders/psychic |
39fb7e1dd258-0 | PubMed® by The National Center for Biotechnology Information, National Library of Medicine comprises more than 35 million citations for biomedical literature from MEDLINE, life science journals, and online books. Citations may include links to full text content from PubMed Central and publisher web sites.
"BACKGROUND: ... | https://python.langchain.com/docs/integrations/document_loaders/pubmed |
17dc26ab8548-0 | [Stage 8:> (0 + 1) / 1] | https://python.langchain.com/docs/integrations/document_loaders/pyspark_dataframe |
17dc26ab8548-1 | [Document(page_content='Nationals', metadata={' "Payroll (millions)"': ' 81.34', ' "Wins"': ' 98'}),
Document(page_content='Reds', metadata={' "Payroll (millions)"': ' 82.20', ' "Wins"': ' 97'}),
Document(page_content='Yankees', metadata={' "Payroll (millions)"': ' 197.96', ' "Wins"': ' 95'}),
Document(page_content='Gi... | https://python.langchain.com/docs/integrations/document_loaders/pyspark_dataframe |
17dc26ab8548-2 | Document(page_content='Brewers', metadata={' "Payroll (millions)"': ' 97.65', ' "Wins"': ' 83'}),
Document(page_content='Phillies', metadata={' "Payroll (millions)"': ' 174.54', ' "Wins"': ' 81'}),
Document(page_content='Diamondbacks', metadata={' "Payroll (millions)"': ' 74.28', ' "Wins"': ' 81'}),
Document(page_conte... | https://python.langchain.com/docs/integrations/document_loaders/pyspark_dataframe |
17dc26ab8548-3 | Document(page_content='Cubs', metadata={' "Payroll (millions)"': ' 88.19', ' "Wins"': ' 61'}),
Document(page_content='Astros', metadata={' "Payroll (millions)"': ' 60.65', ' "Wins"': ' 55'})] | https://python.langchain.com/docs/integrations/document_loaders/pyspark_dataframe |
94659706adbc-0 | ReadTheDocs Documentation
Read the Docs is an open-source platform for hosting free software documentation. It generates documentation written with the Sphinx documentation generator.
This notebook covers how to load content from HTML that was generated as part of a Read-The-Docs build.
For an example of this in the wild,... | https://python.langchain.com/docs/integrations/document_loaders/readthedocs_documentation |
9522ea991bb9-0 | This loader fetches the text from the Posts of Subreddits or Reddit users, using the praw Python package.
Make a Reddit Application and initialize the loader with your Reddit API credentials.
# load using 'subreddit' mode
loader = RedditPostsLoader(
    client_id="YOUR CLIENT ID",
    client_secret="YOUR CLIENT SECRET",
    u... | https://python.langchain.com/docs/integrations/document_loaders/reddit |
9522ea991bb9-1 | # Note: Categories can be only of following value - "controversial" "hot" "new" "rising" "top"
[Document(page_content='Hello, I am not looking for investment advice. I will apply my own due diligence. However, I am interested if anyone knows as a UK resident how fees and exchange rate differences would impact performan... | https://python.langchain.com/docs/integrations/document_loaders/reddit |
9522ea991bb9-2 | Document(page_content='Have a general question? Want to offer some commentary on markets? Maybe you would just like to throw out a neat fact that doesn\'t warrant a self post? Feel free to post here! \n\nIf your question is "I have $10,000, what do I do?" or other "advice for my personal situation" questions, you shoul... | https://python.langchain.com/docs/integrations/document_loaders/reddit |
9522ea991bb9-3 | 'post_score': 5, 'post_id': '130eszz', 'post_url': 'https://www.reddit.com/r/investing/comments/130eszz/daily_general_discussion_and_advice_thread_april/', 'post_author': Redditor(name='AutoModerator')}), | https://python.langchain.com/docs/integrations/document_loaders/reddit |
9522ea991bb9-4 | Document(page_content="Based on recent news about salt battery advancements and the overall issues of lithium, I was wondering what would be feasible ways to invest into non-lithium based battery technologies? CATL is of course a choice, but the selection of brokers I currently have in my disposal don't provide HK stoc... | https://python.langchain.com/docs/integrations/document_loaders/reddit |
7457745be8f4-0 | Recursive URL Loader
We may want to load all URLs under a root directory.
For example, let's look at the Python 3.9 documentation.
This has many interesting child pages that we may want to read in bulk.
Of course, the WebBaseLoader can load a list of pages.
But, the challenge is traversing the tree of child pages a... | https://python.langchain.com/docs/integrations/document_loaders/recursive_url_loader |
7457745be8f4-1 | url = "https://docs.python.org/3.9/"
loader = RecursiveUrlLoader(url=url, max_depth=2, extractor=lambda x: Soup(x, "html.parser").text)
docs = loader.load()
docs[0].page_content[:50]
'\n\n\n\n\nPython Frequently Asked Questions — Python 3.'
{'source': 'https://docs.python.org/3.9/library/index.html',
'title': 'The Pyth... | https://python.langchain.com/docs/integrations/document_loaders/recursive_url_loader |
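The traversal behind this can be sketched as a breadth-first walk over child links, stopping at `max_depth`. This is an illustrative mechanism sketch, not RecursiveUrlLoader's code: the `links` dict stands in for fetching and parsing each page, and the child URLs are made up:

```python
def crawl(start, links, max_depth=2):
    """Breadth-first traversal of child links, up to max_depth levels of pages."""
    visited, frontier = {start}, [start]
    for _ in range(max_depth - 1):
        next_frontier = []
        for url in frontier:
            for child in links.get(url, []):  # a real loader would fetch and parse here
                if child not in visited:
                    visited.add(child)
                    next_frontier.append(child)
        frontier = next_frontier
    return visited

links = {
    "https://docs.python.org/3.9/": [
        "https://docs.python.org/3.9/faq/",
        "https://docs.python.org/3.9/library/",
    ],
    "https://docs.python.org/3.9/library/": [
        "https://docs.python.org/3.9/library/os.html",
    ],
}
pages = crawl("https://docs.python.org/3.9/", links, max_depth=2)
print(sorted(pages))
```

With `max_depth=2`, only the root and its direct children are visited; the grandchild page stays out.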
94237a151b8a-0 | This notebook covers how to load documents from a Roam database. This takes a lot of inspiration from the example repo here.
Export your dataset from Roam Research. You can do this by clicking on the three dots in the upper right hand corner and then clicking Export.
When exporting, make sure to select the Markdown & C... | https://python.langchain.com/docs/integrations/document_loaders/roam |
37b0ea33203b-0 | Rockset
Rockset is a real-time analytics database which enables queries on massive, semi-structured data without operational burden. With Rockset, ingested data is queryable within one second and analytical queries against that data typically execute in milliseconds. Rockset is compute optimized, making it suitable for... | https://python.langchain.com/docs/integrations/document_loaders/rockset |
37b0ea33203b-1 | loader = RocksetLoader(
RocksetClient(Regions.usw2a1, "<api key>"),
models.QueryRequestSql(query="SELECT * FROM langchain_demo LIMIT 3"), # SQL query
["text"], # content columns
metadata_keys=["id", "date"], # metadata columns
)
Here, you can see that the following query is run:
SELECT * FROM langchain_demo LIMIT 3
The... | https://python.langchain.com/docs/integrations/document_loaders/rockset |
37b0ea33203b-2 | ),
Document(
page_content="Morbi tortor enim, commodo id efficitur vitae, fringilla nec mi. Nullam molestie faucibus aliquet. Praesent a est facilisis, condimentum justo sit amet, viverra erat. Fusce volutpat nisi vel purus blandit, et facilisis felis accumsan. Phasellus luctus ligula ultrices tellus tempor hendrerit. ... | https://python.langchain.com/docs/integrations/document_loaders/rockset |
37b0ea33203b-3 | loader = RocksetLoader(
RocksetClient(Regions.usw2a1, "<api key>"),
models.QueryRequestSql(query="SELECT * FROM langchain_demo LIMIT 1 WHERE id=38"),
["sentence1", "sentence2"], # TWO content columns
)
Assuming the "sentence1" field is "This is the first sentence." and the "sentence2" field is "This is the second sente... | https://python.langchain.com/docs/integrations/document_loaders/rockset |
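With multiple content columns, the values are joined into a single page_content. A sketch of that joining step (the `join_content_columns` helper and the row data are illustrative, not RocksetLoader's implementation):

```python
def join_content_columns(row, content_columns, joiner="\n".join):
    """Join the values of several content columns into one page_content string."""
    return joiner(str(row[col]) for col in content_columns)

# Example row, matching the shape described above.
row = {
    "id": 38,
    "sentence1": "This is the first sentence.",
    "sentence2": "This is the second sentence.",
}
page_content = join_content_columns(row, ["sentence1", "sentence2"])
print(page_content)
```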
ef01e20c95aa-0 | This covers how to load HTML news articles from a list of RSS feed URLs into a document format that we can use downstream.
You can pass arguments to the NewsURLLoader which it uses to load articles.
You can also use an OPML file such as a Feedly export. Pass in either a URL or the OPML contents. | https://python.langchain.com/docs/integrations/document_loaders/rss |
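An OPML export is just XML whose `outline` nodes carry the feed URLs, so extracting them is a short traversal. A sketch with the standard library (the inline `opml` string and its feed entries are made-up example data):

```python
import xml.etree.ElementTree as ET

# Minimal made-up OPML file, in the shape readers like Feedly export.
opml = """<opml version="1.0">
  <body>
    <outline text="Ars Technica" xmlUrl="https://feeds.arstechnica.com/arstechnica/index" />
    <outline text="Example" xmlUrl="https://example.com/rss.xml" />
  </body>
</opml>"""

root = ET.fromstring(opml)
# Keep only outline nodes that actually carry a feed URL.
feed_urls = [node.get("xmlUrl") for node in root.iter("outline") if node.get("xmlUrl")]
print(feed_urls)
```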
ef01e20c95aa-1 | 'The electric vehicle startup Fisker made a splash in Huntington Beach last night, showing off a range of new EVs it plans to build alongside the Fisker Ocean, which is slowly beginning deliveries in Europe and the US. With shades of Lotus circa 2010, it seems there\'s something for most tastes, with a powerful four-do... | https://python.langchain.com/docs/integrations/document_loaders/rss |
ef01e20c95aa-2 | Henrik Fisker\'s 2012 creation. There\'s no price for this one, but Fisker says its all-wheel drive powertrain will boast 1,000 hp (745 kW) and will hit 60 mph from a standing start in two seconds—just about as fast as modern tires will allow. Expect a massive battery in this one, as Fisker says it\'s targeting a 600-m... | https://python.langchain.com/docs/integrations/document_loaders/rss |
2e988aecfd13-0 | You can load data from RST files with UnstructuredRSTLoader using the following workflow.
page_content='Example Docs' metadata={'source': 'example_data/README.rst', 'filename': 'README.rst', 'file_directory': 'example_data', 'filetype': 'text/x-rst', 'page_number': 1, 'category': 'Title'} | https://python.langchain.com/docs/integrations/document_loaders/rst |
7aba57fdcde6-0 | Extends from the WebBaseLoader, SitemapLoader loads a sitemap from a given URL, and then scrape and load all pages in the sitemap, returning each page as a Document.
The scraping is done concurrently. There are reasonable limits to concurrent requests, defaulting to 2 per second. If you aren't concerned about being a g... | https://python.langchain.com/docs/integrations/document_loaders/sitemap |
7aba57fdcde6-1 | Document(page_content='\n\n\n\n\n\nWelcome to LangChain — 🦜🔗 LangChain 0.0.123\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\nSkip to main content\n\n\n\n\n\n\n\n\n\n\nCtrl+K\n\n\n\n\n\n\n\n\n\n\n\n\n🦜🔗 LangChain 0.0.123\n\n\n\nGetting Started\n\nQuickstart Guide\n\nMod... | https://python.langchain.com/docs/integrations/document_loaders/sitemap |
7aba57fdcde6-2 | File Loader\nURL\nWeb Base\nWord Documents\nYouTube\n\n\n\n\nUtils\nKey Concepts\nGeneric Utilities\nBash\nBing Search\nGoogle Search\nGoogle Serper API\nIFTTT WebHooks\nPython REPL\nRequests\nSearxNG Search API\nSerpAPI\nWolfram Alpha\nZapier Natural Language Actions API\n\n\nReference\nPython REPL\nSerpAPI\nSearxNG S... | https://python.langchain.com/docs/integrations/document_loaders/sitemap |
7aba57fdcde6-3 | Guides\nConversationBufferMemory\nConversationBufferWindowMemory\nEntity Memory\nConversation Knowledge Graph Memory\nConversationSummaryMemory\nConversationSummaryBufferMemory\nConversationTokenBufferMemory\nAdding Memory To an LLMChain\nAdding Memory to a Multi-Input Chain\nAdding Memory to an Agent\nChatGPT Clone\nC... | https://python.langchain.com/docs/integrations/document_loaders/sitemap |
7aba57fdcde6-4 | Gallery\nDeployments\nTracing\nDiscord\nProduction Support\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n.rst\n\n\n\n\n\n\n\n.pdf\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\nWelcome to LangChain\n\n\n\n\n Contents \n\n\n\nGetting Started\nModules\nUse Cases\nReference Docs\nLangChain Ecosy... | https://python.langchain.com/docs/integrations/document_loaders/sitemap |
7aba57fdcde6-5 | search engines, and more. LangChain provides a large collection of common utils to use in your application.\nChains: Chains go beyond just a single LLM call, and are sequences of calls (whether to an LLM or a different utility). LangChain provides a standard interface for chains, lots of integrations with other tools, ... | https://python.langchain.com/docs/integrations/document_loaders/sitemap |
7aba57fdcde6-6 | want to understand how to use LLMs to query data that is stored in a tabular format (csvs, SQL, dataframes, etc) you should read this page.\nEvaluation: Generative models are notoriously hard to evaluate with traditional metrics. One new way of evaluating them is using language models themselves to do the evaluation. L... | https://python.langchain.com/docs/integrations/document_loaders/sitemap |
7aba57fdcde6-7 | Harrison Chase\n\n\n\n\n \n © Copyright 2023, Harrison Chase.\n \n\n\n\n\n Last updated on Mar 24, 2023.\n \n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n', lookup_str='', metadata={'source': 'https://python.langchain.com/en/stable/', 'loc': 'https://python.langchain.com/en/stable/', 'lastmod': '2023-03-24T19:30:54.647430+00:00', 'ch... | https://python.langchain.com/docs/integrations/document_loaders/sitemap |
7aba57fdcde6-8 | Sitemaps can be massive files, with thousands of URLs. Often you don't need every single one of them. You can filter the URLs by passing a list of strings or regex patterns to the url_filter parameter. Only URLs that match one of the patterns will be loaded. | https://python.langchain.com/docs/integrations/document_loaders/sitemap |
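The filtering behavior described above amounts to keeping only URLs that match at least one of the given regex patterns. A generic sketch of that step (the `filter_urls` function and the URL list are illustrative, not the loader's internals):

```python
import re

def filter_urls(urls, patterns):
    """Keep only URLs matching at least one of the given regex patterns."""
    compiled = [re.compile(p) for p in patterns]
    return [u for u in urls if any(rx.search(u) for rx in compiled)]

urls = [
    "https://python.langchain.com/en/latest/",
    "https://python.langchain.com/en/latest/modules/chains.html",
    "https://python.langchain.com/en/stable/",
]
kept = filter_urls(urls, [r"/en/latest/"])
print(kept)
```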
7aba57fdcde6-9 | Document(page_content='\n\n\n\n\n\nWelcome to LangChain — 🦜🔗 LangChain 0.0.123\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\nSkip to main content\n\n\n\n\n\n\n\n\n\n\nCtrl+K\n\n\n\n\n\n\n\n\n\n\n\n\n🦜🔗 LangChain 0.0.123\n\n\n\nGetting Started\n\nQuickstart Guide\n\nMod... | https://python.langchain.com/docs/integrations/document_loaders/sitemap |
7aba57fdcde6-10 | to create a custom example selector\nLengthBased ExampleSelector\nMaximal Marginal Relevance ExampleSelector\nNGram Overlap ExampleSelector\nSimilarity ExampleSelector\n\n\nOutput Parsers\nOutput Parsers\nCommaSeparatedListOutputParser\nOutputFixingParser\nPydanticOutputParser\nRetryOutputParser\nStructured Output Pars... | https://python.langchain.com/docs/integrations/document_loaders/sitemap |
7aba57fdcde6-11 | Chain\nAnalyze Document\nChat Index\nGraph QA\nHypothetical Document Embeddings\nQuestion Answering with Sources\nQuestion Answering\nSummarization\nRetrieval Question/Answering\nRetrieval Question Answering with Sources\nVector DB Text Generation\nAPI Chains\nSelf-Critique Chain with Constitutional AI\nBashChain\nLLMC... | https://python.langchain.com/docs/integrations/document_loaders/sitemap |
7aba57fdcde6-12 | REPL\nSerpAPI\nSearxNG Search\nDocstore\nText Splitter\nEmbeddings\nVectorStores\n\n\nChains\nAgents\n\n\n\nEcosystem\n\nLangChain Ecosystem\nAI21 Labs\nAtlasDB\nBanana\nCerebriumAI\nChroma\nCohere\nDeepInfra\nDeep Lake\nForefrontAI\nGoogle Search Wrapper\nGoogle Serper Wrapper\nGooseAI\nGraphsignal\nHazy Research\nHel... | https://python.langchain.com/docs/integrations/document_loaders/sitemap |
7aba57fdcde6-13 | are several main modules that LangChain provides support for.\nFor each module we provide some examples to get started, how-to guides, reference docs, and conceptual guides.\nThese modules are, in increasing order of complexity:\n\nModels: The various model types and model integrations LangChain supports.\nPrompts: Thi... | https://python.langchain.com/docs/integrations/document_loaders/sitemap |
7aba57fdcde6-14 | Summarizing longer documents into shorter, more condensed chunks of information. A type of Data Augmented Generation.\nEvaluation: Generative models are notoriously hard to evaluate with traditional metrics. One new way of evaluating them is using language models themselves to do the evaluation. LangChain provides some... | https://python.langchain.com/docs/integrations/document_loaders/sitemap |
7aba57fdcde6-15 | lookup_str='', metadata={'source': 'https://python.langchain.com/en/latest/', 'loc': 'https://python.langchain.com/en/latest/', 'lastmod': '2023-03-27T22:50:49.790324+00:00', 'changefreq': 'daily', 'priority': '0.9'}, lookup_index=0) | https://python.langchain.com/docs/integrations/document_loaders/sitemap |
7aba57fdcde6-16 | The SitemapLoader uses beautifulsoup4 for the scraping process, and it scrapes every element on the page by default. The SitemapLoader constructor accepts a custom scraping function. This feature can be helpful to tailor the scraping process to your specific needs; for example, you might want to avoid scraping headers ... | https://python.langchain.com/docs/integrations/document_loaders/sitemap |
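A custom scraping function of the kind described can be sketched with the standard library alone. The real loader hands the function parsed page content (a BeautifulSoup object), but the core idea — drop navigation/header/footer content and keep the visible text — is the same. The element names and sample HTML below are illustrative.

```python
from html.parser import HTMLParser

# Stdlib-only sketch of a "strip the chrome" scraping function: ignore text
# inside nav/header/footer (and script/style), keep everything else.
class _TextExtractor(HTMLParser):
    SKIP = {"nav", "header", "footer", "script", "style"}

    def __init__(self):
        super().__init__()
        self.depth = 0      # nesting depth inside skipped elements
        self.chunks = []    # visible text fragments

    def handle_starttag(self, tag, attrs):
        if tag in self.SKIP:
            self.depth += 1

    def handle_endtag(self, tag):
        if tag in self.SKIP and self.depth:
            self.depth -= 1

    def handle_data(self, data):
        if self.depth == 0 and data.strip():
            self.chunks.append(data.strip())

def strip_boilerplate(html: str) -> str:
    parser = _TextExtractor()
    parser.feed(html)
    return " ".join(parser.chunks)

text = strip_boilerplate(
    "<html><nav>Home | Docs</nav>"
    "<main><h1>Title</h1><p>Body text.</p></main></html>"
)
```

Passing a function like this to the loader keeps repeated navigation text out of every scraped document, which otherwise pollutes embeddings and retrieval.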
fd652189c538-0 | This notebook covers how to load documents from a Zipfile generated from a Slack export.
Export your Slack data. You can do this by going to your Workspace Management page and clicking the Import/Export option ({your_slack_domain}.slack.com/services/export). Then, choose the right date range and click Start export. Sla... | https://python.langchain.com/docs/integrations/document_loaders/slack |
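As a sketch of what such an export contains, the snippet below builds a minimal, hypothetical Slack export zip with the standard library: a `channels.json` manifest plus one JSON file of messages per channel per day. This is the directory layout a Slack-export loader walks; the channel name and message fields are invented for illustration.

```python
import io
import json
import zipfile

# Build a tiny, hypothetical Slack export zip in memory.
buf = io.BytesIO()
with zipfile.ZipFile(buf, "w") as zf:
    # Manifest of channels in the workspace.
    zf.writestr("channels.json", json.dumps([{"name": "general"}]))
    # One JSON file per channel per day of messages.
    zf.writestr(
        "general/2023-01-01.json",
        json.dumps([{"type": "message", "user": "U123", "text": "hello world"}]),
    )

# Inspect the structure the loader will walk.
with zipfile.ZipFile(buf) as zf:
    names = sorted(zf.namelist())
```

A real export downloaded from the Workspace Management page has the same shape, just with many more channels and days.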
9f2aeb82004e-0 | QUERY = "select text, survey_id from CLOUD_DATA_SOLUTIONS.HAPPY_OR_NOT.OPEN_FEEDBACK limit 10"
snowflake_loader = SnowflakeLoader(
    query=QUERY,
    user=s.SNOWFLAKE_USER,
    password=s.SNOWFLAKE_PASS,
    account=s.SNOWFLAKE_ACCOUNT,
    warehouse=s.SNOWFLAKE_WAREHOUSE,
    role=s.SNOWFLAKE_ROLE,
    database=s.SNOWFLAKE_DATABASE,
    schema=s.S... | https://python.langchain.com/docs/integrations/document_loaders/snowflake
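How a loader like this turns query result rows into documents can be sketched as follows. This mirrors the assumed behaviour — selected columns become "key: value" lines of page content, and columns named in a metadata-columns option go into metadata — rather than the loader's actual source code; the row values are invented.

```python
# Sketch (assumed behaviour): one document per result row, with metadata
# columns split out of the page content.
def row_to_document(row: dict, metadata_columns=()):
    page_content = "\n".join(
        f"{k}: {v}" for k, v in row.items() if k not in metadata_columns
    )
    metadata = {k: row[k] for k in metadata_columns if k in row}
    return {"page_content": page_content, "metadata": metadata}

doc = row_to_document(
    {"text": "Great service!", "survey_id": 42},
    metadata_columns=("survey_id",),
)
```

Keeping identifiers like `survey_id` in metadata rather than page content means they remain available for filtering without affecting embeddings.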
f39c8a3cbf3a-0 | Source Code
This notebook covers how to load source code files using a special approach with language parsing: each top-level function and class in the code is loaded into separate documents. Any remaining top-level code outside the already loaded functions and classes will be loaded into a separate document.
This... | https://python.langchain.com/docs/integrations/document_loaders/source_code |
f39c8a3cbf3a-1 | # Code for: class MyClass:
# Code for: def main():
if __name__ == "__main__":
    main()
--8<--
class MyClass {
  constructor(name) {
    this.name = name;
  }

  greet() {
    console.log(`Hello, ${this.name}!`);
  }
}
--8<--
function main() {
  const name = prompt("Enter your name:");
  const obj = new MyClass(name);
  obj.greet();
}
... | https://python.langchain.com/docs/integrations/document_loaders/source_code |
f39c8a3cbf3a-2 |   const obj = new MyClass(name);
  obj.greet();
}
--8<--
// Code for: class MyClass {
// Code for: function main() {
--8<--
main(); | https://python.langchain.com/docs/integrations/document_loaders/source_code |
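The splitting strategy shown above can be sketched for Python with the standard library's `ast` module. The real parser uses language-specific segmenters (hence the JavaScript example too), so this is only an illustration of the "one document per top-level function/class, plus a simplified remainder with `# Code for:` stubs" idea.

```python
import ast

# Sketch of language-aware splitting: each top-level function/class becomes
# its own document; remaining top-level code becomes one simplified document
# whose definitions are replaced by "# Code for:" stubs.
def split_source(code: str):
    tree = ast.parse(code)
    lines = code.splitlines()
    docs, remainder = [], []
    for node in tree.body:
        if isinstance(node, (ast.FunctionDef, ast.AsyncFunctionDef, ast.ClassDef)):
            docs.append("\n".join(lines[node.lineno - 1:node.end_lineno]))
            remainder.append(f"# Code for: {lines[node.lineno - 1]}")
        else:
            remainder.append("\n".join(lines[node.lineno - 1:node.end_lineno]))
    docs.append("\n".join(remainder))
    return docs

docs = split_source(
    "class MyClass:\n    pass\n\ndef main():\n    return MyClass()\n\nmain()\n"
)
```

The sample source yields three documents: the class, the function, and a remainder reading `# Code for: class MyClass:` / `# Code for: def main():` / `main()` — the same shape as the loader output shown above.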
482dd6563b37-0 | Spreedly
Spreedly is a service that allows you to securely store credit cards and use them to transact against any number of payment gateways and third party APIs. It does this by simultaneously providing a card tokenization/vault service as well as a gateway and receiver integration service. Payment methods tokenized ... | https://python.langchain.com/docs/integrations/document_loaders/spreedly |
482dd6563b37-1 | index = VectorstoreIndexCreator().from_loaders([spreedly_loader])
spreedly_doc_retriever = index.vectorstore.as_retriever()
Using embedded DuckDB without persistence: data will be transient
# Test the retriever
spreedly_doc_retriever.get_relevant_documents("CRC")
[Document(page_content='installment_grace_period_duratio... | https://python.langchain.com/docs/integrations/document_loaders/spreedly |
482dd6563b37-2 | Document(page_content='BG\nBH\nBI\nBJ\nBM\nBN\nBO\nBR\nBS\nBT\nBW\nBY\nBZ\nCA\nCC\nCF\nCH\nCK\nCL\nCM\nCN\nCO\nCR\nCV\nCX\nCY\nCZ\nDE\nDJ\nDK\nDO\nDZ\nEC\nEE\nEG\nEH\nES\nET\nFI\nFJ\nFK\nFM\nFO\nFR\nGA\nGB\nGD\nGE\nGF\nGG\nGH\nGI\nGL\nGM\nGN\nGP\nGQ\nGR\nGT\nGU\nGW\nGY\nHK\nHM\nHN\nHR\nHT\nHU\nID\nIE\nIL\nIM\nIN\nIO\nI... | https://python.langchain.com/docs/integrations/document_loaders/spreedly |
482dd6563b37-3 | WorldPay', metadata={'source': 'https://core.spreedly.com/v1/gateways_options.json'}), | https://python.langchain.com/docs/integrations/document_loaders/spreedly |
482dd6563b37-4 | Document(page_content='gateway_specific_fields: receipt_email\nradar_session_id\nskip_radar_rules\napplication_fee\nstripe_account\nmetadata\nidempotency_key\nreason\nrefund_application_fee\nrefund_fee_amount\nreverse_transfer\naccount_id\ncustomer_id\nvalidate\nmake_default\ncancellation_reason\ncapture_method\nconfir... | https://python.langchain.com/docs/integrations/document_loaders/spreedly |
482dd6563b37-5 | Document(page_content='mdd_field_57\nmdd_field_58\nmdd_field_59\nmdd_field_60\nmdd_field_61\nmdd_field_62\nmdd_field_63\nmdd_field_64\nmdd_field_65\nmdd_field_66\nmdd_field_67\nmdd_field_68\nmdd_field_69\nmdd_field_70\nmdd_field_71\nmdd_field_72\nmdd_field_73\nmdd_field_74\nmdd_field_75\nmdd_field_76\nmdd_field_77\nmdd... | https://python.langchain.com/docs/integrations/document_loaders/spreedly |
15e61bce73b1-0 | This notebook covers how to load data from the Stripe REST API into a format that can be ingested into LangChain, along with example usage for vectorization.
The Stripe API requires an access token, which can be found inside of the Stripe dashboard.
This document loader also requires a resource option which defines wha... | https://python.langchain.com/docs/integrations/document_loaders/stripe |
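The resource option can be pictured as selecting one of Stripe's REST endpoints. The mapping below is illustrative only — the endpoint paths are Stripe's public v1 paths, but the exact set of resource names the loader accepts should be checked against its documentation.

```python
# Illustrative mapping from a resource option to a Stripe v1 endpoint.
STRIPE_ENDPOINTS = {
    "balance_transactions": "https://api.stripe.com/v1/balance_transactions",
    "charges": "https://api.stripe.com/v1/charges",
    "customers": "https://api.stripe.com/v1/customers",
    "events": "https://api.stripe.com/v1/events",
    "invoices": "https://api.stripe.com/v1/invoices",
    "refunds": "https://api.stripe.com/v1/refunds",
}

def endpoint_for(resource: str) -> str:
    try:
        return STRIPE_ENDPOINTS[resource]
    except KeyError:
        raise ValueError(f"Unsupported resource: {resource!r}")

url = endpoint_for("charges")
```

Requests against these endpoints carry the access token from the Stripe dashboard as a bearer token; the loader then converts the JSON responses into documents.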
5a25ceac9d77-0 | Subtitle
The SubRip file format is described on the Matroska multimedia container format website as "perhaps the most basic of all subtitle formats." SubRip (SubRip Text) files are named with the extension .srt, and contain formatted lines of plain text in groups separated by a blank line. Subtitles are numbered sequen... | https://python.langchain.com/docs/integrations/document_loaders/subtitle |
d2b2b9b77ce3-0 | Telegram
Telegram Messenger is a globally accessible freemium, cross-platform, encrypted, cloud-based and centralized instant messaging service. The application also provides optional end-to-end encrypted chats and video calling, VoIP, file sharing and several other features.
This notebook covers how to load data from ... | https://python.langchain.com/docs/integrations/document_loaders/telegram |
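As a sketch of the idea: Telegram Desktop's chat export produces a JSON file with a `messages` list, and a loader can flatten those messages into plain text. The field names below follow that export format, but the sample chat is invented and the real loader's document formatting may differ.

```python
# A hypothetical, minimal Telegram chat export (the JSON shape produced by
# Telegram Desktop's "Export chat history" feature).
EXPORT = {
    "name": "my_chat",
    "messages": [
        {"from": "Alice", "text": "Hi there", "date": "2023-01-01T10:00:00"},
        {"from": "Bob", "text": "Hello!", "date": "2023-01-01T10:01:00"},
    ],
}

# Sketch of flattening messages into plain text for downstream use.
def messages_to_text(export: dict) -> str:
    return "\n".join(
        f"{m['from']}: {m['text']}" for m in export["messages"] if m.get("text")
    )

text = messages_to_text(EXPORT)
```

Messages without text (stickers, service messages) are skipped, which is usually what you want when feeding chat history into an LLM pipeline.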