| id (string, 14–15 chars) | text (string, 32–2.18k chars) | source (string, 30 classes) |
|---|---|---|
| 712506381261-13 | produce better results.For more information on how you can get the most out of LangSmith, check out LangSmith documentation, and please reach out with questions, feature requests, or feedback at support@langchain.dev.PreviousLangSmithNextModel ComparisonPrerequisitesLog runs to LangSmithEvaluate another agent implement... | https://python.langchain.com/docs/guides/langsmith/walkthrough |
| 2043a03ec19a-0 | Deployment \| 🦜️🔗 Langchain | https://python.langchain.com/docs/guides/deployments/ |
| 2043a03ec19a-1 | Skip to main content🦜️🔗 LangChainDocsUse casesIntegrationsAPILangSmithJS/TS DocsCTRLKGet startedIntroductionInstallationQuickstartModulesModel I/OData connectionChainsMemoryAgentsCallbacksModulesGuidesEvaluationDebuggingDeploymentTemplate reposLangSmithModel ComparisonEcosystemAdditional resourcesGuidesDeplo... | https://python.langchain.com/docs/guides/deployments/ |
| 2043a03ec19a-2 | Alternatively, developers can opt to use smaller, yet comparably capable, self-hosted open-source LLM models. This approach can significantly decrease costs, latency, and privacy concerns associated with transferring data to external LLM providers.Regardless of the framework that forms the backbone of your product, dep... | https://python.langchain.com/docs/guides/deployments/ |
| 2043a03ec19a-3 | Second (TPS): This represents the number of tokens your model can generate in a second.Quality Metrics: These metrics are typically customized according to the business use-case. For instance, how does the output of your system compare to a baseline, such as a previous version? Although these metrics can be calculated ... | https://python.langchain.com/docs/guides/deployments/ |
| 2043a03ec19a-4 | the first when all servers have received a request. This works well when all servers are equally capable. However, if some servers are more powerful than others, you might use a Weighted Round Robin or Least Connections strategy, where more requests are sent to the more powerful servers, or to those currently handling ... | https://python.langchain.com/docs/guides/deployments/ |
| 2043a03ec19a-5 | nor compromised application responsiveness.Utilizing Spot Instances​On platforms like AWS, spot instances offer substantial cost savings, typically priced at about a third of on-demand instances. The trade-off is a higher crash rate, necessitating a robust fault-tolerance mechanism for effective use.Independent Scali... | https://python.langchain.com/docs/guides/deployments/ |
| 2043a03ec19a-6 | but instead offers a general-purpose, scalable serving layer. Here are some aspects where flexibility plays a key role:Model composition​Deploying systems like LangChain demands the ability to piece together different models and connect them via logic. Take the example of building a natural language input SQL query e... | https://python.langchain.com/docs/guides/deployments/ |
| 2043a03ec19a-7 | Spot InstancesIndependent ScalingBatching requestsEnsuring Rapid IterationModel compositionCloud providersInfrastructure as Code (IaC)CI/CDCommunityDiscordTwitterGitHubPythonJS/TSMoreHomepageBlogCopyright © 2023 LangChain, Inc. | https://python.langchain.com/docs/guides/deployments/ |
| cf7ac48a1b0f-0 | Template repos \| 🦜️🔗 Langchain | https://python.langchain.com/docs/guides/deployments/template_repos |
| cf7ac48a1b0f-1 | Skip to main content🦜️🔗 LangChainDocsUse casesIntegrationsAPILangSmithJS/TS DocsCTRLKGet startedIntroductionInstallationQuickstartModulesModel I/OData connectionChainsMemoryAgentsCallbacksModulesGuidesEvaluationDebuggingDeploymentTemplate reposLangSmithModel ComparisonEcosystemAdditional resourcesGuidesDeplo... | https://python.langchain.com/docs/guides/deployments/template_repos |
| cf7ac48a1b0f-2 | Chainlit doc on the integration with LangChainBeam​This repo serves as a template for how to deploy a LangChain app with Beam.It implements a Question Answering app and contains instructions for deploying the app as a serverless REST API.Vercel​A minimal example on how to run LangChain on Vercel using Flask.FastAPI + Verc... | https://python.langchain.com/docs/guides/deployments/template_repos |
| cf7ac48a1b0f-3 | learning applications as standard OCI images. BentoML also allows for the automatic generation of OpenAPI and gRPC endpoints. With BentoML, you can integrate models from all popular ML frameworks and deploy them as microservices running on the most optimal hardware and scaling independently.OpenLLM​OpenLLM is a platf... | https://python.langchain.com/docs/guides/deployments/template_repos |
| cf7ac48a1b0f-4 | See OpenLLM's integration doc for usage with LangChain.Databutton​These templates serve as examples of how to build, deploy, and share LangChain applications using Databutton. You can create user interfaces with Streamlit, automate tasks by scheduling Python code, and store files and data in the built-in store. Examp... | https://python.langchain.com/docs/guides/deployments/template_repos |
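The load-balancing strategies named in chunk 2043a03ec19a-4 (Weighted Round Robin and Least Connections) can be sketched in a few lines. This is an illustrative Python sketch, not code from the LangChain docs; the class and function names are invented for this example.

```python
from itertools import cycle

class LeastConnectionsBalancer:
    """Least Connections: route each request to the server
    currently handling the fewest active requests.
    (Hypothetical helper, not a LangChain API.)"""

    def __init__(self, servers):
        self.active = {s: 0 for s in servers}  # server -> open request count

    def acquire(self):
        # Pick the server with the fewest in-flight requests.
        server = min(self.active, key=self.active.get)
        self.active[server] += 1
        return server

    def release(self, server):
        # Call when the request completes.
        self.active[server] -= 1

def weighted_round_robin(servers, weights):
    """Weighted Round Robin: yield servers in proportion to their
    integer weights, so more powerful servers receive more requests."""
    pool = [s for s, w in zip(servers, weights) for _ in range(w)]
    return cycle(pool)
```

For example, `weighted_round_robin(["big", "small"], [2, 1])` sends two requests to `big` for every one sent to `small`, matching the chunk's description of favoring more powerful servers.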