9.1. LangChain Quickstart OpenAI#

Reference: https://python.langchain.com/docs/get_started/quickstart

What you will learn

  • Querying OpenAI’s ChatGPT

  • Defining prompts to be passed to LLMs

  • Formatting the answers from LLMs

  • Building LangChain pipelines

  • Implementing a simple Retrieval Augmented Generation (RAG) approach

    • Loading text from a webpage, segmenting the text into chunks, and transforming the chunks into vectors (embeddings)

    • Querying a vector database (FAISS) to retrieve relevant documents

    • Passing the documents that are relevant for the query as context information to OpenAI’s ChatGPT

9.1.1. Basic LLM Usage for Question Answering#

9.1.1.1. Most Basic Approach#

#!pip install langchain-openai
#%env OPENAI_API_KEY=sk-...rZt  # This is how to persistently store your API key. Note: without quotation marks
import os
import openai
from langchain_openai import ChatOpenAI
# Read the API key from the environment variable set above
openai.api_key = os.environ["OPENAI_API_KEY"]
#openai.api_key  # uncomment to verify that the key has been set
llm = ChatOpenAI()
llm.invoke("how can langsmith help with testing?")
AIMessage(content='Langsmith can help with testing in the following ways:\n\n1. Automated testing: Langsmith can be used to write scripts and test cases for automated testing of software applications. This can help in quickly and efficiently testing the functionality of the software.\n\n2. Test data generation: Langsmith can be used to generate test data for different scenarios, allowing testers to validate the behavior of the software under various conditions.\n\n3. Performance testing: Langsmith can be used to write scripts for performance testing of software applications, helping to identify and resolve performance issues.\n\n4. Integration testing: Langsmith can be used to write scripts for testing the integration of different components or systems, ensuring that they work together as expected.\n\n5. Regression testing: Langsmith can be used to automate regression testing, ensuring that new code changes do not introduce any new bugs or issues in the software.\n\nOverall, Langsmith can help testers in automating various testing tasks, saving time and effort, and improving the overall quality of the software.', response_metadata={'token_usage': {'completion_tokens': 199, 'prompt_tokens': 15, 'total_tokens': 214}, 'model_name': 'gpt-3.5-turbo-0125', 'system_fingerprint': None, 'finish_reason': 'stop', 'logprobs': None}, id='run-c9310c0e-5eab-490e-b152-90a174408bd9-0', usage_metadata={'input_tokens': 15, 'output_tokens': 199, 'total_tokens': 214})
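The invoke() method returns an AIMessage object. As the output above shows, the plain answer text is stored in its content attribute and the token statistics in its response_metadata, so both can be accessed directly:

# Access only the answer text and the token usage of the returned AIMessage
answer = llm.invoke("how can langsmith help with testing?")
print(answer.content)                           # plain answer text
print(answer.response_metadata["token_usage"])  # prompt/completion token counts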

9.1.1.2. Create simple Pipeline#

In contrast to the previous, most basic approach, we now add a system prompt. For this we apply LangChain’s ChatPromptTemplate class. Moreover, the LLM’s answer shall be returned as a plain string instead of an AIMessage, which is done using the StrOutputParser class.

from langchain_core.prompts import ChatPromptTemplate
from langchain_core.output_parsers import StrOutputParser
prompt = ChatPromptTemplate.from_messages([
    ("system", "You are world class technical documentation writer."),
    ("user", "{input}")
])
output_parser = StrOutputParser()

We create a pipeline that consists of the prompt, the LLM, and the output parser:

chain = prompt | llm | output_parser

9.1.1.3. Query#

The invoke() method is now called on the pipeline object:

print(chain.invoke({"input": "how can langsmith help with testing?"}))
Langsmith can help with testing in a variety of ways, including:

1. Test Automation: Langsmith can be used to automate testing processes, such as unit testing, integration testing, and end-to-end testing. By writing test scripts in Langsmith, you can ensure that your code is thoroughly tested and free of bugs.

2. Performance Testing: Langsmith can also be used for performance testing, such as load testing and stress testing. By simulating large numbers of users or heavy traffic on your application, you can identify performance bottlenecks and optimize your code accordingly.

3. Data Generation: Langsmith can be used to generate test data for your application. By creating realistic data sets with Langsmith, you can ensure that your tests are comprehensive and cover a wide range of scenarios.

4. Integration Testing: Langsmith can help with integration testing by simulating interactions between different components of your application. By writing integration tests in Langsmith, you can verify that all parts of your application work together seamlessly.

Overall, Langsmith can streamline the testing process, improve test coverage, and help you deliver high-quality software to your users.
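Since the pipeline is a LangChain Runnable, the answer can also be streamed instead of being returned only after completion. A minimal sketch using the stream() method; because the chain ends with the StrOutputParser, each streamed chunk is a plain string:

# Stream the answer incrementally; each chunk is a string due to the StrOutputParser
for chunk in chain.stream({"input": "how can langsmith help with testing?"}):
    print(chunk, end="", flush=True)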

9.1.2. Basic RAG Usage#

In the previous subsection it has been shown how an LLM can be applied for question answering. Now we apply Retrieval Augmented Generation (RAG) for question answering. A RAG system also integrates an LLM, but in contrast to the basic usage described above, additional context information is passed to the LLM. The LLM’s answer then depends not only on the data the LLM has been trained on, but also on external knowledge from documents provided by the user. This external knowledge is passed to the LLM as context, together with the query. Which parts of the external knowledge are used as context depends on the user’s query: the query is first passed to a vector database, which returns the documents most relevant to it. These relevant documents are then used as context.

Below we

  1. Collect external documents from the web

  2. Segment these documents into chunks

  3. Calculate an embedding (a vector) for each chunk

  4. Store the chunk-embeddings in a vector DB.

9.1.2.1. Collect Documents for External Database#

By applying LangChain’s WebBaseLoader class, the content of one or several webpages can be downloaded, as shown in the code cell below. For downloading multiple pages, the corresponding URLs must be passed as a list (see the sketch after the code cell).

from langchain_community.document_loaders import WebBaseLoader
loader = WebBaseLoader("https://docs.smith.langchain.com/user_guide",encoding="utf-8")

docs = loader.load()
USER_AGENT environment variable not set, consider setting it to identify your requests.
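For downloading multiple pages, a list of URLs can be passed to the loader instead of a single string. A minimal sketch with a hypothetical second URL (only for illustration, not executed here):

# Hypothetical example: load several pages at once by passing a list of URLs
multi_loader = WebBaseLoader(
    ["https://docs.smith.langchain.com/user_guide",
     "https://docs.smith.langchain.com/"],  # second URL only for illustration
    encoding="utf-8",
)
# multi_docs = multi_loader.load()          # would return one Document per page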

Since we loaded only a single page above, the length of the returned list is 1:

len(docs)
1

For each downloaded page we can now access the page_content and the page’s metadata as shown below:

print(docs[0].page_content)
LangSmith User Guide | 🦜️🛠️ LangSmith







Skip to main contentGo to API DocsSearchRegionUSEUGo to AppQuick StartUser GuideTracingEvaluationProduction Monitoring & AutomationsPrompt HubProxyPricingSelf-HostingCookbookThis is outdated documentation for 🦜️🛠️ LangSmith, which is no longer actively maintained.For up-to-date documentation, see the latest version.User GuideOn this pageLangSmith User GuideLangSmith is a platform for LLM application development, monitoring, and testing. In this guide, we’ll highlight the breadth of workflows LangSmith supports and how they fit into each stage of the application development lifecycle. We hope this will inform users how to best utilize this powerful platform or give them something to consider if they’re just starting their journey.Prototyping​Prototyping LLM applications often involves quick experimentation between prompts, model types, retrieval strategy and other parameters.
The ability to rapidly understand how the model is performing — and debug where it is failing — is incredibly important for this phase.Debugging​When developing new LLM applications, we suggest having LangSmith tracing enabled by default.
Oftentimes, it isn’t necessary to look at every single trace. However, when things go wrong (an unexpected end result, infinite agent loop, slower than expected execution, higher than expected token usage), it’s extremely helpful to debug by looking through the application traces. LangSmith gives clear visibility and debugging information at each step of an LLM sequence, making it much easier to identify and root-cause issues.
We provide native rendering of chat messages, functions, and retrieve documents.Initial Test Set​While many developers still ship an initial version of their application based on “vibe checks”, we’ve seen an increasing number of engineering teams start to adopt a more test driven approach. LangSmith allows developers to create datasets, which are collections of inputs and reference outputs, and use these to run tests on their LLM applications.
These test cases can be uploaded in bulk, created on the fly, or exported from application traces. LangSmith also makes it easy to run custom evaluations (both LLM and heuristic based) to score test results.Comparison View​When prototyping different versions of your applications and making changes, it’s important to see whether or not you’ve regressed with respect to your initial test cases.
Oftentimes, changes in the prompt, retrieval strategy, or model choice can have huge implications in responses produced by your application.
In order to get a sense for which variant is performing better, it’s useful to be able to view results for different configurations on the same datapoints side-by-side. We’ve invested heavily in a user-friendly comparison view for test runs to track and diagnose regressions in test scores across multiple revisions of your application.Playground​LangSmith provides a playground environment for rapid iteration and experimentation.
This allows you to quickly test out different prompts and models. You can open the playground from any prompt or model run in your trace.
Every playground run is logged in the system and can be used to create test cases or compare with other runs.Beta Testing​Beta testing allows developers to collect more data on how their LLM applications are performing in real-world scenarios. In this phase, it’s important to develop an understanding for the types of inputs the app is performing well or poorly on and how exactly it’s breaking down in those cases. Both feedback collection and run annotation are critical for this workflow. This will help in curation of test cases that can help track regressions/improvements and development of automatic evaluations.Capturing Feedback​When launching your application to an initial set of users, it’s important to gather human feedback on the responses it’s producing. This helps draw attention to the most interesting runs and highlight edge cases that are causing problematic responses. LangSmith allows you to attach feedback scores to logged traces (oftentimes, this is hooked up to a feedback button in your app), then filter on traces that have a specific feedback tag and score. A common workflow is to filter on traces that receive a poor user feedback score, then drill down into problematic points using the detailed trace view.Annotating Traces​LangSmith also supports sending runs to annotation queues, which allow annotators to closely inspect interesting traces and annotate them with respect to different criteria. Annotators can be PMs, engineers, or even subject matter experts. This allows users to catch regressions across important evaluation criteria.Adding Runs to a Dataset​As your application progresses through the beta testing phase, it's essential to continue collecting data to refine and improve its performance. LangSmith enables you to add runs as examples to datasets (from both the project page and within an annotation queue), expanding your test coverage on real-world scenarios. This is a key benefit in having your logging system and your evaluation/testing system in the same platform.Production​Closely inspecting key data points, growing benchmarking datasets, annotating traces, and drilling down into important data in trace view are workflows you’ll also want to do once your app hits production.However, especially at the production stage, it’s crucial to get a high-level overview of application performance with respect to latency, cost, and feedback scores. This ensures that it's delivering desirable results at scale.Online evaluations and automations allow you to process and score production traces in near real-time.Additionally, threads provide a seamless way to group traces from a single conversation, making it easier to track the performance of your application across multiple turns.Monitoring and A/B Testing​LangSmith provides monitoring charts that allow you to track key metrics over time. You can expand to view metrics for a given period and drill down into a specific data point to get a trace table for that time period — this is especially handy for debugging production issues.LangSmith also allows for tag and metadata grouping, which allows users to mark different versions of their applications with different identifiers and view how they are performing side-by-side within each chart. This is helpful for A/B testing changes in prompt, model, or retrieval strategy.Automations​Automations are a powerful feature in LangSmith that allow you to perform actions on traces in near real-time. 
This can be used to automatically score traces, send them to annotation queues, or send them to datasets.To define an automation, simply provide a filter condition, a sampling rate, and an action to perform. Automations are particularly helpful for processing traces at production scale.Threads​Many LLM applications are multi-turn, meaning that they involve a series of interactions between the user and the application. LangSmith provides a threads view that groups traces from a single conversation together, making it easier to track the performance of and annotate your application across multiple turns.Was this page helpful?You can leave detailed feedback on GitHub.PreviousQuick StartNextOverviewPrototypingBeta TestingProductionCommunityDiscordTwitterGitHubDocs CodeLangSmith SDKPythonJS/TSMoreHomepageBlogLangChain Python DocsLangChain JS/TS DocsCopyright © 2024 LangChain, Inc.
docs[0].metadata
{'source': 'https://docs.smith.langchain.com/user_guide',
 'title': 'LangSmith User Guide | 🦜️🛠️ LangSmith',
 'description': 'LangSmith is a platform for LLM application development, monitoring, and testing. In this guide, we’ll highlight the breadth of workflows LangSmith supports and how they fit into each stage of the application development lifecycle. We hope this will inform users how to best utilize this powerful platform or give them something to consider if they’re just starting their journey.',
 'language': 'en'}

9.1.2.2. Chunking#
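The downloaded document is now split into chunks of at most 2000 characters (here without overlap) by applying LangChain’s RecursiveCharacterTextSplitter: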

from langchain_text_splitters import RecursiveCharacterTextSplitter
text_splitter = RecursiveCharacterTextSplitter(chunk_size=2000, chunk_overlap=0)
chunks = text_splitter.split_documents(docs)

Let’s have a look at the chunks:

len(chunks)
6
chunks[0].page_content
'LangSmith User Guide | 🦜️🛠️ LangSmith'
chunks[1].page_content
'Skip to main contentGo to API DocsSearchRegionUSEUGo to AppQuick StartUser GuideTracingEvaluationProduction Monitoring & AutomationsPrompt HubProxyPricingSelf-HostingCookbookThis is outdated documentation for 🦜️🛠️ LangSmith, which is no longer actively maintained.For up-to-date documentation, see the latest version.User GuideOn this pageLangSmith User GuideLangSmith is a platform for LLM application development, monitoring, and testing. In this guide, we’ll highlight the breadth of workflows LangSmith supports and how they fit into each stage of the application development lifecycle. We hope this will inform users how to best utilize this powerful platform or give them something to consider if they’re just starting their journey.Prototyping\u200bPrototyping LLM applications often involves quick experimentation between prompts, model types, retrieval strategy and other parameters.\nThe ability to rapidly understand how the model is performing — and debug where it is failing — is incredibly important for this phase.Debugging\u200bWhen developing new LLM applications, we suggest having LangSmith tracing enabled by default.\nOftentimes, it isn’t necessary to look at every single trace. However, when things go wrong (an unexpected end result, infinite agent loop, slower than expected execution, higher than expected token usage), it’s extremely helpful to debug by looking through the application traces. LangSmith gives clear visibility and debugging information at each step of an LLM sequence, making it much easier to identify and root-cause issues.'
chunks[2].page_content
'We provide native rendering of chat messages, functions, and retrieve documents.Initial Test Set\u200bWhile many developers still ship an initial version of their application based on “vibe checks”, we’ve seen an increasing number of engineering teams start to adopt a more test driven approach. LangSmith allows developers to create datasets, which are collections of inputs and reference outputs, and use these to run tests on their LLM applications.\nThese test cases can be uploaded in bulk, created on the fly, or exported from application traces. LangSmith also makes it easy to run custom evaluations (both LLM and heuristic based) to score test results.Comparison View\u200bWhen prototyping different versions of your applications and making changes, it’s important to see whether or not you’ve regressed with respect to your initial test cases.\nOftentimes, changes in the prompt, retrieval strategy, or model choice can have huge implications in responses produced by your application.\nIn order to get a sense for which variant is performing better, it’s useful to be able to view results for different configurations on the same datapoints side-by-side. We’ve invested heavily in a user-friendly comparison view for test runs to track and diagnose regressions in test scores across multiple revisions of your application.Playground\u200bLangSmith provides a playground environment for rapid iteration and experimentation.\nThis allows you to quickly test out different prompts and models. You can open the playground from any prompt or model run in your trace.'
chunks[3].page_content
"Every playground run is logged in the system and can be used to create test cases or compare with other runs.Beta Testing\u200bBeta testing allows developers to collect more data on how their LLM applications are performing in real-world scenarios. In this phase, it’s important to develop an understanding for the types of inputs the app is performing well or poorly on and how exactly it’s breaking down in those cases. Both feedback collection and run annotation are critical for this workflow. This will help in curation of test cases that can help track regressions/improvements and development of automatic evaluations.Capturing Feedback\u200bWhen launching your application to an initial set of users, it’s important to gather human feedback on the responses it’s producing. This helps draw attention to the most interesting runs and highlight edge cases that are causing problematic responses. LangSmith allows you to attach feedback scores to logged traces (oftentimes, this is hooked up to a feedback button in your app), then filter on traces that have a specific feedback tag and score. A common workflow is to filter on traces that receive a poor user feedback score, then drill down into problematic points using the detailed trace view.Annotating Traces\u200bLangSmith also supports sending runs to annotation queues, which allow annotators to closely inspect interesting traces and annotate them with respect to different criteria. Annotators can be PMs, engineers, or even subject matter experts. This allows users to catch regressions across important evaluation criteria.Adding Runs to a Dataset\u200bAs your application progresses through the beta testing phase, it's essential to continue collecting data to refine and improve its performance. LangSmith enables you to add runs as examples to datasets (from both the project page and within an annotation queue), expanding your test coverage on real-world scenarios. This is a key benefit in having your logging system and your evaluation/testing"

9.1.2.3. Embedding of chunks and storing in Vector DB#

#!pip install faiss-cpu   # CPU version of FAISS
#!pip install faiss-gpu   # alternative: GPU version of FAISS
from langchain_openai import OpenAIEmbeddings
embeddings = OpenAIEmbeddings()
from langchain_community.vectorstores import FAISS
vector = FAISS.from_documents(chunks, embeddings)

We have now inserted our external documents (actually only one webpage) into the vector database. The retrieval part of the RAG system is now ready to be used for question answering.
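As a quick sanity check, the vector store can already be queried directly. A minimal sketch using the similarity_search() method of the FAISS vector store built above:

# Return the two chunks whose embeddings are most similar to the query
hits = vector.similarity_search("how can langsmith help with testing?", k=2)
for hit in hits:
    print(hit.page_content[:150], "...")  # show only the beginning of each chunk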

9.1.2.4. Create Prompt#

First, we define a general prompt template and a document chain, which consists of the prompt template and the LLM.

from langchain.chains.combine_documents import create_stuff_documents_chain

prompt = ChatPromptTemplate.from_template("""Answer the following question based only on the provided context:

<context>
{context}
</context>

Question: {input}""")

document_chain = create_stuff_documents_chain(llm, prompt)

The next code cell is just for testing the chain defined above. In this test we simply pass a dummy text as context. This dummy text will later be replaced by the documents that the vector DB returns for our query.

from langchain_core.documents import Document

document_chain.invoke({
    "input": "how can langsmith help with testing?",
    "context": [Document(page_content="langsmith can let you visualize test results")]
})
'Langsmith can help with testing by allowing you to visualize test results.'

After testing the document chain, we now define a retrieval chain, which combines a retriever on top of the vector DB with the already defined document chain. This retrieval chain constitutes the entire RAG system.

from langchain.chains import create_retrieval_chain

retriever = vector.as_retriever()
retrieval_chain = create_retrieval_chain(retriever, document_chain)

Next, we send a query to the RAG system. This means that

  1. the query is sent to the vector DB

  2. the vector DB returns the most relevant chunks for the given query (this retrieval step is also sketched in isolation after this list). For this

    1. the embedding-vector of the query is calculated

    2. the similarity between the query’s embedding-vector and the embedding-vectors of all chunks in the DB is calculated

    3. the chunks whose embedding-vectors are most similar to the query’s embedding-vector are returned

  3. the returned chunks are passed as context, together with the query, to the LLM

  4. the LLM returns the answer to the query, taking the provided context into account
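The retrieval step (items 1 and 2) can also be tried in isolation by calling the retriever directly. A minimal sketch, reusing the retriever defined above:

# The retriever wraps the vector store; invoke() returns the most relevant chunks as Documents
relevant_chunks = retriever.invoke("how can langsmith help with testing?")
print(len(relevant_chunks))                    # by default up to 4 chunks are returned
print(relevant_chunks[0].page_content[:150])   # beginning of the most relevant chunk

Now the complete retrieval chain is invoked: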

response = retrieval_chain.invoke({"input": "how can langsmith help with testing?"})
print(response["answer"])
LangSmith allows developers to create datasets, which are collections of inputs and reference outputs, and use these to run tests on their LLM applications. Test cases can be uploaded in bulk, created on the fly, or exported from application traces. LangSmith also makes it easy to run custom evaluations (both LLM and heuristic based) to score test results. Additionally, LangSmith provides a comparison view for test runs to track and diagnose regressions in test scores across multiple revisions of an application. The platform also offers a playground environment for rapid iteration and experimentation, allowing developers to quickly test out different prompts and models.
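Besides the answer, the dictionary returned by the retrieval chain also contains the retrieved context documents (and the original input), so we can inspect which chunks the answer was based on. A small sketch:

# The retrieval chain returns a dict with (at least) the keys "input", "context" and "answer"
for doc in response["context"]:
    print(doc.page_content[:120], "...")  # beginning of each retrieved chunk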