Hello guys, welcome to this new module on understanding multimodal RAG. Till now we have discussed creating different kinds of RAG applications, and in a RAG pipeline there are two main components. So let's say that I ...

how do we go ahead and store our data? Initially we have some kind of data. This data can be a PDF file, a Word doc file, a database file, any kind of file. Usually we convert this data into chunks.

Then we use some kind of embedding model to convert the chunks into vector representations, and we store everything inside a vector store. Once everything is in the vector store, we create a retriever from it. Why do we create a retriever? Because whenever a new query comes in, we perform an embedding on that query, and through the retriever the query can retrieve similar results from the vector store. From this,

we get the top-k documents, and we give them to the LLM model. The LLM will finally generate the output based on the prompt we have given. So these are the two main fundamental modules we work with in a RAG pipeline. Yes, there are techniques like re-rankers ...
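The retrieval step described above can be sketched in plain Python. This is a minimal, hypothetical sketch: a toy bag-of-words `embed` function stands in for a real embedding model, and cosine similarity over an in-memory list stands in for a vector store.

```python
import math
from collections import Counter

def embed(text):
    # Toy "embedding": bag-of-words counts over a tiny fixed vocabulary.
    # A real pipeline would call an embedding model here instead.
    vocab = ["revenue", "growth", "chart", "quarter", "image"]
    counts = Counter(text.lower().split())
    return [float(counts[w]) for w in vocab]

def cosine(a, b):
    # Cosine similarity between two vectors.
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query, docs, k=2):
    # Embed the query, then return the top-k most similar documents.
    q = embed(query)
    return sorted(docs, key=lambda d: cosine(q, embed(d)), reverse=True)[:k]

docs = [
    "revenue growth across each quarter",
    "the chart shows an image of revenue",
    "unrelated text about something else",
]
top = retrieve("revenue chart", docs, k=2)
```

The same flow — embed the query, rank stored items by similarity, keep the top-k — is what the FAISS retriever does later in this walkthrough, just with learned embeddings instead of word counts.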
How to do the retrieval, how to apply filtering keywords and so on — we have discussed all of this. But let's consider one specific use case with the data. Let's say that I have a PDF file, and this is what my PDF file looks like. In my PDF file I'll go ...

ahead and add some more information in the form of images. So here I have some images. Let's say this particular image shows the revenue of a company; then you have some textual information, again some images, again some text ...

... use two important things together: text plus images. If your data has both text and images, can you also take an image and run a similarity search on it? Let's say I ask, "Hey, from the diagram on page one, can you talk about ..."

You should be able to retrieve that information based on the image, and we should be able to generate the output. So whenever we talk about multimodal, whenever we talk about this kind of scenario, we are working with both text and image data. Now we need to find out how we ...

... create embeddings. Till now, we have never seen what kind of embeddings we can apply for images, right? What kind of embeddings can we apply so that we can store them in a vector store, and, when we give a specific query, how is it going to do a search wit...

Multimodal is very simple: text plus images. When I talk about RAG, I'm talking about creating this entire pipeline. So we will discuss this, and I will also show you the entire step-by-step flow diagram of how we are going to implement it. Let's say that we have a PDF in th...
is a multimodal LLM. Now, what does this mean? Till now you have worked with different LLMs, but a normal LLM — say GPT-3.5 or GPT-4 — is specific to text generation; they are good at text generation. A multimodal LLM, on the other hand, is trained on both text and images. That basically means if you give an image as input to this LLM, it will also be able to describe what is in the image. If you ask anything related to text, it will also provide responses. So whenever I talk about a multimodal LLM, that means it has been trained on both text and images, and this is the kind of model we are going to use. If I name some of the models from OpenAI ...

... we will build an application with GPT-4. Let me check in my code file so you can see — again, there are different models you can use. In our examples, we are going to use the GPT-4 model ...

You can also use any other kind of model; if you go into the OpenAI documentation, you will be able to see them. To take one more example, there is the Google Gemini Flash model, specifically the 2.5 version. This is also a multimodal model. That basic...
The most important thing is: what are the steps we need to follow in order to solve this problem? So let's discuss the steps. The first thing is that I have a PDF document, so I'll write that down. And in order to draw the steps, ...

... So let's say these are the entire steps that I'm going to write over here. The first step: I have my data source, my PDF document. Inside this PDF document,

you have both text and images. So from this PDF document, the first step will be to extract the text and images. Let me write it down for you: extract

text and images. Now the question arises: how are we going to extract the text and images? Some library should be there, right? Because at the end of the day we have to read this PDF document, and while reading it we should be able to distinguish which are the t...

... will be using. So this is my first step. Now for the second step, let's say I have completed extraction. Then come the second and third steps: what I will do,

from this text, is convert it into chunks, and from the chunks we perform embeddings — because for text there are specific kinds of embeddings. Similarly, for the images, we take them and perform embeddings as well.
Now the question arises: how are we going to do this? Do we use different embedding models to handle the text and images separately, or should we just use one model? The best idea is to use a single model. Now, which model will we be using? There is ...

... CLIP. This CLIP model is provided by OpenAI. How is this model trained? It is trained on image-text pairs — I think it was trained on around 400 million image-text pairs — and it is open source. It is available on Hugging ...

... the text, we will convert into embeddings, and the images, we will also convert into embeddings. The reason for using one model is that it is already trained on an image-text mapped dataset — you can think of it that way. There's a huge dataset, which is already ...

... the text and the images. Then the next step: once we are able to do this, we will store the images in base64 format, because that is the format required by the multimodal LLM. So we will store these images as base64.
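Storing an image as base64, as described above, only needs the standard library. A small sketch — the raw bytes here are a stand-in for real image bytes extracted from a PDF page:

```python
import base64

# Stand-in for image bytes extracted from a PDF page.
image_bytes = b"\x89PNG\r\n\x1a\nfake image data"

# Encode to a base64 string so the image can be stored alongside
# text chunks and later passed to a multimodal LLM.
b64_string = base64.b64encode(image_bytes).decode("utf-8")

# Decoding recovers the original bytes exactly.
recovered = base64.b64decode(b64_string)
```

The base64 string is what gets kept in the image data store and later embedded into the message sent to the model.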
Then we take both these embeddings and store them in some kind of vector store. Let's say I will use a FAISS vector store, where I'm going to store all of these embeddings. That is my next step. So now I have my vector store ready. All I have to do is take a new ...

... query. From this query, I will again use the CLIP model. Why? So that I can convert the query into an embedding. So here I will write "CLIP embed": the CLIP model will be used to convert the query into an emb...

The next step is that we do a vector search from here. This will basically be my retriever. Once we do the retrieval, we get the top-k documents, and these top-k documents will contain both text and image information.

Then, before sending it to the LLM model, as I said, let's say I'm using a multimodal LLM — OpenAI GPT-4.1 — as the specific model.

Before I give it the text and images, they should be in a specific format. So we convert everything into that format, and this format is passed to the LLM model. And finally, we get the multimodal answer.

So these are the steps we are going to follow in order to implement this entire multimodal RAG.
So this is my RAG flow. The format step is necessary because multimodal LLMs require the retrieved text and images in a specific format; only then will they be able to provide the answer. So those are the steps. Initially we take the PDF document and read it ...

Similarly, we convert the images into embeddings using the CLIP model from OpenAI; the best part is that it is already trained on image-to-sentence mappings. Then we store everything in a vector store. Whenever we get a new query, we embed it, and from the retriever we ...

Then we format it, give it to the LLM model, and finally we get the multimodal answer. That is what we are going to do. And the best part is that when we do this, you'll see how efficient it is and how easily you can ...
The CLIP full form is Contrastive Language-Image Pre-training. That is what we are going to use.

So I hope you have understood the entire flow. You can see that CLIP will be very handy, because it can process both text and images — that's the best part about it. For images, it uses a vision transformer; for text, it uses a text encoder.

If you look at the architecture of CLIP, it is a combination of a vision transformer and a text transformer: the text transformer handles text, and the vision transformer handles images. So I hope you liked this video on multimodal RAG. In the next video, w...
So guys, now let's implement the multimodal RAG, where the data source we are going to consider is a PDF with images. Now, if you remember the flow, the first thing is that we take the PDF document and extract the text and images ...

This PDF here, you can see, is a very simple PDF — just to show you one basic example. In this PDF you can see some important information: the document summarizes the revenue trend across Q1, Q2, and Q3. As illustrated in the chart below, revenue grew steadily with th...

Now, considering this, we will ask some queries and see whether my RAG will be able to answer them or not. So I'm going to use that particular dataset. Now, coming back to our simple diagram, you can see that ...

So how do we do this? For this, we will be using the library called PyMuPDF. Inside PyMuPDF there is a module called fitz. The best part about this library is that it is very good at text extraction, image extraction, speed, and memory usage. So we will go ahead and u...

First of all, I will import some libraries. Let me import all the specific libraries we are going to use. Here you can see that we are importing fitz, and we are using Document. I have already to...

I will discuss more about this. Then we import Image from PIL so that we can work with the images. Along with this, we import torch. Then, from langchain's chat models, we import the chat model initializer, the prompt template, HumanMessage, and cosine similarity. Along with that, we also import base64. ...
So now let me quickly execute this. It will take some time, since there are so many libraries being imported, but it will get executed. The next thing, as I said, is that I will be requiring the CLIP model ...

The main aim of this CLIP model is that after extracting the text and the images, we convert the text into chunks and then into embeddings, and similarly convert the images into embeddings. For both, we use this same CLIP model itse...

To load the CLIP model, two things are required: one is the processor and the other is the model. I will talk about why we require the processor. But before that, let me quickly import os and load the environment from .env. I'm going to impo...

Now, for setting up the environment, I'll write os.environ, and here I'm going to set OPENAI_API_KEY using os.getenv. You may be thinking: am I doing this for the CLIP model? No — I'll be using the OpenAI multimodal LLM. R...

Now let's initialize the CLIP model for unified embeddings. First, I will go to Hugging Face; let me quickly open the browser for you.

Here I will search for the CLIP model on Hugging Face. You can see all the information here: CLIP is a multimodal vision-and-language model motivated by overcoming the fixed number of categories, and you can find all the CLIP checkpoints under the OpenAI ...

Now, for initializing the CLIP model, two things are required. One: I will create a variable called clip_model, and here I write CLIPModel.from_pretrained — so I'm directly loading the model. The model name, again, you can ...

It is openai/clip-vit-base-patch32. This is the model we are going to use, and it is responsible for converting both text into embeddings and images into embeddings. Along with this, since we need to use this par...

And here, for the processor, we write "openai/" with the same model name. What is this processor? When giving input to any model, this clip_processor makes sure the input is in whatever format is required. It ...

Now I can also call .eval(), just to put the CLIP model into evaluation mode and inspect it. This will take some time — it depends on your system. Here you can see the CLIP model is using the CLIP text transformer, embeddings, pos...
So for the second step, which I have already told you about, we use this CLIP model from OpenAI, and we have loaded it. Now, coming to the next important step: we need a way of embedding the images. How do we embed the images? ...

For that, we will create two embedding functions, both using the CLIP model. One is def embed_image, where we pass our image data and then embed the image using CLIP.

Now, how this image data will come, I will talk about in the later stages. First of all, we check isinstance(image_data, str): if it is a path — that is, if I am providing the image data in the for...

Otherwise, if it is already an image — say, if we give the data directly — then it is just treated as the image data. Once we have this image data, since we need to convert the image into an embedding vector, we use the clip_processor. And inside ...

We return the tensors in PyTorch format — this is basically saying, hey, you need to return the tensors as PyTorch tensors. If you have some understanding of deep learning, you'll recognize this. What it does is ...

See, once we get this, these will be my inputs to the CLIP model. So now I write "with torch.no_grad():" and take the image features; there is a function on the CLIP model called get_image_features, which computes the features based on these ...

Then we normalize. Every image's feature magnitudes will be different, and if we want to convert the features into a unit vector, we take the features and divide them by their norm, keeping the dimension as -1. And then ...
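The normalization step just described — dividing a feature vector by its L2 norm so it becomes a unit vector — can be illustrated without torch. A minimal sketch in plain Python:

```python
import math

def l2_normalize(vec):
    # Divide each component by the vector's L2 norm so the result has
    # length 1; cosine similarity between unit vectors is then just a
    # dot product, which is what the vector search relies on.
    norm = math.sqrt(sum(x * x for x in vec))
    return [x / norm for x in vec]

features = [3.0, 4.0]          # a toy 2-d "feature vector" (norm = 5)
unit = l2_normalize(features)  # -> [0.6, 0.8]
```

In the actual code this is done with torch tensors along the last dimension (`dim=-1`), but the arithmetic is the same.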
Here you can see embed_text using CLIP. I have used the same clip_processor; here we pass the text with return_tensors, padding=True, truncation=True, and a maximum length of 77. And again, we normalize. The only difference is that instead of get_image_features, we ...

But at the end of the day, we are using both the CLIP model and the CLIP processor in both functions. Perfect. So these are the two functions: one is embed_text and one is embed_image. So we have completed this step. ...
So let's say my PDF path is multimodal_sample.pdf — the same PDF I showed you. Now I will write doc = fitz.open(pdf_path), whatever the PDF path is. Initially, we wil...

... will create some variables in which to store all documents and embeddings: all_docs, all_embeddings, and an image_data_store. The next step is to use some kind of text splitter, because a text splitter is also required for ...

Now, with respect to doc: if I execute this and inspect doc, you can see it is a document object for multimodal_sample.pdf. Once I have this variable ...

Then I will first get my text data, convert it into text chunks, and convert those into embeddings. Similarly for the image data, I will get all the images and convert them into image vectors. That is what we are going to do. So I will write "for i, pag..." [enumerating the pages].

Then I will first process the text — that is my first step. My second step is to process the images. Now, for processing the text, I write text = page.get_text(), so we use this particular function. ...

... and then .strip() — strip, not trip — to remove the surrounding whitespace. Then we create a temporary document for splitting; I keep it in the form of a Document data structure. So this is my first step, and we have don...

See: temp_doc is a Document data structure with page_content=text and metadata carrying some of the information. And remember: for all the text data, you need to keep the metadata as type="text". This is really important. Then we use split_documents on this. ...
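The chunking step can be sketched without langchain. Here a naive fixed-size splitter stands in for a real text splitter, and a plain dict stands in for the Document structure; the point being illustrated is tagging every text chunk with metadata type="text" so it can later be separated from image chunks:

```python
def split_text(text, chunk_size=50, overlap=10):
    # Naive fixed-size character splitter with overlap — a stand-in
    # for a real splitter such as langchain's text splitters.
    chunks = []
    start = 0
    while start < len(text):
        chunks.append(text[start:start + chunk_size])
        start += chunk_size - overlap
    return chunks

page_text = "This document summarizes the revenue trend across Q1, Q2 and Q3."
docs = [
    {"page_content": chunk, "metadata": {"page": 0, "type": "text"}}
    for chunk in split_text(page_text)
]
```

Each chunk keeps the page number and the type tag; the retrieval code later uses exactly this type field to decide which results are text and which are images.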
So, for chunk in text_chunks, we call this embed function; then I append each embedding into the all_embeddings variable, and all_docs.append as well. If you remember, embed_text is the function that gets the text features. R...

So guys, now similarly for the images, we need to follow three important steps. I will quickly comment them for you — there is a lot of code to be written, and I'm trying my best to teach it in a way you can understand. So ...

First, store the image as base64 for the GPT multimodal model. Second, create CLIP embeddings for retrieval, just like we did for the text. Now, in this same loop, we already have the page. Similarly, what I will do is ...

So here, in the same loop, we write it like this: I'm enumerating through every page, and there is a function called get_images. This gets all the image information. As I said, we need to convert the PDF image to PIL format — that is what we are doing here. ...

Then we create a unique identifier to name that particular image; you can see it is based on the page index and image index. Then we store the image as base64 for later use with the GPT model — whichever model we give it to — and this PIL image is getting ...

And finally, we embed it using CLIP: I pass the PIL image, get the embedding, and append it into the all_embeddings variable we created — all_embeddings.append(embedding). Finally, we create the docum...
This part processes the text, and this part processes the images, and step by step we have written out what we are doing. Please have a look at the code; step by step, you will be able to understand all of it. I have also written proper comments so that you ...

Now, if you want to see how many embeddings and documents were created, just execute all_embeddings over here. This is how my all_embeddings looks. Similarly, if you want to see some other embeddings, you...

Here you can see there are two docs, which is good. And with respect to these docs, you can see: image, image ID information — and this is my text. So this one is type "image" and this one is type "text". So we are able to get this. This is perfect. Now the ...

So I'm using that all_embeddings over here; you can see I'm converting it into an embeddings array, and then FAISS.from_embeddings. There was also another function, from_documents; here we are going to use from_embedd...

See, if I print the embedding array, this is what it looks like for the two records. So I'm taking all the docs and all the embedding arrays and combining them with zip ...

See, if I zip all_docs with the embedding array, here you can see: this is my page information, this is my image information, and for each, this is its vector. For my second record, this is its vector — and it is 512-dimensional, which I've ...
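The zip of documents and vectors described above produces (content, vector) pairs, which is the shape a from_embeddings-style constructor (such as langchain's FAISS.from_embeddings, which takes a list of (text, embedding) tuples) expects. A toy sketch, with 4-d vectors standing in for CLIP's 512-d ones:

```python
# Documents produced by the text and image processing loops above
# (dicts stand in for the Document structure used in the walkthrough).
all_docs = [
    {"page_content": "annual revenue grew across quarters",
     "metadata": {"page": 0, "type": "text"}},
    {"page_content": "[Image: page0_img0]",
     "metadata": {"page": 0, "type": "image"}},
]

# Toy 4-d unit vectors; real CLIP embeddings are 512-dimensional.
embedding_array = [
    [1.0, 0.0, 0.0, 0.0],
    [0.0, 1.0, 0.0, 0.0],
]

# Pair each document's content with its precomputed vector — this is
# the (text, embedding) structure handed to the vector store.
text_embeddings = [
    (doc["page_content"], vec)
    for doc, vec in zip(all_docs, embedding_array)
]
```

Because the embeddings are precomputed by CLIP, the vector store only needs these pairs; it never has to call an embedding function itself.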
So we iterate through, get the page content, and put it inside text_embeddings. Here the embedding function argument will be None, because we have already computed the embeddings — that is exactly why we used from_embeddings. So this becomes my vector store. Now, fr...

... with the help of this, you can implement a multimodal RAG pipeline. It will be able to understand both the text and the images. Now this is done; my vector store is ready, and I can use it for creating a ...

So here, with respect to the search, we have created a function called retrieve_multimodal: unified retrieval using CLIP embeddings for both text and images, because everything sits in one vector store. This will basically be my retrieval function: whenever I give a query, ...

Here, what we need to do is search based on the query embedding. I already have the query embedding, so I can directly create a variable: results = vector_store.similarity_search_by_vector — one of the many methods I told you about, right? ...

Here I pass embedding=query_embedding — with a single equals sign, since it is a keyword argument, not the == operator — and k=k. And if you remember what k is: the value we have given, k=5; it should be sm...

Query embedding — why is this wrong? Let me check. similarity_search_by_vector, query_embedding... I think I made a spelling mistake. Just a second. Oh — there was an indentation issue. So: query embedding, similarity_search_by_vector, and then we return the results. Perfect.
So here you can see that I'm able to retrieve the results. This retrieval function acts just like a retriever: from it, I get some text and images among the top-k relevant results. But we need to convert these into a specific format before we give them to ou...

Please go ahead and check this function. It is called create_multimodal_message — it creates a message with both text and images for the GPT-4 vision model. Here I have created a content list: content.append with type "text" for the question and context; it's just like one template of information. I will ...

We separate the text and the images. If we have text docs, we create a separate text context for them; if there are images, we create a separate image context. For images we use an image_url entry, because the image URL type is required along with the base64 image data. The GPT-4 version that w...

Just go ahead and see how this message is created. There is a specific format required by GPT-4.1, and that is what we use here. The main idea is that whatever retrieved documents we get, we separate them into text docs and image docs, ...
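The message structure being described follows the OpenAI chat format for vision input: one user message whose content is a list mixing a text part with image_url parts carrying base64 data URLs. A hedged sketch — the function name and the image_data_store mapping mirror the ones used in this walkthrough, and the base64 string below is a stand-in:

```python
def create_multimodal_message(query, text_docs, image_ids, image_data_store):
    # Build the mixed-content "user" message expected by vision-capable
    # chat models: one text part, plus one image_url part per image.
    context = "\n".join(text_docs)
    content = [
        {"type": "text",
         "text": f"Question: {query}\n\nContext:\n{context}"},
    ]
    for img_id in image_ids:
        b64 = image_data_store[img_id]  # base64 string stored earlier
        content.append({
            "type": "image_url",
            "image_url": {"url": f"data:image/png;base64,{b64}"},
        })
    return {"role": "user", "content": content}

msg = create_multimodal_message(
    query="What does the chart show?",
    text_docs=["Annual revenue grew across Q1-Q3."],
    image_ids=["page0_img0"],
    image_data_store={"page0_img0": "iVBORw0KGgo="},  # stand-in base64
)
```

The data-URL form (`data:image/png;base64,...`) is what lets the base64 image travel inline inside the chat request instead of as a separate upload.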
So here, inside this function, you can see context_docs: I call the retrieval function with the query. Then we create the multimodal message in the way the LLM wants it. And finally, with whatever context_docs we get, from this message we invoke ...

If you want to see the response, you can also return response.content over here, and this is basically printing all the relevant context you got from the retriever. Very simple, right? Two or three functions and you are able to do this. And the exciting part is t...
First: how does the chart on page one — let's go ahead and see this. "How does the chart on page one show revenue trends?" "Summarize the main findings of the document." "What visual elements are present in the document?" These are the three questions. I will print each query and call this multimodal RAG pip...

Here we call the multimodal RAG pipeline, and then we can see the answer. So, "What does the chart on page one show about revenue trends?" Retrieved documents: two documents — text from page zero, annual revenue, and the image from page zero; this information is getting disp...

Then "summarize the main findings from the document": text from page zero is there, and the main findings you can see here — steady revenue growth. The document shows growth over the three quarters; the height of the bars increases from left to right, visually representing growth across the three quarters, ...

... how we created a retriever, then how we created the specific format for the LLM, and finally how we got the multimodal answer. The main thing here is how you generate this multimodal message, and for this you need to see the documentation for the OpenAI multimodal LLM models — how this spec...

Now you can go ahead and play with any kind of PDF, try this out, and see what you are able to get. So I hope you liked this video. I will see you all in the next video. Thank you. Take care. Bye-bye.