Summary
The LangChain framework makes building LLM applications much easier, extends their capabilities, and introduces a structured approach that simplifies supporting and managing applications.
This post builds on top of the previous one, https://www.cloudmatter.io/post/leveraging-openai-and-azure-services-for-intelligent-data-search, and provides a simple end-to-end example of using an LLM with a vector DB to produce responses based on the content of your PDF documents.
Along the way we will see how chains elegantly integrate Azure Cognitive Search (vector search) with the OpenAI LLM and make the code cleaner. There is much more that can be done with chains, but that is for another day. :)
Data indexing:
Rather than trying to configure LangChain to fit your index structure, it is much easier to let it do the work for you. For example, the code below creates an Azure search index and loads document chunks:
Query data:
Since LangChain makes things so easy, I added a bit more functionality, comparing two models: gpt-4 and text-davinci-003.
gpt-4 (conversational)
davinci (retrieval):
Output
gpt-4
davinci
Both models provide similar responses (the formatting part still needs work):
Answer: The Vice President of Human Resources for Contoso Electronics is responsible for leading the Human Resources team in developing and implementing strategies that support the company’s long-term objectives. ... [id: role_library_pdf-10], fostering a positive and productive work environment, collaborating with other departments ....
The Manager of Human Resources for Contoso Electronics is responsible for providing leadership and direction to the Human Resources department. ... [id: role_library_pdf-27], overseeing the recruitment and selection process ...
Full end-to-end sample code is provided in GitHub. Building AI samples has never been this simple.
Well, that is until you start turning them into enterprise/production-grade code :) But that is yet another topic for another day :)