Vector Stores
================
⚠️ Warning: This text was generated by NearAI using the vector store example.
Introduction
------------
Vector Stores are a powerful feature in NearAI that lets you store and manage large amounts of data in a vectorized format. This enables efficient searching and retrieval, making Vector Stores ideal for applications such as natural language processing, image recognition, and more.
Creating a Vector Store
-----------------------
To create a Vector Store, you can use the client.beta.vector_stores.create
method, passing in a name for the store and any additional metadata.
Example:
import json
import openai

# `base_url` is the NearAI hub URL; `auth` is your NearAI auth payload
# (see the linked examples below for how both are loaded from your config)
client = openai.OpenAI(base_url=base_url, api_key=json.dumps(auth))
# Create a vector store (additional metadata can be passed here as well)
vs = client.beta.vector_stores.create(name="example_vector_store")
print(f"Vector store created: {vs}")
Uploading Files
---------------
To add files to a Vector Store, first upload the file with the client.files.create
method, then attach it to the store with the client.beta.vector_stores.files.create method.
Example:
# Upload a file to the vector store
uploaded_file = client.files.create(
    file=open("example_file.txt", "rb"),
    purpose="assistants",
)
# Attach the uploaded file to the vector store
attached_file = client.beta.vector_stores.files.create(
    vector_store_id=vs.id,
    file_id=uploaded_file.id,
)
print(f"File uploaded and attached: {uploaded_file.filename}")
Retrieving Files
----------------
To retrieve the contents of an uploaded file, you can use the OpenAI-compatible client.files.content
method, passing in the file ID.
Example:
# Retrieve the raw contents of the uploaded file
retrieved_file = client.files.content(uploaded_file.id)
print(f"File retrieved: {retrieved_file.text}")
Deleting Files
--------------
To delete a file, you can use the client.files.delete
method, passing in the file ID. Note that this deletes the underlying file object itself, not just its attachment to the Vector Store.
Example:
# Delete the underlying file object
deleted_file = client.files.delete(uploaded_file.id)
print(f"File deleted: {deleted_file}")
Searching the Vector Store
--------------------------
To search a Vector Store, you can use the client.post
method, passing in the search query and any additional parameters in the request body. Because the client is already configured with the hub's base URL, the request path is relative.
Example:
# Search the vector store; the path is relative to the client's base URL
search_query = "example search query"
search_response = client.post(
    path=f"/vector_stores/{vs.id}/search",
    body={"query": search_query},
    cast_to=dict,
)
print(f"Search results for '{search_query}':")
print(f"- {search_response}")
Obtaining LLM Responses
-----------------------
To obtain LLM responses grounded in a Vector Store, you can use the inference.query_vector_store
method, passing in the Vector Store ID and the search query, and then include the results in the prompt of a completion request.
Example:
def generate_llm_response(messages, processed_results) -> str:
    SYSTEM_PROMPT = """You're an AI assistant that writes technical documentation. You can search a vector store for
information relevant to the user's query. Use the provided vector store results to inform your response, but don't
mention the vector store directly."""
    vs_results = "\n=========\n".join(
        [f"{result.get('chunk_text', 'No text available')}" for result in processed_results]
    )
    messages = [
        {"role": "system", "content": SYSTEM_PROMPT},
        *messages,
        {
            "role": "system",
            "content": f"User query: {messages[-1]['content']}\n\nRelevant information:\n{vs_results}",
        },
    ]
    return inference.completions(model="llama-v3p1-405b-instruct", messages=messages, max_tokens=16000)
# Get an LLM response using the vector store
# (assumes ClientConfig, InferenceRouter, and CONFIG are imported from your
# NearAI environment, and process_vector_results is defined as sketched above)
search_query = "example search query"
messages = [{"role": "user", "content": search_query}]  # minimal user message for illustration
client_config = ClientConfig(base_url=CONFIG.nearai_hub.base_url, auth=CONFIG.auth)
inference = InferenceRouter(client_config)
vector_results = inference.query_vector_store(vs.id, search_query)
processed_results = process_vector_results([vector_results])
llm_response = generate_llm_response(messages, processed_results)
print(llm_response["choices"][0]["message"]["content"])
Note: This is just a general example and you may need to modify it to fit your specific use case.
Helpful links*
--------------
- Load local files into the vector store: vector_store.py
- Load a GitHub repository into the vector store: vector_store_from_source.py
- Create this help document: vector_store_build_doc.py
* Helpful links were provided by the editor