Nvidia is giving buyers of its newest graphics processing units another incentive to upgrade: a new tool that runs an AI-powered chatbot on a Windows PC without an internet connection. The feature, called Chat with RTX, is available to owners of GeForce RTX 30 Series and 40 Series cards.

Chat with RTX lets users customize a generative AI model, similar to OpenAI’s ChatGPT, by connecting it to their own documents, files, and notes so it can retrieve information from them.

Nvidia explains its new chatbot in a blog post: “Rather than searching through notes or saved content, users can simply type queries. For example, one could ask, ‘What was the restaurant my partner recommended while in Las Vegas?’ and Chat with RTX will scan local files the user points it to and provide the answer with context.”
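Nvidia hasn’t published the internals, but the behavior it describes resembles a basic retrieval step: scan the local files the user points to, score them against the question, and hand the best matches to the model as context. Below is a minimal sketch of that idea using only Python’s standard library; the folder name, scoring scheme, and function names are illustrative assumptions, not Nvidia’s implementation.

```python
import pathlib
import re
from collections import Counter

def tokenize(text: str) -> Counter:
    """Lowercase word counts stand in for real embeddings."""
    return Counter(re.findall(r"[a-z']+", text.lower()))

def score(query: Counter, doc: Counter) -> int:
    """Crude relevance: how often the query's words appear in the document."""
    return sum(doc[word] for word in query)

def retrieve(folder: str, question: str, top_k: int = 3) -> list[str]:
    """Rank plain-text files under `folder` against the question."""
    query = tokenize(question)
    ranked = []
    for path in pathlib.Path(folder).rglob("*.txt"):
        text = path.read_text(encoding="utf-8", errors="ignore")
        ranked.append((score(query, tokenize(text)), path.name, text))
    ranked.sort(reverse=True)
    return [text for _, _, text in ranked[:top_k]]

if __name__ == "__main__":
    # The snippets returned here would be prepended to the prompt that the
    # locally running model receives, grounding its answer in the user's files.
    for snippet in retrieve("notes", "What was the restaurant my partner recommended in Las Vegas?"):
        print(snippet[:200])
```

In a production system the word-count overlap would typically be replaced by vector embeddings, but the overall flow, retrieve first, then generate, is the same.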

Chat with RTX lets users choose among open-source AI models such as Mistral’s offerings, with an option to switch to Meta’s Llama 2 large language model (LLM). Keep in mind, however, that the downloads are substantial, ranging from 50GB to 100GB depending on the model chosen.

At present, Chat with RTX supports a range of file formats, including text, PDF, .doc, .docx, and .xml. Point the application at a folder containing any of these supported file types, and it will load them into the model’s dataset. Chat with RTX can also take the URL of a YouTube playlist and import transcriptions of its videos, allowing the selected model to search through the video content.
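Nvidia hasn’t said how the YouTube import works under the hood, but a plausible approach is to pull each video’s transcript and add it to the same dataset as the local files. The sketch below shows one way to do that with the third-party pytube and youtube-transcript-api packages; the function name and the English-only assumption are mine, not Nvidia’s.

```python
from urllib.parse import urlparse, parse_qs

from pytube import Playlist                                # third-party package
from youtube_transcript_api import YouTubeTranscriptApi    # third-party package

def playlist_transcripts(playlist_url: str) -> list[str]:
    """Fetch a transcript for each video in a public YouTube playlist."""
    transcripts = []
    for video_url in Playlist(playlist_url).video_urls:
        # Watch URLs look like https://www.youtube.com/watch?v=<id>
        video_id = parse_qs(urlparse(video_url).query)["v"][0]
        lines = YouTubeTranscriptApi.get_transcript(video_id)
        transcripts.append(" ".join(line["text"] for line in lines))
    return transcripts
```

Each transcript string could then be indexed alongside the user’s documents so the model can search video content the same way it searches local files.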

There are a few limitations to note as well. Chat with RTX cannot carry context from one question to the next the way ChatGPT can: ask it about a bird’s native habitat and then follow up with a question about the bird’s color, and it will not know which bird you mean. ChatGPT, by contrast, has introduced a memory feature for more personalized answers.
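The difference comes down to whether earlier turns are carried into the next prompt. The snippet below is a simplified illustration of that distinction, not Nvidia’s or OpenAI’s actual code.

```python
from typing import Callable

def ask(model: Callable[[str], str], question: str,
        history: list[str] | None = None) -> str:
    """`model` is any callable that turns a prompt string into a reply."""
    if history is not None:
        # Multi-turn behavior: prior exchanges are included in the prompt,
        # so a follow-up like "What color is it?" can be resolved.
        prompt = "\n".join(history + [question])
    else:
        # Chat with RTX's current behavior: each question stands alone,
        # so the follow-up has no referent for "it".
        prompt = question
    return model(prompt)
```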

Nvidia acknowledges that various factors, some easier to control than others, can affect the accuracy of the app’s responses. These include how the question is phrased, the performance of the chosen model, and the breadth of the dataset used for fine-tuning. Queries asking for specific facts found in a few documents tend to produce more reliable results than requests to summarize a single document or a whole collection.

Still, the chatbot is a work in progress, and Nvidia promises to improve it over time.