Augmenting AI Experience with Local Data


We are living in the age of artificial intelligence (AI), where data is the fuel that powers intelligent systems. The conventional approach, however, involves sending our data to models hosted in the cloud. What if we could instead bring the models to our locally stored data and keep that data where it is? This idea has been gaining traction, and it opens up new possibilities for enhancing our AI experience.

Models Come to Our Data

Traditionally, data was sent to models for processing and analysis. But what if we flipped the script and had the models come to our data? This concept can be seen as a way to augment our AI capabilities by leveraging the wealth of data stored locally. As @Jayakumark mentioned, “our browsing history tells a lot about what we read… Almost everyone has a habit of reading x news site, x social network, x YouTube videos.” This browsing history and other personal data can provide valuable insights that are currently underutilized.

Local Models for Personal Data

While there are many open-source solutions for loading personal data, they often lack the capability to handle images or videos. However, significant progress has been made in developing local models for image processing. For example, LLaVA, an LLM with multi-modal image capabilities, has shown promise in running efficiently on personal laptops, as @simonw highlighted. Additionally, models like Salesforce BLIP can generate captions for images, as mentioned by @simonw.

Another noteworthy development is CogVLM. As @orbital-decay mentioned, it surpasses LLaVA in performance, although it may require more powerful hardware. @cinntaile pointed out, however, that recent updates allow CogVLM to run in 11GB of VRAM, making it more accessible.

Bringing AI to Local Data

The idea of bringing models to our local data is not just theoretical; at least one company is actively pursuing this approach. @conradev shared that their product collects personal data into a local database and trains models to utilize it. These models can be either local or cloud-based, and the routing of requests can be customized based on data sensitivity or the required capabilities. The goal is a privacy-focused approach that empowers individuals with their own AI models.
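The sensitivity-based routing described above can be illustrated with a minimal sketch. Everything here is a hypothetical stand-in: the `Request` type, the keyword list, and the routing policy are assumptions, not the company's actual implementation (a real system would use a proper classifier rather than keyword matching).

```python
# Hypothetical sketch: route a request to a local or cloud model
# based on the sensitivity of the data it touches.
from dataclasses import dataclass

SENSITIVE_KEYWORDS = {"password", "ssn", "medical", "bank"}


@dataclass
class Request:
    prompt: str
    needs_large_model: bool = False


def is_sensitive(prompt: str) -> bool:
    # Naive keyword check; stands in for a real sensitivity classifier.
    words = set(prompt.lower().split())
    return bool(words & SENSITIVE_KEYWORDS)


def route(req: Request) -> str:
    # Sensitive data never leaves the machine; otherwise prefer the
    # cloud only when the task actually demands a larger model.
    if is_sensitive(req.prompt):
        return "local"
    return "cloud" if req.needs_large_model else "local"
```

The key design choice is that the sensitivity check runs first, so privacy overrides capability: a sensitive request stays local even if it would benefit from a larger cloud model.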

A lighter discussion arose around the company's domain name: @gardenhedge found it amusing and admitted they would join just for the name. @voakbasda expressed concern that spending a significant amount on a domain might indicate more style than substance. In response, @conradev clarified that they acquired the domain for a reasonable price and that the high cost of .inc domains is intentional, to discourage domain squatting.

Building Personal AI

The concept of personal AI is both exciting and promising. As @csbartus mentioned, models that use search history and behavior to compose search queries more effectively would be game-changing. @timenova pointed to the example of GitHub Copilot, which scans and indexes a codebase to provide context-aware code suggestions. Extending this idea to personal data like messages, browsing history, and webpage content could revolutionize how we interact with AI.

@jakderrida envisioned a more ambitious application, where models derive the best way to compose search queries based on our search history and behavior. The aim is to fine-tune the AI to emulate the search processes of researchers, paralegals, or fact-checkers. This would require advanced operators and personalized search strategies.
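The kind of query composition @jakderrida envisions could start from something very simple: mining the user's own history for their most-trusted sources and applying an advanced operator. This is a hedged sketch; the `compose_query` function and the frequency heuristic are illustrative assumptions, not a described implementation.

```python
# Hypothetical sketch: use browsing history to personalize a search
# query with an advanced operator (site:), emulating how a researcher
# might restrict a search to a trusted source.
from collections import Counter


def compose_query(topic: str, history: list[str]) -> str:
    # Count which domains appear most often in past visits.
    domains = Counter(
        url.split("/")[2] for url in history if "://" in url
    )
    if not domains:
        return topic
    top_domain, _ = domains.most_common(1)[0]
    # Restrict results to the user's most-visited site.
    return f"{topic} site:{top_domain}"
```

A real system would weight by recency and task, and learn which operators (quotes, `filetype:`, date ranges) a given workflow benefits from, rather than always picking the single most-visited domain.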

Challenges and Considerations

While the concept of local AI models brings exciting possibilities, there are practical considerations. @butz highlighted the need for high-end hardware, particularly for training large language models (LLMs). However, @cjbprime pointed out an alternative: rather than training models on personal data, retrieval augmented generation can add relevant documents to the prompt at query time.
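Retrieval augmented generation, as @cjbprime describes it, can be sketched in a few lines. This is a toy illustration: real systems use embeddings and a vector index, while here word overlap stands in as the scoring function, and the function names are made up for the example.

```python
# Minimal retrieval-augmented-generation sketch: score local documents
# against the query, then prepend the best matches to the prompt so the
# model can use personal data without being trained on it.


def retrieve(query: str, docs: list[str], k: int = 2) -> list[str]:
    # Score each document by word overlap with the query.
    q = set(query.lower().split())
    scored = sorted(
        docs,
        key=lambda d: len(q & set(d.lower().split())),
        reverse=True,
    )
    return scored[:k]


def build_prompt(query: str, docs: list[str]) -> str:
    # Retrieved context goes before the question in the final prompt.
    context = "\n".join(retrieve(query, docs))
    return f"Context:\n{context}\n\nQuestion: {query}"
```

The point of the pattern is that the expensive step (training) is replaced by a cheap one (lookup), which is why it suits consumer hardware.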

@Art9681 explained that the effectiveness of this workflow depends on document size. For large documents, even powerful hardware may struggle to keep inference fast, which means a powerful client is still required. For smaller documents, however, real-time feedback can work effectively.
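One common mitigation for the large-document problem @Art9681 raises is chunking: splitting a document into overlapping windows so that each inference call stays small. The sketch below is an assumption on my part (word-based windows with a fixed overlap), not something described in the discussion.

```python
# Hypothetical sketch: split a large document into overlapping
# word windows so each inference call fits a small context budget.


def chunk(text: str, max_words: int = 200, overlap: int = 20) -> list[str]:
    words = text.split()
    step = max_words - overlap  # advance less than a full window
    chunks = []
    for start in range(0, len(words), step):
        chunks.append(" ".join(words[start:start + max_words]))
        if start + max_words >= len(words):
            break  # last window already covers the tail
    return chunks
```

The overlap exists so that a sentence falling on a chunk boundary still appears whole in at least one window; the right window size depends on the local model's context length and speed.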

The Future of Augmented AI

In conclusion, the concept of models coming to our data and augmenting our AI capabilities shows great promise. By leveraging local data and personalizing AI experiences, we can unlock new insights and enhance the power of AI in our daily lives. While there are challenges to overcome, the ongoing advancements in local models and the pursuit of privacy-focused AI solutions are steps in the right direction. The future of augmented AI is within our reach, and it holds the potential to revolutionize how we interact with intelligent systems.

So, let’s embrace the idea of models coming to our data and unlock the full potential of augmented AI! 🚀
