Tool Series - BottyBot: A Frontend Chat UI for Local LLM Models

https://www.github.com/patw/BottyBot

In this installment of our Generative AI (GenAI) tool series, we explore BottyBot, a frontend chat UI for locally hosted Large Language Models (LLMs). Built by a developer who was not satisfied with existing options on the market, none of which could consume llama.cpp in server mode directly, BottyBot is designed to connect seamlessly to llama.cpp running in server mode. It has become a crucial part of the developer's daily workflow, serving as the main chat interface to the same LLM backend that powers several other tools and applications.
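For readers who haven't used llama.cpp in server mode: it exposes a small HTTP API that a frontend like BottyBot can call directly. The snippet below is a minimal sketch (not BottyBot's actual code) that posts a prompt to the default /completion endpoint on localhost:8080 and reads back the generated text; the ask_llm helper and its parameters are invented for illustration.

```python
import requests

# Minimal sketch: send a prompt to a llama.cpp instance running in server mode.
# Assumes the default /completion endpoint on localhost:8080; adjust host and
# port to match your own instance.
LLAMA_SERVER = "http://localhost:8080/completion"

def ask_llm(prompt: str, n_predict: int = 256) -> str:
    payload = {
        "prompt": prompt,
        "n_predict": n_predict,   # maximum number of tokens to generate
        "temperature": 0.7,
        "stop": ["</s>"],         # stop sequence; depends on the model's prompt format
    }
    response = requests.post(LLAMA_SERVER, json=payload, timeout=120)
    response.raise_for_status()
    return response.json()["content"]

if __name__ == "__main__":
    print(ask_llm("In one sentence, what does llama.cpp server mode do?"))
```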

The creator of BottyBot operates two GPU-accelerated instances of llama.cpp, which serve as the backbone for numerous applications such as SumBot, ExtBrain, RAGTAG (soon to be updated), and several Python scripts, including a website generator. These backends typically run models from the OpenHermes Mistral or Dolphin Mistral families, with BottyBot providing the day-to-day chat interface on top of them.
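Since all of these tools talk to the same pair of llama.cpp servers over HTTP, sharing the backends is mostly a matter of configuration. The sketch below is purely hypothetical; the host names, ports, and tool-to-instance mapping are invented for illustration.

```python
# Hypothetical configuration: two GPU-accelerated llama.cpp instances shared
# by several tools. Host names, ports, and the mapping are invented examples.
LLAMA_BACKENDS = {
    "bottybot": "http://gpu-box-1:8080/completion",
    "sumbot": "http://gpu-box-1:8080/completion",
    "extbrain": "http://gpu-box-2:8080/completion",
    "website-generator": "http://gpu-box-2:8080/completion",
}

def backend_for(tool_name: str) -> str:
    # Each tool simply reads its assigned backend URL from the shared config.
    return LLAMA_BACKENDS[tool_name]
```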

One of the key features that sets BottyBot apart is its support for multiple "bot" identities: distinct personalities that can be engaging to interact with, and the entire built-in set was generated by BottyBot itself. The project is also a good example of bootstrapping: much of the initial design was created with OpenAI's ChatGPT 3, but once the UI was running well enough, every subsequent feature was added by asking the LLM inside BottyBot for the Python code to implement it. The result is a continually evolving, feature-rich application that caters to a wide range of use cases.
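The article doesn't show how these identities are represented internally, but conceptually a bot identity boils down to a name plus a system prompt that is prepended to every conversation. The sketch below is hypothetical: the identities, field names, and build_prompt helper are invented, and the ChatML-style template shown is the format used by models such as OpenHermes and Dolphin Mistral rather than anything specific to BottyBot.

```python
# Hypothetical sketch of "bot" identities: each one is just a named system
# prompt that shapes the model's personality. BottyBot's internal format may
# differ; these identities are invented for illustration.
BOT_IDENTITIES = {
    "helpful_dev": {
        "system_prompt": "You are a pragmatic senior developer who answers "
                         "with short, working code examples.",
    },
    "grumpy_pirate": {
        "system_prompt": "You are a grumpy pirate who answers questions "
                         "reluctantly but accurately.",
    },
}

def build_prompt(identity: str, user_message: str) -> str:
    # ChatML-style template, as used by OpenHermes / Dolphin Mistral models.
    system = BOT_IDENTITIES[identity]["system_prompt"]
    return (
        f"<|im_start|>system\n{system}<|im_end|>\n"
        f"<|im_start|>user\n{user_message}<|im_end|>\n"
        f"<|im_start|>assistant\n"
    )
```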

In addition to its core chat functionality, BottyBot includes an export feature that produces nicely formatted, easily shareable records of conversations. This is particularly useful for collaborating with others or showcasing the results of a session with an LLM.
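As an illustration of what such an export might look like, the sketch below writes a conversation out as Markdown. The message layout and output format are invented for the example and are not BottyBot's actual export format.

```python
# Hypothetical sketch: export a conversation as a shareable Markdown file.
# The message structure and layout are invented for illustration.
def export_markdown(conversation: list[dict], path: str) -> None:
    lines = ["# Conversation export", ""]
    for message in conversation:
        speaker = message["role"].capitalize()   # e.g. "User" or "Assistant"
        lines.append(f"**{speaker}:** {message['content']}")
        lines.append("")
    with open(path, "w", encoding="utf-8") as f:
        f.write("\n".join(lines))

export_markdown(
    [
        {"role": "user", "content": "What does BottyBot do?"},
        {"role": "assistant", "content": "It's a chat UI for llama.cpp in server mode."},
    ],
    "conversation.md",
)
```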

Unlike RAGTAG or ExtBrain, BottyBot is not a Retrieval Augmented Generation (RAG) tool; it sends prompts to the LLM directly, without any augmentation. A possible future enhancement is manual augmentation or vector search, which would let users enrich the prompt with additional context before it reaches the model.
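To make that idea concrete, the sketch below shows what manual prompt augmentation could look like: context supplied by hand, or returned by a vector search, is simply folded into the prompt before it is sent to the model. This is a hypothetical illustration of the proposed enhancement, not an existing BottyBot feature.

```python
# Hypothetical illustration of prompt augmentation: supplied context (pasted
# manually or retrieved via vector search) is folded into the prompt before
# it goes to the LLM. Not an existing BottyBot feature.
def augment_prompt(question: str, context_chunks: list[str]) -> str:
    context = "\n\n".join(context_chunks)
    return (
        "Answer the question using only the context below.\n\n"
        f"Context:\n{context}\n\n"
        f"Question: {question}\n"
        "Answer:"
    )

prompt = augment_prompt(
    "Which backend does BottyBot talk to?",
    ["BottyBot connects to llama.cpp running in server mode."],
)
print(prompt)
```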

Overall, BottyBot has proven to be an incredibly valuable tool for individuals who wish to harness the power of local, open-source Large Language Models while maintaining complete privacy and control over their data. As a result, it serves as a perfect example of how cutting-edge AI technology can be effectively integrated into everyday workflows and applications. Stay tuned for future updates and enhancements to this versatile and essential chat interface!

  • Human Intervention: Minor. Added the URL for the github repo up top. Also, it seemed to depersonalize me entirely in this article talking about an unknown developer. I’m cool with it, but I probably needed to add some context in the points to indicate who worked on it.

Facts Used:

  • This series covers different tools used for building Generative AI (genai) applications
  • BottyBot is a front end chat UI that connects to llama.cpp running in server mode which hosts a LLM (large language model)
  • I wasn't happy with other solutions on the market and none of them could consume llama.cpp in server mode directly. I operate 2 GPU accelerated instances of llama.cpp which is used by multiple tools like SumBot, ExtBrain, RAGTAG (soon, it needs updating) and a few python scripts like my website generator
  • BottyBot has been a huge success for me, as I use it daily as my main chat interface to LLM models. The back end llama.cpp server is usually running the OpenHermes Mistral or Dolphin Mistral families of LLM models.
  • BottyBot supports different "bot" identities that represent different personalities that can be interesting to interact with. The entire set of built in identities were generated by BottyBot!
  • BottyBot was a perfect example of bootstrapping: I designed a lot of the application with OpenAI's ChatGPT 3, but as soon as the UI was running well enough, all features from that point on were added by talking to the LLM and getting useful python code for features I wanted. It's now being used for all future products.
  • I added export functionality to produce nicely formatted exports for conversations. These are useful for sharing with others.
  • BottyBot is not an example of RAG (retrieval augmented generation) like RAGTAG or ExtBrain. BottyBot uses the LLM directly without any augmentation.
  • A future enhancement to this tool could include manual augmentation or vector search for augmenting the LLM prompt.
  • I love this tool and so far, it's provided the most value to me personally and is a perfect example of using local, opensource large language models with full privacy.