Pondhouse Data Blog


How to Set Up a Secure, Self-Hosted Large Language Model with vLLM & Caddy

Running your own LLM gives you flexibility and full control over your data. This guide walks you through integrating vLLM with the Caddy web server to enable HTTPS encryption for a robust, private AI environment.

5 minute read
Read Post

Improving Retrieval Augmented Generation: A Step-by-Step Evaluation of RAG Pipelines

RAG pipelines are one of the cornerstones of modern AI applications. Evaluating their performance is essential for making them robust and production-ready.

8 minute read
Read Post

Integrating enterprise knowledge with LLMs

Strategies for enhancing AI with corporate data

9 minute read
Read Post