DEV Community

fayzak izzik

A Trust-First Recipe AI Agent Using Algolia (No Hallucinations)

This project was built for the Algolia Agent Studio Challenge.

• The agent operates in Hebrew, matching the language of the recipe and its audience
• Certain verified questions return a video button that jumps to the exact second where the answer appears

Instead of building a generic AI chatbot, I built a recipe-specific AI assistant designed to answer real user questions only from verified knowledge, without ever inventing ingredients, quantities, or steps.

The goal of this project was to test whether a focused, retrieval-only agent can provide real value in cooking — a domain where incorrect answers are worse than no answers.

What I Built

I built a consumer-facing conversational AI agent embedded into the Iditush recipe website.

The agent acts as a personal assistant for a single high-traffic cheesecake recipe, which has over 190,000 views and generates a large volume of recurring user questions.

The assistant answers questions only when the information already exists in a structured knowledge base indexed in Algolia.
If no verified answer exists, the agent explicitly declines to answer and captures the question for future improvement.

The Problem

Popular recipe pages generate thousands of similar questions over time, such as:

• Why did the cake collapse?
• Can I replace a specific ingredient?
• How long should the cake cool?
• What went wrong at a specific step?

Most AI chat systems attempt to answer these questions even when no reliable data exists — often hallucinating cooking advice that can ruin the recipe.

In cooking, a wrong answer is worse than no answer.

The Solution

I built a retrieval-only AI assistant dedicated to this specific recipe.

The agent:

• Searches only within a verified, author-approved knowledge base
• Answers questions only when a relevant match exists
• Explicitly returns “no answer found” when information is missing
• Logs unanswered questions for future expansion of the knowledge base
• Optionally allows users to receive an email once an official answer is added

This creates a feedback loop where real user questions drive knowledge growth — without compromising trust.
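The feedback loop can be sketched in a few lines. This is a minimal Python sketch, not the project's actual code; the class and field names here are my own:

```python
from dataclasses import dataclass
from datetime import datetime, timezone
from typing import List, Optional

@dataclass
class UnansweredQuestion:
    text: str
    asked_at: str
    notify_email: Optional[str] = None  # set only if the user opts in

class QuestionLog:
    """Collects declined questions so the author can add verified answers later."""

    def __init__(self) -> None:
        self.entries: List[UnansweredQuestion] = []

    def record(self, question: str, email: Optional[str] = None) -> UnansweredQuestion:
        entry = UnansweredQuestion(
            text=question,
            asked_at=datetime.now(timezone.utc).isoformat(),
            notify_email=email,
        )
        self.entries.append(entry)
        return entry

    def pending_notifications(self) -> List[UnansweredQuestion]:
        """Questions whose askers want an email once an official answer is added."""
        return [e for e in self.entries if e.notify_email]
```

Every declined question becomes a candidate knowledge-base entry, and the optional email turns a dead end into a follow-up.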

Live Demo

🔗 Live Application: demo

No login is required to test the agent.

How It Works (Architecture)

The agent follows a deterministic, trust-first flow:

  1. A user submits a question via the chat widget
  2. The backend performs search retrieval using Algolia
  3. If a relevant knowledge object is found, the answer is returned
  4. If no match exists, the agent returns a safe fallback message
  5. The unanswered question is logged for future analysis and expansion

No generative model is used for answering.
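The whole flow reduces to a single gate. In this minimal sketch, `search_fn` stands in for the Algolia query and the fallback wording is a placeholder (the real agent replies in Hebrew):

```python
FALLBACK = "No verified answer exists for this question yet."  # placeholder wording

def handle_question(question, search_fn, unanswered_log):
    """Deterministic trust-first flow: retrieve, then answer or decline.

    No generative model is involved: either a knowledge object matches,
    or the agent declines and logs the question for future expansion.
    """
    hits = search_fn(question)  # retrieval step (Algolia in the real system)
    if hits:
        return {"answer": hits[0]["answer"], "verified": True}
    unanswered_log.append(question)  # capture for the feedback loop
    return {"answer": FALLBACK, "verified": False}
```

Because the decision is a plain conditional on retrieval results, the agent's behavior is fully predictable: same question, same knowledge base, same answer.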

How Algolia Is Used

Algolia serves as the single source of truth for the system.

• All recipe knowledge, Q&A, and explanations are indexed as structured objects
• Searchable attributes are explicitly defined
• The agent relies entirely on Algolia retrieval to decide whether it is allowed to answer
• No response is returned without a valid Algolia match

Algolia powers the intelligence layer of the agent.
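To make "structured objects" concrete, here is one plausible shape for a knowledge record and its index settings. The field names are my hypothetical schema, not the project's actual one; with Algolia's Python client, uploading such records would use `index.save_objects([...])` and `index.set_settings(...)`:

```python
# One knowledge object as it might be indexed in Algolia.
# All field names (except objectID) are a hypothetical schema for illustration.
record = {
    "objectID": "cheesecake-cooling-time",       # Algolia's required unique ID
    "question": "How long should the cake cool?",
    "answer": "(author-approved answer text)",
    "video_timestamp_s": 412,                    # optional deep link into the video
}

# Explicitly defined searchable attributes: only these fields participate
# in matching, which keeps retrieval predictable.
index_settings = {
    "searchableAttributes": ["question", "answer"],
}
```

Restricting `searchableAttributes` is what enforces the "no match, no answer" rule at the index level rather than in application code.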

User Experience Decisions

Several UX decisions were intentional:

• No hallucinated answers
• Clear feedback when information is missing
• Optional email collection only after the first unanswered question
• Direct video timestamp linking when relevant, so users jump to the exact moment in the recipe video

The interface and content are in Hebrew, but the system design and retrieval logic are language-agnostic.
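The timestamp button can be backed by a tiny URL helper. This sketch assumes a YouTube-style `t` query parameter measured in seconds; other video players use different conventions:

```python
def video_link(base_url: str, seconds: int) -> str:
    """Return a deep link that starts playback at the answer's exact moment.

    Assumes the player honours a YouTube-style `t=<seconds>` query parameter.
    """
    separator = "&" if "?" in base_url else "?"
    return f"{base_url}{separator}t={seconds}"
```

A knowledge object that stores a timestamp field can then render its video button with `video_link(video_url, seconds)` and nothing more.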

How to Test the Agent

  1. Open the chat widget on the recipe page
  2. Ask a question in Hebrew (for example: “כמה סוכר צריך?” — “How much sugar is needed?”)
  3. Click the video button to jump to the exact moment in the recipe video
  4. Ask a question that does not exist in the knowledge base
  5. Observe the safe fallback behavior

Challenges & Learnings

The biggest challenge was intentionally not using a generative LLM.

This project reinforced that in domains like cooking, trust, correctness, and clear boundaries matter more than creativity.

An agent that knows when not to answer can be more useful than one that answers everything.

What’s Next

This project was designed as an experiment.

If the agent proves reliable and useful for this single high-traffic recipe, the same architecture can be expanded to additional recipes — each with its own dedicated knowledge base.

The system is also designed to support an instruction-driven LLM layer in the future, while keeping Algolia-based retrieval as the final authority.

This project explores whether an AI assistant can provide real value by knowing less — but knowing it with certainty.
