This is my submission for the DEV Challenge: Consumer-Facing Conversational Experiences.
What I Built
ContradictMe is an AI designed to disagree with you.
Most AI assistants are optimized to be helpful, harmless, and agreeable. ContradictMe is built to challenge your beliefs with the strongest possible counterarguments: not to be contrarian, but to make you think better.
The Problem
We’re drowning in agreement.
Social algorithms feed us content that confirms what we already believe. AI assistants optimize for satisfaction. The result is echo chambers everywhere.
The consequences are real:
- Students graduate without learning to defend their ideas
- Professionals make decisions without stress-testing assumptions
- Public discourse becomes tribal; we no longer understand the other side
The Solution
ContradictMe weaponizes disagreement for good. Tell it something you believe strongly, and it will:
- Steel-man the opposition: present the strongest version of each counterargument, not a weak straw man
- Cite real research: back every claim with credible sources scored for quality
- Acknowledge nuance: flag limitations, mixed evidence, and places where you might actually be right
- Spark deeper thinking: end with reflection questions that stay with you
It’s not about changing minds. It’s about strengthening them.
Demo
🔗 https://contradict-me.vercel.app
Core Experience: Challenge Your Beliefs
Enter any belief, for example:
- “Remote work is always better”
- “AI will take all jobs”
- “College isn’t worth it”
…and get a thoughtful, evidence-based counterargument in seconds.
What you’ll see:
- Quality scores (0-100) for each argument
- Source credibility badges (peer-reviewed, institution, citation count)
- Explicit limitations (example: “This study only examined tech workers”)
- Follow-up questions to explore further
AI Debate Arena
Can’t decide what to think? Watch two AI agents battle it out.
Enter a topic and watch:
- Logical Larry (evidence-focused)
- Emotional Emma (values-focused)
…debate through 5 structured rounds. You can:
- Interject with your own questions mid-debate
- Vote for the winner
- Export the full transcript
Analytics & Achievements
Track your intellectual journey:
- Topics explored (tag cloud visualization)
- Arguments encountered
- Critical thinking achievements (example: “Renaissance Mind” for exploring 5+ topics)
Full Feature List
- Dark/light/system themes (keyboard shortcut: ⌘⇧L)
- Conversation history with search + bookmarks
- Export conversations (JSON, Markdown, TXT)
- WCAG accessibility compliance
- Streaming responses with elegant loading states
How I Used Algolia Agent Studio
Architecture Overview
User belief → Agent Studio (GPT-4) → Algolia Search → ranked arguments → synthesized response
The magic is in how retrieval and generation work together.
1) Curated Argument Database
I didn’t index random content. I built a curated database of 26 research-backed arguments across controversial topics:
Work
- Remote productivity
- 4-day workweek
- Gig economy
Economics
- Minimum wage
- UBI
- Cryptocurrency
- Housing policy
Technology
- AI displacement
- EVs
- Social media effects
- Space funding
Social
- Gun policy
- Immigration
- Drug decriminalization
Health/Education
- Plant-based diets
- Healthcare systems
- College ROI
Each argument is structured for optimal retrieval. Example record shape:
```json
{
  "objectID": "remote-work-innovation",
  "position": "against_remote_work",
  "opposingBeliefs": ["remote work is always better", "offices are obsolete"],
  "mainClaim": "Innovation often depends on unplanned collaboration",
  "evidence": "Summary of findings and key results...",
  "supportingPoints": [
    "Spontaneous cross-team idea flow",
    "Whiteboard brainstorming sessions",
    "Mentorship through observation"
  ],
  "limitations": "Context limits, population limits, or mixed evidence notes...",
  "sourceMetadata": {
    "title": "Innovation Patterns in Distributed Teams",
    "authors": ["Author One", "Author Two"],
    "institution": "Example University",
    "publicationType": "peer-reviewed",
    "yearPublished": 2024,
    "citationCount": 847,
    "doi": "10.xxxx/xxxx.xxxx"
  },
  "qualityScore": 87,
  "sourceCredibility": 95,
  "evidenceStrength": 85
}
```
Average quality score across the database: 88.1 / 100
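Records like this can be sanity-checked before indexing. A minimal sketch, using the field names from the record shape above (the helper name and the 0-100 threshold check are my assumptions, not part of the actual pipeline):

```javascript
// Hypothetical pre-indexing check: every record must carry the fields the
// ranking and UI depend on, with scores in the 0-100 range.
const REQUIRED = ["objectID", "position", "opposingBeliefs", "mainClaim",
                  "evidence", "limitations", "sourceMetadata"];
const SCORES = ["qualityScore", "sourceCredibility", "evidenceStrength"];

function validateRecord(record) {
  const errors = [];
  for (const field of REQUIRED) {
    if (!(field in record)) errors.push(`missing field: ${field}`);
  }
  for (const field of SCORES) {
    const v = record[field];
    if (typeof v !== "number" || v < 0 || v > 100) {
      errors.push(`${field} must be a number in [0, 100]`);
    }
  }
  return errors; // empty array means the record is safe to index
}
```

Running this over the whole database before every deploy keeps a malformed record from silently dropping out of the quality-based ranking.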
2) Search Configuration
Key idea: rank by quality, not just relevance.
```js
// Searchable attributes
searchableAttributes: [
  "mainClaim",
  "evidence",
  "supportingPoints",
  "opposingBeliefs", // key: matches user's stated belief
  "metadata.tags",
  "metadata.domain"
],

// Custom ranking - quality over relevance
customRanking: [
  "desc(qualityScore)",
  "desc(sourceCredibility)",
  "desc(evidenceStrength)"
],

// Faceting for filtering
attributesForFaceting: [
  "filterOnly(position)",
  "searchable(metadata.domain)",
  "filterOnly(sourceMetadata.yearPublished)"
]
```
The opposingBeliefs field is crucial: it's how the agent matches "I believe remote work is better" to arguments that oppose that position.
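A retrieval call under this configuration might look like the following sketch. The index name, the helper name, and the `position` filter value are assumptions for illustration; the quality-based ordering itself comes from the customRanking settings above, server-side:

```javascript
// Hypothetical helper: turn a user's stated belief into Algolia search
// parameters. The belief text matches against opposingBeliefs, mainClaim,
// and the other searchable attributes.
function buildSearchRequest(userBelief, opts = {}) {
  return {
    indexName: "arguments",              // assumed index name
    query: userBelief,
    params: {
      hitsPerPage: opts.hitsPerPage ?? 3, // the agent presents 2-3 arguments
      // filterOnly(position) facet: restrict to the opposing side
      filters: opts.position ? `position:${opts.position}` : "",
    },
  };
}

const req = buildSearchRequest("remote work is always better", {
  position: "against_remote_work",
});
```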
3) Prompt Engineering for Steel-Manning
System prompt principles:
- Acknowledge the user’s perspective, don’t dismiss it
- Retrieve the strongest counterarguments (steel-man, not straw-man)
- Present 2-3 top-ranked arguments with:
- Core claim
- Supporting evidence + quality indicators
- Source attribution (authors, institution, year)
- Explicit limitations or caveats
- Note where the user’s belief has valid points
- End with a thought-provoking question
Tone rules: never condescending, never attack the person, challenge ideas with evidence and curiosity.
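The principles above can be sketched as a prompt-assembly step. The field names come from the record shape shown earlier; the helper name and exact wording are illustrative, not the production prompt:

```javascript
// Illustrative sketch: turn top-ranked Algolia hits into a steel-manning
// system prompt that follows the principles listed above.
function buildSystemPrompt(userBelief, hits) {
  const args = hits.map((h, i) => [
    `Argument ${i + 1}: ${h.mainClaim}`,
    `Evidence: ${h.evidence} (quality ${h.qualityScore}/100)`,
    `Source: ${h.sourceMetadata.authors.join(", ")}, ` +
      `${h.sourceMetadata.institution}, ${h.sourceMetadata.yearPublished}`,
    `Limitations: ${h.limitations}`,
  ].join("\n")).join("\n\n");

  return [
    `The user believes: "${userBelief}". Acknowledge this perspective first;`,
    `do not dismiss it. Present the strongest counterarguments below`,
    `(steel-man, never straw-man), with source attribution and limitations:`,
    args,
    `Note where the user's belief has valid points, then end with one`,
    `thought-provoking question. Challenge ideas with evidence and`,
    `curiosity; never attack the person.`,
  ].join("\n");
}
```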
4) Streaming Integration (SSE)
The frontend uses Server-Sent Events (SSE) for real-time streaming so the response feels like the AI is reasoning in front of you.
```js
const response = await fetch(agentEndpoint, {
  method: "POST",
  headers: { "Content-Type": "application/json" },
  body: JSON.stringify({
    messages: [{ role: "user", parts: [{ text: userBelief }] }],
    stream: true,
    compatibilityMode: "ai-sdk-5"
  })
});

// Stream chunks to the client (Node 18+: response.body is async-iterable)
const decoder = new TextDecoder();
for await (const chunk of response.body) {
  const text = decoder.decode(chunk, { stream: true });
  // Parse the SSE format ("data: ...") and forward each event to the UI
}
```
Why Fast Retrieval Matters
Speed isn’t a nice-to-have for ContradictMe; it’s essential to the psychology of the experience.
When someone shares a deeply held belief, they’re in a narrow window of openness. If the system is slow, their defenses come back up and they start preparing rebuttals before they even read the response.
- Fast (< 2s): user stays engaged and receptive
- Slow (> 5s): user disengages or “armors up”
Algolia’s fast search lets the agent retrieve, rank, and synthesize arguments before that window closes.
Real Performance Numbers
| Metric | Result |
|---|---|
| Average belief-to-first-token | 1.2 seconds |
| Full response completion | 4-6 seconds |
| Debate Arena (10 retrievals) | Smooth, no lag |
| Tests passing | 73 / 73 |
The Debate Arena Stress Test
Each 5-round debate requires:
- 10 argument retrievals (5 per side)
- real-time synthesis
- awareness of prior rounds
With slower retrieval, this feature wouldn’t feel usable. With Algolia, it feels like watching two informed debaters go head-to-head in real time.
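The round structure above can be sketched as a loop that alternates retrieval and synthesis for each persona. Here `retrieveArguments` and `synthesizeTurn` stand in for the Algolia query and the model call; their names, and the exact persona objects, are assumptions:

```javascript
// Illustrative debate loop: 5 rounds, one retrieval + one synthesis per
// side per round, each turn aware of the transcript so far.
async function runDebate(topic, retrieveArguments, synthesizeTurn, rounds = 5) {
  const personas = [
    { name: "Logical Larry", style: "evidence-focused" },
    { name: "Emotional Emma", style: "values-focused" },
  ];
  const transcript = [];
  for (let round = 1; round <= rounds; round++) {
    for (const persona of personas) {
      const evidence = await retrieveArguments(topic, persona); // Algolia search
      const turn = await synthesizeTurn(persona, evidence, transcript); // LLM call
      transcript.push({ round, speaker: persona.name, text: turn });
    }
  }
  return transcript; // 2 retrievals per round x 5 rounds = 10 retrievals
}
```

Because every turn blocks on a retrieval, the 10 searches sit directly on the critical path, which is why retrieval latency dominates how the whole feature feels.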
The Bigger Picture
ContradictMe isn’t just a demo; it’s a proof of concept for a different kind of AI.
What if we built systems that made us better thinkers, not just more efficient workers? What if disagreement was a feature, not a bug?
Algolia Agent Studio made this possible by combining:
- fast semantic search over structured argument data
- GPT-4 synthesis for nuanced responses
- streaming delivery for conversational flow
The result is an AI that respects you enough to disagree.