I built a Stutter-Friendly App in 1 Day with Elm, Elixir, and Copilot

GitHub Copilot CLI Challenge Submission

This is a submission for the GitHub Copilot CLI Challenge

What I Built

I went a little crazy šŸ˜Ž.

I decided to build a stutter-accessibility app in just one day. Why one day, you ask? Well, I've been swamped for the last couple of weeks and finally got some breathing space this weekend. So I figured, why not take this window and make something meaningful AND a little wild at the same time?

This project is deeply personal. I've faced my own challenges with stuttering, spent heaps of time and money on speech therapy, and realised that not everyone can afford professional support. That got me thinking. As an engineer, how can I use my skills to help the community?

Enter PaceMate: a calm, guided speaking experience built for people who stutter.

The Full Feature Set

Core Experience

The app guides users through a 5-state speaking session:

  1. Idle - Calm welcome screen with session start button
  2. Breathing - Animated breathing prompt with visual guide
  3. Prompt - A short speaking prompt (no time pressure)
  4. Speaking - User speaks and clicks "Done"
  5. Feedback - AI-powered feedback + detailed speech metrics

Progress Dashboard: Track your journey with analytics—total sessions, words spoken, average WPM, and practice streaks. All data is stored locally in the browser for privacy.

Advanced Features

Real-Time Communication

  • WebSocket-powered via Phoenix Channels
  • Multiple concurrent sessions handled by isolated Erlang processes (a minimal sketch follows this list)
  • Graceful connection recovery
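
To make the "isolated process per session" point concrete, here is a minimal sketch of what such a channel can look like. The module name, topic, and ping handler are illustrative assumptions, not necessarily the project's actual code:

# Hypothetical sketch: each joining client gets its own channel process on the BEAM,
# so one crashing session cannot take down the others.
defmodule PaceMateWeb.SessionChannel do
  use Phoenix.Channel

  def join("session:" <> _session_id, _params, socket) do
    {:ok, socket}
  end

  # A lightweight ping lets the frontend detect a dropped connection and reconnect.
  def handle_in("ping", _payload, socket) do
    {:reply, {:ok, %{status: "alive"}}, socket}
  end
end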

AI-Powered Feedback Engine

  • Primary: Ollama + Phi3 (fully local, privacy-preserving)
  • Fallback: Rule-based tips when AI is unavailable
  • Analyses:
    • Words Per Minute (WPM) pacing
    • Sentence count and structure
    • Speech flow patterns
    • Estimated pause points for breath

Metrics Display
The app shows users (a rough sketch of the analysis behind these numbers follows the list):

  • Speaking duration
  • Recognised speech transcript
  • Estimated word count
  • Pacing analysis (fast/normal/slow)
  • Sentence statistics
  • Personalised pacing recommendations
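
The repo implements this in custom Elixir logic; as a rough idea of how numbers like these can be derived, here is a hedged sketch (module name, thresholds, and return shape are my assumptions, not the project's exact analyzer):

# Illustrative only - the real SpeechAnalyzer in the repo may differ.
defmodule PaceMate.SpeechAnalyzer do
  @doc "Derives word count, WPM, sentence count, and a pacing label from a transcript and duration in seconds."
  def analyze(transcript, duration_seconds) when duration_seconds > 0 do
    words = String.split(transcript, ~r/\s+/, trim: true)
    word_count = length(words)
    wpm = round(word_count / (duration_seconds / 60))

    %{
      word_count: word_count,
      wpm: wpm,
      pacing: pacing(wpm),
      sentence_count: transcript |> String.split(~r/[.!?]+/, trim: true) |> length()
    }
  end

  # Rough thresholds; a comfortable conversational pace is often cited around 120-150 WPM.
  defp pacing(wpm) when wpm > 160, do: :fast
  defp pacing(wpm) when wpm < 100, do: :slow
  defp pacing(_wpm), do: :normal
end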

Professional UI/UX

  • Dark mode support for comfortable long sessions
  • Fully responsive (mobile, tablet, desktop)

Developer Experience

  • Complete architecture documentation
  • Feature breakdown docs
  • Quick-start guide for local setup
  • Docker Compose for single-command startup

Why Elm & Elixir?

Because sometimes, the forgotten things are the coolest.

Elm: a functional frontend language with predictable state and zero runtime errors. Perfect for accessibility-focused UI: no crashes, no surprises, just smooth flows that users can rely on.

  • Type Safety: Impossible to send malformed messages to the backend
  • Predictable State Machine: 5 states, clear transitions, no hidden edge cases
  • Pure Functions: UI logic is testable and reproducible

Elixir: a functional, concurrent backend language running on the Erlang VM. Real-time sessions? Multiple users talking at once? Crashes don't break the room? Check, check, check.

  • Lightweight Concurrency: Each session is an isolated process (fault-tolerant)
  • Hot Code Reloading: Update logic without restarting users
  • Pattern Matching: Elegantly handle different message types from clients

Most people skip these languages because they "aren't popular." But for this app? They're perfect. Elm gives a safe, predictable UI for someone practising speech. Elixir lets me manage real-time sessions and AI feedback without risking a messy backend. Sometimes, you have to pick the right tool for the mission, not the trend.

How AI Fits In

AI provides gentle feedback after each speaking session using a local Ollama + Phi3 pipeline:

Why Local LLM?

  • Privacy: Speech data never leaves the user's device/server
  • No API costs: Ollama runs locally
  • Accessibility: Doesn't require external API keys

Feedback Examples

  • "Nice pacing. Keep it gentle."
  • "Try a soft start next time."
  • "Good breath before speaking."
  • "You're doing great. Take your time."
  • "Slow down slightly to improve clarity."
  • "Great control over pace. Well done!"

Fallback System
If Ollama isn't running, the app gracefully falls back to rule-based feedback (no broken experiences).
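
Under the hood, this is just an HTTP call to Ollama's local API with a fallback branch around it. A minimal sketch of the pattern, assuming a module and prompt like the ones below (not the project's exact code):

# Sketch of the primary-with-fallback idea. Ollama's local API listens on
# port 11434 by default; the module name and prompt wording are assumptions.
defmodule PaceMate.Feedback do
  @ollama_url "http://localhost:11434/api/generate"

  def generate(metrics) do
    prompt = "Give one short, gentle pacing tip for a speaker at #{metrics.wpm} words per minute."
    body = Jason.encode!(%{model: "phi3", prompt: prompt, stream: false})

    case HTTPoison.post(@ollama_url, body, [{"content-type", "application/json"}], recv_timeout: 15_000) do
      {:ok, %HTTPoison.Response{status_code: 200, body: resp}} ->
        resp |> Jason.decode!() |> Map.fetch!("response") |> String.trim()

      _error ->
        # Rule-based fallback keeps the experience intact when Ollama is down.
        rule_based_tip(metrics)
    end
  end

  defp rule_based_tip(%{pacing: :fast}), do: "Slow down slightly to improve clarity."
  defp rule_based_tip(%{pacing: :slow}), do: "You're doing great. Take your time."
  defp rule_based_tip(_metrics), do: "Nice pacing. Keep it gentle."
end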

The Elm frontend displays feedback in a calm, distraction-free card layout, keeping the focus on the user's practice, not the tech.

Tech Stack & Infrastructure

Frontend

  • Elm 0.19.1 (typed, functional, zero runtime errors)
  • WebSockets (Phoenix Channels protocol)
  • CSS Grid + Flexbox (fully responsive)
  • Font Awesome 6 (clean, accessible icons)

Backend

  • Elixir 1.19.5 on Erlang/OTP 28
  • Phoenix 1.8.3 (web framework)
  • Phoenix Channels (WebSocket handler)
  • HTTPoison 2.3.0 (Ollama API client)
  • Speech metrics analyzer (custom Elixir logic)

Analytics: dual-mode, combining browser localStorage (privacy-first, offline) with an optional SQLite backend (charting, streaks, history)

AI

  • Ollama (local LLM runner)
  • Phi3 (language model - lightweight, fast, efficient)
  • Speech metrics parser (custom Elixir logic)

DevOps & Deployment

  • Docker (multi-stage builds for efficiency)
  • Docker Compose (local development orchestration)
  • Alpine Linux (lightweight runtime images)
  • Health checks (service dependency management)
  • Fly.io (backend hosting with auto-scaling)
  • Netlify (frontend CDN with automatic HTTPS)
  • GitHub Actions (CI/CD pipeline for auto-deployment)

Testing

  • ExUnit (backend tests)
  • Elm test (frontend tests)
  • 33+ tests covering state transitions, JSON decoding, and AI logic (one such test is sketched below)
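
For a flavour of what one of those tests can look like, here is a hedged ExUnit sketch written against the hypothetical analyzer sketched earlier (not necessarily a test from the repo):

defmodule PaceMate.SpeechAnalyzerTest do
  use ExUnit.Case, async: true

  alias PaceMate.SpeechAnalyzer

  test "classifies a calm pace as normal" do
    transcript = "Today I practised speaking slowly and calmly about my morning."

    # Ten words over five seconds works out to 120 WPM, which should land in the normal band.
    result = SpeechAnalyzer.analyze(transcript, 5)

    assert result.word_count == 10
    assert result.pacing == :normal
  end
end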

Demo

Demo Video

For the best experience, and for maximum security and privacy, run the app on your local device.

Demo URL - Note: The AI feedback is powered by a backend deployed on Fly.io, which is running on a 7-day free tier for this demo. After the free tier expires, the app will automatically fall back to rule-based feedback instead of AI analysis.

Live Repository: GitHub - ujjavala/GitHub-Copilot-CLI-Challenge-PaceMate

My Experience with GitHub Copilot CLI

Copilot CLI was a game-changer. Here's what made the difference:

āœ… Scaffolded Elm state machines in seconds instead of hours
āœ… Suggested Phoenix Channels boilerplate that just worked
āœ… Assisted in integrating AI feedback logic without getting lost in syntax
āœ… Enabled fast iteration in the terminal—critical for a one-day sprint
āœ… Helped debug Docker errors with specific Alpine package names
āœ… Generated comprehensive test cases covering edge cases I might have missed
āœ… Helped configure Fly.io deployment with optimal Phoenix settings
āœ… Suggested Netlify build commands for Elm compilation
āœ… Accelerated documentation writing - helped structure markdown files, suggested clear formatting, and ensured consistency across multiple docs (ARCHITECTURE.md, DEPLOYMENT.md, FEATURES.md, etc.)
āœ… Git workflow optimisation - suggested commit message conventions, helped structure the repository with proper .gitignore patterns, and guided branching strategies
āœ… CSS and styling suggestions - provided responsive design patterns, suggested Font Awesome icon selections, and helped implement the calm, accessibility-focused colour scheme
āœ… JSON encoding/decoding in Elm - helped navigate Elm's JSON decoder patterns, especially for WebSocket message handling and complex nested data structures
āœ… Phoenix routing patterns - suggested clean REST-style routes and WebSocket channel patterns that aligned with Elm's expectations
āœ… Error handling strategies - recommended graceful fallback patterns for AI service unavailability and WebSocket disconnection scenarios
āœ… Docker multi-stage builds - optimised Dockerfile structure to minimise image size while keeping build times reasonable
āœ… GitHub Actions workflow - suggested CI/CD pipeline structure, health check patterns, and secret management best practices
āœ… Shell scripting helpers - generated the deployment script (deploy.sh) with proper error handling and user-friendly output
āœ… Environment-specific configuration - helped set up development vs. production WebSocket URLs, API endpoints, and feature flags

With Copilot, I could focus on making the app feel calm, human, and supportive, not fighting boilerplate or syntax. Even deployment configuration became straightforward, with Copilot suggesting best practices for Fly.io and Netlify. The CLI's ability to understand context across the entire project, from frontend Elm code to backend Elixir logic to DevOps scripts, meant I could stay in flow state and ship meaningful features instead of context-switching between documentation sites.

Real Examples Where Copilot Saved Hours

1. Elm State Machine Pattern
When I asked Copilot to help scaffold the session state machine, it generated:

type SessionState
    = Idle
    | Breathing
    | ShowingPrompt
    | Speaking
    | ShowingFeedback FeedbackData

type Msg
    = StartSession
    | FinishBreathing
    | StartSpeaking
    | StopSpeaking
    | ReceiveFeedback String

This immediately gave me the exact architecture I needed—no trial and error.

2. Phoenix Channel JSON Encoding
Copilot suggested the idiomatic Elixir pattern for encoding speech metrics:

def handle_in("speech_complete", %{"transcript" => transcript, "duration" => duration}, socket) do
  feedback = SpeechAnalyzer.analyze(transcript, duration)

  push(socket, "feedback", %{
    message: feedback.message,
    metrics: %{
      wpm: feedback.wpm,
      word_count: feedback.word_count,
      duration_seconds: duration
    }
  })

  {:noreply, socket}
end

Without Copilot, I would've spent time reading Phoenix docs for the exact pattern-matching syntax.

3. Docker Health Check
When setting up the Dockerfile, Copilot generated this production-ready health check:

HEALTHCHECK --interval=30s --timeout=3s --start-period=5s --retries=3 \
  CMD wget --no-verbose --tries=1 --spider http://localhost:4000/api/health || exit 1

It even knew to use wget instead of curl for Alpine Linux!
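
For context, the /api/health endpoint that this check probes only needs to answer with a 200. A minimal Phoenix sketch of what such an endpoint can look like (route and controller names are illustrative, not necessarily what the repo uses):

# In the router:  get "/api/health", HealthController, :show
defmodule PaceMateWeb.HealthController do
  use PaceMateWeb, :controller

  # Returning any 200 JSON body is enough for the Docker HEALTHCHECK above.
  def show(conn, _params) do
    json(conn, %{status: "ok"})
  end
end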

4. Elm JSON Decoder for WebSocket Messages
One of Elm's trickier parts is JSON decoding. Copilot generated this decoder after I described the message structure:

feedbackDecoder : Decoder FeedbackData
feedbackDecoder =
    Decode.map3 FeedbackData
        (Decode.field "message" Decode.string)
        (Decode.field "wpm" Decode.int)
        (Decode.at ["metrics", "word_count"] Decode.int)

This saved me from debugging nested field access and type mismatches.

5. GitHub Actions Deployment Workflow
Copilot scaffolded the entire CI/CD pipeline with proper secrets handling:

- name: Deploy to Fly.io
  env:
    FLY_API_TOKEN: ${{ secrets.FLY_API_TOKEN }}
  run: |
    flyctl deploy --remote-only --ha=false
    flyctl status --json | jq '.status'

It even included the health check verification step I hadn't thought of!

6. Responsive CSS Grid
When building the metrics display, Copilot suggested this clean grid pattern:

.metrics-grid {
    display: grid;
    grid-template-columns: repeat(auto-fit, minmax(200px, 1fr));
    gap: 1.5rem;
    padding: 1rem;
}

@media (max-width: 768px) {
    .metrics-grid {
        grid-template-columns: 1fr;
    }
}

Perfect mobile-first responsive design without me having to look up the auto-fit vs auto-fill debate.

These weren't just syntax suggestions. Copilot understood the context of each language and framework, saving me from constant context-switching between documentation sites. That's what made the one-day sprint possible.

A Note on Experience: Not an Expert, Just Determined

Here is the truth. I am not an Elm expert, and I am not an Elixir wizard.

I had not touched Elixir in almost eight years, and Elm was completely new territory. Normally, that would mean syntax anxiety, type system confusion, and hours buried in documentation.

This time, there were no jitters. Just flow.

GitHub Copilot CLI felt like a pair programmer who knew both languages deeply. When I forgot Elixir pattern-matching syntax, it filled in the structure. When Elm decoders felt unfamiliar, it guided me toward the idiomatic approach. When I could not remember whether Phoenix used push or broadcast, it simply suggested the right path.

The shift was bigger than convenience.

I felt more like an architect than a coder.

My focus stayed on the why and the how. Why does this state transition matter for someone who stutters? How should feedback feel supportive instead of judgmental? How do we handle a WebSocket disconnection gracefully?

Copilot handled the what. The exact Elm decoder syntax. The clean Phoenix channel pattern. The right Docker configuration for a proper health check.

Instead of bouncing between docs, forum threads, and compiler errors, I stayed in product thinking mode. The cognitive load dropped. I was designing and directing while Copilot translated intent into implementation.

Eight years ago, this would have been a day of reading and debugging. Instead, it was a day of building and refining something meaningful.

The takeaway? Do not let "I am not an expert in X" stop you. With the right tools, curiosity becomes capability, and capability becomes real software that helps people.

AI does not replace expertise. It amplifies intent.

Challenges & Interesting Discoveries

Technical Challenges (Solved by Copilot, with love, of course)

  1. HTTPoison Dependency Lock - Had to run mix deps.get to generate mix.lock entries
  2. Elm Operator Precedence - Pipe operator has lower precedence than <> (string concat)
  3. Elm Type Name Clash - Union constructors and type aliases share namespace
  4. Port Module Declaration - Elm ports require port module declaration, not module
  5. Elm npm Package in Docker - Linux binary download fails; solved by pre-compiling locally
  6. Erlang ncurses Runtime - Alpine needed the ncurses-libs package installed explicitly for the Erlang runtime

Observations About Copilot CLI

While GitHub Copilot CLI is amazing, there are a few things I noticed while building this POC:

āš ļø Concurrent Prompts Issue: If you try to enter a new prompt while a previous one is still running, the screen flickers and sometimes hangs. Worth keeping in mind for future development.

āš ļø Minor Latency: Occasional latency when switching between multiple tasks in the terminal.

Interesting Note: I noticed similar issues with Claude Code CLI during its early phase—screen flickering, hangs when entering a new prompt while a previous one was running, etc. This seems to be a common quirk in terminal-based AI coding tools that stream outputs in real-time.

These aren't show-stoppers, but something to be aware of for future improvements and real-world workflow.

What Makes This Special

For Users with Stuttering

  • No judgment: Just you and your words
  • No timer: Speak at your own pace
  • Gentle feedback: Encouragement, not criticism
  • Privacy: AI runs locally on your device/server

For Engineers

  • Language diversity: Elm + Elixir show that "boring" languages solve real problems
  • Real-time architecture: See how WebSockets and functional patterns work together
  • AI integration: Local LLM pipeline without cloud costs or privacy concerns
  • Type safety: Elm's type system prevents entire classes of bugs
  • Testability: Functional architecture makes testing straightforward

For the Community

  • Open source: Build on it, fork it, make it yours
  • Documentation: Learn from comprehensive guides and code
  • One-day POC: Proof that meaningful software can be built fast with the right tools

Final Thoughts

This POC is small, a little crazy, and deeply personal.

I've faced stuttering struggles myself, and I know how hard it can be when therapy is expensive or inaccessible.
I wanted to use my engineering skills to help the community, even in a tiny way.
Elm and Elixir were perfect tools to make something stable, real-time, and calm.
GitHub Copilot CLI made it possible to build, test, document, and deploy in one day.

Sometimes, the forgotten languages are magical. Sometimes, AI is your sidekick. Sometimes, local-first architecture beats cloud everything. And sometimes, a little crazy idea can turn into a meaningful tool for people who really need it.

PaceMate is proof that accessibility-focused software doesn't need to be flashy—it needs to be thoughtful.


GitHub Repository

šŸ‘‰ ujjavala/GitHub-Copilot-CLI-Challenge-PaceMate


Built with ā¤ļø for people who stutter. Built fast with Elm, Elixir, and GitHub Copilot CLI.

Top comments (14)

Giorgi Kobaidze

Love this. I'm sure this app will help a lot of people feel more confident and better about themselves. Great job!

ujja

Thank you so much, that really means a lot. I’ve been there first-hand, and I know how challenging it can be. I’ve been wanting to build something like this for a long time to give back to the community, and this dev challenge finally gave me the push to just go for it. Really appreciate the encouragement!

Giorgi Kobaidze

šŸ‘šŸ‘šŸ‘

Alia

This is honestly one of the most thoughtful hackathon submissions I’ve read in a while.

You didn’t just build a demo to show off tech. You built something that comes from lived experience. That shows.

A few things that really stood out to me:

  • Choosing Elm and Elixir for stability and predictable state instead of chasing trends. That’s a strong architectural decision.
  • Designing around calmness and psychological safety, not just features.
  • Running AI locally with Ollama + Phi3. That privacy-first approach makes a lot of sense for speech data.
  • Building real-time feedback with Phoenix Channels in a one-day sprint. That’s bold.
  • The fallback system. No broken experience if AI fails. That’s product thinking.

What I like most is how clear your intent is. You weren’t trying to prove you’re an expert in Elm or Elixir. You were trying to build something reliable for someone who might already feel vulnerable while speaking. That mindset matters more than flashy features.

Also, the Copilot examples were practical. You didn’t just say ā€œit helped.ā€ You showed exactly how it accelerated state machines, decoders, Docker health checks, and CI workflows. That makes the story credible.

If this is a one-day POC, I’d be curious to see where it goes next. Maybe:

  • Visual breathing pace sync with detected WPM
  • Session comparison over time
  • Optional exportable progress reports
  • Community prompt packs

But even as it stands, PaceMate feels intentional and real. Respect for turning something personal into something useful. That’s what good engineering looks like.
ujja

Wow. This genuinely means a lot. Thank you for taking the time to read it so closely.
You are right. I was not trying to build a flashy demo. I wanted to build something that feels safe for someone who might already feel vulnerable while speaking. That intention guided every decision more than the tech did.
Choosing Elm and Elixir was very deliberate. I wanted predictable state and stability. Calm systems help create calm experiences. The same goes for running AI locally. Speech data is deeply personal, so privacy could not be an afterthought.
The fallback system was important to me too. If the AI fails, the user should not feel like they failed. That was a product decision first and a technical one second.
I am really glad the Copilot examples felt practical. I wanted to show how it actually helped in real scenarios, not just say that it was useful.
And I love the ideas you suggested. Breathing pace sync and session comparison especially feel aligned with the core mission. Lots to think about there.
Thank you again. Thoughtful feedback like this makes the whole effort feel worthwhile.

Javad

I think that your idea is so great for everyone, thanks for sharing!

ujja

Thanks Javad. Glad you liked the idea šŸ’›

egeindie

Really cool project and I love the tech choices. Elm + Elixir is such an underrated combo - you get compile-time guarantees on the frontend AND the fault tolerance of the BEAM VM on the backend. Perfect for something where reliability actually matters to real people.

The local-first AI approach with Ollama + Phi3 is smart too. For a speech therapy app, privacy isn't just a feature - it's a requirement. Nobody wants their speech practice data hitting some random API endpoint.

One thing that stood out: building the fallback system so the app works even without the AI running. That's the kind of resilience thinking that separates a real product from a demo. Too many AI-powered apps just break when the model is unavailable.

As someone who ships SaaS products with Go + React, I appreciate seeing someone pick the right tools for the job instead of defaulting to the most popular ones. Elm's type system preventing runtime errors is exactly what you want for accessibility-focused software. Awesome work shipping this in a day šŸ”„

ujja

This is such a thoughtful comment, thank you šŸ™Œ
You really understood what I was trying to do with the stack. Reliability and privacy were not just technical decisions, they were core requirements for something this personal.
The fallback logic was intentional from day one. I did not want AI to be a single point of failure. The app should still feel supportive and complete even without the model running.
Also appreciate the note about picking the right tools. Trends are fun, but fit matters more. Glad it resonated with someone who ships real products too šŸ”„

ujja

Just wanted to add this too.
I spent a few hours yesterday revisiting Elixir, Phoenix, and Elm while building this, and I already feel like I’ve barely scratched the surface.
Elixir feels more capable than I remembered. Phoenix is smoother and more polished. Elm is still resilient, predictable, and just works in a way that’s hard to find.
What amazes me is how quietly this stack has grown. It doesn’t shout for attention. It just keeps getting better.
If this little project does anything, I hope it’s just giving a small nod of appreciation and maybe helping these stacks get a bit more notice.

Benjamin Nguyen

Really cool! I like your avatar. It makes your application user-friendly.

ujja

Glad you found the app user friendly šŸ’›
It was honestly built from personal experience and a lot of love. That made every little detail matter.

Benjamin Nguyen

nice!

ujja

Thank you so much 😊