DEV Community

Dumebi Okolo

Using Perplexity AI and Gemini 3 (Pro) for Academic Research and Writing

I’m currently in the trenches of my Master’s thesis, focusing on 5G Anomaly Detection using TensorFlow Lite at the Edge.

I wrote a paper on EDGE-DEPLOYABLE TENSORFLOW LITE AUTOENCODER FOR REAL-TIME 5G ANOMALY DETECTION AND COST-AWARE OPTIMIZATION that you can check out.


This blog post is part of my short-form content series, where I write straight-to-the-point posts of under 1,000 words.


Before building my AI research workflow, I used to spend hours just "pre-reading," trying to build the literature review section of my thesis.

Not anymore!
I built my own "Research Stack" from existing AI tools that do all the heavy lifting for me in a matter of minutes.

I don’t use just one tool. I use an AI aggregator and an AI-native Pro model together.


Perplexity is the AI Aggregator

Many people (I was one of them before making this discovery) think of Perplexity as just a model; it’s actually more of a "librarian."
It doesn't just rely on its own model; it uses some of the best in the industry—Claude 4, GPT-5, and Gemini 3—to scour the web and find citations.

Sonar is Perplexity's own model.

I've come to learn that Perplexity is the "king" of finding where the information is.

However, when it comes to understanding and making sense of the 20 or so PDFs I just found, the "Aggregator" model hits a wall.


The Native Pro Advantage (Gemini Advanced)

Because I have a Gemini Pro subscription, I have access to something Perplexity’s implementation can’t match: Gemini's 2-million-token context window.

While Perplexity gives me snippets and links, I can feed the entire PDFs and papers it surfaces into Gemini Pro.
This way, Gemini doesn't just look up the research papers; it "lives" in them.
That is, it remembers a conflict in the data on page 4 and compares it to a conclusion on page 48.
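To give a sense of scale, here is a back-of-envelope estimate of how much fits in a 2-million-token window. The tokens-per-word and words-per-page figures below are my own rough assumptions, not measured values from the post:

```python
# Rough estimate: how many pages of research papers fit in a 2M-token window?
# Assumptions (ballpark only): ~1.3 tokens per English word, ~500 words per
# page of a typical research paper.
TOKENS_PER_WORD = 1.3
WORDS_PER_PAGE = 500


def pages_that_fit(context_tokens: int) -> int:
    """Estimate how many PDF pages fit in a given context window."""
    return int(context_tokens // (TOKENS_PER_WORD * WORDS_PER_PAGE))


print(pages_that_fit(2_000_000))  # -> 3076 pages under these assumptions
```

Even with generous margins of error, that is thousands of pages, which is why a whole batch of 20 papers can sit in context at once.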

My Research Workflow

Here is exactly how I use Perplexity AI and Google Gemini to speed up my thesis research:

Phase 1: Using Perplexity to find research papers and material

I ask Perplexity to find the most recent 2026 papers on Federated Learning in 5G. It gives me URLs and citations.
Here's an example of my prompt:

Find the top 5 most cited research papers from late 2025 and 2026 regarding 'Anomaly Detection in 5G Core Networks using Federated Learning.' Provide the direct URLs and a 2-sentence summary of their core methodology

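If you want to script this step, Perplexity exposes an OpenAI-compatible chat-completions API. A minimal sketch, assuming the `https://api.perplexity.ai` endpoint, the `sonar` model name, and a `PERPLEXITY_API_KEY` environment variable (none of these come from the post itself):

```python
# Sketch: automating the Phase-1 paper search via Perplexity's API.
# Assumptions: OpenAI-compatible /chat/completions endpoint, "sonar" model,
# and an API key in the PERPLEXITY_API_KEY environment variable.
import json
import os
import urllib.request


def build_search_prompt(topic: str, n: int = 5) -> str:
    """Reproduce the post's search prompt for an arbitrary topic."""
    return (
        f"Find the top {n} most cited research papers regarding '{topic}'. "
        "Provide the direct URLs and a 2-sentence summary of their core "
        "methodology."
    )


def search_papers(topic: str) -> str:
    payload = json.dumps({
        "model": "sonar",
        "messages": [{"role": "user", "content": build_search_prompt(topic)}],
    }).encode()
    req = urllib.request.Request(
        "https://api.perplexity.ai/chat/completions",
        data=payload,
        headers={
            "Authorization": f"Bearer {os.environ['PERPLEXITY_API_KEY']}",
            "Content-Type": "application/json",
        },
    )
    with urllib.request.urlopen(req, timeout=60) as resp:
        body = json.load(resp)
    # The answer text lives in the first choice's message content.
    return body["choices"][0]["message"]["content"]
```

This keeps the prompt in one place, so you can rerun the same search each week as new papers appear.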

Phase 2: Using Gemini Pro to go through research materials

I download those papers, upload them to Gemini, and use it for things like comparing, reasoning, and critiquing.
Here's an example prompt I've used:

I have these 5 research papers [Paste links/sources]. Using your 2M token context, analyze how these papers address the 'latency vs. accuracy' trade-off in Edge computing. Then, draft a 1,000-word skeleton for my literature review that explains why AI automation is the solution to 5G network failures.
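This phase can also be scripted against the Gemini REST API, which accepts PDFs as inline base64 parts in a `generateContent` request. A sketch, assuming the `v1beta` endpoint, the `gemini-1.5-pro` model ID (the long-context model), and a `GEMINI_API_KEY` environment variable; the prompt-builder below is my paraphrase of the post's Phase-2 prompt, not an official API helper:

```python
# Sketch: Phase 2 as a script -- send a batch of PDFs to Gemini for analysis.
# Assumptions: v1beta generateContent endpoint, "gemini-1.5-pro" model id,
# inline base64 PDF upload, and a GEMINI_API_KEY environment variable.
import base64
import json
import os
import urllib.request


def build_review_prompt(n_papers: int) -> str:
    """Paraphrase of the post's Phase-2 prompt for a batch of papers."""
    return (
        f"I have attached {n_papers} research papers. Analyze how they "
        "address the 'latency vs. accuracy' trade-off in Edge computing, "
        "then draft a 1,000-word skeleton for my literature review."
    )


def analyze_papers(pdf_paths: list) -> str:
    # First part is the instruction; each PDF becomes an inline_data part.
    parts = [{"text": build_review_prompt(len(pdf_paths))}]
    for path in pdf_paths:
        with open(path, "rb") as f:
            parts.append({
                "inline_data": {
                    "mime_type": "application/pdf",
                    "data": base64.b64encode(f.read()).decode(),
                }
            })
    url = (
        "https://generativelanguage.googleapis.com/v1beta/models/"
        f"gemini-1.5-pro:generateContent?key={os.environ['GEMINI_API_KEY']}"
    )
    req = urllib.request.Request(
        url,
        data=json.dumps({"contents": [{"parts": parts}]}).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req, timeout=120) as resp:
        body = json.load(resp)
    return body["candidates"][0]["content"]["parts"][0]["text"]
```

Inline upload is simplest for a handful of papers; for very large batches you would use the separate Files upload flow instead.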

Phase 3: Direct editing in the Google Docs Workspace

Since Gemini is integrated with my Google Workspace, I edit the literature review draft directly into a Google Doc.


📊 Comparison: Perplexity AI vs Google Gemini for Research

| Feature | Perplexity (The Librarian) | Gemini Pro (The Architect) |
| --- | --- | --- |
| Primary strength | Real-time search & citations | Massive context & reasoning |
| Model source | Aggregator (Claude, GPT, Gemini) | Native (Google's best) |
| Context window | Small (snippet-based) | 2M+ tokens (entire libraries) |
| Best for... | Finding "the what" & URLs | Analyzing "the how" & drafting |
| Integration | Web-only | Google Workspace (Docs/Gmail) |

What I have learned from my AI use is that looking for the one tool that does everything leads to failure or inaccuracies.
I prefer a "separation of concerns" type of workflow, which leads to better accuracy.
This only works, though, when you know how to build the right stack for your workflow and how to get around it.

Are you still using a single LLM for your research, or have you started "stacking" your tools? Let's discuss in the comments!

You can find me on LinkedIn!

Top comments (3)

lizmari simon

That's very interesting.
Why Gemini and not ChatGPT?

DInc!

I think it's probably about capacity. The author mentioned the 20 million token idea. I think that's why she preferred Gemini.

Precious

Great work on the short-form content thing though!
Glad I got to read through this without having to skim or outright skip unnecessary information.