Research Assistant

Moody’s

Moody’s is a global financial services company best known for its credit ratings, research, and risk analysis. It evaluates credit risk through ratings, provides analytics and tools for risk management, and offers research and data to support financial decision-making.

Product Goals

Primary Objective:
Build an AI-powered chatbot that simplifies access to Moody’s vast financial data while ensuring accuracy, compliance, and user trust.

Key Goals:

  • Empower users (analysts, clients, risk managers) to get fast, contextual answers.

  • Reduce cognitive load in navigating complex reports.

  • Ensure secure and transparent interactions.

Team

1 PM, 2 Engineers, 1 Data Scientist, Myself (Lead Designer)

Role

Product Designer

Tools

Figma, OpenAI API, FigJam

Timeline

12–20 Weeks

My Approach

Months ahead of kickoff, the team and I gathered information about clients’ wants, needs, and frustrations, as well as the requirements for the AI chatbot.

Some of the major use cases we identified (sketched as chatbot intents after the list) were:

  1. Document Q&A, e.g., “What’s Moody’s latest view on the US banking sector?”

  2. Report Summarization, e.g., summarizing financial reports from the current and past years.

  3. Rating Explanation, e.g., “Why did Company X get downgraded?”

  4. Data Retrieval, e.g., “Show sovereign default rates from 2010–2024.”

  5. Compliance Disclosure Assistance, e.g., “Does this document meet disclosure standards?”
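
To make these use cases easier for the engineers to scope, I found it useful to think of them as a small set of intents the chatbot would need to recognise. The TypeScript sketch below is purely illustrative: the intent names are my own, and the keyword check is a placeholder for the model-based classification a real system would use.

```typescript
// Hypothetical mapping of the five use cases to chatbot intents.
// Intent names and the keyword heuristic are illustrative assumptions.
type Intent =
  | "document_qa"
  | "report_summarization"
  | "rating_explanation"
  | "data_retrieval"
  | "compliance_check";

function classifyIntent(query: string): Intent {
  const text = query.toLowerCase();
  // In production an LLM or a trained classifier would perform this step.
  if (text.includes("summarize") || text.includes("summary")) return "report_summarization";
  if (text.includes("downgrade") || text.includes("rating")) return "rating_explanation";
  if (text.includes("rates") || text.includes("data")) return "data_retrieval";
  if (text.includes("disclosure") || text.includes("compliance")) return "compliance_check";
  return "document_qa";
}
```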

User Research

User Interviews

During the ideation phase of the project, I conducted user interviews to build new personas and to inform the design. Together with the team, I prepared an interview script of open-ended questions focused on our target audiences’ values, motivations, and daily routines.

I spoke with:

  • 3 Institutional Investors → Want fast summaries, risk explanations, exportable insights

  • 2 Risk Analysts (internal) → Need report cross-referencing and quick recall

  • 1 Customer Support Lead → Wants to reduce repetitive queries

Key Insights:

  • Trust in AI was a top concern: “I need to know where the answer came from.”

  • Users prefer structured summaries with the option to dive deeper.

  • Navigation of long reports was frustrating — users wanted “just the relevant parts.”

Key Design Elements

a. Chat UI Design

  • Input: Multi-line with voice support, drag-drop PDFs

  • Output: Rich answers (sketched in the code below) with:

    • Citations

    • Inline charts

    • Downloadable reports
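
To make the output requirements above concrete for engineering, I sketched each answer as a structured payload. The TypeScript below is a minimal sketch under my own assumptions; the field names are illustrative, not Moody’s actual schema.

```typescript
// Hypothetical shape of one chatbot answer; all field names are assumptions.
interface Citation {
  documentTitle: string;  // e.g., a sector report the claim is grounded in
  page?: number;
  url?: string;
}

interface ChartSpec {
  title: string;
  type: "line" | "bar";
  series: { label: string; values: number[] }[];
}

interface ChatAnswer {
  summary: string;        // short answer shown first
  details?: string;       // expanded, structured explanation
  citations: Citation[];  // every claim links back to a source
  charts?: ChartSpec[];   // inline visualizations
  downloadUrl?: string;   // exportable report, if one was generated
  disclaimer: string;     // legal disclaimer shown with every answer
}
```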

b. Information Hierarchy

  • Use progressive disclosure (illustrated after this list):

    • Show short summary first

    • “View full details” expands into a structured, tabbed layout
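
As a rough illustration of the progressive disclosure pattern, the React/TypeScript sketch below renders the short summary first and only expands the full details on demand. The component and prop names are assumptions made for illustration.

```tsx
// Illustrative progressive-disclosure component; not production code.
import { useState } from "react";

interface AnswerCardProps {
  summary: string;   // short summary, always visible
  details?: string;  // full, structured content behind "View full details"
}

function AnswerCard({ summary, details }: AnswerCardProps) {
  const [expanded, setExpanded] = useState(false);
  return (
    <div>
      <p>{summary}</p>
      {expanded ? (
        <section>{details}</section>
      ) : (
        <button onClick={() => setExpanded(true)}>View full details</button>
      )}
    </div>
  );
}
```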

c. Error & Safety UX

  • Hallucination guard: show fallback messages like “Let me double-check this with an analyst.”

  • “Not sure” state with clarifying questions

  • Legal disclaimer at bottom of every answer
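
To show how these safety behaviours might fit together, here is a minimal sketch. It assumes the retrieval/LLM layer can return a confidence score and a list of source documents; the threshold, messages, and function names are my own illustrative assumptions.

```typescript
// Hypothetical guardrail logic; threshold and messages are assumptions.
interface ModelResult {
  answer: string;
  confidence: number;  // 0..1, assumed to come from the retrieval/LLM layer
  sources: string[];   // documents the answer was grounded in
}

const DISCLAIMER =
  "This response is for informational purposes only and does not constitute financial advice.";

function renderResponse(result: ModelResult): string {
  // Hallucination guard: an ungrounded answer is never shown as fact.
  if (result.sources.length === 0) {
    return "Let me double-check this with an analyst before answering.";
  }
  // "Not sure" state: low confidence triggers a clarifying question instead.
  if (result.confidence < 0.6) {
    return "I’m not confident in this answer. Could you clarify which report or time period you mean?";
  }
  // Normal path: answer, sources, and the legal disclaimer on every response.
  return `${result.answer}\n\nSources: ${result.sources.join(", ")}\n\n${DISCLAIMER}`;
}
```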

Low Fidelity (Iterations)

Possible Solutions

After getting approval from the Product team, I worked with the developers to determine which chatbot components were feasible, and from there I designed high-fidelity wireframes for them to implement during the development cycle.

Some of the key features we proposed and the stakeholders approved were:

  • Chatbot Homepage Entry

  • Main Chat Interface

  • Document Viewer Modal

  • Follow-Up Query Flow

  • Export & Feedback

High Fidelity Wireframes

Key Learnings

1. Start with the Problem, Not the Technology

What I learned:
It’s easy to get excited by AI tools, but the real value comes from understanding the user’s pain point first. I started with interviews and observed how financial analysts and investors struggle with information overload.

🎯 Lesson: Ask, “What problem are we solving?” before asking “What can AI do here?”

2. 🔍 Users Need Trust, Not Just Speed

What I learned:
In finance, users don’t just want fast answers — they need reliable, explainable answers. If they don’t trust the source or logic behind the AI, they won’t use it.

🎯 Lesson: Always show the source, highlight confidence levels, and offer a human fallback option.

3. 🧠 Designing for AI Requires Guardrails

What I learned:
LLMs can be powerful but also unpredictable. I had to think about what the chatbot should not say, how to handle unclear questions, and how to avoid “hallucinations.”

🎯 Lesson: Design “fallback states” like:
“I’m not confident in this answer. Would you like me to escalate this to an analyst?”

4. 🧩 One Size Doesn’t Fit All — Design for Roles

What I learned:
An internal analyst needs different data and tools than a client or sales team. I had to design role-based views, filters, and actions for each audience.

🎯 Lesson: Personalization increases relevance — even in enterprise tools.
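
One way to express those role-based views is a simple capability map that decides which filters and actions each audience sees by default. The roles and flags below are illustrative assumptions, not the shipped configuration.

```typescript
// Hypothetical role-to-capability map; roles and flags are assumptions.
type Role = "internal_analyst" | "institutional_client" | "sales";

interface RoleConfig {
  defaultFilters: string[];      // filters pre-applied for this audience
  canExportReports: boolean;
  canEscalateToAnalyst: boolean;
}

const roleConfigs: Record<Role, RoleConfig> = {
  internal_analyst: {
    defaultFilters: ["methodology", "internal_notes"],
    canExportReports: true,
    canEscalateToAnalyst: false,
  },
  institutional_client: {
    defaultFilters: ["ratings", "sector_outlooks"],
    canExportReports: true,
    canEscalateToAnalyst: true,
  },
  sales: {
    defaultFilters: ["summaries"],
    canExportReports: false,
    canEscalateToAnalyst: true,
  },
};
```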

5. 🧪 Real Data Makes Better Designs

What I learned:
Using real Moody’s documents and sample reports in the design and prototype helped me create more meaningful chat flows, citations, and edge cases.

🎯 Lesson: Design with real content early — it sharpens both UX and edge-case planning.

6. 🔄 AI is Iterative — Feedback is Key

What I learned:
Unlike static products, an AI chatbot keeps evolving with more usage. I had to design clear feedback paths (👍/👎, comment boxes) and escalation routes to analysts.

🎯 Lesson: The chatbot isn’t done at launch — it’s a living product that learns over time.
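
A feedback event from the 👍/👎 controls and comment box can be captured with something as small as the sketch below; the payload shape and endpoint are assumptions made for illustration.

```typescript
// Hypothetical feedback payload and submit call; names and endpoint are assumptions.
interface AnswerFeedback {
  answerId: string;
  rating: "up" | "down";
  comment?: string;
  escalateToAnalyst?: boolean;  // routes the conversation to a human reviewer
}

async function submitFeedback(feedback: AnswerFeedback): Promise<void> {
  // Placeholder endpoint; in practice this feeds the review and retraining loop.
  await fetch("/api/chat/feedback", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(feedback),
  });
}
```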

7. 🧱 Modular Design is Future-Proof Design

What I learned:
Breaking the chatbot into clear components, like filters, report viewers, and feedback modules, made the design easier to scale and reuse across other Moody’s tools.

🎯 Lesson: Think in building blocks, not just screens.

My work with Moody’s is protected under an NDA. The work shown above is only a portion of my contribution.