Hub Assistant

A generative AI tool that leverages retrieval-augmented generation (RAG) to redefine how PBS Station Support staff and Member Stations access internal knowledge base resources.

Role

Product Designer and Design Engineer

Team

Sophie Marshall (Data Engineer), Useff Chase (Principal Engineer), and Amy Rubino (UX Director)

Tools

Figma, React, HTML, CSS, Framer Motion, GitHub, and VS Code

Impact*

Faster, More Accurate Access to Information on the Internal Knowledge Base

75%

fewer support inquiries related to sources and information available on the Hub

80%

conversion rate based on query submissions

With this beta release, the team's innovative approach and the tool's underlying technology could be scaled to assist PBS GA viewers in discovering content on the flagship pbs.org or to support educators in finding learning resources on PBS LearningMedia.

*The Hub Assistant is currently in a closed beta release.

Context

The Hub, an internal Knowledge Base, relies on an extensive navigation structure and lexical search based on keyword matching for information and resource discovery.

As the central repository, the Hub is constantly updated with new content from numerous HQ departments, a weekly email newsletter, and regularly scheduled webinars.

Search results for "Station Financial Dashboards," showing three out of five results as irrelevant.

330

Public Media Member Stations

4

Station Support staff

The Station Support team assists the entire Member Station network with all inquiries, including locating resources and answering questions about updated information posted within the Hub.

The Challenge

How can Member Stations be empowered to quickly find the information they need, while freeing the limited Station Support staff to focus on higher-priority inquiries?

30% of Station Support inquiries stem from poor content discovery

Design Hand-Off

I was handed the project after the previous designer couldn't fully address the request or support the engineering team in the development efforts.

The banner appeared oversized on smaller screens, lacked clarity about its interactivity, and was missing a status indicator to confirm when the LLM endpoint is available.
Placing helpful searches below the search field and inconsistently positioning the input field across screens confused users about their purpose and functionality.

I saw an opportunity to improve the user experience and visual design.

The UI lacked clear assurance that the user’s query was being processed during the time-intensive generation of LLM responses and retrieval of related sources.
Heavy visual emphasis was placed on the sources without giving comparable weight to the AI response.

I also wanted to differentiate the product from the field of blue on the Hub.

I explored the PBS Design System and uncovered a rarely used yet powerful purple accent: analogous to the blues, but distinct enough to stand apart while maintaining accessibility.

Beta-Released Designs

I refined the previous design to better showcase the product and its generative AI capabilities, incorporating subtle details that elevate the overall quality.

Built in React with Motion for React animations and transitions, the beta-released product balances performance with polish.

Subtle animations gently guide the user’s attention to the Assistant

Animated background stars draw attention toward the Hub Assistant widget, encouraging interaction as an engaging alternative to the traditional search that sits in a prominent location at the top of the window.
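
As a flavor of how such a star field could be put together, here is a minimal sketch using Framer Motion's keyframe animations; the positions, sizing, color value, and component name are illustrative placeholders rather than the production implementation.

```tsx
import { motion } from "framer-motion";

// Illustrative star field: a handful of dots that gently pulse behind the
// Assistant widget, nudging attention toward it without demanding a click.
const STARS = [
  { top: "12%", left: "18%", size: 6, delay: 0 },
  { top: "30%", left: "72%", size: 4, delay: 0.8 },
  { top: "64%", left: "40%", size: 5, delay: 1.6 },
];

export function AssistantStars() {
  return (
    <div aria-hidden style={{ position: "absolute", inset: 0, overflow: "hidden" }}>
      {STARS.map((star, i) => (
        <motion.span
          key={i}
          style={{
            position: "absolute",
            top: star.top,
            left: star.left,
            width: star.size,
            height: star.size,
            borderRadius: "50%",
            background: "#7b61c4", // stand-in for the purple accent
          }}
          // Loop a soft twinkle; staggered delays keep the stars from pulsing in unison.
          animate={{ opacity: [0.2, 1, 0.2], scale: [0.8, 1.1, 0.8] }}
          transition={{ duration: 3, repeat: Infinity, delay: star.delay }}
        />
      ))}
    </div>
  );
}
```

The staggered delays keep the motion reading as ambient background interest rather than as an alert demanding attention.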

Trending queries across the network ensure timely and quick engagement

Enhanced Search with Predictive Queries

As users type, the Hub Assistant predicts and suggests related queries using the underlying vector embedding database, speeding up search and resource discovery.
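
Conceptually, the suggestion step amounts to ranking previously embedded queries by their similarity to the (already embedded) partial input. The sketch below illustrates that idea in plain TypeScript; the types and function names are hypothetical, and the real lookup runs against the team's vector database rather than an in-memory array.

```ts
// Hypothetical shape of a stored query with its precomputed embedding.
type EmbeddedQuery = { text: string; vector: number[] };

// Cosine similarity between two vectors of equal length.
function cosine(a: number[], b: number[]): number {
  let dot = 0, normA = 0, normB = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    normA += a[i] * a[i];
    normB += b[i] * b[i];
  }
  return dot / (Math.sqrt(normA) * Math.sqrt(normB) || 1);
}

// Rank stored queries by similarity to the embedded partial input
// and return the top few as suggestions.
export function suggestQueries(
  partialVector: number[],
  corpus: EmbeddedQuery[],
  limit = 5
): string[] {
  return corpus
    .map((q) => ({ text: q.text, score: cosine(partialVector, q.vector) }))
    .sort((a, b) => b.score - a.score)
    .slice(0, limit)
    .map((q) => q.text);
}
```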

From Curiosity to Query

A Loading Animation Reassures Users and Sets Clear Expectations

Generating results may take up to 30 seconds, so the loading animation provides visual feedback, keeping users engaged and reinforcing a sense of responsiveness.
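
Under the hood, the loading state simply spans the full request lifecycle. A minimal sketch of that lifecycle as a React hook follows, assuming a hypothetical /api/assistant/ask endpoint and response shape.

```tsx
import { useState } from "react";

type AskState =
  | { status: "idle" }
  | { status: "loading" }
  | { status: "done"; answer: string }
  | { status: "error"; message: string };

// Track the request lifecycle so the UI can show the loading animation
// for the full (potentially ~30 second) generation window.
export function useAskAssistant(endpoint = "/api/assistant/ask") {
  const [state, setState] = useState<AskState>({ status: "idle" });

  async function ask(query: string) {
    setState({ status: "loading" });
    try {
      const res = await fetch(endpoint, {
        method: "POST",
        headers: { "Content-Type": "application/json" },
        body: JSON.stringify({ query }),
      });
      if (!res.ok) throw new Error(`Request failed: ${res.status}`);
      const data = await res.json();
      setState({ status: "done", answer: data.answer });
    } catch (err) {
      setState({ status: "error", message: (err as Error).message });
    }
  }

  return { state, ask };
}
```

The UI renders the loading animation whenever the state is "loading" and swaps in the answer or an error message once the request settles.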

From Ask to Answer

Multiple Pathways to Access the Hub’s Resources with Intelligent AI Search

By moving beyond simple keyword matching, the Assistant surfaces contextually accurate results from the Hub’s information. It also encourages deeper exploration by presenting related questions and concise AI-generated responses with cited and linked sources.
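
In outline, a retrieval-augmented flow like the one described above can be sketched as: embed the query, fetch the most similar Hub excerpts, and ask the model to answer using only those excerpts, returning the sources for citation. The dependency names and prompt wording below are assumptions, not the team's actual services.

```ts
type SourceChunk = { title: string; url: string; text: string };

// Dependencies are injected so this sketch stays independent of any specific
// embedding model, vector database, or LLM client.
type RagDeps = {
  embed: (text: string) => Promise<number[]>;
  searchVectors: (vector: number[], k: number) => Promise<SourceChunk[]>;
  generate: (prompt: string) => Promise<string>;
};

export async function answerQuery(query: string, deps: RagDeps) {
  const queryVector = await deps.embed(query);
  const sources = await deps.searchVectors(queryVector, 5);

  // Number the excerpts so the model can cite them and the UI can link them.
  const context = sources
    .map((s, i) => `[${i + 1}] ${s.title}\n${s.text}`)
    .join("\n\n");

  const prompt =
    "Answer the question using only the numbered Hub excerpts below, " +
    `citing them as [n].\n\n${context}\n\nQuestion: ${query}`;

  const answer = await deps.generate(prompt);
  return { answer, sources };
}
```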

User Feedback Helps Continuously Improve the AI Response and Relevant Sources

The Assistant improves through reinforcement learning, optimizing results over time based on quick feedback.
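
A hedged sketch of how that quick feedback signal might be captured on the client; the endpoint and payload shape are placeholders standing in for the team's actual feedback pipeline.

```ts
// Quick thumbs-up / thumbs-down capture for a given query result.
type QuickFeedback = {
  queryId: string;
  helpful: boolean;
  timestamp: string;
};

export async function sendQuickFeedback(
  queryId: string,
  helpful: boolean,
  endpoint = "/api/assistant/feedback"
): Promise<void> {
  const payload: QuickFeedback = {
    queryId,
    helpful,
    timestamp: new Date().toISOString(),
  };

  // The signal is logged for later ranking and reward updates,
  // so the UI does not need to block on the response.
  await fetch(endpoint, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(payload),
  });
}
```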


Fine-Tuning the AI Model with Real User Input

Users can submit detailed feedback on query results, which is manually reviewed and implemented to improve the product. Eventually, this input will feed back into an evaluation model to boost the AI’s accuracy and efficiency over time. Moreover, the feedback flow encourages participation with status-based animations that reward users for their time.

From Response to Refinement

User Testing

Through in-person moderated usability testing, I wanted to gather feedback on the Assistant's designs and functionality, focusing on a new approach to displaying search results with generative AI.



One of five representatives from the Station Support Team running through the moderated usability test.

The goals of the user testing were:

Watch

1

Watching Station Support staff use the Hub Assistant from start to finish.

Learn

2

Learning if there are opportunities to make it a better experience.

Understand

3

Understanding the likelihood that Member Stations will use it.

Following the usability testing, the Hub Product Owner released the Hub Assistant to a select number of Member Stations for testing and feedback. This allowed the team to identify and fix bugs, improve usability, and gather valuable insights before updating and releasing the product to the entire Member Station network.

Updated Designs

As feedback and insights come in through the closed beta release, I have begun revising the designs to better integrate the Assistant with the Hub, reduce cognitive load, and optimize the visual design.

In addition, the engineers and I refactored much of the code to improve the performance and scalability of the product in preparation for a final release.

Improved Branded Experience

Since refactoring made the API status indicator unnecessary, the PBS logo was added to the freed-up space to reinforce the PBS brand identity after users expressed feeling disconnected from it.

A More Integrated Assistant

The overall size of the Hub Assistant window was reduced to create a more cohesive experience, as users felt it was visually detached from the rest of the Hub user interface.

Enhanced Clarity Through Simplification

To improve usability, I unified the button design and positioned all action buttons along the bottom, addressing user confusion about their appearance and placement. Additionally, I moved related queries closer to the search input field to clarify their function.

Reduction in Cognitive Load

Progressive disclosure was implemented for response sources and other elements to reduce cognitive load, allowing users to focus on key interactions without distractions.
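
As an illustration of the pattern (the component, copy, and two-item threshold are hypothetical), a collapsed source list might look like this:

```tsx
import { useState } from "react";

type Source = { title: string; url: string };

// Progressive disclosure for cited sources: the answer stays in focus, and the
// full source list only appears when the user asks for it.
export function SourceList({ sources }: { sources: Source[] }) {
  const [expanded, setExpanded] = useState(false);
  const visible = expanded ? sources : sources.slice(0, 2);

  return (
    <section aria-label="Cited sources">
      <ul>
        {visible.map((s) => (
          <li key={s.url}>
            <a href={s.url}>{s.title}</a>
          </li>
        ))}
      </ul>
      {sources.length > 2 && (
        <button onClick={() => setExpanded((v) => !v)}>
          {expanded ? "Show fewer sources" : `Show all ${sources.length} sources`}
        </button>
      )}
    </section>
  );
}
```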