A generative AI tool that leverages retrieval augmented generation (RAG) to redefine how PBS Station Support staff and Member Stations access internal knowledge base resources.
Product Designer and Design Engineer
Sophie Marshall (Data Engineer), Useff Chase (Principal Engineer), and Amy Rubino (UX Director)
Figma, React, HTML, CSS, Framer Motion, GitHub and VS Code
Faster, More Accurate Access to Information on the Internal Knowledge Base
fewer support inquiries related to sources and information available on the Hub
conversion rate based on query submissions
With this beta release, the team's innovative approach and the tool's underlying technology could be scaled to assist PBS GA viewers in discovering content on the flagship pbs.org or to support educators in finding learning resources on PBS LearningMedia.
*The Hub Assistant is currently in a closed beta release.
The Hub, an internal Knowledge Base, relies on an extensive navigation structure and lexical search based on keyword matching for information and resource discovery.
As the central repository, the Hub is constantly updated with new content from numerous HQ departments, a weekly email newsletter, and regularly scheduled webinars.
330
Public Media Member Stations
4
Station Support staff
The Station Support team assists the entire Member Station network with all inquiries, including locating resources and answering questions about updated information posted within the Hub.
How can Member Stations be empowered to quickly find the information they need, while ensuring the limited Station Support staff can focus on higher-priority inquiries?
30% of Station Support inquiries stem from poor content discovery
I was handed the project after the previous designer couldn't fully address the request or support the engineering team in the development efforts.
I saw an opportunity to improve the user experience and visual design.
And I wanted to differentiate the product from the field of blue that dominates the Hub.
I refined the previous design to better showcase the product and its generative AI capabilities, incorporating subtle details that elevate the overall quality.
Built in React with Motion for React animations and transitions, the beta-released product balances performance with polish.
Animated background stars draw attention toward the Hub Assistant widget, encouraging interaction as an engaging alternative to the traditional search that sits in a prominent location at the top of the window.
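As a rough illustration, background stars like these might be animated with Framer Motion along the lines of the sketch below; the component, star count, positions, and timing values are illustrative assumptions rather than the production implementation.

```tsx
// Hypothetical sketch: twinkling background stars rendered with Framer Motion.
// Star count, positions, and durations are illustrative assumptions.
import { motion } from "framer-motion";

const STAR_COUNT = 24;

export function BackgroundStars() {
  return (
    <div aria-hidden="true" style={{ position: "absolute", inset: 0, overflow: "hidden" }}>
      {Array.from({ length: STAR_COUNT }).map((_, i) => (
        <motion.span
          key={i}
          style={{
            position: "absolute",
            left: `${(i * 37) % 100}%`, // spread stars pseudo-randomly
            top: `${(i * 53) % 100}%`,
            width: 3,
            height: 3,
            borderRadius: "50%",
            background: "white",
          }}
          // Loop a gentle fade and scale to draw the eye toward the widget
          animate={{ opacity: [0.2, 1, 0.2], scale: [0.8, 1.2, 0.8] }}
          transition={{ duration: 2 + (i % 5), repeat: Infinity, ease: "easeInOut" }}
        />
      ))}
    </div>
  );
}
```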
As users type, the Hub Assistant predicts and suggests related queries using the underlying vector embedding database, speeding up search and resource discovery.
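A minimal sketch of how typeahead suggestions like these could be wired up on the client; the /assistant/suggest endpoint, response shape, and hook name are assumptions for illustration only.

```tsx
// Hypothetical sketch: debounce the user's input and ask a suggestion endpoint
// (backed by the vector embedding database) for semantically related queries.
import { useEffect, useState } from "react";

export function useQuerySuggestions(input: string, delayMs = 300) {
  const [suggestions, setSuggestions] = useState<string[]>([]);

  useEffect(() => {
    if (input.trim().length < 3) {
      setSuggestions([]);
      return;
    }
    const controller = new AbortController();
    const timer = setTimeout(async () => {
      try {
        const res = await fetch(`/assistant/suggest?q=${encodeURIComponent(input)}`, {
          signal: controller.signal,
        });
        const data: { suggestions: string[] } = await res.json();
        setSuggestions(data.suggestions);
      } catch {
        // Ignore aborted or failed requests; keep the last good suggestions.
      }
    }, delayMs);

    return () => {
      clearTimeout(timer);
      controller.abort();
    };
  }, [input, delayMs]);

  return suggestions;
}
```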
Generating results can take up to 30 seconds, so a loading animation provides visual feedback, keeping users engaged and reinforcing a sense of responsiveness.
By moving beyond simple keyword matching, the Assistant surfaces contextually accurate results from the Hub's information. It also encourages deeper exploration by presenting related questions and concise, AI-generated responses with cited and linked sources.
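The client-side query flow might look roughly like the sketch below, with a loading state that covers the generation window and a response shape carrying the answer, its cited sources, and related questions; the endpoint and types are assumed for illustration.

```tsx
// Hypothetical sketch of the query flow: submit a question, show a loading state
// while the RAG pipeline generates an answer, then render the concise response
// with its cited sources. The endpoint and types are illustrative assumptions.
import { useState } from "react";

interface CitedSource {
  title: string;
  url: string;
}

interface AssistantResponse {
  answer: string;
  sources: CitedSource[];
  relatedQuestions: string[];
}

export function useAssistantQuery() {
  const [loading, setLoading] = useState(false);
  const [response, setResponse] = useState<AssistantResponse | null>(null);

  async function submit(question: string) {
    setLoading(true); // drives the loading animation for up to ~30 seconds
    try {
      const res = await fetch("/assistant/query", {
        method: "POST",
        headers: { "Content-Type": "application/json" },
        body: JSON.stringify({ question }),
      });
      setResponse(await res.json());
    } finally {
      setLoading(false);
    }
  }

  return { loading, response, submit };
}
```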
The Assistant improves through reinforcement learning, optimizing results over time based on quick feedback.
Users can submit detailed feedback on query results, which is manually reviewed and implemented to improve the product. Eventually, this input will feed back into an evaluation model to boost the AI’s accuracy and efficiency over time. Moreover, the feedback flow encourages participation with status-based animations that reward users for their time.
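A small sketch of what the feedback submission could look like on the client; the endpoint and field names are hypothetical.

```tsx
// Hypothetical sketch: send quick and detailed feedback for a given response so it
// can be reviewed and, eventually, feed an evaluation model.
export type QuickRating = "positive" | "negative";

export async function submitFeedback(
  responseId: string,
  rating: QuickRating,
  comment?: string,
): Promise<void> {
  await fetch("/assistant/feedback", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ responseId, rating, comment }),
  });
}
```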
Through in-person, moderated usability testing, I wanted to gather feedback on the Assistant's design and functionality, focusing on a new approach to displaying search results with generative AI.
The goals of the user testing were:
1. Watching Station Support staff use the Hub Assistant from start to finish.
2. Learning whether there are opportunities to make it a better experience.
3. Understanding the likelihood that Member Stations will use it.
Following the usability testing, the Hub Product Owner released the Hub Assistant to a select number of Member Stations to test and provide feedback. This allowed the team to identify and fix bugs, improve usability, and gather valuable insights before updating the product and releasing it to the entire Member Station network.
As feedback and insights come in through the closed beta release, I have begun revising the designs to better integrate the Assistant with the Hub, reduce users' cognitive load, and optimize the visual design.
In addition, the Engineers and I refactored much of the code to improve the performance and scalability of the product in preparation for a final release.
Since refactoring made the API status indicator unnecessary, the PBS logo was added in its place to reinforce the PBS brand identity, addressing the disconnect users had expressed.
The overall size of the Hub Assistant window was reduced to create a more cohesive experience, as users felt it was visually detached from the rest of the Hub user interface.
To improve usability, I unified the button design and positioned all action buttons along the bottom, addressing user confusion about their appearance and placement. Additionally, I moved related queries closer to the search input field to clarify their function.
Progressive disclosure was implemented for response sources and other elements to reduce cognitive load, allowing users to focus on key interactions without distractions.
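As an example of the progressive disclosure pattern, cited sources could be collapsed behind a toggle roughly like this; the component and props are illustrative assumptions.

```tsx
// Hypothetical sketch: cited sources are collapsed by default and revealed on demand,
// keeping the answer itself as the primary focus.
import { useState } from "react";

export function SourceList({ sources }: { sources: { title: string; url: string }[] }) {
  const [expanded, setExpanded] = useState(false);

  return (
    <div>
      <button onClick={() => setExpanded((open) => !open)}>
        {expanded ? "Hide sources" : `Show ${sources.length} sources`}
      </button>
      {expanded && (
        <ul>
          {sources.map((s) => (
            <li key={s.url}>
              <a href={s.url}>{s.title}</a>
            </li>
          ))}
        </ul>
      )}
    </div>
  );
}
```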