Crowdsourcing

We briefly touched on crowdsourcing as a way to gather usability feedback at scale. While this approach is useful for collecting large amounts of data quickly, it comes with clear limitations: crowdsourced testing can lack context, and the quality of feedback varies with each participant’s level of engagement and understanding. It feels most useful when paired with more controlled usability testing rather than used on its own.



Eye Tracking

A large part of today’s session focused on eye tracking and how it can be used to understand user behaviour. We covered the history of eye tracking and how it has evolved into a valuable UX research tool. Understanding concepts such as fixations and saccades helped explain how users visually process interfaces, with fixations showing where attention is focused and saccades representing the rapid movements between those points.

We also discussed foveal and peripheral vision, which helped explain why certain elements draw attention more easily than others. Fixation counts can be particularly useful when analysing text-heavy layouts, as they can reveal how users move through paragraphs or navigate a page.

Different visualisation methods were covered, including heat maps and gaze plots. Heat maps give a broad overview of where users are looking most, while gaze plots show the exact order and path of eye movements. Both felt like powerful tools for identifying issues with layout, hierarchy, and navigation, especially when used alongside other usability methods.
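To make the link between the raw data and these visualisations concrete, here is a minimal sketch of how fixations might be detected and binned into a coarse heat map. It assumes hypothetical gaze samples as (x, y) pixel coordinates at a fixed sample rate, and uses a simple dispersion-threshold pass (the thresholds `max_dispersion` and `min_samples` are illustrative, not values from the session):

```python
def detect_fixations(samples, max_dispersion=25, min_samples=5):
    """Group consecutive gaze samples into fixations.

    A window of samples counts as one fixation while its spread stays
    within `max_dispersion` pixels; the jumps between fixations are
    the saccades. Returns (centre_x, centre_y, sample_count) tuples.
    """
    fixations = []
    window = []
    for point in samples:
        window.append(point)
        xs = [p[0] for p in window]
        ys = [p[1] for p in window]
        # Dispersion = horizontal spread + vertical spread of the window.
        if (max(xs) - min(xs)) + (max(ys) - min(ys)) > max_dispersion:
            if len(window) - 1 >= min_samples:
                w = window[:-1]  # everything before the outlying sample
                cx = sum(p[0] for p in w) / len(w)
                cy = sum(p[1] for p in w) / len(w)
                fixations.append((cx, cy, len(w)))
            window = [point]  # start a new window at the outlying sample
    if len(window) >= min_samples:
        cx = sum(p[0] for p in window) / len(window)
        cy = sum(p[1] for p in window) / len(window)
        fixations.append((cx, cy, len(window)))
    return fixations


def heat_map(fixations, cell=100):
    """Bin fixation centres into a grid of `cell`-pixel squares,
    weighted by how long each fixation lasted (its sample count)."""
    grid = {}
    for cx, cy, n in fixations:
        key = (int(cx) // cell, int(cy) // cell)
        grid[key] = grid.get(key, 0) + n
    return grid
```

Feeding `heat_map` two tightly clustered runs of samples would yield two hot cells, which is essentially what a heat-map overlay renders; keeping the fixations in order instead gives the sequence a gaze plot draws.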



Running a Usability Test

The second half of the session focused on running a usability test in practice. We were given a website and asked to create three tasks for a participant to complete. The setup involved preparing a computer, assigning the participant to a new team, and conducting the test while recording both the screen and audio. My role during this process was that of observer, which allowed me to focus entirely on the participant’s behaviour rather than guiding the task. After the test, we reviewed the findings and noted observations based on how the participant interacted with the site.

Here is a recording of the test.

Screen Recording 2025-12-04 at 11.51.30.mov


Observations

For the first task, the participant navigated through the TV tab and then into televisions, completing the task without using filters or tertiary navigation. This suggested that the primary navigation was clear enough to support task completion on its own.

In the second task, the participant used the dishwasher tab and immediately engaged with the comparison feature, which was the first tool they noticed. However, the “Buy Now” option caused some confusion, as it was interpreted differently from an “Add to Basket” action. Despite this, the task was completed successfully, although the wish list feature was not used at any point.

The third task was the most complex. The participant initially attempted to use a banner, then moved to the footer under shop services, where they found delivery costing but not the delivery policy. From there, they returned to the homepage and attempted to search for the delivery policy, navigating through My Account, My Basket, the return form, Contact, and finally the terms and conditions, where the task was eventually completed. This highlighted issues around information architecture and the visibility of important policies.