BRIEF
The BBC employs various methods to understand what its audiences think of its products and content, including long-standing formalised tools such as the Broadcasters’ Audience Research Board (BARB) for TV audience measurement, Radio Joint Audience Research (RAJAR) for radio audience measurement, the Pulse panel for appreciation indices, and iPlayer sign-in data for user behaviour on the iPlayer platform. Additionally, the BBC uses informal methods such as interactive surveys, online reviews and ratings, social media monitoring, and sentiment analysis to capture more subjective audience responses. However, these all serve well-known, established ‘products’.
When it comes to more novel forms of media experience, the BBC has various innovation labs which test new ideas with smaller subsets of audiences, and of course the iPlayer platform allows some A/B and beta testing. However, these methods depend on experiment participants magically arriving at the websites, and the platforms themselves limit the types of experience that can be tested.
Ideally, a media company like the BBC would engage in ecologically valid, iterative testing earlier in the development and implementation of novel products. That way, novel products can be crafted more in tune with audiences, and brand reputation can be protected. The platform for engagement would have to be portable and updateable so that the BBC could distribute it to participants’ homes for in-the-wild testing.
The design and implementation of the platform itself served as a good way to test various user research methodologies.
The project explored various things including:
- Hardware build and encasement
- Software OS and ‘TV’ application
- Analytics
- Application Design (Presentation layer)
- Participant recruitment
- Experiment methodology
- Data Analysis & Dissemination
- Branding & Packaging
- Future Scoping
Although I was heavily involved in the last six sub-areas of the project, this write-up deals only with the design of the experiment methodology.
In collaboration with…
BBC colleagues in R&D, UX&D, Audience Research, TV Platforms, Sports, Technology, Strategy & Architecture


DESIGN RESEARCH QUESTION
- How can we design the right questions to ask participants through each step of testing?
- How and when, during an experience being tested, do we pose those questions?
- What format should the questions take, and where should they be presented?
The design research questions in this project had three purposes:
- To develop a policy/programme of how to design future experiment methodology for the platform.
- To test the programme while also testing the delivery of the platform at the prototype level.
- To explore how the platform can be built further to appeal to more stakeholders.
SCOPE
The pilot experiment tested the MVP of the platform with a few trusted colleagues within the BBC in order to soft-launch a potential methodology. The ‘experience’ itself simply emulated a ‘TV’ platform.
CONTEXT
Each ‘black box’ was designed to be connected to an existing monitor in the participant’s home and to work with a remote, just like a normal TV. The whole set-up was packaged into a letterbox-sized parcel that could be posted to participants. The idea was to keep the platform as cheap and light as possible so that the process could be scaled up for future iterations of the experiment.
Once the ‘black box’ was connected, the monitor would display a ‘streaming’ platform comparable to any other on the market. The team took inspiration from user expectations in the current streaming market to design a clean, easy-to-navigate interaction layer which participants could use to select a BBC programme to watch on TV (including live streams).
Considering the platform would be used for longitudinal studies in the homes of users, three main data collection methods were employed:
- Data automatically collected through the platform detailing the viewing behaviour of the user: timestamps, programme chosen, length of programme watched, etc.
- Data collected using cultural probes, as part of user research studies, by getting the user to carry out specific tasks on the platform and give ‘prompted’ feedback about their experience with it.
- Data collected at the end of the period of study through an online questionnaire designed to elicit the users’ opinions on the overall platform.
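To illustrate the first, automatic method, a minimal viewing-behaviour log entry might be recorded as below. This is a sketch only: the field names, schema, and JSON-lines format are assumptions for illustration, not the platform’s actual analytics pipeline.

```python
import json
import time
from dataclasses import dataclass, asdict

# Hypothetical schema for a single viewing event; the fields mirror the
# behaviours described above (timestamps, programme chosen, length watched).
@dataclass
class ViewingEvent:
    participant_id: str      # anonymised identifier, not a real name
    programme_id: str        # programme selected on the platform
    started_at: float        # Unix timestamp when playback began
    watched_seconds: int     # how long the participant watched
    live_stream: bool        # live stream vs on-demand programme

def log_event(event: ViewingEvent, path: str = "events.jsonl") -> None:
    """Append one event as a JSON line for later offline analysis."""
    with open(path, "a") as f:
        f.write(json.dumps(asdict(event)) + "\n")

event = ViewingEvent("p01", "bbc-news-at-six", time.time(), 1320, True)
log_event(event)
```

An append-only log like this keeps the on-device footprint small and can simply be collected with the hardware at the end of the study.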
DESIGN APPROACH
Since the project was in its infancy, with many gaps in knowledge, the best methods for understanding user behaviour with the platform and uncovering ways to improve the design were qualitative. The purpose of the first pilot study was to design better interactions and troubleshoot the first round of set-up and usage. While forms of contextual inquiry or 1:1 interviews could have been used, they were neither focused enough nor efficient enough to cast an exploratory net over the research questions.
Diary studies, delivered through cultural probes, give users the privacy to ‘play’ with the platform without fear of failure, while also affording the researcher the scope to dial in on specific aspects of the interaction.
The cultural probes ultimately took two formats: paper and digital. In keeping with the branding designed for the platform (clean, playful and familiar), the paper cultural probes took postcard and sticker form; the digital format was a branded chat on Slack.




Designs owned by BBC R&D
Recruitment was conducted internally. As the pilot was a soft launch, it was important to keep the platform within closed circles. However, the colleagues recruited were selected carefully to ensure they knew no details of the project and had expertise in other areas.
All participants were briefed on the expectations of the study and instructed in how to use the cultural probes. This was done at two points: during recruitment and just before the materials were posted to their home addresses. Additionally, the instructions were reiterated in small, digestible chunks throughout the study at suitable times, in the form of ‘reminders’ to finish expected tasks in a timely manner.
The responses required from the participants were quite focused, so the design of the cultural probes (postcards) was part prescriptive and part blank slate. Participants were guided through each postcard in a structured, consistent manner so they could give specific answers and then elaborate on behaviours they uncovered. A few participants opted to respond to the queries on Slack. Generally, the online prompts followed a similar pattern to the postcards; this was particularly useful when participants opted to send in an image or a video to visualise their response. The prompts were scheduled on specific days through the week to maintain a balance between gentle reminders and spam. After the materials and hardware were collected post-study, participants were interviewed for their final thoughts.
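The weekly prompt cadence described above can be sketched as a simple day-of-week schedule. The specific days, prompt texts, and function names here are hypothetical examples, not the study’s actual materials.

```python
from datetime import date, timedelta

# Hypothetical weekly schedule: a few specific days carry prompts, while
# the remaining days stay quiet to avoid spamming participants.
# Keys are datetime weekday numbers (Monday = 0).
PROMPTS = {
    0: "Monday: try finding a live programme and note anything confusing.",
    2: "Wednesday: pick a programme from the homepage and describe the journey.",
    4: "Friday: a reminder to fill in this week's postcard if you haven't yet.",
}

def prompts_for_week(start: date) -> list[tuple[date, str]]:
    """Return (date, prompt) pairs for the seven days beginning at `start`."""
    out = []
    for offset in range(7):
        day = start + timedelta(days=offset)
        if day.weekday() in PROMPTS:
            out.append((day, PROMPTS[day.weekday()]))
    return out

week = prompts_for_week(date(2024, 1, 1))  # 2024-01-01 is a Monday
```

Separating the schedule from the delivery channel means the same cadence can drive either the Slack chat or physical postcard reminders.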
CONCLUSIONS
The first pilot study went mostly smoothly. Most importantly, all participants were able to finish the tasks designed at the beginning of the study and returned all materials to the lab. Although many conclusions and implications for future designs were drawn, this page will not detail them as the project remains in-house.
However, the methodology designed for the study, in conjunction with the cross-disciplinary team, was very successful. Participants were able to follow and understand the instructions on the cultural probes and respond effectively to record their thoughts on the platform. Furthermore, they were easily able to point out flaws and give credit where due. Since the tasks were split to represent various customer journeys, the study goals were reached, and the conclusions also covered exploratory aspects of the platform.
As an aside, Bill Gaver and his team from Goldsmiths (now Northumbria) gave us a lot of inspiration in extrapolating where this project might go.
