This project builds on the previous one, in which I researched car infotainment systems from several popular manufacturers and then created wireframes of my own system based on that research.

What is a prototype and why are they important?

Paper prototype of an app designed for a BlackBerry-style smartphone.

Usability.gov states that a prototype is: ‘a draft version of a product that allows you to explore your ideas and show the intention behind a feature or the overall design concept to users before investing time and money into development. A prototype can be anything from paper drawings (low-fidelity) to something that allows click-through of a few pieces of content to a fully functioning site (high-fidelity).’

To summarise, it’s like a pre-production model to prove that a concept works. In this example, it could be argued that my concept, or idea, was the set of sketches and wireframes that I drew in the previous project. They show how the system might look on a very basic level, but they aren’t actually functional, nor do they really give a clear idea of how the system will finally look. This is what the prototype is for. A prototype that has some functionality can be tested [by a group of users] and evaluated early on to determine how successful the design is, whether it works and what can be done to improve it. Even prototypes made out of paper and sticky notes can be a useful indication of how a final product may look – certainly more so than a wireframe drawing.

Prototyping is an essential stage in the development of any product because it can:

  • Save you money and time. If you make a prototype and it doesn’t fulfil its purpose, or you discover flaws with it, you can either bin the project or consider how to rectify the faults before you invest money and time into making the final product.
  • Save you time in another way, too: many prototypes are not coded at all – they are just a series of interface images linked together – and it’s quicker to build a prototype than to code a product from the ground up.
  • Get honest user feedback to shape the development journey of your product. If you get a user to test a prototype and they really dislike a feature you’ve implemented, you can make sure you change it for the final product. This helps identify usability issues before you’ve even written a line of code (potentially).
  • Give clients and users an idea of how the final product will look. Wireframes lack colour and assets such as images and icons – a prototype includes all of these so gives people a much better idea of how a final product will look.

How to test a prototype

There are several different methods of testing prototypes. The main one is using an ‘IA-Based View’, which can be shown diagrammatically almost like a tree, with the home page of the app being the root and the branches (or first-level nodes, to use the technical terminology) being the main sections of the app, such as data-input pages or, in the case of a website, an About page or a Products page. You can have further branches or nodes which represent sub-pages.

Example of an IA-Based View site tree for the website nngroup.com showing the different branches/nodes of pages on the site.

From the site tree, four main prototype methods can be established:

Horizontal: the prototype has all of the menus and some or all of the links go to the main pages but the rest of the design isn’t there. In the case of my infotainment system only a home screen showing the apps that the user could open would be present. Useful if you just want first impressions on an interface.

T: all menus open and again some or all of the links lead to the main sections, but users can do specific tasks on specific pages. In the case of my infotainment system, they might be able to navigate to a certain location using the Sat Nav but they won’t be able to use the phone or any of the other apps because these either haven’t been designed yet or are not relevant to the testing that the user is carrying out. Useful if you want to try out one specific part of your prototype, get feedback, then depending on feedback decide whether to apply the design to the rest of the prototype.

M: as above but opens up a little more functionality but still not 100% functional. Using my infotainment system as an example, users may be able to use the Sat Nav and the phone, but maybe not the radio or the music app or any of the other apps for the same reasons explained above. Useful if you want to compare two different interfaces or two different parts of the prototype to see if a design works across pages or sections.

Semi-complete: as T and M but the interface has been designed so that the user can use the majority of the system and complete tasks. Useful if you’re nearing completion or approaching a release date and want feedback on how the majority, or entire, system works as a whole.

The one that you choose depends on what you want your users to test, how much time you have, what the software you’re using to make your prototype allows and what you aim to get out of your testing as a whole.

Diagram showing Horizontal, T, M and Semi-Complete testing methodologies.

There are also several other things to consider when testing a prototype.

  • Will the test be ‘blind’? Will the testers have prior knowledge of the system or not?
  • Are you going to compare the prototypes to anything? If you are, label each different prototype or variable that you test A, B, C, D and so on. Most tests are A/B tests, where two different things are compared, but you can test more than two.
  • How are you going to record data? Interviewing testers after the test is a method of recording qualitative data – their feedback, with nothing other than their thoughts and opinions to back up what they tell you. Collecting data such as the time it takes to complete a task, the score on a game or anything else that can be used to produce actual statistics is quantitative data. Both are valuable, and each has its uses.
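If quantitative data is being collected in a browser-based test like the one described later, task completion times can be captured with a few lines of JavaScript. This is just an illustrative sketch – the function and variable names are my own, not from any particular testing tool:

```javascript
// Illustrative sketch only: capturing quantitative data (task completion
// time) during a browser-based test. All names here are my own inventions.
const timings = [];
let currentTask = null;

function startTask(name) {
  // Record the moment the tester begins a task
  currentTask = { name, begin: Date.now() };
}

function endTask() {
  // Store how long the task took, in seconds
  if (currentTask) {
    timings.push({
      name: currentTask.name,
      seconds: (Date.now() - currentTask.begin) / 1000,
    });
    currentTask = null;
  }
}
```

The resulting numbers can then be averaged across testers or compared between test variants.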

From prototype to product – a brief history of BMW iDrive

BMW iDrive is one of the infotainment systems that I researched when designing my own system. The groundbreaking system was unveiled in September 1999 at the Frankfurt Auto Show, when BMW first publicly displayed the Z9 GT concept car. The system was also demonstrated in the Z9 Convertible, which was unveiled at the 2000 Paris Auto Show. BMW had noted that in the years leading up to the Z9’s unveiling, ‘modern driver assistance as well as numerous communication and comfort functions covering new and increasingly comprehensive purposes and applications have been introduced in the cockpits of modern cars,’ inspiring them to create an infotainment system for their future models going into the Millennium – one that would be comprehensive, easy to use and non-distracting.

The interior of the BMW Z9 concept car.

BMW’s press release on the Z9 Convertible put iDrive at the centre of attention, claiming that it is a ‘major step from purely physical ergonomics in the cockpit towards an all-round driving philosophy arranging all modern comfort, communication and driver assistance systems directly around the driver for his maximum convenience and easy use at any time.’ The iDrive system in the Z9 cars would allow the driver to have the ability to change up to 700 different controls on the car which at the time was impressive and unheard of, but BMW had also thought about how the driver would interact with the system. They acknowledged that such a system would require hundreds of different buttons which would not be viable, so they developed a control wheel (like a rotary button) positioned in the console between the driver and passenger that allows the driver to scroll through menus, selecting options and apps that they want to use. Clearly BMW’s design was successful because it is still in use today on their current models (though it has evolved and now features a touchpad, too) and even reviews of the Z9 GT concept car from 2014, some 15 years after it was unveiled, still praise the system: ‘The rotary/push button falls readily to hand for the driver and front seat passenger and allows the driver to activate functions without the need to look at them while driving. A large 8.8-inch monitor in the center of the dashboard displays all the information the driver requires in a simple graphic display, apart from the speedometer and tachometer which are conventional analog instruments. The monitor is positioned within the driver’s field of vision, allowing it to be viewed while concentrating attention on the road ahead.’ 

The interior of the 2001 E65 7 Series, showing the iDrive system.

The first production model to feature iDrive was the BMW E65 7 Series, which was launched in 2001, just two years after the Z9 concept was unveiled. The first-generation iDrive remained in production until September 2008, when the second generation was released and fitted to new models (though the first-generation system had undergone minor changes in those seven years). It’s interesting to note that whilst the software has changed completely since the Z9 concept was made and the E65 7 Series was introduced in 2001, the basic layout of the hardware elements of the system has remained largely the same. Even in today’s BMWs, there is a large display in the centre of the dashboard which shows all of the information a driver could ever need apart from the speedometer and rev counter, and there is still a wheel to control the system. This goes to show that the design was a success. Today, iDrive is a very highly regarded infotainment system and is often praised for its ease of use and the features it offers, but when it launched on the E65 7 Series it was often criticised for being too difficult to use, and many reviewers felt that it could be too distracting. However, this was at a time before infotainment systems were commonplace in cars.

The interior of the 2018 BMW M5, showing the latest version of iDrive.

Prototyping session with Tom Haczewski from The User Story

The User Story is a local [Norwich]-based UX company who redesign websites for existing businesses; one of their most recent clients was Virgin Wine, whose current website is their design. My course tutor arranged for Tom from The User Story to visit Norwich University of the Arts to run a session on prototyping. He talked to my peers and me about the benefits of prototyping, the differences between a prototype and a wireframe, and how each is done in industry, and then gave us a practical task: to produce a paper prototype of a Halloween-themed app encouraging people to come into Norwich on Halloween to support local businesses. We started by splitting into two teams and brainstorming ideas. My team produced a Halloween-themed dating app that would let you meet singles up for a good night out on Halloween. We made a paper prototype by drawing a basic outline of a tablet on one sheet of A4 and then drawing the individual parts of the UI on smaller pieces, which we placed over the tablet during testing to simulate navigating through the app. The other team made an app which told users which events were happening in the city that night. The paper prototyping was really fun and showed me how useful paper can actually be for prototyping – before, I was a little sceptical about it, but Tom convinced me that it is a good option, if a little time-consuming. The other team tested our app, and through this I also learned that during prototyping you sometimes need to prompt the user – I went in expecting to run a blind test with little interaction with the tester, but this proved to me that in reality that is often just not possible.

My peers and Tom (standing up behind the table) look at the prototype dating app that my team made.

 

Each element of the paper prototype interface was drawn on an individual piece of paper. We drew each date on an individual piece of paper and changed these around as the user filtered the type of person they wanted to date.

Prototyping my design

What I will prototype

Previously I designed two interfaces for my infotainment system, named ‘Stellardrive’. I created an interface for the large display in the centre of the dashboard, and a smaller, scaled-down interface for the instrument panel that the driver can glance at whilst driving to see key information. I’m going to prototype the instrument panel UI and the buttons on the steering wheel, as I feel this will be interesting – currently not many cars make good use of the space in the instrument panel for a second infotainment screen. This is what I feel is the ‘unique selling point’ or ‘big breakthrough’ of my infotainment system. To do this I am going to create a simulator app for my car.

How the prototype will work

The prototype will consist of the steering wheel and instrument panel in front of a video of me driving. It will essentially be a very basic website housing these elements and when the user clicks on the buttons on the steering wheel the different elements of the user interface on the instrument panel will change accordingly, e.g. if the user presses a button on the wheel that opens up the settings app, the settings app will be displayed in the instrument panel.

In an ideal world, it would be great to actually simulate driving using a motoring game as the simulator and physical hardware such as a gaming steering wheel with mapped buttons as an input method to control the infotainment system prototype on the instrument panel which would simulate the actual driving experience with the infotainment system. However, due to complexities and time constraints my simulator will be a very basic software-only prototype with no representative hardware interaction (meaning that there will be no physical steering wheel for the user to test, for example) and driving will be simulated purely by the driving footage playing in the background behind the steering wheel and instrument panel. It’s not the most realistic prototype at all, mainly because there are no consequences of not paying attention to the road, but it does at least give an idea of what the system would be like to use if you had the distraction of driving.

To simulate being stationary the video is simply hidden and a white screen is shown instead.

What I will test

I will create a series of goals for the user to complete and then give them a certain time frame in which to accomplish these tasks. Using the eye-tracking hardware and software in the UX lab at university, I will monitor where the user is looking to judge whether they are looking at the instrument panel or the road ahead. The aim will be to see if the user can complete the tasks whilst driving. The prototype will be an A-B-C throw-away high-fidelity prototype, meaning:

  • Three different things will be tested.
  • The prototype is not of the final thing – it is a software emulation of what would be a combination of actual hardware and software if this prototype were to ever be put into production, so it is a ‘throw-away’ prototype despite being high fidelity.
  • The prototype looks realistic or semi-realistic, so although it is not of the actual hardware or software, it is high fidelity.

The three different tests will be:

  • (A) Testing usability: whilst stationary the user must change the source of the music from the default to a folder called ‘Kids music’ which is located on a USB drive, they then must skip a track.
  • (B) Testing the interface: the user must complete the same task but this time whilst driving.
  • (C) Testing safety: whilst Test B is being completed, the user’s eyes will be tracked to see where they are looking – this will test whether they are looking at the road, and how often they look at the road versus the wheel and instrument panel, to get an idea of how the user might use the system if it were in a real car and they were driving.

These tests were chosen for the following reasons:

  • (A) Testing usability: it’s important to understand how quickly a user can learn a system – the faster they learn it and the quicker they complete the task, the easier the system is to learn and therefore to use. In a car infotainment system this is vital, because eventually you’ll need to know how to do basic tasks such as changing the volume or changing the track, and you’ll need to do them without hunting for buttons if you want to be able to do this whilst driving.
  • (B) Testing the interface: the interface may be easy to learn and use whilst the car is stationary with no distractions to divert the user’s attention, but what about when there are distractions? The system is designed to be used whilst driving, so it is critical that it is tested whilst driving is simulated.
  • (C) Testing safety: it could be argued that this is more an extension of Test B than a separate test, but it is vital. Safety is extremely important when designing an infotainment system – the system must display information that the driver can glance at, and allow the driver to access and change information whilst driving, through great interface design and appropriate input methods (typically steering wheel buttons or voice control). It’s extremely difficult to design a system that meets both of these requirements, so to test mine, eye tracking will be used to see when the user is looking at the road and how much time they spend looking at the road compared to the wheel and instrument panel. Gaze maps will be useful data to analyse here because they show a map of where the user is looking, with each point numbered so you can see the order in which points were looked at – so if a user glances at the road quickly, then looks back at the wheel and never looks at the road again, it will be evident from the gaze map.
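As a rough illustration of how this kind of gaze data could be summarised, the sketch below takes a list of ordered fixations and works out the share spent on the road plus the longest unbroken spell looking away from it. The data shape here is entirely my own assumption – it is not the format any eye-tracking software actually exports:

```javascript
// Hypothetical sketch: summarising gaze-map data. Each fixation is in
// order (as on a numbered gaze map) and tagged with the screen region it
// fell in ('road', 'panel', 'wheel'...). The data shape is assumed.
function summariseGaze(fixations) {
  const onRoad = fixations.filter(f => f.region === 'road').length;

  // Longest unbroken run of fixations spent away from the road –
  // arguably the most safety-relevant number of all.
  let longestOffRoad = 0;
  let run = 0;
  for (const f of fixations) {
    run = f.region === 'road' ? 0 : run + 1;
    longestOffRoad = Math.max(longestOffRoad, run);
  }

  return {
    roadShare: onRoad / fixations.length, // fraction of fixations on the road
    longestOffRoad,                       // consecutive off-road fixations
  };
}
```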

Testing format

The test will be very simple. The user will first be shown the controls on the wheel – where the buttons are and what they do – so the tests won’t be completely blind; the user will have some prior knowledge of the system. Then they will complete the test. Two of the five users will first complete the test with the video mode off, to give them a little time to learn the system without distractions, and then they’ll do it with the video mode on. The other three will complete the test with the video on first and then with the video off, just to see if they can manage to complete the task for the first time whilst driving. This tests how easy the interface is to learn with and without distractions. I don’t really know how new users would try out a system like this if they had just purchased a new car – I would personally test it out whilst stationary and not whilst driving, for safety reasons – but at least here I will be testing both, to see how easy the system is to use. During the test, if I see the user getting a little stuck or confused, I might give some hints, because I feel that a totally blind test would take too long and would also not be reflective of the learning experience that most consumers would have if this were to go into a production vehicle: most users would probably read an instruction manual, research online or ask somebody if they were confused by something like this. I feel that very few would sit down and try to learn it entirely by themselves.

How to create the prototype

There are many great prototyping tools on the market – Axure, JustInMind, Adobe Experience Design and Sketch, to name a few – however none of these really offers a way to create the prototype I had in mind, mainly because my prototype is closer to a video game than to the websites and apps these programs are designed to prototype. There are a number of tools available for designing and prototyping games, but I didn’t have a lot of time to learn new software and I never looked into whether those tools would be suited to making the kind of simulator I wanted. I had a rough idea of how I could code such a simulator – using code to hide and show assets – and how it could be put on the web to make testing easier.

Originally I thought about using Adobe Animate (previously Flash Professional) to create such an app. My reasoning was that I had used it several years ago to create a personality quiz for A Level Graphic Communication which worked in a similar way to what I had proposed for my infotainment system simulator – it showed and hid images to create progress bars and ‘grey out’ buttons once the user had answered a question. I knew that in Animate I could make something similar for this project, and each frame on the timeline in the Animate project would be a new screen in the app, just as each frame in the timeline of my quiz project had been a new page of the quiz. The quiz was written in ActionScript 3.0, which was already a dated language when I wrote the quiz in December 2015 and, some two years later, is now even more dated. I remember feeling at the time that Adobe changing the name of Flash Professional to Animate in early 2016 indicated that Adobe themselves felt that the whole idea of ‘Flash’ and the ActionScript language for coding Flash apps was obsolete, and that they instead wanted to focus more on the animation side of Flash Professional than on Flash app development. When I thought of using Animate, my initial idea was to use JavaScript and HTML 5 to create the simulator. However, after some experimenting I found that Animate is not a great IDE, and I don’t think you can write JavaScript in it the way you can in other IDEs or even text editors – I have a feeling that the JavaScript functionality in Animate is fairly limited. I did briefly consider using ActionScript 3.0 to write the simulator as a Flash movie/app which I could embed into a webpage, and although it would have worked and been relatively easy to do, I was more interested in sticking to current languages like HTML 5, CSS 3 and JavaScript than continuing to use obsolete languages and technologies like ActionScript 3.0 and embedded Flash web apps.
I remember feeling that ActionScript 3.0 was a pretty poor language when I wrote my personality quiz two years ago, so I wasn’t really interested in using it again. I then decided to code everything from the ground up in Visual Studio, my preferred IDE, using HTML 5, CSS 3 and JavaScript. It would be a much more time-consuming and complex way of creating the prototype, but I was prepared to learn and face the challenge.

Initially I had considered using Adobe Animate (née Flash Professional) to create my prototype, however I decided that ActionScript 3.0 and Flash were outdated, so I instead used Microsoft Visual Studio 2017 to code an HTML, CSS and JavaScript-based solution.

Creating the prototype

The wheel and instrument panel illustrations from the previous project were adjusted to look more realistic with colours, gradients, icons and textures and then they were exported as PNGs for screen (72 PPI) from Adobe Illustrator. These assets were last used in the wireframing project so they needed to be made to look more realistic in order to create the high fidelity prototype for this project.

Using image mapping in HTML, I was able to find the coordinates of the steering wheel buttons, create a clickable area at these coordinates and then link them to functions in JavaScript which hide and show elements depending on which buttons are pressed. This is, in essence, how the prototype works. The code below is the HTML button mapping.

<map name="buttons">
 <!-- Right-hand buttons -->
 <area alt="Answer" id="answerButton" shape="poly"
 coords="1056, 723
 1062, 719
 1116, 720
 1168, 765
 1065, 762
 1064, 751"
 href="javascript:featureNotAvailable()"
 target="_self"/>

<area alt="Home" id="homeButton" shape="poly"
 coords="1116, 720
 1181, 724
 1223, 756
 1168, 765"
 href="javascript:homeScreen()"
 target="_self" />

<area alt="Volume Up" id="volUpButton" shape="poly"
 coords="1181, 724
 1236, 724
 1281, 779
 1236, 767
 1223, 765"
 href="javascript:featureNotAvailable()"
 target="_self" />

<area alt="Volume Down" id="volDownButton" shape="poly"
 coords="1224, 830
 1281, 815
 1222, 872
 1177, 873"
 href="javascript:featureNotAvailable()"
 target="_self" />

<area alt="Option" id="optionButton" shape="poly"
 coords="1169, 830
 1224, 830
 1177, 873
 1114, 874"
 href="javascript:optionButton()"
 target="_self" />

<area alt="Hang Up" id="hangUpButton" shape="poly"
 coords="1065, 831
 1169, 830
 1114, 874
 1062, 874
 1057, 869
 1064, 842"
 href="javascript:featureNotAvailable()"
 target="_self" />
</map>

From the code above it can be seen that a map called ‘buttons’ is created, and inside this map are several areas, each of which is positioned directly over the button it corresponds to on the wheel. The alt text for each individual button map relates to the button it is mapped to, and the href links to a function in the JavaScript file which simply hides and shows elements (defined in the HTML with IDs). Each element of the user interface – every button on the software interface, every rollover, every piece of album art and every piece of text – was a separate PNG image which would be shown or hidden using JavaScript functions and positioned using CSS. Many of these assets were created in Adobe Photoshop. The code below is the JavaScript hiding and showing elements depending on the button that has been pressed.

//Music app: hide and show the applicable elements for this view
function musicScreen() {
 screen = 'music';
 console.log(screen)

document.getElementById('home-1').style.visibility = 'hidden';
 document.getElementById('home-1-rollover').style.visibility = 'hidden';
 document.getElementById('home-2').style.visibility = 'hidden';
 document.getElementById('home-2-rollover').style.visibility = 'hidden';
 document.getElementById('home-3').style.visibility = 'hidden';
 document.getElementById('home-3-rollover').style.visibility = 'hidden';
 document.getElementById('home-4').style.visibility = 'hidden';
 document.getElementById('home-4-rollover').style.visibility = 'hidden';
 document.getElementById('home-5').style.visibility = 'hidden';
 document.getElementById('home-5-rollover').style.visibility = 'hidden';
 document.getElementById('home-6').style.visibility = 'hidden';
 document.getElementById('home-6-rollover').style.visibility = 'hidden';

document.getElementById('optionbar').style.visibility = 'hidden';
 document.getElementById('folder-1').style.visibility = 'hidden';
 document.getElementById('folder-1-rollover').style.visibility = 'hidden';
 document.getElementById('folder-2').style.visibility = 'hidden';
 document.getElementById('folder-3').style.visibility = 'hidden';
 document.getElementById('folder-3-rollover').style.visibility = 'hidden';
 document.getElementById('folder-4').style.visibility = 'hidden';

document.getElementById('titlebar').style.visibility = 'visible';
 document.getElementById('albumart').style.visibility = 'visible';
 document.getElementById('songtitle').style.visibility = 'visible';
 document.getElementById('songinfo').style.visibility = 'visible';
 document.getElementById('duration').style.visibility = 'visible';
}

The code above shows the Music app function, which runs when the user selects the Music app from the home screen. The first block of code simply hides all of the elements that were on the home screen by setting the ‘style.visibility’ property to ‘hidden’. Visibility is actually a CSS property, but it can be altered from JavaScript in this way. As mentioned earlier, the IDs (for example ‘home-3’ and ‘home-3-rollover’) are defined in the HTML. The second block of code hides the elements for the Option Bar, and the third block shows the elements required for the Music app, such as the album art, song title, song information and the track duration (shown on a slider).

The prototype is only as developed as it needs to be to complete the task: the Music app works well enough, but other functions such as the phone, satellite navigation, camera app and the volume up and down keys do nothing in the prototype (though they would if the system were implemented in a real car). To avoid confusing or frustrating the user if they attempt to use one of these features during the test, I wrote a JavaScript function that runs when a feature does not exist. It simply displays a modal window in the browser stating that the feature isn’t available, to inform the user that what they’re trying to do won’t work or help them complete the test.

//Display a modal informing the user that the feature is not available if they attempt to access a feature that has not been coded, e.g. phone buttons
function featureNotAvailable() {
 window.alert("This feature is not available in this view yet.");
}

Initially, each button press would open a new HTML file (for example, the Home Screen would be one HTML file and pressing the Music icon would open another HTML file for the Music app). This method proved simple to produce but a little messy, and the page refreshing did not make for a pleasant user experience. It didn’t make for a very accurate interface either, since the refreshing was in itself a distraction before the driving video behind the wheel and instrument panel had even been considered. Eventually I rewrote the simulator to run on a single webpage (using just one HTML file) to avoid page refreshing; instead of opening a new HTML file each time a button was pressed, a new function was called instead. As well as providing a better user experience, this made managing the code easier, because I ended up with only one CSS file and one JavaScript file, whereas with multiple HTML pages I had had individual CSS and JavaScript files for each page too.
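The single-page approach described above could be taken a step further by listing each screen’s element IDs in one lookup table and swapping between them with a single function, rather than writing one hide/show statement per element per screen. This is only a sketch of that idea – the element IDs are illustrative, and this is not the code the prototype actually uses:

```javascript
// Sketch only: a declarative alternative to one hide/show line per
// element. Each screen lists the element IDs it needs; the IDs below are
// illustrative, not the prototype's real markup.
const screens = {
  home:  ['home-1', 'home-2', 'home-3'],
  music: ['titlebar', 'albumart', 'songtitle', 'duration'],
};

function showScreen(name) {
  // Hide every element belonging to any registered screen...
  for (const ids of Object.values(screens)) {
    for (const id of ids) {
      document.getElementById(id).style.visibility = 'hidden';
    }
  }
  // ...then reveal only the active screen's elements.
  for (const id of screens[name]) {
    document.getElementById(id).style.visibility = 'visible';
  }
}
```

Adding a new screen then becomes a one-line change to the lookup table rather than a new block of repetitive statements.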

The video was recorded from my Ausdom dashcam which is mounted to the windscreen of my car. I went out specifically to record footage for my simulator at different times of the day and on different types of roads to record a variety of driving environments – here are the scenarios that I recorded:

  • City driving in the day
  • City driving at night
  • Driving in a multi-storey car park in the day
  • Driving in a multi-storey car park at night
  • Dual-carriageway driving in the day
  • Dual-carriageway driving at night
  • Completing a junction on a dual-carriageway at night
  • Country road driving in the day
  • Country road driving at night

The footage was collated in Adobe Premiere Pro and exported as an MP4 video. This video then simply plays on loop in a div behind the wheel and instrument panel on the website. The idea of recording lots of different situations in the day and at night was to see where users would look when the driving situation changed. I wondered whether, at night for example when visibility was worse, or when the video showed a tight multi-storey car park being navigated, the user would pay more attention to the road if they were using the simulator as if they were actually driving.
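For context, a looping background video like this can be achieved with a muted, autoplaying video element inside a div; a minimal sketch (the file name and class name here are my own assumptions, not the prototype’s actual markup):

```html
<!-- Sketch only: looping driving footage behind the interface.
     "driving-footage.mp4" and the class name are illustrative. -->
<div class="driving-background">
  <video src="driving-footage.mp4" autoplay loop muted playsinline></video>
</div>
```

The div would then be positioned behind the wheel and instrument panel with CSS, and simulating being stationary is just a matter of hiding it.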

The Ausdom dashcam mounted to my windscreen that I used to record the driving video.

The completed prototype

The completed prototype is a website showing the wheel, instrument panel, dashboard and a driving video that can be toggled on or off behind all of this. The user clicks the buttons to navigate the infotainment system interface. The prototype’s functionality is very limited – it only allows the task outlined earlier to be completed, meaning that only the Music app works, and all you can do in the Music app is change the source of the music to the ‘Kids music’ folder on the USB drive and skip tracks. There is no other functionality.

View the completed prototype here.

Scrolling is disabled on the webpage and the site is optimised for 1080p desktop displays because this is the resolution of the display in the eye-tracking lab. Below are some screenshots of the final prototype.
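Disabling scrolling and fixing the layout for a 1080p display can be done with a few CSS rules. This is a sketch assuming the lab’s 1920×1080 resolution; the exact rules in the prototype may differ:

```css
/* Lock the page to a single fixed 1080p view: no scrollbars, and the
   layout sized to the eye-tracking lab's display. */
html, body {
  margin: 0;
  width: 1920px;
  height: 1080px;
  overflow: hidden; /* disables scrolling */
}
```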

The prototype showing the Home Screen on the interface on a night-time view of navigating a junction on a dual-carriageway.

 

The prototype showing the Select Source option on the daytime country road driving view.

 

The prototype showing the Select Folder option on the nighttime view of country driving.

 

The prototype showing the Music app playing a Rihanna song whilst an ambulance goes past in the nighttime city driving view.

 

The prototype with the driving video turned off to simulate being stationary.

You can see that the prototype bears a strong resemblance to the wireframe I produced for the previous project.

The wireframe wheel and panel closely match the wheel and panel seen in the Stellardrive prototype. The prototype version features icons, colours and textures, whereas the wireframe is of course all flat colours in a mix of whites, greys and blacks.

Going back even further, it is possible to see how the initial sketch of the instrument panel and wheel resemble the final prototype.

Comparison between the sketch, wireframe mockup and prototype versions of the instrumental panel and steering wheel.

Testing my prototype

Testing was undertaken by five fellow students, all with different levels of driving experience; however, none of them are particularly regular drivers or have driven cars with infotainment systems and steering wheel controls.

Data collection

Data was collected from the Tobii Eye-Tracking system at university. I collected Gaze and Heat Map data which is useful for the following reasons:

  • The heatmap gives a good visual idea of how long a user spent looking at each area of the screen, with green representing a short time and warmer colours like orange and red showing longer periods. The heatmap shows dots or ‘splodges’ where the user looked, with no lines connecting them. This makes it easier to see where the user was looking without lines everywhere (helpful in an application where the user is constantly scanning across the screen, such as a game), but it doesn’t tell you the order in which the user looked at the areas of the screen.
  • The gaze map shows the order in which the user looked at the various areas of the screen by joining circles up and numbering each circle. It also shows how long the user looked at each area, indicated by the size of the circle: the bigger the circle, the longer the user was looking at that particular area. This is useful, for example, for seeing how long a user spent searching for a button or another asset.

I also collected the real-time test data, which produces a map a little like a gaze map, except that it doesn’t join the circles up and doesn’t preserve the whole map either. It does, however, record audio from the testing and a webcam view of the user, so that you can see their body language whilst testing. Body language and tone of voice during testing can provide valuable feedback that the eye-tracking maps cannot: for example, they can show confusion, frustration, or the excitement and sense of achievement a user may feel when they have completed the test, especially if the test was difficult or required a level of motivation to complete.

I feel that these three different pieces of data are all essential to help test the usability, interface and safety of my prototype.

Establishing a ‘control’

In an experiment, a ‘control’ is a test conducted without changing any variables; in the case of my experiment it’s like a ‘perfect test’ – what I would expect the users’ data to look like if they had perfect knowledge of the system and knew exactly how to use it. Of course, the test users had little or no prior knowledge of my system at all, which means that as the designer I was the only person able to complete a perfect test and get perfect data (or what I felt was perfect data). I knew exactly how the system worked, which buttons did what and how the simulator was meant to be used. I completed the tasks I set for my test users and recorded my live test data, gaze and heat maps with the driving video on and off. As I completed the task I was very careful to use the simulator as I had intended: looking at the road periodically as if I were driving.

Below is my live test footage.

Below are my gaze and heat maps from the testing.

You can export frames from the Tobii Eye Tracker; below are exports of my gaze and heat maps with the video on and off. The video above is a good way of showing how I reacted over time, but the images below give a quick snapshot of the whole test.

The gaze map from my test without video shows that I was looking mostly at the wheel during the test, as expected; however, I wasn’t looking at it for long (going by the small circles) because I knew the system, where the buttons were and what they did.
The heat map from my test without video likewise shows that I was looking mostly at the wheel during the test, as expected.
The gaze map from my test with the video shows that I was looking at both the wheel and the road. I was looking mainly at the road, as if I were driving, and the numbers on the map show that I was repeatedly glancing between the road and the wheel – if I weren’t, the numbers on the circles in the two areas wouldn’t be so interleaved. Circles of varying sizes appear over both the wheel and the road, which suggests I was looking at each for roughly equal amounts of time.
The heatmap from my test with the video shows that I was looking at both the wheel and the road, mainly directly at the road as if I were driving. The colours on the heatmap are mainly green, suggesting I was only glancing at each area of the screen rather than dwelling on it.

As can be seen, my gaze and heat maps show a good balance between looking at the road and looking at the instrument panel and the wheel to activate the controls.

Data from all users

The data from each of the five users was collated and displayed on a single image of the wheel and dashboard. Again, gaze and heat maps are very useful for analysing exactly where users were looking and how long they looked at certain areas of the screen. Below is a video showing the gaze and heat maps for all users testing my prototype with the driving video turned off.

The video shows that the users were looking mainly at the wheel and instrument panel, which was to be expected and is also very similar to my own test, which acts as the control. Below is a still of the gaze map showing all of the gazes made by users during the testing.

A lot of time was spent looking at the instrument panel and around the D-Pad buttons, which was promising to see since these are the core areas of the interface. It was good to see that on the whole users were not focusing much on the right-side buttons, which don’t play a huge role in the prototyping task – apart from the Option button, around which I would have expected to see more gaze circles given that it needs to be pressed a couple of times to complete the task. Perhaps this indicates that the positioning of the Option button was not as logical to users as it was to me as the designer.

The gaze circles on the white area above the dashboard all have low numbers in them, so these were the first places users looked. There is a bit of a flaw in the test here: the button that turns the video off is in the top left corner of the prototype, and it was pressed immediately to hide the video in this test. The first gaze circles and heat map areas therefore show users looking in the top left corner, so it is best to ignore these on the non-driving tests; the clear focus of attention in these tests is on the instrument panel and the wheel.

The heatmap is perhaps a little easier to read for this test than the gaze map: the green splodges on the driving video area are very small and barely noticeable, which is fitting given that no video was playing in this test. The heatmap clearly shows where the attention was – on the instrument panel UI just above the Music icon, which the gaze map above also shows through the size of its circles.

The video below shows the same again but with the driving video turned on – it’s a shame that the Tobii software couldn’t export the video playing in the background.

And below is a still showing the gaze map at the end of the testing.

From the image above it could be assumed that users were looking at both the road and the wheel and instrument panel, given that both areas have a considerable amount of attention on them (even though the left side of the wheel and the instrument panel have more). Watching the video above this image, however, it becomes clear that on the whole the testers started off looking at the road a little and then turned their attention to the wheel and instrument panel – and, crucially, it mainly stayed there, with very few glances back at the road. The image shows the order of the gaze circles: most of the circles in the road area have quite low numbers, typically between 1 and 100, whereas most of the circles over the wheel and instrument panel have higher numbers, usually between 100 and 200. From this it can be inferred that the driving video was not paid much attention. As mentioned earlier, the size of a circle indicates how long the user spent looking at that area, and all of the biggest circles on this gaze map are over the wheel and instrument panel – clearly this is where the users’ attention was.

The heatmap clearly shows that the area that got the most attention in the driving test was the instrument panel, specifically over the Music icon on the instrument panel display, just as in the test with no video. There was also a little heat over the Option button and some around the D-Pad area, but rather worryingly not much over the area where the driving video would have been playing. This backs up the gaze map data, which also shows that users didn’t spend much time looking at the driving video.

Naomi Winter tests my prototype

Naomi Winter is a peer on my course with the most driving experience of everyone who tested my system: she drives every now and then and has even driven abroad, but she hasn’t driven any cars with infotainment systems or steering wheel controls. Below is a video of the gaze and heat maps from Naomi’s tests with and without the video on.

As a driver, Naomi understood the purpose of the driving video and she told me afterwards that she deliberately made an effort to put her focus on the instrument panel, the wheel and the road. Her gaze and heat maps show this and how she was repeatedly changing her focus between the wheel and the road as if she were completing the task whilst driving. This showed me that Naomi was able to complete the task whilst having the distraction of driving to contend with too.

Examining the still of Naomi’s gaze map it’s easy to see that she was looking back and forth between the road and the wheel. There is a good mix of numbers on each section with some low numbers and some higher ones on each area. Naomi’s heatmap in the video above shows that she wasn’t looking at any particular area for very long but she did focus mainly on the instrument panel and the road more than other aspects of the prototype.

Below is her live testing video which shows her reactions during the test. Naomi kept calm during the test but she did mention after the test in the interview I did with her that she felt I guided her too much during the test.

Celestin Jacobs tests my prototype

Celestin is currently learning to drive, so he drives fairly regularly, usually in busy towns on test routes. Interestingly, despite being a (learner) driver, Celestin’s attention remained mainly on the steering wheel and the instrument panel, with only very occasional glances at the road. This is evident in his gaze and heat map video below.

Celestin’s gaze map shows a very distinctive triangular pattern on the wheel – he was looking for buttons on the left and right sides of the wheel and expected to see the outcome on the instrument panel, which is exactly what I’d expect. However, the clear lack of focus on the road leads me to believe that, going by this user’s test, either the system is not easy to use whilst driving or the user was more concerned with completing the task than with actually focusing on the road, even when the video changed.

Ameer Al Ashhab tests my prototype

Ameer hasn’t driven since he has been in the UK, but he drove before in his home country, Jordan. Going by Ameer’s gaze and heat maps, I felt it was possible that he found the interface difficult to work with; whether he found it hard to navigate through the menus on the instrument panel app to complete the task or hard to use the simulator itself is unknown to me. I did notice that he tried to click on the icons with the mouse to open the apps, as the instrument panel would be a touchscreen if it were installed in a real car, and he tried clicking on the indicators to navigate forwards and backwards. I feel that Ameer is a very logical thinker, especially when it comes to design, so these actions didn’t really surprise me, but it was interesting that he attempted to use the indicators to navigate: on most phone apps you move back and forth between pages by pressing an icon in the upper right or upper left corner, which does look a little like the indicators on my prototype. Ameer was focusing a lot more on the wheel and panel than the road; I feel that he was trying to figure out how to complete the task rather than using the prototype as a driving simulator.

The gaze map still from Ameer’s test further proves this point: it’s obvious that his attention was far more focused on the wheel and instrument panel than on the road.

Robin Wragg and Will Sparkes test my prototype

Neither of these two has much driving experience – Robin hasn’t driven for several years and Will doesn’t drive. The data from their tests is similar to that of the other three testers: the main focus was on the wheel rather than the road, possibly because these testers don’t drive much, or at all, so they were more concerned with completing the task than with using the prototype as a driving simulator.

Below are Robin’s gaze and heat maps.

Below are Will’s gaze and heat maps.

What my data shows

The data definitely shows some interesting trends, the biggest being that on the whole my testers were much more focused on completing the task than on trying to use the prototype as an actual driving simulator. Apart from Naomi, most of my testers looked at the road a little at the beginning of the test and then spent the rest of their time focusing on the wheel and instrument panel. This could suggest one or more of several things:

  • The interface is not easy to use whilst driving and there are possibly too many button presses or steps required to do things which means that your focus is taken off the road whilst you try to use the system (dangerous).
  • The interface is perhaps not the most logical to use or does not come across as being logical to everybody – if it was then more attention may have been paid to the road.
  • My testers didn’t fully understand the idea of the prototype, perhaps they were unaware that they were meant to try and use the prototype whilst also occasionally glancing at the driving video to simulate driving.
  • There was little motivation or reason to really pay attention to the road – after all, if you don’t look at it you don’t crash.
  • Following on from the previous point, the prototype is not terribly realistic for this reason – perhaps the users didn’t really feel the need to pay attention to the driving video.

Personally, I feel that the last three points are the most likely, given that the data I collected when the driving video was off suggested the interface was logical to use. The average time taken to complete the task with the video off was approximately 28 seconds, which suggests the system is fairly intuitive and easy to learn. The gaze and heat maps from the tests with no video also show that users were looking in all the right places to find the buttons and the information on the instrument panel. I’d therefore say the reasons most of my testers didn’t pay much attention to the road were that they didn’t understand the prototype properly, most of them don’t drive regularly anyway, and there were no consequences for not looking at the driving video.

User feedback

After each test I spent between 5 and 10 minutes interviewing the tester, asking the following questions to get a better idea of how they felt the test went, what they liked and disliked about my prototype, and what they’d suggest I do to improve my infotainment system and my testing strategy.

  • Did you find the interface intuitive? Were the icons on the screen in a logical position, were they easy to understand, were they relevant to the tasks?
  • Was the interface large enough to see properly with the wheel in the way and were you still able to see the other assets on the instrument panel such as the speedometer and rev counter?
  • Were the buttons on the wheel labelled logically and were they where you’d expect them to be?
  • Were there any aspects of the test that you felt confused by? Were you ever unsure which button you’d have to press in order to complete a task?
  • Was it easier to complete the task with or without the driving video?
  • Do you feel that you would be able to complete the task whilst driving? Would it take you more or less time to complete?
  • Given that the other apps would be of a similar design, do you think they’d be easy or difficult to use?
  • If you could give a ‘like’, ‘could be better/improved’ and an ‘I wish’ about the navigation aspect of my prototype, what would you say?
  • Same again, but about any aspect of the prototype experience.

These questions were designed to cover the broad range of things I wanted the users to tell me about. The eye-tracking data shows me where a user was looking during the test, and from that I can gauge whether my prototype was a success and whether users were confused by the experience, but it can’t tell me whether a user found the icons too small, or whether they believe that the design used for the Music app would work for another app, such as the Satellite Navigation. This is why interviewing after the testing was important.

During the interviews I found that quite often the tester would answer several questions in one answer (unintentionally), so I didn’t ask all of the questions but all of the answers were usually covered.

Naomi Winter’s feedback

Naomi didn’t wish to be filmed so instead I simply did a voice interview with her which you can listen to below.

Naomi’s answers to the questions are below.

Q: Did you find the interface intuitive? Were the icons on the screen in a logical position, were they easy to understand, were they relevant to the tasks?

A: ‘It was all labelled correctly, all the icons were good, but I thought the ‘O’ was a bit weird – I feel like that could… unfortunately I have no suggestions as to what it could be but maybe some sort of symbol – an option symbol rather than just an ‘O’ because it just looks like a ring or something that’s not clear. It looks kind of awkward, just like a ring or a circle. That’s the only thing I really picked up on with like the controls. I thought the D-Pad was good but you could maybe add arrow symbols – up, down, left, right which would make it easier because when you’re driving it’s easier to look at something and have a visual prompt rather than think ‘this is left.’ I know it sounds ridiculous, but just minimising the thought process whilst using it. But for the sake of a test I thought it was brilliant because it really does show that icons are useful. Logically, no, everything was in the right place.’

Q:  Was the interface large enough to see properly with the wheel in the way and were you still able to see the other assets on the instrument panel such as the speedometer and rev counter?

A: ‘Not really, no. I thought it was really small. It’s harder to simulate though – I don’t know why you did it on the right hand side?’ At this point I explained to Naomi that the driving footage in the simulator was taken from a right-hand drive car driving on the left side of the road. She said, ‘I thought it would be in the middle because when you drive, you’re sitting… you don’t sit in the middle of the car and have the wheel on the right.’ She was referring to sitting in front of the computer monitor during the test yet the wheel being on the right side of the screen. She continued with, ‘I thought the layout was weird because it was on the right-hand side of the screen and I feel it would have made more sense to just have it in the middle and then… because the footage was still from the perspective of the right-hand side of the car, you still get that feel but when you’re driving you have the steering wheel in the middle still, but obviously it’s hard to make the steering wheel life-size.’

I asked Naomi if it would make the prototype better if I put the steering wheel in the middle of the screen and made it larger and she said, ‘Yes, I think so, maybe. It didn’t even fill up half of the screen, can’t quite remember now, but it didn’t feel like it was really that obvious in the prototype anyway but it’s hard because in real-life it would be completely different. But for the sake of a prototype I see where you’re going with it, but I feel like yeah it could’ve been in the middle of the screen.’

I asked her whether she found things like the speedometer and rev counter difficult to see based on the feedback she had just given me and she said yes.

Q: Were the buttons on the wheel labelled logically and were they where you’d expect them to be?

Naomi had already mentioned that the ‘O’ labelling for the Option button wasn’t logical to her, so her answer was directed more at the second part of the question.

A: ‘I assume so, I don’t have a car with buttons so I’m not really sure. I’ve only ever driven a car once with buttons on the wheel and I didn’t use them because I’m not used to that, so for me – I assume so? I’m not sure, like, when I was using it I felt like I was looking in the right place, so intuitively yes because I was brought to… something in me said ‘oh look go there’ and I did, but apart from that, yeah, I couldn’t have anything to compare it to.’

Q: Were there any aspects of the test that you felt confused by? Were you ever unsure which button you’d have to press in order to complete a task?

A: ‘No, everything was fine, but I feel like – with the test itself I knew what to do the second time so I was trying to pretend like I didn’t and also looking at the road and I was like ‘oh but I already know what I’m doing’ so it’s… kind of hard because I knew what I knew what I was doing and I’m now I’m trying to pretend like I don’t. I was trying to pretend like I was drunk and it was kind of… awkward.’

Q:  Do you feel that you would be able to complete the task whilst driving? Would it take you more or less time to complete?

A: ‘Yeah.’

I then asked Naomi if there were too many button presses, explaining that I had tried to keep them to an absolute minimum when designing the system. She said there weren’t too many button presses and added, ‘from that angle I definitely think it was effective because there weren’t any fancy jazzy buttons for – I don’t know – air con or anything on the steering wheel, it was literally just the essentials and those buttons can be adapted for other functions not just for music but for like other things that you’d want to do that would require up, down, left, right, option, home… like that kind of thing. So, from that perspective I think you did really well to choose which buttons to include because that sounds like it would be a bit of a challenge because there are quite a lot of things you could do from the steering wheel, but… no, I think you made the right choice, definitely.’

Q: Given that the other apps would be of a similar design, do you think they’d be easy or difficult to use?

A: ‘Yeah I don’t see a reason why not because, I mean each button would be programmed to do a different thing so yeah. Say if you were on the radio the up and down might change the station. Things like that – it could be easily adapted to each one. From the top of my head I can’t really think of any flaws right now.’

Q: If you could give a ‘like’, ‘could be better/improved’ and an ‘I wish’ about the navigation aspect of my prototype, what would you say?

A: ‘I like the buttons. I feel like I’ve said that a lot now, but it is definitely probably my favourite bit because it was so easy to use and it was like second nature to me which is exactly what you need – not just for driving but for UX in general I think. I wish that just for testing purposes that it was a little bit bigger and sort of more… ‘in your face’ a little bit, I guess, for lack of a better phrase.’

Q: Same again, but about any aspect of the prototype experience.

A: ‘What I said earlier about having the same task twice but with the video on and off – if you could change that just for the purpose of the test that’d be good because you’d be testing different things but that’s kind of a given and I know you’ve already thought of that but just to reiterate it – but you only had a week to make it so it’s fine – it’s fine, don’t sweat it – literally. Also another thing is I feel like I was prompted too much, I feel like you told me too much what to do at the beginning of the test and I feel like I knew what to do because you said ‘right so where would the option button be?’ and I said ‘oh obviously I’ve got to go to the option button’ – do you know what I mean? Way too much, way too much prompting. I knew exactly what to do because you were saying ‘right where would we find the music?’ and I said ‘oh, right, well I should find the music then’ – do you know what I mean? I feel like I was led towards the goal, I wasn’t just left to do it. It was so obvious to me where to look and what to do because I was coerced in that way. You didn’t tell me what to do, but you pushed me in the right direction. Just something to bear in mind, I think for testing.’ 

Celestin Jacobs’ feedback

Celestin’s answers to the questions are below.

Q: Did you find the interface intuitive? Were the icons on the screen in a logical position, were they easy to understand, were they relevant to the tasks?

A: ‘The only one I didn’t get was the settings button because there was a circle and normally you’d identify it with like a cog. I didn’t figure out straight away that you had to use the D-Pad to interact with the interface because I thought the buttons linked to it but other than that it was fine. Once I’d figured out I had to use the D-Pad it was easy.’

After this, I asked if the D-Pad was a logical choice for an input and Celestin replied: ‘It’s not that it’s not logical, it’s just that I don’t really understand why it’s there, because both of your button sets have the same objective. The objective of your D-Pad is to guide you through the interface, but the objective of your other buttons are also to guide you through the interface. I felt that you could have done one or the other.’ When asked what he would’ve expected to see instead of a D-Pad, he explained that he would have expected a circular button with different pressure and click points to navigate left, right, up and down, with a press in the middle to select. Interestingly, this is roughly what I was aiming to achieve with the D-Pad. Celestin also felt that the hotkey buttons on the right side of the wheel could guide you through the menus, with the D-Pad only working once you’re inside an application.

Q:  Was the interface large enough to see properly with the wheel in the way and were you still able to see the other assets on the instrument panel such as the speedometer and rev counter?

A: ‘Yes, they were all fine. Once I knew that the interface was there [behind the wheel] it was fine because at first I was kind of looking at the road so I didn’t notice it but once I noticed it I was like ‘ahhh’.’

Q:  Do you feel that you would be able to complete the task whilst driving? Would it take you more or less time to complete?

A: ‘I could see a more skilled driver being able to choose his moments to do it but for people who aren’t so good at driving I feel that audio prompts would help or maybe like when you’re playing games and you go between menus and you get a vibration feedback? Maybe something like that so you don’t constantly have to look at it. But other than that, yes, good.’

When asked if Celestin felt if there were too many button presses he said, ‘Not really, there was only like three. I don’t see how you could have any less than that.’ 

Q: Were there any aspects of the test that you felt confused by? Were you ever unsure which button you’d have to press in order to complete a task?

A: ‘The only bit at first that confused me was when you press the settings button and then it didn’t work – it said ‘Feature not available’ – I didn’t know if that was a bug.’  I explained that the idea was that you had to go into an app (e.g. Music) first and then press the settings button to access that app’s specific settings. Celestin said that this made sense now, but at first he had thought you had to go into settings and then select USB. He said he had never used a driving interface before, so he felt it was more his lack of experience with these systems than anything else. I recognised that this was a good observation, since other people may expect the same thing.

Q: Given that the other apps would be of a similar design, do you think they’d be easy or difficult to use?

A: ‘I feel that they’d be successful designs. I think the only two things you need to add would be things like audio interaction and maybe like when you use the Sat Nav it brings it up on the windscreen – maybe show the map in the right corner, like a head-up display. I just think that would be easier to see. It’s a similar concept to playing a game, it’s very easy to just glance at the map in the top right of your screen than behind the wheel.’ 

Q: If you could give a ‘like’, ‘could be better/improved’ and an ‘I wish’ about the navigation aspect of my prototype, what would you say?

A: Celestin liked the menu design and the icons to go with the different apps in the system, he said, ‘it means that you don’t have to really read each one, you can just quickly glance at the music symbol [for example] which I liked. I disliked that the buttons confused me at first. I think you could add audio interaction and probably the head-up display that we mentioned earlier and perhaps reconsider how the D-Pad works. But other than that you’re fine.’

Q: Same again, but about any aspect of the prototype experience.

A: ‘Perhaps at the beginning you could have something that points out things like – ‘this is the wheel’ and ‘this is the menu’ and it’s like ‘ready, steady, go’ because the first thing I was looking at was the world [driving video] so I didn’t even really notice that the interface was there but if you were in a car I’m pretty sure that you’d know that the interface was there beforehand, so you could show the tester your interface with an arrow. Other than that, not really, that’s all I can think of.’

Ameer Ashhab’s feedback

Q: Did you find the interface intuitive? Were the icons on the screen in a logical position, were they easy to understand, were they relevant to the tasks?

A: ‘The labels were OK, I only found myself to be struggling to find the source of the music. I felt that the ‘O’ symbol for changing the source of the music was vague, there was no label or anything. I thought that there was some misguidance going on there.’

When I asked Ameer what he’d expect to see instead of an ‘O’ icon he said it’d make more sense if the button said ‘Source’ or something similar.

Q: Was the interface large enough to see properly with the wheel in the way and were you still able to see the other assets on the instrument panel such as the speedometer and rev counter?

A: ‘Yes’. 

Q: Were the buttons on the wheel labelled logically and were they where you’d expect them to be?

A: ‘I felt that for the wheel part where you move around the navigation I felt that you should probably put an arrow on the left-hand side of the wheel – the part where you move around. This is to indicate that you can move around because I didn’t know at first impressions.’ 

Q: Were there any aspects of the test that you felt confused by? Were you ever unsure which button you’d have to press in order to complete a task?

A: ‘No not really.’ (I had mentioned ‘other than the button to change the source’).

Q: Was it easier to complete the task with or without the driving video?

A: ‘It didn’t make a difference for me’.

Q: Do you feel that you would be able to complete the task whilst driving? Would it take you more or less time to complete?

A: ‘If the road was clear and there were no cars around me I would definitely attempt to change the music but obviously if there were other cars around me that’d be a life risk.’ 

When asked if there were too many button presses Ameer said, ‘I thought the amount of clicking and button presses you showed me was logical and they’re all needed to make the experience work, so I didn’t think there were too many.’

Q: Given that the other apps would be of a similar design, do you think they’d be easy or difficult to use?

A: ‘Yes, but the navigation [Sat Nav] would need its own screen on the side.’

Q: If you could give a ‘like’, ‘could be better/improved’ and an ‘I wish’ about the navigation aspect of my prototype, what would you say?

A: ‘Visually speaking about what I’ve seen and been seeing on your wheel and dashboard it’s good you made it blue colour for the background and the icons are white and for the hovering red because people are more able to differentiate when you use the right colours and blue and white is definitely a good match. The hovering red – it stands out, so I would say there’s nothing wrong with that. I think everything else is fine.’ 

Robin Wragg’s feedback

Q: Did you find the interface intuitive? Were the icons on the screen in a logical position, were they easy to understand, were they relevant to the tasks?

A: ‘Yes, I forgot it wasn’t a touchscreen at first so I tried to click on Music. I think it was fine. I’m not sure how you’d redesign it, but the Option button being an ‘O’, perhaps there could be some kind of icon that could present what that does a little bit better and then the select buttons on the D-Pad, I’m thinking of TV remote controls and stuff, sometimes they have some kind of symbol there that suggests this, I think it often says ‘select’ actually. I think if there was something there I’d find it more intuitive then. As far as the buttons go, that’s all – simple and easy.’

Q: Were the buttons on the wheel labelled logically and were they where you’d expect them to be?

A: ‘Yes, I think I could imagine I could hit them with my thumbs whilst keeping my hands on the steering wheel.’

Q: Was the interface large enough to see properly with the wheel in the way and were you still able to see the other assets on the instrument panel such as the speedometer and rev counter?

A: ‘Yeah they were really easy to see, I think they’re a good size – not too big.’

Q: Were there any aspects of the test that you felt confused by? Were you ever unsure which button you’d have to press in order to complete a task?

A: ‘There was just a very small thing where after I’d selected the ‘kids’ folder and it was still on that Green Day track, I thought it would change. That was all really. For a second I thought it had not changed to the ‘kids’ folder. I think everything else was straightforward.’ 

Q:  Do you feel that you would be able to complete the task whilst driving? Would it take you more or less time to complete?

A: ‘Yes I think it was a very simple, no-nonsense kind of thing. I don’t personally see how you’d improve it as it is right now – very good.’

When asked if there was anything Robin wished he could do but couldn’t do with the prototype, he answered, ‘I might be mis-remembering, but the number of seconds into the track. I’m not sure if I saw it or if it was very small. I’d like to have that information quite large actually, you want to see it as you glance. I can’t think of anything else.’ 

Q: Given that the other apps would be of a similar design, do you think they’d be easy or difficult to use?

A: ‘Yes, I think so, I haven’t really used Sat Navs myself in cars much so I don’t know how their systems tend to work, like integrated into the actual dashboard sort of thing, so I’m not sure how easy it would be to select the location you wanted to go to – I don’t really have that experience so I can’t really comment on that. But generally, I think the kind of D-Pad and select thing is adequate and it’s easy to understand. Yeah, I think that would suit a lot of applications.’ 

When asked if he had any further suggestions for improvement, Robin said it’d be good if the instrument panel UI was a touchscreen, but then immediately afterwards said that it wouldn’t work as a touchscreen for practicality reasons. I explained that there would be a larger, second screen in the centre console which would be a touchscreen and he said that sounded good.

Will Sparkes’ feedback

Q: Did you find the interface intuitive? Were the icons on the screen in a logical position, were they easy to understand, were they relevant to the tasks?

A: ‘Yes.’

Q: Was the interface large enough to see properly with the wheel in the way and were you still able to see the other assets on the instrument panel such as the speedometer and rev counter?

A: ‘It could have been a little bit bigger. I would imagine, because I’m very short-sighted, I would imagine if that was an actual car it would be a lot smaller and I would have trouble seeing it without my glasses.’

Q: Were the buttons on the wheel labelled logically and were they where you’d expect them to be?

A: ‘Yes, I did, but I had trouble grasping it at first because I didn’t really understand the interface properly but after I realised the screen was behind the wheel, I got it.’

When asked if it was the interface or the buttons on the wheel that were confusing him at first, Will replied with ‘just the interface.’

Q: Were there any aspects of the test that you felt confused by? Were you ever unsure which button you’d have to press in order to complete a task?

A: ‘No.’ 

Q: Was it easier to complete the task with or without the driving video?

A: ‘Neutral’ [same with both].

Q: Do you feel that you would be able to complete the task whilst driving? Would it take you more or less time to complete?

A: ‘I think if I were to be using the interface more often I’d probably just know it off the back of my hand.’

Q: Given that the other apps would be of a similar design, do you think they’d be easy or difficult to use?

A: ‘Yes.’

When asked if he liked the design and felt that it was a positive user experience, Will also replied with yes.

Q: If you could give a ‘like’, ‘could be better/improved’ and an ‘I wish’ about the navigation aspect of my prototype, what would you say?

A: ‘I like the intuitiveness of the user interface. I think that the centre button on the D-Pad should be a little smaller and the directional buttons should be bigger. I don’t think it’s an issue on the actual demo, but I think if you used this in an actual car you’d want the directional arrows to be bigger so that you could feel them whilst you’re driving without having to look at them.’

When asked what he would wish for, he said that when you press the Option button it’d be good for it to tell you that the feature was not available on the actual interface itself rather than in a window. He was referring to the modal alert in the prototype that informs the user that a feature is not available when they attempt to access something not included in the prototype. In the actual system these kinds of messages would be displayed on the interface.

Will had no further comments about the prototyping experience.

What my user feedback tells me

Interestingly, most of the users said the same or very similar things. Below is a summarised list of what my users collectively liked:

  • They felt that the design of the Music app (and Home Screen, to a degree), could work well for other apps too like the Sat Nav and Radio and the buttons on the wheel would also work with other apps.
  • They felt that the buttons on the wheel were in a logical position and on the whole labelled logically which helps to make them more confident about using the system whilst driving.
  • They collectively liked the design of the buttons and button rollovers on the Home Screen. A lot of them said they were clear to see and the icons helped because it reduced the need to actually read the text beneath the icons whilst driving – you could just glance down at the screen to see what app you were about to open.
  • On the whole, they felt that they’d be able to complete the task whilst driving which indicates that they feel that the interface is easy enough to use and the button placement is good enough to use whilst driving.
  • On the whole, they felt that the design of the prototype itself was a success although there were some suggestions for improvement.
  • Nobody complained that there were too many button presses to achieve the task, so this suggests that the app is simple and quick to use whilst driving.

Here are some things that they collectively disliked:

  • Nearly all of them felt that ‘O’ was not a logical icon choice for the option button on the steering wheel. Many suggested replacing it with a cog or similar icon to indicate ‘settings’ instead.
  • Nearly all of them felt that the D-Pad buttons needed to be labelled with arrows, with something in the middle to signify ‘OK’, ‘enter’ or ‘select’ for the centre button.
  • Several of them weren’t sure at first whether to look at the road or the UI on the instrument panel – most figured out that pressing the buttons on the wheel did nothing to the road but changed the UI on the instrument panel, so they spent the test focusing on that rather than the road. Basically, it wasn’t obvious where to look from the get-go.

Some points that individual users made that they thought could be improved:

  • There was too much prompting during the test; some felt that they weren’t left alone enough to figure out the interface for themselves. In retrospect, it might have been better if I hadn’t been there at all.
  • Some felt that doing the same test both with and without the video was a bit pointless, since the tester would already know what to do by the second run, which may skew the results unless they tried to ‘play dumb’. I see why this could be the case, but I wanted to see if the same task could be repeated with the distraction of driving, and it would also show how easy the interface was to memorise. It gave me something to compare the tests against, though I accept that one set of results may show completely different data from the other.
  • Some expected the music to change after the ‘Kids’ folder had been selected as the music source.
  • Some felt that the D-Pad buttons were good but should be resized to make the directional buttons larger and the centre/select button smaller.
  • Some didn’t realise that the interface was controlled entirely by steering wheel buttons and so they initially tried to click on the icons on the instrument panel UI as if it were a touchscreen interface.
  • Some felt that additional inputs such as audio/voice input and interaction would be helpful to use the interface whilst driving.
  • Some didn’t fully understand that the D-Pad could be used on the Home Screen to select the apps, and that once in the desired app it could also be used for other functions, e.g. skipping tracks in the Music app.
  • Some felt that the buttons on the right side of the wheel could’ve fulfilled the function of the D-Pad button.

On the whole, feedback was positive, with many testers impressed by the prototype, but there were of course still many improvements that they suggested I make.

Flaws with my testing strategy and how to improve

There were some fairly obvious flaws with my testing that may have led to unrealistic results, or that didn’t really prove whether my interface was safe to use whilst driving.

Flaws with the testing method

It wasn’t a blind test

The test wasn’t blind at all – the testers all had some prior knowledge of the system. I justified this earlier by saying that users might read an instruction manual or search the internet for support before using a system like this, but I was testing how easy the interface was to learn, and by hinting too much about what each button does I sometimes gave too much away – to the point where one of my testers (Naomi) said there was far too much assistance during the test. For a truly accurate picture of how easy the system is to learn and how easy the interface is to use, I should have either given less assistance or made the test properly blind, i.e. the users had absolutely no prior knowledge of the system.

The testers didn’t all understand that this was a simulator

That, or they were too concerned about completing the task to take much notice of the road video. I should have made it very clear to each tester beforehand that the road video was meant to simulate driving, so they needed to glance occasionally at the dashboard but keep most of their attention on the road ahead. On the other hand, I wanted to measure how much the testers looked at the road compared to the dashboard to judge how safe the system was to use whilst driving, so I wonder whether their lack of focus on the road was because the system was difficult to use whilst driving, or because they didn’t know how to use the simulator. I’ll never really know the answer to this question.

Everybody should have done the test with the video off first 

I mixed the order up during my testing because I wanted to see if the interface could be learned whilst driving, but realistically people don’t try to learn these systems whilst driving – they’re more likely to work out how to do things whilst stationary. If everybody had tested the prototype with the video off first, that would have tested how easy the system is to learn on first impressions; doing the test with the video afterwards would then have helped determine whether they could remember how to change the music source and skip a track, and put that into practice whilst driving.

There was no time limit set

From a safety point of view, there should have been a time limit. If you’re looking at your dashboard for more than about 5 seconds whilst driving you are at a much higher risk of having an accident. I should have set a time limit of around 30 or 40 seconds to allow for testers to glance up and down at the road and dashboard to complete the task. Really, the task was so simple that it should not have taken the user more than 50 seconds to find out how to do it either – any longer than 50 seconds and it could be concluded that the interface was too difficult or illogical to use. On average it took my testers approximately 28 seconds to complete the task whilst stationary with some completing it in 20 seconds or less and two of them taking 45 seconds or thereabouts, so this gives a good indication that for most users 50 seconds would be adequate time.
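If I were to build this time limit into the JavaScript prototype, the pass/fail check itself would be simple. This is only an illustrative sketch – the function name and the 50-second threshold are my own assumptions, not part of the existing prototype code:

```javascript
// Illustrative sketch: decide whether a tester finished the task within
// the time limit. The 50-second default reflects the reasoning above.
function evaluateAttempt(elapsedSeconds, limitSeconds = 50) {
  const withinLimit = elapsedSeconds <= limitSeconds;
  return {
    withinLimit,
    verdict: withinLimit
      ? 'completed in time'
      : 'took too long - interface may be too difficult or illogical',
  };
}

// The average stationary completion time of ~28 seconds passes easily,
// while anything over 50 seconds fails.
console.log(evaluateAttempt(28).withinLimit); // true
console.log(evaluateAttempt(55).withinLimit); // false
```

In practice the prototype would record a timestamp when the test starts and call a check like this when the task is completed (or abandoned).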

Flaws with the prototype itself

It wasn’t terribly representative of the actual experience

In real life you’d have a physical steering wheel with tactile buttons which you’d press to navigate through the interface. My simulator showed the navigation and the buttons you’d press, but it didn’t show any button feedback, and several of my testers didn’t think to click the buttons on the wheel to navigate – instead they tried clicking on the icons in the instrument panel. It should perhaps have been made clearer to them before testing that clicking the buttons on the wheel is supposed to simulate pressing them with your fingers.

In addition to this, there were no driving controls included in the simulator

When you’re driving you have a lot more to think about than just what’s playing on the stereo: braking, acceleration, speed limits, red lights, gear changes and what other road users are doing (and more!). My simulator didn’t involve any of this – all the user had to worry about was the task at hand, and there was no need to react to what was happening on the road because no matter what you did you’d be safe. If the user had to take the actual act of driving into consideration – changing gears, accelerating, braking and reacting to the road – then far less time would have been spent looking at the infotainment system and far more on the road.

There was no consequence for not paying attention to the road

Following on from the previous point, the driving footage was a video of my own driving, so no matter what the user did they’d never crash, hit a pedestrian or do anything dangerous – it simply couldn’t happen in a video. When talking to the testers after the tests, a lot of them said that because they knew there would be no consequence for dangerous driving, they didn’t really bother too much with the road. Friends I spoke to about my work, who hadn’t even tested it, said the same thing. This could be why few of my users paid much attention to the road – subconsciously they felt it was either pointless or safe not to, so they just focused on completing the task.

For all of these reasons, it was a ‘half-simulator’ at best

It only really simulated the steering wheel buttons and the instrument panel interface – it didn’t really simulate the environment in which these would be used, i.e. on the road. Though user feedback seems to suggest that it simulated the design very well.

How I’d improve my prototype and testing

It’s all about your testers

I’d make all of the changes mentioned above – set a time limit of 50 seconds, make the test ‘blinder’ and make sure that everybody did the test without the video first – but I’d also change my testers. Most of my testers, being university students, were not regular drivers, had not driven for a long time, or don’t drive at all. I’d try to get a much more varied group of people, such as:

  • People who drive regularly but don’t use infotainment systems in their cars.
  • People who drive regularly but do use infotainment systems in their cars.
  • Newly-qualified drivers and drivers with less than 5 years’ experience.
  • Novice or nervous drivers of any age.
  • Drivers with at least 5 years’ experience.
  • Drivers with at least 10 years’ experience.
  • Older drivers who don’t like to use infotainment systems.
  • Older drivers who do like to use infotainment systems.

All of these groups would include a mix of males and females.

I feel that this would give a more meaningful test sample and more varied results. The system may work really well for younger drivers who are more interested in technology, but not so well for older people who aren’t so interested in or confident with technology. At the moment I don’t know how somebody over the age of 30 would react to my system, since nobody over 30 has tested it. Nobody who has driven regularly for more than a couple of years has tested it either, and nobody who has used lots of different infotainment systems has tested mine, so my testers had nothing to compare it to. I’d get far more useful data by broadening my testing group.

Improving the Stellardrive prototype

Gaming hardware and software is often used to test prototypes for systems like this, and even to train drivers and pilots. It offers a semi-realistic experience without spending a lot of money or potentially damaging an actual prototype of a car or plane.

Ideally, the prototype needs to contain a mix of hardware and software. The software is the prototype infotainment system itself; the hardware is the steering wheel and possibly a set of pedals and a gear stick. Short of installing the system into an actual car and trying to complete the task on a real drive, the ultimate prototype would be a gaming steering wheel and pedals connected to a PC or games console running a simulator-style racing game such as Project CARS, Forza Motorsport, Forza Horizon, Test Drive or Gran Turismo (‘arcade-style’ racing games such as Need For Speed would not be suited to this), with the game set to an interior/driver’s-eye view and my prototype interface somehow displayed behind the steering wheel on the screen. That last part probably wouldn’t be possible, so for a really accurate representation I’d likely need to develop my own driving simulator, and maybe my own steering wheel controller to get the buttons I had in this prototype – most gaming steering wheels simply have the same buttons as ordinary game controllers. This would of course be expensive, extremely time-consuming and likely overkill for a project like this, but it’s probably close to what is done in industry. The benefits of a prototype like this over one like mine would be:

  • It’s an actual simulator that the user is playing – they need to steer themselves, brake themselves, change gear themselves and react to the road themselves. If they lose concentration or don’t focus on the road they will crash and there will be consequences, but at least no actual people will be injured or killed and no hardware will be damaged or destroyed.
  • For this reason, it’d be an actual test of how easy my system is to use whilst driving. If the user can complete the task in a short amount of time whilst still driving safely in the simulator game then the design can be considered a success.
  • The buttons would be tactile and give physical feedback when pressed, so it’d be a lot more like actually interacting with the system than clicking buttons on the wheel with a mouse.
  • It would be a lot more obvious to the testers how to use the simulator – they likely wouldn’t even consider trying to touch the buttons on the infotainment system if the simulator was running on a TV or large monitor several feet away and their only input to the infotainment prototype was a steering wheel with buttons on it.
  • It would also be a lot more obvious to the tester that they need to actually drive and focus on the road whilst attempting to complete the task.
  • Other parts of the instrument panel such as the rev counter and speedometer could easily be made more realistic – this would increase the authenticity of the prototype.

In an ideal world that’s the kind of prototype and they’re the kind of testers that would yield the best data. If I were to improve what I have currently without changing my testers, this is what I’d aim to do:

  • I’d make the wheel a bit smaller and the instrument panel a bit larger to make the interface clearer to see.
  • I’d probably make a little explanation video or just talk to my testers beforehand to tell them exactly how the prototype is meant to be used.
  • If possible, the test would end if the eye-tracking software detected that the user had been looking at the instrument panel and wheel for too long, with a message saying that they had crashed or were driving dangerously by not paying attention to the road and their surroundings.
  • Likewise, if they spent too much time looking at the road and none at all on the instrument panel and wheel, the test would end with a message saying that they were unable to complete the task whilst driving.
  • I’d try and animate the rev counter and speedometer to help draw the testers’ attention to the instrument panel from the get-go and also make the test seem more realistic.
  • I’d make two separate prototypes – one with the driving video and one without – so that testers in the non-driving simulation aren’t tempted to glance towards the ‘Toggle Video’ button in the upper-left corner of the prototype. This would make the data a little cleaner and easier to understand for anyone reviewing it who doesn’t know that the button is there.
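The two eye-tracking rules above could be expressed as a simple check over the gaze data. This is a hypothetical sketch only: it assumes the eye-tracking software can export timestamped samples labelled by gaze region, and the sample format, function names and 5-second threshold are all my own assumptions rather than anything in the current prototype:

```javascript
// Hypothetical sketch of the attention rules described above. Assumes the
// eye tracker yields samples like { t: <ms>, region: 'road' | 'dashboard' }.
function longestDwell(samples, targetRegion) {
  let longest = 0;
  let dwellStart = null;
  for (const { t, region } of samples) {
    if (region === targetRegion) {
      if (dwellStart === null) dwellStart = t;
      longest = Math.max(longest, t - dwellStart);
    } else {
      dwellStart = null; // gaze left the region, so the dwell resets
    }
  }
  return longest;
}

// End the test if the tester stares at the dashboard for over ~5 seconds
// (crash risk), or never looks at it at all (task not attempted).
function checkAttention(samples, maxDashboardDwellMs = 5000) {
  if (longestDwell(samples, 'dashboard') > maxDashboardDwellMs) {
    return 'fail: eyes off the road too long - dangerous driving';
  }
  if (!samples.some((s) => s.region === 'dashboard')) {
    return 'fail: never looked at the instrument panel';
  }
  return 'ok';
}
```

The same dwell data is what my eye-tracking heat maps already capture; this just turns it into an automatic pass/fail rule instead of something I judge by eye afterwards.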

Improving the interface

The user feedback is invaluable; as a result, if I were to redesign the interface (including the buttons on the wheel), I would change the following:

  • Change the ‘O’ symbol on the Option button to a settings cog icon or something similar.
  • Add directional arrow icons to the directional buttons on the D-Pad and add ‘OK’ to the centre button.
  • Make the centre button on the D-Pad smaller.
  • Make the duration bar for the music larger and thus easier to see whilst driving.
  • Add audio prompts and voice recognition capabilities to provide an alternative input method and also to make the interface easier to use for first-time users.
  • Add a head-up display on the windscreen for some apps, e.g. the Sat Nav map.

My testers all felt that these would be valuable changes to make to the infotainment system itself.

Conclusion

As covered in the sections above, there were some flaws with my prototype, so the data I collected may not be truly representative of whether or not my system would be safe to use whilst driving. However, going by my eye-tracking data and the user feedback I received, I feel I can be fairly confident that the interface of the system itself is a success.

The project has been challenging, but it has also been great fun and has caught the attention of a lot of my peers due to its uniqueness and ‘outside-of-the-box’ thinking. It has really helped me to develop my JavaScript skills too – this is the first time I have used JavaScript for a large project; in the past I have typically used languages such as Python and C#. I felt that JavaScript shares a lot in common with these languages, especially C#, which is not too surprising given that all three have C-style syntax. JavaScript is a nice language that I will want to continue using to make interactive and interesting web experiences.

Bibliography

Usability.gov. (2017). Prototyping | Usability.gov. [online] Available at: https://www.usability.gov/how-to-and-tools/methods/prototyping.html [Accessed 16 Nov. 2017].

Nielsen Norman Group. (2017). IA-Based View of Prototype Fidelity. [online] Available at: https://www.nngroup.com/articles/ia-view-prototype/ [Accessed 16 Nov. 2017].

DeFranzo, S. (2017). Difference between qualitative and quantitative research. [online] Snap Surveys Blog. Available at: https://www.snapsurveys.com/blog/qualitative-vs-quantitative-research/ [Accessed 16 Nov. 2017].

Motor1.com. (2010). BMW Z9 Convertible Concept. [online] Available at: https://www.motor1.com/news/79410/bmw-z9-convertible-concept [Accessed 30 Oct. 2017].

Bmw.co.uk. (2017). [online] Available at: http://www.bmw.co.uk/en_GB/topics/discover-bmw/bmw-design/visions/bmw-concept-cars.html#z9gt [Accessed 30 Oct. 2017].

Boeriu, H. (2014). Did you know that BMW actually built a Z9 Concept?. [online] BMW BLOG. Available at: http://www.bmwblog.com/2014/09/25/know-bmw-actually-built-z9-concept/ [Accessed 30 Oct. 2017].

En.wikipedia.org. (2017). IDrive. [online] Available at: https://en.wikipedia.org/wiki/IDrive [Accessed 30 Oct. 2017].

Bmwon.com. (2017). bmw e65 | BMW Blog – News – Pictures – Comparisons – Reviews – Videos – Fans. [online] Available at: http://bmwon.com/tag/bmw-e65/ [Accessed 30 Oct. 2017].