Between April and November 2019 I researched web accessibility (specifically for the visually-impaired) and built and tested a prototype mobile fashion e-commerce website based on that research. The last post on this blog discusses my journey to creating the first working prototype in September 2019 and some of the design decisions that I took to ensure that the website would work well with a screen reader, as well as some early tests I did with NVDA and VoiceOver.

Since then, the website prototype has been through four rounds of usability testing, a whole dissertation has been written about it (and graded), lessons have been learned and a new project taking this to the next level has commenced. This post is a short ‘review’ – it describes some of the findings from my usability testing and gives some small insight into how the website was tested and designed.

Knife-edge deadlines, strong emotions, fascinating conversations, extremely high stakes in ‘do or die’ decisions and a massive sense of achievement and pride at the end of it made this the greatest of all of my university projects. And made me a UX designer.

Prerequisite Reading

Information about all of the initial research I completed and preliminary testing that I conducted can be found here.

Preparing for testing

Preparation for the first round of usability testing commenced on Friday September 6th 2019 when I contacted the NNAB (Norfolk & Norwich Association of the Blind) to enquire if they would be interested in testing the prototype for several rounds of testing. By this time a working prototype had already been produced on the off-chance that the testing would commence in September 2019. Some of the clothes designed for the website by Kyra Balla, the Fashion student I was working with, still needed to be modeled and photographed at this point, but otherwise the site was complete and working.

The NNAB were very receptive to my inquiry and on September 17th 2019 conversations about testing began. I explained what would be required of the testers and what I was looking for: a range of people to use my prototype on an iPhone and/or an Android phone and provide some feedback. Finally, on October 10th 2019 it was confirmed that the first testing would take place on October 21st with up to five testers. I had been having a long email conversation with my correspondent at the NNAB, who I didn’t realise was completely blind until I read this line in an email on October 11th.

Finally, I’m your final tester. I use an iPhone with VoiceOver, and am totally blind.

It was at this point that I realised the scale of what I was doing, and how little I really knew about interacting with the visually-impaired. I had been having an email conversation with this gentleman spanning the course of a month, with very long messages almost every day, and had not known he was blind until I read that line just a week or two before the testing.

Formative user research

Formative user research was conducted before the formal testing had begun, but after I had made contact with the NNAB. I created a short survey using Microsoft Forms that I distributed via the NNAB to visually-impaired users to find out a little bit more about how they use computers, websites and apps and the kind of pain points that they currently experience.

Eight users were surveyed in total. Four were over 50 years old; the other four were aged between 22 and 49. Six were blind, three had partial sight and three had other visual impairments (the categories overlapped, as some respondents reported more than one impairment).

The results are shown below:

  • All eight used a computer daily
  • 35% of assistive software used was screen reader software
  • 28% of assistive software used was speech recognition software and 18% magnifying software
  • 62.5% of respondents used multiple pieces of assistive software
  • Web browsing and using social media were common tasks that were carried out, but buying things online wasn’t so much
  • Together, unlabeled elements, difficult navigation and poor scaling for magnification software accounted for 57% of all usability problems encountered on the web
  • Poor content order accounted for 14% of all usability problems
  • Advertising, low contrast, too much white space and unusable controls accounted for the remaining 29% of usability problems
  • Improving the order and formatting of content was the most commonly suggested way of improving the online experience, even more so than supplying (or correcting) alt descriptions on elements

This data helps us to understand the common frustrations the visually-impaired encounter when using websites and gives some insight into how they can be resolved.

This data helped to confirm that the designs discussed in this post from September 2019 could potentially work for the blind and partially-sighted. Content is arranged in a logical order, alt descriptions are used to describe items, there is strong contrast, low advertising and a low amount of unused space. With this in mind, the wireframes for the first prototype for testing were created. The full design logic behind these is explained in the post from September linked earlier.

Wireframe diagrams for the first testing prototype. Note: in reality the colours were dark grey and an ‘off-white’, not blue and white!

Testing Round 1 – October 21st 2019

One of the most important, and amazing, days of my university life. This day truly felt like ‘judgement day’. Had it all gone wrong on this day, my dissertation, and indeed my future practice, could have taken a totally different approach.

October 21st 2019 – The most important day of my degree – 3 hours where it could have all gone wrong

Usability testing that I had conducted in the past hadn’t mattered too much, really. All of the testing I had done for Stellardrive and Nellie’s Nursery in Year 1 of university felt important at the time, but unfortunately we all know that Year 1 studies count for nothing. Testing the Cancer Research UK chatbot at Earthware in the summer of 2018 had been very basic and at the time of writing, the chatbot has not gone anywhere yet. Testing in Year 2 had been a mix of professional testing with The User Story in January 2019 during my time there and some ‘not-so-realistic’ usability testing of the hastily put-together Broads Web App in April. None of this testing had had any impact on the world or really too much on my degree, but there was one thing it had been good for: experience. Throughout all of the testing I had conducted during my time at university so far I had learned what to do and what not to do, so I felt prepared, if very nervous, for October 21st.

For the first time on my degree course, and possibly ever, I was putting my work in front of its intended target audience. This testing had high stakes. If it went wrong, my dissertation would be up in the air – just a few weeks before it was due – and half of it had already been written. I would not be able to prove that my ideas for an accessible mobile e-commerce website worked and seven months of research and professional advice would have come to very little.

The tension and my nerves on this morning were sky high – the likes of which I don’t think I will ever see on my degree again – until I open my final grade!

The pressure felt strong on the morning of October 21st 2019.

The test

The test was written on October 7th 2019 and was very simple. For the first time ever, I decided to conduct an AA-B (‘double A, B’) test as opposed to the simple A-B test that I had often completed in the past. The AA-B model was chosen mainly because I needed to prove that the FFA website was not only accessible, but ‘more accessible’ than existing fashion websites. Personal testing with screen readers over the summer of 2019 had shown that whilst I deemed no fashion website particularly great to use with a screen reader, the Matalan and Next websites both represented an ‘average’ experience, neither amazing nor awful. These are also two large fashion outlets where millions of people shop every day.

AA-B tests take two existing products and test them against one new one. In this test, the two existing products were the Matalan and Next websites. The new one was the FFA prototype.

On each website, the user was asked to complete one specific task, as follows:

  • Matalan: ‘Find a Christmas jumper and add the second one you find to the cart’
  • Next: ‘Find the women’s monochrome spot soft shirt and add it to your basket’
  • FFA: ‘Find a women’s black and grey shirt and add it to the cart’

These tests evaluate how easy products are to find, how the user finds them and how easy it is to actually make a purchase if you are visually-impaired. The users were not aware of these criteria and no time limit was given since this is not a time-critical operation.

Afterwards, each user was asked a series of questions:

  • How did you find the experience on the Matalan website? {expand on any points raised}
  • How did you find the experience on the Next website? {expand on any points raised}
  • How did you find the experience on the website prototype? {expand on any points raised}
  • Was there anything you preferred about using the website prototype to the Matalan and Next websites?
  • Was there anything that you didn’t like about using the website prototype compared to the Matalan and Next websites?
  • On the whole, which was the easiest for you to navigate?
  • Which gave you the easiest-to-understand information about the clothing that you were being asked to buy? Can you remember any of the product details?
  • On the whole, which was the most enjoyable for you to use and why? What could be done to improve the experience of the website you least preferred?

These questions would give the user a chance to express their thoughts on the usability of each of the three websites and why they thought what they thought.

To understand more about the accessibility of the FFA website, these questions relating specifically to the FFA prototype were also asked.

  • How did you find the alt description text on the images of the website? Were they helpful?
  • Did you find the structure of the page logical? Was content in an appropriate order, was only important information read?
  • Did you find the navigation structure logical? Was it easy to find the product that you were asked to find and if you used the mobile menu, was it easy for you to use?
  • Is there any more or any less information you would have liked to have known about the product? Can you remember any of the product details?
  • How did you find selecting product options?
  • What was the checkout experience like for you? Was it helpful hearing the stages of the checkout? {expand on any points raised}
  • Is there anything more that could be done to make your experience better?

These questions probe all of the design choices that I had made in the first prototype, to find out whether they worked for the users or not. If a user didn’t find the content order appropriate, for example, I’d know that this would be something to change for the next round of testing.

I knew that the Next and Matalan websites were not great for accessibility, so really the questions I wanted to know the answers to were these seven questions relating directly to the FFA website. The balance of my degree hung on the answer to those seven questions.

The website that was tested was compiled on October 20th 2019 at 22:21 and in terms of functionality is similar to the build shown in the video in the ‘Early September updates’ post, but features many style and cart improvements.

Meeting the testers

It’s always exciting meeting your testers for the first time no matter who they are or what they are testing, but this time it was very different. Not only was this the most important testing that I had ever conducted (and I knew it), I had never interacted face-to-face with any visually-impaired users before.

I had done some research on how to conduct usability testing with the blind prior to the testing, but I was still nervous. I didn’t really know how to act, what to say, what would sound patronising, what would be inappropriate and so on.

However, my advice to anybody considering doing testing with the visually-impaired is that actually, they are just normal people. Just be yourself around them. There’s nothing at all to worry about. The NNAB were excellent at introducing me to my testers, all of whom were lovely and very easy to talk to.

The testing experience

Pollsters can predict referendum and election results early in the results announcements by observing how key seats voted. I vividly remember pollsters correctly predicting the 2016 EU Referendum result early in the night by observing that key seats in the North East of England had voted to leave. More recently, the same pollsters correctly predicted that the Tories would win a landslide victory in the 2019 General Election when traditional Labour seats in the North East (again) turned blue for the first time in history.

The same is true for usability testing. Time and time again I have seen that you can tell early on whether a usability test is going to be successful by whether key concepts are understood early in testing. The buttons on Stellardrive were some of the first pitfalls discovered in that prototype, and the overall testing result was ‘it’s OK, but needs work’ (i.e., not a resounding success). Similarly, buttons on early versions of the Nellie’s Nursery website proved difficult to understand and resulted in a poor usability test performance. Later versions of that product were quite the opposite, however, with users immediately enjoying the animations in Elebase 4 usability testing. The Broads App was put in front of my 21-year-old friend Hannah, who didn’t immediately know how to use it, so when it was put in front of people three times her age later that day, it struggled and the usability testing didn’t go well.

This time, it was a little different. It also started off not being a runaway success, with the first tester clearly struggling to use the FFA prototype on the Samsung S8 test device with the screen reader, citing the lengthy alt descriptions as being the reason why he was struggling. He personally preferred one of the other two commercial websites. For me, this signaled a big blow. I honestly thought that it was going to be time to go back to the drawing board and re-think everything.

Then, things changed. The second tester was an experienced screen reader user. She was using her own device with custom software which I had not tested the website on (a Samsung S4 Mini), and she had the screen reader set to read so fast that I couldn’t make out what it was saying. But the moment she said to me:

‘I did it, now what?’

was the moment that the day – and my degree – changed.

At that moment, I knew that I was onto something. Unassisted, she had completed the task on the FFA website in a much quicker time than on the competitor websites.

There were two more testers after her, both experienced with using a screen reader, and both highly commended the prototype.

It was genuinely quite an astonishing moment when I saw my website work for a blind user for the first time. The pragmatic side of me kept it together, but the emotional, tired and amazed side of me struggled to hold the tears back!

Testing data – FFA vs Next vs Matalan

What you came for!

  • 75% of testers preferred the FFA prototype to the Next and Matalan websites
  • However, 58% of all tasks in this usability test could not be completed – showing that there are definite usability problems for the visually-impaired on mobile e-commerce websites
  • The Next and Matalan websites accounted for 75% of all task failures
  • 60% of all tasks that were completed successfully in Round 1 were completed on the FFA website
  • A user was 58% likely to be unable to complete a task on the Matalan website, but only 20% likely to be unable to complete a task on the FFA website – making the FFA website theoretically three times more usable than the Matalan website
  • The Matalan website accounted for 52% of all uncompleted tasks. FFA accounted for just 18%
  • 30% of all uncompleted tasks were on completing the purchase, making this the biggest point of failure
  • No users were able to view the cart or buy the product on the Matalan website, but 75% of them could on the Next and FFA websites

Criticism of the Next and Matalan sites included:

  • Buttons (such as ‘view cart’) not being in expected places
  • Unclear how to view the cart to complete checkout
  • Unclear how to complete the checkout process
  • Unclear how to change the product size
  • Popups were difficult to dismiss and hindered navigation

These comments are reflected in the test data. The Matalan website consistently performed worst, the Next website performed better but still presented usability flaws that prevented it from functioning as well as the FFA Prototype website.

The data shows that the FFA Prototype is the preferred website in almost every part of the six-step user journey. The data shows that the FFA Prototype:

  • Allows for easier selection of product sizes by specifying the item size before adding to cart
  • Allows for easier viewing of the cart because the user is notified that the product has been added to the cart and can view it immediately
  • Provides a simple checkout process that helps the user to understand what information is required and how far through the process they are

After the test, users were asked to discuss their experience with the sites. Reasons for preferring the FFA website were:

There was more relevant information and it was easier to navigate.

I found the items in the menu immediately.

Alt descriptions on images confirmed that I was on the right page.

The site was laid-out well.

It was helpful knowing how many stages the checkout had and that it was all on the same page.

I knew exactly what I was buying.

I like how it said ‘Added to cart’ immediately.

I never cry, but it was hard holding a tear back…

October 21st 2019 was genuinely one of the most amazing days of my degree. And certainly the most rewarding. The moment the second user completed the task, I just could not believe what had been achieved. Somebody without vision had successfully used a website I had made. I now understood what went into designing a website that those with disabilities could use. I had done some design for ‘social good’.

I don’t think there will be another day on my degree where I feel that same sense of achievement and pride. Apart from maybe graduation, but that will just be for myself.

Testing Round 2 – October 29th 2019

The testing from October 21st had been a massive success – the most successful usability testing that I had completed, in fact. However, there were some improvements that could be made to make the FFA prototype website even better to use for the visually-impaired, notably:

  • Adding a search facility would make finding products easier
  • Clickable headings and images would make navigation faster
  • Ensuring that the correct heading levels are used would make it easier to understand page structure (i.e., don’t just use heading level tags to make text big – only use it for actual headings)
  • Reduce the length of the alt descriptions to increase the navigation speed (screen reader would read less)
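The heading-level improvement lends itself to an automated check. The prototype’s source isn’t shown here, so this is only a sketch (the function name and input format are my own): screen reader users navigate by headings, so a jump such as h1 straight to h3 with no h2 breaks the page outline they rely on.

```javascript
// Sketch: flag heading levels that skip downwards in document order.
// Input: the heading levels as they appear on the page, e.g. [1, 2, 2, 3].
function findSkippedHeadings(levels) {
  const problems = [];
  let previous = 0; // no heading seen yet
  levels.forEach((level, index) => {
    if (previous === 0 && level !== 1) {
      problems.push(`heading ${index + 1} is h${level}; expected the page to start at h1`);
    } else if (level > previous + 1) {
      problems.push(`heading ${index + 1} jumps from h${previous} to h${level}`);
    }
    previous = level;
  });
  return problems; // an empty array means the heading outline is sound
}
```

For example, `[1, 2, 3, 2, 3]` passes cleanly, while `[1, 3]` is flagged because h2 was skipped, which is exactly the ‘heading tags used only to make text big’ problem described above.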

Arrangements for Round 2 of testing were actually finalised before Round 1 of testing had even commenced. October 29th was suggested on October 11th to be the date for Round 2. The reason I arranged for two sets of tests initially was because I wanted to show in my dissertation that I could iterate on a design and build on feedback. Until this round of testing had been completed, it was envisaged that this would be the final round of testing because the dissertation was due on November 15th.

The test

This test focused solely on the FFA website, with no comparisons with other websites. The same users who completed the Round 1 tests were asked to perform a set of ten specific tasks with the updated prototype, which was compiled on October 27th 2019 at 18:00.

The aim of this test was to assess navigation with and without the search facility (new to this build), the logic of the content organisation and the user’s understanding of the context of the content. Essentially, it tests how well a user is able to understand what is being read to them.

The tasks were:

  • By scrolling only, can you find where you might buy accessories on the home page?
  • Can you find a menu?
  • Using this menu, can you please find the women’s wear page?
  • How many products are on the women’s wear page?
  • Can you find a search facility?
  • Can you please find the grey kimono t-shirt using the search facility?
  • How much does the kimono t-shirt cost?
  • What is the kimono t-shirt made of?
  • Add an extra large version of this shirt to your cart and view the cart
  • Buy this item and proceed to checkout

It was advised that invalid credit card numbers and CVC numbers should be used to test the error-checking of the checkout page.
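The post doesn’t show how the prototype validated card numbers, but the standard checksum for catching a mistyped card number is the Luhn algorithm, sketched here as an illustration (the function name is my own):

```javascript
// Luhn checksum: every second digit from the right is doubled before
// summing, which catches single-digit typos and most adjacent
// transpositions – the kind of deliberate mistakes the testers made.
function luhnValid(cardNumber) {
  const digits = cardNumber.replace(/\s+/g, "");
  if (!/^\d{12,19}$/.test(digits)) return false; // card numbers are 12-19 digits
  let sum = 0;
  for (let i = 0; i < digits.length; i++) {
    let d = Number(digits[digits.length - 1 - i]);
    if (i % 2 === 1) {
      d *= 2;
      if (d > 9) d -= 9; // same as summing the two digits of d
    }
    sum += d;
  }
  return sum % 10 === 0;
}
```

The well-known test number ‘4242 4242 4242 4242’ passes the checksum, while changing its last digit makes it fail, so a checkout using a check like this can reject a mistyped number before submission.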

This prototype featured a search field.

By asking questions such as ‘how much does it cost?’ and ‘what is this t-shirt made of?’ I was checking to see if the user had understood what was on the page and whether they could find this information easily rather than just trying to determine if they could find a certain product. Asking them how many items are on one page tested how easy products/content were to differentiate from each other.

Again, after the test, a set of questions relating to the criticisms of the original testing prototype and the new features of this build was asked, giving the user a chance to express their thoughts on the updated prototype. Additionally, questions such as ‘was it easy to select items from pickers and lists?’ and ‘were elements where you expected them to be?’ were asked to find out the finer details of the usability of key controls on the website.

Testing data

Again, it was another positive test on the whole with a few points for improvements. It wasn’t quite as resounding as the original test had been, but this time the website was tested in more depth so more usability problems were going to be detected.

  • Comprehension and completing the purchase were the strongest points – these are two critical KPIs (key performance indicators) of most retail websites, so this is a big success!
  • 50% struggled to initially find the website’s mobile menu, but found it easy to use once they found it
  • Users struggled to differentiate between products on the store, often citing the wrong number of items for sale
  • Only 25% of users could find the item with the search – users seemed unaware that they could type directly into the search bar
  • Blind users fared better in this test than partially-sighted users
  • Blind users were 100% more likely to be able to group content, twice as likely to find the accessories section and 50% more likely to be able to use the search facility than a partially-sighted user
  • Ambiguous headings and alt descriptions made it hard to find the accessories section and users were unaware that the search is active as soon as it is opened
  • Users praised the error-checking features, which highlight incorrectly-completed fields, tell the user where the errors are and allow them to be corrected before buying the item
  • Users also enjoyed the ‘readback’ feature that reads back the checkout data to the user before completing the purchase
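The exact wording of the readback isn’t reproduced in the post; as an illustration of the idea only (the field labels, validators and phrasing below are hypothetical, not the prototype’s), the checkout could assemble every invalid field into one summary for the screen reader to announce before the purchase completes:

```javascript
// Sketch of a checkout 'readback': list each invalid field by its
// visible label so a screen reader user can jump straight to it.
// `validators` maps a field label to a function that checks its value.
function buildReadback(values, validators) {
  const errors = Object.keys(validators)
    .filter((label) => !validators[label](values[label] || ""))
    .map((label) => `${label} needs correcting.`);
  return errors.length === 0
    ? "All details look complete. Ready to buy."
    : `${errors.length} problem(s) found. ${errors.join(" ")}`;
}
```

Called with an email address that passes and a two-digit CVC that fails its check, this returns ‘1 problem(s) found. CVC needs correcting.’, which is the kind of targeted feedback the testers praised.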

Testing Round 3 – November 1st 2019

After the second round of testing it was clear that the website was working well for those who were completely blind and relying solely on a screen reader, but perhaps not so well for the partially-sighted. At this point I faced a dilemma. The dissertation was due in 15 days and I had only written half of it. I had proven that it was possible to make an accessible mobile e-commerce website and I had even been able to iterate once on it. At this stage, I could have cut my ties with the NNAB and gone away to write the dissertation.

However, ‘nothing safe is worth the drive’ and the partially-sighted users were extremely keen to provide me with information on how I could extend the prototype to make it accessible to even more people. There are more partially-sighted users than totally blind users and by exploring how to make the site work with the partially-sighted I’d be opening myself up to learn about colour, font and legibility – which I had never done before.

The decision, therefore, was set. Between October 29th and November 1st, a minor build was compiled which added the following:

  • Borders to form elements (such as buttons and text fields) and increased text size in form elements to help distinguish controls
  • More logical heading titles and alt descriptions for images
  • A search icon to help those who struggle to read text find the search facility
  • An announcement from the screen reader that the search was active and the keyboard was displaying, ready for input
  • Error-checking and inputting suggestions in the readback feature

The visual design of the checkout was modified slightly for Round 3 of testing to make it easier for those with visual impairments to read.
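The ‘search is active’ announcement is the kind of state change an ARIA live region handles well: text written into a polite live region is read out by the screen reader without moving focus. As a rough sketch (the wording and approach here are my own, since the prototype’s markup isn’t shown):

```javascript
// Sketch: the message a polite live region would announce when the
// search opens. In the real page this string would be written into a
// visually-hidden element carrying aria-live="polite", so the screen
// reader speaks it without stealing focus from the search field.
function searchAnnouncement(isOpen) {
  return isOpen
    ? "Search is active. The keyboard is showing, ready for input."
    : "Search closed.";
}
```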

The test

The short notice of this particular round of testing, and the fact that I wanted to try this with one blind user and one partially-sighted user and compare the results, meant that only two experienced VoiceOver users tried this prototype. The build was compiled just 75 minutes before testing commenced, making it the most ‘knife-edge’ compilation date so far!

Four tasks were required of each user, each task representing a key step in a typical user journey on an e-commerce website:

  • By scrolling only, can you find where you might buy accessories on the home page? (to test the relabeled headings)
  • Please find the grey raglan t-shirt and add a medium-sized version of it to the cart (to test picker accessibility)
  • Please purchase this item, deliberately excluding at least one field, deliberately mistyping either the card number, email address or CVC number (to test the error-checking and suggestions in readback)
  • Refresh checkout for the participant, this time ask them to exclude typing in one of the type fields (also to test error-checking)

By having two specific users and knowing each one, I could do something very unusual in a usability test and ask each user to comment on specific things relating to their use and disability. Ordinarily, I would never recommend doing this, as each user should be able to comment on everything they notice, but by now I was on the third round of testing and knew exactly what valuable insight each one could give me.

The partially-sighted user was specifically asked to:

  • Comment on the visuals of the checkout page and verify if the contrast had been improved
  • Find the search function on the site and comment on its contrast and visuals

The blind user was specifically asked to:

  • Comment on the improved readback feature. Does it help you to locate errors and correct them before checking out and purchasing the item?

Both users were also asked:

  • What did you prefer about this website to the Matalan and Next websites that were tested in the first test?
  • What did you prefer about this website to other websites you have used?

With the FFA prototype apparently becoming quite advanced in its accessibility features (compared to the majority of other websites), I thought it would be interesting to find out exactly what made it easier for them to use now.

Testing data

The modifications were a success. The feedback provided to the blind user by the screen reader about the state of the search and checkout enabled them to understand how to use these features better. The partially-sighted user could better identify form elements and their contents, easing their use of the checkout. Blind and partially-sighted users successfully completed the same tasks.

Testing data from Round 3 of usability testing.

Testing Round 4 – November 5th 2019

At this stage I was just 10 days away from handing in the dissertation. The dissertation that was supposed to contain the results from all of my testing, which was evidently still not complete. I was also 10 days away from handing in other project work that I was working on (which would become the successor to the FFA project) and was in the midst of applying to graduate schemes and jobs.

Becoming ‘too attached’

At this point, a lot of people, including my friends, tutor and family, were getting very concerned that I was becoming ‘too attached’ to the project and to perfecting it. I found the whole project extremely fascinating and wanted to keep developing it until every box had been ticked and every i dotted, but time was not on my side.

Peers were also concerned that I was becoming too attached to the users in my testing sessions, often spending hours at a time during each testing session talking to them to find out more about how to make my website even better. The danger of becoming too attached to your testers is that soon it becomes like asking your close friends or family to test your work – they start to lose their impartiality and say the things you may want to hear.

To me, this was the single most exciting project ever. The continual successes at usability testing sessions, and the ‘good feeling’ of walking out of the NNAB after each successful test believing that I could change lives and was ‘onto something’, were hard not to like. Having never built anything that changed people’s lives before, these feelings were new and felt very good. The drive to better the prototype each and every time was extremely hard to resist, but in the end this had to be the final testing session and the final prototype tested.

The final prototype

The final prototype was one of the most important. This prototype was the one that truly turned FFA from being a website for blind people to being a website for everybody: blind, partially-sighted or sighted. This prototype added the following:

  • Text and background colour choices (‘high contrast’ options)
  • A new font (Source Sans Pro), which had been chosen in conjunction with partially-sighted users at the NNAB during a lengthy discussion about fonts and glyphs
  • The ability to alter the font size between ‘small/default’, ‘medium’ and ‘large’ for better use with magnifying software

No changes were made to the screen reader compatibility or content order at all; the changes in this build from November 4th 2019 were purely about providing these new colour and text modes.

The new accessibility modes, as shown in the Round 4 prototype. These modes help to make the text easier to read for those with partial-sightedness.
Wireframes for the various high-contrast modes introduced in the Round 4 prototype.

The test

This prototype aimed to prove one thing: that a partially-sighted person could read the text on the website thanks to the modifications made to the fonts and colour options.

In order to test this, a comprehension test was written. It was tested with just one severely partially-sighted user who used the magnifying software on an iPhone to complete the test.

Nine separate questions were asked. The answers to some were written directly in the text; others had to be inferred from the information provided on the page, testing an understanding of page context.

The six comprehension questions were:

  • Describe the ribbed dress
  • Describe the raglan t-shirt
  • How much does the ribbed dress cost?
  • What fit is the black dress?
  • State the texture of the raglan t-shirt
  • What sizes are available?

The answers to all of these were in the product description and did not have to be inferred.

The additional three contextual questions were:

  • What sort of women’s clothing does this site appear to sell?
  • Which occasions may the black dress be worn at?
  • Might a buyer wear this t-shirt at formal occasions?

Professional studies often measure legibility using fixation time; in this simpler study I instead assigned secret 'keywords' to each question that the tester would hopefully use when answering. The idea was that the more keywords appeared in an answer, the stronger the comprehension probably was. Measuring the accuracy of the contextual answers was slightly harder than measuring the comprehension ones, but if an answer appeared to be correct and some reasoning was given to justify it, the text was judged to have been understood.
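
The keyword-scoring idea boils down to counting how many of a question's keywords appear in a transcribed answer. A minimal sketch, where the example answer and keyword list are invented for illustration (the real keywords were kept secret from the tester):

```javascript
// Illustrative sketch of the keyword-scoring approach: count how many
// of a question's 'secret' keywords appear in the tester's transcribed
// answer, case-insensitively.
function keywordScore(answer, keywords) {
  const normalised = answer.toLowerCase();
  const found = keywords.filter(k => normalised.includes(k.toLowerCase()));
  return {
    found: found.length,
    total: keywords.length,
    proportion: found.length / keywords.length,
  };
}

// Hypothetical example for 'Describe the ribbed dress':
const score = keywordScore(
  'A black ribbed bodycon dress with long sleeves',
  ['black', 'ribbed', 'bodycon', 'sleeves']
);
// All four keywords appear, so score.proportion is 1.
```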

To isolate legibility, no screen reader was used in this test. This made the scenario less realistic, but a screen reader would have read everything aloud and so interfered with collecting data that purely assessed how easy the text was to read.

The user enabled the large text size (5.55vh, or approximately 28pt) in yellow text on a black background, which they commented was the best setup for them.
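
The 5.55vh ≈ 28pt equivalence depends on the viewport height. As a quick back-of-envelope check, assuming the CSS reference pixel (96px per inch, so 1pt = 4/3 px) and a 667px-tall viewport (typical of an iPhone 6/7/8 of the era; the device model is my assumption):

```javascript
// Rough check of the 5.55vh ≈ 28pt claim.
// 1pt = 4/3 CSS px (96px per inch / 72pt per inch).
const PT_TO_PX = 96 / 72;

function vhToPt(vh, viewportHeightPx) {
  const px = (vh / 100) * viewportHeightPx;
  return px / PT_TO_PX;
}

console.log(vhToPt(5.55, 667).toFixed(1)); // ≈ 27.8pt on a 667px-tall screen
```

So on a phone of that size, 5.55vh does indeed work out to roughly 28pt.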

Testing data

Comprehension was good, with at least 75% of keywords identified in four of the six tests. Comprehension was best when the keywords were few and short. Where the keywords were many and long (for example, six keywords averaging more than eight characters each), often only three could be identified. This suggests that comprehension is best with short words in short sentences, as opposed to paragraphs. The exception was identifying the sizes, but the user stated that the options were 'expected', reinforcing that form elements should have logical options.

The Source Sans font and succinct, relevant content successfully enabled a good understanding of the product.

Answers to the contextual questions showed a strong understanding of content. The first question was answered with a list of products found on the women’s page and mentioned the ‘casual’ description. ‘Nights out, special occasions – any time of year,’ was the answer given for the second question, taken from the blurb on the dress page. The answer for the final question was ‘it seems quite smart,’ followed by ‘good description,’ suggesting that the description provided the context needed to answer the question.

Another ‘WOW!’ moment

The first time that my tutor saw the prototype running with the phone screen turned off (‘headless’) in September, and the first time that a blind user successfully completed the user journey on October 21st, were two of the most astonishing moments of my degree so far. This testing produced another ‘wow’ moment.

The partially-sighted user who tested this prototype could seldom read text on websites, so when he was able to read the text on this prototype and even infer details from it, I was stunned! He was too, I think! Where the first run of the ‘headless’ demo had been a moment of ‘technological achievement’, this really felt like a moment of ‘designing for social good in action – and changing a life’.

It’s moments like these that you want to be a UX designer, or any designer, for.

A comparison of popular Sans Serif and Serif fonts. Source Sans Pro was chosen because it was a Sans Serif font that has clear distinction between the glyphs ‘l’, ‘i’, ‘1’ and ‘!’. This makes reading easier for the visually-impaired.

Testing evaluation

Four individual rounds of testing concluded that the FFA prototype is more usable for visually-impaired users than two typical existing fashion e-commerce websites. Throughout the tests, comprehension, understanding of context, navigation and site feedback (such as adding items to the cart and notification of checkout errors) were good among both blind and partially-sighted users.

It is safe to say that the prototype tested on November 5th 2019 works well for people with a range of severe visual impairments. However, a larger group of testers from different backgrounds, ages and with different impairments would prove the repeatability of these results and find more usability faults.

Had an additional round been conducted, it would have been good to include people with milder impairments that affect reading, such as dyslexia. Eventually the prototype would probably have developed into a website that ‘truly works for most people’s vision’.

Tests were measured using task-based methods. These tests were easy to conduct, easily understood, showed a typical user journey, found usability faults and assessed comprehension. In future, more scientific methods of measuring these factors could be used, such as measuring legibility by calculating reading speed using fixation time.

The FFA site is, of course, not entirely representative either. It features just six products, whereas in November 2019 the Matalan site featured 382 products and the Next site a massive 6,601 products (over a thousand times as many as FFA). Those sites need much more advanced filtering, search and category systems to make their products easy to find, but the conventions shown on the FFA website demonstrate practices that could likely be implemented on other fashion websites to improve their accessibility.

The final FFA prototype at the NNAB for testing on November 5th 2019.

Recommendations for designing for the visually-impaired

Testing revealed that designing for the visually-impaired relies on considering the way content is written and ordered and making use of existing HTML and CSS features to enable better compatibility with assistive software.

Every design decision taken must make finding content and calls-to-action quick. Users must also be made aware of exactly what is going on, and errors must be prevented first and explained second. Therefore, some practices to consider are:

  • Place content in a logical order, with the most relevant or important information announced first. Visually-impaired users cannot skim, so content is digested from top-to-bottom
  • Write concise content to improve comprehension, and use logical heading titles to describe grouped content
  • Avoid excessive use of images and long descriptions because these slow down navigation
  • Avoid unnecessary image captions because these slow down navigation and confuse users
  • Alt text can be used to help reaffirm a user’s location on a page
  • Provide a search facility – this improves navigation speed
  • Announce state changes using assertive ARIA regions and ensure that the user can access or view these changes immediately
  • Fully describe how data should be inputted into fields and clearly highlight any fields that contain incorrect or invalid data
  • Break multi-stage processes into stages and announce how far through the task a user is
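
The last two points above can be sketched together: announce a multi-stage process's progress through an assertive live region. The helper below is my own illustration, not the FFA prototype's actual code; the element ID and wording are assumptions.

```javascript
// Illustrative helper for announcing checkout progress to a screen
// reader. The returned string would be written into an element marked
// aria-live="assertive" so it is read out immediately. The wording is
// an assumption, not taken from the FFA prototype.
function stepAnnouncement(stepName, step, totalSteps) {
  return `${stepName}. Step ${step} of ${totalSteps}.`;
}

// In the page (hypothetical element ID):
//   <div id="announcer" aria-live="assertive"></div>
//   document.getElementById('announcer').textContent =
//     stepAnnouncement('Delivery details', 2, 4);
```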

When designing user interfaces for the partially-sighted, the following should be considered:

  • Allow the user to change the font size and line spacing. 18-24pt font is recommended
  • Allow the user to configure high-contrast colours and use thick, coloured borders on elements to aid distinction
  • Ensure that the site is responsive and text formatting remains when page scale is increased
  • Arrange content into a single column and do not justify text
  • Ensure that individual words do not break across multiple lines

The key takeaway is that you must design altruistically in order to design for the visually-impaired. You have to put yourself in their shoes and understand how they use the computer before you can even begin to do this.

Demonstration of how line height, font size and colour choices help to make text easier to read with magnifying software, which is essential for the visually-impaired (shown in the Round 4 prototype).

Beyond FFA…

Firstly, it’s amazing to think that when I was developing the Nellie’s Nursery prototypes between February and April 2018, I made four prototypes in two months. For FFA, I made the same number of prototypes in just two weeks. This project was a serious example of rapid application development in action! Each prototype was an iteration of the previous one, adding new features each time, as opposed to the Nellie’s Nursery and even the Broads web app prototypes, which were all complete rewrites using different codebases.

The FFA prototypes were the first pieces of university work I submitted that didn’t use jQuery and that also have what I call ‘manageable code’. I like to think that the code is well-written. Often, a few months after I code something, I look back at it and think ‘what?’, but the code from FFA is still very similar to what I write today. I think my level of coding is now at a stage where it will stay fairly consistent until my next big breakthrough, whenever that may be.

The fourth and final Nellie’s Nursery prototype, photographed outside Nellie’s Nursery on April 27th 2018 after an acceptance testing meeting. Photographing my work outside the places where it was tested is nothing new!

The dissertation

The dissertation that FFA was made and tested for was written in stages throughout the course of October and November 2019. It begins by introducing the problems experienced by the visually-impaired when using the web and the research I did in April 2019 to back this up. It then goes on to show some provisional designs of an accessible mobile e-commerce website and describes each of the four phases of testing in detail to come to the conclusions shown above (the list of recommendations for designing accessible websites).

An additional paper which goes into detail with code examples and a more detailed list of UI and content considerations was also submitted with the dissertation.

The dissertation and its supporting documentation were submitted on November 15th 2019 and were marked and graded by November 29th. The grade was the highest I had ever received at university, beating even the unexpected success of Stellardrive two years prior.

Sometimes, taking gambles pays off. It seems that on this occasion it did.

Industry attention was generally high throughout the project, with connections at various companies, including Microsoft, keen to read the dissertation and hear about my experiences of testing with the visually-impaired. FFA remains a unique project to talk about at job interviews and an excellent addition on my CV, portfolio and cover letter.

Eyes Without Sight

Eyes Without Sight is the next and current (at the time of writing) university project. It extends the accessible-technology theme to producing an app that genuinely helps visually-impaired users improve their lives by assisting them with tasks that would otherwise be difficult.

It is inspired by apps such as Microsoft’s Seeing AI app, which I first used in April 2019. This app describes to a visually-impaired user what they are seeing – it can describe images, people, currency, objects, animals and so on.

Eyes Without Sight was developed alongside FFA, starting in October 2019. The initial version was written in C# and used the Computer Vision API from Azure Cognitive Services to identify the contents of an image and display the caption in the app. Later, it was rewritten in JavaScript and made into a PWA that could run in Microsoft Edge Beta on Windows 10. This PWA version was what was submitted on November 15th; it reused the FFA HTML structure and CSS files to make it just as accessible as the FFA prototype, and it featured a dark UI by default.

Eyes Without Sight was left as this very basic PWA that could just identify images (and not serve much other purpose) until December 17th 2019, when the project restarted with a view to giving it a real purpose and continuing to produce accessible products.

The Windows 10 Eyes Without Sight PWA successfully identifying an image containing a sunset behind some trees.

Fosters Solicitors

As soon as I completed my dissertation, I was appointed by Fosters Solicitors to build them a highly interactive and visual web presentation which would be presented in February 2020 to their partners. This was a completely different approach to the one I had taken with FFA and Eyes Without Sight, which were designed primarily for non-vision or low-vision users. I had to adapt quickly from developing experiences for the visually-impaired to developing experiences for those who wanted something highly visual. Therefore, whilst the Fosters Solicitors presentation was being developed between November 2019 and January 2020, the accessibility projects came to a natural, but temporary, halt.

‘The ultimate UX feat’ – achieved?

When first considering completing a technical report and creating a mobile accessible website back in April 2019 I called the task ‘the ultimate UX feat’. No other university project that I have completed to date has involved quite as much user research and testing as this one, so this is arguably the most ‘UX-y’ of all of my UX projects.

It seemed like a massive challenge, but from a humble HTML page with some random elements on it to prove that accessible websites could be made came a fully-fledged prototype website that not only worked for the blind using screen readers, but for the partially-sighted using magnifying software and even for sighted users using no assistive technology at all. Admittedly, this was never the intended plan, but what happened happened because of excellent feedback during usability testing and a desire to push to the next level.

I’ve learned so much: how to test with the visually-impaired, how to code prototypes very quickly, formal writing and the LaTeX language my dissertation was written in, exactly what visually-impaired users need to help them use software, and how to find something I am passionate about that excites me. But most importantly, I’ve learned how to design altruistically. This project made me a better designer.

Mission accomplished.

That feeling when designing accessible software… 🙂