A lot has happened with my dissertation piece very recently. I’ve had a busy summer doing freelance work, creating a new portfolio site and volunteering at Untangled which has meant that development of my dissertation has been slightly more delayed than I would have liked, but over the past week or so I have made a lot of progress with creating a working first prototype.

Creating a brand and everything that’s been done so far

The brand and initial wireframes were created between August 10th and 13th 2019 – you can read about that here.

Creating the first working prototype

The first working prototype was created between September 3rd and 8th, using HTML, CSS and JavaScript. It allows the user to complete one very specific goal – add the ‘Black Dress’ item in the women’s wear category to the cart and then purchase it. The aim of creating this prototype was to see if I could make one user journey that could be tested.

The prototype itself is fairly basic. It’s built on Dragonbase 2, so the codebase is very similar to the Bidwell Joinery website and the newest instalment of my portfolio site, Pendragon 3, meaning it is responsive and uses Flexbox and next-gen image formats. Being built on Dragonbase 2 helped to speed up development somewhat. The home page and category pages are very similar to the portfolio pages you’ll find on my portfolio site, which meant that all that needed to be done was to add the images and the copy. The product pages were coded from scratch, but were developed quickly as I could work directly from my wireframe designs.

The cart, checkout and ‘purchase complete’ pages were coded from scratch too, and I didn’t design wireframes for these. Instead, they were designed by very carefully considering the order in which text on the website is read by the screen reader. Whilst testing the first few pages with Microsoft Narrator, I noticed that the order of the text on the pages was extremely important. In the video below you can see that there was an issue where, after adding the item to the cart, the user would have to navigate through the rest of the content on the website (mainly images) before they received any kind of confirmation that the item had even been added to the cart.

Testing a very early version of my prototype website proved that modal dialogs are generally bad for screen readers.

Initially, the ‘added to cart’ message appeared as a modal dialog which covered the whole screen, but it could not be read by the screen reader until the user had swiped through the content underneath it. This was because the code for the modal dialog was placed underneath the code for the rest of the images on the page, and the screen reader reads content in the order it is coded.

I fixed this issue by showing a div placed directly beneath the ‘add to cart’ button when the user activates that button. The code for this div is also directly beneath the ‘add to cart’ code, so the screen reader reads the text in that div (which confirms to the user that the item has been added to the cart) as the very next item after the button has been activated – see the video below showing the solution in action.

Demonstrating the solution to the modal dialog – it’s more screen reader-friendly.
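For anyone curious about the structure, here’s a minimal sketch of the approach. The element IDs and wording are illustrative rather than the exact code in my prototype.

```html
<!-- The confirmation sits in the source code immediately after the button,
     so the screen reader reads it as the very next element. -->
<button id="add-to-cart">Add To Cart</button>

<div id="cart-confirmation" hidden>
  <h2>Black Dress has been added to your cart</h2>
  <p>Size XS</p>
  <a href="cart.html">View Cart</a>
</div>

<script>
  // Reveal the confirmation when the button is activated.
  document.getElementById('add-to-cart').addEventListener('click', function () {
    document.getElementById('cart-confirmation').hidden = false;
  });
</script>
```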

Unfortunately, even this isn’t 100% amazing, because the user has to know to move to the next heading or element after adding the item to the cart – it’s not immediately obvious to them that they need to do this in order to receive the confirmation that the item has been added to the cart and get a quick way of viewing the cart without navigating back to the menu.

Aside from this, and the fact that screen readers seem to read out the text in SVG images (the header on the site is an SVG image – screen readers reading the text in SVG images is both a good and a bad thing), the prototype appeared to work well. The hamburger menu was easy to navigate thanks to the use of <nav> HTML elements, and links, images and buttons were easy to find and understand thanks to their alt text.
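As a rough illustration of the menu structure, the markup is along these lines (simplified, with illustrative link and file names rather than the exact ones in the prototype):

```html
<!-- Wrapping the menu links in a <nav> element means the screen reader
     announces it as navigation and can jump to it as a landmark. -->
<nav aria-label="Main menu">
  <ul>
    <li><a href="index.html">Home</a></li>
    <li><a href="womens-wear.html">Women's wear</a></li>
    <li><a href="mens-wear.html">Men's wear</a></li>
    <li><a href="cart.html">Cart</a></li>
  </ul>
</nav>
```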

Considering the text order

Each page has been designed with screen reader-friendliness in mind. That means that information is presented on each page in the exact order that I assume a user would want to receive the information.

Look at the screenshot of the product page below.

The information is read in the following order:

  • ‘Black Dress’
  • ‘£35.00’
  • ‘Prices include VAT’
  • ‘Product description’
  • The list beneath ‘Product description’
  • ‘Wearing suggestions’
  • The short paragraph beneath ‘Wearing suggestions’
  • ‘Select size’ list
  • ‘Add this item to your cart’
  • ‘Add To Cart’ button
  • ‘Black Dress has been added to your cart’
  • ‘Product specification’
  • ‘Size XS’
  • ‘View Cart’

The order of the content takes the user on a journey.

  • They are introduced to the product. The name of the product is descriptive and the price (and whether or not it includes VAT) is made known early on.
  • The user is then given a brief, bullet point list of the properties of the product.
  • The user is then given some information about who, what, when and how to wear this product. It’s not as important to the user as learning about the product details, so this piece of information can come second.
  • The user would expect to be able to select a size and then add the item to the cart, so next the user is presented with the list of sizes they can choose from.
  • Then, the user can add the item to the cart using the button.
  • There is then confirmation that the item has been added to the cart in the form of the heading that reads ‘Black Dress has been added to your cart’.
  • The size of the item added to the cart is confirmed.
  • The user can view the cart.

A sighted user would be able to choose their own journey and can probably deal with elements being in a slightly odd place. A blind user can’t deal with this – it’s up to you to guide them.

Look at the cart page above. Again, the user has to be taken on a journey. Firstly, the title ‘Your cart’ is read out, which introduces the user to the nature of the page.

Then, the product name is read out (‘Black Dress’), followed by the product information heading and the information beneath that, namely the price and the size. Then, the ‘Buy this item’ button can be activated. It’s likely that if the user is in the cart, the actions they want to perform most are:

  • Confirm the product details
  • Buy the items

So the user is able to do these two things first and foremost. The ‘View this item’, ‘Remove this item’ and ‘Empty Cart’ links and buttons are not reached by the screen reader until the blind user has heard the product details and had the option to buy the item.

On the checkout page, information should be asked for in a logical order. In this prototype, the card type is the first item that is specified, but then the postage is asked for. It would probably be beneficial to move the postage options to the end of the checkout form and have the billing address and card numbers and holder information after the card type has been selected.
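To make the proposed order concrete, here’s a rough sketch of how the revised checkout form could be structured. The field names and options are illustrative, not taken from the prototype.

```html
<form action="checkout-complete.html" method="post">
  <!-- 1. Card type first, as in the current prototype. -->
  <label for="card-type">Card type</label>
  <select id="card-type" name="card-type">
    <option>Visa</option>
    <option>Mastercard</option>
  </select>

  <!-- 2. Card number and holder details follow on from the card type. -->
  <label for="card-number">Card number</label>
  <input id="card-number" name="card-number" type="text" inputmode="numeric">

  <label for="card-holder">Name on card</label>
  <input id="card-holder" name="card-holder" type="text">

  <!-- 3. Billing address. -->
  <label for="billing-address">Billing address</label>
  <textarea id="billing-address" name="billing-address"></textarea>

  <!-- 4. Postage options moved to the end of the form. -->
  <label for="postage">Postage</label>
  <select id="postage" name="postage">
    <option>Standard delivery</option>
    <option>Next day delivery</option>
  </select>

  <button type="submit">Complete purchase</button>
</form>
```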

At the end of the checkout, the product details and then the price (and subtotals) are read out to the user before they can complete the purchase. This makes sense, since confirming the purchase is the final part of the journey.

Considering the text content

It was clear to me that the alt text for each of the images on the product page was going to be similar. What matters to the blind user when describing the product images is not necessarily the model, the setting or whether or not she or he is a brunette or wearing glasses, but rather the clothes themselves. That means that alt descriptions for product images need to focus on:

  • The colour of the clothing
  • How it fits the model (i.e. what parts of the body does it cover or expose?)
  • The fastenings on it
  • Whether or not it has pockets

In as few words as possible.
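As a hypothetical example of the kind of alt text I mean (the details of this particular dress are made up for illustration):

```html
<img src="black-dress-front.webp"
     alt="Black knee-length dress with short sleeves, a zip fastening at the back
          and two pockets at the hips">
```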

Since a lot of the images would have similar alt text (they are effectively all images of the same piece of clothing), a blind user would only need to hear it read out to them once before deciding whether to commit to a purchase. That means that there can be one image of the product at the top of the page, followed by some information, followed by purchasing options. The site still works for sighted users, as there are more images of the product displayed underneath the buying options.

Product information equally needs to be short. Lovely descriptive paragraphs are nice to read, but it’s not so nice to listen to a screen reader read them all out in a monotone voice. Instead, lists are used to keep the descriptions short but still descriptive. Pieces of prose are generally kept to a few sentences at most. You have to remember that a sighted user can subconsciously skip text they don’t want to read by looking a few lines ahead and seeing what else there is to read. A blind user cannot do this. A blind user does not know what headings are coming up and so doesn’t know which sections of text to skip or not bother reading. I aim to ensure that all information presented on the website is useful and concise.

Keeping the text short means that the user is also able to get to the all-important ‘add to cart’ button faster – they are less likely to give up trying to find that button with the screen reader and the button will be more obvious for them to find.

This is not conventional, but it works better for screen readers and it works for sighted users too.

Testing a user journey with NVDA

The site has been designed with mobile use in mind, but to be truly ‘accessible’ it needs to work on any device – hence testing with NVDA, a desktop screen reader.

Testing the complete user journey with NVDA on Microsoft Edge.

The video above shows the process of finding the Black Dress to view, adding it to the cart and finally buying it. The video is nearly 20 minutes long, but it does not take anywhere near 20 minutes to complete this process. If you are proficient with NVDA and go through the user journey without explaining what you’re doing and what it means, you can complete it in well under 5 minutes – see the video below!

Testing the complete user journey with NVDA on Mozilla Firefox with the screen turned off to simulate blindness.

This video shows a ‘headless’ demonstration of the prototype in action (‘headless’ meaning that no monitor is running). The ‘headless’ demonstration shows how the prototype would work if you couldn’t see anything on the device. On an iPhone using VoiceOver, this is achieved by tapping on the screen with three fingers to enable the ‘screen curtain’ feature. On the Surface Pro 4 that I tried this on (and many other Windows laptops), it’s achievable by plugging in a second monitor, configuring Windows to project only on screen 2 and then turning off screen 2.

‘WOW!’

The headless demo was one of the most astonishing moments of my degree so far. It was one of the moments that made me go ‘WOW’. For the first time, I tried a real prototype that I had made in the way that a blind person would experience it and for the most part – it worked!

It’s certainly an odd experience, interacting with a website when you can’t see anything, only being able to use the keyboard and do what the screen reader instructs you to do.

The main flaws with the prototype tested on September 8th were the lack of any immediate notification that the item had been added to the cart (whilst conducting the headless demo, I forgot that I needed to move on with the screen reader in order to hear the message telling me that the item was now in the cart) and the interesting way that NVDA interprets the SVG logo.

It makes perfect sense that the screen reader tries to read the text and shapes inside an SVG image – after all, an SVG image is just mark-up and CSS, much like any other web page. When inserted into HTML as an object (which the FFA logo needs to be, because it uses a very specific font and can only load the required font files when embedded as an object rather than as a regular image), the screen reader sees the SVG file as an extension of the HTML code and so tries to read it. The solution is to use a JPEG or PNG file instead of an SVG file, but of course the file size would be larger and the SVG’s limitless scalability would be lost if a bitmap format such as JPEG or PNG were used.
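For context, the difference is roughly this (file names are illustrative):

```html
<!-- Current approach: the SVG is embedded as an object so it can load its
     font files, but the screen reader then tries to read its internal mark-up. -->
<object type="image/svg+xml" data="ffa-logo.svg"></object>

<!-- The workaround described above: a bitmap copy with simple alt text,
     at the cost of file size and scalability. -->
<img src="ffa-logo.png" alt="FFA">
```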

As for the other drawback – I made this site and if I didn’t think to move onto the next item after adding the item to the cart, how would a regular user?

Testing with VoiceOver

Another ‘wow’ moment was testing what I have made so far with VoiceOver on an iPhone. The experience was very similar to using NVDA, with the notable exception of course being that the mobile screen reader uses gestures.

You can see a video of the user journey (browsing the home page, browsing the women’s wear page, finding out about the black dress, adding it to the cart and finally buying it) being performed on an iPhone 7 with VoiceOver below. The video also shows how the mobile hamburger menu works.

Using the FFA prototype with Apple VoiceOver.

On the whole it worked pretty well with just a few things to note:

  • Adding some CSS to remove the bullet styling from the lists in the description could help make this a nicer experience for a blind user, because the screen reader wouldn’t read out ‘bullet’ before each item – see the CSS sketch after this list.
  • Like when using NVDA, I noticed that it was difficult to know that the item had been added to the cart without swiping onto the next element.
  • There are some labels above the ‘select size’ and ‘add to cart’ buttons which are potentially a bit confusing. If you watch the video, you can see me try to tap these to activate them, but they are just text. They can probably be removed.
  • There’s nothing indicating that the menu button is clickable – a user might just swipe onto the next element. Likewise, when you open the menu the first thing you hear is ‘close’ – you have to swipe onto the next item in order to navigate through the menu (otherwise it works well!)
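A minimal sketch of that CSS change – the selector is illustrative:

```css
/* Stop the screen reader announcing 'bullet' before each line of the
   description. Worth testing carefully: some browser and screen reader
   combinations also stop announcing the list itself once the markers
   are removed. */
.product-description ul {
  list-style: none;
  padding-left: 0;
}
```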

VoiceOver handled the SVG logo better than Narrator or NVDA. It simply read ‘FFA’ without trying to ‘read’ the shapes or anything else.

Small technical details – how the cart was created

The cart is a very basic one. It only allows for one item to be in it at any one time because my initial ideas for the usability test and the user journey only require the user to find and add one item to the cart.

The cart works by taking several key bits of information (such as the price and size) from a product’s page and storing these as properties of an object. These individual properties are then written to the browser’s local storage, where they are read back by the cart and checkout pages. The benefit of this is that it’s really simple and the cart is ‘remembered’ (i.e. when you refresh the browser window, or even close it and reopen it, the item in the cart is still there). The disadvantage is that only one item can be displayed in the cart at any one time.
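A stripped-down sketch of that idea, with illustrative key and function names rather than the exact ones in my code:

```javascript
// On the product page: gather the key details and save them when the
// 'Add To Cart' button is activated.
function addToCart() {
  var item = {
    name: 'Black Dress',
    price: '35.00',
    size: document.getElementById('size-select').value
  };

  // localStorage only stores strings, so each property is saved individually.
  localStorage.setItem('cartItemName', item.name);
  localStorage.setItem('cartItemPrice', item.price);
  localStorage.setItem('cartItemSize', item.size);
}

// On the cart and checkout pages: read the stored values back out.
// They survive refreshing or closing the browser, which is why the
// cart is 'remembered'.
function loadCart() {
  return {
    name: localStorage.getItem('cartItemName'),
    price: localStorage.getItem('cartItemPrice'),
    size: localStorage.getItem('cartItemSize')
  };
}
```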

Usability testing

As yet, no official usability test for this has been written, but ideas have been considered.

Contacting people for testing – enter the NNAB

On Friday September 6th, I contacted the Norfolk & Norwich Association of the Blind (NNAB) to ask if they would be interested in assisting with usability testing. The response so far seems very positive, with them expressing an interest in testing this for me.

A short survey

The first part of testing will likely involve a tiny bit more market research. Kyra (the Fashion Design student I am working with) wants to find out a little more about the types of clothes blind people prefer to wear (especially what kind of fastenings they prefer) so that she can better design clothes for the website.

I want to find out what mobile devices blind people prefer to use so that I can choose the device that best suits them to try this on. I plan to test this with VoiceOver on an iPhone, but they might prefer to use something different. What if the test group don’t use mobile phones at all? After all, they’re not commonly used among the blind.

I’d also like to get some direct quotes from them about how they find using the web and how often they use it. I think it would be extremely interesting to find out.

A user journey with goals

It’s more than likely that the testing will take the form of completing some basic user journeys to prove that a goal can be achieved. It will likely be as simple as the user journey shown in the videos in this post – after all, what else can you really do on an e-commerce website? The journey tests:

  • Navigation
  • Alt text
  • Content hierarchy
  • Textual content

All of which I identified as being problems with most commercial fashion sites. In fact, I wasn’t able to reach the checkout with most, so just reaching that would be a goal in itself.

It would be good to test the mobile navigation too.

Data will mainly be observational and qualitative. Users will be asked if they have a clear idea of the clothing that is being sold, and to explain to me how they think it looks and feels.

I propose to run two testing sessions – maybe one or two weeks apart from each other so that any issues identified in the first session can be rectified and improved upon for the second.

A/B testing

To prove that coding designs that target the blind is superior to just using an ‘off-the-shelf’ solution, I may create a similar website using something like the popular WordPress extension WooCommerce and get the blind users to try both to see which they prefer. I have used WooCommerce in the past to build an e-commerce site for a glass-maker. It’s free and easy to set up and has a lot of support – but how good is it for the blind? I’d like to try it with a screen reader to find out.

SALT Glass Studios’ redesigned website was launched on November 7th 2018 and the final modifications were completed by early 2019. This site is made using WooCommerce.

I may also test some different information architectures – how about testing the current home page design (split into ‘men’, ‘women’, ‘for that bit extra’ and ‘sustainability’) against an IA designed more to help users find clothes for certain occasions? Or even certain types of clothes? Which would help the blind find what they need quickest?

A/B testing would definitely help to prove that my prototype is better for accessibility and also potentially help guide the way for a better IA for accessibility.

Clothing for the website

Kyra and I purchased the first fabric for the clothing that will go on the website on September 6th. She’s going to create a dress out of the fabric, and we will find somebody to model these clothes and put them on the website as products before usability testing commences.

The benefit of having Kyra involved is that she knows all about the clothing and so can help me write descriptions that accurately describe the clothing. This is exceptionally important because without these descriptions, the users won’t be able to picture the clothing in their head and so the site won’t be a success. The site is no good if the user can’t picture the clothing.

Kyra has started to make this dress for the website.

Next steps

In short:

  • Get some usability testing sessions arranged and sorted.
  • Come up with a detailed plan for usability testing.
  • Fix those problems that I identified during my own testing with the different screen readers.