April 30th 2019

The first day of the new (and super short!) term!

I got NVDA to speak German

In the previous post I mentioned how NVDA running on Windows 10 hadn’t been able to interpret the ‘lang’ attribute on the span elements in the HTML and thus could not speak the German correctly. VoiceOver on an iPhone 7 had been able to read it, though. I suspected this might be because my copy of Windows 10 didn’t have the German language packs installed whereas iOS on the iPhone likely did – and my assumptions were correct!
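For reference, this is roughly the kind of markup involved – an illustrative sketch rather than the exact code from my page – showing the ‘lang’ attribute that tells a screen reader to switch voice:

```html
<!-- Illustrative example, not the exact markup from my page.
     A screen reader with a German voice installed should switch
     voices for the span – which is why the missing language
     pack on Windows 10 mattered. -->
<p>The sign read <span lang="de">Bitte nicht stören</span>,
   which means ‘please do not disturb’.</p>
```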

My copy of Windows 10 only supports British English, so I travelled back in time 10 years, got my copy of Windows Vista Ultimate out, installed it in a virtual machine, installed the language packs (these could be installed on the ‘Ultimate’ editions of Windows Vista and 7 as part of ‘Windows Ultimate Extras’) and gave it a try. And it worked! See the short video below.

It’s not perfect, but it’s close enough!

Please forgive me for using Internet Explorer in this demonstration! This was the only browser I had installed in this virtual machine of Vista! I also had to use NVDA 2017 because that’s the last version that supports Windows Vista. My other video demos are from NVDA 2019.

Coding a prototype with form elements

I’m going to be designing an e-commerce website, so forms form a huge part of the site. Forms on e-commerce websites are needed for everything from selecting product SKUs to providing shipping and billing addresses. It’s therefore important that they are easy for the target audience to use.

Forms, like several other key HTML elements mentioned in the previous post, can have ARIA attributes added to them to help improve their usability for disabled users.

There are several types of form element that I’m going to be focusing on:

  • Short text input fields
  • Long text input fields
  • Check boxes
  • Radio buttons
  • Drop-down menus
  • Submit buttons

These are found in most web forms.

Form elements often have a label associated with them which is bound to the element itself – this is so that the screen reader can announce which element each label belongs to. Below is the HTML syntax for a short (single-line) text input.

<label for="name">Name:</label>
<input id="name" type="text" name="textfield">

This displays as:

The screen reader will announce the label and the type of input field that it is so that the user knows what data they are expected to input and how much data they are expected to provide.
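As a sketch of the kind of extra help that can be layered on top – this is an assumption about how I might extend the markup, not code from the prototype – a hint can be bound to the field with ‘aria-describedby’ so that the screen reader reads it after the label:

```html
<!-- Hypothetical extension of the name field above: the hint span's
     id is referenced by aria-describedby, so screen readers announce
     the hint text after the label and field type. -->
<label for="name">Name:</label>
<input id="name" type="text" name="textfield" aria-describedby="name-hint">
<span id="name-hint">Enter your full name as it appears on your card.</span>
```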

The syntax for a textbox (long text input field) is much the same, except ‘textarea’ is used rather than ‘input’:

<label for="address">Enter your address:</label><br>
<textarea id="address" name="addresstext"></textarea>

A good example of where you’d see this on a form is an address field, where addresses tend to be multiple lines long. They display like so:

Check boxes allow for multiple choice selection and have the following syntax:

    <fieldset>
        <legend>Select your county:</legend>
        <input id="norfolk" type="checkbox" name="counties" value="norfolk">
        <label for="norfolk">Norfolk</label><br>
        <input id="suffolk" type="checkbox" name="counties" value="suffolk">
        <label for="suffolk">Suffolk</label><br>
        <input id="essex" type="checkbox" name="counties" value="essex">
        <label for="essex">Essex</label>
    </fieldset>

This is an example where the user is able to select one or more counties from a check box list. The items sit inside the fieldset landmark, which some screen readers can navigate to. There are three items in the check box list and each has a label bound to it. The screen reader will announce which items have been checked and often also announces the number of items in the list.

This displays like so:

Radio buttons work in a similar way – the key difference between the two (other than the way they look) is that radio buttons do not support multiple selections.

    <fieldset>
        <legend>Choose a country in the UK:</legend>
        <input id="england" type="radio" name="country" value="england">
        <label for="england">England</label><br>
        <input id="scotland" type="radio" name="country" value="scotland">
        <label for="scotland">Scotland</label><br>
        <input id="wales" type="radio" name="country" value="wales">
        <label for="wales">Wales</label><br>
        <input id="northernireland" type="radio" name="country" value="northernireland">
        <label for="northernireland">Northern Ireland</label>
    </fieldset>

Again, it sits in the fieldset landmark and each item is also bound to a label. The screen reader announces the same kind of thing as it does for the check boxes. They display like so:

Drop-down menus have the following syntax:

<label for="continent">Choose the continent:</label>
<select id="continent" name="continent">
    <optgroup label="Western Continents">
        <option value="1">North America</option>
        <option value="2">South America</option>
        <option value="3">Europe</option>
    </optgroup>
    <optgroup label="Eastern Continents">
        <option value="4">Africa</option>
        <option value="5">Asia</option>
        <option value="6">Far East</option>
    </optgroup>
    <optgroup label="Oceania">
        <option value="7">Australia</option>
    </optgroup>
</select>

This displays like so:

Drop-down menu items can be categorised – here, for example, the continents are grouped by where they are in the world. The optgroup is optional, but some screen readers can navigate long option lists by optgroup title rather than just by going through the items one by one. This makes it quicker for a blind user to scroll through long lists.

Form submission buttons are very simple indeed.

<input type="submit" name="submit" value="Submit">

Regular buttons can be made more accessible in a similar way to images. Buttons don’t actually support the ‘alt’ attribute (that’s only valid on images), so an ‘aria-label’ is used instead:

<button aria-label="page 1 button">Page 1</button>

Screen readers will read the aria-label, which helps to describe the button content.

View the accessible forms prototype here.

Coding a prototype with carousels

I wanted to do this for two reasons:

  • See how blind users would interact with an element that relies on swiping on a mobile device when their whole web experience revolves around swiping.
  • See if a common web element could be made to be accessible.

To create the carousel I used the JavaScript library OwlCarousel. OwlCarousel is used in the menu for the Storehouse Issue 18 website and also features a lot on my portfolio website. OwlCarousel is powered by jQuery and is very easy to implement as well as being responsive, making it an ideal method to create carousels for a prototype that I wanted to make quickly and run on an iPhone.

All you need to do is link the appropriate JavaScript scripts and CSS stylesheets and then use the following HTML to create your carousels.

<div class="owl-carousel owl-theme">
    <div class="item">
        <!-- slide content goes here -->
    </div>
</div>

That will create a carousel. You can add additional slides by adding more ‘item’ divs inside the ‘owl-carousel owl-theme’ container, and you can add more content to the slides by adding markup inside each ‘item’ div. I copied and pasted a carousel from my portfolio website which contained a heading, one image, some paragraph text and one button per slide. I added alt descriptions to the images and buttons.

This carousel is also set to advance the slide every 5 seconds.
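The autoplay behaviour is configured in the jQuery initialisation call – something along these lines, a sketch based on the OwlCarousel 2 options rather than my exact prototype code:

```html
<script>
    // Initialise the carousel once the page has loaded.
    // autoplayTimeout is in milliseconds, so 5000 = a new slide
    // every 5 seconds. Exact values here are illustrative.
    $(document).ready(function () {
        $('.owl-carousel').owlCarousel({
            items: 1,              // one slide in view at a time
            loop: true,            // wrap from the last slide back to the first
            autoplay: true,
            autoplayTimeout: 5000
        });
    });
</script>
```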

It looks like this:

View the accessible carousel prototype here.


NV Access. (2017). NVDA 2017.4 drops support for Older Operating systems. [online] Available at: https://www.nvaccess.org/post/nvda-2017-4-drops-support-for-older-operating-systems/ [Accessed 29 Apr. 2019].

Shultz, G. (2008). Examine the new Ultimate Extras available for Windows Vista Ultimate. [online] TechRepublic. Available at: https://www.techrepublic.com/blog/windows-and-office/examine-the-new-ultimate-extras-available-for-windows-vista-ultimate/ [Accessed 30 Apr. 2019].

Webaim.org. (2013). WebAIM: Accessible Images. [online] Available at: https://webaim.org/techniques/images/ [Accessed 30 Apr. 2019].

Webaim.org. (2018). WebAIM: Alternative Text. [online] Available at: https://webaim.org/techniques/alttext/ [Accessed 30 Apr. 2019].

Webaim.org. (2018). WebAIM: Creating Accessible Forms – Accessible Form Controls. [online] Available at: https://webaim.org/techniques/forms/controls [Accessed 30 Apr. 2019].

Webaim.org. (2018). WebAIM: WebAIM’s WCAG 2 Checklist. [online] Available at: https://webaim.org/standards/wcag/checklist [Accessed 30 Apr. 2019].

May 1st 2019

Today I tested the prototypes that I made yesterday on an iPhone 7. The iPhone had to go back to NUA this afternoon, so it was critical that I found some time this morning to do this.

Once again, I tested with and without screen curtain enabled. Screen curtain helps me to envisage what using the prototype might be like as a blind person because I am able to use the prototype without having anything present on the screen.

Using the Accessible Form Prototype with VoiceOver on an iPhone 7

It was easy to listen to the screen reader ‘read’ the entire page contents out to me. Filling in some of the fields was a little more difficult, especially the ‘drop-down’ menu where there are special gestures that need to be used. These gestures and the picker element are of course built into iOS and can’t be changed, so the best that I can do is use good HTML markup that allows the screen reader to explain fully to the user what is on the screen.

Text boxes (long and short) are announced by VoiceOver with their respective label and the user is instructed to tap to enter text. The keyboard in accessibility mode requires some thinking:

  • Tap on a key once and the black selector box appears and VoiceOver announces the letter in ‘normal English’, shortly followed by phonetic English for those who may have misheard the normal English letter pronunciation.
  • Quickly double tap on a key to type the letter.

Double tapping on ‘done’ inputs the text into the field.

Check boxes and radio buttons are announced in the order they appear in the HTML markup, and VoiceOver also announces the beginning and the end of the form element. VoiceOver announces whether or not each item has been checked or selected as it reads through them.

The drop-down menu is called a ‘picker’ on iOS. VoiceOver reads the button label and the button text, then directs the user to tap on the screen if they want to edit the selection. I wonder if marking the drop-down menu as ‘required’ would automatically open the picker? Once in the picker, the user can use a single finger to scroll up and down through the different options. Options are read in order from the currently selected option. Double tapping selects an option and closes the picker – or tapping on ‘Done’ does the same. I found the picker the hardest part to use with VoiceOver; it took several attempts to get it right!

Using the Accessible Form Prototype with VoiceOver Screen Curtain

Interestingly, I found it much easier to use the form prototype with the screen curtain enabled – perhaps because I couldn’t see what was going on or where I was tapping, I relied on VoiceOver more. I had also had some practice beforehand.

The same gestures apply and the same instructions are read by VoiceOver for the user to follow. The big difference is typing – VoiceOver does not have any gestures (that I knew of when recording this video, at least) for inputting text or numbers, so the user would need to use something like Siri or another voice assistant to input text. This could make inputting confidential data difficult.

Unfortunately I didn’t have Siri configured on this iPhone, so wasn’t able to test this, but when I next test this I will give it a go.

What I learned by testing the Accessible Form Prototypes on an iPhone 7

I learned a lot! The usual ‘there’s a lot of swiping!’ and ‘it’s hard’ and ‘it’s a different way to browse the web’, but generally I found that besides the ‘picker’, it was OK. It still takes several minutes to complete a form that somebody with sight could complete in a few seconds, though. This is just down to the nature of how screen readers work and the fact that whole pages have to be read.

I also want to add that VoiceOver is quite clever about how items are opened or edited – the process is:

  • VoiceOver announces that the element can be interacted with (e.g. it may say ‘tap to open picker’).
  • The user can tap or double tap anywhere on the device screen to execute this action.

This is a great idea because of course the user might find it difficult to see the device, so they might not be able to tap on specific elements. In the case of a user who is completely blind, how do they know what to tap on? This means that designing UIs for the blind or visually impaired is potentially made a little simpler because button placement may not be a huge problem to worry about if screen readers, like VoiceOver, work like this.

As mentioned in the short video below (after recording a lot of 4K footage, the camera batteries were almost dead!), this has also got me really excited to start producing a prototype to test and continue to research this subject area for my dissertation.

Using the Accessible Carousel Prototype with VoiceOver on an iPhone 7

The carousel was quite interesting to use. It was not as bad to use as I thought it might have been, mainly because the way it works is that the screen reader reads it like it reads any other HTML element – in a list – and in the order that the code is written. It does however start with the slide that is currently ‘active’/’in view’. This is good because it means that the blind user can have the contents of the carousel read aloud to them without the need to interact with the carousel itself (which they probably wouldn’t be able to do). It’s bad because by default there’s nothing telling them that they’re on a carousel (no ARIA landmark exists for a carousel, as far as I know) and also nothing to tell them which slide in the carousel they’re looking at. It’s up to us developers to make sure that they know that.

Unfortunately it’s not possible to add ‘alt’ tags to headings and paragraph tags – my initial idea was to place these in the carousel headings, so instead I added them to the images in the carousel. I added ‘carousel item 1. broads authority project’ and similar for each one so that VoiceOver would read this. Adding the full stop meant that VoiceOver was able to read this with the right intonation, i.e. pausing between the full stop and the word ‘broads’.

It required quite a lot of swiping to go through the carousel and it’s quite easy to get ‘stuck’ in it because it seems to keep looping through the different slides. I think this is because the carousel autoplays – a new slide is moved into view every 5 seconds – and the screen reader starts reading at the currently active slide, of which there always is one. If the carousel didn’t autoplay then perhaps it would just go through the slides in the order that they’re written in the HTML, which might make it easier to move on from the carousel to the next part of the website. Sadly, skipping past it isn’t possible through landmarks in VoiceOver, so the user is forced to swipe through the whole carousel to have the next element read out aloud to them.

The carousel is the first element that I have coded into one of these prototypes that is not ‘accessibility-friendly’ out of the box. It doesn’t come with any ARIA roles, special landmarks or HTML attributes that make it easier for the blind or visually impaired to use. Therefore, the fact that it works at all proves that it’s possible to code components that appear on websites to be more usable for the blind and visually impaired – as designers we are not just limited to elements that have built-in accessibility semantics.
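One way to at least tell the user that they’re on a carousel, and which slide they’re on, is to label the wrapper manually. Below is a sketch of the kind of attributes that could be added – these are standard ARIA attributes, but applying them to OwlCarousel markup like this is my own assumption, not something the library does, and screen reader support for them varies:

```html
<!-- Hypothetical labelling of the OwlCarousel markup.
     aria-roledescription makes a supporting screen reader say
     'carousel' / 'slide' instead of a generic region or group,
     and the slide's aria-label supplies the missing '1 of 3' context. -->
<div class="owl-carousel owl-theme" role="region"
     aria-roledescription="carousel" aria-label="Portfolio projects">
    <div class="item" role="group" aria-roledescription="slide"
         aria-label="1 of 3">
        <!-- slide content -->
    </div>
</div>
```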

Using the Accessible Carousel Prototype with VoiceOver Screen Curtain

As usual, much the same experience but with one key difference. It’s an absolute pain for the blind user to have to swipe through all of the different elements, specifically the breadcrumb buttons and the navigational buttons at the bottom of the carousel. To be fair, it’s annoying for sighted users to do this too, but I noticed it a lot more when using the iPhone with the Screen Curtain enabled (i.e., like a blind person). The breadcrumb buttons are a very visual feature of a carousel – you tap on them and they take you to a slide. They’re no good for blind users because the buttons do not provide any contextual reference (besides the order that they appear in, which a blind person can’t see anyway) unless some kind of alt text is provided for each button, perhaps.

Designing for the blind vs designing for everybody

At the end of the video above I raise a good point about how in the real world I’d need to be able to design something that works for blind users and sighted users. Since there’s no way to detect a blind reader (other than detect the use of a screen reader and assume), in the real world designs should be usable by everybody. For my dissertation I need to decide if I’m going to produce a prototype that is focused just on being usable for the visually impaired, or something that can be used by everybody.

What’s next?

I can’t keep loaning this iPhone, so I might start looking into screen readers for Android so I can use my own phone to do the testing on. I’ve been using an iPhone and VoiceOver because it’s meant to be one of the best screen readers available and iOS is the OS of choice for the blind according to some sources, so it makes sense to continue with it.

For long-term development, I may end up resorting to purchasing a used iPhone 6 or 6s and then selling it after the project is completed. I like iPhones, but not enough to switch to using one as my ‘daily driver’ from my Samsung S8. I’d want at least an iPhone 6 since it should still be compatible with the newest iOS and its size makes it much more representative of modern smartphones than a 5s or older does.

In terms of this prototype, the carousel and form have proven that it’s possible to make things that sighted people use with ease just as easy for the blind to use too. Theoretically, a whole website can now be built and I should be able to use it on an iPhone with VoiceOver and Screen Curtain enabled. This means that my proposal for a technical report focusing on creating and testing a prototype website for the visually impaired is technically possible. I’ve proven that it is possible to create very basic prototypes in a relatively short space of time.

This now means that it’s time to possibly think about specialising a bit more and consider:

  • Which visual impairment(s) I want to focus on specifically.
  • Which pages and features I want to prototype.
  • Which device and software I want to test this on.
  • Whether or not I want this website to be usable by sighted people as well as the blind – this would mean that I’d need to code a UI.

I’d like to conduct some more research into the other visual impairments and find out what can be done to build software for people with those impairments. That would involve some UI coding, which I think is a good idea too. That way I can show that I’ve considered a range of disabilities and can demonstrate that, through research and being an empathetic designer, I can code websites that work for a range of people.

On May 2nd I have a report proposal discussion session with Jamie and my peers in which I hope to show some of the work that I have completed so far and explain my ideas to them. The last time we had a session like this I had only just decided to change my idea from focusing on the social effects of the gig economy to something relating to accessibility. Ameer and Namii likely have an idea of the kind of work I have been producing because some of it has been posted on my social media (I have some people in industry follow me who I wanted to see my stories and posts about designing websites for the visually impaired) but it will be good to show them what I’ve done and the research that I’ve done too.

You can never have enough research, which is why I’d like to also do some research into what it’s like to be a blind person. How do they feel, what are they scared of, how do they describe ‘beauty’ and what questions do they get asked? All of this again helps to make me an empathetic designer – a term I keep referring to, but it’s important to have empathy for your target audience if you want to design a successful product for them.

May 2nd 2019

Excellent discussion with Jamie this morning about the dissertation. He liked my ideas and my title, which was excellent news. He suggested that I focus on either blindness/near-blindness/partial-sightedness OR dyslexia and colour blindness (basically a screen reader and voice interface vs. a graphical user interface) and that I modify the target audience to the ‘severely visually impaired’ if I were to focus on blindness. Given that my recent prototypes have focused on the blindness aspect and designing a site that works well with a screen reader, I’m probably going to focus on this. That probably means I won’t do much more research into dyslexia and colour-blindness and creating interfaces for sufferers of those, since my dissertation likely won’t focus on this now – instead I’ll channel the research into screen readers and designing for the blind.

Jamie suggested that I add a brief description of how the prototype will work and the kind of tests that I will conduct and also consider the flow of the app prototype. I spent some time this afternoon looking at that and considering how the app might flow.


This is a very basic online shopping experience flow, where the user finds a product to buy, adds it to the basket and buys it. Following this flow will enable me to test out several components – images, buttons and potentially carousels and/or parallax sections will probably be common on the home page, and as the user goes further into the flow, things like forms will start appearing. I’ll be able to use observations and user interviews to determine how easy it was for the candidate to complete the test. The test will be structured and there will be specific tasks for them to complete. At the moment I think it might be as simple as finding a certain product, adding it to the shopping basket and then ordering it. This alone would test:

  • Headings and content structure: how easy is it for the user to find the item they want to buy?
  • Images and drop-down menus: are users able to imagine what the item looks like and able to select different product options from the menus?
  • Navigation: are users able to navigate between the pages?
  • Scrolling: can users easily scroll up and down the page and navigate between landmarks to speed this up?
  • Forms: are users able to complete forms? Are the forms the right length?
  • Buttons: are users able to complete the order by using the purchase button?

This tests quite a lot of variables.

He also suggested that I do some more research into how to conduct usability tests with the blind to see if there were any techniques that I could follow for my own work.

I also need to give some thought into:

  • What kind of things are going to be available to buy on this site?
  • Who will I contact to find testers?
  • When will all of this be done?
  • How am I going to code the back-end, e.g. adding items into the basket?

I’ll likely reach out to healthcare institutions and charities to find testers. I’m expecting this to be a little bit of a challenge. I’ll probably offer to visit them to do the testing.

Dissertation timescale

The timing for this is interesting. The entire report is due in December, but because I am doing a technical report there is a chance that I won’t need to complete project work/coursework between September and December as the time that would have been allocated for that will be allocated to me completing the practical side of my dissertation instead. My fear is that if I do the whole thing over the summer:

  • I might get it wrong and then there’s potentially 3 or 4 months of time wasted.
  • I might be bored for the first half of Year 3 and have little to do.
  • It might be hard to develop because of having to loan an iPhone to test this on.

So my plan is probably going to be to research this more over the summer, make any changes to the question, structure and tests that need to be made, and consider how I’ll code this and how it will be tested. I’ll also start to make arrangements and enquiries to get this tested.

This means I’ll probably be coding and testing this in September and October. That way I hopefully won’t get bored in the early stages of Year 3.

My dissertation idea at The Big Book Crit

When talking to website developers and designers at The Big Book Crit tonight, a lot of them expressed an interest in my project idea and were keen to hear more. They said that I had picked a challenging, yet exciting dissertation topic to research and produce. They were very interested to hear about my experiments to date with HTML ARIA prototypes and VoiceOver on the iPhone. A lot of them said that accessibility is becoming such a key issue on websites these days that I am researching the right industry!

May 4th 2019

Some research today into what blind people think about certain things.

How blind people describe ‘beauty’

The responses below from blind people are interesting. A lot of sighted people probably assume that the blind are less prejudiced and shallow when it comes to the way people and things look, but they’d be wrong.

One of my friends said ‘oh you could totally date a fat girl because you know, it doesn’t matter.’ And I’m like: ‘well, I have to be attracted to someone.’

I think there’s a perception amongst sighted people that blind people don’t give a sh*t about what people look like because we can’t see them.

I think I’m pretty shallow – just like everyone else.

Appearance is very important and I try to be very presentable.

I pick up on all of the ways that we’re shallow.

Blind people are just as interested in the way things look as sighted people are, and when talking about attraction, appearance plays just as important a role for a blind person as it does for a sighted person. These quotes are mostly about physical and sexual attraction towards people, but they’re just as valid for the way that physical and digital products look. I said earlier in this post that if I were to design this app just for the blind then the appearance wouldn’t matter so much because they ‘can’t see, so what difference does it make?’ – but now I am beginning to wonder if that’s true. Would they be able to tell that I’ve put an ugly HTML prototype in front of them that has no style whatsoever, in the same way they can judge whether somebody is attractive or not?

When asked how they can tell when somebody is ‘beautiful’, they said:

I guess other senses ‘kick in’ – the tenderness, the smoothness, the shape.

It’s little things. It’s like a sound or a sense or a touch and it all builds into this generalised thing.

These answers did not surprise me as much, because when it comes to interacting with a human or an animal it’s easy to determine that somebody is beautiful even if you can’t see them. When we get to know people we get to know their personalities – we hear the tone of their voice, how they sound when they laugh and the types of things they enjoy doing and like to talk about. These things help us to ultimately fall in love with people. Then there are the intimate interactions – holding hands, cuddling, the way they smell and hold you and that kind of thing. I can see how these things help all people (not just the blind) fall in love. If you’re blind you probably begin to create pictures in your head about how a person might look based on these other senses.

But when it comes to a website it’s a bit harder to replicate these other senses because ultimately, a website is not tangible and has no emotions of its own (though it can evoke emotions in those who use it). How do you replicate a sense of intimacy with the way a website looks? How does a blind person know how a website actually looks, aside from taking a good guess based on what a screen reader is telling them about the page structure? Perhaps the website needs to invoke some kind of emotion, maybe through the content on the site and how the screen reader reads it. But let’s not forget that most screen reader voices are utterly monotone and so are probably difficult to form attachments to in the same way that we can with human voices, which have intonation and different pitches, volumes and tones depending on who we’re talking to or what we’re talking about. Maybe if a website is difficult to use then that paints a negative image of how the website might look in their head. After all, are things that are ‘difficult to use’ ‘beautiful’? Often we group the word ‘beautiful’ (when referring to technology) with words like ‘modern’, ‘simple’ and ‘elegant’ – not ‘complex’, ‘difficult’ and ‘challenging’.

A quick Google Image search for ‘difficult UIs’ brings the following results:

Are those two UIs on the left ‘beautiful’? It depends on how you define ‘beautiful’, but the answer is probably not. So by making this an easy website to use, perhaps the blind user will envisage the site looking attractive.

Watch the video below for thoughts on how blind people define ‘beauty’.

What scares blind people?

I am totally terrified of trains. I was in San Diego crossing tracks and I turned myself around and I found myself in the middle of somewhere where I thought I shouldn’t have been.

I would not want to be on a big hill without a cane – or without holding onto something. If I have a cane or I’m holding onto someone, half of that feeling goes away.

If I lost my cane that would really suck. In some places I think I can do just fine without it, but just having it in my hand is part of me.

The fear of the unknown scares me. If I don’t have a good representation of what something I’m about to do is, whether that’s physically or mentally or whatever, it scares the sh*t out of me.

I do get anxious in new environments. Like particularly in following pre-configured things, like standing in line. I’ll think ‘am I doing this right?’

Fear of rejections in social circles and work settings. They could be like ‘oh we found a better fit’, but really they’d be like ‘oo I don’t know how that blind person would do that thing, so no, we’re not going to hire them.’

Another fear I have is getting mugged, taken advantage of and just getting hit by a car.

We always say in the community that it’s not ‘if’ you’ll be hit by a car, but ‘when’. I’ve been hit twice.

I don’t really go to gym classes because I am so scared of looking different from other students in the class because I don’t know how to do whatever we’re doing exactly right.

I feel that if I do something different people will not associate it with me being a novice, they’ll say ‘oh she’s blind, that sucks, she doesn’t know what to do.’

These feelings are not terribly surprising, especially the ones relating to being placed in dangerous situations, being unsure how to act in social situations, and the fear of rejection. A lot of these boil down to having security and peace of mind and trying to anticipate the unexpected. This further enforces the need for this website to feel intuitive and natural. The user needs to be able to use it without feeling awkward or scared. In public places they need to be able to use headphones to hear the screen reader and be able to naturally swipe and tap on the device so that they fit in – as sad as that sounds, really.

Getting mugged and being taken advantage of came up several times. This is an e-commerce website I’m creating here – how does the blind person know that they’re not being ripped off? Other than the screen reader reading the price of the item and the blind person being able to make an informed decision as to whether they want to purchase the item at that price (like any other rational consumer would), there isn’t a lot that can really be done other than use language that does not scream ‘rip off’. It would be interesting to research some of the language used in sales and on e-commerce websites that helps to reassure buyers that they are safe.

Blind users may not be able to see the visual security cues on an e-commerce website, such as the HTTPS padlock in the browser's address bar. How are things like that communicated? The screen reader can at least read the full URL, including the 'https://' scheme.
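One small thing the site itself can do is state its security measures in plain text rather than relying solely on visual icons, so that the screen reader announces them as part of the page content. Below is a rough sketch of what I mean – the wording and class name are my own illustration, not from any standard:

```html
<!-- A text-based security notice: screen readers will read this out
     as part of the page, whereas a purely visual padlock icon would
     convey nothing. The wording and class name are illustrative. -->
<footer>
  <p class="security-notice">
    Checkout is protected by a secure (HTTPS) connection.
    We never store your card details.
  </p>
</footer>
```

Because this is ordinary text content rather than an image, it works with no extra ARIA at all.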

Usability testing with the blind could be interesting. As it stands, most people are very cautious about how they act and what they say in most usability tests – it’s quite rare that somebody is their normal, calm and collected self when placed in front of a computer and asked to complete a task under supervision. One of the people who tested my Year 1 project said ‘this feels like an exam!’ after I had asked her to complete a task. Going by some of the quotes above, it’s potentially going to take some serious technique to ensure that users feel as at ease as possible during testing. I will need to research how to conduct usability testing with the blind and write this into my report proposal.

The quote below, though, is interesting:

I am not [scared of being blind]. It’s challenging, it’s an inconvenience, but being blind is doable.

This shows a positive outlook. This particular blind person is determined, and if he speaks for the majority then it goes to show that whilst usability is important, the blind face so many challenges that they may be more determined to sit and figure out how to use something than a sighted person might be.

Watch the video below to find out what scares the blind.

Making customers feel ‘safe’ when selling to them

Here are some general tips that sales people use to make their customers feel safe:

  • Describe customer service first: emphasise customer care and show that you go out of your way to satisfy customers.
  • Engage in relationship sales: don’t rush to close sales, get to know the customer and their personalities.
  • Provide testimonials: 'mud sticks', and customers read reviews all the time for any product they might be purchasing. Reviews look more genuine when they come from customers rather than from you.
  • Identify the needs of your customers and make it show: explain the benefits of your products rather than focusing on the features of the products.
  • Offer buyer protection, e.g. money-back guarantees: customers like to know that if they are dissatisfied they can get their money back.

These can be translated into e-commerce sales as:

  • Ensuring that somewhere on the site customer service is mentioned.
  • You can’t really get to know a customer personally through the web, but you can use things like targeted advertising to show that you have considered the customer’s likes and dislikes. Features such as wish lists and shopping baskets also enable the user to ‘save’ items before they commit to a purchase.
  • Testimonials can be provided in the form of customer reviews. Schemes like Trustpilot are well known and go down well with customers.
  • Mentioning a buyer protection policy is definitely a must.
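As a sketch of how these trust signals could sit in the page so that a screen reader user can find them easily, the buyer protection and customer service information could live in its own labelled region, which screen readers like NVDA expose through landmark navigation. The headings and wording below are illustrative only:

```html
<!-- Trust information in a labelled region: screen reader users can
     jump straight to it using landmark/region navigation rather than
     reading the whole page linearly. -->
<section aria-labelledby="buyer-protection-heading">
  <h2 id="buyer-protection-heading">Buyer protection</h2>
  <p>Not happy with your order? Return it within 30 days
     for a full refund.</p>
  <p>Our customer service team is available seven days a week.</p>
</section>
```

Using a real heading inside the region also means users who navigate by headings (which many screen reader users do) will land on it directly.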

If blind users have access to these basic pieces of information then they should feel safer on the e-commerce website.

Usability testing with the blind

I did a quick bit of searching online to see if there were any interesting tips or guidance that I could follow for conducting usability testing with the blind. I found some general advice:

  • Ask the testers which screen readers and devices they use so that they feel comfortable during the test.
  • Don’t talk over the screen reader.
  • Don’t treat service animals (e.g. guide dogs) as pets – they are working.
  • Don’t worry about using the word ‘see’, this shouldn’t offend people.
  • Ensure that the test location has disabled access.

The actual test process is apparently very similar to testing with the sighted, but there may be extra time required to set up hardware and software for accessibility, so allow for up to 25% additional time for this.

Additionally, the online version of the book Just Ask: Integrating Accessibility Throughout Design adds the following considerations:

  • Ensure that users can reach the room and that the testing room is clear of any obstacles.
  • If testing at the participant’s site, don’t move anything without asking first.
  • Record the screen reader separately to the participant (ideally through software): the participant sometimes talks over the screen reader, so recording each audio source separately helps to make analysing test footage easier.
  • Consider recording keystrokes if the user is using a keyboard – this is to determine if the user is navigating by going to the next headings, landmarks or just simply using the up and down arrow keys to move to the next element. This can help determine efficiency of navigation.
  • Consider using a wide-angle camera lens: blind users don’t need to look at a monitor, so sometimes they move around a little during testing. You want to keep them in frame.
  • Take speakers: some users use headphones to listen to the screen reader – take speakers so that you can clearly hear the screen reader as well.
  • Take lights: blind users may not have rooms that are well-lit. To improve video quality (if recording) and conditions for yourself, add lighting.
  • Encourage participants to become familiar with the setup and adjust it to their needs.
  • Don’t assume that the participant will recognise your voice, so make sure that you introduce yourself.
  • Explain noises and actions to the participant, e.g. explain any beeps from cameras.
  • Offer your elbow to lead the participant – don’t take their hand or cane as this can unsteady them.
  • Give directions about where the participant needs to be seated.
  • Tell the participant where the room for the guide dog is.
  • Provide any documentation in the preferred format of the participant.
  • Set the speed of the screen reader to the preferred speed of the participant – note that this is often faster than what most sighted people can interpret.
  • Consider having a debrief after each task rather than at the end of the entire test – blind participants will spend longer on tasks than non-disabled participants would.
  • Pay in a format that the participant can use – for example, some blind people fold banknotes in a particular way that allows them to identify each denomination.

There is a lot to take into account here!

I’m still not sure how, where or when the testing is going to happen. Before testing can even begin I will need to put a user testing group together. I will likely approach charities and hospitals, which will have their own guidelines and recommendations for interacting with and conducting tests with the blind. They will also know each person individually, so they can tell me if a specific person needs something set up in a certain way.

It would be very beneficial to look at the testing environment before I conduct any testing to see if I need to consider moving obstacles, bringing additional lighting and so on. It would also be good to see what equipment is currently there and what else I may need to bring to make the testing a success. In the past I’ve often said ‘my usability testing would’ve been better if I had done <insert thing here>’, but this is Year 3 and my dissertation – it’ll be the most important testing I do at university – and I’ve done enough usability testing by now, at university and on internships, to get this as ‘right’ as possible.

Submitting the report proposal draft

I sent my updated report proposal to Jamie for evaluation. I hope to receive feedback on Tuesday 7th May.


YouTube. (2017). Blind People Describe Beauty | Blind People Describe | Cut. [online] Available at: https://www.youtube.com/watch?v=JO7X9ZPoAp8 [Accessed 4 May 2019].

YouTube. (2017). Blind People Describe What Scares Them | Blind People Describe | Cut. [online] Available at: https://www.youtube.com/watch?v=DU-qgDhO3vM [Accessed 4 May 2019].

Lawton Henry, S. (2007). Just Ask: Integrating Accessibility Throughout Design. [S.l.]: Lulu.com.

Johnston, K. (n.d.). How to Make Customers Feel Comfortable. [online] Smallbusiness.chron.com. Available at: https://smallbusiness.chron.com/make-customers-feel-comfortable-35677.html [Accessed 4 May 2019].

Lawton Henry, S. (2007). Conducting Usability Testing | Accessibility in User-Centered Design | Just Ask: Integrating Accessibility Throughout Design. [online] Uiaccess.com. Available at: http://www.uiaccess.com/accessucd/ut_conduct.html [Accessed 4 May 2019].

Sarris, S. (2015). Usability Testing with People Who Have Vision Impairment. [online] DigitalGov. Available at: https://digital.gov/2015/04/10/usability-testing-with-people-who-have-vision-impairment-is-difficult-reality-or-perception/ [Accessed 4 May 2019].

Further reading

I have identified that the following texts may be of benefit to me throughout the course of my dissertation.

Chisholm, W. and May, M. (2008). Universal Design for Web Applications: Web Applications That Reach Everyone. [Sebastopol, Calif.]: O’Reilly Media.

Connor, J. (2012). Pro HTML5 Accessibility: Building an Inclusive Web. [Berkeley]: Apress.

Horton, S. and Quesenbery, W. (2014). A Web for Everyone: Designing Accessible User Experiences. Brooklyn: Rosenfeld Media.

Thatcher, J., Lawton Henry, S., Kirkpatrick, A., Rutter, R., Heilmann, C., Waddell, C., Burks, M., Urban, M. and Lawson, B. (2006). Web Accessibility: Web Standards and Regulatory Compliance. Berkeley, CA: Friends of ED.

The first chapter of the book above is available online for free at the link below:

Lawton Henry, S. (2006). Understanding Web Accessibility [Book Chapter on UIAccess.com]. [online] Uiaccess.com. Available at: http://uiaccess.com/understanding.html [Accessed 4 May 2019].

These are older texts, but come with solid recommendations from industry professionals and are written by experts in the field.

The full list of recommended texts for researching web accessibility can be found at the link below:

Uiaccess.com. (n.d.). uiAccess | Accessibility Books. [online] Available at: http://www.uiaccess.com/books.html [Accessed 4 May 2019].