Following my previous post about the report proposal, here is an account of the progress made between April 20th and 29th 2019.

April 20th 2019 – further research

It may have been Easter Saturday (and don’t worry, I did get outside and enjoy the sun), but in the evening I was looking around the WebAIM website (which I used a lot yesterday to research ARIA and how to write good HTML for accessibility – details in the last post) and noticed that it had two interesting reports to take some information from.

Research from WebAIM: The WebAIM Million

WebAIM (‘Accessibility In Mind’) provides a staggering amount of research about usability of websites for the visually impaired and also advice on how to develop websites for accessibility, with coded examples.

In February 2019 they analysed the home pages for the top one million websites and collected an immense amount of usability data based on this analysis. The results provide an up-to-date account of the current state of the web for the visually impaired.

  • Home pages on average had 59.6 detectable accessibility errors each.
  • 7.6% (1 in 13) of all elements on the home pages of the top one million websites had a detectable accessibility error.
  • 97.8% of websites fail to meet the W3C’s WCAG (Web Content Accessibility Guidelines) standards.
  • On average, each home page had 36 occurrences of low contrast text, making it the most common usability problem.
  • A third of all images (12.3 images per home page on average) were missing alt text for screen readers.
  • 59% of form inputs were incorrectly labeled.
  • Home pages with ARIA (described in more detail later in this post) averaged 11.2 more detectable errors than pages without ARIA.
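Two of the most common errors in the list above – missing alt text and incorrectly labelled form inputs – are straightforward to avoid with correct markup. A minimal sketch (the file name and field names here are just illustrative):

```html
<!-- Alt text gives screen readers a description of the image -->
<img src="office.jpg" alt="The team stood outside the office." />

<!-- Associating a label with an input via the for/id pair means
     a screen reader announces the field's purpose when it gains focus -->
<label for="email">Email address</label>
<input type="email" id="email" name="email" />
```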

This data shows that there is still significant work to be done to ensure the web is made accessible to everyone. It is hopeful that this research will promote greater interest and effort to this end. While the volume of errors is disconcerting, most of the significant errors are of just a few types.

Jared Smith, WebAIM researcher, February 2019.

Research from WebAIM: The Survey of Users with Low Vision

Another report from WebAIM, this time a survey of users with low vision to find out about the devices they use and how they use them. 248 users responded to the survey in September 2018.

  • 75% of respondents had multiple types of visual impairment – 61.3% suffered light or glare sensitivity and 46.8% suffered contrast sensitivity.
  • 51.4% of respondents used a high contrast mode.
  • 71.2% of respondents that adjust page contrast prefer lighter text on a dark background.
  • 45.2% of respondents use a screen reader.
  • 48.4% of respondents use screen magnification software.
  • 44% of respondents use browser zoom/page scale controls.
  • JAWS is the most commonly used screen reader, with NVDA and VoiceOver following. Microsoft Narrator was used by just 0.8% of respondents.
  • Only 8% of respondents were detected as having increased the default text size. Very few respondents adjusted paragraph, line, word or letter spacing.
  • 60.4% of respondents always or often use a keyboard for navigating websites. WebAIM note that this is a very high percentage.
  • 22% of respondents don’t enlarge web content, but 18% of respondents enlarge web content to over 400% scale.
  • Users rely on screen magnification tools built into operating systems, with very few using third-party solutions.
  • 64.3% of respondents use iOS devices and only 7.8% do not use a mobile device at all.

What WebAIM’s research tells me

Like the research conducted by the government and JBIR, it highlights a number of things:

  • Nowhere near enough websites are good enough for the visually impaired. What is the internet really like to use for somebody suffering from this condition if 98% of home pages aren’t deemed ‘usable’ for them?
  • It’s very important that whatever solution I propose and create works well with:
    • Screen readers
    • Browser scaling
    • Screen magnification software (it needs to work with the ones built into OSes)
    • Increases of page scale of around 400%
    • Keyboard navigation
    • iOS and Safari (presumably Safari is the browser they are using on iOS if they need to use VoiceOver)
  • Low contrast text and images were a key problem, so my prototype needs to avoid these.
  • Form inputs need to be labeled correctly so that screen readers can interpret them.
  • NVDA will likely be the screen reader that I test with whilst developing my prototype on a PC, then VoiceOver will be used when testing on iOS.

My first attempt at using NVDA

After reading a quick tutorial about how to use NVDA on the AFB (American Foundation for the Blind) website (and watching their tutorial video), I attempted to use NVDA myself for the first time. The AFB tutorial recommends learning NVDA by navigating the NVDA Quick Commands Reference HTML page using the screen reader, because this HTML is marked up perfectly for navigation with NVDA, so that’s what I did. Using the following shortcuts, I was able to begin to understand how a blind user may ‘scan’ a website using a screen reader:

  • H – tap the H key to navigate the page by going to each heading element.
  • Tab – move between elements on the page and in the application window (e.g. the address bar of the browser). Often doesn’t move between blocks of text, however.
  • Shift+relevant key – move back to previous selected element, e.g. Shift+H to move back to previous heading.
  • K – navigate to the next link in the document.
  • Space – activates link text.
  • Enter – interact with selected element.
  • Arrow keys – move up and down pages or left and right across elements and context menus.
  • NVDA+N – open NVDA Quick Access Menu.
  • NVDA+F7 – open a window showing the list of elements in the document.

The ‘NVDA key’ is Insert by default.

Below is a video of me attempting some basic navigation using NVDA.

My first attempt at browsing the web with NVDA

Having gained a very basic understanding of how NVDA works and how the visually impaired navigate pages, I tasked myself with:

  • Opening Firefox as a blind user.
  • Navigating to the BBC News website.
  • Accepting cookies.
  • ‘Scanning’ an article on the BBC News website.

It was a lot harder than it sounds! Video below.

I appreciate that a lot of this will be down to my inexperience with using NVDA, but these two videos do prove several very important things to me.

What putting myself in the shoes of a visually impaired user told me

  • It can be very difficult for a visually impaired person to complete even the most basic of tasks in the beginning – this explains some of my earlier research, where, when asked about software upgrades, some users said ‘it’s hard in the beginning’.
  • How important the keyboard is for the visually impaired. The keyboard appears to be the core peripheral for interacting with the screen reader. No wonder 60% of visually impaired users are reliant on the keyboard.
  • It may be obvious, but I learned how to use a screen reader. With 45% of visually impaired computer users using one, it’s very important to know how they work. Screen readers are the most common way for the visually impaired to use a computer.
  • How important it is for HTML markup to be correct. The last post about my report proposal goes over the importance of this, and completing these small tasks with NVDA proved it for myself. I can now see how HTML markup is used to ‘scan’ pages, so if you want visually impaired users to be able to quickly find content on your site, it is imperative to use the correct heading levels and semantic HTML rather than visual HTML.
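To illustrate the difference, here is a heading marked up semantically versus one merely styled to look like a heading (the class name is a made-up example):

```html
<!-- Semantic: NVDA announces this as a level 2 heading and users
     can jump straight to it with the H or 2 keys -->
<h2>Latest articles</h2>

<!-- Visual: can be styled with CSS to look identical, but to a
     screen reader it is just another paragraph and cannot be
     navigated to as a heading -->
<p class="looks-like-a-heading">Latest articles</p>
```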

What’s next?

I will continue to research this and improve my ability to use a screen reader to operate Windows and a web browser. Once I am better at that, it would be good to start learning how to use VoiceOver on iOS to achieve similar tasks, since my dissertation is focused on mobile websites.


Ashton, C. (2018). I Used The Web For A Day Using A Screen Reader — Smashing Magazine. [online] Smashing Magazine. Available at: [Accessed 20 Apr. 2019].

(2019). NVDA command key quick reference. [online] Available at: [Accessed 20 Apr. 2019].

(2016). Using the Internet, Part 1 | American Foundation for the Blind. [online] Available at: [Accessed 20 Apr. 2019].

Smith, J. (2018). WebAIM: Survey of Users with Low Vision #2 Results. [online] Available at: [Accessed 20 Apr. 2019].

Smith, J. (2019). WebAIM: The WebAIM Million – An accessibility analysis of the top 1,000,000 home pages. [online] Available at: [Accessed 20 Apr. 2019].

April 22nd 2019

Adding the research from WebAIM into the report proposal

I added the research from April 20th into my report proposal, as WebAIM is not only a great source for this kind of information, but also for providing insight into coding ARIA and accessibility-friendly websites. As of April 22nd, I haven’t found any websites that cover the coding aspect of creating websites for the visually impaired in as much depth as WebAIM, so it is likely going to be a vital resource for my dissertation. I also removed some words from the introduction and limitations sections of my report proposal to account for the extra words about WebAIM, and managed to get the word count down to 1,056 words, which is absolutely fine.

I’ve spent a lot of time over the past week researching this topic. I’m happy with my report proposal and the progress that I have made, so over the next day or so I’m going to spend some time updating my professional portfolio ready for The Big Book Crit on May 2nd. In my recent post about life beyond Year 2, I mentioned that I had designed a new version of my portfolio back in February but had not had time to develop it. Originally this was going to be developed over the summer, but with The Big Book Crit on May 2nd, I thought it’d be good to spend a few days implementing the new menu and footer systems, tidying up the code, updating some of the content and redesigning the home page. That way, when professionals see it, it will look a lot more professional and feature improved navigation and content hierarchy.

I hope to have the portfolio in a good state by around April 25th or 26th. I can then resume work on this and code a basic HTML site that a screen reader is compatible with and test this on a desktop with NVDA and an iPhone with VoiceOver. I’m also pleased to say that on April 26th I have a call with Microsoft’s Accessibility Ambassador, so it will be very interesting to hear what he has to say about this project and any research that Microsoft has conducted.

April 26th 2019 – call with Arran Smith, Microsoft UK Dyslexia & SEND Consultant

I was fortunate enough to have a 40 minute phone call with Arran Smith, the UK Dyslexia & SEND (Special Educational Needs and Disabilities) consultant for Microsoft UK. Arran is dyslexic himself and works at Microsoft to provide research and insight on how their products can be made better in terms of accessibility. I was put in touch with Arran by my former boss, who has worked at Microsoft UK for several years now.

Arran provided a lot of interesting insight and knowledge about what it’s like for dyslexic people to use the computer and pointed me towards research conducted by professors whose theories relate to improving accessibility for the dyslexic. My dissertation is focusing on visual impairments, but dyslexia, whilst not medically a ‘visual impairment’, can make it hard for people to read and interpret content on screens due to difficulties clearly making out the glyphs of letters against background colours.

It’s all a bunch of symbols. I have to use a text-to-speech engine to make sense of it all.

-Arran Smith, when I asked him what it was like for him to use the computer.

Dyslexia is something that I can keep in mind when designing the interface of the prototype.

Research for dyslexia

Arran suggested that I consider the work of Professor John Stein and Professor Arnold Wilkins (both British and researching in the UK). Their findings contradict each other in some ways, but both suggest that the impairment is caused by colours being incorrectly interpreted by the back of the eye. Stein’s work suggests that any colour of the rainbow could be misinterpreted, whereas Wilkins’ work suggests that blue and yellow are the ‘problem colours’.

Arran explained that around 15% of the UK population suffer from dyslexia and around 9% suffer from autism. Whilst both are very worthy causes of research, autism tends to get more funding because it is classed as a ‘medical condition’ which can be diagnosed. Dyslexia is an ‘educational disorder’ and gets less funding, but that doesn’t mean to say that there isn’t sufficient research. Arran also mentioned that research on dyslexia from the US and the UK differs because of different attitudes towards it.

Arran also suggested that I look at the work of E.A. Draffan, a researcher from the University of Southampton specialising in accessibility. Her 2011 research with the Dolphin Inclusive Consortium was conducted for the Department for Education and tested different ways of improving the accessibility of e-learning resources for visually impaired and dyslexic children. Trials showed that more than 50% of pupils improved their reading, writing, confidence, level of achievement and homework completion as a result of the modifications suggested in the report. This is proven design theory for the visually impaired.

Irlen syndrome

He also suggested that I look at Irlen syndrome (or Meares-Irlen syndrome), which is a processing disorder. Irlen syndrome is a problem with the brain’s ability to process visual information, so it is not an optical problem (i.e. a problem with the eyes). It can affect things like concentration, academic and work performance and attention spans. Common symptoms are:

  • Printed text/content and the environment looks ‘different’ to how others see it.
  • Reading can be slow or very inefficient with words often being misread and tracking from line to line slow.
  • Reading comprehension is slowed.
  • Eye strain is increased, leading to increased fatigue and headaches.
  • People with Irlen syndrome tend to have difficulty doing maths.
  • Copying text or images can be hard.
  • Reading music can be difficult.
  • Hand-eye coordination can be poor, e.g. sports performance might be poor.
  • There are some mental health effects too, such as low self-esteem and low motivation.

Sufferers of Irlen syndrome tend to also be bothered by glare, fluorescent lights, bright lights, sunlight and lights at night. This can make sufferers feel sick, dizzy, irritable and anxious.

It’s understood that certain colours irritate the brains of Irlen syndrome sufferers, so to combat this, colour overlays can be used to filter specific wavelengths out of the light reaching the eye, thus preventing them from reaching the brain at all. Physical overlays can be placed over printed content, or tinted glasses can be worn. Software can be used to tint computer screens, and/or software can be designed with different overlays pre-installed.
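As a rough sketch of how a software overlay might be implemented on the web, a semi-transparent tinted layer could be drawn over the whole page with CSS (the class name, colour and opacity here are arbitrary examples of mine, not values taken from the research):

```css
/* A fixed, full-screen tint; pointer-events: none lets clicks
   and taps pass through to the page underneath */
.colour-overlay {
    position: fixed;
    top: 0;
    left: 0;
    width: 100vw;
    height: 100vh;
    background-color: rgba(255, 228, 140, 0.25); /* pale yellow tint */
    pointer-events: none;
    z-index: 9999;
}
```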

Dark interfaces

Windows 10 and a lot of software now feature a ‘dark mode’ to relax the eyes. This could potentially help Irlen syndrome sufferers, though dark modes in general are now becoming a ‘trend’.

Windows 10 also offers the ‘night light’ feature, which is supposed to reduce the amount of ‘blue light’ emitted from computer monitors, since this ‘blue light’ is said to disrupt sleep. Because it reduces the luminosity of the display, it could also help Irlen syndrome sufferers.
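On the web, a site can respect the operating system’s dark mode setting via the prefers-color-scheme media query, which could be worth considering for my prototype (the colours below are just placeholders):

```css
/* Default (light) colour scheme */
body {
    background-color: #ffffff;
    color: #1a1a1a;
}

/* Applied automatically when the OS is set to dark mode */
@media (prefers-color-scheme: dark) {
    body {
        background-color: #121212;
        color: #e0e0e0;
    }
}
```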

Updates to Microsoft Narrator

Narrator is Windows’ built-in screen reader and, according to WebAIM, holds a very small market share, but it has recently been updated. Arran suggested that I look at Microsoft’s recent accessibility webinar to see the changes that have been made.

Offering to test my prototypes

Arran was extremely interested in my dissertation proposal and even offered to test prototypes for me and provide insight and feedback. I will definitely be staying in touch with him and updating him on my progress.

The call was fantastic and an absolute ‘gold dust opportunity’. I was very grateful to Arran for giving up his time to talk to me.


Draffan, E. (2011). The Accessible Resources Pilot Project website. [online] Available at: [Accessed 29 Apr. 2019].

(n.d.). E.A. Draffan | Electronics and Computer Science | University of Southampton. [online] Available at: [Accessed 29 Apr. 2019].

Amen Clinics. (2019). Learn More About Irlen Syndrome and the Best Way to Treat It!. [online] Available at: [Accessed 29 Apr. 2019].

(2017). What is Irlen Syndrome?. [online] Available at: [Accessed 29 Apr. 2019].

It was great talking to Arran from Microsoft UK (Microsoft UK HQ pictured by me, July 17th 2014).

April 29th 2019

With the portfolio site finally updated and written about, I could focus my efforts back on the report proposal and update it with the information that Arran gave me on Friday. University begins again tomorrow, so I wanted to get this done and dusted so that I could go into the first day of the new term with an up-to-date report proposal written and available for critique, if that’s going to happen. The proposal stands at around 1,070 words, including the quote from Arran as the opener and a bit about E.A. Draffan’s project around visual impairment.

Which visual impairment should I focus on?

I can think of three categories that I could focus on:

  • Blindness
  • Colour-blindness and partial sightedness (‘visual impairments’)
  • Dyslexia and Irlen syndrome

They all affect people’s ability to see, but the methods of producing accessible software for them are different.

  • Blind people and partially-sighted people need to use screen readers and generally need to also use physical keyboards.
  • Colour-blind and dyslexic people and sufferers of Irlen syndrome need to use screen overlays and colour-correcting tools.
  • Text-to-speech is also useful for those who struggle with colour impairments.

I could possibly create a prototype that covers all of these areas by:

  • Using ARIA and good HTML markup to make a prototype that is well-structured and works well with screen reader navigation.
  • Researching text-to-speech engines, finding out how they operate, and ensuring that the prototype works well with them.
  • Designing the UI to take into account some colour impairments.

From a construction perspective, building the site for a text-to-speech engine or with colour disabilities in mind is possibly easier than writing perfect HTML markup that is ARIA-compatible and works efficiently with a screen reader.

From a user testing perspective (an important aspect in my technical report proposal!), it will be easier to find dyslexic testers than partially-sighted, colour blind or blind ones.

However, I am most excited about creating a website that works well with screen readers and that the blind could use – I’ve also done quite a bit of research into this already. So the route I am probably going to take with this is:

  • Design the prototype for the blind and screen reader compatibility as the top priority.
  • Add in features such as considerations to the colours of the UI that will benefit people suffering from other visual impairments.
  • Try to test with the blind as a priority, but also test with people who suffer from other forms of visual impairment to assess the usability for them.

For the time being, the report title remains focused on the ‘visually impaired’, but chances are that the actual prototype will be aimed mostly at the blind.

Creating a basic website that is compatible with a screen reader

I’ve wanted to do this for a week and now I’ve done it! Using what I found out last week about best HTML coding practices for screen reader compatibility, I coded a very basic web page consisting of just a few elements to see how it would work if I were to use this with a screen reader, such as NVDA. The HTML markup is below.

    <h1>Accessibility Prototype</h1>
    <p>By Jason Brown</p>
    <a href="page1.html">Page 1</a>
    <a href="page2.html">Page 2</a>
    <a href="page3.html">Page 3</a>
    <a href="page4.html">Page 4</a>
    <a href="page5.html">Page 5</a>
    <h2>Website content</h2>
    <p>This is where content introductions go.</p>
    <h3>A single line of text</h3>
    <p>Updating the portfolio site again has been on my mind since November 2018 (despite the site at the time only being a month old).</p>
    <h3>Some text and an image</h3>
    <p>In February 2019 I had some time off for reading week and spent most of this time beginning to design an updated version of my portfolio site.</p>
    <img src="" alt="Mega menu on the Pendragon Online website." />
    <h3>Unorganised lists</h3>
    <p>This is an example of an unorganised list.</p>
    <ul>
        <li>Item 1</li>
        <li>Item 2</li>
        <li>Item 3</li>
    </ul>
    <h3>Organised lists</h3>
    <p>This is an example of an organised list.</p>
    <ol>
        <li>This item must be done first</li>
        <li>Then this needs to be done</li>
        <li>Finally, let's do this</li>
    </ol>
    <h3>Text formatting</h3>
    <p>This is going to be emphasised: 1, 2, 3: <strong>you should hear this is emphasised!</strong></p>
    <p>This is an italic: 1, 2, 3: <em>you should hear the screen reader interpretation of italics.</em></p>
    <p>The following text will be read in German: <span lang="de">Ich heisse Jason und ich bin 15 Jahre alt.</span></p>
    <p>Prototype produced by Jason Brown.</p>
    <p>April 29th, 2019</p>

I wrapped content in the appropriate ARIA landmarks of banner, navigation, main and contentinfo, which were discussed in my last post about this report proposal. I wanted to see if the screen readers would detect the landmarks and inform the user of them.
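For reference, the landmark wrapping looked roughly like this, using the HTML5 elements that map to those ARIA roles (the explicit role attributes are technically redundant on these elements, but they make the mapping obvious):

```html
<header role="banner">
    <h1>Accessibility Prototype</h1>
    <p>By Jason Brown</p>
</header>
<nav role="navigation">
    <!-- the five page links go here -->
</nav>
<main role="main">
    <!-- the headings, paragraphs, lists and image go here -->
</main>
<footer role="contentinfo">
    <p>Prototype produced by Jason Brown.</p>
</footer>
```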

I also wanted to try the following:

  • See how screen readers would interpret a link.
  • See how screen readers would interpret heading titles.
  • See if the screen reader would read the ‘alt’ text on the image.
  • See how screen readers interpret organised and unorganised lists and what the difference between the two interpretations are.
  • See if the screen reader would interpret the text in the ‘semantic HTML’ (the ‘strong’ and ’em’ tags).
  • See if the screen reader would read the text in the language span in German.

I wanted to make this little prototype to see if it was possible in the first place (if it’s not possible, then doing this for my dissertation is a waste of time!), and also because it’s not only a fun exercise and something very different, but it would also help me understand how screen readers work and how to use one.

When it comes to coding websites for the blind, HTML markup is everything. After all, they can’t see, so CSS doesn’t really matter! But I did include some CSS to show ‘how to make text not reflow’ (a problem that the visually impaired face when increasing the page scale of websites), and the answer to that is to use responsive units of measure to set your font sizes. Here, I am using vh (viewport height).

h1 {
    font-size: 6.5vh;
}

h2 {
    font-size: 5.5vh;
}

h3 {
    font-size: 3.5vh;
}

p, a, li {
    font-size: 2vh;
}

img {
    width: 40vw;
    height: 40vw;
}

nav a {
    display: inline-block;
}

@media only screen and (max-width: 400px) { /* for iPhone demo */
    img {
        width: 80vw;
        height: 80vw;
    }
}
I added the media query for the image so that it scaled correctly on the iPhone that I also tested this prototype website on, and I set the links in the navigation section to display as inline-block so that they appeared next to each other, looking more like a menu.

View the prototype here!

NVDA and Mozilla Firefox on a Windows 10 desktop

After having refreshed myself on how to use NVDA’s keyboard shortcuts, I recorded the video below demonstrating how my prototype works on Mozilla Firefox on Windows 10 when used with NVDA.

Navigation is done with NVDA with the following keys:

  • Shift + any of the keys below: go back to previous element (e.g. Shift+H goes back to previous heading).
  • Tab: navigate through links and window elements.
  • H: navigate to the next heading.
  • D: navigate to the next ARIA landmark, e.g. ‘main’, ‘contentinfo’ etc.
  • 1-6: navigate to the next heading of that level, e.g. 1 for heading level 1, 2 for heading level 2, etc.
  • F: navigate to the next form.
  • T: navigate to the next table.
  • B: navigate to the next button.
  • L: navigate to the next list.
  • I: navigate to the next item in the current list.
  • K: navigate to the next link item.

Before you’ve even used NVDA, just looking at those keyboard shortcuts makes it easy to see why your HTML markup needs to be spot-on in order to make browsing your site a pleasurable experience for somebody suffering from visual impairment.

My prototype worked well with NVDA. NVDA was able to read all of the content and I could navigate through the majority of my prototype with ease. I did, however, note that NVDA did not do the following:

  • Identify any landmark other than ‘main’.
  • Change the tone of voice to emphasise the ‘strong’ or ’em’ tags.
  • Read the text in German correctly (this may be because my installation of Windows 10 did not have any additional languages installed).

Otherwise, I didn’t really have too much trouble navigating this with NVDA. The alt text for the image described the content and the correct use of the heading hierarchy made skipping content easy and fast. It was a shame that NVDA was unable to navigate by the other ARIA landmarks.

VoiceOver and Apple Safari on an iPhone 7

My research has shown that creating accessible websites for desktop computers can be achieved through good use of HTML markup and navigating these with a screen reader and the keyboard is generally the most common way of interacting with them. My research has also shown that the mobile world is where things really need improving and so this is the focus of my dissertation.

The survey by WebAIM named the iPhone the most popular mobile device amongst the blind people that they surveyed, so I borrowed one and used the VoiceOver screen reader to see how my website prototype fared on a mobile device.

I was really curious to see how a screen reader worked on a mobile device, given that smartphones these days do not have keyboards. This is why I suspected that blind users preferred using desktop computers to mobile devices. Apple has spent a lot of time and investment designing the VoiceOver screen reader, which is present in all of their operating systems. On an iPhone and iPad, gestures are used to navigate a site.

  • Scroll: three-finger swipe up or down the screen – VoiceOver tells you how far down the screen you are.
  • Speak the entire screen from top to bottom: two-finger swipe up.
  • Speak the entire screen from the current item: two-finger swipe down.
  • Next element: flick right.
  • Previous element: flick left.
  • Read content: tap – this will either read the text or read the element description (e.g. the alt text for an image will be read).
  • Read single letter: single finger swipe upwards (previous letter) or downwards (next letter).
  • Select: double tap.
  • Find content: drag your finger around the screen and VoiceOver will tell you what’s there.
  • Enable screen curtain: triple tap.
  • Disable speech: triple tap twice.

After reading how VoiceOver works, it seemed obvious that gestures would be how a screen reader would work on a phone, but I hadn’t thought of this!

Below is my prototype running on an iPhone 7 in Safari with VoiceOver enabled.

Again, the prototype worked well on the phone, with VoiceOver able to tell me what heading levels I was looking at and to read the alt text of the image. I can now understand how alt text really helps to describe images to the blind and visually impaired.

VoiceOver also told me how far down the page I was, by saying ‘page 2 of 3’ (for example). Essentially, 100% of the screen occupied would equate to ‘1 page’.

The major difference between using VoiceOver and NVDA (aside from the platform) is that VoiceOver is unable to navigate through headings, landmarks and even items. In NVDA you can press the H key or a number between 1 and 6 to navigate to the next heading of a given level, press D to navigate to the next landmark and press the down arrow key to navigate to the next item. VoiceOver does not have equivalent gestures for these, but there’s only so much swiping and tapping that can be done, so it is easy to see why these features have been omitted.

VoiceOver was much better at reading the German text than NVDA was and even used an authentic-sounding accent! I feel that this may be because iOS comes with languages pre-installed whereas my Windows 10 doesn’t. I’ll have to install the language packs on a Windows machine and try again with NVDA to find out! However, VoiceOver also didn’t emphasise the content inside of the <strong> or <em> tags and was also unable to detect any of the landmarks apart from ‘main’. This makes me wonder whether only a few screen readers can emphasise <strong> and <em> tags and navigate to landmarks besides main, or whether the code I’m using is just outdated and not supported by modern screen readers.

It didn’t seem to matter where on the screen I swiped: I could still move between elements and use the gestures. This has been done because, of course, somebody who is visually impaired might not know whereabouts on the screen they are swiping.

It became apparent to me that the need to constantly tap the screen to find out what’s on it means that swipe elements on mobile websites would be very difficult to interact with. I’m focusing on e-commerce websites in particular for my report, which often feature things like carousels and other swipe elements, and I wonder how these work on an iPhone. It would be really interesting to either try out some existing websites that feature swipe elements and see how they work, or to add some carousels and other swipe content into my own prototype and see what using those is like.

VoiceOver Screen Curtain and what testing my prototype has taught me about how the blind use the computer

By triple tapping the screen it is possible to turn off the iPhone display and still use the phone thanks to VoiceOver. This mode makes a lot of sense on a phone, for example you may want to read private messages in a public place and don’t want other people looking.

For me, this was exceptionally helpful because I was able to see how I’d navigate my prototype without actually being able to see it for the first time. In hindsight, I could have just turned my computer monitor off and used NVDA, but I didn’t think to do that!

The video below shows how it worked.

It worked just the same as before, but this time I was really able to understand how important content structure is, because when you cannot see anything, you need everything said aloud to you. Using the correct heading structure, providing descriptions for elements such as images and videos, and having these read by the screen reader really helps the blind user to build a picture in their imagination of what they’re looking at.

I was also able to understand for the first time how important ‘muscle memory’ is for the blind. I said in the video that you have to remember what content goes where on the page and then use that memory to figure out where you want to navigate to next. If you remember there’s an image with the alt text ‘pretty flowers’ just beneath the menu and you want to navigate to the menu, you know to keep scrolling up until you hear the screen reader say ‘pretty flowers, image’, and then you know that you are near it. For the first time I was able to begin to fully appreciate how difficult it can be for a blind user to use the computer.

But above all, it proves that my idea for the technical report could work. It is possible to create a mobile website for the visually impaired, if a little difficult. So I can research the science behind how users interact with them, research the methods used to build them, produce one and test it.

Next stages for the prototype

I have a few additions in mind.

Focusing on finding out what it’s like for the blind or partially-sighted to view certain elements on the web:

  • Add videos
  • Add audio
  • Add ‘swipe content’ such as slideshows or carousels
  • Add buttons
  • Add forms

Basically keep adding HTML elements and find out how these work with the screen readers.

Focusing on answering the questions raised by today’s testing:

  • Find out if/which screen readers do emphasise text in semantic HTML tags.
  • Find out if/which ARIA landmarks can be detected by screen readers besides ‘main’ and possibly ‘section’.

Focusing on making this a prototype that is accessible to the ‘visually impaired’ (covering blindness, partial-sightedness, colour blindness and dyslexia):

  • Research colour theories and experiment with seeing how different colours for the UI help people with different visual impairments use the website.
  • Research how text-to-speech engines work and how these can be made to work better with websites.
  • Research how magnifying software works and how websites can be made to scale better.


Apple Support. (n.d.). Learn VoiceOver gestures on iPhone. [online] Available at: [Accessed 29 Apr. 2019].

Apple (United Kingdom). (n.d.). Vision Accessibility – iPhone. [online] Available at: [Accessed 29 Apr. 2019].

(2017). WebAIM: Using NVDA to Evaluate Web Accessibility. [online] Available at: [Accessed 29 Apr. 2019].