A really fun day today – probably one of the best days I’ve had at university for a long time! Today I was introduced to making gyroscope-driven 360 degree images with the ‘Marzipano’ JavaScript library and a Samsung Gear 360 camera, and then to a markup language called LaTeX, which will help me write my dissertation.

360 Degree Images with Marzipano

The prototype that I showed the Broads Authority on February 12th featured a panoramic image of Burgh Castle, compiled by stitching 22 photographs together in Adobe Photoshop. This panorama was then ‘panned around’ using a very simple jQuery-based script that just zooms in on the image and pans around when the user taps on it. It was supposed to replicate what I learned how to create today – a proper 360 degree image that moves with the movement of the device, letting the user almost experience being in the location. This is possible using a JavaScript library called ‘Marzipano’.

The basics of 360 degree images

360 degree images can be a slightly tricky concept to understand, and there are two common methods of creating them:

  • One method is to create a ‘net’ of six squares that, when put together, form a box (a ‘cube map’). Each square shows a part of the environment, and together they create a scene. This is used a lot in video games.
  • The other common method is to use a 360 degree camera to produce a photograph and then ‘wrap’ this photograph around a ‘sphere’ that moves when the device moves.

The method explained in this post is the second.

The 360 degree camera features two sensors and two lenses, meaning that the photographs it takes look like the image below.

The Gear 360 produces images that look a bit like this.

360 degrees is too wide an angle for a single lens to photograph, so combined, the two lenses produce one image that covers the full 360 degrees. There is Samsung software that does this automatically, but the Gear 360 is designed to be used with a smartphone – so this software is an app. It may be possible to do this in Adobe Photoshop, but instead I used a website called Nadir Stitch, which lets you upload an image from the Gear 360, stitches it for you and gives you a JPEG to download. The original image from the Gear 360 is 16.7 MP and the stitched image is also 16.7 MP – but more importantly the aspect ratio is 2:1, meaning the width is twice the height. This makes the image what’s known as ‘equirectangular’, which means it will wrap onto a sphere easily.

The final stitched image.

The final stitched image has some interesting perspective and lines because it is so wide, and there are some imperfections in the stitching. For the sake of today’s experimentation with Marzipano this is absolutely fine, but for the final production piece you’d want to tidy the stitching up and get a better result. The image quality and dynamic range of the images from the Gear 360 are very good, but when we come to produce the next prototypes I’ll add much more styling – such as using Photomatix Pro to really ‘over-dramatise’ the look. The idea is that if these were being used somewhere like Burgh Castle, some photo editing to make the image look more dramatic could work well.

A basic diagram explaining how the equirectangular format works.

There is a small amount of maths, and a few other considerations, involved in displaying 360 degree images. The diagram above shows an image that is 2048px wide and 1024px tall: 2048 × 1024 is roughly 2.1 million pixels, so about 2 MP, which as a reasonably compressed JPEG works out at around 2 MB. That means the file is small enough to load on a mobile broadband connection out in the field (a factor that has to be considered), but the image may not be very clear on a larger or higher resolution display. When phones had low resolution displays, such as 480p and 720p screens, this wouldn’t have been a massive problem; phones these days have screens that are higher resolution than most desktop computers, so it is a problem now. A somewhat higher resolution image may be needed, but nothing extremely high resolution, because the device only ever displays a small portion of the image at a time. I created a 4096×2048 image in the end, which is roughly 8 MP.

The other considerations are things that may appear in the frame – people, tripods, shadows and the like. It doesn’t matter where you stand: unless you run around in a circle whilst the image is being taken so that the camera doesn’t catch you, you will appear in the frame! Similarly, anything beneath the camera, such as a tripod or a stand, will also appear. The only way to remove these is with something like Content-Aware Fill in Photoshop. Shadows and sunlight are also a problem – I experienced this when I made my panorama of Burgh Castle. The issue is not that they exist, but that some areas of the frame may appear lighter or darker than others, and that can be hard to correct – some of the individual frames that made up my Burgh Castle panorama were more exposed than others due to sunlight.

It’s very hard not to get yourself and anything underneath the camera (such as hand-straps and tripods, shown in the blue squares) in the frame. I was almost underneath the table and still most of my face was in frame!

The other thing that was considered was labels. These are elements that appear on top of the panoramic image and provide the user with information. If you add the labels onto the source image itself, they can’t be hidden, changed or interacted with at all. So instead, they’re placed over the image and positioned using code.

The Samsung Gear 360 (2017)

This is the camera that was used to create the image. It’s very small, lightweight and portable and, as mentioned, produces nice quality images at a decent resolution. I’m a big fan of high-end DSLRs and an avid photographer – hence the Nikon D500 I own – but I really enjoyed using this little 360 degree camera today. It’s the first time I’ve ever used one, and for this kind of thing it is nearly perfect. The Gear 360 is designed to be used with the smartphone app, but it can be used without it too. I didn’t have the app, so I used it without and got on fine.

The Gear 360 was used to take the 360 degree images.


The Gear 360 gets a good review from me! It was fun using it today!

The code

Marzipano makes this very easy – it’s just a JavaScript library that is referenced at the bottom of the HTML file, and there is even an additional file that handles the use of the device gyroscope to move the image around – so none of that needs to be coded manually.
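As a rough sketch, the references at the bottom of the HTML file look something like this – the file names are taken from Marzipano’s demo projects, so treat them as assumptions rather than the exact ones in our template:

<script src="marzipano.js"></script>
<script src="DeviceOrientationControlMethod.js"></script>
<script src="index.js"></script>

The third file is where our own setup code lives.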

Above line 80 is inline CSS, which just defines the styles for the few elements on the screen, such as the button that activates or deactivates the gyroscope (by default it is disabled – when it is disabled, the user can swipe/drag or click and drag on the image to move it around).

Line 81 defines a new empty div, which the panoramic image sits in. Lines 82-85 define the button that enables the gyroscope.
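Roughly speaking, that markup looks like this – the IDs here are my own illustrative names, not necessarily the template’s:

<div id="pano"></div>
<button id="toggleDeviceOrientation">Enable gyroscope</button>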

Lines 91-97 define the first type of interaction the user can perform – the ‘reveal’ type. Line 92 is the image that goes in the button that activates the information panel, and line 94 is the image that appears in the little dialog box above the text. When the user taps on the blue circle above the image, a small div containing an image and some information is shown. This type of interaction could be good for presenting fairly detailed, but succinct, information.

This dialog could be great for displaying a small amount of information and perhaps an image of how something looks or used to look.

Lines 100-110 define the other type of interaction – the ‘textInfo’ type. The user taps on one of the grey circles and a small label comes up – ideal for explaining small details in two or three sentences at the most. A sketch of the markup for both hotspot types follows below.


This type of dialog is better for displaying short bits of text, perhaps information or names of objects in the frame.
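The markup for the two hotspot types looks roughly like the sketch below. The class names, IDs and image paths are my own illustrative inventions, not the template’s exact ones:

<!-- 'reveal' hotspot: an icon that opens a panel with an image and text -->
<div id="hotspot-castle" class="hotspot reveal">
  <img class="hotspot-icon" src="img/info-blue.png" alt="Info">
  <div class="hotspot-panel">
    <img src="img/castle-then.jpg" alt="How the site may have looked">
    <p>Fairly detailed, but succinct, information goes here.</p>
  </div>
</div>

<!-- 'textInfo' hotspot: an icon that opens a short text label -->
<div id="hotspot-gate" class="hotspot textInfo">
  <img class="hotspot-icon" src="img/info-grey.png" alt="Info">
  <p class="hotspot-label">Two or three sentences at the most.</p>
</div>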

Lines 165-191 of the code are fairly simple and do not need a lot of changing from the default template. Essentially, the two libraries are referenced on lines 166 and 167, and a new object called ‘viewer’ is defined on line 169. ‘Viewer’ is a Marzipano Viewer – this object is defined in the Marzipano.js library. Line 170 defines the device orientation capabilities. The only three lines that need editing are line 174 (the image width needs to be defined here), line 176 (the width needs to be placed into that equation so that Marzipano can wrap the image around the sphere) and line 181, which is the location of the 360 degree image.

Lines 184-189 define the new scene, which contains the panoramic image and the elements.
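Put together, the setup looks roughly like this. This is a sketch based on Marzipano’s documented API rather than a copy of our template – the file name and image width are assumptions:

// Create the viewer inside the empty div
var viewer = new Marzipano.Viewer(document.getElementById('pano'));

// The stitched equirectangular image and its width in pixels
var source = Marzipano.ImageUrlSource.fromString('img/burgh-castle-360.jpg');
var geometry = new Marzipano.EquirectGeometry([{ width: 4096 }]);

// Limit zoom so the view never exceeds the source resolution
var limiter = Marzipano.RectilinearView.limit.traditional(4096, 100 * Math.PI / 180);
var view = new Marzipano.RectilinearView({ yaw: 0, pitch: 0 }, limiter);

// Create the scene containing the panorama and display it
var scene = viewer.createScene({ source: source, geometry: geometry, view: view });
scene.switchTo();

// Register the gyroscope control method (from DeviceOrientationControlMethod.js);
// it is enabled later, when the user taps the button
viewer.controls().registerMethod('deviceOrientation', new DeviceOrientationControlMethod());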

Positioning the elements that display the information is done through JavaScript code. Like flying a plane, the elements are positioned by altering the pitch (up/down) and the yaw (direction/heading) – the same terms used to describe an aircraft’s attitude.

RAF Tornado GR4 pulling a very steep pitch, about to go into a loop.

Each element has a unique ID that can be referenced, but they all share a common CSS class. This means they can share styles (since they are part of the same class) but can still be referenced as unique items and thus moved and treated separately. Pitch and yaw values are relative to a centre point, which is 0,0. To find the centre point, I positioned one of the elements at 0,0 and made a note of where this was. Interestingly, although it is close to the absolute centre of the image, it is not perfectly central – one thing to bear in mind. The photograph shows my Surface on the left with the element positioned at 0,0, compared with the same image on a MacBook with the exact centre of the image marked (in Adobe Illustrator).

The Surface on the left shows the centre of the image according to MarziPano and the MacBook on the right shows the centre of the image according to Adobe Illustrator. It’s not quite the same place.
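Creating and positioning one of the elements then looks something like this – a sketch using Marzipano’s hotspot API, with illustrative values and the element ID from the markup sketch above (the sign conventions are listed below):

// Attach an existing DOM element to the scene at a given pitch and yaw.
// Values are angles in radians, e.g. 45 degrees is 45 * Math.PI / 180.
scene.hotspotContainer().createHotspot(
  document.getElementById('hotspot-castle'),
  { yaw: 0.8, pitch: -0.15 }  // a little right of centre and slightly up
);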

With this taken into account, experimenting with the numbers showed that:

  • Negative pitch numbers moved the element upwards.
  • Positive pitch numbers moved the element downwards.
  • Negative yaw numbers moved the element left.
  • Positive yaw numbers moved the element right.

In the example of the Tornado flying above, you can see that the pilot is pulling a very steep nose-up pitch, inverting the aircraft – positive pitch in aviation terms, although note that for the elements here, upward movement is negative.

This is shown nicely in the diagram I drew below.

It’s also worth noting that pitch and yaw values were generally between -3.1415 and +3.1415 – 3.1415 is of course pi, because the values are angles measured in radians, and pi is used a lot in circular and spherical maths. The pitch values were also much more sensitive than the yaw values – each unit of pitch represented a larger distance/space than the equivalent unit of yaw.

Other than CSS styling for the buttons and their elements (all positioned absolutely so that they stay in the same place regardless of screen resolution), that is it for the code – or at least the bits that need to be modified.

The result

The result is positive! Look at the prototype here – best run on your phone or tablet so that you can use the gyroscope! See the video below for a demonstration of it running on my Surface Pro 4 in Google Chrome.

It was really cool and good fun to run this on a 12″ tablet, but of course people will probably be more likely to use this on their mobile phone, so around 5-6″ is more realistic.

This is better than the app I tried at Caistor last week for a number of reasons:

  • I didn’t need to download an app or anything to make this work – just viewed it in the web browser.
  • I could get this to work on my five and six year old Windows Phones – I couldn’t download the Caistor app for Windows!
  • The battery life on the device was not affected at all by the use of this.
  • It’s so quick and responsive – even running on the old Nokia Lumia 925.

The gyroscope worked with my older devices, but it seems that old versions of Mobile Internet Explorer and Microsoft Edge have some issues displaying the information in the hotspots – see the video below.

It’s a positive experience and it works really well! If you had a perfect 360 degree image to work with this would be really cool.

It was great fun using the app prototype on the Surface Pro 4.

Marzipano was a nice library to work with. Very simple to grasp, and the code is nice and logical. It also appears to be cross-browser compatible. I’d definitely use it again.

Application

This would definitely be best-suited to somewhere like Burgh Castle, or another location where key points of interest need to be shown, or where a view of how a site looked at different points in time or different times of year would be valuable. Some ideas:

  • Burgh Castle: show how the site looked in Roman times – show warships, a complete castle, something similar.
  • Herringfleet Marshes: show the site over time, from it being an island to being agricultural land with freight trains passing by to what it is today.
  • Carlton Marshes: show how the site looks at different times of the year.
  • Breydon Water: show how the site has changed over time in terms of sea levels and the landforms etc.

The modern-day view likely would not be shown; instead, the user would be looking at a render of something from years ago. There would definitely need to be a point for the user to stand at, facing a certain direction, when opening the app, so that they can align themselves with the modern-day view. The user could then move their device around and see the conception of how the exact place they are facing used to look. I’ve mentioned the WymTrails app several times lately, but the footprint idea that this app uses for wayfinding could work well to mark a place to stand.

One of the paw prints that directs users of the WymTrails app. Visual navigational cues could be used to help users to find the locations or instruct users where to stand to get the best out of the app.

What’s next?

We’ve got a whole day tomorrow with the graphics students to develop this idea further. We also had a quick meeting this afternoon to see where each team was; we showed this to them and they loved it. We need to consider other technical prototypes to make too – one for a trail app and one for a health or other type of mapping app, perhaps. The idea is that eventually these can be put into a fully coded and functioning prototype for the Broads Authority.

Meeting with the graphics students

We had a 45 minute meeting with them just to fill each other in on where we were. They loved the gyroscopic app and we loved what they showed us. In short, they’ve:

  • Developed their own typeface to use for headings – it’s inspired by 14th century type (from about the time the Broads were formed) and also happens to look like tread marks from walking boots.
  • Conducted some identity research by looking at campaigns such as This Girl Can and at what the National Trust does to get the word across. They’ve also looked at what the Broads Authority currently has to promote the Angles Way, such as posters and leaflets. It turns out there are information leaflets for each of the key towns on the walk, such as Beccles, Bungay, Diss and Thetford.
  • Thought about specific areas they could focus on, such as the health aspect of walking (mainly physical health). They’ve come up with The Angles Way, My Way, inspired by This Girl Can.
  • Said they want to design some posters and leaflets to help develop the brand identity.

They want to show us a presentation of what they’ve made tomorrow.

We’re all getting on really well and we are enjoying working with each other. I’m really pleased by how much they have done and how interested and invested in the project they are becoming. It makes working with them great fun!

LaTeX

Today we also had an hour’s session on a service called Overleaf, which helps you quickly write complex documents such as dissertations, academic reports and CVs using a very simple mark-up language called LaTeX to format, cite, chapter and organise them. There’s even a whole Stack Exchange site dedicated to LaTeX help and support, so it is a very widely-used language.

By using simple bits of code such as:

\section{Introduction}

You can add sections and chapters to your work (the above example adds a section called ‘Introduction’) really easily, without needing to fiddle with formatting in Word. Better still, it will number each section, and as you add and remove sections it will renumber them accordingly. Sub-sections can be added by changing ‘section’ to ‘subsection’, as in the sketch below.
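For example (the section names are just placeholders):

\section{Introduction}
Some introductory text.

\subsection{Background}
This becomes section 1.1 and renumbers itself as sections are added or removed.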

Things that took several clicks at best in Microsoft Word can now be done with one simple line of mark-up in LaTeX, such as creating a table of contents – which is done by typing:

\tableofcontents

Typing that in the editor will create a fully-formatted table of contents that updates as and when you add or remove sections. Perfection!
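A minimal document showing this in context might look like the sketch below:

\documentclass{article}

\begin{document}

\tableofcontents

\section{Introduction}
Introductory text here.

\section{Method}
Method text here.

\end{document}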

Overleaf has a nice web-based IDE with the LaTeX code on the left and a PDF view on the right, which is updated each time you click ‘Recompile’.

It also has really good citing and referencing features. You can create a bibliography document within Overleaf and then get it to cite and reference directly from that. A lot of sites hosting academic papers make citations available in the *.bib (BibTeX) format, which Overleaf uses – you can copy and paste the BibTeX code from these sites into the *.bib file, one entry per reference. In the example below I’m citing a source written by somebody called Curtis in 1997; the paper is called ‘Computer-generated watercolor’. For inline citations you can use:

Quotation from the source here (\cite{curtis1997computer}).

Which will print:

Quotation from the source here (Curtis et al. (1997)).

In the report text.
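The corresponding entry in the *.bib file would look something like this – I’m sketching the fields from memory rather than copying the exact entry:

@inproceedings{curtis1997computer,
  title     = {Computer-generated watercolor},
  author    = {Curtis, Cassidy J. and Anderson, Sean E. and Seims, Joshua E. and Fleischer, Kurt W. and Salesin, David H.},
  booktitle = {Proceedings of SIGGRAPH},
  year      = {1997}
}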

To add the bibliography at the end of the report, you can use:

\bibliographystyle{agsm}
\bibliography{bibliography}

Which will print the bibliography wherever that code is placed. The first line dictates that the AGSM (Harvard) citation style should be used, and the second that the bibliography called ‘bibliography’ should be used – this is the ‘bibliography.bib’ file I made, which contains the BibTeX entries. It also arranges them in alphabetical order.
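In context, the whole citing setup might look like this minimal sketch – the \usepackage{natbib} line is an assumption on my part, as agsm is a Harvard-style file normally loaded alongside natbib:

\documentclass{article}
\usepackage{natbib} % assumed: provides the author-year citation commands

\begin{document}

Quotation from the source here (\cite{curtis1997computer}).

\bibliographystyle{agsm}
\bibliography{bibliography} % bibliography.bib in the same project

\end{document}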

The referencing abilities are very helpful.

This seems like a really good system for formatting and generating reports. The mark-up short-codes make it easy to complete operations that would otherwise be tiresome or long-winded in Word or another word processor. I had never heard of it until now, but it seems a great idea!

I will need some practice to make sure I know how it all works before I use it to start writing my dissertation proposal.

What’s next?

Get practicing and continue to come up with an idea to write about for my dissertation, then consider the different types of dissertation available.