%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%
% IMPERIAL COLLEGE LONDON DISSERTATION TEMPLATE
%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%
% Copyright (c) 2008, Daniel Wagner, www.PrettyPrinting.net
% http://www.prettyprinting.net/imperial/
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\documentclass[MSc,paper=a4,pagesize=auto]{icldt}
%\usepackage{showframe}
%\usepackage{cleveref} % lets us use chapter references
\usepackage{graphicx} % lets us import graphics nicely
\usepackage{caption} % to go with lstlisting
\usepackage{subcaption} % lets us use subfigure
\usepackage[binary-units=true]{siunitx} % and friendly SI unit shiz
\setcounter{tocdepth}{1} % don't show sub-sections in the TOC
% next two lines suppress `underfull hbox' warnings caused by URLs in bib file
\usepackage{etoolbox}
\apptocmd{\sloppy}{\hbadness 10000\relax}{}{}
\usepackage{todonotes} % TODO
\usepackage{dirtree} % directory structure
\usepackage{scrhack} % to avoid the warning that \float@addtolists is deprecated
\usepackage{listings} % lets us insert software code and highlight
\usepackage{color} % required to set background colour
\lstset{
language=[Visual]C++,
keywordstyle=\bfseries\ttfamily\color[rgb]{0,0,1},
identifierstyle=\ttfamily,
commentstyle=\color[rgb]{0.133,0.545,0.133},
stringstyle=\ttfamily\color[rgb]{0.627,0.126,0.941},
showstringspaces=false,
basicstyle=\tiny, %small,
% numberstyle=\footnotesize,
% numbers=left,
% stepnumber=1,
% numbersep=5pt,
tabsize=2,
breaklines=true,
prebreak = \raisebox{0ex}[0ex][0ex]{\ensuremath{\hookleftarrow}},
breakatwhitespace=true,
aboveskip={1.5\baselineskip},
% captionpos=b,
columns=fixed,
extendedchars=true,
backgroundcolor=\color[rgb]{0.9,0.9,0.9},%\color{white},
% title=\lstname
}
%\usepackage[usenames,dvipsnames]{color} % know about colours
\definecolor{grey}{RGB}{102,102,102}
\DeclareCaptionFont{white}{\color{white}}
\DeclareCaptionFormat{listing}{\colorbox{grey}{\parbox{0.97\textwidth}{#1#2#3}}}
\captionsetup[lstlisting]{format=listing,labelfont=white,textfont=white}
% Questionairre definitions based on Sven Hartenstein's examples
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\usepackage{wasysym}% provides \ocircle and \Box
\usepackage{enumitem}% easy control of topsep and leftmargin for lists
\usepackage{forloop}% used for \Qrating and \Qlines
\usepackage{ifthen}% used for \Qitem and \QItem
\usepackage{typearea}
\newcommand{\Qq}[1]{\textbf{#1}}
\newcommand{\QO}{$\Box$}% or: $\ocircle$
\newcounter{qr}
\newcommand{\Qrating}[1]{\QO\forloop{qr}{1}{\value{qr} < #1}{---\QO}}
\newcommand{\Qline}[1]{\noindent\rule{#1}{0.6pt}}
\newcounter{ql}
\newcommand{\Qlines}[1]{\forloop{ql}{0}{\value{ql}<#1}{\vskip0em\Qline{\linewidth}}}
\newenvironment{Qlist}{%
\renewcommand{\labelitemi}{\QO}
\begin{itemize}[leftmargin=1.5em,topsep=-.5em]}{ \end{itemize}
}
\newlength{\qt}
\newcommand{\Qtab}[2]{
\setlength{\qt}{\linewidth}
\addtolength{\qt}{-#1}
\hfill\parbox[t]{\qt}{\raggedright #2}
}
\newcounter{itemnummer}
\newcommand{\Qitem}[2][]{% #1 optional, #2 required
\ifthenelse{\equal{#1}{}}{\stepcounter{itemnummer}}{}
\ifthenelse{\equal{#1}{a}}{\stepcounter{itemnummer}}{}
\begin{enumerate}[topsep=2pt,leftmargin=2.8em]
\item[\textbf{\arabic{itemnummer}#1.}] #2
\end{enumerate}
}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% Essential Setup
\title{Seeing the Big Data: Virtual Reality Visualisations of Large Datasets}
\author{Alexander Zawadzki (az2713)}
\date{September 2014}
\department{Computing}
% Optional
\supervisor{Professor Daniel Rueckert}
\dedication{
\textbf{\huge Acknowledgements and thanks:}
\newline
\newline
Dr Paul Gass and Dr Ian Thompson, my bosses at Sharp Laboratories of Europe Ltd., for arranging for my funded sabbatical year at Imperial College London.
\newline
\newline
Colleagues Dr Nathan Smith, Dr Graham Jones, Dr Jon Mather, and Dr Andrew Kay for sharing their passion for 3D display systems. Especial thanks to John Nonweiler, for setting an example of how software development \textit{should} be done.
\newline
\newline
Supporters Aashish Chaudhary at Kitware, and Brad Davis at ORIA for providing insight into VTK and the Oculus Rift SDK respectively.
\newline
\newline
Local Biomedical Image Analysis experts Kevin Kerdauren, Lisa Koch, Dr Sarah Parisot, and especially Dr Bernhard Kainz for help and advice.
\newline
\newline
Last, but not least, my supervisor Professor Daniel Rueckert. I am deeply grateful for your guidance and the opportunity to run this project.
}
\begin{document}
\maketitle
\begin{abstract}
%\textbf{A Zawadzki, Department of Computing, Imperial College London}
%\\ \textbf{Abstract of Master's Thesis, submitted September 4th 2014}
%\\ \textbf{Seeing the Big Data: Virtual Reality Visualisations of Large Datasets}
A new pipeline is implemented for the visualisation of Human Connectome Project (HCP) data. HCP data are converted to an intermediate format, and then imported into the Visualisation Tool Kit (VTK). Through a series of extensions to the VTK pipeline, the data are rendered as distorted stereographic 3D images and output to an Oculus Rift virtual reality headset. Sensors on the headset track the user's head position and dynamically update the rendered image. The complete pipeline allows an immersive 3D display of data, with native access to all VTK functionality. A user study is conducted to evaluate the pipeline, and the strengths and weaknesses are discussed.
%\hfill --- Alexander Zawadzki
\end{abstract}
\makededication
%\iffalse
\tableofcontents
%\listoftables
\listoffigures
\chapter{Introduction}
\section{Context}
Mapping the structure of the human brain is key to understanding its function. The ongoing Human Connectome Project (in the USA) and the Developing Human Connectome Project (in the UK) are dedicated to mapping and analysing the neural connections of developed and developing human brains.
These Connectome projects produce very large datasets describing the structural and functional connectivity of neurons in the brain. Multi-gigabyte datasets are common; as of June 2014 the Human Connectome Project had generated over 20 terabytes of data. The volume of data produced and the 3D nature of the data make it difficult to visualise in 2D form. A typical functional connectome matrix, for example, may have 3,000 by 3,000 elements and in 2D form bears no resemblance to the physical shape of the fibres.
Advances in computing power and display technology offer the capability to visualise these data sets in new ways. The dominant display technology today is the LCD computer monitor: large, high resolution, and inherently 2D. Recent developments in virtual reality technology, in particular the mass-market launch of Oculus Rift development kits, offer new and exciting opportunities.
Do researchers want to view data in 3D? What data is suited to this type of display? Is the performance of current VR display systems adequate for practical use? This project has sought to answer these questions by developing a demonstration system for the virtual reality display of medical image data. The demonstration system was used to conduct a survey in the Biomedical Image Analysis Group, and gather expert feedback. Results showed a significant interest in the use of Virtual Reality visualisation tools, and suggestions provided ideas for further development.
\newpage
\section{Contributions}
This project has investigated the use of consumer grade virtual reality hardware, specifically the Oculus Rift DK1, as a tool for visualising and interacting with large datasets. The project has contributed to the open source community in a number of ways:
\begin{itemize}
\item Source code was ported from Paraview with encouragement from lead developer Aashish Chaudhary in order to bring Rift functionality to VTK.
\item A plug-in was developed for Blender in order to allow the import of plain text\footnote{Also known as `legacy' VTK format, it is used as widely as `modern' XML format.} .vtk format data.
\item A number of bugs were identified and fixed in the Linux version of the excellent Oculus Rift in Action (ORIA) project. The project was forked on GitHub, at the suggestion of the project owner Brad Davis, to share these bug-fixes with the community.
\item When relevant, assorted software problems were documented on Stack Overflow in `Question and Answer' format. The response to these posts has been very positive, bumping my account ``\textit{GnomeDePlume}'' to the top quartile of contributors in 2014.
\end{itemize}
In comparison to the size and scope of the VTK, Blender and ORIA projects, the contributions to these projects have been very small. Nevertheless, contributions were made in the spirit of open source collaboration and have incrementally advanced the state-of-the-art. All software can be accessed from my GitHub account: \texttt{github.com/zadacka}.
The academic impact of this work has been modest: demonstrations within the Biomedical Image Analysis Group, and a formal presentation to my employers at Sharp Laboratories of Europe\footnote{Following which they offered me a raise, so some good has come of it.}. This MSc thesis will be the main academic publication resulting from the project, and it is hoped that the documented source code, image content and demonstration system will be a useful resource for future project work.
\newpage
\section{Structure of the Report}
\label{sec:structure_of_the_report}
As the project dealt with implementing a pipeline, illustrated in Figure~\ref{fig:the_full_pipeline}, it seems natural for the report to follow the process from raw data acquisition to the final display image.
\begin{figure}[htbp!]
\centering
\includegraphics[scale=0.5]{resources/data_pipeline_overview}
\caption{An overview of the project pipeline.}
\label{fig:the_full_pipeline}
\end{figure}
Following this logic, Chapter~2 discusses how the Human Connectome Project (HCP) maps human brain structures using DTI and fMRI imaging, and discusses existing visualisation techniques. Chapter~3 discusses the HCP data structure, and how it may be converted to intermediate formats. Chapter~4 shows how a Visualisation Tool Kit (VTK) pipeline can be constructed and HCP data can be processed for interaction and display on a standard 2D computer monitor.
Chapter~5 breaks away from the sequential structure in order to introduce the Oculus Rift hardware, and shows how the collimating optics in the headset will deform images unless extra distortion shaders are used. The report then returns to the VTK pipeline in Chapter~6 to discuss the good, the bad and the ugly ways to apply shaders in VTK.
With a complete pipeline for converting and displaying content, Chapter~7 discusses how the system was tested and evaluated. Results from the user study are summarised, and suggestions for further work are explored.
Finally, Chapter~8 concludes by synthesizing findings from each stage in the project, and reviewing interesting learning outcomes.
\chapter{Connectome Data}
This chapter discusses why visualising brain data is important, and the suitability of HCP connectome data for this project. Different experimental techniques for gathering data are compared, with particular attention paid to the type of data that each technique produces. Current data visualisation systems are introduced, with the conclusion that tractography is particularly well suited to display in 3D.
\clearpage
\section{Motivation for using Connectome Data}
Mapping and visualising the brain is of considerable academic and practical interest. The map or ``wiring diagram'' of neurones in a brain is called a \textit{connectome}. The brain is thought to be composed of motor, sensory, behavioural state, and cognitive systems \cite{Swanson2003}. Better knowledge about the physiology of the brain can help in understanding these systems. Knowledge of the structure of an individual brain can be helpful when diagnosing medical conditions, or planning a surgical procedure \cite{Golby2011}. In either case, the interested parties need to have data, and be able to draw useful inferences from this data. This project aims to work on the second of these challenges: developing new ways of displaying and interacting with existing data.
\begin{figure}[htbp!]
\centering
\includegraphics[width=0.8\textwidth]{resources/hcp_logo}
\caption{The Human Connectome Project logo}
\label{fig:hcp_logo}
\end{figure}
The high quality and availability of connectome data makes it an excellent source of content for visualisation. The Human Connectome Project is a world-class source of data, and thanks to the Open Access Data license terms \cite{HCPAccess2014} all of this content can be obtained free of charge. Imperial College has an active research program in the medical imaging field, and the project has benefited greatly from access to friendly local expertise. As an illustration of this good fortune, Imperial College is one of only four UK universities involved in the Developing Human Connectome Project \cite{DHCPPartners2014}, and numerous experts\footnote{Kevin Kerdauren, Lisa Koch, Dr Sarah Parisot, and especially Dr Bernhard Kainz} from the Imperial College team have directly assisted this project.
\clearpage
\section{Sources of Connectome Data}
The process used to generate connectome data determines the resolution and format of the data obtained.
\subsection{Electron microscopy (EM)}
EM gives nanometre resolution images of neural structure. This is an ex vivo process: samples must be taken from the brain. EM techniques can give a resolution of \SI{5}{\nm} in the x-y plane, with a \SI{30}{\nm} slice thickness, and generate an enormous amount of raw data, of the order of \SI{1}{\pebi\byte\per\mm\cubed} \cite{Jeong2010}.\footnote{A data density greater than two hundred thousand DVDs per cubic millimetre or, in old money, eleven \textit{billion} floppy disks per cubic inch.} Due to the high resolution and small sample sizes accommodated by EM techniques, the only connectome to have been completely mapped out using this technique is that of the nematode \textit{C. elegans} \cite{White1986}.
\subsection{Optical microscopy (OM)}
OM techniques can be used in conjunction with fluorescent markers to selectively tag and examine brain matter. As with EM, this is done ex-vivo. This approach allows for much larger samples to be analysed, at a resolution of approximately \SI{0.35}{\um} in the x-y plane\footnote{Fluorescent markers allow resolution beyond the Abb\'{e} limit.}, and with a slice thickness of \SI{100}{\um}. This approach has recently been used to map the entire mouse connectome \cite{Oh2014}.
\subsection{Magnetic Resonance Imaging (MRI)}
MRI scans may be used in combination with Diffusion Tensor Imaging (DTI-MRI) as an in-vivo technique for determining larger scale structure. Diffusion of water within the brain is influenced by the structure of the brain matter, since water can diffuse more easily along bundles of nerves than perpendicular to them, and so the diffusion patterns allow local structure to be inferred. The resolution of this technique is typically low, approximately \SI{1}{\mm} in the x-y plane \cite{Westin2002}. A related technique is functional MRI (fMRI), which detects neural activity from measurements of blood-oxygen changes in the brain \cite{Huettel2004}. The HCP and DHCP projects have used MRI techniques to map hundreds of human connectomes.
\section{Storing Connectome Data}
There are no standardised formats for storing raw connectome data, as the form of the data depends on the experimental technique used to obtain it (EM, OM, MRI) and the specific equipment used. Thankfully the HCP project does have a set of data format and structure conventions. The HCP includes DT-MRI and fMRI measurements from multiple institutions and multiple different scanners, and has been careful to ensure that different data samples may be compared in a valid way \cite{HCP_Logistics_2014}. This subject is covered in more depth in Section~\ref{sec:the_structure_of_hcp_data}.
Fast access to connectome data is essential when working interactively with high resolution images, and custom data structures have been developed to support this. Previous publications \cite{Jeong2010} have discussed working with a \SI{75}{\giga\byte} connectome dataset, stored as an octree in order to enable fast access to the desired parts of the dataset. This concept is illustrated in Figure \ref{fig:octtree}. The authors also store sub-sampled data at various resolutions in order to allow responsive zooming into the data set, and cache data at GPU, CPU and process levels.
\begin{figure}[htbp!]
\centering
\includegraphics[width=0.8\textwidth]{resources/octtree}
\caption{An octree corresponding to a subdivided cube.\cite{octtree2010}}
\label{fig:octtree}
\end{figure}
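As a rough illustration of the idea (a hypothetical sketch, not the data structure from \cite{Jeong2010}), each octree node either stores a block of voxels or eight children that subdivide its cube, so a viewer can fetch only the regions it needs at the resolution it needs; Listing \ref{octree_sketch} shows one possible node layout.
\begin{lstlisting}[label=octree_sketch, caption={A hypothetical octree node illustrating the subdivision concept.}]
#include <vector>

// Hypothetical sketch only: field names and layout are illustrative.
struct OctreeNode {
  float origin[3];             // corner of this node's cube in dataset space
  float size;                  // edge length of the cube
  OctreeNode* children[8];     // null pointers for leaf nodes
  std::vector<float> voxels;   // voxel block stored at leaf nodes only
  bool isLeaf() const { return children[0] == nullptr; }
};
\end{lstlisting}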
The pre-processing techniques described in Chapter 3 were used to produce simple geometric structures with correspondingly small file sizes. These simple structures, with file sizes less than \SI{10}{\mega\byte}, could be stored in RAM or even in video memory. Exotic data structures may be required in the future to support more complex content and higher-resolution VR displays.
\clearpage
\section{Visualising Connectome Data}
A number of common visualisations exist for connectome data. Visualisations can be divided into three types: structural, functional and multimodal. Some visualisations, connectivity matrices for example, are equally well suited to display structural or functional information. Other visualisations, such as tractographies, are best suited to displaying structural information. Multimodal visualisations show elements of both structural and functional data. Virtual reality is well suited to display structural information, and so the following visualisations discussed are primarily those for structural or multimodal data.
\subsection{Connectivity Matrix}
A connectivity matrix shows the physical or functional connectivity between different regions of the connectome \cite{Wang2011}. It allows a lot of information to be presented in a single 2D image. However, it does not represent the spatial relationship of the data, and the format in which the axes are ordered may introduce false patterns.
\begin{figure}[htbp!]
\centering
\includegraphics[width=0.8\textwidth]{resources/connectivity_matrix}
\caption{An alphabetically ordered connectivity matrix. \cite{Wang2011}}
\label{fig:connectivity_matrix}
\end{figure}
\clearpage
\subsection{Tractography}
Tractography is the process of visualising nerve tracts, bundles of nerve cells. It can provide a very intuitive way of looking at nerve data, as seen in Figure \ref{fig:tractography}, although considerable editing of opacity, colour and shading may be needed in order to bring out the desired features in a tractography image. Without such editing, tractography images suffer from a surfeit of information and can obscure features of interest.
\begin{figure}[htbp!]
\centering
\includegraphics[width=0.6\textwidth]{resources/tractography}
\caption{Fibre tractography. \cite{Odonnell2006}}
\label{fig:tractography}
\end{figure}
Various techniques have been developed for fibre tractography, including the use of tuboids, streamlines and streamtubes to represent the tracts. Advantages of these techniques include faster rendering times, aesthetically pleasing labelling of tracts, simplified tract junction rendering, and good occlusion handling \cite{Petrovic2007}.
\begin{figure}[htbp!]
\centering
\includegraphics[width=0.6\textwidth]{resources/tuboids}
\caption{A tractography image using tuboids. \cite{Petrovic2007}}
\label{fig:tuboids}
\end{figure}
\clearpage
\subsection{Glyph Images}
Glyphs can be used to represent local diffusion tensors in DT-MRI data. The goal of glyph images is to show local diffusion properties as well as larger macroscopic structure. Various systems have been proposed to do this: using ellipsoids \cite{Pierpaoli1996}, colour coded arrows \cite{Peled1998}, opacity mapping \cite{Westin1997}, and hybrid geometric shapes \cite{Westin2002}.
A spherical glyph represents isotropic diffusion properties, whereas various ellipsoids can represent a local bias towards linear or planar diffusion. Figure \ref{fig:hybrid_glyphs} illustrates how a hybrid shape may be less ambiguous than a 2D rendering of a 3D ellipsoid.
\begin{figure}[htbp!]
\centering
\includegraphics[width=0.6\textwidth]{resources/hybrid_glyphs}
\caption{Hybrid glyphs representing local diffusivity in DT-MRI data. \cite{Westin2002}}
\label{fig:hybrid_glyphs}
\end{figure}
The ordered placement of glyphs can introduce patterns not present in the data, prompting research into the positioning of the glyphs, and the use of particle systems to re-position glyphs as a function of interparticle forces \cite{Kindlmann2006}. The benefit of this approach is that the sampling artefacts may be greatly reduced or completely removed from the final image. As can be seen from Figure \ref{fig:glyph_packing}, this approach is not without drawbacks: new image artefacts, such as the hexagonal close packing structure, may be introduced by the particle field.
\begin{figure}[htbp!]
\centering
\includegraphics[width=0.6\textwidth]{resources/glyph_packing}
\caption{Glyph packing. \cite{Kindlmann2006}}
\label{fig:glyph_packing}
\end{figure}
\clearpage
\subsection{Other Techniques}
In addition to connection matrices, tractographies and glyph images there are many other ways to display connectome data. Graph-based representations can present functional connectivity \cite{Hagmann2008} as shown in Figure \ref{fig:connectome_graphs}, but may do so at the cost of reduced anatomical accuracy \cite{Margulies2013}.
\begin{figure}[htbp!]
\centering
\includegraphics[width=0.8\textwidth]{resources/connectome_graphs}
\caption{Network structure and interconnection density maps. \cite{Hagmann2005}}
\label{fig:connectome_graphs}
\end{figure}
The Glass Brain project has developed a multimodal technique for combining structural and functional data \cite{GlassBrain2014}. This type of representation may be useful as it gives anatomical context alongside functional information.
\begin{figure}[htbp!]
\centering
\includegraphics[width=0.8\textwidth]{resources/glass_brain}
\caption{A Glass Brain visualisation. \cite{GlassBrain2014}}
\label{fig:glass_brain}
\end{figure}
\section{Summary}
Human DTI-MRI data from the Human Connectome Project was chosen for use in this project due to its availability, quality, and the importance of the subject matter.
The choice of visualisation system was a more complex issue. For the purpose of this project, it is useful to note that, with the exception of the connection matrix, all of the visualisation techniques for connectome data attempt to represent 3D (or greater dimensionality) information in a 2D image. This has two important consequences, as it shows that:
\begin{enumerate}
\item there is a need to display 3D data.
\item three dimensional image processing is already being carried out.
\end{enumerate}
2D representations have a number of advantages. The sampling methods used in measuring the connectome data gather 2D data, the full connectome structure being reconstructed from many planar slices. Relatively little processing is needed in order to convert an array of data into a viewable image, and so a 2D visualisation can be a very fast way to look at raw data. The `flat' nature of 2D images is convenient for printing, transportation, and storage.
Pseudo-3D images are frequently used. These use either colours or symbols to represent out-of-plane structure. Alternatively, they may show a 3D representation of the connectome, rendered into a 2D picture. Using this definition of `pseudo-3D', all of the above visualisation techniques (except the connection matrix) would fall into this category.
Full 3D images seem well suited to visualising connectome data. Given that there is a need to display 3D data, and that much of the data is already processed and rendered as a 3D model, it seems natural to display it natively in 3D. If the only objection to 3D visualisations is the expense of current 3D hardware \cite{Margulies2013}, then a proliferation of cheap consumer hardware may remove this barrier in the near future.
Given these considerations, all of the 3D visualisation styles were candidates for the project. The tractography was selected due to its intuitive and elegant nature. Of all the styles discussed, it is the one most easily understood by non-experts, and so provides suitable content for a wide target audience.
\chapter{Data Conversion}
This chapter discusses the data-wrangling necessary to convert HCP data into suitable content for display on the Rift.
Structural MRI data in the HCP is stored in NIfTI, CIFTI and GIFTI formats, none of which is compatible with the VTK render pipeline used here. Legacy VTK is chosen as a suitable intermediate format, and three types of image content are generated:
\begin{enumerate}
\item \textbf{btain}\footnote{An initial misspelling, this name became a useful way of differentiating this content from the various brain data files.}, a cortical surface represented by a dense mesh. Btain is generated using MITK to select a region within a CIFTI file and save it as a mesh.
\item \textbf{surface}, a simple cortical surface generated from GIFTI data. Surface is generated from a HCP GIFTI file using MATLAB.
\item \textbf{CCTracts}, a tractography. CCTracts is generated using Camino and ITK-SNAP.
\end{enumerate}
In addition, a Blender plugin is developed to import and examine the VTK mesh.
\clearpage
\section{The Structure of HCP Data} \label{sec:the_structure_of_hcp_data}
The HCP data format is intended to be a standard allowing comparison between hundreds, eventually thousands, of individual human connectomes. Figure~\ref{fig:hcp_full_data_set} shows that the HCP dataset contains a wide variety of experimental data. As discussed in Chapter 2, the decision was made to use structural DTI-MRI data.
\begin{figure}[htbp!]
\centering
\includegraphics[width=0.8\textwidth]{resources/hcp_full_data_set}
\caption{Components of a sample HCP Data Set.}
\label{fig:hcp_full_data_set}
\end{figure}
Having identified a source of structural data, it was discovered that the native storage formats could not be read by a basic VTK rendering pipeline. The following paragraphs describe the HCP data formats, and the subsequent search for a useful intermediate format.
\clearpage
\subsection{NIfTI Data}
The main format of the HCP data is Neuroimaging Informatics Technology Initiative (NIfTI) format. Recent data is in the NIfTI-2 format, an extension that uses \SI{64}{bit} storage and addressing to allow for larger data sets with greater precision. To the explorer browsing an HCP data set, these are the $\ast.nii$ files \footnote{or sometimes compressed (gnu-zipped) $\ast.nii.gz$ or tar-archived $\ast.nii.tar$ files.}. The NIfTI format holds data as a 4D matrix (x, y, z and time), as well as containing meta-data in a header. Conceptually, data can be extracted from NIfTI format by taking a 3D matrix `slice' out of the set, to get voxel data for a specific point in time.
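As a hedged sketch of this idea (memory ordering and dimension handling are assumptions for illustration; a real reader should honour the NIfTI header fields), extracting one time-point from a contiguously stored 4D array reduces to copying a single 3D block, as in Listing \ref{nifti_slice_sketch}.
\begin{lstlisting}[label=nifti_slice_sketch, caption={Extracting a 3D time-point slice from a 4D volume (illustrative sketch).}]
#include <cstddef>
#include <vector>

// Assumes x-fastest, time-slowest ordering; illustrative only.
std::vector<float> extractTimePoint(const std::vector<float>& data,
                                    std::size_t nx, std::size_t ny,
                                    std::size_t nz, std::size_t t)
{
  const std::size_t volumeSize = nx * ny * nz;   // voxels per time-point
  return std::vector<float>(data.begin() + t * volumeSize,
                            data.begin() + (t + 1) * volumeSize);
}
\end{lstlisting}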
\subsection{CIFTI Data}
CIFTI is a related data format to NIfTI, more of a \textit{flavour} of NIfTI than a separate file format and so has the same file extensions. CIFTI files reference other files, frequently GIFTI surfaces, to specify relative vertex and voxel positions. CIFTI files can contain dense or sparse matrices, and are often used to store many different volumes that each correspond to a different anatomical region \cite{WorkbenchGlossary2014}.
\subsection{GIFTI Data}
Following the widespread adoption and success of the NIfTI format, the Geometry Informatics Technology Initiative was set up to create a new standard format for storing geometrical surfaces. The result was the GIFTI format, designed to store a variety of data types including surfaces, measurements and labels \cite{Harwell2011}. These are the $\ast.gii$ files in the HCP dataset, and they mostly represent surfaces. Figure~\ref{fig:connectome_gifti} shows the native, midthickness, inflated and very inflated cortical GIFTI surface files from HCP data, viewed using Connectome Workbench.
\begin{figure}[htbp!]
\centering
\includegraphics[width=\textwidth]{resources/connectome_gifti}
\caption{GIFTI surfaces from a HCP data set.}
\label{fig:connectome_gifti}
\end{figure}
\section{Choosing an Intermediate Format}
\label{sec:choosing_an_intermediate_format}
A number of programs were investigated for reading and converting HCP data. MeshLab \cite{MeshLab2014} was found to be an excellent mesh conversion tool, but could not load any of the GIFTI or NIFTI data sets. Medinria \cite{MedInria2014} could work with the available data sets but was better suited to viewing content than converting it. MITK \cite{MITK2014} was used to select a region in a CIFTI file using an image intensity threshold, and then output the selection as a vtk mesh. This was a significant step forward: the vtk data format was perfectly compatible with the Visualisation Tool Kit, and the resulting mesh was the first `real' connectome data visualised on the Rift.
\\ \\
At this point it becomes convenient to adopt a convention for distinguishing between the Visualisation Tool Kit (\textbf{VTK}) and the Visualisation Tool Kit file format (henceforth \textbf{vtk}).
\\\\
The vtk data format proved to be very suitable for this project for a number of reasons. On a practical basis, it was known to work with VTK, and so could provide the Rift with suitable content for demonstrations. Further research showed that it was a well established format \cite{VTK_file_formats}, and this wide support made it a suitable intermediate format. Additionally, `legacy' vtk holds data in a simple, plain text structure allowing human inspection of the mesh data.
\begin{lstlisting}[label=vtk_data_structure, caption=The structure of data in a legacy .vtk file.]
# vtk DataFile Version 2.0 // # vtk DataFile Version 3.0
Really cool data // vtk output
ASCII | BINARY // ASCII
DATASET type // DATASET POLYDATA
...
POINT_DATA n // POINTS <number> float
...
CELL_DATA n
\end{lstlisting}
Listing \ref{vtk_data_structure} shows the header specification from \cite{VTK_file_formats} with comments illustrating vtk parameter values in a real dataset. Information such as the number of points in the mesh proved to be very useful when debugging the VTK render output.
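As a small, hypothetical example of this kind of sanity check (the file name and error handling are illustrative only), the point count can be pulled straight out of the plain text header, as in Listing \ref{vtk_header_check}.
\begin{lstlisting}[label=vtk_header_check, caption={Reading the point count from a legacy .vtk header (illustrative sketch).}]
#include <fstream>
#include <iostream>
#include <sstream>
#include <string>

int main()
{
  std::ifstream file("surface.vtk");           // hypothetical input file
  std::string line;
  while (std::getline(file, line)) {
    if (line.compare(0, 6, "POINTS") == 0) {   // "POINTS <number> float"
      std::istringstream fields(line);
      std::string keyword, type;
      long count = 0;
      fields >> keyword >> count >> type;
      std::cout << count << " points of type " << type << std::endl;
      break;
    }
  }
  return 0;
}
\end{lstlisting}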
\section{Data conversion using MITK}
As mentioned in Section \ref{sec:choosing_an_intermediate_format}, MITK was used to export a region of a CIFTI cortical data file as a 3D mesh. This process is shown in Figure \ref{fig:MITK_conversion}. The process involved selecting a volume in the original data file using a thresholding function, and then creating a mesh from the selected region. The mesh could then be exported in vtk format. This process was used to create the \textit{btain} content.
\begin{figure}[htbp!]
\centering
\includegraphics[width=0.8\textwidth]{resources/MITK_conversion}
\caption{Converting CIFTI data to vtk using MITK.}
\label{fig:MITK_conversion}
\end{figure}
This approach had two significant problems. Firstly, with little experience in working with this type of content it was difficult to choose a meaningful threshold, and the resulting mesh was correspondingly arbitrary. Worse still, this approach produced a noisy, complex mesh that took approximately \SI{100}{\mega\byte} of plain text vertex positions to fully express. The mesh density mostly represented noise which did not add any information, and caused significant problems with the render speed. These problems are discussed in the Results chapter.
\section{Data conversion using MATLAB}
After learning more about the HCP data structure it was understood that certain GIFTI files in the HCP data set already contained cortical surface representations. These surfaces are shown in Figure \ref{fig:connectome_gifti}. The GIFTI surfaces would be ideal demonstration content with none of the size or noise issues of the MITK-converted data-set.
\begin{figure}[htbp!]
\centering
\includegraphics[width=0.8\textwidth]{resources/MATLAB_conversion}
\caption{Converting GIFTI data to vtk using MATLAB.}
\label{fig:MATLAB_conversion}
\end{figure}
Converting GIFTI format data to vtk proved challenging and the approach was initially abandoned. Dr Sarah Parisot insisted that such a conversion must be possible, and offered access to her MATLAB installation for the job. Using MATLAB, Professor Chris Rorden's MATcro conversion scripts were used to convert the HCP GIFTI surface files into VTK polydata meshes. These mesh files were only \SI{6}{\mega\byte} in size, compared with the \SI{100}{\mega\byte} size of the MITK conversions, which translated into quick rendering and a high frame rate when displaying the content on the Rift.
\section{Data conversion using CAMINO}
As mentioned in Chapter 2, the project had the ambitious goal of working with tractography data. Tractographies were generated from NIfTI connectome data using the CAMINO and ITK-SNAP tools, and saved in vtk format for display on the Rift.
\begin{figure}[htbp!]
\centering
\includegraphics[width=0.4\textwidth]{resources/camino_process}
\caption{Creating a tractography from HCP data.}
\label{fig:camino_process}
\end{figure}
As Figure \ref{fig:camino_process} illustrates, the generation of a tractography is more complex than converting a surface mesh from one format to another. The first two processing steps were memory intensive, and \SI{10}{\giga\byte} of RAM needed to be allocated to the CAMINO cache before the process would complete successfully.
CAMINO was then used to fit a diffusion tensor to the NIfTI data, and filter the result by fractional anisotropy. The fibrous structure inferred from the anisotropy data is shown colourfully in Figure \ref{fig:CAMINO_tensors}.
\begin{figure}[htbp!]
\centering
\includegraphics[width=0.8\textwidth]{resources/CAMINO_tensors}
\caption{A per-voxel tensor map, created using CAMINO.}
\label{fig:CAMINO_tensors}
\end{figure}
\begin{figure}[htbp!]
\centering
\includegraphics[width=0.8\textwidth]{resources/CAMINO_roi_selection}
\caption{Creating a region of interest volume using ITK-SNAP.}
\label{fig:CAMINO_roi_selection}
\end{figure}
Figure \ref{fig:CAMINO_roi_selection} illustrates how a region of interest\footnote{Mostly the corpus callosum, though it isn't brain surgery.} can be `painted' onto a NIfTI dataset, and the selected volume can then be saved in NIfTI format.
Once structural connectivity was established and a region of interest specified, tracts were traced throughout the connectome to generate a tractography. The final tractography, exported in vtk format, could be viewed in Paraview, as shown in Figure \ref{fig:CAMINO_paraview}. This visualisation is reassuring, as Paraview uses VTK to manipulate and render the tractography. If it works in Paraview, it must be feasible to display it directly in VTK and, hence, on the Rift!
\begin{figure}[htbp!]
\centering
\includegraphics[width=0.8\textwidth]{resources/CAMINO_paraview}
\caption{Viewing a tractography using Paraview.}
\label{fig:CAMINO_paraview}
\end{figure}
\section{Bonus: Developing a Blender Plug-in}
The open source Blender 3D software is an excellent tool, and was used for mesh-refinement of the cumbersome btain data, as shown in Figure \ref{fig:blender_brain}.
\begin{figure}[htbp!]
\centering
\includegraphics[width=0.8\textwidth]{resources/blender_brain}
\caption{The remarkably dense btain.vtk mesh, in Blender.}
\label{fig:blender_brain}
\end{figure}
Blender could not previously import the vtk data format, and so it was necessary to write a small Python plug-in to parse the data.
\lstinputlisting
[label=python_parsing, caption=Parsing vertex locations into Blender]{code/vtk_parse.py}
Listing \ref{python_parsing} shows the core parser design. The use of a generator was borrowed from Blender's stl parser, and allowed the function to process hundreds of thousands of vertices without working memory problems.
Unfortunately, it was subsequently discovered that the VTK data generated by MITK does not comply with the VTK style guide \cite{VTK_file_formats}! As the plugin had been developed to work with MITK specific data, it will need some modifications before it can import and export vtk compliant data.
\chapter{The Visualisation Tool Kit}
This chapter introduces the Visualisation Tool Kit (VTK), explaining why VTK is an appropriate tool for the project. A simple VTK rendering pipeline is then described, to provide context for subsequent developments. The simple pipeline is modified to work with real connectome data, and additional manipulation tools are added. Finally, the pipeline is modified so that it can produce stereoscopic image pairs.
\clearpage
\section{Introducing VTK}
VTK is a software system for 3D computer graphics, image processing, and visualisation. For the purpose of this project, its defining characteristics are:
\begin{enumerate}
\item \textbf{It is an open source project, freely available via GitHub.} This proved to be very useful, as the software could be cloned and built immediately and without needing to pay any license fees. The C++ source code was a useful supplement to documentation and tutorials, since it allowed the source to be searched or `grep-ed' and read. Access to the source code proved essential when the project needed to extend VTK functionality via subclassing.
\item \textbf{It is mainly written in C++.} This was useful at a practical level, as the MSc course had provided an introduction to C++. The development model of VTK is to provide core functionality in C++ and then support other languages such as Java and Python using wrappers. This model meant that any functionality added to VTK during this project could be extended to other languages through automatically generated wrapper code.
\item \textbf{It is under active development, and used extensively by the medical imaging community.} The active development of VTK was useful as it meant that there was a substantial volume of documentation and forum material surrounding the software. This meant that it was often possible to learn from past examples and avoid well known problems. The development of VTK by the medical imaging community also meant that VTK could work with some medical data formats \footnote{Unfortunately this functionality was only available in VTK 6.x, which couldn't be used as it deprecated features central to the Rift render pipeline.} and that there was a known user group who would be interested in any useful developments.
\end{enumerate}
Developed by Kitware Inc., VTK works well with CMake, and can run on all of the major OS platforms. VTK is also used to do all of the Visualisation work inside Paraview, as well as other notable open-source projects including 3DSlicer.
\section{The VTK Pipeline}
The VTK pipeline is illustrated in Figure~\ref{fig:vtk_pipeline}, showing a VTK cone primitive. In order to use connectome data, it is necessary to load the data as a source. The source must be in a format that VTK can map to primitives, which places a restriction on the nature of the input data.
Some important components are hidden in this basic pipeline: neither the light nor the camera in the render scene are shown. These are both contained in the Renderer, and if not explicitly set then VTK initializes them to default values. This behaviour is simultaneously a great strength and a great liability: it allows a rendering pipeline to be set up very quickly, but it does require that the user be familiar with the pipeline and the various default settings.
\begin{figure}[htbp!]
\centering
\includegraphics[width=0.8\textwidth]{resources/vtk_pipeline}
\caption{The Basic VTK Pipeline for a cone primitive.}
\label{fig:vtk_pipeline}
\end{figure}
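Listing \ref{cone_pipeline_sketch} gives a minimal sketch of the pipeline in Figure~\ref{fig:vtk_pipeline}, using the standard VTK C++ classes; the renderer supplies the default camera and light exactly as described above.
\begin{lstlisting}[label=cone_pipeline_sketch, caption={A minimal VTK pipeline for the cone primitive (sketch).}]
#include <vtkSmartPointer.h>
#include <vtkConeSource.h>
#include <vtkPolyDataMapper.h>
#include <vtkActor.h>
#include <vtkRenderer.h>
#include <vtkRenderWindow.h>
#include <vtkRenderWindowInteractor.h>

int main()
{
  // Source -> Mapper -> Actor -> Renderer -> RenderWindow -> Interactor
  vtkSmartPointer<vtkConeSource> cone =
      vtkSmartPointer<vtkConeSource>::New();
  vtkSmartPointer<vtkPolyDataMapper> mapper =
      vtkSmartPointer<vtkPolyDataMapper>::New();
  mapper->SetInputConnection(cone->GetOutputPort());
  vtkSmartPointer<vtkActor> actor = vtkSmartPointer<vtkActor>::New();
  actor->SetMapper(mapper);
  vtkSmartPointer<vtkRenderer> renderer = vtkSmartPointer<vtkRenderer>::New();
  renderer->AddActor(actor);               // default camera and light
  vtkSmartPointer<vtkRenderWindow> window =
      vtkSmartPointer<vtkRenderWindow>::New();
  window->AddRenderer(renderer);
  vtkSmartPointer<vtkRenderWindowInteractor> interactor =
      vtkSmartPointer<vtkRenderWindowInteractor>::New();
  interactor->SetRenderWindow(window);
  window->Render();
  interactor->Start();                     // hand control to the event loop
  return 0;
}
\end{lstlisting}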
\section{Complex Data Sources}
Loading and rendering data can be relatively straightforward. Compatibility with a wide range of input data formats was one of the reasons why VTK was chosen as the tool kit to use for the project.
\begin{figure}[htbp!]
\centering
\begin{subfigure}{0.5\textwidth}
\centering
\includegraphics[width=0.8\linewidth]{resources/cone}
\caption{Cone}
\label{fig:sub1}
\end{subfigure}%
\centering
\begin{subfigure}{0.5\textwidth}
\centering
\includegraphics[width=0.8\linewidth]{resources/cortical_surface}
\caption{Cortical Surface}
\label{fig:sub2}
\end{subfigure}
\caption{VTK and HCP data rendered using VTK.}
\label{fig:data_sources}
\end{figure}
VTK can read in a large number of file formats including VTK's own `.vtk' format. The rendering pipeline in the project was developed using mesh data, and has not yet been tested with volumetric data -- implementing this is suggested in the Further Work section.
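A hedged sketch of this substitution is given in Listing \ref{vtk_reader_sketch}: the cone source is simply replaced by a \texttt{vtkPolyDataReader}, and the rest of the pipeline is unchanged (the file name is hypothetical).
\begin{lstlisting}[label=vtk_reader_sketch, caption={Loading a legacy .vtk mesh as a pipeline source (sketch).}]
#include <vtkSmartPointer.h>
#include <vtkPolyDataReader.h>
#include <vtkPolyDataMapper.h>
#include <vtkActor.h>

// Builds an actor from a legacy .vtk mesh; it can replace the cone actor
// in the earlier pipeline sketch.
vtkSmartPointer<vtkActor> LoadVtkMesh(const char* fileName)
{
  vtkSmartPointer<vtkPolyDataReader> reader =
      vtkSmartPointer<vtkPolyDataReader>::New();
  reader->SetFileName(fileName);           // e.g. the converted surface.vtk
  vtkSmartPointer<vtkPolyDataMapper> mapper =
      vtkSmartPointer<vtkPolyDataMapper>::New();
  mapper->SetInputConnection(reader->GetOutputPort());
  vtkSmartPointer<vtkActor> actor = vtkSmartPointer<vtkActor>::New();
  actor->SetMapper(mapper);
  return actor;
}
\end{lstlisting}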
\section{Interaction and Callbacks}
Rendering a scene in a continuous loop is a very computationally inefficient technique. It is much better to only re-render the scene in response to a trigger such as a keypress or mouse movement event. Mouse movement is such a common interaction style that it is implemented in VTK using the vtkInteractor object, and even comes with a number of default styles including `trackball' and `joystick' modes. Responding to key presses was implemented using a listener / callback paradigm. VTK conveniently deals with all aspects of logging and queuing the key press, and can trigger a function in response to the event. The structure of the callback system is illustrated in Listing \ref{callbacks}.
\lstinputlisting
[label=callbacks, caption=Callback functionality in VTK]{code/callback.cpp}
A virtual reality system is more demanding than a 2D interface as the viewpoint may be continuously moving in response to small changes of the user's head position. The scene would ideally be re-rendered whenever the head position changes. Sensors in the HMD report the head position in floating point units, of sufficiently high precision that any two queries show some difference.
Thresholds for the pitch, roll, and yaw could be set above which to re-render the viewpoint, but this solution contains its own problems. For example, if a \SI{5}{\degree} change in head pitch is required to trigger a re-render of the scene, then the scene is always going to jump a minimum of \SI{5}{\degree} in response to changing head position. Instead, a clock timer was used to re-render the scene at a fixed frame rate, irrespective of head movement. A \SI{16}{\ms} timer triggered a refresh at \SI{60}{\Hz}, giving good performance.
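A hedged sketch of this timer-driven refresh is given in Listing \ref{timer_sketch}; the function and variable names are illustrative rather than taken from the project source.
\begin{lstlisting}[label=timer_sketch, caption={Re-rendering at a fixed rate using a repeating VTK timer (sketch).}]
#include <vtkCallbackCommand.h>
#include <vtkCommand.h>
#include <vtkObject.h>
#include <vtkRenderWindow.h>
#include <vtkRenderWindowInteractor.h>
#include <vtkSmartPointer.h>

// Called on every timer tick: poll the HMD orientation, update the
// cameras, then redraw the scene.
static void TimerCallback(vtkObject*, unsigned long, void* clientData, void*)
{
  vtkRenderWindow* window = static_cast<vtkRenderWindow*>(clientData);
  window->Render();
}

void InstallRefreshTimer(vtkRenderWindowInteractor* interactor,
                         vtkRenderWindow* window)
{
  vtkSmartPointer<vtkCallbackCommand> cb =
      vtkSmartPointer<vtkCallbackCommand>::New();
  cb->SetCallback(TimerCallback);
  cb->SetClientData(window);
  interactor->AddObserver(vtkCommand::TimerEvent, cb);
  interactor->Initialize();                // timers need an initialised interactor
  interactor->CreateRepeatingTimer(16);    // roughly 60 Hz refresh
}
\end{lstlisting}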
\lstinputlisting
[label=mouse_callback, caption=Sharing mouse control between two adjacent renderers.]{code/mouse_callback.cpp}
Interaction for a naive stereo system holds one additional terror. VTK's automatic mouse interactors operate per-renderer, but the stereo cameras need to be updated simultaneously to prevent the user going cross-eyed. The trick to solving this problem is to ensure that the active mouse interaction affects all cameras in the scene, using temporary variables to store each interactor id. This system is illustrated in Listing \ref{mouse_callback}.
\clearpage
\section{Rendering Stereo Images}
Rendering stereo images is beautifully intuitive. Stereoscopic 3D involves viewing different images with each eye in order to get an impression of depth. Using VTK, it is possible to create two cameras and position them as if they were eyes viewing objects in the scene. This is illustrated in Figure \ref{fig:simple_scene}, showing how multiple cameras can be positioned relative to a single object and light source in order to generate left and right eye images. For comfort, the images should be generated using a camera spacing similar to that of the user's Inter-Pupillary Distance (IPD)\footnote{for most of the population this can be safely approximated as \SI{62}{\mm}} although nearby objects in a scene can cause discomfort even when the camera separation is set correctly. The reason for this is that the human visual system copes badly with extreme vergence - it is difficult to focus on an object very close between the eyes\footnote{and for exactly the same reason it is uncomfortable when 3D movies insist on bringing things too close to the audience!}.
\begin{figure}[htbp!]
\centering
\includegraphics[width=0.8\textwidth]{resources/simple_scene}
\caption{A Blender scene showing stereo cameras.}
\label{fig:simple_scene}
\end{figure}
VTK does contain a `stereo camera' class, but the documentation for this is rare\footnote{Like Bigfoot, or the Loch Ness Monster.}, and implementation usually involves colour anaglyph or interlaced stereographic image systems. The Rift needs stereo input to be available in a `side-by-side' format, and the most basic method of implementing this was to use two separate renderers, each of which was associated with an independent monoscopic camera.
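Listing \ref{stereo_sketch} is a hedged sketch of this side-by-side arrangement: two renderers share one render window, each occupying half of the viewport and each with its own monoscopic camera offset by half the IPD (numeric values are illustrative).
\begin{lstlisting}[label=stereo_sketch, caption={Side-by-side stereo renderers with IPD-offset cameras (sketch).}]
#include <vtkCamera.h>
#include <vtkRenderer.h>
#include <vtkRenderWindow.h>
#include <vtkSmartPointer.h>

void SetUpStereoRenderers(vtkRenderWindow* window, double ipd = 0.062)
{
  vtkSmartPointer<vtkRenderer> left  = vtkSmartPointer<vtkRenderer>::New();
  vtkSmartPointer<vtkRenderer> right = vtkSmartPointer<vtkRenderer>::New();
  left->SetViewport(0.0, 0.0, 0.5, 1.0);   // left half of the Rift display
  right->SetViewport(0.5, 0.0, 1.0, 1.0);  // right half of the Rift display
  window->AddRenderer(left);
  window->AddRenderer(right);

  // Independent monoscopic cameras, separated horizontally by the IPD.
  left->GetActiveCamera()->SetPosition(-ipd / 2.0, 0.0, 1.0);
  right->GetActiveCamera()->SetPosition(+ipd / 2.0, 0.0, 1.0);
  left->GetActiveCamera()->SetFocalPoint(0.0, 0.0, 0.0);
  right->GetActiveCamera()->SetFocalPoint(0.0, 0.0, 0.0);
  // The same actors must be added to both renderers so each eye sees
  // the full scene.
}
\end{lstlisting}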
\chapter{Virtual Reality and the Oculus Rift DK1}
This chapter provides a very short introduction to virtual reality, and describes related projects using virtual reality to view medical data. The Rift Development Kit (DK1) hardware is then described. Key hardware features are the head-mounted sensors, and the lens system which deforms image content on the Rift display. The chapter concludes by mentioning the Oculus Rift In Action project, and the issues associated with early stage software development.
\clearpage
\section{An Introduction to Virtual Reality}
Virtual Reality (VR) refers to the visualisation of an artificial environment. This distinguishes it from Augmented Reality (AR) where the user simultaneously experiences real and virtual environments. The Oculus Rift is a VR device since it completely shuts out any view of the real world, whilst Google Glass is an AR device since it shows information as an overlay of the real world.
Various VR and AR systems were investigated, as shown in Figure \ref{fig:vr_headsets}. Many of these have been designed for consumers to watch films or play games. These systems do not include any positional feedback, since they are not designed to give an immersive experience but rather to act as a personal cinema screen. Another sub-set of these systems was designed for industrial or military use. Although military or industrial systems could provide the desired performance, they were either prohibitively expensive or unavailable for purchase.
\begin{figure}[htbp!]
\centering
\includegraphics[width=1\textwidth]{resources/vr_headsets}
\caption{AR and VR Products (an incomplete set).}
\label{fig:vr_headsets}
\end{figure}
VR is an attractive way of getting stereoscopic 3D. The current generation of virtual reality hardware offers high-resolution, immersive, stereoscopic 3D performance at the price of a high-end LCD monitor.
The Oculus Rift DK1 was found to offer acceptable performance at a reasonable cost. The DK1 is also a suitable system due to the active development of the SDK, and large open source community surrounding the project. Future development of the SDK and the forthcoming release of the compatible DK2 should enable the project to be forward compatible with future hardware, if that is required.
\section{Related Projects}
There have been a small number of projects on using Virtual Reality for connectome visualisation, but the use of a head mounted display for this application appears to be new. The most similar known projects are listed in the following sub-sections.
\subsection{The Glass Brain project}
The Glass Brain project aims to do real time visualisation of neural activity \cite{GlassBrain2014}. The Glass Brain Project uses a Unity 3D model of the brain, though it subsequently renders a 2D image from the model for display on LCD monitors. As such it does not obtain many of the benefits of the 3D model, nor face the challenges of displaying images in 3D.
\subsection{Brainder}
Brainder \cite{brainder2014} is Dr Anderson Winkler's excellent blog on fMRI and Blender rendering. The Brainder project uses the open source `Blender' 3D modelling software to ray trace brain models, primarily for use as 2D illustrations. Although the goals of Brainder are different from this project, the existence of (relatively) simple 3D brain models may be a useful asset.
\subsection{The Dynamic Connectome}
The Dynamic Connectome is part of the CEEDS project: real-time visualisation of neural activity \cite{ceeds2014}.
\subsection{Aachen University Project work}
Project work at Aachen University \cite{Rick2011} uses MOCAP and a CAVE virtual environment to interactively view a connectome with dynamic clipping \cite{ceeds2014}.
\subsection{Purdue University project work}
Project work at Purdue University \cite{Chen2011} uses a CAVE virtual environment to view a connectome.
\\
\\
\begin{figure}[htbp!]
\centering
\includegraphics[width=0.8\textwidth]{resources/yview_cave}
\caption{A CAVE virtual reality system, by AntyCip.}
\label{fig:yview_cave}
\end{figure}
The Dynamic Connectome project and the Aachen University and Purdue University projects use computer assisted virtual environment (CAVE) virtual reality rooms with multiple projectors rather than a head mounted display (HMD) system as proposed here. Figure \ref{fig:yview_cave}
shows a modern CAVE system. Such a system removes the need for bulky headgear, but instead requires a multi-projector system and a dedicated room of projection screens. Additionally, the CAVE systems do not typically show 3D, nor do they update the view based on the user's precise head position.
\clearpage
\section{Introducing the Oculus Rift Headsets}
At the time of writing, two Rift headsets had been released. The first, shown in Figure \ref{fig:dk1}, was no longer sold by Oculus but was readily available on eBay. The second, shown in Figure \ref{fig:dk2}, was released mid-way through the project and proved so popular that Oculus quickly ran out of stock. This project used the DK1 for pragmatic reasons.
\begin{figure}[htbp!]
\centering
\begin{subfigure}{0.5\textwidth}
\centering
\includegraphics[width=1\linewidth]{resources/dk1}
\end{subfigure}%
\centering
\begin{subfigure}{0.5\textwidth}
\centering
\begin{tabular}{ l r }
Price: & \$300 \\
Display size: & 1280x800 \\
Display type: & LCD \\
Sensors: & gyroscope,\\
& accelerometer,\\
& magnetometer\\
\end{tabular}
\end{subfigure}
\caption{Rift DK1}
\label{fig:dk1}
\end{figure}
Both headsets function in a similar way. They contain one main display, part of which is viewable by each of the user's two eyes. Lenses distort the display, making it appear to extend further across the user's field of vision, in order to provide `immersion'.
\begin{figure}[htbp!]
\centering
\begin{subfigure}{0.5\textwidth}
\centering
\includegraphics[width=1\linewidth]{resources/dk2}
\end{subfigure}%
\centering
\begin{subfigure}{0.5\textwidth}
\centering
\begin{tabular}{ l r }
Price: & \$350 \\
Display size: & 1920x1080 \\
Display type: & OLED \\
Sensors: & gyroscope,\\
& accelerometer,\\
& magnetometer,\\
& IR Camera\\
\end{tabular}
\end{subfigure}
\caption{Rift DK2}
\label{fig:dk2}
\end{figure}
Comparing the DK1 and DK2 headsets is useful, as it helps to identify which performance problems may be `fixed' simply by buying new hardware. User survey feedback, as discussed later, shows that many users are unhappy with the current screen resolution. This problem may be mitigated by using a DK2 headset. This comparison allows the project to anticipate which problems the hardware improvements may fix, and to focus on the remaining challenges.
\clearpage
\section{Collimating Optics and Software Compensation}
As illustrated in Figure \ref{fig:rift_display_1}, each of the user's eyes sees a different part of the Rift display. A central division prevents the right eye seeing content on the left side of the display, and vice versa.
\begin{figure}[htbp!]
\centering
\includegraphics[width=0.5\textwidth]{resources/rift_display_1}
\caption{Eye position relative to the Rift display}
\label{fig:rift_display_1}
\end{figure}
Collimating lenses are positioned in front of each eye, as shown in Figure \ref{fig:rift_display_2}. These lenses distort the screen, so that it appears to extend to cover between \SI{90}{\degree} and \SI{110}{\degree} of the user's field of view. A useful side effect of the collimation is that a `sweet spot' exists, tolerating some movement of the eye position relative to the lens. This is very useful: each user is likely to have a slightly different eye spacing, and the sweet spot means that they can all use the same headset without needing to calibrate the lens position.
\begin{figure}[htbp!]
\centering
\includegraphics[width=0.5\textwidth]{resources/rift_display_2}
\caption{Lens position relative to the eye.}
\label{fig:rift_display_2}
\end{figure}
The drawback of the lenses is that they apply a pincushion deformation to the image shown on the Rift display, as well as introducing chromatic aberration. Further implications of this are discussed in Section \ref{sec:need_for_shaders}.
\section{Integrated Sensors and the HMD}
The Rift contains accelerometer and gyroscope sensors that measure the orientation of the user's head. These are abstracted by the SDK in such a way that the user can query for pitch, yaw or roll values and be returned an Euler angle in degrees. Figure \ref{fig:pitch_yaw_roll} illustrates the meaning of pitch, yaw and roll in the context of a user's head.
\begin{figure}[htbp!]
\centering
\includegraphics[width=0.5\textwidth]{resources/pitch_yaw_roll}
\caption{Head movements measured by the Rift \cite{Thompson2001}.}
\label{fig:pitch_yaw_roll}
\end{figure}
These sensors are precise, in that they return finely-quantised floating-point values, but they are not accurate. They are updated so frequently that minute amounts of systematic bias can quickly accumulate into noticeable sensor `drift'. The Rift SDK attempts to correct for this by using a magnetometer, essentially a compass, to provide a reference direction against which to calibrate the sensor readings. In the electromagnetically noisy environment of a university office this is not effective, and drift remains a significant problem.
\\
\\
The DK2 appears to acknowledge the drift problem, and adds a user-facing external camera to the tracking system. With an external reference point, drift can be corrected for much more easily. Given this development in the DK2, user feedback gathered during this project about the `drift' problem of the DK1 can be identified as a hardware-specific issue.
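To illustrate the general technique of using an absolute heading to rein in gyroscope drift, the sketch below shows a simple complementary filter. This is purely illustrative: the structure, parameter names and blend factor are invented for the example, and it is not the Rift SDK's actual sensor-fusion algorithm.
\begin{lstlisting}[caption={An illustrative complementary filter for yaw drift correction}]
// Illustrative only: integrate the gyroscope yaw rate, then nudge the
// estimate towards the magnetometer heading so that accumulated drift
// decays rather than growing without bound. Angle wrap-around handling
// is omitted for brevity. Not the Rift SDK's sensor-fusion code.
struct YawEstimator
{
    double yaw;    // current estimate, degrees
    double blend;  // fraction of the magnetometer heading trusted per update

    YawEstimator() : yaw(0.0), blend(0.02) {}

    void update(double gyroYawRate,      // degrees per second, from gyroscope
                double magneticHeading,  // degrees, from magnetometer
                double dt)               // seconds since the last sample
    {
        yaw += gyroYawRate * dt;  // responsive, but drifts over time
        yaw = (1.0 - blend) * yaw + blend * magneticHeading;  // slow, absolute
    }
};
\end{lstlisting}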
\chapter{Extending VTK}
At the end of Chapter 4, the VTK pipeline had been configured so that connectome data could be loaded, manipulated using interactors, and rendered as a stereo-pair. Chapter 5 then noted that the collimating lenses in the Rift headset would distort any images output to the Rift display. This chapter describes how image processing in software can be used to compensate for optical artefacts, and how GLSL shaders can be used with VTK in order to apply the image correction in real time.
\clearpage
\section{The Need for Shaders}
\label{sec:need_for_shaders}
Before diving into the details of implementation, it is useful to understand the motivation for using image processing, and shaders in particular.
\subsection{Using optics to correct for distortions}
The undesirable deformations introduced by the Rift lenses could be compensated for by using additional optics. Two corrections would be required:
\begin{enumerate}
\item Compensating for the pincushion deformation whilst maintaining the extended field of view. The pincushion distortion could be eliminated by using a lens doublet, and a central aperture \cite{Brainerd2004}.
\item Reducing the chromatic aberrations caused by dispersion of light in the lens glass. The problem of chromatic aberration could be dealt with by means of achromatic lenses or by using extra-low (EL) dispersion glass in the lens system \cite{Davis2014}.
\end{enumerate}
These solutions would be appropriate for high-end camera lenses, as shown in Figure \ref{fig:camera_lens}, but are completely impractical for the Rift headset. The lenses in the Rift, shown in Figure \ref{fig:rift_lens}, are designed to be small, cheap and robust. Any optical correction system would involve multiple lenses (increased weight), spacing between them (increased size), rigid positioning (increased fragility) and expensive EL glass (increased cost).
\begin{figure}[htbp!]
\centering
\begin{subfigure}{0.5\textwidth}
\centering
\includegraphics[width=0.5\linewidth]{resources/camera_lens}
\caption{Camera lens (lenstip.com)}
\label{fig:camera_lens}
\end{subfigure}%
\centering
\begin{subfigure}{0.5\textwidth}
\centering
\includegraphics[width=0.5\linewidth]{resources/rift_lenses}
\caption{Rift lens, front and back views}
\label{fig:rift_lens}
\end{subfigure}
\caption{A comparison of lens complexity.}
\label{fig:lens_comparison}
\end{figure}
\clearpage
\subsection{Using software to correct for distortions}
By comparison with an optical distortion correction system, software offers the attractive property of requiring no additional physical components, and so no additional hardware cost\footnote{The cost of buying a powerful graphics card falls to the consumer!}. A remarkable property of barrel and pincushion distortions is that although the geometry of the image is distorted, \textit{the image is still in perfect focus}. This means that the distortion can be corrected by moving pixel data around within the original image, where the amount of movement required is a function of the radial distance from the optic axis \cite{Brunelli2009}. The pincushion distortion apparent in Figure \ref{fig:appearance_unprocessed} could be corrected by re-arranging pixel data, moving each pixel towards the centre of the lens to achieve the result shown in Figure \ref{fig:appearance_processed}.
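One common way to parameterise such a radial correction is a polynomial in the squared distance from the lens centre; the Rift-specific coefficients belong in the shaders discussed in the next chapter, so the form below is only a sketch of the idea:
\begin{equation}
r_{\mathrm{corrected}} = r \left( k_{0} + k_{1} r^{2} + k_{2} r^{4} + k_{3} r^{6} \right),
\end{equation}
where $r$ is the normalised distance of a pixel from the lens centre and the $k_{i}$ are lens-specific constants.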
\begin{figure}[htbp!]
\centering
\begin{subfigure}{0.9\textwidth}
\centering
\includegraphics[width=0.9\linewidth]{resources/appearance_unprocessed}
\caption{The appearance of native content to the Rift's user.}
\label{fig:appearance_unprocessed}
\end{subfigure}
\centering
\begin{subfigure}{0.9\textwidth}
\centering
\includegraphics[width=0.9\linewidth]{resources/appearance_processed}
\caption{The appearance of processed content to the Rift's user.}
\label{fig:appearance_processed}
\end{subfigure}
\caption{Correcting distortion by modifying image content.}
\label{fig:processing_content}
\end{figure}
\clearpage
\subsection{Using shaders to correct for distortions}
The image processing required for this geometric correction can be done pixel-by-pixel. Given the size of the Rift display, it must be computed $1280 \times 800 = 1\,024\,000$ times per frame. If content is to be rendered at \SI{60}{Hz} then the total render time, of which the distortion step is only one part, must complete in less than \SI{16}{ms}. Fortunately, the individual pixel calculations are entirely independent of one another, making the operation perfectly suited to parallel processing.
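Dividing the frame budget by the pixel count gives a feel for how tight this is:
\begin{equation}
\frac{\SI{16.7}{ms}}{1280 \times 800} \approx \SI{16}{ns},
\end{equation}
roughly sixteen nanoseconds per pixel if the work were done serially, which leaves very little room for per-pixel work on a CPU but is exactly the kind of workload that parallel hardware handles well.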
Modern 3D computer graphics make heavy use of Graphics Processing Units (GPUs), processors designed especially for the parallel execution of thousands or millions of independent calculations on pixel data. The image deformation problem is therefore an ideal candidate for the GPU.
\subsection{Practical aspects to using a GPU}
Two companies dominate the market for graphics cards: Nvidia and ATI. Graphics cards are notorious for driver problems\footnote{When discussing problems in the department, the first question was frequently ``What graphics card are you using?''.}, so using a moderately powerful Nvidia card with the latest drivers was a pragmatic way to avoid exotic hardware issues. The project chose an Nvidia GeForce GTX 480 graphics card on the basis that it would support the most recent version of OpenGL\footnote{OpenGL 4.4, at the time of writing.}. Nvidia provide good driver support, and their CUDA parallel computing platform was available should it prove necessary to use the graphics hardware directly for general-purpose calculations.
Similar to the division of the graphics card market, two main application programming interfaces (APIs) exist for making use of graphics hardware: Direct3D and OpenGL. The project chose OpenGL because it had previously been used in the MSc Graphics lectures, it is an open standard, and it is cross-platform. By contrast, Direct3D was unfamiliar and would have tied the project to the Windows operating system.
Once the pragmatic decisions regarding the choice of hardware and API were made, the practical work could begin: designing and implementing a graphics shader program to perform pixel-wise calculations.
\section{Designing a Test Shader}
Graphics shaders are executed on graphics hardware, not on the CPU. OpenGL shaders are written in the OpenGL Shading Language (GLSL), a C-like language with the occasional surprise. They are massively parallel and difficult to debug, though some third-party debugging tools do exist \cite{glslDevil}. There is no way to print debugging output from GLSL\footnote{If a per-pixel shader script is executed over a million times per frame, and at sixty frames per second, then print statements would not be a practical debugging tool anyway.}. For these reasons, the chosen approach for shader development was to start with a very simple shader and gradually add complexity in a series of iterative steps.
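One standard workaround for the lack of print statements, and the idea behind the `colourful' shader debugging mentioned in the Results chapter, is to write the quantity under investigation straight into the output colour. The fragment shader below is a minimal illustrative sketch; the variable names are assumptions rather than names taken from the project's shaders.
\begin{lstlisting}[caption={Debugging a shader by writing values into colours}]
#version 440
// Visualise the interpolated texture coordinate by mapping it onto the
// red and green channels; values outside [0,1] saturate, which is
// itself a useful diagnostic signal.
in  vec2 texCoord;     // assumed to be passed through from the vertex shader
out vec4 fragColour;

void main()
{
    fragColour = vec4(texCoord.x, texCoord.y, 0.0, 1.0);
}
\end{lstlisting}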
\subsection{Vertex and Fragment Shaders}
In a graphics pipeline there are vertex shaders and fragment shaders\footnote{Geometry shaders were added in OpenGL 3.2 and tessellation shaders in OpenGL 4, but these are mainly intended for geometric mesh adjustment and are not relevant to the processing pipeline in this project.}. Vertex shaders are executed at every vertex in a scene. Fragment shaders are executed at every fragment: the projection of an area of an object polygon onto a pixel of the screen. Figure \ref{fig:vert_frag} shows a simple three-vertex primitive affecting 70 fragments; by contrast, the cortical surface contained over 32,000 vertices. The final value of one screen pixel may be the combination of the values of a number of fragments, combined as a function of their depth, opacity and position.
\begin{figure}[htbp!]
\centering
\includegraphics[width=0.4\textwidth]{resources/vert_frag}
\caption{Fragments corresponding to a geometric primitive.}
\label{fig:vert_frag}
\end{figure}
\subsection{Simple test shader design}
Listing \ref{simple_vert} shows the most basic vertex shader, where vertices are simply passed through the pipeline. The graphics pipeline is a state machine, so the variables \texttt{gl\_ModelViewProjectionMatrix} and \texttt{gl\_Vertex} are automatically set by the system, and the result of the calculation \texttt{gl\_Position} is automatically passed on to the next part of the pipeline.
\lstinputlisting
[label=simple_vert, caption=A Simple Vertex Shader]{code/my_first_shader.vs}
Listing \ref{simple_frag} is almost as simple as the example vertex shader. It sets the colour of every fragment to red. If this fragment shader works, every polygon in the rendered object should be bright red, as shown in Figure \ref{fig:object_shader}, without altering the background. This sort of shader may seem excessively simple, but several basic checks were necessary to confirm fundamental OpenGL functionality worked before moving on to more complicated goals.
\lstinputlisting
[label=simple_frag, caption=A Simple Fragment Shader]{code/my_first_shader.fs}
\begin{figure}[htbp!]
\centering
\includegraphics[width=0.4\textwidth]{resources/object_shader}
\caption{A VTK object with GLSL shading. \cite{dimrock2014}}
\label{fig:object_shader}
\end{figure}
\subsection{Including and compiling shaders}
As might be expected from the C-like form of GLSL, shaders must be compiled before they can be executed on the GPU. Martin Christian's libglsl shader management library was used, with adjustments to allow compatibility with OpenGL 4.4 and GLSL 440. Shaders were loaded in three different ways during the project:
\begin{enumerate}
\item \textbf{The preferred method:} Shaders can be written as plain text files and given extensions that indicate their content. It is conventional to give a vertex shader a .vs extension, and a fragment shader a .fs extension\footnote{or sometimes a .ps extension for `pixel shader', as used in the ORIA project}. The shader file can then be opened and loaded at runtime; a minimal sketch of this loading-and-compiling step is given after this list. This is convenient as it means that altering the shader does not require re-compilation of the main body of source code. It also means that errors in the shader code do not generate compile errors and may only appear at runtime.
\item \textbf{The in-line approach:} A shader may be defined as a string in source code, as an alternative to being read in from an external file. This can be useful for testing purposes, to check that the program has access to the shader source at runtime. It also cuts down the number of files in the project, though at the expense of clarity! As the shader code is still just a string, it is not checked when the host program is compiled and so is just as liable to runtime problems as an external file.
\item \textbf{Loading a pre-compiled shader:} A sophisticated\footnote{terrifying} third option is to load a compiled shader program as a binary file. Some publicly available Rift deformation shaders were in binary format, and were tested using this approach, but could not subsequently be modified. The pre-compiled approach means that no shader compilation is required at run time, but this comes at the cost of human readability. Given the frequent need to adjust the contents of shaders in this project, such optimisation seemed unwise.
\end{enumerate}
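The following is a minimal sketch of the first approach: loading GLSL source from a text file and compiling it at runtime with raw OpenGL calls. The project itself delegated this work to libglsl, so this is illustrative rather than the code actually used.
\begin{lstlisting}[caption={Loading and compiling a shader from a text file (sketch)}]
#include <GL/glew.h>
#include <fstream>
#include <iostream>
#include <sstream>
#include <string>

// Read a plain-text GLSL file and compile it as a shader of the given
// type (e.g. GL_VERTEX_SHADER or GL_FRAGMENT_SHADER). Note that any
// syntax errors only surface here, at runtime, via the info log.
GLuint compileShaderFromFile(const std::string &path, GLenum type)
{
    std::ifstream file(path.c_str());
    std::stringstream buffer;
    buffer << file.rdbuf();              // slurp the whole file into memory
    std::string source = buffer.str();
    const char *src = source.c_str();

    GLuint shader = glCreateShader(type);
    glShaderSource(shader, 1, &src, NULL);
    glCompileShader(shader);

    GLint ok = GL_FALSE;
    glGetShaderiv(shader, GL_COMPILE_STATUS, &ok);
    if (!ok)
    {
        char log[1024];
        glGetShaderInfoLog(shader, sizeof(log), NULL, log);
        std::cerr << path << ": " << log << std::endl;
    }
    return shader;
}
\end{lstlisting}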
\section{Applying Shaders in VTK}
Using a basic test shader, the next step was to use it in the VTK image processing pipeline. A number of potential options presented themselves, and a number of challenges were encountered.
\subsection{Applying GLSL Shaders to Materials}
Investigation revealed that applying GLSL Shaders to VTK scenes via material properties is not feasible in current versions of VTK.
One drawback of large open source projects is that the most widely available documentation may be severely out of date. In this regard, VTK is a victim of its own success. Much of the documentation \cite{Seip2005}, \cite{OBrien2009} suggests that GLSL shaders can be encapsulated within XML and applied to a VTK object as a material. This is also the approach of the official documentation \cite{VTK_XML_shaders}, where ``To Do'' and ``In progress'' comments create the deceptive appearance that these features are cutting edge. \textbf{Do not be fooled!} This use of shaders is deprecated, and has not been possible in `recent' VTK versions for some time. Due to the size of the project it can be difficult to find documentation for every feature of interest, and confirming VTK feature deprecation was to become a recurring problem during the project\footnote{If there is one thing harder than finding a needle in a haystack, it is finding a needle in a haystack when the needle was quietly removed back in 2008.}.
\subsection{Applying GLSL Shaders to Objects}
Having learned to ignore VTK shader documentation pre-dating 2008, the project discovered that shaders can be applied directly to VTK objects. This option, though initially promising, contained critical flaws.
Firstly, the approach meant that the project had to switch from Python to C++. VTK is developed in C++ and other languages are supported via wrapper code. The wrapping does not always expose the full functionality of the source code, and in this case the functionality of the Shader2 class was discovered to be only partially exposed in Python. Although it should be possible to edit the VTK source and coax the parser into generating more complete Python wrappers, even the VTK authors at Kitware admit that this process can be ``a bloody pain'' \cite{VTK_Wrapper_FAQ}. With this warning in mind, it was decided to use C++ in order to get full access to the VTK classes.
Secondly, documentation for this approach was scarce. Another VTK user had raised related questions on the VTK mailing list \cite{vtk_shader_actor}, which provided sufficient hints and suggestions to implement a working solution. Listing \ref{object_shader} shows a functional implementation, contributed to Stack Overflow in order to help document the process \cite{SO_shader_actor}. Note that unlike the test shader in Listing \ref{simple_frag} this shader cannot have a \texttt{main()} function, since that would cause a name conflict with other parts of the VTK render process.
\lstinputlisting
[label=object_shader, caption=Applying a Shader to a VTK Object]{code/vtk_object_shader.cpp}
Figure \ref{fig:object_shader} shows the result of the object shader approach using the test fragment shader: any fragment representing part of the cone object gets coloured red.
The third and most significant problem of the per-object shader approach can also be understood with reference to Figure \ref{fig:object_shader}. The target image deformation, as Figure \ref{fig:appearance_processed} showed earlier, is a barrel deformation. This means that pixel data must be displaced radially across the whole image. The cone shader does not -- and cannot -- affect pixels beyond the surface of the cone, since the fragment shader is only executed for the object's own fragments, and default shaders handle the background pixels.
These problems, in particular the third problem, led to the conclusion that object-shaders would not provide a viable method for performing image distortion suitable for Rift display.
\subsection{Applying GLSL Shaders to Buffers}
Given the requirement to apply a deformation across the entire screen, the VTK multipass-rendering framework looked to be a good place to start. This framework allows users to specify the rendering passes applied by a VTK renderer, as illustrated in Listing \ref{multi_pass}.
\lstinputlisting
[label=multi_pass, caption=Using the VTK MultiPass Rendering framework]{code/multi_pass.cpp}
The VTK multipass framework proved to be the best way to use OpenGL shaders on the VTK scene.
\section{MultiPass Rendering in VTK}
VTK's multipass rendering framework allows the user to set the rendering passes that are applied to a VTK scene. Using the framework disables the default render passes\footnote{ \texttt{vtkLightsPass}, \texttt{vtkDefaultPass}, and \texttt{vtkCameraPass}}, so basic passes must be explicitly set before additional rendering passes can be applied. At the suggestion of Dr Bernhard Kainz, a derived VTK class was written to extend the multi-pass framework; Dr Kainz had previously developed an image processing pass which proved a useful starting point. The derived class uses VTK methods to render the scene to an OpenGL texture (off-screen rendering), then renders the image back to the screen using GLSL shaders. The resulting pipeline is shown in Figure \ref{fig:vtk_extended_pipeline}, where `Saliency Pass' is the derived and extended pass.
\begin{figure}[htbp!]
\centering
\includegraphics[width=0.8\textwidth]{resources/vtk_extended_pipeline}
\caption{The extended VTK pipeline.}
\label{fig:vtk_extended_pipeline}
\end{figure}
\clearpage
\subsection{OpenGL operations in the new rendering pass}
The default pass, as shown in Figure~\ref{fig:vtk_extended_pipeline}, initialises the colour and depth buffers of the scene, and the lighting pass sets up the scene lighting. VTK needs to apply these passes sequentially, so they are grouped as a collection and then packaged into a Sequence Pass. The camera pass is applied after that sequence, and finally the custom-made Saliency pass is applied. This is the point at which things start to get interesting, as the Saliency pass works directly with OpenGL functions in order to issue commands that control the state of the GPU.
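Listing \ref{multi_pass} shows the configuration used by the project; as a reading aid, the ordering described above amounts to wiring of roughly the following shape. The class names are those of VTK 5.10, and \texttt{SaliencyPass} stands in for the derived pass, so this is a sketch rather than the project's code.
\begin{lstlisting}[caption={Sketch of the render pass ordering (VTK 5.10 class names)}]
// Illustrative wiring only; see the multi_pass listing for the real code.
// Basic passes are grouped into a collection, run in sequence, wrapped in
// a camera pass, and handed to the renderer with the custom pass last.
vtkSmartPointer<vtkLightsPass>  lights = vtkSmartPointer<vtkLightsPass>::New();
vtkSmartPointer<vtkDefaultPass> basic  = vtkSmartPointer<vtkDefaultPass>::New();

vtkSmartPointer<vtkRenderPassCollection> passes =
    vtkSmartPointer<vtkRenderPassCollection>::New();
passes->AddItem(lights);
passes->AddItem(basic);

vtkSmartPointer<vtkSequencePass> sequence =
    vtkSmartPointer<vtkSequencePass>::New();
sequence->SetPasses(passes);

vtkSmartPointer<vtkCameraPass> camera = vtkSmartPointer<vtkCameraPass>::New();
camera->SetDelegatePass(sequence);

// The derived pass wraps the camera pass and post-processes its output.
vtkSmartPointer<SaliencyPass> saliency = vtkSmartPointer<SaliencyPass>::New();
saliency->SetDelegatePass(camera);

renderer->SetPass(saliency);   // renderer is a vtkOpenGLRenderer
\end{lstlisting}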
\begin{figure}[htbp!]
\centering
\includegraphics[width=0.4\textwidth]{resources/vtk_saliency}
\caption{The new VTK saliency pass.}
\label{fig:vtk_saliency}
\end{figure}
Figure \ref{fig:vtk_saliency} shows the series of OpenGL operations taking place within the Saliency pass. Frame Buffer Objects (FBOs) are, essentially, areas of memory that can hold picture data; they act as staging areas between the processor and the display hardware. Off-screen rendering to a texture means rendering the scene to an area of memory on the graphics card instead of to the screen. This graphics card memory can be accessed very quickly by OpenGL functions, and is perfectly positioned for subsequent GLSL rendering steps.
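The raw OpenGL behind this render-to-texture step is sketched below: a texture the size of the Rift display is attached to an FBO, and while that FBO is bound, all drawing lands in the texture rather than on the screen. This is a generic illustration of the mechanism rather than the project's SaliencyPass code.
\begin{lstlisting}[caption={Render-to-texture with a Framebuffer Object (sketch)}]
#include <GL/glew.h>

// Create a colour texture and an FBO that renders into it. A depth
// attachment would be added the same way; it is omitted here for brevity.
GLuint createRenderTarget(int width, int height, GLuint &colourTex)
{
    glGenTextures(1, &colourTex);
    glBindTexture(GL_TEXTURE_2D, colourTex);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, width, height, 0,
                 GL_RGBA, GL_UNSIGNED_BYTE, NULL);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);

    GLuint fbo = 0;
    glGenFramebuffers(1, &fbo);
    glBindFramebuffer(GL_FRAMEBUFFER, fbo);
    glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                           GL_TEXTURE_2D, colourTex, 0);

    // OpenGL stays silent about mistakes, so check completeness explicitly.
    if (glCheckFramebufferStatus(GL_FRAMEBUFFER) != GL_FRAMEBUFFER_COMPLETE)
        return 0;

    glBindFramebuffer(GL_FRAMEBUFFER, 0);   // return to on-screen rendering
    return fbo;
}
\end{lstlisting}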
\subsection{Significant implementation challenges}
Several problems needed to be overcome before this approach worked satisfactorily.
Firstly, the \texttt{vtkRenderer->SetPass()} method, central to the entire approach of delegating render passes, was quietly deprecated in VTK 6.x. Once this was identified\footnote{By carefully comparing different versions of the VTK documentation.}, the decision was made to work with VTK 5.10, on the basis that it was the most recent release of VTK in which \texttt{SetPass} could still be found.
Next, the \texttt{SaliencyPass} code made use of \textbf{libglsl} to load, compile, and generally manage \textbf{GLSL} shaders. This library worked with GLSL \texttt{\#version 110} but was incompatible with recent versions of GLSL. Certain functions in OpenGL 4.4 required GLSL \texttt{\#version 440} compatibility, and this in turn required that \textbf{libglsl} be updated. Fortunately, much of the functionality previously exposed through ``experimental'' ARB extensions had since been promoted into the OpenGL core, so the ARB functions could simply be replaced by their modern ``core'' equivalents.
Numerous OpenGL problems were encountered. OpenGL appears to follow the ``Silence is golden'' design philosophy \cite{the_art_of_unix_programming}, and it was necessary to query the OpenGL error state, as shown in Listing \ref{gl_check_error}, to reveal the origin of GL errors. In the absence of diligent checks, errors manifest as a blank screen or a segmentation fault at runtime.
\lstinputlisting
[label=gl_check_error, caption=Checking OpenGL error state]{code/gl_check_error.cpp}
\section{Rift Deformation Shaders}
The final Rift deformation shaders are shown in Listings \ref{distortion_vert} \& \ref{distortion_frag}. The vertex shader performs a simple pass-through of vertex position, and all of the image processing occurs in the fragment shader. The shaders (ab)use GLSL to perform three functions. They shift the image in order to align the camera viewpoint centre with the lens centre. They apply a barrel deformation to the entire image in order to compensate for the pincushion deformation of the Rift's lenses. They also mask the edges of the screen where the OpenGL 340.x drivers leave image artefacts\footnote{Nvidia note that a bug in recent drivers ``could prevent OpenGL Framebuffer Objects (FBOs) from being properly redrawn after a modeswitch''. This can lead to beautiful but undesirable render artefacts at the side of the screen.}.
\lstinputlisting
[label=distortion_vert, caption=The Vertex Distortion Shader]
{code/Distortion.vs}
\lstinputlisting
[label=distortion_frag, caption=The Frament Distortion Shader]
{code/Distortion.fs}
\begin{figure}[htbp!]
\centering
\includegraphics[width=0.8\textwidth]{resources/rift_screen}
\caption{Lens and screen centres on the Rift display.}
\label{fig:rift_screen}
\end{figure}
Figure~\ref{fig:rift_screen} shows how the lens positions, aligned with the user's eyes, are \textit{not} aligned with the left or right screen centres. This causes a misalignment problem, since the VTK camera viewpoints are aligned with the screen centres. The fragment shader code fixes this problem by applying a lateral shift to each image. For example, when rendering the left-eye image, every fragment value is laterally offset by the lens-centre to screen-centre displacement. With the Rift DK1 this is an adjustment of 48 pixels, so when the shader is called for a pixel at (0, 0) the offset is applied and all subsequent calculations are made as though for a pixel at (48, 0). This solution will require revision for new hardware with different screen offsets.
After correcting for the lens centre displacement, the fragment shader applies a barrel distortion to the image. This distortion corrects for the pincushion distortion of the Rift lenses. The polynomial used to do this is based on ParaView distortion shaders developed by Dr Stephan Rogge.
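The shape of this correction can be summarised in a few lines of GLSL. The sketch below shows the two steps just described, the lens-centre shift followed by a radial polynomial scaling; the uniform names and coefficient values are illustrative placeholders rather than the constants used in Listing \ref{distortion_frag}.
\begin{lstlisting}[caption={Sketch of the barrel pre-distortion in GLSL}]
#version 440
uniform sampler2D renderedScene;  // off-screen texture from the FBO pass
uniform vec2 lensCentre;          // lens centre, in texture coordinates
uniform vec4 k;                   // distortion coefficients (placeholders)
in  vec2 texCoord;
out vec4 fragColour;

void main()
{
    // 1. Work relative to the lens centre rather than the screen centre.
    vec2 offset = texCoord - lensCentre;

    // 2. Scale the sampling position by a polynomial in the squared radius,
    //    producing the barrel distortion that counters the lens pincushion.
    float r2 = dot(offset, offset);
    float scale = k.x + r2 * (k.y + r2 * (k.z + r2 * k.w));
    vec2 sampleCoord = lensCentre + offset * scale;

    // 3. Mask samples that fall outside the rendered texture.
    if (any(lessThan(sampleCoord, vec2(0.0))) ||
        any(greaterThan(sampleCoord, vec2(1.0))))
        fragColour = vec4(0.0, 0.0, 0.0, 1.0);
    else
        fragColour = texture(renderedScene, sampleCoord);
}
\end{lstlisting}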
\begin{figure}[htbp!]
\centering
\includegraphics[width=0.8\textwidth]{resources/distortions}
\caption{Lens distortions illustrated using a square grid.}
\label{fig:distortions}
\end{figure}
After correcting for the lens distortion, it would be desirable to apply a chromatic aberration correction in order to mitigate the dispersive effect of the Rift's lenses, as mentioned in Section~\ref{sec:need_for_shaders}. This was not implemented because chromatic aberration parameters were unavailable, and there was insufficient time to determine them experimentally.
\section{Rift SDK Deformations}
In theory, an image texture can be passed to the Rift SDK, which can then apply the deformations directly. Since v0.3, Oculus have stated that SDK rendering is the preferred approach. However, all example code for SDK rendering targeted Windows systems with DirectX rather than OpenGL, and Oculus had not yet made Linux support available in the current SDK. It was therefore decided that the `legacy' approach of using GLSL shaders was the sensible choice given the objectives of the project.
\section{Distorted images}
Figures \ref{fig:undistorted_cone} \& \ref{fig:undistorted_surface} show the result of rendering content with a simple pass-through shader. Figures \ref{fig:distorted_cone} \& \ref{fig:distorted_surface} show the result of the distortion shaders developed in this chapter. The effect of the barrel deformation on the straight lines of the cone content is particularly marked.
\begin{figure}[htbp!]
\centering
\begin{subfigure}{\textwidth}
\centering
\includegraphics[width=\linewidth]{resources/undistorted_surface}
\caption{Surface appearance using a pass-through shader.}
\label{fig:undistorted_surface}
\end{subfigure}
\centering
\begin{subfigure}{\textwidth}
\centering
\includegraphics[width=\linewidth]{resources/distorted_surface}
\caption{Surface appearance using the Rift distortion shader.}
\label{fig:distorted_surface}
\end{subfigure}
\caption{A comparison of the effect of the Rift distortion shader on surface content.}
\label{fig:surface_distortion_comparison}
\end{figure}
\begin{figure}[htbp!]
\centering
\begin{subfigure}{\textwidth}
\centering
\includegraphics[width=\linewidth]{resources/undistorted_cone}
\caption{Cone appearance using a pass-through shader.}
\label{fig:undistorted_cone}
\end{subfigure}
\centering
\begin{subfigure}{\textwidth}
\centering
\includegraphics[width=\linewidth]{resources/distorted_cone}
\caption{Cone appearance using the Rift distortion shader.}
\label{fig:distorted_cone}
\end{subfigure}
\caption{A comparison of the effect of the Rift distortion shader on cone content.}
\label{fig:cone_distortion_comparison}
\end{figure}
\chapter{Results}
The results chapter begins by describing the image content and the VR software developed during the project. Testing methodologies are mentioned, particularly the use of CDash and colourful shader debugging tools. A survey was carried out using the demonstration system, and its results are discussed. Drawing on comments from the survey, suggestions are made for further work.
\clearpage
\section{Final System Specification}
The VR system developed in this project consists of two parts: a methodology for converting HCP data into a suitable form for importing into the VTK pipeline, and software that can load the generated data and render it to the Rift display.
\subsection{Data conversion}
As detailed in Chapter 3, the project established that HCP data could be converted to a suitable form for stereoscopic rendering in VTK.
\begin{figure}[htbp!]
\centering
\begin{subfigure}{0.3\textwidth}
\centering
\includegraphics[width=0.8\linewidth]{resources/content_surface}
\caption{Cortical surface}
\label{fig:content_surface}
\end{subfigure}%
\centering
\begin{subfigure}{0.3\textwidth}
\centering
\includegraphics[width=0.8\linewidth]{resources/content_tracts}
\caption{Tractography}
\label{fig:content_tracts}
\end{subfigure}
\centering
\begin{subfigure}{0.3\textwidth}
\centering
\includegraphics[width=0.8\linewidth]{resources/content_btain}
\caption{Dense mesh}
\label{fig:content_btain}
\end{subfigure}
\caption{Demonstration content generated during the project.}
\label{fig:content_final}
\end{figure}