Dr George Dobre, School of Physical Sciences

I'm Dr George Dobre, I'm a member of the Applied Optics Group in the School of Physical Sciences at the University of Kent and I conduct research in optics and photonics with an emphasis on photonics for biomedicine.

>>I want to talk to you today about some of the research that has been going on at Kent in our research group for the last few decades, but also about what we're doing currently.

>>So, the focus of our research group over time has been very much that of providing solutions to problems that arose mainly in industry. Those solutions were mainly focused on measurement, particularly measurement of physical parameters such as displacement, temperature and pressure, and all sorts of other parameters that could be translated into one of these, and on offering solutions that can be taken up quite easily - which often meant making things robust enough to be taken out of the lab and deployed in real-life situations.

>>So some of the research has actually been carried out in the area of fibre optic sensors over much of the existence of the Applied Optics Group, from the 1970s onwards. But the interferometric techniques that we have used in order to work out things like how far away something is, how hot it is, or how much pressure there is inside a chamber can also be used to map different layers of tissue in the human body, so I want to talk to you about some of that.

>>Now, the interesting thing about lasers, which are ubiquitous nowadays - we're constantly surrounded by them, and this is one such example of a laser - is that they tend to emit light at a single wavelength, a single colour. That is something to do with the way they are built - mostly, it's much easier to make a laser that does that - but we don't have much use for these types of lasers in the low-coherence regime that we work in in photonics, and let me explain why.
>>If you look at the diagram on the right, it represents, if you will, ripples or interference fringes which are created at different colours, and you can see that the spacing between them is really quite different depending on the colour; that's because the spacing between them is related to the wavelength.

>>Now, if you want to pick out a standout feature in any of those representations - the red, the yellow and the green - you will struggle to do so. However, what they all have in common is a central maximum in the middle of the pattern, so when you overlap them you will continue to have a central maximum while the ripples away from it amount to nothing; all those features just wash out, away from the central maximum. This is the key property of low-coherence light that we are trying to exploit in our research.

>>As we do that, we build instruments which allow us to see into somebody's eye, for example, and our instruments look nothing like the instruments that used to be found in ophthalmology practices about 150 years ago. So, when you examine features in a patient's eye nowadays, you'd be mostly interested in producing data in a number of orientations, so that you give the ophthalmologist access to a richness of information that allows them to then make an accurate diagnosis.

>>And so I would point out the three different orientations that correspond to much of the data that we display, and the clear distinction that should be made between the horizontal, or what we call the B scan, and the transverse, or what we call the C scan - that's the B scan in pink there and the C scan in green. This is particularly useful for things like the eye, because there's definitely an advantage in being able to show our types of images in the same orientation as those that ophthalmologists have learnt about and trained to recognise.
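The washing-out of the fringes away from the central maximum can be sketched numerically: sum the fringe patterns of many colours and only the zero-path-difference peak survives. This is a minimal illustration; the wavelength band and spacings are assumptions for the sketch, not the group's actual source.

```python
import numpy as np

# Optical path difference axis (microns)
delta = np.linspace(-20, 20, 4001)

# A band of wavelengths (microns) standing in for a broadband,
# low-coherence source -- illustrative values only.
wavelengths = np.linspace(0.75, 0.95, 41)

# Each colour produces fringes 1 + cos(2*pi*delta/lambda);
# averaging over colours overlaps all the patterns.
fringes = np.mean(1 + np.cos(2 * np.pi * delta[:, None] / wavelengths), axis=1)

# The central maximum at delta = 0 survives; far from it the
# differently spaced ripples cancel and wash out towards the mean.
centre = fringes[delta.size // 2]            # at zero path difference
wings = np.abs(fringes[:200] - 1.0).max()    # residual ripple far from centre
print(centre, wings)
```

The contrast between `centre` (the full fringe maximum) and `wings` (the near-zero residual) is exactly the property that lets a low-coherence interferometer localise a reflection in depth.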
>>So much of my research has been devoted to finding ways to represent this rich information, which is really three-dimensional information, in the different orientations that would help the ophthalmologist make that diagnosis.

>>As you can see, you could have the information from the eye fundus, or the C scan, shown in these diagrams in black and white or greyscale, and you can also have B scan depth information, which is often very illuminating because it really gives you access to what happens below the superficial layers; that B scan depth information is indicated by the coloured overlays.

>>When you display that colour overlay under the slice that represents the eye fundus, things become even clearer. In this particular case we are able to see, for example, a retinal defect where the retinal layers are beginning to lift up, and we see a little bit of a hole forming below the most superficial layers. Being able to pick out features in either of these orientations is very important: the ophthalmologist will often look for a pattern, and in this particular case it's an eye condition called central serous retinopathy, where we are able to see that recognisable bullseye pattern - but only in the C scan, only in the fundus image; it's not so obvious when you look at the depth scan.

>>So, that's one outcome that we expect, but how do we ensure that we have the tools to produce such images? Well, we have to start by building an interferometer, and really the interferometer is just a very simple device that splits light into two arms: an object arm, or sample arm, which takes light and shines it onto the feature of interest - in the previous case that would have been the retina.
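The B scan and C scan orientations described above are just two ways of slicing the same three-dimensional data set. A minimal sketch, with an assumed `volume[z, y, x]` indexing and toy data rather than real OCT output:

```python
import numpy as np

# Toy 3-D data cube indexed as volume[z, y, x]:
# z is depth into the tissue, x and y are the transverse directions.
# Shape and contents are illustrative only.
rng = np.random.default_rng(0)
volume = rng.random((64, 128, 128))

# B scan: a vertical slice -- depth (z) against one transverse axis (x)
b_scan = volume[:, 40, :]

# C scan: a horizontal slice at a fixed depth -- the en-face / fundus view
c_scan = volume[10, :, :]

print(b_scan.shape, c_scan.shape)
```

The same cube therefore yields either view at no extra cost, which is why the display orientation can be chosen to match what the ophthalmologist has trained to recognise.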
In the reference arm, meanwhile, light propagates to, typically, a mirror and returns, and then the light propagating in the two arms is allowed to interfere. The interference pattern will tell us - if you remember the previous slide where you saw the different fringes in different colours - whether we're looking at a central maximum or we're out in the wings and getting very little. So by doing that at each location on the retina we can build an XYZ three-dimensional map of where things are, essentially: is a reflection event happening at a particular location or not and, if it is, how deep in the tissue is it?

>>And so we would typically detect the interference pattern with either a dispersive method, which spreads the spectrum onto a camera, or with something called a swept source, which is essentially a laser of a single wavelength that sweeps its wavelength very many times a second - typically a few hundred thousand times a second, or even faster than that. In both cases we would be looking for those maxima of interference, and if we find a maximum of interference we can work out where things are.

>>And to extend our technique, we have recently started looking at reflection events from gold nanoparticles. Why do we call this an extension of the technique? Because, in this particular case, we are able to produce phase maps, or information about the distance travelled, which is much more accurate than without the ability to measure phase.

>>So measurement of phase is essentially measuring how wide one of those fringes is and where you are on one of the fringes. We're getting down to very fine displacement amounts - we're talking about nanometers rather than microns.
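In the dispersive (spectrometer-on-camera) detection just described, a reflector at depth z imprints a modulation on the recorded spectrum, and a Fourier transform over wavenumber recovers the depth. A minimal sketch under assumed, illustrative numbers - these are not the group's instrument parameters:

```python
import numpy as np

# Evenly spaced wavenumbers k (rad/micron) across the source spectrum
k = np.linspace(7.0, 8.0, 1024)

z_true = 120.0  # reflector depth in microns (one reflection event)

# Spectral interferogram: DC term plus a modulation cos(2*k*z).
# The factor 2 is the round trip: light travels to the reflector and back.
spectrum = 1.0 + 0.5 * np.cos(2 * k * z_true)

# Fourier transforming over k converts modulation frequency into depth.
a_scan = np.abs(np.fft.rfft(spectrum - spectrum.mean()))
dk = k[1] - k[0]
depths = np.fft.rfftfreq(k.size, d=dk) * np.pi  # cos(2kz) has frequency z/pi

z_est = depths[np.argmax(a_scan)]
print(round(z_est, 1))
```

A deeper reflector modulates the spectrum faster, so each depth maps to its own frequency - this is why a single camera exposure yields a whole depth profile (an A scan) at once.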
>>So gold nanoparticles have this ability to migrate in the human body to regions where cancers are developing, and they have been used in therapy because you can target the area with a near-infrared laser, which causes them to heat up - they absorb very well in that region - and to destroy cancer cells. But, more importantly, we want to know exactly where they are and where the highest aggregation of gold nanoparticles is, so in order to do that we have conducted a series of experiments where we place these particles onto a slide and then illuminate them with a beam, just to see what sort of range of actual displacements we can expect.

>>If we simply measure the thickness of the gold nanoparticle layer that is shown either on the left of the picture or on the right of the picture, we can definitely see some speckle in the image, some variation in the phase map, whereas in the central picture, which is taken from the smooth glass surface, we shouldn't expect to see anything other than the typical bullseye fringe pattern that you might get as you scan.

>>So, having these phase maps gives us confidence that we can move towards detecting individual reflection events in depth, which we have done, and we can see that the sandwich we created - by placing a glass microscope coverslip on top of some real pork tissue - generates three different reflection events. We have been able to measure the depths of all of these events and we can actually see, down to the nearest few microns or so, where the boundaries of the glass slide are.
>>Where we hope to do better is now to look at those boundaries and actually fix them to within a thousandth of the previous resolution, so getting down from the micrometer scale to the nanometer scale. We can do that, and we can calibrate our technique by using a piezo transducer whose movement we control down to the nearest nanometer and measuring the phase output of that transducer, which is the graph we can see on the right of the picture there. When we drive the transducer with decreasing voltages, we should expect that at the smallest driving voltage the phase output will be buried in noise, and that is our axial detection limit; as I said earlier, that's typically a few nanometers. So we are able to see with nanometer precision in the axial direction.

>>So, in our gold nanoparticle layer, as we produce the X and Y transversal scan, we're actually able to build pictures of the phase, and we can see that the phase changes when the layer is illuminated with an excitation beam which is in the near-infrared. This excitation beam can also be tuned, or dialled down, in terms of power - we have looked at how little power we can get away with in order to still produce an effect - and we found that the power needed is sufficient to generate displacements of the order of 10 or 20 nanometers or so, easily picked up by our technique. Those displacements are manifested in a change of greyscale, so when you look at those pictures and you see that the shutter is closed or the shutter is open, that means that the excitation beam is stopped by the shutter or allowed to shine through.

>>So, we have been able to understand a little bit about thresholds and a little bit about how long it takes for this process to manifest itself in a change of phase.
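The conversion from measured phase to displacement is what makes those 10-20 nanometer surface movements visible: a displacement dz changes the round-trip optical path by 2 dz, giving a phase shift of 4*pi*dz/lambda. A short sketch - the 850 nm probe wavelength is an assumption for the example, not a figure from the talk:

```python
import math

# Round-trip geometry: a surface displacement dz changes the optical
# path by 2*dz, so the interferometric phase shift is 4*pi*dz/lambda.
def phase_shift(dz_nm: float, wavelength_nm: float) -> float:
    """Phase shift in radians for a displacement dz (both in nanometers)."""
    return 4 * math.pi * dz_nm / wavelength_nm

# Illustrative numbers: a 10 nm displacement probed at 850 nm
dphi = phase_shift(10.0, 850.0)
print(round(dphi, 3))  # -> 0.148 radians, a measurable fraction of a fringe
```

One full fringe (2*pi of phase) corresponds to only half a wavelength of displacement, so a phase noise floor of a few hundredths of a radian translates directly into the few-nanometer axial detection limit described above.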
This is a very exciting area of research and we're continuing with it, but I want to leave you with some images from our group. The top row shows transversal sections through skin - in this particular case you might see ridges corresponding to fingerprints - and the bottom section contains images of the eye, and in particular the retina, the central one being the macula, which is the region of highest sensitivity; you can see quite good discrimination between the layers that make up the retina.

>>So, thank you for your attention.