
8–10 Apr 2024
Bürgerhaus Garching
Europe/Berlin timezone

Establishing Multi-resolution, Multi-modal Data Fusion of Synchrotron Imaging Modalities at Diamond Light Source

9 Apr 2024, 16:50
2h
Poster (MLC Posters)

Speaker

Calum Green (Imperial College London)

Description

In the medical imaging field, deep learning networks (DLNs) have enabled many recent advances in image processing, such as super-resolution and segmentation. Similar applications have been studied in digital rock and Li-ion battery research, where super-resolution deep learning models have been successfully deployed to enhance the resolution of rock X-ray CT (XCT) images and microscopy images of Li-ion electrodes. Diamond Light Source produces a significant quantity of imaging, scattering, and spectroscopic data across all 33 beamlines, with hundreds of petabytes (PB) of data expected once Diamond-II comes online. DLNs have found increasing applicability at the synchrotron in processing and analysing this data across the range of modalities (imaging, scattering, spectroscopy) and length scales, for example by using phase information from diffraction data to assist image segmentation, or by overcoming the physical limitations of certain detectors.

Super-resolution tasks help to overcome limitations of beamline detectors, such as field of view (FOV): super-resolution deep learning models can enhance the resolution of larger FOVs that would otherwise be unobtainable due to limitations of the experimental equipment. Obtaining a super-resolution dataset at a larger FOV then allows cross-correlation of datasets between beamlines, providing a pathway to multimodal data fusion through the combination of different modes across different beamlines. One such example is combining higher-resolution XCT data with X-ray diffraction CT (XRD-CT). This multimodal dataset could be used for high-resolution segmentation tasks that require no manual annotation, by using the phase information acquired from XRD-CT to train the segmentation DLN (mock-up example shown in the attached figure).
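To make the annotation-free segmentation idea concrete, here is a minimal, purely illustrative sketch (not the authors' pipeline). It assumes an XCT patch co-registered with an XRD-CT phase map whose per-voxel phase indices serve directly as segmentation labels; all names, shapes, and the toy network are hypothetical stand-ins, and the data is synthetic.

```python
# Hypothetical sketch: train a segmentation network on XCT patches using
# co-registered XRD-CT phase maps as per-voxel labels (no manual annotation).
# Data here is synthetic; in practice the XCT/XRD-CT volumes must be aligned.
import torch
import torch.nn as nn

N_PHASES = 3  # assumed number of crystalline phases resolved by XRD-CT

# Toy fully convolutional segmenter (stand-in for a real segmentation DLN)
model = nn.Sequential(
    nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 16, 3, padding=1), nn.ReLU(),
    nn.Conv2d(16, N_PHASES, 1),  # per-pixel logits over phases
)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# Synthetic stand-ins: a batch of XCT patches and XRD-CT-derived phase labels.
xct_patches = torch.randn(8, 1, 64, 64)                  # greyscale XCT patches
phase_labels = torch.randint(0, N_PHASES, (8, 64, 64))   # labels from XRD-CT

for step in range(100):
    opt.zero_grad()
    logits = model(xct_patches)              # (B, N_PHASES, H, W)
    loss = loss_fn(logits, phase_labels)     # XRD phases act as ground truth
    loss.backward()
    opt.step()
```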

In our previous work, we successfully developed and used a super-resolution generative adversarial network (SRGAN) to enhance the resolution of artificially downsampled XCT zeolite datasets acquired on the Dual Imaging and Diffraction beamline (DIAD/K11). Our SRGAN methods have also shown promise in enhancing the resolution of lab-based XCT datasets of porous media, utilising an experimentally obtained, fully paired, spatially correlated high- and low-resolution dataset. These results demonstrate the feasibility of applying our super-resolution techniques to synchrotron-based XCT datasets as part of our in-development cross-beamline XCT fusion pipeline for automatic XCT segmentation using X-ray diffraction (XRD) as a ground truth.
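For illustration only, the sketch below shows the generic SRGAN-style training setup this paragraph describes: high-resolution patches are artificially downsampled to create spatially paired low-resolution inputs, and a generator is trained with a pixel loss plus an adversarial loss from a discriminator. The architectures, scale factor, and loss weighting are hypothetical toy choices, not the authors' actual models.

```python
# Hypothetical sketch of paired-data SRGAN-style training: HR XCT patches are
# artificially downsampled to make aligned LR inputs; a generator learns to
# recover HR detail under pixel + adversarial losses. Toy stand-in networks.
import torch
import torch.nn as nn
import torch.nn.functional as F

SCALE = 4  # assumed upsampling factor

generator = nn.Sequential(              # stand-in for the SRGAN generator
    nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
    nn.Conv2d(32, SCALE * SCALE, 3, padding=1),
    nn.PixelShuffle(SCALE),             # sub-pixel upsampling back to HR grid
)
discriminator = nn.Sequential(          # stand-in for the SRGAN discriminator
    nn.Conv2d(1, 32, 3, stride=2, padding=1), nn.LeakyReLU(0.2),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(32, 1),
)
g_opt = torch.optim.Adam(generator.parameters(), lr=1e-4)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=1e-4)
bce = nn.BCEWithLogitsLoss()

hr = torch.rand(4, 1, 128, 128)         # synthetic high-resolution XCT patches
lr = F.avg_pool2d(hr, SCALE)            # artificial downsampling -> paired LR

# --- discriminator step: distinguish real HR from generated SR ---
d_opt.zero_grad()
sr = generator(lr).detach()
d_loss = (bce(discriminator(hr), torch.ones(4, 1))
          + bce(discriminator(sr), torch.zeros(4, 1)))
d_loss.backward()
d_opt.step()

# --- generator step: pixel fidelity plus fooling the discriminator ---
g_opt.zero_grad()
sr = generator(lr)
g_loss = F.mse_loss(sr, hr) + 1e-3 * bce(discriminator(sr), torch.ones(4, 1))
g_loss.backward()
g_opt.step()
```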

Primary author

Calum Green (Imperial College London)

Co-authors

Prof. Daniele Dini (Imperial College London)
Dr James Le Houx (Diamond Light Source)
Dr Paul Quinn (Ada Lovelace Centre)
