Today, a vast collection of biomedical "big data" is available in individual laboratories and public repositories, for example large image datasets from drug and genetic screens, large collections of biomedical text, and genomic, proteomic and epidemiologic datasets. Such data can potentially be used to define biological pathways, discover new disease links and develop new therapies. Despite this potential, the full richness of these data is not yet exploited, both because the complex layers of information they contain are difficult to extract and understand, and because integrating information across very diverse data types remains extremely challenging.
Artificial intelligence-based methods, such as deep learning, are uniquely suited to processing large and complex datasets, and in recent years this field has made great advances, for example in computer vision and natural language processing. In this project, we aim to build on these advances by developing artificial intelligence-based methods for analysing and integrating different types of biomedical big data, including images, text and tabular data.
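As a purely illustrative sketch (not the project's actual architecture), one common way to integrate heterogeneous data types is "late fusion": each modality is first encoded into a fixed-size vector, and the per-modality vectors are then concatenated into a single joint representation for downstream prediction. All dimensions, weights and feature choices below are arbitrary assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def encode(x, w):
    """Project a flattened input into a shared embedding space (linear + ReLU)."""
    return np.maximum(x @ w, 0.0)

# Toy inputs for one sample (dimensions are arbitrary assumptions):
image = rng.random(64)    # e.g. flattened image features from a screen
text = rng.random(32)     # e.g. an embedding of a biomedical text snippet
tabular = rng.random(8)   # e.g. a row of clinical or omics measurements

# Random projection weights standing in for trained per-modality encoders.
w_img = rng.random((64, 16))
w_txt = rng.random((32, 16))
w_tab = rng.random((8, 16))

# Late fusion: concatenate the per-modality embeddings into one joint vector,
# which a downstream model could use for classification or regression.
joint = np.concatenate([encode(image, w_img),
                        encode(text, w_txt),
                        encode(tabular, w_tab)])
print(joint.shape)  # (48,)
```

In practice the linear encoders would be replaced by trained networks suited to each modality (e.g. a convolutional network for images and a language model for text), but the fusion step itself follows the same pattern.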