11th INTERNATIONAL SCHOOL ON DEEP LEARNING
(and the Future of Artificial Intelligence)
DeepLearn 2024
Porto – Maia, Portugal
July 15-19, 2024
https://deeplearn.irdta.eu/2024/
******************************************************
Co-organized by:
University of Maia
Institute for Research Development, Training and Advice – IRDTA
Brussels/London
******************************************************
Early registration: November 25, 2023
******************************************************
SCOPE:
DeepLearn 2024 will be a research training event with a global scope, aiming at updating participants on the most recent advances in the critical and fast-developing area of deep learning. Previous events were held in Bilbao, Genova, Warsaw, Las Palmas de Gran Canaria, Guimarães, Las Palmas de Gran Canaria, Luleå, Bournemouth, Bari and Las Palmas de Gran Canaria.
Deep learning is a branch of artificial intelligence covering a spectrum of current frontier research and industrial innovation that provides more efficient algorithms to deal with large-scale data in a huge variety of environments: computer vision, neurosciences, speech recognition, language processing, human-computer interaction, drug discovery, health informatics, medical image analysis, recommender systems, advertising, fraud detection, robotics, games, finance, biotechnology, physics experiments, biometrics, communications, climate sciences, geographic information systems, signal processing, genomics, materials design, video technology, social systems, etc.
The field also raises a number of relevant questions about the robustness of the algorithms, explainability, transparency, and important ethical concerns at the frontier of current knowledge that deserve careful multidisciplinary discussion.
Most deep learning subareas will be covered and the main challenges identified through 18 four-and-a-half-hour courses, 2 keynote lectures, 1 round table and a few hackathon-type competitions among students, which will tackle the most active and promising topics. Renowned academics and industry pioneers will lecture and share their views with the audience. The organizers are convinced that outstanding speakers will attract the brightest and most motivated students. Face-to-face interaction and networking will be main ingredients of the event. Full live remote participation will also be possible.
ADDRESSED TO:
Graduate students, postgraduate students and industry practitioners will be typical profiles of participants. However, there are no formal prerequisites for attendance in terms of academic degrees, so people at earlier or later stages of their careers will be welcome as well.
Since there will be a variety of levels, specific background knowledge may be assumed for some of the courses.
Overall, DeepLearn 2024 is addressed to students, researchers and practitioners who want to keep themselves updated on recent developments and future trends. All will surely find it fruitful to listen to and discuss with major researchers, industry leaders and innovators.
VENUE:
DeepLearn 2024 will take place in Porto, the second largest city in Portugal, whose historic centre was recognized by UNESCO in 1996 as a World Heritage Site. The venue will be:
University of Maia
Avenida Carlos de Oliveira Campos - Castêlo da Maia
4475-690 Maia
Porto, Portugal
STRUCTURE:
3 courses will run in parallel during the whole event. Participants will be able to freely choose the courses they wish to attend as well as to move from one to another.
All lectures will be video-recorded. Participants will be able to watch them again for 45 days after the event.
An open session will give participants the opportunity to present their own work in progress in 5 minutes. Companies will also be able to present their technical developments for 10 minutes.
This year's edition of the school will schedule hands-on activities including mini-hackathons, where participants will work in teams to tackle several machine learning challenges.
Full live online participation will be possible. The organizers highlight, however, the importance of face-to-face interaction and networking in this kind of research training event.
KEYNOTE SPEAKERS: (to be completed)
Jiawei Han (University of Illinois Urbana-Champaign), How Can Large Language Models Contribute to Effective Text Mining?
PROFESSORS AND COURSES: (to be completed)
Luca Benini (Swiss Federal Institute of Technology Zurich), [intermediate/advanced] Open Hardware Platforms for Edge Machine Learning
Gustau Camps-Valls (University of València), [intermediate] AI for Earth, Climate, and Sustainability
Nitesh Chawla (University of Notre Dame), [introductory/intermediate] Introduction to Representation Learning on Graphs
Daniel Cremers (Technical University of Munich), [introductory/advanced] Deep Networks for 3D Computer Vision
Peng Cui (Tsinghua University), [intermediate/advanced] Stable Learning for Out-of-Distribution Generalization: Invariance, Causality and Heterogeneity
Sergei V. Gleyzer (University of Alabama), [introductory/intermediate] Machine Learning Fundamentals and Their Applications to Very Large Scientific Data: Rare Signal and Feature Extraction, End-to-End Deep Learning, Uncertainty Estimation and Real-Time Machine Learning Applications in Software and Hardware
Hayit Greenspan (Icahn School of Medicine at Mount Sinai / Tel Aviv University), tba
Yulan He (King's College London), [intermediate/advanced] Machine Reading Comprehension with Large Language Models
Frank Hutter (University of Freiburg), [intermediate/advanced] AutoML
George Karypis (University of Minnesota), [intermediate] Deep Learning Models and Systems for Real-World Graph Machine Learning
Hermann Ney (RWTH Aachen University / AppTek), [intermediate/advanced] Machine Learning and Deep Learning for Speech & Language Technology: A Probabilistic Perspective
Massimiliano Pontil (Italian Institute of Technology), tba
Elisa Ricci (University of Trento), [intermediate] Continual and Adaptive Learning in Computer Vision
Xinghua Mindy Shi (Temple University), [intermediate] Trustworthy Artificial Intelligence for Health and Medicine
Laurens van der Maaten (Meta AI), [introductory/intermediate] Introduction to Computer Vision
Danxia Xu (National Research Council of Canada), [introductory] Photonic Chips and Artificial Intelligence: An Interplay
OPEN SESSION:
An open session will collect 5-minute voluntary presentations of work in progress by participants.
They should submit a half-page abstract containing the title, authors, and summary of the research to david@irdta.eu by July 7, 2024.
INDUSTRIAL SESSION:
A session will be devoted to 10-minute demonstrations of practical applications of deep learning in industry.
Companies interested in contributing are welcome to submit a 1-page abstract containing the program of the demonstration and the logistics needed. People in charge of the demonstration must register for the event.
Expressions of interest have to be submitted to david@irdta.eu by July 7, 2024.
HACKATHON ACTIVITIES:
A section of the event will consist of hands-on activities including mini-hackathons, where participants will work in teams to tackle several machine learning challenges.
EMPLOYERS:
Organizations searching for personnel well skilled in deep learning will be provided a space for one-to-one contacts.
It is recommended to produce a 1-page .pdf leaflet with a brief description of the organization and the profiles sought, to be circulated among the participants prior to the event. People in charge of the search must register for the event.
Expressions of interest have to be submitted to david@irdta.eu by July 7, 2024.
SPONSORS:
Companies/institutions/organizations wishing to sponsor the event can download the sponsorship leaflet from
https://deeplearn.irdta.eu/2024/sponsoring/
ORGANIZING COMMITTEE:
José Paulo Marques dos Santos (Maia, local chair)
Carlos Martín-Vide (Tarragona, program chair)
Sara Morales (Brussels)
José Luís Reis (Maia)
Luís Paulo Reis (Porto)
David Silva (London, organization chair)
REGISTRATION:
It has to be done at
https://deeplearn.irdta.eu/2024/registration/
The selection of 8 courses requested in the registration template is only tentative and non-binding. For logistical reasons, it will be helpful to have an estimate of the demand for each course.
Since the capacity of the venue is limited, registration requests will be processed on a first come, first served basis. The registration period will be closed and the online registration tool disabled when the capacity of the venue is exhausted. It is highly recommended to register prior to the event.
FEES:
Fees comprise access to all program activities and lunches.
There are several early registration deadlines. Fees depend on the registration deadline.
The fees for on-site and for online participation are the same.
ACCOMMODATION:
Accommodation suggestions will be available at
https://deeplearn.irdta.eu/2024/accommodation/
CERTIFICATE:
A certificate of successful participation in the event will be delivered indicating the number of hours of lectures. This should be sufficient for those participants who plan to request ECTS recognition from their home university.
QUESTIONS AND FURTHER INFORMATION:
david@irdta.eu
ACKNOWLEDGMENTS:
Universidade da Maia
Universidade do Porto
Universitat Rovira i Virgili
Institute for Research Development, Training and Advice – IRDTA, Brussels/London