The Scalable Parallel Computing Lab's SPCL_Bcast seminar continues with Petar Veličković of Google DeepMind and the University of Cambridge, presenting on Capturing Computation with Algorithmic Alignment. Everyone is welcome to attend (over Zoom)!
When: Thursday, 21st March, 6 PM CET
Where: Zoom
Abstract:
What makes a neural network better, or worse, at fitting certain tasks? This question is arguably at the heart of neural network architecture design, yet it is remarkably hard to answer rigorously. Over the past few years, there has been a plethora of attempts, using various facets of advanced mathematics, to answer this question under various assumptions. One of the most successful directions -- algorithmic alignment -- assumes that the target function, and a mechanism for computing it, are completely well-defined and known (i.e., the target is to learn to execute an algorithm). In this setting, fitting a task is equated to capturing the computations of an algorithm, inviting analyses from diverse branches of mathematics and computer science. I will present some of my personal favourite works in algorithmic alignment, along with their implications for building intelligent systems of the future.
Biography:
Petar is a Staff Research Scientist at Google DeepMind, an Affiliated Lecturer at the University of Cambridge, and an Associate of Clare Hall, Cambridge. He holds a PhD in Computer Science from the University of Cambridge (Trinity College), obtained under the supervision of Pietro Liò. His research concerns geometric deep learning: devising neural network architectures that respect the invariances and symmetries in data (a topic he has co-written a proto-book about). For his contributions, he is recognized as an ELLIS Scholar in the Geometric Deep Learning Program. In particular, he focuses on graph representation learning and its applications in algorithmic reasoning (featured in VentureBeat). He is the first author of Graph Attention Networks (a popular convolutional layer for graphs) and Deep Graph Infomax (a popular self-supervised learning pipeline for graphs, featured in ZDNet). His research has been used to substantially improve travel-time predictions in Google Maps (featured in CNBC, Engadget, VentureBeat, CNET, The Verge, and ZDNet) and to guide the intuition of mathematicians towards new top-tier theorems and conjectures (featured in Nature, Science, Quanta Magazine, New Scientist, The Independent, Sky News, The Sunday Times, la Repubblica, and The Conversation).
More details & future talks