
Asaka Quartet

Location
The Leonard Wolfson Auditorium
Event price
£15 | £10 Wolfson Members | Students Free
Booking Required
Not Required
Accessibility
There is provision for wheelchair users.
Wolfson College Music Society presents Asaka Quartet



Programme -

Schubert - Quartettsatz

Beethoven - Opus 18, No. 2 in G major

Danish String Quartet - Unst Boat Song

Brahms - String Quartet No. 1 in C minor



5pm Sunday 5 May 2024



Leonard Wolfson Auditorium, Wolfson College, Linton Road, Oxford, OX2 6UD



£15 | £10 Wolfson Members | Students Free

Tickets available on the door of the event (cash only).


CAM celebrates Fauré Anniversary

Location
The Leonard Wolfson Auditorium
Event price
Free Admission
Booking Required
Recommended
Accessibility
There is provision for wheelchair users.
Celebrating Australian Music (CAM) celebrates Fauré Anniversary



6.30-7pm Talk by Dr Roy Howat, ‘Rediscovering Fauré’

7.30-8.30pm Recital of music by Fauré and Australian composers



Programme –

Wendy Hiscocks – Explorer goes East

Peter Sculthorpe – Mountains

Peggy Glanville-Hicks – Prelude to a Pensive Pupil

Gabriel Fauré – Music for cello & piano, Madrigal & Pavane



Saturday 11 May 2024



The Leonard Wolfson Auditorium, Wolfson College, Linton Road, Oxford, OX2 6UD.



Free Admission. Seats can be reserved by visiting www.celebratingaustralianmusic.com


Amref Health Africa Concert

Location
The Leonard Wolfson Auditorium
Speakers
Piano recital performed by Nima Farahmand Bafi
Event price
Free Admission. Donations for Amref Health Africa (cash only)
Booking Required
Not Required
Accessibility
There is provision for wheelchair users.
Wolfson College Music Society presents the Amref Health Africa Concert, performed by Nima Farahmand Bafi



Programme -

Nima Farahmand Bafi - Persian fantasy

Leoš Janáček - Piano Sonata 1.X.1905

Nima Farahmand Bafi - Jin – Jiyan – Azadi (Woman – Life – Freedom)

Frédéric Chopin - Polonaise Op. 53

Nima Farahmand Bafi - Poems without words

Ernest Bloch - Nigun from Baal Shem (arr. Nima Farahmand Bafi)

Nima Farahmand Bafi - Concert paraphrase on "Chahargah" by H. Kassai



5pm Sunday 21 April 2024



Leonard Wolfson Auditorium, Wolfson College, Linton Road, Oxford, OX2 6UD



Free Admission. Donations welcome (cash only).

All donations from the concert will go to Amref Health Africa.

Wolfson Family Society Easter Egg Painting and Egg Hunt

Location
The Buttery
Speakers
N/A
Event price
N/A
Booking Required
Recommended
Please join us for a fun afternoon of Easter egg activities at the Buttery from 2 - 4 pm. We will provide light snacks. Eggs and painting supplies will also be provided.



See you!


Nobuo Okawa - Closing Exhibition Event

Location
The Florey Room
Event price
Free
Booking Required
Not Required
Accessibility
There is provision for wheelchair users.
Meet the artist and join us for a final viewing of Nobuo Okawa's exhibition, From Dark to Light, An Exhibition of Mezzotint Prints on Friday 15 March 2024 from 2-6pm.



Tea and coffee will be available. A last chance to view and purchase a work from this wonderful exhibition.

Resurrecting Recurrent Neural Networks for Language Modelling

Location
The Buttery
Speakers
Dr. Razvan Pascanu (Google DeepMind)
Booking Required
Not Required
Accessibility
There is provision for wheelchair users.
Bio:

I'm currently a Research Scientist at DeepMind. I grew up in Romania and studied computer science and electrical engineering as an undergraduate in Germany. I got my MSc from Jacobs University, Bremen in 2009, and I hold a PhD from the University of Montreal (2014), completed under the supervision of Prof. Yoshua Bengio. I was involved in developing Theano and helped write some of the deep learning tutorials for Theano. I've published several papers on topics surrounding deep learning and deep reinforcement learning (see my scholar page). I'm one of the organizers of EEML (www.eeml.eu) and of AIRomania. As part of the AIRomania community, I have organized the RomanianAIDays since 2020 and helped build a course on AI aimed at high school students.



Abstract:

In this talk I will focus on State Space Models (SSMs), a subclass of Recurrent Neural Networks (RNNs) that has recently gained attention through works like Mamba, obtaining strong performance against transformer baselines. I will start by explaining how SSMs can be viewed as just a particular parametrization of RNNs, and what the crucial differences are compared to previous recurrent architectures that led to these results. My goal is to demystify the relatively complex parametrization of the architecture and identify which elements are needed for the model to perform well. In this process I will introduce the Linear Recurrent Unit (LRU), a simplified linear layer inspired by existing SSM layers. In the second part of the talk, I will focus on language modelling and the block structure in which such layers tend to be embedded. I will argue that beyond the recurrent layer itself, the block structure borrowed from transformers plays a crucial role in the recent successes of this architecture, and present results at scale for well-performing hybrid recurrent architectures compared to strong transformer baselines. I will close the talk with a few open questions and thoughts on the importance of recurrence in modern deep learning models.
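The core idea of an LRU-style layer, a linear recurrence with a stable complex diagonal state matrix, can be sketched in a few lines. This is a minimal NumPy illustration only, not the speaker's implementation: the function and parameter names (`lru_scan`, `nu`, `theta`) are assumptions, and it uses a plain sequential loop where real SSM layers use a parallel scan.

```python
import numpy as np

def lru_scan(x, nu, theta, B, C):
    """Run a diagonal linear recurrence over a sequence:

        h_t = lam * h_{t-1} + B x_t,   y_t = Re(C h_t)

    with lam = exp(-exp(nu) + i*theta), so |lam| < 1 and the
    recurrence is stable by construction (an LRU-style parametrization).
    """
    lam = np.exp(-np.exp(nu) + 1j * theta)  # stable complex diagonal
    h = np.zeros(B.shape[0], dtype=complex)
    ys = []
    for x_t in x:                           # sequential scan for clarity
        h = lam * h + B @ x_t               # linear state update (no nonlinearity)
        ys.append((C @ h).real)             # real-valued readout
    return np.stack(ys)

# Tiny demo: length-5 sequence, 1-D input, 2-D hidden state
rng = np.random.default_rng(0)
x = rng.normal(size=(5, 1))
nu = np.zeros(2)                 # controls decay magnitude
theta = np.array([0.0, 1.0])     # controls rotation frequency
B = rng.normal(size=(2, 1)).astype(complex)
C = rng.normal(size=(1, 2)).astype(complex)
y = lru_scan(x, nu, theta, B, C)
print(y.shape)  # (5, 1)
```

Because the recurrence is linear, the loop above can in principle be replaced by an associative parallel scan, which is what makes these layers fast to train on long sequences.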

Nikolay Sarkisyan

Marie Curie Post Doctoral Fellow
nikolay.sarkisyan@mod-langs.ox.ac.uk

I am currently a Marie Curie Postdoctoral Research Fellow, delving into a project on historical revolutionary museums in Petrograd-Leningrad during the pivotal years of 1917-1941. This project, funded under the Horizon 2020 European Commission Grant Agreement number 10102852, allows me to explore the intricate transformations of Russian revolutionary traditions in museum spaces, particularly in the course of the 1920s and 1930s.

My academic journey commenced at the University of Oslo, where I completed my PhD with a focus on the influence of the discourse of tolerance on contemporary Russian governance. This work offered profound insights into the intersection of politics, society, and history in post-Soviet Russia. My time at the University of Oslo was not only about academic growth but also about enhancing my teaching skills. I contributed to the university's curriculum by teaching courses on contemporary Russian politics and conducting Russian language classes.

In 2014-17, before embarking on my doctoral studies, I worked at two prominent (formerly) historical-revolutionary museums: the Museum of Political History of Russia, formerly known as the Museum of the Revolution, and the Smol'ny Museum, previously the Museum of Lenin in Leningrad. My experiences in these museums were not just professionally enriching but also served as a source of inspiration for my current research project. I hold a Master's degree in Sociology from the European University at St Petersburg and a Specialist degree in History from St Petersburg State University.

My research interests are diverse and ever-evolving. Having initially focused on political science for my thesis, I transitioned (back) to history, examining the roots and evolution of Russia's revolutionary tradition. This research is not just a historical inquiry but also engages with broader theoretical questions about the nature of Soviet power, Stalinism, and the 1917 Revolution.
In Michaelmas Term 2023, I explored new academic territories, teaching a course on post-Soviet red-brown literature. This endeavor reflects my growing interest in comparative literature and the sociology of literary work, marking a potentially new phase in my academic pursuits.


Opponent-Shaping and Interference in General-Sum Games

Location
The Levett Room
Speakers
Jakob Foerster
Booking Required
Not Required
Accessibility
There is provision for wheelchair users.
Bio:

Jakob Foerster started as an Associate Professor at the Department of Engineering Science at the University of Oxford in the fall of 2021. During his PhD at Oxford he helped bring deep multi-agent reinforcement learning to the forefront of AI research, and he interned at Google Brain, OpenAI, and DeepMind. After his PhD he worked as a research scientist at Facebook AI Research in California, where he continued doing foundational work. He was the lead organizer of the first Emergent Communication workshop at NeurIPS in 2017, which he has helped organize ever since, and he was awarded a prestigious CIFAR AI chair in 2019.





Abstract:

In general-sum games, the interaction of self-interested learning agents commonly leads to collectively worst-case outcomes, such as defect-defect in the iterated prisoner's dilemma (IPD). To overcome this, some methods, such as Learning with Opponent-Learning Awareness (LOLA), shape their opponents' learning process. However, these methods are myopic since only a small number of steps can be anticipated, are asymmetric since they treat other agents as naive learners, and require the use of higher-order derivatives, which are calculated through white-box access to an opponent's differentiable learning algorithm. In this talk I will first introduce Model-Free Opponent Shaping (M-FOS), which overcomes all of these limitations. M-FOS learns in a meta-game in which each meta-step is an episode of the underlying ("inner") game. The meta-state consists of the inner policies, and the meta-policy produces a new inner policy to be used in the next episode. M-FOS then uses generic model-free optimisation methods to learn meta-policies that accomplish long-horizon opponent shaping. I will finish off the talk with our recent results for adversarial (or cooperative) cheap-talk: How can agents interfere with (or support) the learning process of other agents without being able to act in the environment?
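The meta-game structure described in the abstract can be sketched in a few lines. This is a hedged toy illustration, not the M-FOS algorithm itself: the payoff table is a standard prisoner's-dilemma-style matrix chosen for illustration, the opponent is a naive gradient learner, and `meta_policy` is a hand-coded stand-in for the meta-policy that M-FOS would actually learn with model-free RL over many meta-episodes.

```python
import numpy as np

# One-shot prisoner's-dilemma-style payoffs indexed by (my_action, their_action),
# actions: 0 = defect, 1 = cooperate. Illustrative values only.
PAYOFF = np.array([[1.0, 5.0],
                   [0.0, 3.0]])

def expected_payoff(p, q):
    """Expected payoff for a player who cooperates w.p. p vs one who cooperates w.p. q."""
    probs = np.outer([1 - p, p], [1 - q, q])
    return float((probs * PAYOFF).sum())

def naive_learner_step(q, p, lr=0.5):
    """The opponent is a naive learner: one gradient-ascent step on its own
    payoff (finite differences), ignoring that we may adapt in response."""
    eps = 1e-4
    grad = (expected_payoff(q + eps, p) - expected_payoff(q - eps, p)) / (2 * eps)
    return float(np.clip(q + lr * grad, 0.0, 1.0))

def meta_policy(p, q):
    """Hypothetical hand-coded meta-policy: maps the meta-state (the current
    inner policies) to the next inner policy. M-FOS would learn this mapping."""
    return q  # mirror the opponent's policy

# Meta-game loop: each meta-step is one (expected) episode of the inner game.
p, q = 0.5, 0.5                        # meta-state: the inner policies
for episode in range(50):
    payoff = expected_payoff(p, q)     # inner episode, in expectation
    q = naive_learner_step(q, p)       # opponent updates as a naive learner
    p = meta_policy(p, q)              # meta-policy emits the next inner policy
```

The point of the sketch is the loop structure: the meta-policy acts once per inner episode on a meta-state made of policies, which is what lets shaping operate over long horizons without higher-order derivatives or white-box access to the opponent.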