Summer School & Workshop Wisla 23

The Summer School & Workshop Wisla 23 will bring together students, researchers, and practitioners from around the world to share groundbreaking insights in AI, mathematics, and computer science. The event will take place online from August 21 to September 1, 2023, and will combine lectures by experts in the field with talks by participants. This year's theme is Mapping the Interdisciplinary Horizons of AI: Safety, Functional Programming, Information Geometry, and Beyond.


  • Scott Aaronson (The University of Texas at Austin, USA)
    AI Safety: Leaning Into Uninterpretability
    I'll share some thoughts about AI safety, shaped by a year's leave at OpenAI to work on the intersection of AI safety and theoretical computer science. I'll discuss what I've worked on, including a scheme for watermarking the outputs of Large Language Models such as GPT, as well as proposals such as cryptographic backdoors and a theory of AI acceleration risk.

  • Seth Baum (Global Catastrophic Risk Institute, USA, and University of Cambridge, UK)
    Social and Philosophical Dimensions of AI Alignment
    Successful AI alignment requires three steps: (1) concept(s) for what, if anything, to align AI to; (2) design(s) for how to implement the concept(s) in one or more AI systems; and (3) the usage, by human developers of AI systems, of these design(s) instead of other, less suitable design(s). Step (1) is mainly a matter of moral philosophy; step (2) is mainly a matter of computer science; and step (3) is mainly a matter of social science and politics. My lecture(s) will cover steps (1) and (3).

  • Olle Häggström (Chalmers University of Technology, Sweden)
    AI risk and AI alignment
    The planetary dominance over other species that humanity has attained has very little to do with muscular strength and physical endurance: it is all about intelligence. This makes the present moment in history, when we are automating intelligence and handing over this crucial skill to machines, the most important ever. The research area that has become known as AI alignment deals with how to make sure that the first superintelligent machines have goals and values that are sufficiently aligned with ours and that sufficiently prioritize human flourishing. This needs to succeed, because otherwise we face existential catastrophe. In these lectures I will outline key challenges in AI alignment, what is being done to solve them, and how all this relates to the breakneck speed at which AI is presently advancing.

  • Anders Sandberg (Future of Humanity Institute, University of Oxford, UK)
  • Patrik Jansson (Chalmers University of Technology, Sweden)
    Domain-Specific Languages of Mathematics
    The main idea behind this minicourse is to encourage students to approach mathematical domains from a functional programming perspective. We will learn about the language Haskell; identify the main functions and types involved; introduce calculational proofs; pay attention to the syntax of mathematical expressions; and, finally, organize the resulting functions and types into domain-specific languages.
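    To give a flavour of this style (the course itself uses Haskell; the sketch below is in Python, and the names `Expr`, `eval_at`, and `deriv` are illustrative, not course material), one represents mathematical expressions as a data type and then gives that syntax several interpretations, such as evaluation and symbolic differentiation:

    ```python
    # A minimal sketch of the DSL idea: expressions as data, with two
    # interpretations (evaluation at a point, and symbolic differentiation).
    from dataclasses import dataclass

    class Expr: pass

    @dataclass
    class Const(Expr):
        value: float

    @dataclass
    class Var(Expr):          # the single variable x
        pass

    @dataclass
    class Add(Expr):
        left: Expr
        right: Expr

    @dataclass
    class Mul(Expr):
        left: Expr
        right: Expr

    def eval_at(e: Expr, x: float) -> float:
        """One interpretation: evaluate the expression at the point x."""
        if isinstance(e, Const): return e.value
        if isinstance(e, Var):   return x
        if isinstance(e, Add):   return eval_at(e.left, x) + eval_at(e.right, x)
        if isinstance(e, Mul):   return eval_at(e.left, x) * eval_at(e.right, x)
        raise TypeError(e)

    def deriv(e: Expr) -> Expr:
        """Another interpretation: differentiation as calculation on syntax."""
        if isinstance(e, Const): return Const(0.0)
        if isinstance(e, Var):   return Const(1.0)
        if isinstance(e, Add):   return Add(deriv(e.left), deriv(e.right))
        if isinstance(e, Mul):   # product rule
            return Add(Mul(deriv(e.left), e.right), Mul(e.left, deriv(e.right)))
        raise TypeError(e)

    # p(x) = x*x + 3x, so p(2) = 10 and p'(x) = 2x + 3, p'(2) = 7
    p = Add(Mul(Var(), Var()), Mul(Const(3.0), Var()))
    print(eval_at(p, 2.0))         # 10.0
    print(eval_at(deriv(p), 2.0))  # 7.0
    ```

    The same syntax tree could equally be pretty-printed or type-checked, which is the point of separating a domain's language from its interpretations.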
  • Frank Nielsen (Sony Computer Science Laboratories, Japan)
    Introduction to Information Geometry, Recent Advances, and Applications
    Information geometry primarily studies the geometric structures, dissimilarities, and statistical invariance of a family of probability distributions called the statistical model. A regular parametric statistical model can be geometrically handled as a Riemannian manifold equipped with the Fisher metric tensor which induces the Fisher-Rao geodesic distance. This Riemannian structure on the Fisher-Rao manifold was later generalized by a dual structure based on pairs of torsion-free affine connections coupled to the Fisher metric: The α-geometry. This dual structure casts light on the close interaction between statistical estimators in inference (maximum likelihood) and parametric statistical models (exponential families obtained from the principle of maximum entropy), and brings into play a generalized Pythagorean theorem useful to prove uniqueness of information projections. We will illustrate applications of information geometry in statistics, information theory, computer vision and pattern recognition, and learning of neural networks. The second part of the minicourse will present recent advances in information geometry and its applications.
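    For orientation, the central objects mentioned above can be written out in standard notation (these are the textbook definitions, not specific to the lectures). For a regular parametric model p(x; θ):

    ```latex
    % Fisher information metric on a regular parametric model p(x;\theta):
    g_{ij}(\theta) = \mathbb{E}_{p(x;\theta)}\!\left[
      \partial_i \log p(x;\theta)\,\partial_j \log p(x;\theta) \right],
    \qquad \partial_i := \frac{\partial}{\partial \theta^i}.

    % Amari's \alpha-connections, which form the dual structure coupled to g
    % (\alpha = \pm 1 give the exponential/mixture pair of connections):
    \Gamma^{(\alpha)}_{ij,k}(\theta) = \mathbb{E}_{p(x;\theta)}\!\left[
      \left( \partial_i \partial_j \log p
      + \tfrac{1-\alpha}{2}\,\partial_i \log p\,\partial_j \log p \right)
      \partial_k \log p \right].
    ```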
  • Dmitri Alekseevsky (University of Hradec Králové, Czech Republic)
    Neurogeometry of Vision and Information Geometry of Homogeneous Convex Cones
    These lectures will provide a comprehensive overview of information processing in vision. We will start with information processing in early vision, through static and geometric models of the primary visual cortex. Then, we will explore the dynamics of information processing in vision. Finally, we will discuss the Chentsov-Amari information geometry and homogeneous convex cones.

  • Frédéric Barbaresco (Thales Land and Air Systems, France)
    Symplectic Foliation Structures of Information Geometry for Lie Groups Machine Learning
    We present a new symplectic model of Information Geometry based on Jean-Marie Souriau's Lie Groups Thermodynamics. Souriau's model was initially described in chapter IV, "Statistical Mechanics", of his book "Structure of Dynamical Systems", published in 1969. This model gives a purely geometric characterization of Entropy, which appears as an invariant Casimir function in coadjoint representation, characterized by Poisson cohomology. Souriau proved that we can associate a symplectic manifold to the coadjoint orbits of a Lie group by the KKS 2-form (Kirillov-Kostant-Souriau 2-form) in the affine case (affine model of coadjoint operator equivariance via Souriau's cocycle), which we have identified with the Koszul-Fisher metric from Information Geometry. Souriau established the generalized Gibbs density covariant under the action of the Lie group. The dual space of the Lie algebra foliates into coadjoint orbits that are also the Entropy level sets. This can be interpreted in the framework of Thermodynamics: dynamics on these symplectic leaves are non-dissipative, whereas transversal dynamics, given by the Poisson transverse structure, are dissipative. We will finally introduce the Gaussian distribution on the space of Symmetric Positive Definite (SPD) matrices through Souriau's covariant Gibbs density, by considering this space as the pure imaginary axis of the homogeneous Siegel upper half-space, on which Sp(2n,R)/U(n) acts transitively. We will also consider the Gibbs density for the Siegel disk, on which SU(n,n)/S(U(n)xU(n)) acts transitively. The Gauss density of SPD matrices is then computed through Souriau's moment map and coadjoint orbits. Souriau's Lie Groups Thermodynamics model will be further explored in the European COST network CaLISTA and the European HORIZON-MSCA project CaLIGOLA.
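    As a sketch, in one common sign convention (conventions vary across the literature), Souriau's generalized Gibbs density on a manifold M with moment map U and "geometric temperature" β in the Lie algebra reads:

    ```latex
    % Souriau's covariant Gibbs density and Massieu potential (one common convention):
    p_\beta(\xi) = \exp\!\big(\Phi(\beta) - \langle U(\xi), \beta \rangle\big),
    \qquad
    \Phi(\beta) = -\log \int_M e^{-\langle U(\xi), \beta \rangle}\, d\lambda(\xi).

    % Mean ("heat") Q and Entropy S, with S the Legendre transform of \Phi:
    Q = \frac{\partial \Phi}{\partial \beta},
    \qquad
    S(Q) = \langle Q, \beta \rangle - \Phi(\beta).
    ```

    The level sets of S are then the coadjoint orbits, which is the foliation described above.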
  • Noémie Combe (Max Planck Institute for Mathematics in the Sciences, Germany)
    Exploring Information Geometry: Recent Advances and Connections to Topological Field Theory
    With the rapid progress of machine learning, artificial intelligence, and data science, information geometry has become an important domain of research. We aim to introduce the topic of information geometry and to present some recent progress in this domain. Both differential-geometric and algebraic aspects will be developed. The new tight relation between information geometry and topological field theory will be discussed.

MATERIALS (available only for admitted participants)


The school will provide participants with an opportunity to interact with their colleagues and with well-known researchers in the field. Each participant may give a talk on recent research and receive independent, constructive feedback on their current work and future research directions. Materials from the school and workshop will be published by Springer Nature. All contributions are subject to peer review.

Org. Committee
J. de Lucas, M. Roop, J. Szmit, R. Zawadzki, M. Ulan, M. Wojnowski