International Conference:
Differential Equations for Data Science 2021 (DEDS2021)



  • Date: March 8 (Mon) - 10 (Wed), 2021


  • Place: Online (Zoom)


  • Registration: Registration link for Zoom meeting

  • After registering, you will receive a confirmation email containing information about joining the meeting. (Once you register via the link above, you can join all the sessions.) Note that the Zoom meeting will open 30 minutes before each session's start time.

    If the link does not work, copy and paste the entire URL into your internet browser: https://zoom.us/meeting/register/tJEkcuCqrjgtH9OHLOMO7TaVFlrod-8mLngq

  • Aim:

  • This conference is mainly devoted to new mathematical aspects of machine learning algorithms, big data analysis, and other topics in data science, viewed from the standpoint of differential equations. In recent years, several interesting connections between differential equations and data science have been found, attracting attention from researchers in differential equations. This conference gathers researchers in differential equations who are interested in data science, aiming to shed new light on the mathematical foundations of topics in machine learning and data science.



  • Keywords:

  • ODE, PDE, Delay DE, Neural ODE, Machine learning, Deep learning, Data science, Big data, Reservoir computing (RC), Physical RC, Graph Laplacian, Universal approximation theory, Edge of chaos, Echo state property, Graphon, Dynamical system, Singular value decomposition, Variational autoencoder



  • Speakers:

  • Hayato Chiba (Tohoku University, Japan)
    Yoshikazu Giga (The University of Tokyo, Japan)
    Lyudmila Grigoryeva (University of Konstanz, Germany)
    Boumediene Hamzi (Imperial College London, UK)
    Masato Kimura (Kanazawa University, Japan)
    Hiroshi Kokubu (Kyoto University, Japan)
    Yiping Lu (Stanford University, USA)
    Stefano Massaroli (The University of Tokyo, Japan)
    Kengo Nakai (Tokyo University of Marine Science and Technology, Japan)
    Kohei Nakajima (The University of Tokyo, Japan)
    Hirofumi Notsu (Kanazawa University, Japan)
    Sho Sonoda (RIKEN, Japan)
    Tomohiro Taniguchi (AIST, Japan)
    Matthew Thorpe (The University of Manchester, UK)
    Yves van Gennip (TU Delft, Netherlands)


  • Program: PDF


  • *All lectures are given by invited speakers.
    *Each lecture consists of a 45-minute talk and a 10-minute discussion (for overseas speakers) or a 25-minute talk and a 5-minute discussion (for domestic speakers).

    Monday, March 8
    16:55—17:00 Opening

    • Session 1: JST 17:00-20:10
      (=PST 00:00-03:10 =UTC 08:00-11:10 =CET 09:00-12:10)

    Chair: Hirofumi Notsu
    17:00—17:55 Matthew Thorpe :
    • Correcting the Bias in Laplacian Learning at Low Label Rates: From Laplace's to Poisson's Equation
    17:55—18:05 Break
    18:05—19:00 Lyudmila Grigoryeva :
    • Explainable and Reliable Learning of Dynamic Processes with Reservoir Computing
    19:00—19:10 Break
    19:10—19:40 Sho Sonoda :
    • Characterizing Deep Learning Solutions by Using Ridgelet Transform
    19:40—20:10 Kohei Nakajima :
    • Designing chaotic itinerancy


    Tuesday, March 9
    • Session 2: JST 13:00-15:05
      (=PST 20:00-22:05 =UTC 04:00-06:05 =CET 05:00-07:05)

    Chair: Hiroshi Kokubu
    13:00—13:55 Yiping Lu :
    • ODE As Infinite Depth Neural Network: Modeling, Optimization, and Inferencing
    13:55—14:05 Break
    14:05—14:35 Masato Kimura :
    • Universal approximation property for narrow ODENet and ResNet
    14:35—15:05 Yoshikazu Giga :
    • Spatially discrete total variation flow and its time discrete approximation


    Wednesday, March 10
    • Session 3: JST 08:00-10:10
      (=PST 15:00-17:10 =UTC 23:00-01:10 =CET 00:00-02:10)

    Chair: Masato Kimura
    08:00—08:30 Tomohiro Taniguchi :
    • Nonlinear dynamics in nanomagnet and its application to neuromorphic computing
    08:30—09:00 Stefano Massaroli :
    • Learning with Differential Equations: Current Status and Future Perspectives
    09:00—09:10 Break
    09:10—09:40 Kengo Nakai :
    • Dynamical system analysis of a data-driven model constructed by reservoir computing
    09:40—10:10 Hirofumi Notsu :
    • Twin vortex computer in fluid flow


    • Session 4: JST 17:00-20:10
      (=PST 00:00-03:10 =UTC 08:00-11:10 =CET 09:00-12:10)

    Chair: Hirofumi Notsu
    17:00—17:55 Yves van Gennip :
    • Variational models and gradient flows for graph clustering
    17:55—18:05 Break
    18:05—19:00 Boumediene Hamzi :
    • Machine learning and dynamical systems meet in reproducing kernel Hilbert spaces
    19:00—19:10 Break
    19:10—19:40 Hiroshi Kokubu :
    • Learning dynamics of the logistic maps by Reservoir Computing
    19:40—20:10 Hayato Chiba :
    • Generalized spectral theory and its applications
    20:10—20:15 Closing


  • Abstracts:

  • T. = Title, A. = Abstract.

    1. Matthew Thorpe (The University of Manchester, UK)
      T. Correcting the Bias in Laplacian Learning at Low Label Rates: From Laplace's to Poisson's Equation
      A. Laplacian learning is a semi-supervised method that finds missing labels by minimising a Dirichlet energy. It is well known that Laplacian learning is (asymptotically) ill-posed at low labelling rates. In this talk I will identify the bias of Laplacian learning and show how it can be corrected, leading to a significant improvement in performance. Correcting the bias leads one to a Poisson equation. This is joint work with Jeff Calder (University of Minnesota), Brendan Cook (University of Minnesota) and Dejan Slepcev (Carnegie Mellon University).
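
      As a concrete illustration (a minimal sketch, not taken from the talk): Laplacian learning assigns labels by solving the discrete Laplace equation on the unlabelled nodes, with the known labels as boundary data. The weight matrix, scalar label encoding, and variable names below are illustrative choices.

      ```python
      import numpy as np

      def laplacian_learning(W, labels, labelled_idx):
          """Harmonic extension: solve L u = 0 on the unlabelled nodes.

          W: symmetric (n, n) weight matrix of a connected graph;
          labels: label values on labelled_idx (scalar encoding here;
          multi-class problems would use one-hot label columns instead).
          """
          n = W.shape[0]
          L = np.diag(W.sum(axis=1)) - W          # unnormalised graph Laplacian
          unlab = np.setdiff1d(np.arange(n), labelled_idx)
          u = np.zeros(n)
          u[labelled_idx] = labels
          # Dirichlet block solve: L_uu u_u = -L_ul y_l
          L_uu = L[np.ix_(unlab, unlab)]
          L_ul = L[np.ix_(unlab, labelled_idx)]
          u[unlab] = np.linalg.solve(L_uu, -L_ul @ np.asarray(labels, float))
          return u
      ```

      Poisson learning, the correction discussed in the talk, roughly replaces these Dirichlet conditions by point sources at the labelled nodes, i.e. it solves a Poisson equation L u = f instead.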

    2. Lyudmila Grigoryeva (University of Konstanz, Germany)
      T. Explainable and Reliable Learning of Dynamic Processes with Reservoir Computing
      A. Many dynamical problems in engineering (including financial engineering), control theory, signal processing, time series analysis, and forecasting can be described using input/output (IO) systems. Whenever a true functional IO relation cannot be derived from first principles, specific families of state-space systems can be used as universal approximants in different setups. In this talk we focus on so-called Reservoir Computing (RC) systems, whose defining feature is ease of implementation: some of their components (usually the state map) are randomly generated. From a machine learning (ML) perspective, RC systems can be seen as recurrent neural networks with randomly generated, non-trainable weights and a simple-to-train readout layer (often a linear map). RC systems serve as efficient, randomized, online computational tools for dynamic processes and constitute an explainable and reliable ML paradigm. We will discuss some theoretical developments, connections with contributions in other fields, and details of applications of RC systems to data processing.
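
      To fix ideas (a generic echo state network sketch, not code from the talk): the state map is randomly generated and frozen, and only a linear readout is trained, here by ridge regression for one-step prediction of an illustrative scalar signal. All sizes and scalings below are arbitrary choices.

      ```python
      import numpy as np

      rng = np.random.default_rng(0)

      # Illustrative input signal: an orbit of the logistic map u -> 3.7 u (1 - u)
      u = np.empty(2000); u[0] = 0.3
      for t in range(1999):
          u[t + 1] = 3.7 * u[t] * (1.0 - u[t])

      # Randomly generated, non-trainable reservoir (the defining RC feature)
      N = 100
      W_in = rng.uniform(-0.5, 0.5, size=N)
      W = rng.normal(size=(N, N))
      W *= 0.9 / np.abs(np.linalg.eigvals(W)).max()   # scale spectral radius to 0.9

      # Drive the reservoir: x_{t+1} = tanh(W x_t + W_in u_t)
      X = np.zeros((len(u), N))
      for t in range(len(u) - 1):
          X[t + 1] = np.tanh(W @ X[t] + W_in * u[t])

      # Trainable part: linear readout fitted by ridge regression to predict
      # the next input from the current state (initial transient discarded)
      lam, T0 = 1e-6, 200
      Xs, y = X[T0:], u[T0:]
      w_out = np.linalg.solve(Xs.T @ Xs + lam * np.eye(N), Xs.T @ y)
      print("one-step RMSE:", np.sqrt(np.mean((Xs @ w_out - y) ** 2)))
      ```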

    3. Sho Sonoda (RIKEN, Japan)
      T. Characterizing Deep Learning Solutions by Using Ridgelet Transform
      A. Characterizing the typical solutions obtained by deep learning is an important open problem in machine learning theory. The speaker has been addressing this problem from the viewpoint of functional analysis by using the integral representation of neural networks. The integral representation is known to have a closed-form right inverse operator, called the ridgelet transform, which is related to both the Radon and the wavelet transforms. The speaker has recently shown with his collaborators that, for ridge regression by finite two-layer neural networks, the empirical risk minimizers are given by the ridgelet transform in the limit of over-parametrization (S-Ishikawa-Ikeda, AISTATS2021). In this talk, the speaker will introduce the ridgelet transform and present recent attempts to characterize deep learning solutions.
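
      For reference (standard definitions in one common normalization; conventions may differ from the talk): with input dimension m, activation η, and a suitable ψ, the integral representation and the ridgelet transform read

      ```latex
      % Integral representation of a two-layer network with activation \eta:
      S_\eta[\gamma](x) = \int_{\mathbb{R}^m \times \mathbb{R}}
        \gamma(a,b)\,\eta(a \cdot x - b)\,\mathrm{d}a\,\mathrm{d}b .
      % Ridgelet transform of f with respect to \psi:
      R_\psi[f](a,b) = \int_{\mathbb{R}^m}
        f(x)\,\overline{\psi(a \cdot x - b)}\,\mathrm{d}x .
      % Right-inverse property: S_\eta[R_\psi[f]] = c\, f for an admissible
      % pair (\psi, \eta), with a constant c depending on the pair.
      ```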

    4. Kohei Nakajima (The University of Tokyo, Japan)
      T. Designing chaotic itinerancy
      A. Reservoir computing (RC) provides opportunities to exploit complex nonlinear dynamics as a computational resource. Many RC implementations require the state of the reservoir to become a function of the input stream, a requirement often termed the echo state property; this setting excludes the bulk of the complex systems found in nature from being exploited for information processing. Here, we introduce a scheme that exploits the intrinsic high-dimensional chaos of the reservoir in order to design complex nonlinear dynamical systems. In this presentation, we demonstrate, through a number of experiments, that this scheme is capable of systematically designing a broad range of nonlinear dynamics, including chaotic itinerancy, which is beneficial not only for nonlinear dynamical systems but also for the artificial intelligence field, especially for roboticists who aim to create spontaneous and/or intelligent systems. This talk is based on the paper: Katsuma Inoue (Univ. Tokyo), Kohei Nakajima (Univ. Tokyo), Yasuo Kuniyoshi (Univ. Tokyo), "Designing spontaneous behavioral switching via chaotic itinerancy," Science Advances 6 (46), eabb3989, 2020.

    5. Yiping Lu (Stanford University, USA)
      T. ODE As Infinite Depth Neural Network: Modeling, Optimization, and Inferencing
      A. Neural networks have become state-of-the-art models in numerous machine learning tasks, and strong empirical performance is often achieved by deeper networks. My research starts from interpreting many effective networks as different numerical discretizations of (stochastic) differential equations. Differential equations play an important role in many disciplines of science and engineering. Based on this perspective, we are able to combine physics information with deep neural network systems by linking deep learning theory with control theory. In this part of the talk, I'll show how we can use physical prior knowledge to improve generalization and to discover physical laws. This part of the talk is based on the speaker's work published at ICML2018 and ICLR2019.

      At the same time, there exists a gap between this empirical success and its theoretical understanding: ResNets can be trained to almost zero loss with standard stochastic gradient descent, yet it is known that larger depth leads to an increasingly non-convex landscape even in the presence of residual connections. In this talk we consider an ODE as the infinite-depth limit of the neural network and give a mean-field analysis of training the infinite-depth network. With the ODE analysis, we exploit the structure of the optimization objective encountered with deep neural networks. This new point of view enables us to develop a fast NN-specific algorithm for adversarial training, called YOPO, which reduces the total number of full forward and backward propagations to only one for each group of adversary updates. This part of the talk is based on the speaker's work published at NeurIPS2019 and ICML2020.
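
      The network/discretization correspondence underlying the talk can be made concrete (a minimal sketch under generic choices, not the speaker's code): a residual block x_{k+1} = x_k + h f(x_k) is exactly one explicit Euler step for dx/dt = f(x), so depth plays the role of integration time.

      ```python
      import numpy as np

      rng = np.random.default_rng(1)

      # Toy vector field f(x) = tanh(A x + b), standing in for a residual block
      A = 0.5 * rng.normal(size=(4, 4))
      b = 0.1 * rng.normal(size=4)
      f = lambda x: np.tanh(A @ x + b)

      def resnet_forward(x, depth, h):
          # Stacked residual blocks = explicit Euler integration of dx/dt = f(x)
          for _ in range(depth):
              x = x + h * f(x)
          return x

      # Doubling the depth while halving the step keeps the total "time"
      # T = depth * h fixed; outputs converge to the time-T flow map of the ODE.
      x0 = rng.normal(size=4)
      for depth in (8, 16, 32, 64):
          print(depth, resnet_forward(x0, depth, 1.0 / depth))
      ```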

    6. Masato Kimura (Kanazawa University, Japan)
      T. Universal approximation property for narrow ODENet and ResNet
      A. We prove a universal approximation property (UAP) for a class of ODENet and a class of ResNet, which are used in many deep learning algorithms. The UAP can be stated as follows. Let n and m be the dimensions of the input and output data, and assume m ≤ n. Then we show that an ODENet of width n + m with any non-polynomial continuous activation function can approximate any continuous function on a compact subset of R^n. We also show that ResNet has the same property as the depth tends to infinity. Furthermore, we derive explicitly the gradient of a loss function with respect to a certain tuning variable, and use this to construct a learning algorithm for ODENet. To demonstrate the usefulness of this algorithm, we apply it to a regression problem, binary classification, and multinomial classification on MNIST. This is joint work with Y. Aizawa (Kanazawa University).

    7. Yoshikazu Giga (The University of Tokyo, Japan)
      T. Spatially discrete total variation flow and its time discrete approximation
      A. Total variation map flows are important in image analysis and data science. We consider a spatially discrete total variation map flow from a domain to a manifold embedded in a Euclidean space. We give a time-discretized numerical scheme for this flow using the exponential map of a Riemannian manifold, and we prove the convergence of this scheme with a convergence rate. Numerical experiments are presented for the cases where the target manifold is a sphere and a rotation group. This is joint work with K. Taguchi (ARISE analytics Inc.), K. Sakakibara (Okayama University of Science), and M. Uesaka (The University of Tokyo, Arithmer Inc.); the paper has been published in Numerische Mathematik (2020).
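
      For orientation (the standard scalar simplification; the talk treats the manifold-valued case): on a grid or graph, the spatially discrete total variation energy and its gradient flow, a differential inclusion because the energy is non-smooth, are

      ```latex
      % Scalar, spatially discrete total variation energy over neighbours i ~ j:
      E(u) = \sum_{i \sim j} |u_i - u_j|,
      % and its gradient flow (a subdifferential inclusion, since E is non-smooth):
      \frac{\mathrm{d}u}{\mathrm{d}t} \in -\partial E(u).
      % The talk concerns the manifold-valued analogue, where u maps into an
      % embedded manifold and the time stepping uses the exponential map.
      ```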

    8. Tomohiro Taniguchi (AIST, Japan)
      T. Nonlinear dynamics in nanomagnet and its application to neuromorphic computing
      A. Spintronics is a research field in condensed matter physics that studies applications of nanometer-scale ferromagnets to practical devices such as nonvolatile memory and magnetic sensors. These applications have utilized relatively simple dynamical states of the magnetization in a ferromagnet, such as switching between fixed points and limit-cycle oscillation. Recently, on the other hand, spintronics applications to neuromorphic computing have become an exciting topic, after human-voice recognition using a nanomagnet was reported. Note that neuromorphic applications require highly nonlinear dynamics to imitate activities in the human brain such as firing. However, such complex dynamics have not been studied in the field of spintronics. In this talk, we first give an introduction to spintronics and then report our recent progress on theoretical analyses of dynamical states in nanomagnets and their applications to physical reservoir computing. I will concentrate on fundamental aspects of spintronics because, if I understand correctly, attendees are not familiar with the research field. If you are interested, we could provide further discussion and/or communication.

    9. Stefano Massaroli (The University of Tokyo, Japan)
      T. Learning with Differential Equations: Current Status and Future Perspectives
      A. Continuous deep learning architectures have recently re-emerged as variants of Neural Differential Equations (Neural DEs). The infinite-depth approach offered by these models theoretically bridges the gap between deep learning and dynamical systems. While this setting paves the way to a whole new perspective on learning systems and opens up new frontiers in several application domains (e.g. control of physical systems), several challenges - both theoretical and practical - have to be overcome. Neural DEs also fall into the broader category of "implicit" learning models where, instead of fixing a static input-output map, we specify the conditions that we want the model's output to satisfy. In this talk, we explore the features of state-of-the-art continuous-depth approaches, discuss their connection to other implicit learning models, and present some of the most relevant open problems.
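
      In its simplest instance (a standard formulation, not specific to this talk), a neural ODE replaces the stack of layers by an initial value problem whose vector field is a trainable network:

      ```latex
      % Continuous-depth model: the hidden state z evolves under a trained field
      \dot{z}(t) = f_\theta\bigl(z(t), t\bigr), \qquad z(0) = x,
      \qquad \hat{y} = z(T).
      % Training adjusts \theta so that the time-T flow map x \mapsto z(T) fits
      % the data; "implicit" models likewise specify conditions the output must
      % satisfy rather than an explicit layer-by-layer map.
      ```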

    10. Kengo Nakai (Tokyo University of Marine Science and Technology, Japan)
      T. Dynamical system analysis of a data-driven model constructed by reservoir computing
      A. This study evaluates data-driven models from a dynamical systems perspective, in terms of quantities such as unstable fixed points, periodic orbits, Lyapunov exponents, and statistical values. We also confirm that manifold structures between stable and unstable manifolds can be reconstructed, even if the system has tangencies that break structural stability. These dynamical characteristics can be reconstructed much more precisely by a data-driven model than by computing directly from the training data. Using this idea, we predict the distribution of laminar lasting times of a particular macroscopic variable of a chaotic fluid flow, which cannot be calculated from a direct numerical simulation of the Navier--Stokes equation because of its high computational cost.
      This talk is based on M. U. Kobayashi, K. Nakai, Y. Saiki, and N. Tsutsumi, "Dynamical system analysis of a data-driven model constructed by reservoir computing" (arXiv preprint).
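
      As one example of the quantities mentioned above (a generic sketch, not the paper's method): the largest Lyapunov exponent of a learned one-dimensional map g can be estimated by averaging log|g'| along an orbit, with the derivative taken by finite differences.

      ```python
      import numpy as np

      def largest_lyapunov_1d(g, x0, n_steps=10000, n_discard=100, eps=1e-7):
          # Average of log|g'(x)| along an orbit of the smooth 1-D map g,
          # after discarding an initial transient; g' by central differences.
          x, total = x0, 0.0
          for i in range(n_steps):
              if i >= n_discard:
                  deriv = (g(x + eps) - g(x - eps)) / (2.0 * eps)
                  total += np.log(abs(deriv))
              x = g(x)
          return total / (n_steps - n_discard)

      # Sanity check: the logistic map at a = 4 has exponent log 2 ~ 0.693
      print(largest_lyapunov_1d(lambda x: 4.0 * x * (1.0 - x), 0.3))
      ```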

    11. Hirofumi Notsu (Kanazawa University, Japan)
      T. Twin vortex computer in fluid flow
      A. We consider the well-known fluid flow past a circular cylinder. Diverse types of vortices, such as the Karman vortex, appear in this flow as the Reynolds number increases. In this study, we reveal that these vortices can be employed to carry out certain types of computation. The results of a number of numerical experiments show that optimal computational performance is achieved near the critical Reynolds number, where the flow exhibits a twin vortex before the onset of the Karman vortex shedding associated with a Hopf bifurcation. We find that, as the Reynolds number increases toward the bifurcation point, the sensitivity of the long-diameter motion of the twin vortex to the input stream also increases, suggesting how information is processed within the system. Our findings pave a novel path toward understanding the relationship between the physical properties of fluid dynamics and its computational capability. This is joint work with K. Goto (Kanazawa University) and K. Nakajima (The University of Tokyo).

    12. Yves van Gennip (TU Delft, Netherlands)
      T. Variational models and gradient flows for graph clustering
      A. Discrete graph-based variants of the Allen--Cahn and total variation variational models have proven to be successful tools for clustering and classification on graphs. In this talk we will study these models and the gradient flows that are derived from them. We will see deep connections between the various discrete gradient flows as well as between the discrete gradient flows and their continuum relatives.
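
      For concreteness (a form standard in this literature; details may differ from the talk): with graph Laplacian L and the double-well potential W(s) = (1/4)(s^2 - 1)^2, the graph Allen--Cahn flow is the gradient flow of a graph Ginzburg--Landau energy, driving node values toward a ±1 two-cluster labelling:

      ```latex
      % Graph Ginzburg--Landau energy on node values u \in \mathbb{R}^n:
      E_\varepsilon(u) = \tfrac{1}{2}\, u^{\top} L u
        + \frac{1}{\varepsilon} \sum_{i} W(u_i),
      \qquad W(s) = \tfrac{1}{4}(s^2 - 1)^2,
      % whose gradient flow is the graph Allen--Cahn equation
      \frac{\mathrm{d}u}{\mathrm{d}t} = -L u - \frac{1}{\varepsilon}\, W'(u).
      ```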

    13. Boumediene Hamzi (Imperial College London, UK)
      T. Machine learning and dynamical systems meet in reproducing kernel Hilbert spaces
      A. Since its inception in the 19th century through the efforts of Poincaré and Lyapunov, the theory of dynamical systems has addressed the qualitative behaviour of dynamical systems as understood from models. From this perspective, the modeling of dynamical processes in applications requires a detailed understanding of the processes to be analyzed. This deep understanding leads to a model, which is an approximation of the observed reality and is often expressed by a system of ordinary/partial, underdetermined (control), deterministic/stochastic differential or difference equations. While models are very precise for many processes, for some of the most challenging applications of dynamical systems (such as climate dynamics, brain dynamics, biological systems, or the financial markets) the development of such models is notably difficult. On the other hand, the field of machine learning is concerned with algorithms designed to accomplish a certain task, whose performance improves with the input of more data. Applications of machine learning methods include computer vision, stock market analysis, speech recognition, recommender systems, and sentiment analysis in social media. The machine learning approach is invaluable in settings where no explicit model is formulated but measurement data are available. This is frequently the case in many systems of interest, and the development of data-driven technologies is becoming increasingly important in many applications.

      The intersection of the fields of dynamical systems and machine learning is largely unexplored, and the objective of this talk is to show that working in reproducing kernel Hilbert spaces offers tools for a data-based theory of nonlinear dynamical systems. We introduce a data-based approach to estimating key quantities which arise in the study of nonlinear autonomous, control, and random dynamical systems. Our approach hinges on the observation that much of the existing linear theory may be readily extended to nonlinear systems - with a reasonable expectation of success - once the nonlinear system has been mapped into a high- or infinite-dimensional reproducing kernel Hilbert space. In particular, we develop computable, non-parametric estimators approximating controllability and observability energies for nonlinear systems, and we apply this approach to the problem of model reduction of nonlinear control systems. It is also shown that the controllability energy estimator provides a key means for approximating the invariant measure of an ergodic, stochastically forced nonlinear system. We also show how kernel methods can be used to detect critical transitions for some multiscale dynamical systems, and we use the method of kernel flows to predict some chaotic dynamical systems. Finally, we show how kernel methods can be used to approximate center manifolds, propose a data-based version of the center manifold theorem, and construct Lyapunov functions for nonlinear ODEs. This is joint work with Jake Bouvrie (MIT, USA), Peter Giesl (University of Sussex, UK), Christian Kuehn (TUM, Munich/Germany), Romit Malik (ANNL), Sameh Mohamed (SUTD, Singapore), Houman Owhadi (Caltech), Martin Rasmussen (Imperial College London), Kevin Webster (Imperial College London), and Bernhard Haasdonk, Gabriele Santin and Dominik Wittwar (University of Stuttgart).
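
      A minimal illustration of the RKHS viewpoint (a generic kernel ridge regression sketch, not the speaker's estimators): given one observed trajectory of an unknown map x_{t+1} = F(x_t), the dynamics are estimated in an RKHS as F_hat(x) = sum_i alpha_i k(x, x_i). The kernel, bandwidth, and test map below are illustrative choices.

      ```python
      import numpy as np

      def gaussian_kernel(X, Y, sigma=0.5):
          # k(x, y) = exp(-|x - y|^2 / (2 sigma^2)), rows of X against rows of Y
          d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
          return np.exp(-d2 / (2.0 * sigma ** 2))

      # One observed trajectory of an (unknown) map, here the logistic map
      x = np.empty(300); x[0] = 0.3
      for t in range(299):
          x[t + 1] = 3.7 * x[t] * (1.0 - x[t])
      X, Y = x[:-1, None], x[1:, None]            # training pairs (x_t, x_{t+1})

      # Kernel ridge regression: alpha = (K + lam I)^{-1} Y
      lam = 1e-6
      K = gaussian_kernel(X, X)
      alpha = np.linalg.solve(K + lam * np.eye(len(X)), Y)

      F_hat = lambda xq: (gaussian_kernel(np.atleast_2d(xq), X) @ alpha).item()
      print(F_hat(0.5), 3.7 * 0.5 * (1.0 - 0.5))  # estimate vs. true value
      ```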

    14. Hiroshi Kokubu (Kyoto University, Japan)
      T. Learning dynamics of the logistic maps by Reservoir Computing
      A. We study how well Reservoir Computing can learn the dynamics of logistic maps on the interval. Our results, summarized below, are obtained mainly by numerical computation for the logistic map at a = 3.7 (which is merely a sample choice):

      (1) In our setting of Reservoir Computing, the best learning results are obtained with connection matrices of the Reservoir whose spectral radius is much larger than unity, contrary to the usual assumption.
      (2) In spite of the standard belief that good learning needs a large number of nodes in the Reservoir, we found a Reservoir with only three nodes that can learn the dynamics of the logistic map quite well.
      (3) We obtained strong numerical evidence that the dynamics of the Reservoir, after learning, is topologically (perhaps even smoothly) conjugate to the original dynamics of the logistic map. This suggests that successful Reservoir Computing of dynamics may realize, in the Reservoir space, a dynamics that is qualitatively the same as the original.

      In addition, we shall propose, from a dynamical systems point of view, a mechanism for the learning of dynamics by Reservoir Computing, along with some supporting theoretical results. Based on this mechanism, we arrive at a feature of Reservoir Computing of dynamics that is unique among the various Machine Learning methods: a successful learning could provide information on the dynamics beyond the data used for the learning. We shall discuss this in more detail in the talk.

      This is joint work with Masato Hara (Department of Mathematics, Kyoto University).

    15. Hayato Chiba (Tohoku University, Japan)
      T. Generalized spectral theory and its applications
      A. In this talk, I present a generalized spectral theory of linear operators on an infinite-dimensional vector space and its applications to synchronization phenomena, neural networks, and symbolic dynamics.


  • Support:

  • RIMS, Kyoto University Link
    MIRS, Kanazawa University Link
    JST, CREST, JPMJCR2014 Link
    JSPS KAKENHI, JP18H01135, JP18H03671

  • Organizers:

  • Hirofumi Notsu (Kanazawa University, Chair) Link
    Yoshikazu Giga (The University of Tokyo) Link1 Link2
    Masato Kimura (Kanazawa University) Link1 Link2
    Hiroshi Kokubu (Kyoto University) Link
    Sho Sonoda (RIKEN) Link
    Tomohiro Taniguchi (AIST) Link