Student research projects at Scientific Computing Group

Contact: Prof. Ivan Oseledets, jointly with the group members.

Below is the list of projects. Some of them can later evolve into master's theses. They can also be taken by a group of 2-3 students.


The main prerequisite is the motivation to work with us: our group is big, and we are always quite helpful, even if you are not yet an expert in a particular topic. However, it is a good idea to know linear algebra (or at least to be learning it). For machine-learning-related projects, machine learning experience is helpful; for PDE-related ones, it is good to have basic knowledge of finite differences and/or finite elements. Many projects are related to tensor train and quantized tensor train decompositions: this is our “special feature”, which you will surely need to learn. Also, all the projects involve programming, and Python is the preferred language.


The tasks are, of course, specific to the particular projects, but the pipeline is similar.

  1. Do a literature survey. Google Scholar will help you. Read all the papers you can find on the topic, not only the particular ones that the advisor has given you. Follow the references in the papers, find keywords, identify key ideas.

  2. Find appropriate software and a baseline algorithm. Typically, you are not the first one solving a particular problem: look for similar projects, see what their approach was, and try to identify possible improvements.

  3. Start writing a short summary of what you are doing in \(\LaTeX\) (something like a journal).

  4. Try to formulate questions: a good question is the core of the answer.

  5. Ask your questions to me or to the group members.

  6. Write code, do experiments, get new results (or find that the approach does not work at all; this is also a scientific result).

List of projects

  1. Deep PDE: Try to solve PDEs using deep learning
  2. Multi-classifier: Learn many linear classifiers with low-rank tensor constraints
  3. Linear classification: Given a machine learning task with a huge number of classes, try to reduce the complexity by incorporating low-rank constraints into the problem.
  4. Maxvol feature selection: Given a machine learning task with many features, try to extract the “good” ones using the maximum-volume algorithm.
  5. Risk analysis and high-dimensional integrals: Using a recently developed fast low-rank numerical quadrature technique, evaluate integrals over Brownian motion and apply them to problems from risk analysis.
  6. Sum-product-network and Metropolis algorithm: Given a multivariate function, try to recover a sum-product network for it based on the Metropolis sampling method.
  7. Fast direct solver for sparse matrices: Block elimination for hierarchically semiseparable matrices
  8. Boltzmann collision integral: Implement (or possibly find a new) fast technique for the evaluation of the Boltzmann collision integral.
  9. Parallel tensor-train optimization: using message passing algorithm and its connection to loopy belief propagation.
  10. Finite element solver for topology optimization in 3D printing: Study different regularization techniques for the finite element (FEM) solvers for topology optimization of different shapes.
  11. TT-cross approximation for topology optimization in 3D printing: Given a “Lego-type structure” of a 3D topology, defined by a set of parameters, compute a TT-cross approximation of the optimized quantity.
  12. Quantized tensor train solver for topology optimization in 3D printing: Develop a QTT solver for shape & topology optimization for different computational 3D printing tasks.
  13. Optimal sparse grids: Given a set of multivariate functions, find the best interpolation points using maxvol algorithm.
  14. QTTfun project: Develop a QTT alternative to the Chebfun system.
  15. TT-Toolbox production: Develop a well-documented, clean codebase for the TT-Toolbox project.
  16. Tensors on GPU: Develop a prototype code for the TT-Toolbox on GPU (OpenCL).
  17. Tensors on MPI: Develop a prototype code for the TT-Toolbox on a parallel cluster (MPI).
  18. Volume integral equation solvers using QTT: Given a volume integral equation, develop an efficient QTT-solver for it.
  19. Logistic matrix factorization using Riemannian optimization: In many problems, instead of the Frobenius norm, other loss functionals are used.

Deep PDE

Deep learning is now extremely popular for many different tasks, including image processing, speech recognition and many others. On the other hand, solving PDEs requires a lot of computing power. The goal of this project is to study the applicability of deep learning methods (based on standard software) to approximating functionals of the solutions of PDEs.

Problem description.

Consider a thermal conductivity problem

$$ \nabla \cdot k \nabla u = f, $$

where \(k\) is a given coefficient. We then want to approximate a functional \(F(u)\) of the solution (e.g., the maximal temperature or the mean temperature). The goal is to design a neural network that will be able to approximate this quantity to sufficient accuracy.


  • Learn how to solve the thermal conductivity problem numerically
  • Create a database of solutions/values of the functional
  • Design a deep architecture capable of approximating the functional.
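The steps above can be sketched in 1D with NumPy (the grid size, the constant source term, and the mean-temperature functional below are illustrative assumptions, not the project's prescribed setup):

```python
import numpy as np

def solve_conductivity_1d(k_faces, f, h):
    # Finite-difference discretization of (k u')' = f on [0, 1], u(0) = u(1) = 0.
    # k_faces[i] approximates k at the interface between nodes i-1 and i.
    n = len(f)
    A = np.zeros((n, n))
    for i in range(n):
        A[i, i] = -(k_faces[i] + k_faces[i + 1]) / h**2
        if i > 0:
            A[i, i - 1] = k_faces[i] / h**2
        if i < n - 1:
            A[i, i + 1] = k_faces[i + 1] / h**2
    return np.linalg.solve(A, f)

# Build a small training set: random conductivities -> mean temperature F(u).
rng = np.random.default_rng(0)
n = 99                        # interior grid nodes
h = 1.0 / (n + 1)
f = -np.ones(n)               # constant heat source
dataset = []
for _ in range(100):
    k = rng.uniform(0.5, 2.0, size=n + 1)
    u = solve_conductivity_1d(k, f, h)
    dataset.append((k, u.mean()))          # (features, target functional)
```

The pairs in `dataset` are exactly what a neural network would be trained on; for the 2D/3D problem, the dense solve would be replaced by a proper FEM package.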


Multi-classifier

Problem description.

We propose the following classifier. Given a feature vector \(p\), classification is performed by selecting the path \(i_1, \ldots, i_d\) that maximizes the “softmax” criterion

$$ F(p, i_1, \ldots, i_d) = \frac{p W_1(i_1) \ldots W_d(i_d)}{ p W_1(i_1) \ldots W_{d-1}(i_{d-1}) W}, $$

where \(W = \sum_{i_d = 1}^c W_d(i_d)\) and the last variable \(i_d\) is the class variable (which we want to discover). The learning of the matrices \(W_k(i_k)\) can be done by a gradient-descent-type algorithm: for each training sample \((p_s, c_s)\), given the matrices, we update the best path \(i^{(s)}_1, \ldots, i^{(s)}_{d-1}\) by maximizing the classification criterion, and then, for the given set \((p_s, i_1, \ldots, i_d)\), we minimize the loss (for example, logistic regression on the softmax criterion, or just the softmax itself) by taking a Riemannian gradient descent step.

This can be viewed as a \textbf{multiple linear classifier}, but with a special structure of the classifier matrices; otherwise, overfitting would be obvious. For a small number of layers (e.g., \(10\) layers, each of size \(2\)), the inference (classification) would not be hard.

It is also close to highway networks.


  1. Get familiar with logistic regression
  2. Get familiar with Riemannian optimization
  3. Get familiar with TT-Toolbox
  4. Implement the multi-classifier approach
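As a toy illustration of the path-selection step, the product \(p W_1(i_1) \ldots W_d(i_d)\) can be scored by brute force over all paths for a tiny \(d\). All shapes below, and the reduction of the final vector to a scalar by summation, are assumptions made for the sketch, not the project's prescribed setup:

```python
import itertools
import numpy as np

# Toy setup: d = 3 layers, 2 branches per layer, r x r cores W_k(i_k),
# and a feature row vector p of size r (all sizes are illustrative).
rng = np.random.default_rng(1)
d, branches, r = 3, 2, 4
W = [rng.standard_normal((branches, r, r)) for _ in range(d)]
p = rng.standard_normal(r)

def path_score(p, W, path):
    # Product p W_1(i_1) ... W_d(i_d); the final vector is reduced to a
    # scalar by summation (a stand-in for the normalization in the text).
    v = p
    for W_k, i_k in zip(W, path):
        v = v @ W_k[i_k]
    return v.sum()

# Brute-force the best path; the project's algorithm would replace this
# with the incremental (layer-by-layer) update described above.
best_path = max(itertools.product(range(branches), repeat=d),
                key=lambda path: path_score(p, W, path))
```

Brute force is exponential in \(d\); the point of the project is precisely to avoid this by updating the path greedily and taking Riemannian gradient steps on the cores.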

Linear classification

Problem description.

A typical linear classifier works as follows. Given a feature vector \(x\) and a number of classes \(c\), we learn an \(F \times c\) weight matrix \(W\), and the probabilities for different classes are given by the vector

$$ p = \frac{1}{Z} \exp(W x), $$

where \(Z\) is the normalization constant. For a large number of features and a large number of classes (maybe infinite, when the class variable is continuous), storing the matrix \(W\) is difficult, and we propose to constrain it to be of rank \(r\).


  1. Get familiar with rank-\(r\) matrices
  2. Get familiar with Riemannian optimization over rank-\(r\) manifolds
  3. Find a good problem with many classes
  4. Test the method!
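The steps above can be sketched on synthetic data. Here \(W = UV^{\top}\) is kept factored and trained by plain alternating gradient steps on the cross-entropy loss; this is a simple stand-in for proper Riemannian optimization on the rank-\(r\) manifold, and all problem sizes are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)
n_samples, n_features, n_classes, r = 200, 20, 10, 3
X = rng.standard_normal((n_samples, n_features))
y = rng.integers(n_classes, size=n_samples)      # synthetic labels

# Factored weight matrix W = U V^T of rank r.
U = 0.01 * rng.standard_normal((n_classes, r))
V = 0.01 * rng.standard_normal((n_features, r))
lr = 0.1
losses = []
for _ in range(300):
    logits = X @ V @ U.T                         # scores W x for every sample
    P = np.exp(logits - logits.max(axis=1, keepdims=True))
    P /= P.sum(axis=1, keepdims=True)            # p = exp(W x) / Z
    losses.append(-np.log(P[np.arange(n_samples), y]).mean())
    G = P.copy()
    G[np.arange(n_samples), y] -= 1.0            # gradient w.r.t. logits
    G /= n_samples
    U, V = U - lr * (G.T @ X @ V), V - lr * (X.T @ G @ U)
```

Only \((F + c)\,r\) parameters are stored instead of \(F c\), which is the whole point when the number of classes is huge.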

Maxvol feature selection

Problem description.

A machine learning task with many features can be difficult to solve; however, many features are not so important, and only the important ones can be used for learning larger and more accurate classifiers. But how to select those features? The idea we want to test is based on maximal-volume submatrices. Represent the data as an \(N_{obj} \times N_F\) matrix, and find in it the submatrix with maximal determinant (in absolute value).


  1. Look for different feature selection algorithms in the literature
  2. Find appropriate many-features datasets
  3. Implement the maximal volume submatrix approach
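The third step can be sketched with the standard maxvol row-swapping iteration. The way it is wired to feature selection below (maxvol on a rank-\(r\) orthogonal factor of the transposed data matrix) is one plausible choice, an assumption of this sketch:

```python
import numpy as np

def maxvol(A, n_iter=100, tol=1.05):
    # Standard row-swapping iteration: keep r rows, and while some entry of
    # A A_sub^{-1} exceeds 1 in modulus, swap that row in (each swap
    # multiplies |det(A_sub)| by that entry's modulus).
    n, r = A.shape
    rows = np.arange(r)                          # start from the first r rows
    for _ in range(n_iter):
        B = A @ np.linalg.inv(A[rows])           # B[rows] is the identity
        i, j = np.unravel_index(np.abs(B).argmax(), B.shape)
        if abs(B[i, j]) <= tol:
            break
        rows[j] = i
    return rows

# Feature-selection sketch: take a rank-r orthogonal factor of the
# N_F x N_obj matrix and pick the r rows (= features) whose submatrix
# has (locally) maximal volume.
rng = np.random.default_rng(0)
data = rng.standard_normal((500, 30))            # N_obj x N_F
Q, _, _ = np.linalg.svd(data.T, full_matrices=False)
selected = maxvol(Q[:, :5])                      # indices of 5 features
```

By construction, each accepted swap increases the volume, so the selected submatrix is never worse than the starting one.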

Risk analysis and high-dimensional integrals

Problem description.

Many stock prices can be modelled using Brownian motion, i.e., the next state depends on the previous state plus some random event. Important quantities (for example, prices of call options) can then be estimated as expectations over the random process. The straightforward way is to do a Monte-Carlo simulation, but it cannot give high accuracy. Another way is to write down the expectation as a path integral and then approximate it by a multi-dimensional integral. For such integrals we have recently developed a new low-rank quadrature technique; the idea is to test it on examples from risk analysis.


  1. Read the papers on high-dimensional integrals in risk analysis
  2. Read the paper on our fast low-rank quadrature technique
  3. Implement the code for high-dimensional financial integrals
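A Monte-Carlo baseline, against which the low-rank quadrature would be compared, fits in a few lines. Here an arithmetic Asian call under geometric Brownian motion is priced; all market parameters and the number of time steps are illustrative assumptions:

```python
import numpy as np

# Plain Monte-Carlo pricing of an arithmetic Asian call option under
# geometric Brownian motion; discretizing time into d steps turns the
# path expectation into a d-dimensional integral.
rng = np.random.default_rng(0)
s0, strike, rate, sigma, T = 100.0, 100.0, 0.05, 0.2, 1.0
d = 16                          # time steps -> a d-dimensional integral
n_paths = 200_000
dt = T / d

increments = rng.standard_normal((n_paths, d))
log_paths = np.cumsum((rate - 0.5 * sigma**2) * dt
                      + sigma * np.sqrt(dt) * increments, axis=1)
S = s0 * np.exp(log_paths)                      # simulated price paths
payoff = np.maximum(S.mean(axis=1) - strike, 0.0)
price = np.exp(-rate * T) * payoff.mean()       # discounted expectation
```

The Monte-Carlo error decays only as \(O(1/\sqrt{n})\), which is exactly the accuracy limitation mentioned above that a deterministic low-rank quadrature aims to beat.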

Sum-product-network and Metropolis algorithm

Problem description.

Sum-product networks (SPNs) are a new deep architecture. One of their interesting properties is that there is a constructive algorithm for learning the structure of such a network from observations. On the other hand, one can look at an SPN as an approximation of a multivariate function. The idea is then to use a Metropolis-type algorithm that samples the given function as if it were a probability density, and then apply SPN learning to recover the SPN structure.


  1. Read literature about sum-product-networks
  2. Get familiar with software that learns the network structure
  3. Get familiar with the Metropolis sampling algorithm
  4. Try to compute SPN for deterministic functions
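Step 3 can be prototyped directly: a Metropolis sampler that treats a (non-negative) deterministic function as an unnormalized density. The Gaussian-shaped test function and the step size below are illustrative assumptions:

```python
import numpy as np

def metropolis(f, x0, n_samples, step=0.5, rng=None):
    # Metropolis sampling with a symmetric Gaussian random-walk proposal;
    # f is treated as an unnormalized probability density (f > 0).
    rng = rng or np.random.default_rng(0)
    x = np.asarray(x0, dtype=float)
    out = np.empty((n_samples, x.size))
    fx = f(x)
    for t in range(n_samples):
        prop = x + step * rng.standard_normal(x.size)
        fp = f(prop)
        if rng.random() < fp / fx:   # accept with probability min(1, f'/f)
            x, fx = prop, fp
        out[t] = x
    return out

# Sample a 2D Gaussian-shaped deterministic function as if it were a
# density; an SPN structure learner would then be run on these samples.
f = lambda x: np.exp(-0.5 * np.sum(x**2))
samples = metropolis(f, np.zeros(2), 20000)
```

The empirical moments of the samples should match those of the normalized density, which gives an easy sanity check before feeding the samples to an SPN learner.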

Fast direct solver for sparse matrices

Problem description.

Sparse matrices are important and have \(\mathcal{O}(N)\) parameters. However, solving a linear system with a sparse matrix is much more difficult. The goal of this project is to get familiar with the subject and implement a fast direct solver based on the so-called HSS (hierarchically semiseparable) structure.


  1. Read literature on fast low-rank direct solvers
  2. Study our code
  3. Implement a variant for HSS structure
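A natural baseline for step 1 is an ordinary sparse direct solve, against which an HSS-based solver would be benchmarked. A minimal sketch using SciPy's sparse LU on a 1D Laplacian (the test matrix is an illustrative choice):

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

# Standard sparse direct solve: LU factorization (SuperLU) of the
# tridiagonal 1D Laplacian, the simplest sparse model problem.
n = 1000
A = sp.diags([-1, 2, -1], [-1, 0, 1], shape=(n, n), format="csc")
b = np.ones(n)
lu = spla.splu(A)           # factor once ...
x = lu.solve(b)             # ... then solve cheaply for many right-hand sides
```

For 2D/3D problems the LU factors suffer from fill-in; exploiting low-rank (HSS) structure in the fill-in blocks is exactly what the project's fast solver is about.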

Boltzmann collision integral

Problem description.

Boltzmann collission integral is an integral transform that takes two functions from \(L^2(\mathbb{R}^3)\) to a one function in \(L^2(\mathbb{R}^3)\). For example, see Discretizated on a grid, it can be defined as a \(N^3 \times N^3 \times N^3\) array, and evaluation requires \(\mathcal{O}(N^9)\) complexity. However, it has many symmetries. Our goal is to compare different method for the evaluation of the Boltzmann integral.


  1. Read the literature for numerical evaluation of Boltzmann collision integrals
  2. Come up with the discretization scheme
  3. Implement it and evaluate its efficiency

Parallel tensor-train optimization

Problem description.

TT-format (Tensor Train) is a representation of \(d\)-dimensional tensors of the form

$$A(i_1, \ldots, i_d) = G_1(i_1) \ldots G_d(i_d),$$

where \(G_k(i_k)\) has size \(r_{k-1} \times r_k\) for fixed \(i_k\). A typical way to find the TT-representation is to optimize over the cores \(G_k\) in a sequential manner (keep all fixed, update one, and so on). Our goal is to present an asynchronous version of such algorithms, based on the connection with loopy belief propagation.


  1. Get familiar with tensor trains
  2. Get familiar with loopy belief propagation
  3. Implement an asynchronous optimization for the TT-format and study its efficiency.
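For step 1, the format itself can be sketched in a few lines: random cores, evaluation of a single entry as the product of core slices, and a full contraction for checking. The sizes below are illustrative:

```python
import numpy as np

# TT cores G_k of shape (r_{k-1}, n, r_k), with boundary ranks r_0 = r_d = 1;
# the tensor entry A(i_1, ..., i_d) is the product G_1(i_1) ... G_d(i_d).
rng = np.random.default_rng(0)
d, n, r = 4, 3, 2
ranks = [1] + [r] * (d - 1) + [1]
cores = [rng.standard_normal((ranks[k], n, ranks[k + 1])) for k in range(d)]

def tt_entry(cores, index):
    v = np.ones((1, 1))
    for G, i_k in zip(cores, index):
        v = v @ G[:, i_k, :]                 # multiply by the r_{k-1} x r_k slice
    return v[0, 0]

def tt_full(cores):
    # Contract all cores into the full n^d tensor (exponential cost;
    # only for checking on tiny examples).
    T = cores[0]
    for G in cores[1:]:
        T = np.tensordot(T, G, axes=([-1], [0]))
    return T.reshape([c.shape[1] for c in cores])
```

Sequential (ALS-style) optimization sweeps over these cores one at a time; the project asks how to update several cores concurrently, in the spirit of loopy belief propagation.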

Finite element solver for topology optimization in 3D printing

Problem description.

The “mega-goal”: given a functional \(F\), find a topology that optimizes this functional under some constraints (for example, the weight of the structure). One of the typical tasks is linear elasticity, which is solved by iterating over the material properties. One can formulate the problem as a minimization problem with constraints (regularization).


  1. Get familiar with FEM approaches for topology optimization
  2. Select an appropriate topology optimization problem
  3. Formulate it as an inverse problem with a suitable regularization
  4. Implement a prototype code
  5. (Bonus) Print the structure

TT-cross approximation for topology optimization in 3D printing

Problem description.

The “mega-goal”: given a functional \(F\), find a topology that optimizes this functional under some constraints (for example, the weight of the structure). One way to do so is to use a “balls and sticks” method to create the structure: a set of primitives that can be connected in different ways and parametrized by several parameters. Checking all possible parameters has exponential complexity, and each evaluation requires the solution of a PDE. The idea is to use the TT-cross method to adaptively sample the parameter space.


  1. Get familiar with FEM approaches for topology optimization
  2. Select a suitable local model, determine the number of parameters
  3. Get familiar with the TT-cross approximation algorithm
  4. Write a prototype code

Quantized tensor train solver for topology optimization in 3D printing

Problem description.

The “mega-goal”: given a functional \(F\), find a topology that optimizes this functional under some constraints (e.g., weight). One of the typical tasks is linear elasticity, which is solved by iterating over the material properties. One can formulate the problem as a minimization problem with constraints (regularization). Our goal here is to use an unusual regularization: constrain the material properties to the low-QTT-rank case. However, you can be smarter and come up with other ideas, for example coming from deep networks (that would allow very curved shapes to be generated). To work on this project you should get familiar both with FEM solvers for topology optimization and with optimization under low-rank constraints.


  1. Get familiar with FEM approaches for topology optimization
  2. Get familiar with Tensor-Train optimization
  3. Try to apply TT/QTT constraints to the material coefficients to get new optimized shapes.

Optimal sparse grids

Problem description.

Approximation of multivariate functions is a notoriously difficult task. One of the most efficient approaches, based on so-called sparse grids, uses the set of polynomials with bounded total degree. This provides a sparse basis in the set of multidimensional functions. Interpolation is the simplest task, and in the standard setting it gives rise to the so-called sparse grids. However, given the basis, the problem of selecting optimal interpolation points can be solved by the maximum-volume algorithm. The goal of this project is to compute those interpolation sets and evaluate their efficiency.


  1. Get familiar with sparse grids
  2. Get familiar with max-vol algorithm
  3. Implement a functional version of the maxvol algorithm
  4. Estimate the efficiency of the approach

QTTfun project

Problem description.

QTT (Quantized Tensor Train) is a very elegant and efficient representation of functions, which in many cases achieves complexity logarithmic in the number of samples. However, even for 1D functions, with the basic software it is not straightforward to compute integrals or find roots. This is all implemented in the Chebfun system in a very nice and attractive way. The goal of this project is to implement a QTT alternative to the Chebfun project, which should be much more efficient.


  1. Get familiar with tensor train/QTT. Install the basic software
  2. Get familiar with Chebfun system
  3. Implement QTTFun 0.1 with basic overloaded functions (arithmetics, integrals, etc.)
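The core idea behind QTT can be illustrated without any toolbox: sample a function at \(2^d\) points, view the vector as a \(2 \times 2 \times \ldots \times 2\) tensor, and check the ranks of its unfoldings. The test function and tolerance below are illustrative assumptions:

```python
import numpy as np

# Sample a smooth function at 2^d points and inspect the ranks of all
# unfoldings of the resulting 2 x 2 x ... x 2 tensor; small ranks are
# what makes QTT storage logarithmic in the number of samples.
d = 10
x = np.linspace(0.0, 1.0, 2**d)
v = np.sin(2 * np.pi * x)                 # 1024 samples

qtt_ranks = [np.linalg.matrix_rank(v.reshape(2**k, -1), tol=1e-8)
             for k in range(1, d)]        # rank of the k-th unfolding
```

For this function every rank is at most 2 (since \(\sin(a+b)\) splits into two products), so the QTT cores hold \(O(d\,r^2)\) numbers instead of \(2^d\) samples; a QTTFun system would build integration, rootfinding, etc. on top of such compressed representations.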

TT-Toolbox production

Problem description.

TT-Toolbox (there is also a MATLAB version) is our basic software for working with tensors, and it is actively used by many groups around the world. However, it is not well documented and the number of examples is limited. The codebase should also be cleaned up.


  1. Get familiar with the TT-Toolbox
  2. Implement basic examples
  3. Try to document the code, find bugs, propose new functionality

Tensors on GPU

Problem description.

Tensor train is a very efficient representation of multidimensional tensors. However, there are use cases where not only the asymptotic complexity matters, but also the actual computation speed. Our main application in mind is machine learning with low-rank constraints, where one has to do many iterations, and if it is possible to speed up the computations on a GPU, this also improves the results. The TT-Toolbox is implemented in Python with no GPU support. The goal of this project is two-fold: get familiar with modern GPU programming techniques, including pyopencl, and, based upon those, build a GPU version of the basic tensor algorithms.


  1. Get familiar with the TT-Toolbox
  2. Get familiar with GPU programming with pyopencl
  3. Implement a prototype code for basic TT algorithms on GPU, evaluate their performance compared to the peak one

Tensors on MPI

Problem description.

Tensor train is a very efficient representation of multidimensional tensors. There are use cases when the representation is so large that it does not fit into memory (i.e., large TT-ranks) and parallelization is required. The goal of this project is to figure out how the algorithms implemented in the TT-Toolbox can be parallelized (for example, using mpi4py) and to test the resulting prototype. Moreover, the actual parametrization of TT is very sequential, and it would be really interesting to consider asynchronous approaches to TT optimization.


  1. Get familiar with the TT-Toolbox
  2. Get familiar with MPI programming using mpi4py
  3. Implement a prototype code for basic TT algorithms using MPI and evaluate their performance

Volume integral equation solvers using QTT

Problem description.

You probably have some understanding of what a partial differential equation is. However, there are cases when volume integral equations arise, i.e., the system has the form \(K u = f\), where \(K\) is a non-local operator, for example,

$$ \int \frac{q(y) dy}{\Vert x - y \Vert} = f. $$

After discretization, we are left with an \(N^3 \times N^3\) dense matrix. For a uniform grid, the complexity of one matrix-by-vector product can be greatly reduced, to \(\mathcal{O}(N^3 \log N)\), by using the Fast Fourier Transform (FFT). The goal of this project is to reduce this complexity much further, to \(\mathcal{O}(N)\) or even \(\mathcal{O}(\log^{\alpha} N)\), using the quantized tensor train representation.


  1. Get familiar with volume integral equations
  2. Pick one interesting VIE (probably with some application in mind)
  3. Get familiar with Tensor Train/ QTT formats
  4. Implement a basic solver for a VIE in TT/QTT formats using the TT-Toolbox
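The FFT trick mentioned above can be sketched in 1D: a convolution-type kernel \(K[i,j] = g(|i-j|)\) gives a symmetric Toeplitz matrix, whose matrix-vector product costs \(O(N \log N)\) after embedding it into a circulant matrix. The kernel below is an illustrative smoothed \(1/\|x-y\|\):

```python
import numpy as np

def toeplitz_matvec(g_vals, q):
    # O(N log N) product with the symmetric Toeplitz matrix K[i, j] = g(|i - j|):
    # embed the first column into a circulant of size 2N - 1 and use the FFT.
    N = len(q)
    col = np.concatenate([g_vals, g_vals[1:][::-1]])   # circulant embedding
    out = np.fft.ifft(np.fft.fft(col) * np.fft.fft(q, 2 * N - 1))
    return out[:N].real

N = 256
g = 1.0 / (np.arange(N) + 1.0)        # smoothed 1/|x - y| kernel on the grid
q = np.random.default_rng(0).standard_normal(N)
fast = toeplitz_matvec(g, q)
```

In 3D the same idea applies level-by-level (block Toeplitz with Toeplitz blocks), giving the \(\mathcal{O}(N^3 \log N)\) cost quoted above; the QTT representation is what would push the cost below that.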

Logistic matrix factorization using Riemannian optimization

Problem description.

Low-rank matrix factorization of the form

$$ A \approx U V^{\top}, $$

where \(U\) is \(n \times r\) and \(V\) is \(m \times r\), appears in many applications, for example in recommender systems, where the rows enumerate the users and the columns the products. However, the typically used Frobenius norm does not fit this setting very well: we are interested only in predicting like/dislike, not the actual numerical value. Thus, the optimization should be formulated in a non-standard way, and logistic matrix factorization is one possible way to do so. Optimization of non-quadratic functionals can be done very efficiently using Riemannian optimization, and this is the goal of this project.


  1. Understand where the logistic matrix factorization comes from
  2. Get familiar with basic Riemannian optimization techniques
  3. Implement a Riemannian optimization approach for logistic matrix factorization
  4. Compare with existing software on standard examples.
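The objective itself can be prototyped before any Riemannian machinery: a logistic loss on \(\pm 1\) entries of \(A \approx UV^{\top}\), minimized by plain gradient descent on the factors. This is a simple stand-in for the Riemannian approach the project targets, and the synthetic data sizes are illustrative:

```python
import numpy as np

# Logistic matrix factorization sketch: observed like/dislike labels
# Y in {-1, +1}, model A ~ U V^T of rank r, per-entry logistic loss
# log(1 + exp(-Y * (U V^T))), plain gradient descent on U and V.
rng = np.random.default_rng(0)
n, m, r = 50, 40, 3
true = rng.standard_normal((n, r)) @ rng.standard_normal((r, m))
Y = np.sign(true)                      # ground-truth like/dislike labels

U = 0.1 * rng.standard_normal((n, r))
V = 0.1 * rng.standard_normal((m, r))
lr = 0.01
losses = []
for _ in range(300):
    M = U @ V.T
    losses.append(np.log1p(np.exp(-Y * M)).mean())   # mean logistic loss
    S = -Y / (1.0 + np.exp(Y * M))                   # d(loss)/dM, entrywise
    U, V = U - lr * (S @ V), V - lr * (S.T @ U)
```

The loss depends on the entries of \(UV^{\top}\) only through their signs' margins, which is exactly why the Frobenius norm is the wrong objective here; a Riemannian method would restrict the same gradient to the tangent space of the rank-\(r\) manifold.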

