Invited talks

Algorithmic aspects of the Lovász local lemma
Can a needle be efficiently located in a haystack? Sometimes!
Lefteris Kirousis
Department of Mathematics, National and Kapodistrian University of Athens, Greece

Let $X_1, \dots, X_n$ be random variables and let $E_1, \dots, E_m$ be "undesirable" events, each depending on a subset of the variables. Suppose that the probabilities of all events are bounded by a number $p < 1$. Obviously, if the events are mutually independent, the probability of an assignment to the variables avoiding all undesirable events (i.e. none occurring) is at least $(1-p)^m$. Therefore, as the latter probability is strictly positive, we can conclude that there is at least one assignment for which none of the events occur. By an old and much-used result of Lovász, the same is true if the events are not independent, provided the number of events that each individual event is correlated with is at most inversely proportional to $p$, with constant of proportionality not larger than the inverse of Euler's number $e$ (i.e., $e\,p\,(d+1) \le 1$, where $d$ is the maximum number of events any single event is correlated with). This sufficient condition of Lovász is fortunately weak enough to hold in several cases where the probability of an assignment avoiding all undesirable events is very small, though positive (a needle in a haystack). So rightly, the extremely simple randomized algorithm designed by Moser (thirty-five years after Lovász's nonalgorithmic proof) to locate such an assignment (locate the needle) was hailed as a major algorithmic success. I am going to discuss a very simple direct approach to proving the correctness of Moser's algorithm and various variants of it. More importantly, I will formulate a condition weaker than that of Lovász that is still sufficient to show the existence of a good assignment (one that avoids all undesirable events). However, under only this condition, it seems that it is indeed inherently hard to locate a good assignment. Nevertheless, under this condition, the good assignment can be constructed efficiently, but alas, only through an algorithm that utilizes two interacting computing agents.
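To make the abstract's setting concrete, here is a minimal sketch of Moser's resample algorithm in its classic special case, $k$-SAT (an illustration, not the talk's exposition): each clause is an undesirable event depending only on its own variables, and whenever a clause is violated, only the variables of that clause are resampled.

```python
import random

def violated(clause, assignment):
    """True iff every literal of the clause is falsified.

    A clause is a list of nonzero integers: v means variable v,
    -v means its negation.
    """
    return all(assignment[abs(l)] != (l > 0) for l in clause)

def moser_resample(clauses, n_vars, seed=0):
    """Moser's algorithm: resample the variables of a violated clause
    until no undesirable event occurs.

    Under the Lovász condition the expected number of resamplings is
    polynomial; here we simply loop until a good assignment is found.
    """
    rng = random.Random(seed)
    assignment = {v: rng.random() < 0.5 for v in range(1, n_vars + 1)}
    while True:
        bad = next((c for c in clauses if violated(c, assignment)), None)
        if bad is None:
            return assignment  # the needle has been located
        for l in bad:  # resample only the variables of the violated clause
            assignment[abs(l)] = rng.random() < 0.5

# A small satisfiable 3-SAT instance.
clauses = [[1, 2, 3], [-1, 2, -3], [1, -2, 3], [-1, -2, -3]]
a = moser_resample(clauses, 3)
assert not any(violated(c, a) for c in clauses)
```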

Fast Randomized Algorithms for Tensor Operations and their Applications
George Michailidis
Professor and Director of the Informatics Institute, University of Florida, United States

We consider scalable randomized algorithms for many common tensor operations, including low-rank approximation and decomposition, together with tensor multiplication. Such operations arise in a number of modern-day applications of tensors in image and signal processing. Specifically, we introduce polynomial-time algorithms that employ a small number of lateral and/or horizontal slices of the underlying third-order tensor and that offer relative-error guarantees for the quality of the solutions. All results can easily be extended to higher-order tensors. We demonstrate the applicability of these algorithms on diverse applications. The first focuses on mass-spectrometry-based imaging, where we use empirical statistical leverage scores of the input data to provide provably good low-rank approximations of the measurements in the data that are expressed in terms of actual ions and positions, as opposed to the results obtained from difficult-to-interpret matrix decomposition techniques. Another illustration concerns image and video recovery from incomplete and noisy data through a nuclear-norm minimization approach. Compared to existing state-of-the-art techniques, the proposed algorithms exhibit superior performance in both speed and solution quality.
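The statistical leverage scores mentioned in the abstract can be illustrated in the matrix case (a sketch, not the speakers' tensor algorithms): the rank-$k$ leverage score of column $j$ is the squared norm of the $j$-th column of the top-$k$ right singular vectors, and sampling columns proportionally to these scores yields provably good low-rank approximations.

```python
import numpy as np

def leverage_scores(A, k):
    """Rank-k column leverage scores of a matrix A: the squared norms of
    the columns of the top-k right singular vector matrix."""
    _, _, Vt = np.linalg.svd(A, full_matrices=False)
    return np.sum(Vt[:k, :] ** 2, axis=0)

rng = np.random.default_rng(0)
A = rng.standard_normal((50, 8)) @ rng.standard_normal((8, 30))  # rank 8
scores = leverage_scores(A, 8)

# Leverage scores lie in [0, 1] and sum to k; large scores flag the
# columns (here, slices in the tensor setting) that carry the most
# information about the rank-k subspace.
assert np.all(scores >= -1e-9) and np.all(scores <= 1 + 1e-9)
assert np.isclose(scores.sum(), 8.0)
```

Sampling a small number of columns with probability proportional to these scores is the standard route to relative-error guarantees of the kind the abstract describes.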

Deterministic Lipschitz global optimization algorithms and their comparison with metaheuristics
Yaroslav D. Sergeyev
Dipartimento di Ingegneria Informatica, Modellistica, Elettronica e Sistemistica, Università della Calabria, Rende (CS), Italy; Department of Software and Supercomputing Technologies, Lobachevsky State University of Nizhni Novgorod, Russia

Joint work with Dmitri E. Kvasov and Marat S. Mukhametzhanov
Deterministic Lipschitz global optimization algorithms are considered in this lecture. Several modifications are presented and compared with widely used multidimensional metaheuristic global optimization methods: genetic algorithms, differential evolution, particle swarm optimization, artificial bee colony algorithms, and firefly algorithms. For this purpose, a methodology has been introduced that allows one to compare stochastic methods with deterministic ones by using operational characteristics originally proposed for working with deterministic algorithms only. As a result, a visual comparison of methods of a different nature on classes of randomly generated test functions becomes possible. A detailed description of the new methodology for comparison, called operational zones, is given, and results of extensive numerical experiments are reported.
Generalizations of the Bolzano theorem for tackling problems with imprecise information
Michael N. Vrahatis
Computational Intelligence Laboratory (CILab), Department of Mathematics, University of Patras, GR-26110 Patras, Greece

The first proofs of the very important and pioneering intermediate value theorem (also called Bolzano's theorem), given independently by Bolzano in 1817 and Cauchy in 1821, were crucial in the procedure of arithmetization of analysis (a research program in the foundations of mathematics during the second half of the 19th century). A straightforward generalization of Bolzano's theorem to continuous mappings of an $n$-cube (parallelotope) into $\mathbb{R}^n$ was proposed without proof by Poincaré in 1883 and 1884 in his work on the three-body problem. The Poincaré theorem was soon forgotten and it has come to be known as Miranda's theorem, which partly explains the nomenclature Poincaré-Miranda theorem as well as Bolzano-Poincaré-Miranda theorem. This theorem is closely related to important theorems in analysis and topology, and it is an invaluable tool for verified solutions of numerical problems by means of interval arithmetic. Also, it has been shown that this theorem is equivalent to the Brouwer fixed point theorem (1912), and it is closely related to the theorems of Borsuk (1933), Kantorovich (1948) [Nobel Laureate in Economic Sciences in 1975] and Smale (1986) [Fields Medalist in 1966]. Recently, a generalization of the Bolzano theorem for simplices has been proposed [Vrahatis M.N., Generalization of the Bolzano theorem for simplices, Topology and its Applications, 202, 40-46, 2016]. The obtained proof is based on the Knaster-Kuratowski-Mazurkiewicz covering principle (1929) (the KKM lemma for short). This lemma constitutes the basis for the proof of many theorems (including the famous Brouwer fixed point theorem).
It is worth noting that three pioneering classical results, namely, the Brouwer fixed point theorem (1912), the Sperner lemma (1928), and the KKM lemma (1929), are mutually equivalent in the sense that each one can be deduced from the others. The KKM lemma has numerous applications in various fields of pure and applied mathematics. In particular, in the field of mathematical economics, the very important and pioneering extension of the KKM lemma due to Shapley (1973) [Nobel Laureate in Economic Sciences in 2012], customarily called the Knaster-Kuratowski-Mazurkiewicz-Shapley theorem, constitutes the basis for the proof of many theorems on the existence of solutions in game theory and in the general equilibrium theory of economic analysis.
Various generalizations of the Bolzano theorem are presented that are particularly useful for establishing the existence of a solution of a system of nonlinear equations in several variables, as well as for the existence of fixed points of functions and the localization of extrema of objective functions. These generalized theorems require only the algebraic sign of the function, which is the smallest amount of information (one bit of information) necessary for the purpose at hand, and no additional information. Thus, these generalized theorems are of major importance for tackling problems with imprecise (not exactly known) information. Problems of this kind occur in various scientific fields including mathematics, physics, economics, engineering, computer science, biomedical informatics, medicine and bioengineering, among others. This is so because, in a large variety of applications, precise function values are either impossible or time-consuming and computationally expensive to obtain.
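The "one bit of information" point can be seen already in the one-dimensional ancestor of these results: the classical bisection method, sketched below (an illustration, not material from the talk), queries nothing but the algebraic sign of $f$ at each step, so it works even when function values are known only imprecisely, as long as their signs are correct.

```python
def sign(x):
    """Algebraic sign: -1, 0 or +1 -- the single bit (plus the zero case)
    the method needs per function query."""
    return (x > 0) - (x < 0)

def bisect(f, a, b, tol=1e-10):
    """Locate a zero of f in [a, b] under Bolzano's hypothesis
    sign(f(a)) != sign(f(b)), using only signs of f."""
    sa = sign(f(a))
    assert sa != 0 and sa != sign(f(b)), "Bolzano's hypothesis must hold"
    while b - a > tol:
        m = 0.5 * (a + b)
        if sign(f(m)) == sa:   # the sign alone decides which half to keep
            a = m
        else:
            b = m
    return 0.5 * (a + b)

# f(x) = x^3 - 2 changes sign on [1, 2], so a root (the cube root of 2)
# is guaranteed and located to the requested tolerance.
root = bisect(lambda x: x * x * x - 2.0, 1.0, 2.0)
assert abs(root - 2.0 ** (1.0 / 3.0)) < 1e-8
```

The generalized Bolzano theorems of the talk extend exactly this sign-only information model to $n$-cubes and simplices.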