Examples include recovering sparse or group-sparse vectors, low-rank matrices, and the sum of sparse and low-rank matrices, among others. Sreejith T. and Radha Krishna Ganti, "Coverage and rate in cellular networks with multiuser spatial multiplexing." By Kishore Jaganathan, Samet Oymak and Babak Hassibi. The Gaussian min-max theorem in the presence of convexity. Subspace expanders and matrix rank minimization. Living on the edge. The typical scenario that arises in most big data problems is one where the ambient dimension of the signal is very large, e.g. …
Statistics (machine learning); computer science (information theory). In the rank minimization (RM) problem, one aims to find the matrix with the lowest rank that satisfies a set of linear constraints. The topic of recovering a structured model from a small number of linear observations has been well-studied in recent years. I am also thankful to the members of my candidacy and defense talk committees, Professors Joel Tropp, Yaser Abu-Mostafa, P. … Phase transitions in random convex programs, Joel A. Tropp. By Christos Thrampoulidis, Samet Oymak and Babak Hassibi. Weiyu Xu, Samet Oymak, Juhwan Yoo and Matthew Thill. An unbiased approach to low-rank recovery. In matrix recovery, one takes n linear measurements. Samet Oymak and Mahdi Soltanolkotabi, "Fast and reliable parameter estimation from nonlinear observations," SIAM Journal on Optimization. In this position paper, I first describe a new perspective on machine learning (ML) via four basic problems or levels, namely, what to …
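The rank minimization setup described above is commonly attacked by replacing the (nonconvex) rank function with its convex surrogate, the nuclear norm. The sketch below is a hypothetical illustration using numpy, not the implementation from any cited paper: it recovers a rank-1 matrix from a noisy observation via singular-value thresholding, the proximal operator of the nuclear norm.

```python
import numpy as np

rng = np.random.default_rng(0)

# Ground truth: a 20x20 rank-1 matrix.
u = rng.standard_normal((20, 1))
v = rng.standard_normal((20, 1))
X = u @ v.T

# Observe X corrupted by small dense noise.
Y = X + 0.01 * rng.standard_normal(X.shape)

# Singular-value thresholding: shrink each singular value by tau and
# drop the ones that fall below zero. This is the proximal operator of
# the nuclear norm, the convex surrogate for rank.
U, s, Vt = np.linalg.svd(Y, full_matrices=False)
tau = 0.5
s_thresh = np.maximum(s - tau, 0.0)
X_hat = (U * s_thresh) @ Vt

# The estimate is (numerically) rank 1 and close to the ground truth.
rank_hat = int(np.sum(s_thresh > 1e-8))
err = np.linalg.norm(X_hat - X) / np.linalg.norm(X)
print(rank_hat, err)
```

Because the noise singular values are far below the threshold `tau`, only the dominant rank-1 component survives; the chosen sizes and `tau = 0.5` are arbitrary illustrative values.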
"Information theory and its relation to machine learning," article in Lecture Notes in Electrical Engineering 336, January 2015. This dimension reduction procedure succeeds when it preserves certain geometric features of the set. We focus on the minimization of a least-squares objective. You are required to follow the template, though you are free to use other typesetting software, e.g. … In the context of information theory and communications, classical coding theory is often associated with the transmission of a message in a manner which is robust to various types of corruption. Date, name, title of presentation: 4/3/2019, Yu-Wei Hsieh (University of Southern California), "A semiparametric discrete-choice aggregate …" St. Petersburg, Russia, 31 July to 5 August 2011, pages 2263-3016, IEEE catalog number … In this paper we characterize sharp time-data tradeoffs for optimization problems used for solving linear inverse problems. Proceedings of Machine Learning Research. Neural Information Processing Systems (NeurIPS), 2014. Eldar, and Babak Hassibi, "Simultaneously structured models with application to sparse and low-rank matrices," arXiv. Samet Oymak, Zalan Fabian, Mingchen Li, Mahdi Soltanolkotabi, Sep 25, 2019. Matrix rank minimization (RM) problems have recently gained extensive attention due to numerous applications in machine learning, system identification and graphical models.
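Where the text mentions minimizing a least-squares objective, a minimal sketch of the generic approach (plain gradient descent on a synthetic overdetermined system, assuming nothing about the specific algorithms in the cited works) looks like this:

```python
import numpy as np

rng = np.random.default_rng(1)

# Overdetermined linear system: n = 100 measurements, d = 5 unknowns.
A = rng.standard_normal((100, 5))
x_true = rng.standard_normal(5)
b = A @ x_true

# Minimize f(x) = 0.5 * ||A x - b||^2 by gradient descent.
# Step size 1 / ||A||_2^2 guarantees convergence for this objective.
x = np.zeros(5)
step = 1.0 / np.linalg.norm(A, 2) ** 2
for _ in range(500):
    x = x - step * A.T @ (A @ x - b)

print(np.linalg.norm(x - x_true))
```

With noiseless measurements and more equations than unknowns, the iterates converge to the unique least-squares solution, which here coincides with `x_true`.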
Recovery of sparse 1-D signals from the magnitudes of their Fourier transform. Simons-Berkeley Research Fellowship on Information Theory, Spring 2015. Isometric sketching of arbitrary sets via the restricted isometry property. IEEE International Symposium on Information Theory Proceedings (ISIT), 20… Information theory, pattern recognition, and neural networks. We empirically demonstrate that the Jacobian of neural networks exhibits a low-rank structure and harness this property to develop new optimization and generalization guarantees. Neural Information Processing Systems (NIPS) 2014, Montreal, Canada. The broad intent of this thesis is to explore a set of problems in coding theory, where the term "coding theory" is in and of itself used broadly. Samet Oymak: academic experience, research interests. Learning compact neural networks with regularization, Samet Oymak. Abstract: proper regularization is critical for speeding up training, improving generalization performance, and learning compact models that are cost-efficient. Proceedings of the Thirty-Second Conference on Learning Theory, held in … On a theory of nonparametric pairwise similarity for clustering.
The Heckscher-Ohlin theory explains why countries trade goods and services with each other, with the emphasis on the difference in resources between two countries. This model shows that comparative advantage is actually influenced by the interplay among the resources that countries have in relative abundance. Ranked 1st in the Electrical Engineering qualifying exam, Caltech, January 2010. Approximation power of random neural networks.
IEEE International Symposium on Information Theory, 20… Sharp time-data tradeoffs for linear inverse problems. Mathematics Genealogy Project, Department of Mathematics, North Dakota State University, P.O. Box … "Stochastic gradient descent learns state equations …" In the Euclidean setting, one fundamental technique for dimension reduction is to apply a random linear map to the data. We analyze two programs based on regularized nuclear norm minimization, with the goal of recovering the low-rank part of the adjacency matrix. "Phase retrieval under a generative prior," Proceedings of … "A survey of spectral factorization methods," Sayed, 2001. "Information theory and its relation to machine learning." "Connecting clustering to classification," Yingzhen Yang, Feng Liang, Shuicheng Yan, Zhangyang Wang, Thomas S. Huang. Computer science (information theory); computer science (learning). All presentations will use my own laptop, and thus need to be turned in by noon. Simple bounds for noisy linear inverse problems with exact side information. By Samet Oymak, Benjamin Recht and Mahdi Soltanolkotabi.
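The random-linear-map technique for dimension reduction mentioned above can be sketched concretely. This is a generic Johnson-Lindenstrauss-style illustration with arbitrary sizes, not code from any of the cited works: a Gaussian sketch maps points from a high ambient dimension to a much smaller one while approximately preserving pairwise distances.

```python
import numpy as np

rng = np.random.default_rng(2)

# 50 points in ambient dimension 10000, projected down to dimension 500.
n, d, m = 50, 10_000, 500
X = rng.standard_normal((n, d))

# Random Gaussian sketch, scaled so squared lengths are preserved
# in expectation.
G = rng.standard_normal((m, d)) / np.sqrt(m)
Y = X @ G.T

# A pairwise distance is preserved up to a small relative distortion.
i, j = 0, 1
orig = np.linalg.norm(X[i] - X[j])
proj = np.linalg.norm(Y[i] - Y[j])
distortion = abs(proj - orig) / orig
print(distortion)
```

The typical relative distortion scales like 1/sqrt(m), which is why a sketch dimension of a few hundred already preserves the geometry of a modest point set quite well.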
While the rank function is useful for regularization, it … Murat Kocaoglu, Karthikeyan Shanmugam, Alexandros Dimakis, Adam Klivans. "An unbiased approach to low-rank recovery": low-rank recovery problems have been a subject of intense study in recent years. The phase transition of matrix recovery from Gaussian measurements … Gaussian comparison theorems are useful tools in probability theory. Simple bounds for noisy linear inverse problems with exact side information.
In various applications in signal processing and machine learning, the model of interest is known to be structured in several ways. Computer science (information theory); mathematics (optimization and control). Samet Oymak, Mahdi Soltanolkotabi, and Benjamin Recht. These results provide a precise understanding of the various tradeoffs involved between statistical and computational resources, as well as the a priori side information available, for such nonlinear parameter estimation problems. Department of Electrical Engineering, California Institute of Technology, Pasadena, CA, USA. Anilesh Kollagunta Krishnaswamy, Stanford University. Eldar, Fellow, IEEE, and Babak Hassibi, Member, IEEE. Abstract: recovering structured models, e.g. … Fast and reliable parameter estimation from nonlinear observations. Samet Oymak, University of California, Riverside; Mahdi Soltanolkotabi, University of Southern California. Mahdi Soltanolkotabi is an assistant professor in the Ming Hsieh Department of Electrical and Computer Engineering and Computer Science at the University of Southern California, where he holds an Andrew and Erna Viterbi Early Career Chair. Theory and application to C-arm cone-beam tomography. Vaidyanathan, Alex Dimakis and Tracy Ho for the insightful comments on this dissertation.
Universality in learning from linear measurements (NeurIPS). Information Theory, Inference, and Learning Algorithms, David J. MacKay. Generalization guarantees for neural networks via harnessing the low-rank structure of the Jacobian, Samet Oymak. Ramya Korlakai Vinayak, Samet Oymak, Babak Hassibi. Samet Oymak, Benjamin Recht, and Mahdi Soltanolkotabi. A proximal-gradient homotopy method for the sparse least-squares problem. Sample complexity of Kalman filtering for unknown systems.
Samet Oymak, Mahdi Soltanolkotabi, and Benjamin Recht, "Sharp time-data tradeoffs for linear inverse problems," IEEE Transactions on Information Theory, June 2018. Xinghao Pan, Dimitris Papailiopoulos, Samet Oymak, Benjamin Recht, Kannan Ramchandran, Michael Jordan, 2014 (poster). We consider the problem of finding clusters in graphs which are partially observed. Econometrics seminars, Spring 2019, UCR Department of Economics. Sharp time-data tradeoffs for linear inverse problems. Kishore Jaganathan, Samet Oymak, and Babak Hassibi. Combinatorial regression and improved basis pursuit for sparse … The problem of signal recovery from the autocorrelation, or equivalently, the magnitudes of the Fourier transform, is of paramount importance in various fields of engineering. Learning compact neural networks with regularization.
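The equivalence noted above between a signal's autocorrelation and its Fourier-transform magnitudes can be checked numerically. This is an illustrative sanity check of the Wiener-Khinchin identity for a small synthetic sparse signal, not a phase-retrieval algorithm from the cited papers:

```python
import numpy as np

rng = np.random.default_rng(3)

# A sparse 1-D signal of length 32 with three nonzero entries.
x = np.zeros(32)
x[[2, 7, 19]] = rng.standard_normal(3)

# Circular autocorrelation of x.
autocorr = np.array([np.dot(x, np.roll(x, k)) for k in range(32)])

# Wiener-Khinchin: the DFT of the circular autocorrelation equals the
# squared magnitudes of the DFT of x, so the autocorrelation carries
# exactly the Fourier-magnitude information (and no phase).
lhs = np.fft.fft(autocorr).real
rhs = np.abs(np.fft.fft(x)) ** 2
print(np.max(np.abs(lhs - rhs)))
```

Since the autocorrelation determines only the Fourier magnitudes, any recovery method must resolve the missing phase, which is exactly what makes the phase retrieval problem nontrivial.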
This paper investigates the approximation power of three types of random neural networks. Recovery of sparse 1-D signals from the magnitudes of their Fourier transform. SIAM Journal on Optimization, Society for Industrial and Applied Mathematics. The book is provided in PostScript, PDF, and DjVu formats.