This year's school of image processing will take place from 19 June (evening) to 24 June 2015. For more information, contact Michal Šorel.
The program will consist of talks given by the seminar participants. Talks are held in English, usually in the morning and after dinner. A notebook, a laser pointer, a projector and a flip chart will be available.
|Jitka Kostková||Duality and Geometry in SVM Classifiers (Bennett et al., 2000)
We develop an intuitive geometric interpretation of the standard support vector machine (SVM) for classification of both linearly separable and inseparable data and provide a rigorous derivation of the concepts behind the geometry. For the separable case, finding the maximum margin between the two sets is equivalent to finding the closest points of the smallest convex sets that contain each class (the convex hulls). We then extend this argument to the inseparable case by using reduced convex hulls, shrunk away from outliers.
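The geometric view above can be tried out in a few lines. The following is a minimal numpy sketch (the toy 2D data and the Frank-Wolfe solver are my own illustration, not part of the paper): it finds the closest points of the two convex hulls, whose difference gives the normal of the maximum-margin hyperplane.

```python
import numpy as np

# Toy linearly separable classes (illustrative data, not from the paper).
X = np.array([[0., 0.], [1., 0.], [0., 1.]])   # class +1
Y = np.array([[3., 3.], [4., 3.], [3., 4.]])   # class -1

def nearest_hull_points(X, Y, iters=4000):
    """Frank-Wolfe minimization of ||X^T a - Y^T b||^2 over the two simplices."""
    a = np.full(len(X), 1.0 / len(X))
    b = np.full(len(Y), 1.0 / len(Y))
    for t in range(iters):
        w = X.T @ a - Y.T @ b          # current difference vector
        # Linear minimization over each simplex picks a single hull vertex.
        i = np.argmin(X @ w)           # gradient w.r.t. a is 2*X@w
        j = np.argmin(-(Y @ w))        # gradient w.r.t. b is -2*Y@w
        gamma = 2.0 / (t + 3.0)
        a *= 1.0 - gamma; a[i] += gamma
        b *= 1.0 - gamma; b[j] += gamma
    return X.T @ a, Y.T @ b

c, d = nearest_hull_points(X, Y)
w = c - d                  # normal of the maximum-margin hyperplane
midpoint = (c + d) / 2.0   # the hyperplane bisects the segment cd
```

For these points the closest pair is c = (0.5, 0.5) on the first hull and d = (3, 3) on the second, so the SVM separator is the perpendicular bisector of that segment.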
|Barmak Honarvar|| Application of 2D discrete transforms in signal and image processing
This talk describes how to deblur an image using 2D discrete transforms. I will focus on the following issues:
- Understanding the blur model and image deblurring in the Z-domain
- Using the 2D Z-transform for convolution and deconvolution problems
- The role of the zeros of the PSF transfer function in the transformed domain and the resulting stability criterion
</gr-replace>
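As a runnable companion to these points (my own synthetic example, using the DFT, i.e. the Z-transform evaluated on the unit circle): blurring is multiplication by the PSF transfer function, and deblurring is division by it, which is stable only when the transfer function has no zeros near the unit circle.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 16
img = rng.random((N, N))

# A small PSF whose DFT stays away from zero (|H| >= 0.6 - 0.2 - 0.2 = 0.2),
# so plain division is stable. Chosen for illustration only.
psf = np.zeros((N, N))
psf[0, 0], psf[0, 1], psf[1, 0] = 0.6, 0.2, 0.2

H = np.fft.fft2(psf)                                    # transfer function of the blur
blurred = np.real(np.fft.ifft2(np.fft.fft2(img) * H))   # circular convolution

# Naive inverse filter: divide only where |H| is safely away from zero.
eps = 1e-3
Hinv = np.where(np.abs(H) > eps, 1.0 / H, 0.0)
restored = np.real(np.fft.ifft2(np.fft.fft2(blurred) * Hinv))
```

With this PSF the naive inverse filter recovers the image exactly; zeros (or near-zeros) of H are precisely what force regularized deconvolution approaches.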
|Jan Schier||Software development flow in ImageJ: Git, Maven, unit tests, etc.
Modern software practices and tools bring many ideas which, while not so common in research coding, are quite important for keeping code robust and maintainable. In this talk, I would like to share my experience with the development flow used for reproducible research software development in the ImageJ ecosystem. It will cover Git workflows with git-flow, unit testing with JUnit and TestNG, configuration management with Maven and perhaps some ideas from the "Pragmatic Programmer" book by Hunt and Thomas.
|Luboš Soukup||Interferometric Synthetic Aperture Radar (InSAR) - practical applications and influence on image processing theory
The main results of the recently finished project "Investigation of terrestrial InSAR efficiency for deformation analysis of hazardous objects and localities" (http://www.p-insar.cz/) will be summarized. The results are both practical and theoretical: they are significant both as an application of unique radar data and as an inspiration for novel theoretical approaches to image processing.
|Jindra Soukup||L-curve - how to find the 'ideal' regularization parameter value
In image processing, many tasks rely on an optimization/regularization approach (e.g. denoising, deblurring, super-resolution). The amount of regularization (the regularization parameter) is user defined. This lecture will cover several methods that can help us find the 'ideal' value of the regularization parameter, with the main attention focused on the L-curve method. At the end, I will show how other methods (GCV, variational Bayes) are related to the L-curve.
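A minimal illustration of the L-curve idea (my own toy example; the corner detection via discrete curvature follows the standard recipe, though details vary): sweep the regularization parameter of a Tikhonov denoiser, record the residual norm against the solution seminorm, and pick the point of maximum curvature of the curve in log-log coordinates.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200
t = np.linspace(0, 1, n)
x_true = np.sin(2 * np.pi * t)
y = x_true + 0.1 * rng.standard_normal(n)

# First-difference operator used as the regularizer.
D = np.diff(np.eye(n), axis=0)

lams = np.logspace(-4, 4, 60)
rho, eta = [], []
for lam in lams:
    # Tikhonov denoising: x = argmin ||x - y||^2 + lam * ||D x||^2
    x = np.linalg.solve(np.eye(n) + lam * (D.T @ D), y)
    rho.append(np.linalg.norm(x - y))      # residual norm
    eta.append(np.linalg.norm(D @ x))      # solution seminorm

logr, loge = np.log(rho), np.log(eta)
# Discrete curvature of the L-curve in log-log coordinates.
dr, de = np.gradient(logr), np.gradient(loge)
ddr, dde = np.gradient(dr), np.gradient(de)
kappa = (dr * dde - de * ddr) / (dr**2 + de**2) ** 1.5
lam_corner = lams[np.argmax(np.abs(kappa))]
```

The residual norm grows and the solution seminorm shrinks as the regularization increases; the 'ideal' parameter sits at the corner where the two trade off.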
|Jan Kotera|| Transport-based single frame SR of very low-res face images
A report on the CVPR 2015 paper by Kolouri and Rohde: a single-image super-resolution technique based on a pre-learned mapping from low- to high-resolution images, applied to face images with a very high SR factor.
|Aleš Zita|| Conditional Random Fields as Recurrent Neural Networks (Zheng et al., 2015)
Pixel-level labelling tasks, such as semantic segmentation, play a central role in image understanding. Recent approaches have attempted to harness the capabilities of deep learning techniques for image recognition to tackle pixel-level labelling tasks. One central issue in this methodology is the limited capacity of deep learning techniques to delineate visual objects. To solve this problem, we introduce a new form of convolutional neural network that combines the strengths of Convolutional Neural Networks (CNNs) and Conditional Random Fields (CRFs)-based probabilistic graphical modelling. To this end, we formulate mean-field approximate inference for the Conditional Random Fields with Gaussian pairwise potentials as Recurrent Neural Networks. This network, called CRF-RNN, is then plugged in as a part of a CNN to obtain a deep network that has desirable properties of both CNNs and CRFs. Importantly, our system fully integrates CRF modelling with CNNs, making it possible to train the whole deep network end-to-end with the usual back-propagation algorithm, avoiding offline post-processing methods for object delineation. We apply the proposed method to the problem of semantic image segmentation, obtaining top results on the challenging Pascal VOC 2012 segmentation benchmark.
|Jan Kamenický|| Compressive PCA for Low-Rank Matrices on Graphs (Shahid et al., 2016)
We introduce a novel framework for an approximate recovery of data matrices which are low-rank on graphs, from sampled measurements. The rows and columns of such matrices belong to the span of the first few eigenvectors of the graphs constructed between their rows and columns. We leverage this property to recover the non-linear low-rank structures efficiently from sampled data measurements, with a low cost (linear in n). First, a Restricted Isometry Property (RIP) condition is introduced for efficient uniform sampling of the rows and columns of such matrices based on the cumulative coherence of graph eigenvectors. Secondly, a state-of-the-art fast low-rank recovery method is suggested for the sampled data. Finally, several efficient, parallel and parameter-free decoders are presented along with their theoretical analysis for decoding the low-rank and cluster indicators for the full data matrix. Thus, we overcome the computational limitations of the standard linear low-rank recovery methods for big datasets. Our method can also be seen as a major step towards efficient recovery of non-linear low-rank structures. On a single core machine, our method gains a speed up of p^2/k over Robust PCA, where k<<p is the subspace dimension. Numerically, we can recover a low-rank matrix of size 10304 × 1000 in 15 seconds, which is 100 times faster than Robust PCA.
|Hynek Walner|| Contrastive divergence algorithm
Contrastive divergence is an approximate algorithm for maximum-likelihood estimation of probability distributions with an intractable partition function. It relies on an approximation of the gradient of the log-likelihood based on a short Markov chain started at the last example seen. The talk will be based on the notes at http://www.robots.ox.ac.uk/~ojw/files/NotesOnCD.pdf.
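A minimal sketch of CD-1 for a small Bernoulli-Bernoulli RBM (the toy data, layer sizes and learning rate are my own assumptions): the log-likelihood gradient is approximated by the difference between statistics computed at the data and after a single Gibbs step.

```python
import numpy as np

rng = np.random.default_rng(2)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

nv, nh = 6, 3
W = 0.01 * rng.standard_normal((nv, nh))
a = np.zeros(nv)   # visible biases
b = np.zeros(nh)   # hidden biases

# Toy data: two repeated binary patterns.
data = np.array([[1, 1, 1, 0, 0, 0], [0, 0, 0, 1, 1, 1]] * 50, dtype=float)

lr = 0.1
for epoch in range(100):
    for v0 in data:
        # Positive phase: hidden probabilities given a data vector.
        ph0 = sigmoid(v0 @ W + b)
        h0 = (rng.random(nh) < ph0).astype(float)
        # Negative phase: one Gibbs step (CD-1) started at the data.
        pv1 = sigmoid(h0 @ W.T + a)
        v1 = (rng.random(nv) < pv1).astype(float)
        ph1 = sigmoid(v1 @ W + b)
        # Approximate log-likelihood gradient: data stats minus sample stats.
        W += lr * (np.outer(v0, ph0) - np.outer(v1, ph1))
        a += lr * (v0 - v1)
        b += lr * (ph0 - ph1)
```

After training, a mean-field reconstruction of a training pattern through the hidden layer should be much closer to the pattern than the initial random model's (which reconstructs everything near 0.5).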
|Tomáš Suk||Meteor Search
A new network of observatories equipped with digital fish-eye cameras is described. Two cameras in each observatory capture a snapshot twice a minute. Methods for the automated search for meteors in the snapshots are studied. First, ground objects are removed by a static mask. Then the Moon, stars and planets are removed by subtracting the previous and next snapshots in the sequence. Small symmetric point flashes are erased. The residual traces are then searched for by a Hough transform with fish-eye correction. The found curves are tested for linearity, oscillations, duty cycle and high frequencies; these tests should exclude airplanes, satellites and clouds. Traces that pass the tests are classified as meteor trails.
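The moving-object detection step can be sketched in a few lines of numpy (a synthetic toy scene, not the real pipeline; subtracting the pixelwise maximum of the neighbouring frames is one simple variant of the subtraction the abstract mentions): static stars cancel against the neighbouring snapshots, leaving only the transient streak.

```python
import numpy as np

rng = np.random.default_rng(3)
H = W = 64

# Static background: "stars" at fixed pixels in every frame.
stars = np.zeros((H, W))
ys, xs = rng.integers(0, H, 30), rng.integers(0, W, 30)
stars[ys, xs] = 1.0

prev_f = stars + 0.02 * rng.random((H, W))
next_f = stars + 0.02 * rng.random((H, W))

# Current frame: the same stars plus a short meteor-like streak.
cur = stars + 0.02 * rng.random((H, W))
rr = np.arange(20, 40)
cur[rr, rr] = 1.0          # diagonal streak

# Remove everything also present in the neighbouring snapshots.
residual = cur - np.maximum(prev_f, next_f)
trace = residual > 0.5
```

Only the streak pixels survive the differencing; the resulting binary trace is what a line detector such as the Hough transform would then operate on.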
|Michal Šorel||Git as a collaborative tool for writing articles and doing research
Git is a distributed version control system widely used for software development. I will give a short introduction to the system and show that, besides versioning, it can also be used for backup and as a collaborative tool for writing articles and doing research.