We're thrilled to present the First Edition of Tiny Papers - a collection of concise, impactful ideas that push the boundaries of innovation. These short but powerful contributions offer fresh perspectives and novel insights, and spark discussions that shape the future of our discipline.
The list doubles as a Detailed Technical Program, so authors can see which time slot their presentation occupies in the Conference Program.
Abstract: In this work, we design a completely blind video quality assessment algorithm based on the deep video prior. The work explores the utility of the deep video prior in estimating the visual quality of a video. We use a single distorted-reference video pair to learn the deep video prior; at inference time, the learned prior is used to restore the original video from the distorted one, and its ability to do so is measured to quantify the distortion in the video. Our hypothesis is that the learned deep video prior fails to restore highly distorted videos, so its restoration ability degrades as the distortion in the video increases. We therefore propose the distance between the distorted video and the restored video as the perceptual quality of the video. Our algorithm is trained on a single video pair and needs no labelled data. We show that it outperforms existing unsupervised video quality assessment algorithms in terms of the linear correlation coefficient (LCC) and Spearman rank order correlation coefficient (SROCC) on a synthetically distorted video quality assessment dataset.
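As a rough illustration of the scoring rule described above (a sketch, not the authors' implementation: `restore` stands in for the learned deep video prior, and the L2 distance is an assumed choice), the quality estimate reduces to a frame-wise distance between the distorted video and its restoration:

```python
import numpy as np

def quality_score(distorted, restore):
    """Hypothetical sketch: score a video by how far the learned deep
    video prior moves it when asked to restore it.

    distorted : array of shape (T, H, W, C), pixel values in [0, 1]
    restore   : callable implementing the learned deep video prior
                (assumed; the paper trains it on a single video pair)
    """
    restored = restore(distorted)  # the prior's reconstruction attempt
    # Mean L2 distance per frame, averaged over the clip; the exact
    # distance used in the paper may differ.
    per_frame = np.sqrt(((restored - distorted) ** 2).mean(axis=(1, 2, 3)))
    return per_frame.mean()
```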
Abstract: Super Resolution (SR) plays a critical role in computer vision, particularly in medical imaging, where hardware and acquisition-time constraints often result in low spatial and temporal resolution. While diffusion models have been applied to both spatial and temporal SR, few studies have explored their use for joint spatial and temporal SR, particularly in medical imaging. In this work, we address this gap by proposing a Latent Diffusion Model (LDM) combined with a Vector Quantised GAN (VQGAN)-based encoder-decoder architecture for joint super resolution. We frame SR as an image denoising problem, focusing on improving both spatial and temporal resolution in medical images. Using the cardiac MRI dataset from the Data Science Bowl Cardiac Challenge, consisting of 2D cine images with a spatial resolution of 256×256 and 8-14 slices per time step, we demonstrate the effectiveness of our approach. Our LDM achieves a Peak Signal-to-Noise Ratio (PSNR) of 30.37 dB, a Structural Similarity Index (SSIM) of 0.7580, and a Learned Perceptual Image Patch Similarity (LPIPS) of 0.2756, outperforming a simple baseline method by 5% in PSNR, 6.5% in SSIM, and 39% in LPIPS. Our LDM generates images with high fidelity and perceptual quality in just 15 diffusion steps. These results suggest that LDMs hold promise for advancing super resolution in medical imaging, potentially enhancing diagnostic accuracy and patient outcomes. A link to the code is also shared.
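For reference, the PSNR figure quoted above follows the standard definition sketched below; this is generic evaluation code, not taken from the paper, and SSIM and LPIPS would come from their respective reference implementations:

```python
import numpy as np

def psnr(reference, reconstruction, max_val=1.0):
    """Peak Signal-to-Noise Ratio in dB for images scaled to [0, max_val]."""
    mse = np.mean((reference.astype(np.float64) -
                   reconstruction.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * np.log10(max_val ** 2 / mse)
```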
Abstract: Our work aims to build a model that performs the dual tasks of image captioning and image generation while being trained on only one of them. The central idea is to train an invertible model that learns a one-to-one mapping between image and text embeddings. Once the invertible model is trained on one task, image captioning, the same model can generate new images for a given text through the inversion process, with no additional training. This paper proposes a simple invertible neural network architecture for this problem and presents our current findings.
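One plausible way to realise such a one-to-one mapping is with affine coupling layers in the RealNVP style; the sketch below is an assumption about the architecture, not the authors' network, and uses a toy stand-in for the trained scale/shift networks:

```python
import numpy as np

class AffineCoupling:
    """Minimal invertible block of the kind that could map image
    embeddings to text embeddings. Half the input parameterises an
    affine transform of the other half, so the inverse exists in
    closed form."""

    def __init__(self, dim, rng):
        self.w = rng.standard_normal((dim // 2, dim // 2)) * 0.1

    def _nets(self, x_a):
        h = np.tanh(x_a @ self.w)  # toy "network"; trained in practice
        return h, h                # (log-scale, shift)

    def forward(self, x):
        x_a, x_b = np.split(x, 2, axis=-1)
        log_s, t = self._nets(x_a)
        return np.concatenate([x_a, x_b * np.exp(log_s) + t], axis=-1)

    def inverse(self, y):
        y_a, y_b = np.split(y, 2, axis=-1)
        log_s, t = self._nets(y_a)
        return np.concatenate([y_a, (y_b - t) * np.exp(-log_s)], axis=-1)

if __name__ == "__main__":
    layer = AffineCoupling(dim=8, rng=np.random.default_rng(0))
    x = np.random.default_rng(1).standard_normal(8)
    # Round trip: inverse(forward(x)) recovers x exactly.
    assert np.allclose(layer.inverse(layer.forward(x)), x)
```

Captioning would push an image embedding through `forward`, while generation reuses the same trained weights through `inverse`.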
Abstract: Edge detection remains one of the hardest problems in computer vision because of the difficulty of identifying borders and edges in real-world images containing objects of varying kinds and sizes. Methods based on ensemble learning, which combine multiple backbones and attention modules, have outperformed more conventional approaches such as Sobel and Canny edge detection. Nevertheless, these algorithms still struggle with complex scene images, and the edges they identify are often unrefined and include incorrect edges. In this work, we use a Cascaded Ensemble Canny operator to address these problems and detect object edges. The challenging Fresh and Rotten and Berkeley datasets are used to evaluate the proposed approach, implemented in Python. In terms of performance metrics and output image quality, the obtained results outperform the compared edge detection networks.
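The ensemble idea can be approximated as follows; the thresholds, the denoising pre-pass, and the majority-vote fusion rule are assumptions for illustration rather than the paper's exact cascaded operator:

```python
import cv2
import numpy as np

def ensemble_canny(image_bgr, threshold_pairs=((50, 150), (100, 200), (150, 250))):
    """Sketch of an ensemble-of-Canny detector: run Canny at several
    threshold pairs on a denoised grayscale image and keep only edges
    that a majority of the ensemble members agree on."""
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
    gray = cv2.GaussianBlur(gray, (5, 5), 0)  # suppress noise first
    votes = sum((cv2.Canny(gray, lo, hi) > 0).astype(np.uint8)
                for lo, hi in threshold_pairs)
    # Majority vote across the ensemble members.
    return (votes >= (len(threshold_pairs) + 1) // 2).astype(np.uint8) * 255
```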
Abstract: Shadow removal and segmentation remain challenging tasks in computer vision, particularly in complex real-world scenarios. This study presents a novel approach that enhances the ShadowFormer model by incorporating Masked Autoencoder (MAE) priors and Fast Fourier Convolution (FFC) blocks, leading to significantly faster convergence and improved performance. We introduce three key innovations: (1) integration of MAE priors trained on the Places2 dataset for better context understanding, (2) adoption of Haar wavelet features for enhanced edge detection and multiscale analysis, and (3) implementation of a modified SAM Adapter for robust shadow segmentation. Extensive experiments on the challenging DESOBA dataset demonstrate that our approach achieves state-of-the-art results, with notable improvements in both convergence speed and shadow removal quality.
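For intuition, an FFC-style spectral branch can be sketched as below (an assumed minimal variant, not the paper's ShadowFormer modification itself); convolving in the frequency domain gives each output location a global receptive field:

```python
import torch
import torch.nn as nn

class SpectralBlock(nn.Module):
    """Minimal FFC-style spectral branch: a 1x1 convolution applied to
    the real/imaginary parts of the image spectrum, so every output
    pixel depends on the whole input."""

    def __init__(self, channels):
        super().__init__()
        self.conv = nn.Conv2d(channels * 2, channels * 2, kernel_size=1)

    def forward(self, x):                                 # x: (B, C, H, W)
        b, c, h, w = x.shape
        spec = torch.fft.rfft2(x, norm="ortho")           # (B, C, H, W//2+1)
        spec = torch.cat([spec.real, spec.imag], dim=1)   # -> 2C real channels
        spec = torch.relu(self.conv(spec))
        real, imag = torch.chunk(spec, 2, dim=1)
        return torch.fft.irfft2(torch.complex(real, imag),
                                s=(h, w), norm="ortho")
```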
Abstract: Enhancing and preserving the readability of document images, particularly historical ones, is crucial for effective document image analysis. Numerous models have been proposed for this task, including convolutional, transformer-based, and hybrid convolutional-transformer architectures. While hybrid models address the limitations of purely convolutional or transformer-based methods, they often suffer from issues such as quadratic time complexity. In this work, we propose a Mamba-based architecture for document binarisation, which handles long sequences efficiently by scaling linearly in sequence length and optimising memory usage. Additionally, we introduce novel modifications to the skip connections by incorporating Difference of Gaussians (DoG) features, inspired by conventional signal-processing techniques. These multiscale high-frequency features enable the model to produce high-quality, detailed outputs.
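The multiscale DoG features carried by the skip connections can be sketched as follows; the sigma pairs here are illustrative placeholders, not the paper's settings:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def dog_features(image, sigma_pairs=((1.0, 2.0), (2.0, 4.0), (4.0, 8.0))):
    """Difference-of-Gaussians bands: each band is the difference of
    two Gaussian blurs and isolates high-frequency detail (e.g. pen
    strokes) at one scale. Returns shape (len(sigma_pairs), H, W)."""
    image = image.astype(np.float64)
    return np.stack([gaussian_filter(image, s1) - gaussian_filter(image, s2)
                     for s1, s2 in sigma_pairs], axis=0)
```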
Abstract: Inverse rendering pipelines are gaining prominence for the photo-realistic reconstruction of real-world objects so they can be emulated in virtual reality scenes. Apart from material reflectances, spectral rendering and the spectral power distributions (SPDs) of in-scene illuminants play important roles in producing photo-realistic images. We present a simple, low-cost technique to capture and reconstruct the SPD of uniform illuminants. Instead of requiring a costly spectrometer for such measurements, our method uses a diffractive compact disk (CD-ROM) and a machine learning approach for accurate estimation. We show our method to work well with spotlights in simulations and a few real-world examples. The presented results demonstrate the reliability of our approach through quantitative and qualitative evaluations, especially in the spectral rendering of iridescent materials.
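The abstract does not specify the learning method, so the sketch below uses ridge regression as a stand-in, with placeholder random arrays marking where real diffraction features and spectrometer-measured training SPDs would go:

```python
import numpy as np
from sklearn.linear_model import Ridge

# Placeholder data: in a real setup, each row of X would hold features
# extracted from a photo of the CD-ROM's diffraction pattern (e.g. RGB
# samples along the diffraction streak), and each row of Y the matching
# illuminant SPD measured once with a spectrometer for training.
rng = np.random.default_rng(0)
X = rng.random((200, 30))   # 200 training illuminants, 30 image features
Y = rng.random((200, 41))   # SPD sampled at 41 wavelength bins (e.g. 380-780 nm)

model = Ridge(alpha=1.0).fit(X, Y)        # multi-output regression to SPD bins
spd_estimate = model.predict(X[:1])[0]    # estimated SPD for a new capture
```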