Image compression using implicit neural representations

Figure from [4]: comparison of different implicit network architectures fitting a ground-truth image (top left). The second and third rows show first- and second-order derivatives.

Recently, Implicit Neural Representations (INRs) have become increasingly popular across multiple tasks and domains, such as shape representation and novel view synthesis. Even more recently, they have been adopted in image compression pipelines [1,2], opening a direction different from existing neural compression approaches, which require a training dataset and store a per-image latent code plus a shared decoder [3]. The newly proposed approaches instead overfit a neural network to a single image and store the network's weights; a minimal sketch of this overfitting step is given after the list below. These weights can additionally be quantized to shrink the network and thereby increase the compression rate. Being a relatively new direction, there is room for exploration in multiple aspects of the pipeline, mainly:

  • Network architecture (most works use SIREN [4])
  • Quantization techniques
  • Neural architecture search and parameter allocation search
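
As a rough illustration of the overfitting step described above, the sketch below fits a small SIREN-style MLP [4] to a single image with PyTorch. The layer sizes, omega_0 value, learning rate, and step count are illustrative assumptions, not the settings used in [1] or [4].

```python
# Minimal sketch of INR-based image compression: overfit a SIREN [4] to one
# image so that the image is "stored" as the network weights. All
# hyperparameters here are illustrative, not taken from the papers.
import torch
import torch.nn as nn

class SineLayer(nn.Module):
    """Linear layer followed by a sine activation, as in SIREN [4]."""
    def __init__(self, in_f, out_f, omega_0=30.0, is_first=False):
        super().__init__()
        self.omega_0 = omega_0
        self.linear = nn.Linear(in_f, out_f)
        # SIREN's initialization scheme: keeps activation statistics stable
        # through depth (first layer uses a wider range).
        with torch.no_grad():
            bound = 1 / in_f if is_first else (6 / in_f) ** 0.5 / omega_0
            self.linear.weight.uniform_(-bound, bound)

    def forward(self, x):
        return torch.sin(self.omega_0 * self.linear(x))

class Siren(nn.Module):
    def __init__(self, hidden=64, layers=3):
        super().__init__()
        net = [SineLayer(2, hidden, is_first=True)]
        net += [SineLayer(hidden, hidden) for _ in range(layers - 1)]
        net += [nn.Linear(hidden, 3)]  # (x, y) coordinate -> (r, g, b)
        self.net = nn.Sequential(*net)

    def forward(self, coords):
        return self.net(coords)

def fit_image(image, steps=2000, lr=1e-4):
    """Overfit a SIREN to one image tensor of shape (H, W, 3) in [0, 1]."""
    h, w, _ = image.shape
    # Pixel coordinates normalized to [-1, 1], one row per pixel.
    ys, xs = torch.meshgrid(
        torch.linspace(-1, 1, h), torch.linspace(-1, 1, w), indexing="ij")
    coords = torch.stack([xs, ys], dim=-1).reshape(-1, 2)
    target = image.reshape(-1, 3)

    model = Siren()
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        loss = ((model(coords) - target) ** 2).mean()
        loss.backward()
        opt.step()
    return model  # the weights of `model` are the compressed representation
```

The stored size is then roughly the parameter count times the bits per parameter, which is exactly what the quantization step aims to reduce.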

The student will implement and experiment with different architectures, quantization techniques, and architecture/parameter search methods in order to improve the current state of INR-based image compression. An existing codebase, adapted from previous works, is available as a starting point, and the student is not required to explore all of the directions mentioned above.
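
On the quantization side, a simple baseline is post-training uniform quantization of the trained weights, one of the basic schemes surveyed in [5]; [1] goes further and fine-tunes with quantization-aware training. The sketch below is a minimal per-tensor symmetric version; the bit width and the write-back evaluation strategy are illustrative choices.

```python
# Minimal sketch of post-training uniform weight quantization (see [5]).
# Per-tensor symmetric quantization with an illustrative bit width.
import torch

def quantize_tensor(w: torch.Tensor, bits: int = 8):
    """Symmetric uniform quantization: map floats to signed integers."""
    qmax = 2 ** (bits - 1) - 1                        # e.g. 127 for 8 bits
    scale = w.abs().max().clamp(min=1e-12) / qmax     # one scale per tensor
    q = torch.clamp(torch.round(w / scale), -qmax, qmax)
    return q.to(torch.int8 if bits <= 8 else torch.int32), scale

def dequantize_tensor(q: torch.Tensor, scale: torch.Tensor):
    return q.float() * scale

@torch.no_grad()
def quantize_model_weights(model: torch.nn.Module, bits: int = 8):
    """Replace every parameter with its k-bit quantize/dequantize round trip.

    Storing `q` plus one `scale` per tensor is what would actually go to
    disk; here the dequantized values are written back so the reconstruction
    quality after quantization can be measured directly.
    """
    for p in model.parameters():
        q, scale = quantize_tensor(p.data, bits)
        p.data.copy_(dequantize_tensor(q, scale))
    return model
```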

Multiple students can work on this topic. For further information, contact [email protected] or [email protected]

Requirements:

  • Experience with Python
  • Experience with PyTorch

[1] Strümpler et al., 2022, Implicit Neural Representations for Image Compression, https://arxiv.org/abs/2112.04267
[2] Girish et al., 2023, SHACIRA: Scalable HAsh-grid Compression for Implicit Neural Representations, https://openaccess.thecvf.com/content/ICCV2023/papers/Girish_SHACIRA_Scalable_HAsh-grid_Compression_for_Implicit_Neural_Representations_ICCV_2023_paper.pdf
[3] Ballé et al., 2018, Variational Image Compression With a Scale Hyperprior, https://arxiv.org/pdf/1802.01436.pdf
[4] Sitzmann et al., 2020, Implicit Neural Representations with Periodic Activation Functions, https://arxiv.org/abs/2006.09661
[5] Gholami et al., 2021, A Survey of Quantization Methods for Efficient Neural Network Inference, https://arxiv.org/pdf/2103.13630.pdf