Redefining Image Super-Resolution

Archan Ghosh
Jul 17, 2021 · 2 min read
Photo by Sajad Nori on Unsplash

Google has proved time and again that it will always be one of the forerunners when it comes to Deep Learning solutions. Recently, a team at Google Brain introduced a new method for Image Super-Resolution.

From being just a Science Fiction dream to becoming one of the most researched fields, Image Super-Resolution has seen constant growth and innovation over the years, especially with the aid of Deep Learning architectures: from the first iterations of SRCNN to SRResNet, and then SRGAN, which re-established how Super-Resolution was approached.

As a young contributor to the field, I can say it is one of the most interesting problem spaces to work in, if not the most.

The team at Google titled their paper "Image Super-Resolution via Iterative Refinement", introducing a model called SR3, which adapts "denoising diffusion probabilistic models to conditional image generation and performs super-resolution through a stochastic denoising process."
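To make that idea concrete, here is a minimal, hypothetical sketch of such an iterative refinement loop in PyTorch: start from pure noise at the target resolution and repeatedly denoise it while conditioning on the upsampled low-resolution input. The `denoiser` network, the linear noise schedule, and the 4x scale factor are all placeholder assumptions for illustration, not the authors' released code.

```python
import torch
import torch.nn.functional as F

@torch.no_grad()
def sr3_sample(denoiser, x_lr, steps=1000):
    """Upscale x_lr (N, C, h, w) by iteratively denoising pure noise,
    conditioned on the low-resolution input. `denoiser` is a placeholder
    for a trained network that predicts the added noise."""
    scale = 4  # assumed 4x super-resolution stage
    # Condition on the low-res image, bicubic-upsampled to the target size.
    x_cond = F.interpolate(x_lr, scale_factor=scale, mode="bicubic",
                           align_corners=False)

    # A simple linear noise schedule (the paper tunes this more carefully).
    betas = torch.linspace(1e-4, 2e-2, steps, device=x_lr.device)
    alphas = 1.0 - betas
    gammas = torch.cumprod(alphas, dim=0)  # cumulative signal level

    # Start from pure Gaussian noise at the target resolution.
    y = torch.randn_like(x_cond)

    for t in reversed(range(steps)):
        # Predict the noise component given the noisy image and the condition.
        eps = denoiser(torch.cat([x_cond, y], dim=1), gammas[t])

        # One reverse-diffusion (stochastic denoising) step.
        y = (y - (betas[t] / torch.sqrt(1.0 - gammas[t])) * eps) \
            / torch.sqrt(alphas[t])
        if t > 0:
            y = y + torch.sqrt(betas[t]) * torch.randn_like(y)

    return y.clamp(-1.0, 1.0)
```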

As stated in the paper, the outputs can confuse even a trained eye at scaling factors as high as 8x.

Examples given in the paper

As you can see, they have done a comparative study and used cascading to build a super-resolution pipeline that takes images from 64x64 up to 1024x1024. This is achieved by chaining two 4x super-resolution stages.
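For illustration, here is a hypothetical sketch of how such a cascade might be wired up, reusing the `sr3_sample` sketch above; the two stage models and the exact intermediate resolution are assumptions, not the paper's released pipeline.

```python
def cascade_upscale(stage1, stage2, x_64):
    """Chain two 4x super-resolution stages: 64x64 -> 256x256 -> 1024x1024.
    stage1 and stage2 stand in for two independently trained denoisers."""
    x_256 = sr3_sample(stage1, x_64)     # first 4x stage: 64 -> 256
    x_1024 = sr3_sample(stage2, x_256)   # second 4x stage: 256 -> 1024
    return x_1024
```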

This is by far one of the most interesting approaches to the problem, as it can also be combined with unconditional generation over natural images.

If applied correctly, the technique can be the next milestone for Super-Resolution.

The Paper can be found here.
The Project is available here.

